Sample records for ensemble average velocity

  1. Statistical Ensemble of Large Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Carati, Daniele; Rogers, Michael M.; Wray, Alan A.; Mansour, Nagi N. (Technical Monitor)

    2001-01-01

    A statistical ensemble of large eddy simulations (LES) is run simultaneously for the same flow. The information provided by the different large scale velocity fields is used to propose an ensemble averaged version of the dynamic model. This produces local model parameters that only depend on the statistical properties of the flow. An important property of the ensemble averaged dynamic procedure is that it does not require any spatial averaging and can thus be used in fully inhomogeneous flows. Also, the ensemble of LES's provides statistics of the large scale velocity that can be used for building new models for the subgrid-scale stress tensor. The ensemble averaged dynamic procedure has been implemented with various models for three flows: decaying isotropic turbulence, forced isotropic turbulence, and a time-developing plane wake. It is found that the results are almost independent of the number of LES's in the statistical ensemble, provided that the ensemble contains at least 16 realizations.
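
    The key step described above, forming a local model coefficient from ensemble averages over realizations rather than from spatial averages, can be sketched in a few lines. The arrays `L` and `M` and the value `C_true` below are synthetic stand-ins for the Germano-identity and model terms, not the paper's data:

```python
import numpy as np

def ensemble_dynamic_coefficient(L, M):
    """Local dynamic-model coefficient C(x) = <L M>_ens / <M M>_ens.

    L, M: shape (n_realizations, ny, nx), the Germano-identity and model
    terms at each grid point for every member of the LES ensemble.
    Averaging over axis 0 (the ensemble) keeps the coefficient local,
    so no spatial averaging is needed even in inhomogeneous flows.
    """
    num = np.mean(L * M, axis=0)
    den = np.mean(M * M, axis=0)
    return num / np.maximum(den, 1e-30)

# Synthetic check: build M randomly, let L = C_true * M plus noise, and
# recover C_true from a 16-member ensemble (the size the abstract reports
# as sufficient for ensemble-independent results).
rng = np.random.default_rng(0)
C_true = 0.16
M = rng.normal(size=(16, 32, 32))
L = C_true * M + rng.normal(scale=0.05, size=M.shape)
C_field = ensemble_dynamic_coefficient(L, M)
```

    The recovered field `C_field` varies from point to point but averages to the imposed coefficient, which is the sense in which the procedure stays local yet statistically converged.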

  2. Scale-invariant Green-Kubo relation for time-averaged diffusivity

    NASA Astrophysics Data System (ADS)

    Meyer, Philipp; Barkai, Eli; Kantz, Holger

    2017-12-01

    In recent years it was shown both theoretically and experimentally that in certain systems exhibiting anomalous diffusion the time- and ensemble-averaged mean-squared displacements are remarkably different. The ensemble-averaged diffusivity is obtained from a scaling Green-Kubo relation, which connects the scale-invariant nonstationary velocity correlation function with the transport coefficient. Here we obtain the relation between the time-averaged diffusivity, usually recorded in single-particle tracking experiments, and the underlying scale-invariant velocity correlation function. The time-averaged mean-squared displacement is given by 〈δ²̄〉 ~ 2 D_ν t^β Δ^(ν−β), where t is the total measurement time and Δ is the lag time. Here ν is the anomalous diffusion exponent obtained from ensemble-averaged measurements, 〈x²〉 ~ t^ν, while β ≥ −1 marks the growth or decline of the kinetic energy, 〈v²〉 ~ t^β. Thus, we establish a connection between exponents that can be read off the asymptotic properties of the velocity correlation function, and similarly for the transport constant D_ν. We demonstrate our results with nonstationary scale-invariant stochastic and deterministic models, thereby highlighting that systems with equivalent behavior in the ensemble average can differ strongly in their time average. If the averaged kinetic energy is finite, β = 0, the time scalings of 〈δ²̄〉 and 〈x²〉 are identical; however, the time-averaged transport coefficient D_ν is not identical to the corresponding ensemble-averaged diffusion constant.
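
    The time-averaged mean-squared displacement referred to here is the standard single-trajectory estimator of single-particle tracking. A minimal sketch, using ordinary Brownian motion (where time and ensemble averages coincide and the TA-MSD grows linearly in the lag) rather than the paper's anomalous models:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trajectory x(t) at the given lag times."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Ordinary Brownian motion with unit-variance steps: TA-MSD(lag) ~ lag.
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(size=100_000))
lags = [1, 10, 100, 1000]
tamsd = time_averaged_msd(traj, lags)
```

    For the anomalous systems of the abstract, the same estimator applied to a single long trajectory would reveal the t^β Δ^(ν−β) dependence on measurement time t that the ensemble-averaged MSD does not show.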

  3. A method for determining the weak statistical stationarity of a random process

    NASA Technical Reports Server (NTRS)

    Sadeh, W. Z.; Koper, C. A., Jr.

    1978-01-01

    A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
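
    The construction of an equivalent ensemble is easy to sketch: segment one long record into equal sample records and compare the ensemble averages with the per-record time averages. A minimal illustration on a synthetic stationary signal (the signal, mean, and record count are assumptions for demonstration only):

```python
import numpy as np

def equivalent_ensemble(signal, n_records):
    """Segment one long time history into equal-length sample records."""
    m = len(signal) // n_records
    return np.asarray(signal[: m * n_records]).reshape(n_records, m)

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=80_000)  # stationary surrogate signal

ens = equivalent_ensemble(x, n_records=40)
ens_avg_t = ens.mean(axis=0)            # equivalent-ensemble average at each instant
time_avg_per_record = ens.mean(axis=1)  # time average over each sample record

# Weak stationarity: ens_avg_t should be (nearly) time invariant; agreement
# of the per-record time averages with it is the heuristic ergodicity check.
```

    For a stationary, ergodic signal the ensemble average fluctuates about the true mean with spread ~1/√(n_records), while each record's time average fluctuates with the much smaller spread ~1/√(record length).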

  4. Ensemble Sampling vs. Time Sampling in Molecular Dynamics Simulations of Thermal Conductivity

    DOE PAGES

    Gordiz, Kiarash; Singh, David J.; Henry, Asegun

    2015-01-29

    In this report we compare time sampling and ensemble averaging as two different methods available for phase-space sampling. For the comparison, we calculate thermal conductivities of solid argon and silicon structures using equilibrium molecular dynamics. We introduce two different schemes for the ensemble averaging approach and show that both can reduce the total simulation time compared to time averaging. It is also found that velocity rescaling is an efficient mechanism for phase-space exploration. Although our methodology is tested using classical molecular dynamics, the ensemble generation approaches may find their greatest utility in computationally expensive simulations such as first-principles molecular dynamics. For such simulations, where each time step is costly, time sampling can require long simulation times because each time step must be evaluated sequentially, and phase-space averaging is therefore achieved through sequential operations. With ensemble averaging, on the other hand, phase-space sampling can be achieved through parallel operations, since each ensemble is independent. For this reason, particularly when using massively parallel architectures, ensemble sampling can result in much shorter simulation times while requiring similar overall computational effort.

  5. Application of a Split-Fiber Probe to Velocity Measurement in the NASA Research Compressor

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    2003-01-01

    A split-fiber probe was used to acquire unsteady data in a research compressor. The probe has two thin films deposited on a quartz cylinder 200 microns in diameter. Because its circumference is divided into two independent sensing parts, a split-fiber probe allows simultaneous measurement of velocity magnitude and direction in a plane perpendicular to the sensing cylinder. Local heat transfer considerations indicated that the probe direction characteristic is linear over a range of flow incidence angles of +/- 35 deg, and calibration tests confirmed this assumption. The velocity characteristic is nonlinear, as is typical in thermal anemometry. The probe was used extensively in the NASA Glenn Research Center (GRC) low-speed multistage axial compressor and worked reliably during a test program of several months' duration. The velocity and direction characteristics of the probe showed only minute changes during the entire test program. An algorithm was developed to decompose the probe signals into velocity magnitude and velocity direction. The averaged unsteady data were compared with data acquired by pneumatic probes. The overall excellent agreement between the averaged data acquired by the split-fiber probe and a pneumatic probe boosts confidence in the reliability of the unsteady content of the split-fiber probe data. To investigate the features of the unsteady data, two methods were used: ensemble averaging and frequency analysis. The velocity distribution in a rotor blade passage was retrieved using the ensemble averaging method. Frequencies of excitation forces that may contribute to high-cycle-fatigue problems were identified by applying a fast Fourier transform to the absolute velocity data.

  6. Measurements of wind-waves under transient wind conditions.

    NASA Astrophysics Data System (ADS)

    Shemer, Lev; Zavadsky, Andrey

    2015-11-01

    Wind forcing in nature is always unsteady, resulting in a complicated evolution pattern that involves numerous time and space scales. In the present work, wind waves in a laboratory wind-wave flume are studied under unsteady forcing. The variation of the surface elevation is measured by capacitance wave gauges, while the components of the instantaneous surface slope in the across-wind and along-wind directions are determined by a regular or scanning laser slope gauge. The locations of the wave gauge and of the laser slope gauge are separated by a few centimeters in the across-wind direction. The instantaneous wind velocity was recorded simultaneously using a Pitot tube. Measurements are performed at a number of fetches and for different patterns of wind velocity variation. For each case, at least 100 independent realizations were recorded for a given wind velocity variation pattern. The accumulated data sets allow calculating ensemble-averaged values of the measured parameters. Significant differences between the evolution patterns of the surface elevation and of the slope components were found. Wavelet analysis was applied to determine the dominant wave frequency of the surface elevation and of the slope variation at each instant. The corresponding ensemble-averaged values acquired by different sensors were computed and compared. Analysis of the measured ensemble-averaged quantities at different fetches makes it possible to identify different stages in the wind-wave evolution and to estimate the appropriate time and length scales.

  7. Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1992-01-01

    Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.

  8. Investigation of stickiness influence in the anomalous transport and diffusion for a non-dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Livorati, André L. P.; Palmero, Matheus S.; Díaz-I, Gabriel; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.

    2018-02-01

    We study the dynamics of an ensemble of non-interacting particles constrained by two infinitely heavy walls, one of which moves periodically in time while the other is fixed. The system presents mixed dynamics, where the accessible region for the particle to diffuse chaotically is bordered by an invariant spanning curve. Statistical analysis of the root mean square velocity, considering high- and low-velocity ensembles, leads the dynamics to the same steady-state plateau for long times. A transport investigation of the dynamics via escape basins reveals that, depending on the initial velocity ensemble, the decay rates of the survival probability present different shapes and bumps, in a mix of exponential, power-law, and stretched-exponential decays. After an analysis of step-size averages, we find that the stable manifolds play the role of a preferential path for faster escape, being responsible for the bumps and the different shapes of the survival probability.

  9. Characteristics of ion flow in the quiet state of the inner plasma sheet

    NASA Technical Reports Server (NTRS)

    Angelopoulos, V.; Kennel, C. F.; Coroniti, F. V.; Pellat, R.; Spence, H. E.; Kivelson, M. G.; Walker, R. J.; Baumjohann, W.; Feldman, W. C.; Gosling, J. T.

    1993-01-01

    We use AMPTE/IRM and ISEE 2 data to study the properties of the high-beta plasma sheet, the inner plasma sheet (IPS). Bursty bulk flows (BBFs) are excised from the two databases, and the average flow pattern in the non-BBF (quiet) IPS is constructed. At local midnight this ensemble-average flow is predominantly duskward; closer to the flanks it is mostly earthward. The flow pattern agrees qualitatively with calculations based on the Tsyganenko (1987) model (T87), where the earthward flow is due to the ensemble-average cross-tail electric field and the duskward flow is the diamagnetic drift due to an inward pressure gradient. The IPS is on average in pressure equilibrium with the lobes. Because of its large variance, the average flow does not represent the instantaneous flow field. Case studies also show that the non-BBF flow is highly irregular and inherently unsteady, a reason why earthward convection can avoid a pressure balance inconsistency with the lobes. The ensemble distribution of velocities is a fundamental observable of the quiet plasma sheet flow field.

  10. A translating stage system for µ-PIV measurements surrounding the tip of a migrating semi-infinite bubble.

    PubMed

    Smith, B J; Yamaguchi, E; Gaver, D P

    2010-01-01

    We have designed, fabricated and evaluated a novel translating stage system (TSS) that augments a conventional micro particle image velocimetry (µ-PIV) system. The TSS has been used to enhance the ability to measure flow fields surrounding the tip of a migrating semi-infinite bubble in a glass capillary tube under both steady and pulsatile reopening conditions. With conventional µ-PIV systems, observations near the bubble tip are challenging because the forward progress of the bubble rapidly sweeps the air-liquid interface across the microscopic field of view. The translating stage mechanically cancels the mean bubble tip velocity, keeping the interface within the microscope field of view and providing a tenfold increase in data collection efficiency compared to fixed-stage techniques. This dramatic improvement allows nearly continuous observation of the flow field over long propagation distances. A large (136-frame) ensemble-averaged velocity field recorded with the TSS near the tip of a steadily migrating bubble is shown to compare well with fixed-stage results under identical flow conditions. Use of the TSS allows the ensemble-averaged measurement of pulsatile bubble propagation flow fields, which would be practically impossible using conventional fixed-stage techniques. We demonstrate our ability to analyze these time-dependent two-phase flows using the ensemble-averaged flow field at four points in the oscillatory cycle.

  11. Thermodynamics of a time-dependent and dissipative oval billiard: A heat transfer and billiard approach.

    PubMed

    Leonel, Edson D; Galia, Marcus Vinícius Camillo; Barreiro, Luiz Antonio; Oliveira, Diego F M

    2016-12-01

    We study some statistical properties of the behavior of the average squared velocity (and hence the temperature) for an ensemble of classical particles moving in a billiard whose boundary is time dependent. We assume the collisions of the particles with the boundary of the billiard are inelastic, leading the average squared velocity to reach a steady-state dynamics for large enough times. The stationary state is described using two different approaches: (i) heat transfer motivated by the Fourier law, and (ii) billiard dynamics using both numerical simulations and a theoretical description.

  12. Effects of a Rotating Aerodynamic Probe on the Flow Field of a Compressor Rotor

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    2008-01-01

    An investigation of distortions of the rotor exit flow field caused by an aerodynamic probe mounted in the rotor is described in this paper. A rotor total pressure Kiel probe, mounted on the rotor hub and extending up to the mid-span radius of a rotor blade channel, generates a wake that forms additional flow blockage. Three types of high-response aerodynamic probes were used to investigate the distorted flow field behind the rotor. These probes were: a split-fiber thermo-anemometric probe to measure velocity and flow direction, a total pressure probe, and a disk probe for in-flow static pressure measurement. The signals acquired from these high-response probes were reduced using an ensemble averaging method based on a once per rotor revolution signal. The rotor ensemble averages were combined to construct contour plots for each rotor channel of the rotor tested. In order to quantify the rotor probe effects, the contour plots for each individual rotor blade passage were averaged into a single value. The distribution of these average values along the rotor circumference is a measure of changes in the rotor exit flow field due to the presence of a probe in the rotor. These distributions were generated for axial flow velocity and for static pressure.
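
    Ensemble averaging keyed to a once-per-revolution signal, as used above to reduce the high-response probe data, can be sketched as follows. The trigger geometry, blade count, and waveform are synthetic assumptions, not the actual rig data:

```python
import numpy as np

def ensemble_average_per_rev(signal, rev_indices):
    """Ensemble-average a signal over rotor revolutions.

    rev_indices: sample index of each once-per-revolution trigger pulse.
    Revolutions are truncated to the shortest revolution length so they can
    be averaged sample-by-sample (a simplification; real processing often
    interpolates each revolution onto a fixed number of phase bins).
    """
    revs = [signal[a:b] for a, b in zip(rev_indices[:-1], rev_indices[1:])]
    n = min(len(r) for r in revs)
    return np.mean([r[:n] for r in revs], axis=0)

# Synthetic example: a blade-passing waveform plus noise, 200 samples/rev.
rng = np.random.default_rng(2)
samples_per_rev, n_revs, n_blades = 200, 150, 4
phase = np.arange(samples_per_rev) / samples_per_rev
template = np.sin(2 * np.pi * n_blades * phase)      # blade-passing signature
raw = np.tile(template, n_revs) + rng.normal(scale=1.0, size=samples_per_rev * n_revs)
triggers = np.arange(0, samples_per_rev * n_revs + 1, samples_per_rev)
avg = ensemble_average_per_rev(raw, triggers)
```

    Averaging over revolutions suppresses the random (non-rotor-locked) content by roughly 1/√(number of revolutions), leaving the rotor-locked waveform from which per-passage contour plots can be built.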

  13. Unsteady Velocity Measurements in the NASA Research Low Speed Axial Compressor: Smooth Wall Configuration

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan

    2007-01-01

    The report is a collection of experimental unsteady data acquired in the first stage of the NASA Low Speed Axial Compressor in the configuration with smooth (solid) wall treatment over the first rotor. The aim of the report is to present a reliable experimental database that can be used for analysis of the compressor flow behavior and, hopefully, help with further improvements of compressor CFD codes. All data analysis is strictly restricted to verification of the reliability of the experimental data reported. The report is divided into six main sections. The first two sections cover the low speed axial compressor, the basic instrumentation, and the in-house developed methodology of unsteady velocity measurements using a thermo-anemometric split-fiber probe. The next two sections contain experimental data presented as averaged radial distributions for three compressor operating conditions, including the distribution of the total temperature rise over the first rotor, and ensemble averages of unsteady flow data based on the rotor blade passage period. Ensemble averages based on the rotor revolution period and spectral analysis of unsteady flow parameters are presented in the last two sections. The report is completed by two appendices, where the performance and dynamic response of thermo-anemometric probes are discussed.

  14. Single-ping ADCP measurements in the Strait of Gibraltar

    NASA Astrophysics Data System (ADS)

    Sammartino, Simone; García Lafuente, Jesús; Naranjo, Cristina; Sánchez Garrido, José Carlos; Sánchez Leal, Ricardo

    2016-04-01

    In most Acoustic Doppler Current Profiler (ADCP) user manuals it is recommended to apply ensemble averaging of the single-ping measurements in order to obtain reliable observations of the current speed. The random error of a single-ping measurement is typically too high for the measurement to be used directly, while the averaging operation reduces the ensemble error by a factor of approximately √N, with N the number of averaged pings. A 75 kHz ADCP moored at the western exit of the Strait of Gibraltar, included in the long-term monitoring of the Mediterranean outflow, has recently served as a test setup for a different approach to current measurements. The ensemble averaging was disabled, while maintaining the internal coordinate conversion made by the instrument, and a series of single-ping measurements was collected every 36 seconds over a period of approximately 5 months. The huge amount of data was handled smoothly by the instrument, and no abnormal battery consumption was recorded. As a result, a long and unique series of very-high-frequency current measurements has been collected. Results of this novel approach have been exploited in a dual way. From a statistical point of view, the availability of single-ping measurements allows a real (a posteriori) estimate of the ensemble average error of both current and ancillary variables. While the theoretical random error for horizontal velocity is estimated a priori as ~2 cm s^-1 for a 50-ping ensemble, the value obtained by a posteriori averaging is ~15 cm s^-1, with asymptotic behavior starting from an averaging size of 10 pings per ensemble. This result suggests the presence of external sources of random error (e.g., turbulence) of higher magnitude than the internal sources (ADCP intrinsic precision), which cannot be reduced by the ensemble averaging.
    On the other hand, although the instrumental configuration is clearly not suitable for a precise estimation of turbulent parameters, some hints of the turbulent structure of the flow can be obtained from the empirical computation of the zonal Reynolds stress (along the predominant direction of the current) and the rates of production and dissipation of turbulent kinetic energy. All these parameters show a clear correlation with tidal fluctuations of the current, with maximum values coinciding with flood tides, during the maxima of the Mediterranean outflow current.
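
    The 1/√N error reduction that the manuals assume, and that the a-posteriori analysis above puts to the test, can be demonstrated in a few lines. The single-ping noise level below is an arbitrary assumption for illustration, not the instrument's specification:

```python
import numpy as np

# A-posteriori ensemble-average error vs. number of averaged pings, for
# purely uncorrelated single-ping noise (assumed standard deviation below).
rng = np.random.default_rng(1)
sigma_single = 0.14                                   # m/s (assumed)
pings = rng.normal(0.0, sigma_single, size=(2000, 50))  # 2000 ensembles of 50 pings

# Spread of the ensemble means as a function of ensemble size n.
errors = {n: pings[:, :n].mean(axis=1).std() for n in (1, 10, 50)}
# For independent pings, errors[n] ~ sigma_single / sqrt(n). Correlated
# external noise (e.g. turbulence) would flatten this decay, which is
# exactly what the a-posteriori estimate in the abstract detects.
```

    When the observed error stops decreasing with n, as reported here beyond roughly 10 pings, the residual scatter must come from a source the averaging cannot touch.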

  15. Unsteady Flows in a Single-Stage Transonic Axial-Flow Fan Stator Row. Ph.D. Thesis - Iowa State Univ.

    NASA Technical Reports Server (NTRS)

    Hathaway, Michael D.

    1986-01-01

    Measurements of the unsteady velocity field within the stator row of a transonic axial-flow fan were acquired using a laser anemometer. Measurements were obtained on axisymmetric surfaces located at 10 and 50 percent span from the shroud, with the fan operating at maximum efficiency at design speed. The ensemble-average and variance of the measured velocities are used to identify rotor-wake-generated (deterministic) unsteadiness and turbulence, respectively. Correlations of both deterministic and turbulent velocity fluctuations provide information on the characteristics of unsteady interactions within the stator row. These correlations are derived from the Navier-Stokes equation in a manner similar to deriving the Reynolds stress terms, whereby various averaging operators are used to average the aperiodic, deterministic, and turbulent velocity fluctuations which are known to be present in multistage turbomachines. The correlations of deterministic and turbulent velocity fluctuations throughout the axial fan stator row are presented. In particular, amplification and attenuation of both types of unsteadiness are shown to occur within the stator blade passage.

  16. Estimation of the vortex length scale and intensity from two-dimensional samples

    NASA Technical Reports Server (NTRS)

    Reuss, D. L.; Cheng, W. P.

    1992-01-01

    A method is proposed for estimating flow features that influence flame wrinkling in reciprocating internal combustion engines, where traditional statistical measures of turbulence are suspect. Candidate methods were tested in a computed channel flow, where traditional turbulence measures are valid and performance can be rationally evaluated. Two concepts are tested. First, spatial filtering is applied to the two-dimensional velocity distribution and found to reveal structures corresponding to the vorticity field. Decreasing the spatial-frequency cutoff of the filter locally changes the character and size of the flow structures that are revealed by the filter. Second, the vortex length scale and intensity are estimated by computing the ensemble-average velocity distribution conditionally sampled on the vorticity peaks. The resulting conditionally sampled 'average vortex' has a peak velocity less than half the rms velocity and a size approximately equal to the two-point-correlation integral length scale.
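
    The first concept, spatial low-pass filtering of a two-dimensional field, can be sketched with a separable box filter. The 'structure plus noise' field below is an assumption for illustration only, not the computed channel flow of the study:

```python
import numpy as np

def spatial_lowpass(field, k):
    """Separable k-point moving-average (box) filter with 'same'-size output."""
    kern = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, field)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, out)

# Synthetic field: a large-scale structure plus fine-grained noise.
rng = np.random.default_rng(4)
y, x = np.mgrid[0:64, 0:64]
structure = np.sin(2 * np.pi * x / 32) * np.sin(2 * np.pi * y / 32)
field = structure + rng.normal(scale=0.5, size=(64, 64))
filtered = spatial_lowpass(field, k=5)

# Compare errors away from the boundary, where 'same'-mode padding matters.
err_raw = float(np.sqrt(np.mean((field - structure)[8:-8, 8:-8] ** 2)))
err_filtered = float(np.sqrt(np.mean((filtered - structure)[8:-8, 8:-8] ** 2)))
```

    Widening the kernel (lowering the spatial-frequency cutoff) progressively removes smaller structures, which is the behavior the study exploits to isolate scales of the vorticity field.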

  17. Re-understanding the law-of-the-wall for wall-bounded turbulence based on in-depth investigation of DNS data

    NASA Astrophysics Data System (ADS)

    Cao, Bochao; Xu, Hongyi

    2018-05-01

    Based on direct numerical simulation (DNS) data for straight ducts, namely square and rectangular annular ducts, detailed analyses were conducted of the mean streamwise velocity, relevant velocity scales, and turbulence statistics. It is concluded that turbulent boundary layers (TBL) should be broadly classified into three types (Type-A, -B, and -C) in terms of the distribution patterns of the time-averaged local wall-shear stress (τ_w) or the mean local frictional velocity (u_τ). With reference to the Type-A TBL analysis by von Karman in developing the law-of-the-wall using the time-averaged local frictional velocity (u_τ) as the scale, the current study extended the approach to the Type-B TBL and obtained analytical expressions for the streamwise velocity in the inner layer using the ensemble-averaged frictional velocity (ū_τ) as the scale. These analytical formulae were formed by introducing general damping and enhancing functions. Further, the research applied a near-wall DNS-guided integration to the governing equations of the Type-B TBL and quantitatively proved the correctness and accuracy of the inner-layer analytical expressions for this type.

  18. A snapshot attractor view of the advection of inertial particles in the presence of history force

    NASA Astrophysics Data System (ADS)

    Guseva, Ksenia; Daitche, Anton; Tél, Tamás

    2017-06-01

    We analyse the effect of the Basset history force on the sedimentation or rising of inertial particles in a two-dimensional convection flow. We find that the concept of snapshot attractors is useful to understand the extraordinary slow convergence due to long-term memory: an ensemble of particles converges exponentially fast towards a snapshot attractor, and this attractor undergoes a slow drift for long times. We demonstrate for the case of a periodic attractor that the drift of the snapshot attractor can be well characterized both in the space of the fluid and in the velocity space. For the case of quasiperiodic and chaotic dynamics we propose the use of the average settling velocity of the ensemble as a distinctive measure to characterize the snapshot attractor and the time scale separation corresponding to the convergence towards the snapshot attractor and its own slow dynamics.

  19. Quantifying non-ergodic dynamics of force-free granular gases.

    PubMed

    Bodrova, Anna; Chechkin, Aleksei V; Cherstvy, Andrey G; Metzler, Ralf

    2015-09-14

    Brownian motion is ergodic in the Boltzmann-Khinchin sense that long time averages of physical observables, such as the mean squared displacement, provide the same information as the corresponding ensemble average, even at out-of-equilibrium conditions. This property is the fundamental prerequisite for single particle tracking and its analysis in simple liquids. We study analytically and by event-driven molecular dynamics simulations the dynamics of force-free cooling granular gases and reveal a violation of ergodicity in this Boltzmann-Khinchin sense as well as distinct ageing of the system. Such granular gases comprise materials such as dilute gases of stones, sand, various types of powders, or large molecules, and their mixtures are ubiquitous in Nature and technology, in particular in Space. Depending on the physical-chemical properties of the inter-particle interaction upon pair collisions, we treat both a constant and a velocity-dependent (viscoelastic) restitution coefficient ε. Moreover, we compare the granular gas dynamics with an effective single particle stochastic model based on an underdamped Langevin equation with time-dependent diffusivity. We find that both models share the same behaviour of the ensemble mean squared displacement (MSD) and the velocity correlations in the limit of weak dissipation. Qualitatively, the reported non-ergodic behaviour is generic for granular gases with any realistic dependence of ε on the impact velocity of particles.

  20. Program for narrow-band analysis of aircraft flyover noise using ensemble averaging techniques

    NASA Technical Reports Server (NTRS)

    Gridley, D.

    1982-01-01

    A package of computer programs was developed for analyzing acoustic data from an aircraft flyover. The package assumes the aircraft is flying at constant altitude and constant velocity in a fixed attitude over a linear array of ground microphones. Aircraft position is provided by radar and an option exists for including the effects of the aircraft's rigid-body attitude relative to the flight path. Time synchronization between radar and acoustic recording stations permits ensemble averaging techniques to be applied to the acoustic data thereby increasing the statistical accuracy of the acoustic results. Measured layered meteorological data obtained during the flyovers are used to compute propagation effects through the atmosphere. Final results are narrow-band spectra and directivities corrected for the flight environment to an equivalent static condition at a specified radius.
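
    The core averaging step, narrow-band spectra averaged over time-synchronized records, can be sketched as below. The tone frequency, noise level, and record count are illustrative assumptions, not the program's actual parameters:

```python
import numpy as np

def ensemble_averaged_spectrum(records, fs):
    """Average narrow-band power spectra over time-synchronized records."""
    records = np.asarray(records, dtype=float)
    spec = np.abs(np.fft.rfft(records, axis=1)) ** 2
    freqs = np.fft.rfftfreq(records.shape[1], d=1.0 / fs)
    return freqs, spec.mean(axis=0)

# Synthetic example: a 500 Hz tone buried in noise, 64 synchronized records.
rng = np.random.default_rng(5)
fs, n = 4096, 4096
t = np.arange(n) / fs
records = [np.sin(2 * np.pi * 500 * t + rng.uniform(0, 2 * np.pi))
           + rng.normal(scale=2.0, size=n) for _ in range(64)]
freqs, pxx = ensemble_averaged_spectrum(records, fs)
peak_freq = float(freqs[np.argmax(pxx)])
```

    Averaging the spectra over records shrinks the variance of each noise bin without touching the tone, which is the statistical-accuracy gain the abstract attributes to time synchronization between the radar and acoustic stations.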

  21. Modeling of Ureolytic Calcite Precipitation for the Remediation of Sr-90 Using a Variable Velocity Streamtube Ensemble

    NASA Astrophysics Data System (ADS)

    Weathers, T. S.; Ginn, T. R.; Spycher, N.; Barkouki, T. H.; Fujita, Y.; Smith, R. W.

    2009-12-01

    Subsurface contamination is often mitigated with an injection/extraction well system. An understanding of heterogeneities within this radial flowfield is critical for modeling, prediction, and remediation of the subsurface. We address this using a Lagrangian approach: instead of depicting spatial extents of solutes in the subsurface we focus on their arrival distribution at the control well(s). A well-to-well treatment system that incorporates in situ microbially-mediated ureolysis to induce calcite precipitation for the immobilization of strontium-90 has been explored at the Vadose Zone Research Park (VZRP) near Idaho Falls, Idaho. PHREEQC2 is utilized to model the kinetically-controlled ureolysis and consequent calcite precipitation. PHREEQC2 provides a one-dimensional advective-dispersive transport option that can be and has been used in streamtube ensemble models. Traditionally, each streamtube maintains uniform velocity; however in radial flow in homogeneous media, the velocity within any given streamtube is variable in space, being highest at the input and output wells and approaching a minimum at the midpoint between the wells. This idealized velocity variability is of significance if kinetic reactions are present with multiple components, if kinetic reaction rates vary in space, if the reactions involve multiple phases (e.g. heterogeneous reactions), and/or if they impact physical characteristics (porosity/permeability), as does ureolytically driven calcite precipitation. Streamtube velocity patterns for any particular configuration of injection and withdrawal wells are available as explicit calculations from potential theory, and also from particle tracking programs. To approximate the actual spatial distribution of velocity along streamtubes, we assume idealized non-uniform velocity associated with homogeneous media. 
This is implemented in PHREEQC2 via a non-uniform spatial discretization within each streamtube that honors both the streamtube’s travel time and the idealized “fast-slow-fast” nonuniform velocity along the streamline. Breakthrough curves produced by each simulation are weighted by the path-respective flux fractions (obtained by deconvolution of tracer tests conducted at the VZRP) to obtain the flux-average of flow contributions to the observation well. Breakthrough data from urea injection experiments performed at the VZRP are compared to the model results from the PHREEQC2 variable velocity ensemble.

  22. Phase-resolved and time-averaged puff motions of an excited stack-issued transverse jet

    NASA Astrophysics Data System (ADS)

    Hsu, C. M.; Huang, R. F.

    2013-07-01

    The dynamics of puff motions in an excited stack-issued transverse jet were studied experimentally in a wind tunnel. The temporal and spatial evolution processes of the puffs induced by acoustic excitation were examined using the smoke flow visualization method and high-speed particle image velocimetry. The temporal and spatial evolutions of the puffs were examined using phase-resolved ensemble-averaged velocity fields and the velocity, length scales, and vorticity characteristics of the puffs were studied. The time-averaged velocity fields were calculated to analyze the velocity distributions and vorticity contours. The results show that a puff consists of a pair of counter-rotating vortex rings. An initial vortex ring was formed due to a concentration of vorticity at the lee side of the issuing jet at the instant of the mid-oscillation cycle. A vortex ring rotating in the opposite direction to that of the initial vortex ring was subsequently formed at the upwind side of the issuing jet. These two counter-rotating vortex rings formed a "mushroom" vortex pair, which was deflected by the crossflow and traveled downstream along a time-averaged trajectory of zero vorticity. The trajectory was situated far above the time-averaged streamline evolving from the leading edge of the tube. The velocity magnitudes of the vortex rings at the upwind and the lee side decreased with time evolution as the puffs traveled downstream due to momentum dissipation and entrainment effects. The puffs traveling along the trajectory of zero vorticity caused large velocities to appear above the leading-edge streamline.

  3. Green-Kubo relations for the viscosity of biaxial nematic liquid crystals

    NASA Astrophysics Data System (ADS)

    Sarman, Sten

    1996-09-01

    We derive Green-Kubo relations for the viscosities of a biaxial nematic liquid crystal. In this system there are seven shear viscosities, three twist viscosities, and three cross coupling coefficients between the antisymmetric strain rate and the symmetric traceless pressure tensor. According to the Onsager reciprocity relations, these couplings are equal to the cross couplings between the symmetric traceless strain rate and the antisymmetric pressure. Our method is based on a comparison of the microscopic linear response generated by the SLLOD equations of motion for planar Couette flow (so named because of their close connection to the Doll's tensor Hamiltonian) with the macroscopic linear phenomenological relations between the pressure tensor and the strain rate. In order to obtain simple Green-Kubo relations, we employ an equilibrium ensemble where the angular velocities of the directors are identically zero. This is achieved by adding constraint torques to the equations for the molecular angular accelerations. One finds that all the viscosity coefficients can be expressed as linear combinations of time correlation function integrals (TCFIs). These expressions are much simpler than those in the conventional canonical ensemble, where the viscosities are complicated rational functions of the TCFIs. The reason is that in the constrained angular velocity ensemble the thermodynamic forces are given external parameters, whereas the thermodynamic fluxes are ensemble averages of phase functions; this is not the case in the canonical ensemble. The simplest way of obtaining numerical estimates of the viscosity coefficients of a particular molecular model system is to evaluate these fluctuation relations by equilibrium molecular dynamics simulations.
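The Green-Kubo structure described above, a transport coefficient obtained as the time integral of an equilibrium correlation function, can be sketched numerically. The function name, unit prefactor, and synthetic exponentially decaying correlation below are illustrative assumptions, not the paper's TCFIs.

```python
import numpy as np

def green_kubo_coefficient(acf, dt, prefactor=1.0):
    """Trapezoidal-rule integral of a sampled time correlation function."""
    integral = dt * (0.5 * acf[0] + acf[1:-1].sum() + 0.5 * acf[-1])
    return prefactor * integral

dt = 0.01
t = np.arange(0.0, 10.0, dt)
acf = np.exp(-t / 2.0)                  # toy correlation decaying as exp(-t/2)
eta = green_kubo_coefficient(acf, dt)   # close to the analytic integral 2*(1 - e^-5)
```

In a molecular dynamics context, `acf` would be the measured equilibrium autocorrelation of the relevant pressure-tensor component and the prefactor would carry the volume and temperature factors.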

  4. Long-time correlation for the chaotic orbit in the two-wave Hamiltonian

    NASA Astrophysics Data System (ADS)

    Hatori, Tadatsugu; Irie, Haruyuki

    1987-03-01

    The time correlation function of velocity is found to decay with a power law for the orbit governed by the Hamiltonian H = v^2/2 - M cos x - P cos(k(x - t)). The renormalization group technique can predict the power of the decay for the correlation function defined by the ensemble average. The power spectrum becomes 1/f-type in a special case.
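A minimal numerical sketch of the setup, with assumed parameter values: integrate the equations of motion derived from H = v^2/2 - M cos x - P cos(k(x - t)) for an ensemble of initial conditions and form the ensemble-averaged velocity correlation <v(0) v(tau)>.

```python
import numpy as np

def simulate(v0, M=1.0, P=0.5, k=2.0, dt=0.01, steps=2000):
    """Symplectic-Euler integration of dx/dt = v, dv/dt = -dH/dx."""
    x, v = 0.0, v0
    vs = np.empty(steps)
    for i in range(steps):
        t = i * dt
        a = -(M * np.sin(x) + P * k * np.sin(k * (x - t)))  # force from the two waves
        v += a * dt
        x += v * dt
        vs[i] = v
    return vs

rng = np.random.default_rng(0)
ensemble = np.array([simulate(v0) for v0 in rng.normal(2.0, 0.1, 20)])
corr = (ensemble * ensemble[:, :1]).mean(axis=0)   # ensemble-averaged <v(0) v(tau)>
```

The averaging over initial conditions is the "ensemble average" in the record; characterizing the asymptotic decay of `corr` is what the renormalization group analysis addresses analytically.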

  5. Effects of Periodic Unsteady Wake Flow and Pressure Gradient on Boundary Layer Transition Along the Concave Surface of a Curved Plate. Part 3

    NASA Technical Reports Server (NTRS)

    Schobeiri, M. T.; Radke, R. E.

    1996-01-01

    Boundary layer transition and development on a turbomachinery blade are subject to highly periodic unsteady turbulent flow, pressure gradients in the longitudinal as well as the lateral direction, and surface curvature. To study the effects of periodic unsteady wakes on the concave surface of a turbine blade, a curved plate was utilized. On the concave surface of this plate, detailed experimental investigations were carried out under zero and negative pressure gradients. The measurements were performed in an unsteady flow research facility using a rotating cascade of rods positioned upstream of the curved plate. Boundary layer measurements using a hot-wire probe were analyzed by the ensemble-averaging technique. The results, presented in the temporal-spatial domain, display the transition and further development of the boundary layer, specifically the ensemble-averaged velocity and turbulence intensity. As the results show, the turbulent patches generated by the wakes have different leading- and trailing-edge velocities and merge with the boundary layer, resulting in a strong deformation and the generation of a high-turbulence-intensity core. After the turbulent patch has totally penetrated the boundary layer, pronounced becalmed regions form behind the turbulent patch and extend far beyond the point where they would occur in the corresponding undisturbed steady boundary layer.
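The ensemble-averaging technique used for the hot-wire data can be sketched as phase-locked averaging: samples from many wake-passing periods are folded onto one period and averaged bin by bin, giving the ensemble-averaged velocity and the fluctuation (turbulence) intensity. The signal below is synthetic.

```python
import numpy as np

def ensemble_average(signal, samples_per_period):
    periods = signal.reshape(-1, samples_per_period)   # one row per wake-passing period
    mean = periods.mean(axis=0)                        # ensemble-averaged velocity
    rms = periods.std(axis=0)                          # fluctuation about the ensemble mean
    return mean, rms

n_periods, n = 50, 100
phase = np.linspace(0, 2 * np.pi, n, endpoint=False)
rng = np.random.default_rng(1)
raw = np.tile(10 + np.sin(phase), n_periods) + rng.normal(0, 0.2, n_periods * n)
u_mean, u_rms = ensemble_average(raw, n)
```

The periodic component survives the averaging while the random turbulence averages out, which is exactly why the technique separates wake-induced structure from background fluctuations.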

  6. On averaging aspect ratios and distortion parameters over ice crystal population ensembles for estimating effective scattering asymmetry parameters

    PubMed Central

    van Diedenhoven, Bastiaan; Ackerman, Andrew S.; Fridlind, Ann M.; Cairns, Brian

    2017-01-01

    The use of ensemble-average values of aspect ratio and distortion parameter of hexagonal ice prisms for the estimation of ensemble-average scattering asymmetry parameters is evaluated. Using crystal aspect ratios greater than unity generally leads to ensemble-average values of aspect ratio that are inconsistent with the ensemble-average asymmetry parameters. When a definition of aspect ratio is used that limits the aspect ratio to below unity (α≤1) for both hexagonal plates and columns, the effective asymmetry parameters calculated using ensemble-average aspect ratios are generally consistent with ensemble-average asymmetry parameters, especially if aspect ratios are geometrically averaged. Ensemble-average distortion parameters generally also yield effective asymmetry parameters that are largely consistent with ensemble-average asymmetry parameters. In the case of mixtures of plates and columns, it is recommended to geometrically average the α≤1 aspect ratios and to subsequently calculate the effective asymmetry parameter using a column or plate geometry when the contribution by columns to a given mixture’s total projected area is greater or lower than 50%, respectively. In addition, we show that ensemble-average aspect ratios, distortion parameters and asymmetry parameters can generally be retrieved accurately from simulated multi-directional polarization measurements based on mixtures of varying columns and plates. However, such retrievals tend to be somewhat biased toward yielding column-like aspect ratios. Furthermore, generally large retrieval errors can occur for mixtures with approximately equal contributions of columns and plates and for ensembles with strong contributions of thin plates. PMID:28983127
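The recommended geometric averaging under the α ≤ 1 convention can be sketched as follows; the function name and sample aspect ratios are illustrative.

```python
import numpy as np

def geometric_mean_aspect_ratio(aspect_ratios):
    """Map all aspect ratios to alpha <= 1, then take the geometric mean."""
    a = np.asarray(aspect_ratios, dtype=float)
    a = np.where(a > 1.0, 1.0 / a, a)       # enforce the alpha <= 1 convention
    return float(np.exp(np.mean(np.log(a))))

# Toy mixture of plates (alpha < 1) and columns (given here with alpha > 1):
alpha_eff = geometric_mean_aspect_ratio([0.2, 0.5, 4.0, 2.0])
```

Geometric (rather than arithmetic) averaging in log space treats a plate of aspect ratio 0.5 and a column of aspect ratio 2 symmetrically, consistent with the paper's recommendation.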

  7. μ-PIV measurements of the ensemble flow fields surrounding a migrating semi-infinite bubble.

    PubMed

    Yamaguchi, Eiichiro; Smith, Bradford J; Gaver, Donald P

    2009-08-01

    Microscale particle image velocimetry (μ-PIV) measurements of the ensemble flow fields surrounding a steadily migrating semi-infinite bubble were obtained through the novel adaptation of a computer-controlled linear motor flow control system. The system was programmed to generate a square-wave velocity input in order to produce accurate constant bubble propagation repeatedly and effectively through a fused glass capillary tube. We present a novel technique for re-positioning the coordinate axis to the bubble-tip frame of reference in each instantaneous field through analysis of the sudden change in the standard deviation of centerline velocity profiles across the bubble interface. Ensemble averages were then computed in this bubble-tip frame of reference. Combined fluid systems of water/air, glycerol/air, and glycerol/Si-oil were used to investigate flows comparable to the computational simulations described in Smith and Gaver (2008) and to past experimental observations of interfacial shape. Fluorescent particle images were also analyzed to measure the residual film thickness trailing behind the bubble. The flow fields and film thickness agree very well with the computational simulations as well as with existing experimental and analytical results. Particle accumulation and migration associated with the flow patterns near the bubble tip after long experimental durations are discussed as potential sources of error in the experimental method.
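The re-positioning step can be sketched as follows. The threshold-based tip detection and the toy standard-deviation profile are assumptions for illustration, not the authors' exact criterion; the idea is simply that the interface shows up as a sudden jump in the centerline velocity's standard deviation, and each field is then shifted into that tip frame before ensemble averaging.

```python
import numpy as np

def locate_tip(centerline_std, threshold):
    """Return the first index where the std of the centerline profile exceeds the threshold."""
    return int(np.argmax(np.asarray(centerline_std) > threshold))

def shift_to_tip_frame(field, tip_index):
    """Shift a field so the detected tip sits at index 0 (bubble-tip frame)."""
    return np.roll(field, -tip_index, axis=0)

std_profile = np.array([0.01, 0.02, 0.01, 0.35, 0.40, 0.38])  # toy data: jump at index 3
tip = locate_tip(std_profile, threshold=0.1)
```

Averaging in the tip frame rather than the laboratory frame keeps the interface aligned across instantaneous realizations, so the ensemble average does not smear the near-tip flow structure.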

  8. A comparison between EDA-EnVar and ETKF-EnVar data assimilation techniques using radar observations at convective scales through a case study of Hurricane Ike (2008)

    NASA Astrophysics Data System (ADS)

    Shen, Feifei; Xu, Dongmei; Xue, Ming; Min, Jinzhong

    2017-07-01

    This study examines the impacts of assimilating radar radial velocity (Vr) data for the simulation of Hurricane Ike (2008) with two different ensemble generation techniques in the framework of the hybrid ensemble-variational (EnVar) data assimilation system of the Weather Research and Forecasting model. For the generation of ensemble perturbations we apply two techniques: the ensemble transform Kalman filter (ETKF) and the ensemble of data assimilation (EDA). For the ETKF-EnVar, the forecast ensemble perturbations are updated by the ETKF, while for the EDA-EnVar, the hybrid is employed to update each ensemble member with perturbed observations. The ensemble mean is analyzed by the hybrid method with flow-dependent ensemble covariance for both EnVar schemes. The sensitivity of analyses and forecasts to the two applied ensemble generation techniques is investigated in our current study. It is found that the EnVar system is rather stable with different ensemble update techniques in terms of its skill in improving the analyses and forecasts. The EDA-EnVar-based ensemble perturbations are likely to include slightly less organized spatial structures than those in the ETKF-EnVar, and the perturbations of the latter are constructed more dynamically. Detailed diagnostics reveal that both EnVar schemes not only produce positive temperature increments around the hurricane center but also systematically adjust the hurricane location with the hurricane-specific error covariance. On average, the analyses and forecasts from the ETKF-EnVar have slightly smaller errors than those from the EDA-EnVar in terms of track, intensity, and precipitation. Moreover, the ETKF-EnVar yields better forecasts when verified against conventional observations.

  9. A Scanning laser-velocimeter technique for measuring two-dimensional wake-vortex velocity distributions. [Langley Vortex Research Facility

    NASA Technical Reports Server (NTRS)

    Gartrell, L. R.; Rhodes, D. B.

    1980-01-01

    A rapid-scanning two-dimensional laser velocimeter (LV) has been used to measure simultaneously the vortex vertical and axial velocity distributions in the Langley Vortex Research Facility. The system utilized a two-dimensional Bragg cell to remove flow-direction ambiguity by translating the optical frequency of each velocity component; the components were separated by band-pass filters. A rotational scan mechanism provided an incremental rapid scan to compensate for the large displacement of the vortex with time. The data were processed with a digital counter and an on-line minicomputer. Vaporized kerosene (0.5 micron to 5 micron particle sizes) was used for flow visualization and as LV scattering centers. The overall measured mean-velocity uncertainty is less than 2 percent. These measurements were obtained from ensemble averaging of individual realizations.

  10. Investigation of the tip clearance flow inside and at the exit of a compressor rotor passage

    NASA Technical Reports Server (NTRS)

    Pandya, A.; Lakshminarayana, B.

    1982-01-01

    The nature of the tip clearance flow in a moderately loaded compressor rotor is studied. The measurements were taken inside the clearance between the annulus-wall casing and the rotor blade tip. These measurements were obtained using a stationary two-sensor hot-wire probe in combination with an ensemble averaging technique. The flowfield was surveyed at various radial locations and at ten axial locations, four of which were inside the blade passage in the clearance region and the remaining six outside the passage. Variations of the mean flow properties in the tangential and the radial directions at various axial locations were derived from the data. Variation of the leakage velocity at different axial stations and the annulus-wall boundary layer profiles from passage-averaged mean velocities were also estimated.

  11. --No Title--

    Science.gov Websites

    Background information: bias reduction = ( |domain-averaged ensemble mean bias| - |domain-averaged bias-corrected ensemble mean bias| ) / |domain-averaged bias-corrected ensemble mean bias|. Source: NAEFS ensemble products, EMC, NCEP, National Weather Service.
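Read literally (with the parenthesization assumed, since the record's formula is missing a closing bracket), the bias-reduction metric can be computed as:

```python
def bias_reduction(raw_bias, corrected_bias):
    """Relative shrinkage of the domain-averaged ensemble-mean bias after correction."""
    return (abs(raw_bias) - abs(corrected_bias)) / abs(corrected_bias)

# Example: bias correction shrinks a 1.5 K mean bias to 0.5 K.
r = bias_reduction(1.5, 0.5)   # -> 2.0
```

A value of zero means the correction changed nothing; larger values mean the corrected bias is a small fraction of the raw bias.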

  12. Null-space and statistical significance of first-arrival traveltime inversion

    NASA Astrophysics Data System (ADS)

    Morozov, Igor B.

    2004-03-01

    The strong uncertainty inherent in the traveltime inversion of first arrivals from surface sources is usually removed by using a priori constraints or regularization. This leads to the null-space (data-independent model variability) being inadequately sampled, and consequently, model uncertainties may be underestimated in traditional (such as checkerboard) resolution tests. To measure the full null-space model uncertainties, we use unconstrained Monte Carlo inversion and examine the statistics of the resulting model ensembles. In an application to 1-D first-arrival traveltime inversion, the τ-p method is used to build a set of models that are equivalent to the IASP91 model within small, ~0.02 per cent, time deviations. The resulting velocity variances are much larger, ~2-3 per cent within the regions above the mantle discontinuities, and are interpreted as being due to the null-space. Depth-variant depth averaging is required for constraining the velocities within meaningful bounds, and the averaging scalelength could also be used as a measure of depth resolution. Velocity variances show structure-dependent, negative correlation with the depth-averaging scalelength. Neither the smoothest (Herglotz-Wiechert) nor the mean velocity-depth functions reproduce the discontinuities in the IASP91 model; however, the discontinuities can be identified by the increased null-space velocity (co-)variances. Although derived for a 1-D case, the above conclusions also relate to higher dimensions.

  13. Time Resolved Digital PIV Measurements of Flow Field Cyclic Variation in an Optical IC Engine

    NASA Astrophysics Data System (ADS)

    Jarvis, S.; Justham, T.; Clarke, A.; Garner, C. P.; Hargrave, G. K.; Halliwell, N. A.

    2006-07-01

    Time resolved digital particle image velocimetry (DPIV) experimental data is presented for the in-cylinder flow field development of a motored four stroke spark ignition (SI) optical internal combustion (IC) engine. A high speed DPIV system was employed to quantify the velocity field development during the intake and compression stroke at an engine speed of 1500 rpm. The results map the spatial and temporal development of the in-cylinder flow field structure allowing comparison between traditional ensemble average and cycle average flow field structures. Conclusions are drawn with respect to engine flow field cyclic variations.

  14. Flow disturbance due to presence of the vane anemometer

    NASA Astrophysics Data System (ADS)

    Bujalski, M.; Gawor, M.; Sobczyk, J.

    2014-08-01

    This paper presents the results of preliminary experimental investigations of the disturbance of the velocity field resulting from placing a vane anemometer in the analyzed air flow. Experiments were conducted in a closed-loop wind tunnel. The Particle Image Velocimetry (PIV) method was used to visualize the flow structure and evaluate the instantaneous, two-dimensional velocity vector fields. Both the region of inflow onto the vane anemometer and the flow behind it were examined. The ensemble-averaged velocity distribution and root-mean-square (RMS) velocity fluctuations were determined. The results are presented in the form of contour velocity maps and profile plots. In order to investigate velocity fluctuations in the wake of the vane anemometer with high temporal resolution, the hot-wire anemometry (HWA) technique was used, and a frequency analysis by means of the Fast Fourier Transform was carried out. The obtained results give evidence of a significant, spatially and temporally complex flow disturbance in the vicinity of the analyzed instrument.

  15. --No Title--

    Science.gov Websites

    Background information: bias reduction = ( |domain-averaged ensemble mean bias| - |domain-averaged bias-corrected ensemble mean bias| ) / |domain-averaged bias-corrected ensemble mean bias|.

  16. Laboratory investigation and direct numerical simulation of wind effect on steep surface waves

    NASA Astrophysics Data System (ADS)

    Troitskaya, Yuliya; Sergeev, Daniil; Druzhinin, Oleg; Ermakova, Olga

    2015-04-01

    The small scale ocean-atmosphere interaction at the water-air interface is one of the most important factors determining the processes of heat, mass, and energy exchange in the boundary layers of both geospheres. Another important aspect of the air-sea interaction is the excitation of surface waves. One of the most debated open questions of wave modeling concerns the wind input to the wave field, especially for the case of steep and breaking waves. Two physical mechanisms have been suggested to describe the excitation of finite-amplitude waves. The first is based on treating the wind-wave interaction in the quasi-linear approximation in the framework of semi-empirical models of turbulence of the lower atmospheric boundary layer. An alternative mechanism is associated with separation of the wind flow at the crests of the surface waves. The "separating" and "non-separating" mechanisms of wave generation lead to different dependences of the wind growth rate on the wave steepness: the latter predicts a decrease in the increment with wave steepness, and the former an increase. In this paper the mechanism of the wind-wave interaction is investigated based on physical and numerical experiments. In the physical experiment, turbulent airflow over waves was studied using the video-PIV method, based on the application of high-speed video photography. In contrast to the classical PIV technique, this approach provides statistical ensembles of realizations of instantaneous velocity fields. Experiments were performed in a round wind-wave channel at the Institute of Applied Physics, Russian Academy of Sciences. A fan generated the airflow with a centerline velocity of 4 m/s. The surface waves were generated by a programmed wave-maker at a frequency of 2.5 Hz with amplitudes of 0.65 cm, 1.4 cm, and 2 cm. The working area (27.4 × 10.7 cm2) was at a distance of 3 m from the fan. 
To perform the measurements of the instantaneous velocity fields, spherical polyamide particles 20 μm in diameter were injected into the airflow. The images of the illuminated particles were photographed with a digital CCD video camera at a rate of 1000 frames per second. For each given set of wind and wave parameters, a statistical ensemble of 30 movies with durations from 200 to 600 ms was obtained. Individual flow realizations manifested the typical features of flow separation, while the average vector velocity fields obtained by phase averaging of the individual vector fields were smooth and slightly asymmetrical, with the minimum of the horizontal velocity near the water surface shifted to the leeward side of the wave profile, and did not demonstrate the features of flow separation. The wave-induced pressure perturbations, averaged over the turbulent fluctuations, were retrieved from the measured velocity fields using the Reynolds equations. This ensures sufficient accuracy for studying the dependence of the wave increment on the wave amplitude. The dependences of the wave growth rate on the wave steepness are weakly decreasing, serving as indirect proof of the non-separated character of the flow over waves. Direct numerical simulation of the airflow over a finite-amplitude periodic surface wave was also performed. In these experiments the primitive three-dimensional fluid mechanics equations were solved in the airflow over a curved water boundary for the following parameters: Reynolds number Re = 15000, wave steepness ka = 0-0.2, and parameter c/u* = 0-10 (where u* is the friction velocity and c is the wave celerity). Similar to the physical experiment, the instantaneous realizations of the velocity field demonstrate flow separation at the crests of the waves, but the ensemble-averaged velocity fields had typical structures similar to those existing in shear flows near critical levels, where the phase velocity of the disturbance coincides with the flow velocity. 
The wind growth rate determined by the ensemble-averaged wave-induced pressure component in phase with the wave slope was retrieved from the DNS results. Similar to the physical experiment, the wave growth rate weakly decreased with the wave steepness. The results of the physical and numerical experiments were compared with calculations within a theoretical model of a turbulent boundary layer based on the system of Reynolds equations with a first-order closure hypothesis. Within the model the wind-wave interaction is considered in the quasi-linear approximation, and the mean airflow over waves is treated as non-separated. The calculations within the model represent well the profiles of the mean wind velocity, the turbulent stress, the amplitude and phase of the main harmonics of the wave-induced velocity components, and also the wave-induced pressure fluctuations and the wind wave growth rate obtained both in the physical experiment and in the DNS. The applicability of the non-separating quasi-linear theory for describing the average fields in the airflow over steep and even breaking waves, when the effect of separation is manifested in the instantaneous flow images, can possibly be explained qualitatively by the strongly non-stationary character of the separation process, with a typical time much less than the wave period, and by the small scale of flow heterogeneity in the area of separation. In such a situation, small-scale vortices produced within the separation bubble affect the mean flow and wind-induced disturbances as an eddy viscosity. The flow turbulence then affects the averaged fields as a very viscous fluid, where the effective Reynolds number for the average fields, determined by the eddy viscosity, is small even for steep waves. 
It follows from this assumption that strongly nonlinear effects, such as flow separation, should not be expected in the flow averaged over turbulent fluctuations, and the main harmonics of the wave-induced disturbances of the averaged flow, which determine the energy flux to surface waves, can be described in the weakly nonlinear approximation. This work was supported by a grant from the Government of the Russian Federation under Contract no. 11.G34.31.0048; the European Research Council Advanced Grant, FP7-IDEAS, 227915; and RFBR grants 13-05-00865-а, 13-05-12093-ofi-m, and 15-05-91767.

  17. Alterations of Vertical Jump Mechanics after a Half-Marathon Mountain Running Race

    PubMed Central

    Rousanoglou, Elissavet N.; Noutsos, Konstantinos; Pappas, Achilleas; Bogdanis, Gregory; Vagenas, Georgios; Bayios, Ioannis A.; Boudolos, Konstantinos D.

    2016-01-01

    The fatiguing effect of long-distance running has been examined in the context of a variety of parameters. However, there is a scarcity of data regarding its effect on vertical jump mechanics. The purpose of this study was to investigate the alterations of countermovement jump (CMJ) mechanics after a half-marathon mountain race. Twenty-seven runners performed CMJs before the race (Pre), immediately after the race (Post 1) and five minutes after Post 1 (Post 2). Instantaneous and ensemble-average analysis focused on jump height and the maximum peaks and times-to-maximum peak of Displacement, vertical force (Fz), anterior-posterior force (Fx), Velocity and Power, in the eccentric (tECC) and concentric (tCON) phases of the jump, respectively. Repeated measures ANOVAs were used for statistical analysis (p ≤ 0.05). The jump height decrease was significant in Post 2 (-7.9%) but not in Post 1 (-4.1%). Fx and Velocity decreased significantly in both Post 1 (only in tECC) and Post 2 (both tECC and tCON). A timing shift of the Fz peaks (earlier during tECC and later during tCON) and altered relative peak times (only in tECC) were also observed. Ensemble-average analysis revealed several time intervals of significant post-race alterations and a timing shift in the Fz-Velocity loop. An overall trend of lowered post-race jump output and mechanics was characterised by altered jump timing, restricted anterior-posterior movement and altered force-velocity relations. The specificity of mountain running fatigue to eccentric muscle work appears to be reflected in the different time order of the post-race reductions, with the eccentric phase reductions preceding those of the concentric one. Thus, those who engage in mountain running should particularly consider downhill training to optimise eccentric muscular action. 
Key points: The 4.1% reduction of jump height immediately after the race is not statistically significant. The eccentric phase alterations of jump mechanics precede those of the concentric ones. Force-velocity alterations present a timing shift rather than a change in force or velocity magnitude. PMID:27274665

  18. Alterations of Vertical Jump Mechanics after a Half-Marathon Mountain Running Race.

    PubMed

    Rousanoglou, Elissavet N; Noutsos, Konstantinos; Pappas, Achilleas; Bogdanis, Gregory; Vagenas, Georgios; Bayios, Ioannis A; Boudolos, Konstantinos D

    2016-06-01

    The fatiguing effect of long-distance running has been examined in the context of a variety of parameters. However, there is a scarcity of data regarding its effect on vertical jump mechanics. The purpose of this study was to investigate the alterations of countermovement jump (CMJ) mechanics after a half-marathon mountain race. Twenty-seven runners performed CMJs before the race (Pre), immediately after the race (Post 1) and five minutes after Post 1 (Post 2). Instantaneous and ensemble-average analysis focused on jump height and the maximum peaks and times-to-maximum peak of Displacement, vertical force (Fz), anterior-posterior force (Fx), Velocity and Power, in the eccentric (tECC) and concentric (tCON) phases of the jump, respectively. Repeated measures ANOVAs were used for statistical analysis (p ≤ 0.05). The jump height decrease was significant in Post 2 (-7.9%) but not in Post 1 (-4.1%). Fx and Velocity decreased significantly in both Post 1 (only in tECC) and Post 2 (both tECC and tCON). A timing shift of the Fz peaks (earlier during tECC and later during tCON) and altered relative peak times (only in tECC) were also observed. Ensemble-average analysis revealed several time intervals of significant post-race alterations and a timing shift in the Fz-Velocity loop. An overall trend of lowered post-race jump output and mechanics was characterised by altered jump timing, restricted anterior-posterior movement and altered force-velocity relations. The specificity of mountain running fatigue to eccentric muscle work appears to be reflected in the different time order of the post-race reductions, with the eccentric phase reductions preceding those of the concentric one. Thus, those who engage in mountain running should particularly consider downhill training to optimise eccentric muscular action. 
Key points: The 4.1% reduction of jump height immediately after the race is not statistically significant. The eccentric phase alterations of jump mechanics precede those of the concentric ones. Force-velocity alterations present a timing shift rather than a change in force or velocity magnitude.

  19. Individual differences in ensemble perception reveal multiple, independent levels of ensemble representation.

    PubMed

    Haberman, Jason; Brady, Timothy F; Alvarez, George A

    2015-04-01

    Ensemble perception, including the ability to "see the average" from a group of items, operates in numerous feature domains (size, orientation, speed, facial expression, etc.). Although the ubiquity of ensemble representations is well established, the large-scale cognitive architecture of this process remains poorly defined. We address this using an individual differences approach. In a series of experiments, observers saw groups of objects and reported either a single item from the group or the average of the entire group. High-level ensemble representations (e.g., average facial expression) showed complete independence from low-level ensemble representations (e.g., average orientation). In contrast, low-level ensemble representations (e.g., orientation and color) were correlated with each other, but not with high-level ensemble representations (e.g., facial expression and person identity). These results suggest that there is not a single domain-general ensemble mechanism, and that the relationship among various ensemble representations depends on how proximal they are in representational space.

  20. Laser transit anemometer software development program

    NASA Technical Reports Server (NTRS)

    Abbiss, John B.

    1989-01-01

    Algorithms were developed for the extraction of two components of mean velocity, standard deviation, and the associated correlation coefficient from laser transit anemometry (LTA) data ensembles. The solution method is based on an assumed two-dimensional Gaussian probability density function (PDF) model of the flow field under investigation. The procedure consists of transforming the data ensembles from the data acquisition domain (consisting of time and angle information) to the velocity space domain (consisting of velocity component information). The mean velocity results are obtained from the data ensemble centroid. Through a least squares fitting of the transformed data to an ellipse representing the intersection of a plane with the PDF, the standard deviations and correlation coefficient are obtained. A data set simulation method is presented to test the data reduction process. Results of using the simulation system with a limited test matrix of input values are also given.
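A rough sketch of the domain transformation and centroid-based mean, under the assumption that each sample is a transit time across a known beam-spot separation at a known spot-pair angle. The names and the simplified geometry are illustrative, not the actual LTA reduction code.

```python
import math

def to_velocity(transit_time, angle_rad, spot_separation):
    """Map an acquisition-domain sample (time, angle) to velocity components."""
    speed = spot_separation / transit_time
    return speed * math.cos(angle_rad), speed * math.sin(angle_rad)

def ensemble_mean(samples, spot_separation):
    """Mean velocity from the centroid of the transformed ensemble."""
    vs = [to_velocity(t, a, spot_separation) for t, a in samples]
    n = len(vs)
    return sum(u for u, _ in vs) / n, sum(w for _, w in vs) / n

# Two toy samples along the same direction, 100 μm spot separation:
u, v = ensemble_mean([(1e-5, 0.0), (2e-5, 0.0)], spot_separation=1e-4)
```

The standard deviations and correlation coefficient would then follow from the spread of the transformed points about this centroid, via the least-squares ellipse fit described in the record.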

  1. Stochastic dynamics of small ensembles of non-processive molecular motors: The parallel cluster model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erdmann, Thorsten; Albert, Philipp J.; Schwarz, Ulrich S.

    2013-11-07

    Non-processive molecular motors have to work together in ensembles in order to generate appreciable levels of force or movement. In skeletal muscle, for example, hundreds of myosin II molecules cooperate in thick filaments. In non-muscle cells, by contrast, small groups with few tens of non-muscle myosin II motors contribute to essential cellular processes such as transport, shape changes, or mechanosensing. Here we introduce a detailed and analytically tractable model for this important situation. Using a three-state crossbridge model for the myosin II motor cycle and exploiting the assumptions of fast power stroke kinetics and equal load sharing between motors in equivalent states, we reduce the stochastic reaction network to a one-step master equation for the binding and unbinding dynamics (parallel cluster model) and derive the rules for ensemble movement. We find that for constant external load, ensemble dynamics is strongly shaped by the catch bond character of myosin II, which leads to an increase of the fraction of bound motors under load and thus to firm attachment even for small ensembles. This adaptation to load results in a concave force-velocity relation described by a Hill relation. For external load provided by a linear spring, myosin II ensembles dynamically adjust themselves towards an isometric state with constant average position and load. The dynamics of the ensembles is now determined mainly by the distribution of motors over the different kinds of bound states. For increasing stiffness of the external spring, there is a sharp transition beyond which myosin II can no longer perform the power stroke. Slow unbinding from the pre-power-stroke state protects the ensembles against detachment.
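The one-step (birth-death) binding/unbinding dynamics at the heart of the parallel cluster model can be illustrated with a toy Gillespie simulation. The rates below are placeholders, and the paper's load-dependent (catch-bond) unbinding is omitted; the sketch only shows the master-equation structure.

```python
import random

def simulate_bound_fraction(N=10, kon=1.0, koff=0.5, t_end=200.0, seed=0):
    """i of N motors bound; binding at rate kon*(N-i), unbinding at koff*i."""
    rng = random.Random(seed)
    i, t, acc = 0, 0.0, 0.0
    while t < t_end:
        up, down = kon * (N - i), koff * i
        total = up + down
        dt = rng.expovariate(total)          # Gillespie waiting time
        acc += i * min(dt, t_end - t)        # accumulate occupancy over this interval
        t += dt
        i += 1 if rng.random() < up / total else -1
    return acc / (t_end * N)                 # time-averaged bound fraction

phi = simulate_bound_fraction()
```

For these constant rates the stationary bound fraction is kon/(kon + koff) = 2/3; in the full model the effective unbinding rate decreases with load, which is what produces firm attachment of even small ensembles.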

  2. Ensemble of electrophoretically captured gold nanoparticles as a fingerprint of Boltzmann velocity distribution

    NASA Astrophysics Data System (ADS)

    Hong, S. H.; Kang, M. G.; Lim, J. H.; Hwang, S. W.

    2008-07-01

    An ensemble of electrophoretically captured gold nanoparticles is exploited to fingerprint their velocity distribution in solution. The electrophoretic capture is performed using a dc biased nanogap electrode, and panoramic scanning electron microscopic images are inspected to obtain the regional density of the captured gold nanoparticles. The regional density profile along the surface of the electrode is in a quantitative agreement with the calculated density of the captured nanoparticles. The calculated density is obtained by counting, in the Boltzmann distribution, the number of nanoparticles whose thermal velocity is smaller than the electrophoretic velocity.
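The counting argument can be reproduced from the closed-form Maxwell-Boltzmann speed CDF: the captured fraction is the fraction of particles whose thermal speed is below the electrophoretic drift speed. The nanoparticle mass and drift speed used below are illustrative assumptions, not values from the study.

```python
import math

def maxwell_speed_cdf(v, mass, temperature):
    """Fraction of a 3-D Maxwell-Boltzmann ensemble with thermal speed
    below v: erf(x) - sqrt(2/pi)*(v/sigma)*exp(-x^2), where
    x = v/(sqrt(2)*sigma) and sigma = sqrt(kB*T/m)."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    sigma = math.sqrt(kB * temperature / mass)
    x = v / (math.sqrt(2.0) * sigma)
    return math.erf(x) - math.sqrt(2.0 / math.pi) * (v / sigma) * math.exp(-x * x)

# Hypothetical gold-nanoparticle mass and electrophoretic drift speed:
m_np = 8.1e-20   # kg (illustrative)
v_drift = 5e-3   # m/s (illustrative)
captured_fraction = maxwell_speed_cdf(v_drift, m_np, 298.0)
```

Because thermal speeds of nanoparticles far exceed typical drift speeds, the captured fraction per encounter is tiny, which is why the regional density profile, not an absolute count, carries the fingerprint.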

  3. Reynolds Stress Closure for Inertial Frames and Rotating Frames

    NASA Astrophysics Data System (ADS)

    Petty, Charles; Benard, Andre

    2017-11-01

    In a rotating frame-of-reference, the Coriolis acceleration and the mean vorticity field have a profound impact on the redistribution of kinetic energy among the three components of the fluctuating velocity. Consequently, the normalized Reynolds (NR) stress is not objective. Furthermore, because the Reynolds stress is defined as an ensemble average of a product of fluctuating velocity vector fields, its eigenvalues must be non-negative for all turbulent flows. These fundamental properties (realizability and non-objectivity) of the NR-stress cannot be compromised in computational fluid dynamic (CFD) simulations of turbulent flows in either inertial frames or in rotating frames. The recently developed universal realizable anisotropic prestress (URAPS) closure for the NR-stress depends explicitly on the local mean velocity gradient and the Coriolis operator. The URAPS-closure is a significant paradigm shift from turbulent closure models that assume that dyadic-valued operators associated with turbulent fluctuations are objective.

  4. Model-based assessment of a Northwestern Tropical Pacific moored array to monitor intraseasonal variability

    NASA Astrophysics Data System (ADS)

    Liu, Danian; Zhu, Jiang; Shu, Yeqiang; Wang, Dongxiao; Wang, Weiqiang; Cai, Shuqun

    2018-06-01

The Northwestern Tropical Pacific Ocean (NWTPO) mooring observing system, comprising 15 moorings, was established in 2013 to provide velocity profile data. Observing system simulation experiments (OSSEs) were carried out to assess the ability of the observation system to monitor intraseasonal variability in a pilot study, where ideal "mooring-observed" velocities were assimilated using Ensemble Optimal Interpolation (EnOI) based on the Regional Oceanic Modeling System (ROMS). Because errors between the control and "nature" runs have a mesoscale structure, a random ensemble derived from 20-90-day bandpass-filtered nine-year model outputs proved more appropriate for the NWTPO mooring array assimilation than a random ensemble derived from a 30-day running mean. The simulation of the intraseasonal currents in the North Equatorial Current (NEC), North Equatorial Countercurrent (NECC), and Equatorial Undercurrent (EUC) areas can be improved by assimilating velocity profiles using a 20-90-day bandpass-filtered ensemble. The root mean square errors (RMSEs) of the intraseasonal zonal (U) and meridional (V) velocities above 500 m depth within the study area (between 0°N-18°N and 122°E-147°E) were reduced by 15.4% and 16.9%, respectively. Improvements were greatest in the downstream area of the NEC mooring transect, where the RMSEs of the intraseasonal velocities above 500 m were reduced by more than 30%. Assimilating velocity profiles can have a positive impact on the simulation and forecast of thermohaline structure and sea level anomalies in the ocean.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ginn, Timothy R.; Weathers, Tess

Biogeochemical modeling using PHREEQC2 and a streamtube ensemble approach is utilized to understand a well-to-well subsurface treatment system at the Vadose Zone Research Park (VZRP) near Idaho Falls, Idaho. Treatment involves in situ microbially-mediated ureolysis to induce calcite precipitation for the immobilization of strontium-90. PHREEQC2 is utilized to model the kinetically-controlled ureolysis and consequent calcite precipitation. Reaction kinetics, equilibrium phases, and cation exchange are used within PHREEQC2 to track pH and levels of calcium, ammonium, urea, and calcite precipitation over time, within a series of one-dimensional advective-dispersive transport paths creating a streamtube ensemble representation of the well-to-well transport. An understanding of the impact of physical heterogeneities within this radial flowfield is critical for remediation design; we address this via the streamtube approach: instead of depicting spatial extents of solutes in the subsurface we focus on their arrival distribution at the control well(s). Traditionally, each streamtube maintains uniform velocity; however in radial flow in homogeneous media, the velocity within any given streamtube is spatially-variable in a common way, being highest at the input and output wells and approaching a minimum at the midpoint between the wells. This idealized velocity variability is of significance in the case of ureolytically driven calcite precipitation. Streamtube velocity patterns for any particular configuration of injection and withdrawal wells are available as explicit calculations from potential theory, and also from particle tracking programs. To approximate the actual spatial distribution of velocity along streamtubes, we assume idealized radial non-uniform velocity associated with homogeneous media.
This is implemented in PHREEQC2 via a non-uniform spatial discretization within each streamtube that honors both the streamtube’s travel time and the idealized “fast-slow-fast” pattern of non-uniform velocity along the streamline. Breakthrough curves produced by each simulation are weighted by the path-respective flux fractions (obtained by deconvolution of tracer tests conducted at the VZRP) to obtain the flux-average of flow contributions to the observation well.
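The final flux-averaging step has a compact numerical form: each streamtube's breakthrough curve is weighted by its flux fraction and summed. Function and variable names below are illustrative, not part of the PHREEQC2 workflow itself.

```python
def flux_averaged_breakthrough(breakthrough_curves, flux_fractions):
    """Flux-average per-streamtube breakthrough curves (concentration vs.
    time on a shared grid) using weights from tracer-test deconvolution."""
    assert abs(sum(flux_fractions) - 1.0) < 1e-9, "weights must sum to 1"
    n_times = len(breakthrough_curves[0])
    averaged = [0.0] * n_times
    for curve, weight in zip(breakthrough_curves, flux_fractions):
        for k, conc in enumerate(curve):
            averaged[k] += weight * conc
    return averaged
```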

  6. Cosmological ensemble and directional averages of observables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonvin, Camille; Clarkson, Chris; Durrer, Ruth

We show that at second order, ensemble averages of observables and directional averages do not commute due to gravitational lensing—observing the same thing in many directions over the sky is not the same as taking an ensemble average. In principle this non-commutativity is significant for a variety of quantities that we often use as observables and can lead to a bias in parameter estimation. We derive the relation between the ensemble average and the directional average of an observable, at second order in perturbation theory. We discuss the relevance of these two types of averages for making predictions of cosmological observables, focusing on observables related to distances and magnitudes. In particular, we show that the ensemble average of the distance in a given observed direction is increased by gravitational lensing, whereas the directional average of the distance is decreased. For a generic observable, there exists a particular function of the observable that is not affected by second-order lensing perturbations. We also show that standard areas have an advantage over standard rulers, and we discuss the subtleties involved in averaging in the case of supernova observations.

  7. Tomographic PIV behind a prosthetic heart valve

    NASA Astrophysics Data System (ADS)

    Hasler, D.; Landolt, A.; Obrist, D.

    2016-05-01

The instantaneous three-dimensional velocity field past a bioprosthetic heart valve was measured using tomographic particle image velocimetry. Two digital cameras were used together with a mirror setup to record PIV images from four different angles. Measurements were conducted in a transparent silicone phantom with a simplified geometry of the aortic root. The refractive indices of the silicone phantom and the working fluid were matched to minimize optical distortion from the flow field to the cameras. The silicone phantom of the aorta was integrated in a flow loop driven by a piston pump. Measurements were conducted for steady and pulsatile flow conditions. Results for the instantaneous, ensemble-averaged, and phase-averaged flow fields are presented. The three-dimensional velocity field reveals a flow topology that can be related to features of the aortic valve prosthesis.

  8. Riverine Bathymetry Imaging with Indirect Observations

    NASA Astrophysics Data System (ADS)

    Farthing, M.; Lee, J. H.; Ghorbanidehno, H.; Hesser, T.; Darve, E. F.; Kitanidis, P. K.

    2017-12-01

Bathymetry (i.e., depth) imaging in a river is of crucial importance for shipping operations and flood management. With advancements in sensor technology and computational resources, various types of indirect measurements can be used to estimate high-resolution riverbed topography. In particular, the use of surface velocity measurements has been actively investigated recently, since they are easy to acquire at low cost in all river conditions and surface velocities are sensitive to the river depth. In this work, we image riverbed topography using depth-averaged quasi-steady velocity observations related to the topography through the 2D shallow water equations (SWE). The principal component geostatistical approach (PCGA), a fast and scalable variational inverse modeling method powered by a low-rank representation of the covariance matrix structure, is presented and applied to two "twin" riverine bathymetry identification problems. To compare efficiency and effectiveness, an ensemble-based approach is also applied to the test problems. Results demonstrate that PCGA is superior to the ensemble-based approach in terms of computational effort and accuracy. In particular, the results obtained from PCGA capture small-scale bathymetry features irrespective of the initial guess through the successive linearization of the forward model. Analysis of the direct survey data of the riverine bathymetry used in one of the test problems shows an efficient, parsimonious choice of the solution basis in PCGA, so that the number of numerical model runs used to achieve the inversion results is close to the minimum number that reconstructs the underlying bathymetry.

  9. Multi-Model Ensemble Wake Vortex Prediction

    NASA Technical Reports Server (NTRS)

    Koerner, Stephan; Holzaepfel, Frank; Ahmad, Nash'at N.

    2015-01-01

Several multi-model ensemble methods are investigated for predicting wake vortex transport and decay. This study is a joint effort between the National Aeronautics and Space Administration and the Deutsches Zentrum fuer Luft- und Raumfahrt to develop a multi-model ensemble capability using their wake models. An overview of different multi-model ensemble methods and their feasibility for wake applications is presented. The methods include Reliability Ensemble Averaging, Bayesian Model Averaging, and Monte Carlo Simulations. The methodologies are evaluated using data from wake vortex field experiments.
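The simplest of these schemes, a skill-weighted average in the spirit of Reliability Ensemble Averaging, can be sketched in a few lines. Full REA also weights by inter-model convergence; the pure inverse-error weighting here is a deliberate simplification.

```python
def weighted_ensemble_prediction(predictions, errors):
    """Weight each model's wake prediction by the inverse of its
    historical error against observations, then average."""
    weights = [1.0 / e for e in errors]
    return sum(w * p for w, p in zip(weights, predictions)) / sum(weights)
```

A model with a large historical error contributes little; equal errors reduce the scheme to a plain arithmetic mean.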

  10. Implementation of unsteady sampling procedures for the parallel direct simulation Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Tseng, K.-C.; Wu, J.-S.; Jermy, M. C.; Huang, J.-C.; Krumdieck, S. P.

    2008-06-01

An unsteady sampling routine for a general parallel direct simulation Monte Carlo method called PDSC is introduced, allowing the simulation of time-dependent flow problems in the near continuum range. A post-processing procedure called the DSMC rapid ensemble averaging method (DREAM) is developed to improve the statistical scatter in the results while minimising both memory and simulation time. This method builds an ensemble average of repeated runs over a small number of sampling intervals prior to the sampling point of interest by restarting the flow using either a Maxwellian distribution based on macroscopic properties for near-equilibrium flows (DREAM-I) or the instantaneous particle data output by the original unsteady sampling of PDSC for strongly non-equilibrium flows (DREAM-II). The method is validated by simulating shock tube flow and the development of simple Couette flow. Unsteady PDSC is found to accurately predict the flow field in both cases with significantly reduced run-times over single processor code, and DREAM greatly reduces the statistical scatter in the results while maintaining accurate particle velocity distributions. Simulations are then conducted of two applications involving the interaction of shocks over wedges. The results of these simulations are compared to experimental data and simulations from the literature where these are available. In general, it was found that 10 ensembled runs of DREAM processing could reduce the statistical uncertainty in the raw PDSC data by 2.5-3.3 times, based on the limited number of cases in the present study.
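The scatter reduction reported for DREAM (2.5-3.3 times for 10 runs) is close to the sqrt(N) behavior expected when N independent runs are ensemble averaged, which the following toy calculation illustrates. Gaussian noise stands in for DSMC sampling scatter; no DSMC physics is modeled.

```python
import random
import statistics

def ensemble_average_scatter(n_runs, n_samples=2000, seed=0):
    """Standard deviation of a noisy sampled quantity before and after
    ensemble averaging n_runs repeated runs; the ratio of the two
    approaches sqrt(n_runs) for independent noise."""
    rng = random.Random(seed)
    single = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    ensembled = [statistics.fmean(rng.gauss(0.0, 1.0) for _ in range(n_runs))
                 for _ in range(n_samples)]
    return statistics.stdev(single), statistics.stdev(ensembled)
```

For n_runs = 10 the ratio comes out near sqrt(10) ≈ 3.2, consistent with the 2.5-3.3x range quoted above.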

  11. Reduced set averaging of face identity in children and adolescents with autism.

    PubMed

    Rhodes, Gillian; Neumann, Markus F; Ewing, Louise; Palermo, Romina

    2015-01-01

    Individuals with autism have difficulty abstracting and updating average representations from their diet of faces. These averages function as perceptual norms for coding faces, and poorly calibrated norms may contribute to face recognition difficulties in autism. Another kind of average, known as an ensemble representation, can be abstracted from briefly glimpsed sets of faces. Here we show for the first time that children and adolescents with autism also have difficulty abstracting ensemble representations from sets of faces. On each trial, participants saw a study set of four identities and then indicated whether a test face was present. The test face could be a set average or a set identity, from either the study set or another set. Recognition of set averages was reduced in participants with autism, relative to age- and ability-matched typically developing participants. This difference, which actually represents more accurate responding, indicates weaker set averaging and thus weaker ensemble representations of face identity in autism. Our finding adds to the growing evidence for atypical abstraction of average face representations from experience in autism. Weak ensemble representations may have negative consequences for face processing in autism, given the importance of ensemble representations in dealing with processing capacity limitations.

  12. Comparison of the WSA-ENLIL model with three CME cone types

    NASA Astrophysics Data System (ADS)

    Jang, Soojeong; Moon, Y.; Na, H.

    2013-07-01

We have made a comparison of the CME-associated shock propagation based on the WSA-ENLIL model with three cone types using 29 halo CMEs from 2001 to 2002. These halo CMEs have cone model parameters as well as their associated interplanetary (IP) shocks. For this study we consider three different cone types (an asymmetric cone model, an ice-cream cone model and an elliptical cone model) to determine 3-D CME parameters (radial velocity, angular width and source location), which are the input values of the WSA-ENLIL model. The mean absolute error (MAE) of the arrival times for the asymmetric cone model is 10.6 hours, which is about 1 hour smaller than those of the other models. Their ensemble average of MAE is 9.5 hours. However, this value is still larger than that (8.7 hours) of the empirical model of Kim et al. (2007). We will compare their IP shock velocities and densities with those from ACE in-situ measurements and discuss them in terms of the prediction of geomagnetic storms.
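The arrival-time comparison rests on a mean absolute error; a direct implementation (the example values below are made up, not the 29-event dataset):

```python
def mean_absolute_error(predicted_hours, observed_hours):
    """MAE between predicted and observed shock arrival times, the score
    used to rank the three cone models."""
    pairs = list(zip(predicted_hours, observed_hours))
    return sum(abs(p - o) for p, o in pairs) / len(pairs)
```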

  13. Machine-Learning Algorithms to Automate Morphological and Functional Assessments in 2D Echocardiography.

    PubMed

    Narula, Sukrit; Shameer, Khader; Salem Omar, Alaa Mabrouk; Dudley, Joel T; Sengupta, Partho P

    2016-11-29

Machine-learning models may aid cardiac phenotypic recognition by using features of cardiac tissue deformation. This study investigated the diagnostic value of a machine-learning framework that incorporates speckle-tracking echocardiographic data for automated discrimination of hypertrophic cardiomyopathy (HCM) from physiological hypertrophy seen in athletes (ATH). Expert-annotated speckle-tracking echocardiographic datasets obtained from 77 ATH and 62 HCM patients were used for developing an automated system. An ensemble machine-learning model with 3 different machine-learning algorithms (support vector machines, random forests, and artificial neural networks) was developed, and a majority voting method was used for conclusive predictions with further K-fold cross-validation. Feature selection using an information gain (IG) algorithm revealed that volume was the best predictor for differentiating between HCM and ATH (IG = 0.24), followed by mid-left ventricular segmental strain (IG = 0.134) and average longitudinal strain (IG = 0.131). The ensemble machine-learning model showed increased sensitivity and specificity compared with early-to-late diastolic transmitral velocity ratio (p < 0.01), average early diastolic tissue velocity (e') (p < 0.01), and strain (p = 0.04). Because ATH were younger, adjusted analysis was undertaken in younger HCM patients and compared with ATH with left ventricular wall thickness >13 mm. In this subgroup analysis, the automated model continued to show equal sensitivity, but increased specificity relative to early-to-late diastolic transmitral velocity ratio, e', and strain. Our results suggested that machine-learning algorithms can assist in the discrimination of physiological versus pathological patterns of hypertrophic remodeling.
This effort represents a step toward the development of a real-time, machine-learning-based system for automated interpretation of echocardiographic images, which may help novice readers with limited experience.

  14. Effect of wakes from moving upstream rods on boundary layer separation from a high lift airfoil

    NASA Astrophysics Data System (ADS)

    Volino, Ralph J.

    2011-11-01

Highly loaded airfoils in turbines allow power generation using fewer airfoils. High loading, however, can cause boundary layer separation, resulting in reduced lift and increased aerodynamic loss. Separation is affected by the interaction between rotating blades and stationary vanes. Wakes from upstream vanes periodically impinge on downstream blades, and can reduce separation. The wakes include elevated turbulence, which can induce transition, and a velocity deficit, which results in an impinging flow on the blade surface known as a "negative jet." In the present study, flow through a linear cascade of very high lift airfoils is studied experimentally. Wakes are produced with moving rods which cut through the flow upstream of the airfoils, simulating the effect of upstream vanes. Pressure and velocity fields are documented. Wake spacing and velocity are varied. At low Reynolds numbers without wakes, the boundary layer separates and does not reattach. At high wake passing frequencies separation is largely suppressed. At lower frequencies, ensemble averaged velocity results show intermittent separation and reattachment during the wake passing cycle. Supported by NASA.

  15. Convective cloud vertical velocity and mass-flux characteristics from radar wind profiler observations during GoAmazon2014/5: VERTICAL VELOCITY GOAMAZON2014/5

    DOE PAGES

    Giangrande, Scott E.; Toto, Tami; Jensen, Michael P.; ...

    2016-11-15

A radar wind profiler data set collected during the 2-year Department of Energy Atmospheric Radiation Measurement Observations and Modeling of the Green Ocean Amazon (GoAmazon2014/5) campaign is used to estimate convective cloud vertical velocity, area fraction, and mass flux profiles. Vertical velocity observations are presented using cumulative frequency histograms and weighted mean profiles to provide insights in a manner suitable for global climate model scale comparisons (spatial domains from 20 km to 60 km). Convective profile sensitivity to changes in environmental conditions and seasonal regime controls is also considered. Aggregate and ensemble average vertical velocity, convective area fraction, and mass flux profiles, as well as magnitudes and relative profile behaviors, are found consistent with previous studies. Updrafts and downdrafts increase in magnitude with height to midlevels (6 to 10 km), with updraft area also increasing with height. Updraft mass flux profiles similarly increase with height, showing a peak in magnitude near 8 km. Downdrafts are observed to be most frequent below the freezing level, with downdraft area monotonically decreasing with height. Updraft and downdraft profile behaviors are further stratified according to environmental controls. These results indicate stronger vertical velocity profile behaviors under higher convective available potential energy and lower low-level moisture conditions. Sharp contrasts in convective area fraction and mass flux profiles are most pronounced when retrievals are segregated according to Amazonian wet and dry season conditions. During this deployment, wet season regimes favored higher domain mass flux profiles, attributed to more frequent convection that offsets weaker average convective cell vertical velocities.

  16. Reproducing multi-model ensemble average with Ensemble-averaged Reconstructed Forcings (ERF) in regional climate modeling

    NASA Astrophysics Data System (ADS)

    Erfanian, A.; Fomenko, L.; Wang, G.

    2016-12-01

Multi-model ensemble (MME) average is considered the most reliable approach for simulating both present-day and future climates, and it has been a primary reference for drawing conclusions in major coordinated studies, e.g., the IPCC Assessment Reports and CORDEX. The biases of individual models cancel each other out in the MME average, enabling the ensemble mean to outperform individual members in simulating the mean climate. This enhancement, however, comes with tremendous computational cost, which is especially prohibitive for regional climate modeling, where model uncertainties can originate from both RCMs and the driving GCMs. Here we propose the Ensemble-based Reconstructed Forcings (ERF) approach to regional climate modeling, which achieves a similar level of bias reduction at a fraction of the cost of the conventional MME approach. The new method constructs a single set of initial and boundary conditions (IBCs) by averaging the IBCs of multiple GCMs, and drives the RCM with this ensemble average of IBCs in a single run. Using a regional climate model (RegCM4.3.4-CLM4.5), we tested the method over West Africa for multiple combinations of (up to six) GCMs. Our results indicate that the performance of the ERF method is comparable to that of the MME average in simulating the mean climate. The bias reduction seen in ERF simulations is achieved by using more realistic IBCs in solving the system of equations underlying the RCM physics and dynamics. This endows the new method with a theoretical advantage in addition to reducing computational cost. The ERF output is an unaltered solution of the RCM, as opposed to a climate state that might not be physically plausible due to the averaging of multiple solutions under the conventional MME approach. The ERF approach should be considered for use in major international efforts such as CORDEX. Key words: Multi-model ensemble, ensemble analysis, ERF, regional climate modeling
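The core of the ERF method is averaging forcings rather than model outputs: the IBC fields of the GCMs are averaged grid point by grid point before the single RCM run. A minimal sketch, with fields flattened to lists on a common grid and illustrative names:

```python
def reconstruct_forcing(gcm_fields):
    """Average the corresponding initial/boundary-condition values from
    several GCMs into one forcing field, so a single RCM run can replace
    an ensemble of runs driven by each GCM separately."""
    n_models = len(gcm_fields)
    return [sum(values) / n_models for values in zip(*gcm_fields)]
```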

  17. Ocean currents and acoustic backscatter data from shipboard ADCP measurements at three North Atlantic seamounts between 2004 and 2015.

    PubMed

    Mohn, Christian; Denda, Anneke; Christiansen, Svenja; Kaufmann, Manfred; Peine, Florian; Springer, Barbara; Turnewitsch, Robert; Christiansen, Bernd

    2018-04-01

    Seamounts are amongst the most common physiographic structures of the deep-ocean landscape, but remoteness and geographic complexity have limited the systematic collection of integrated and multidisciplinary data in the past. Consequently, important aspects of seamount ecology and dynamics remain poorly studied. We present a data collection of ocean currents and raw acoustic backscatter from shipboard Acoustic Doppler Current Profiler (ADCP) measurements during six cruises between 2004 and 2015 in the tropical and subtropical Northeast Atlantic to narrow this gap. Measurements were conducted at seamount locations between the island of Madeira and the Portuguese mainland (Ampère, Seine Seamount), as well as east of the Cape Verde archipelago (Senghor Seamount). The dataset includes two-minute ensemble averaged continuous velocity and backscatter profiles, supplemented by spatially gridded maps for each velocity component, error velocity and local bathymetry. The dataset is freely available from the digital data library PANGAEA at https://doi.pangaea.de/10.1594/PANGAEA.883193.
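Two-minute ensemble averaging of ADCP data reduces, bin by bin, the noise of single-ping velocity profiles. A hedged sketch of that reduction step, with a fixed ping count per window standing in for a true two-minute clock window:

```python
def ensemble_average_pings(ping_profiles, pings_per_ensemble):
    """Average consecutive single-ping velocity profiles (lists of bin
    values) into non-overlapping ensembles, the standard first reduction
    applied to shipboard ADCP data."""
    ensembles = []
    step = pings_per_ensemble
    for start in range(0, len(ping_profiles) - step + 1, step):
        chunk = ping_profiles[start:start + step]
        ensembles.append([sum(bins) / len(chunk) for bins in zip(*chunk)])
    return ensembles
```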

  18. Influence of backflow on skin friction in turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Jalalabadi, Razieh; Sung, Hyung Jin

    2018-06-01

A direct numerical simulation of a turbulent pipe flow (Reτ = 544) is used to investigate the influence of the backflow on the vortical structures that contribute to the local skin friction. The backflow is a rare event with a probability density function (PDF) of less than 10^-3. The backflow is found to extend up to y+ ≈ 4 and is induced by the presence of a vortex in the buffer layer. The flow statistics are conditionally sampled under the condition of a negative streamwise velocity (u < 0) at y+ = 3. The magnitude of the conditionally averaged streamwise velocity reaches its maximum at y+ ≈ 27. The intensified conditionally averaged velocity fluctuations contribute to vertical and spanwise momentum transport around the backflow. The ensemble-averaged fluctuating velocity fields reveal layered structures in the Q2 and Q4 events. A strong Q4 event appears above the backflow, flanked by two regions of Q2. The strong downwash of the flow along with the spanwise vortex induces the backflow. The upwash upstream and downstream of the backflow enhances the movement of the low-speed flow in the streamwise and spanwise directions. The velocity-vorticity correlation reveals that the main contributions to Cf are the vorticity advection and vorticity stretching. The main contribution to the conditionally averaged Cf is the wall-normal gradient of the mean spanwise vorticity at the wall. The spanwise vorticity is positive above the backflow, flanked by two regions of negative spanwise vorticity. The conditional PDF shows that backflow is more frequent under negative ul+ at y+ = 100 than under positive ul+.
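Conditional sampling of rare backflow events reduces to masking the statistics by the sign of the near-wall velocity. A minimal sketch with synthetic samples (the DNS fields themselves are obviously not reproduced here):

```python
def conditional_average(u_near_wall, quantity, threshold=0.0):
    """Average `quantity` only over samples where the near-wall streamwise
    velocity is below threshold (backflow events); also return the event
    probability alongside the conditional mean."""
    selected = [q for u, q in zip(u_near_wall, quantity) if u < threshold]
    if not selected:
        return None, 0.0
    probability = len(selected) / len(u_near_wall)
    return sum(selected) / len(selected), probability
```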

  19. On the statistical and transport properties of a non-dissipative Fermi-Ulam model

    NASA Astrophysics Data System (ADS)

    Livorati, André L. P.; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.

    2015-10-01

    The transport and diffusion properties for the velocity of a Fermi-Ulam model were characterized using the decay rate of the survival probability. The system consists of an ensemble of non-interacting particles confined to move along and experience elastic collisions with two infinitely heavy walls. One is fixed, working as a returning mechanism of the colliding particles, while the other one moves periodically in time. The diffusion equation is solved, and the diffusion coefficient is numerically estimated by means of the averaged square velocity. Our results show remarkably good agreement of the theory and simulation for the chaotic sea below the first elliptic island in the phase space. From the decay rates of the survival probability, we obtained transport properties that can be extended to other nonlinear mappings, as well to billiard problems.
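The diffusion-coefficient estimate from the ensemble-averaged squared velocity can be illustrated with a toy random-kick model (a stand-in for the chaotic sea, not the Fermi-Ulam map itself): after n kicks of size ±kick, ⟨v²⟩ grows as n·kick², so D = ⟨v²⟩/(2n) ≈ kick²/2.

```python
import random

def velocity_diffusion_coefficient(n_particles=4000, n_steps=200,
                                   kick=0.1, seed=3):
    """Estimate D from the growth of the ensemble-averaged squared
    velocity, <v^2> ~ 2 D n, for an ensemble of non-interacting particles
    receiving random velocity kicks of fixed magnitude."""
    rng = random.Random(seed)
    velocities = [0.0] * n_particles
    for _ in range(n_steps):
        velocities = [v + rng.choice((-kick, kick)) for v in velocities]
    mean_v2 = sum(v * v for v in velocities) / n_particles
    return mean_v2 / (2.0 * n_steps)
```

The exact value for this toy model is kick²/2 = 0.005; the ensemble estimate converges to it as n_particles grows.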

  20. Unveiling Inherent Degeneracies in Determining Population-weighted Ensembles of Inter-domain Orientational Distributions Using NMR Residual Dipolar Couplings: Application to RNA Helix Junction Helix Motifs

    PubMed Central

    Yang, Shan; Al-Hashimi, Hashim M.

    2016-01-01

    A growing number of studies employ time-averaged experimental data to determine dynamic ensembles of biomolecules. While it is well known that different ensembles can satisfy experimental data to within error, the extent and nature of these degeneracies, and their impact on the accuracy of the ensemble determination remains poorly understood. Here, we use simulations and a recently introduced metric for assessing ensemble similarity to explore degeneracies in determining ensembles using NMR residual dipolar couplings (RDCs) with specific application to A-form helices in RNA. Various target ensembles were constructed representing different domain-domain orientational distributions that are confined to a topologically restricted (<10%) conformational space. Five independent sets of ensemble averaged RDCs were then computed for each target ensemble and a ‘sample and select’ scheme used to identify degenerate ensembles that satisfy RDCs to within experimental uncertainty. We find that ensembles with different ensemble sizes and that can differ significantly from the target ensemble (by as much as ΣΩ ~ 0.4 where ΣΩ varies between 0 and 1 for maximum and minimum ensemble similarity, respectively) can satisfy the ensemble averaged RDCs. These deviations increase with the number of unique conformers and breadth of the target distribution, and result in significant uncertainty in determining conformational entropy (as large as 5 kcal/mol at T = 298 K). Nevertheless, the RDC-degenerate ensembles are biased towards populated regions of the target ensemble, and capture other essential features of the distribution, including the shape. Our results identify ensemble size as a major source of uncertainty in determining ensembles and suggest that NMR interactions such as RDCs and spin relaxation, on their own, do not carry the necessary information needed to determine conformational entropy at a useful level of precision. 
The framework introduced here provides a general approach for exploring degeneracies in ensemble determination for different types of experimental data. PMID:26131693

  1. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalizing nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision-making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The study is set in a highly urbanized watershed along Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite of ensembles of future flood inundation predictions. Time-lagged ensembles from the NWM short-range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to the iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity, along with the underlying uncertainty associated with each of the forecasted variables, were produced. 
The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.
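The per-grid agreement metric described above can be sketched with synthetic data: a stack of member inundation grids gives a per-cell agreement fraction for extent, and ensemble spread for continuous fields such as depth. All sizes and numbers here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical ensemble of inundation-extent grids (True = wet), one per
# time-lagged forecast run through a hydraulic model.
n_members, ny, nx = 10, 50, 50
extent = rng.random((n_members, ny, nx)) < 0.3

# Per-cell agreement: fraction of members predicting inundation.  Cells
# near 0 or 1 are confident; cells near 0.5 are the high-uncertainty fringe.
agreement = extent.mean(axis=0)

# Same idea for continuous fields (depth, velocity): mean plus spread.
depth = rng.gamma(2.0, 0.5, size=(n_members, ny, nx))
depth_mean = depth.mean(axis=0)
depth_spread = depth.std(axis=0)
```

Mapping `agreement` alongside `depth_mean` is one way to visualize the flood extent together with its uncertainty, in the spirit of the maps the study produces.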

  2. Ensemble perception of color in autistic adults.

    PubMed

    Maule, John; Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2017-05-01

    Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839-851. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Ensemble perception of color in autistic adults

    PubMed Central

    Stanworth, Kirstie; Pellicano, Elizabeth; Franklin, Anna

    2016-01-01

    Dominant accounts of visual processing in autism posit that autistic individuals have an enhanced access to details of scenes [e.g., weak central coherence] which is reflected in a general bias toward local processing. Furthermore, the attenuated priors account of autism predicts that the updating and use of summary representations is reduced in autism. Ensemble perception describes the extraction of global summary statistics of a visual feature from a heterogeneous set (e.g., of faces, sizes, colors), often in the absence of local item representation. The present study investigated ensemble perception in autistic adults using a rapidly presented (500 msec) ensemble of four, eight, or sixteen elements representing four different colors. We predicted that autistic individuals would be less accurate when averaging the ensembles, but more accurate in recognizing individual ensemble colors. The results were consistent with the predictions. Averaging was impaired in autism, but only when ensembles contained four elements. Ensembles of eight or sixteen elements were averaged equally accurately across groups. The autistic group also showed a corresponding advantage in rejecting colors that were not originally seen in the ensemble. The results demonstrate the local processing bias in autism, but also suggest that the global perceptual averaging mechanism may be compromised under some conditions. The theoretical implications of the findings and future avenues for research on summary statistics in autism are discussed. Autism Res 2017, 10: 839–851. © 2016 The Authors Autism Research published by Wiley Periodicals, Inc. on behalf of International Society for Autism Research PMID:27874263

  4. Multimodel Ensemble Methods for Prediction of Wake-Vortex Transport and Decay

    NASA Technical Reports Server (NTRS)

    Korner, Stephan; Ahmad, Nashat N.; Holzapfel, Frank; VanValkenburg, Randal L.

    2017-01-01

    Several multimodel ensemble methods are selected and further developed to improve the deterministic and probabilistic prediction skills of individual wake-vortex transport and decay models. The different multimodel ensemble methods are introduced, and their suitability for wake applications is demonstrated. The selected methods include direct ensemble averaging, Bayesian model averaging, and Monte Carlo simulation. The different methodologies are evaluated employing data from wake-vortex field measurement campaigns conducted in the United States and Germany.
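Two of the combination methods named above, direct ensemble averaging and a Bayesian-model-averaging-style weighting, can be sketched as follows. The prediction values, the inverse-MSE weighting rule, and the calibration data are illustrative assumptions, not the paper's actual models or campaign data.

```python
import numpy as np

# Hypothetical predictions of vortex descent (m) from three transport/decay
# models at the same three lead times.
preds = np.array([
    [12.0, 24.0, 35.0],
    [10.5, 22.0, 33.0],
    [13.0, 26.0, 38.0],
])
obs = np.array([11.5, 23.5, 34.5])   # hypothetical past observations

# Direct ensemble averaging: equal weights across models.
direct = preds.mean(axis=0)

# BMA-style weighting (illustrative stand-in): weights from inverse
# mean-square error against past observations, normalized to sum to one.
mse = ((preds - obs) ** 2).mean(axis=1)
w = (1.0 / mse) / (1.0 / mse).sum()
weighted = w @ preds
```

A full Bayesian model average would also carry per-model predictive distributions, which is what enables the probabilistic skill mentioned in the abstract.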

  5. Velocity variations and uncertainty from transdimensional P-wave tomography of North America

    NASA Astrophysics Data System (ADS)

    Burdick, Scott; Lekić, Vedran

    2017-05-01

    High-resolution models of seismic velocity variations constructed using body-wave tomography inform the study of the origin, fate and thermochemical state of mantle domains. In order to reliably relate these variations to material properties including temperature, composition and volatile content, we must accurately retrieve both the patterns and amplitudes of variations and quantify the uncertainty associated with the estimates of each. For these reasons, we image the mantle beneath North America with P-wave traveltimes from USArray using a novel method for 3-D probabilistic body-wave tomography. The method uses a Transdimensional Hierarchical Bayesian framework with a reversible-jump Markov Chain Monte Carlo algorithm in order to generate an ensemble of possible velocity models. We analyse this ensemble solution to obtain the posterior probability distribution of velocities, thereby yielding error bars and enabling rigorous hypothesis testing. Overall, we determine that the average uncertainty (1σ) of compressional wave velocity estimates beneath North America is ∼0.25 per cent dVP/VP, increasing with proximity to complex structure and decreasing with depth. The addition of USArray data reduces the uncertainty beneath the Eastern US by over 50 per cent in the upper mantle and 25-40 per cent below the transition zone and ∼30 per cent throughout the mantle beneath the Western US. In the absence of damping and smoothing, we recover amplitudes of variations 10-80 per cent higher than a standard inversion approach. Accounting for differences in data coverage, we infer that the length scale of heterogeneity is ∼50 per cent longer at shallow depths beneath the continental platform than beneath tectonically active regions. We illustrate the model trade-off analysis for the Cascadia slab and the New Madrid Seismic Zone, where we find that smearing due to the limitations of the illumination is relatively minor.
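Once a transdimensional MCMC run has produced an ensemble of velocity models, the posterior statistics quoted above read directly off that ensemble. The sketch below stands in for rj-MCMC output with synthetic samples; grid size, sample count, and the 0.25 per cent scale are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical posterior ensemble of dVp/Vp models (per cent): 2000 MCMC
# samples over a 40 x 40 grid of cells (standing in for rj-MCMC output).
samples = rng.normal(0.0, 0.25, size=(2000, 40, 40))

# The ensemble solution yields a full posterior per cell: its mean is the
# velocity estimate and its standard deviation the 1-sigma uncertainty.
post_mean = samples.mean(axis=0)
post_sigma = samples.std(axis=0)

# Hypothesis tests also read off the ensemble, e.g. the posterior
# probability that a cell is seismically fast (dVp/Vp > 0).
p_fast = (samples > 0.0).mean(axis=0)
```

This is the sense in which an ensemble solution "yields error bars and enables rigorous hypothesis testing": every derived quantity is a statistic over the sample set.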

  6. Molecular dynamics of liquid crystals

    NASA Astrophysics Data System (ADS)

    Sarman, Sten

    1997-02-01

    We derive Green-Kubo relations for the viscosities of a nematic liquid crystal. The derivation is based on the application of a Gaussian constraint algorithm that makes the director angular velocity of a liquid crystal a constant of motion. Setting this velocity equal to zero means that a director-based coordinate system becomes an inertial frame and that the constraint torques do not do any work on the system. The system consequently remains in equilibrium. However, one generates a different equilibrium ensemble. The great advantage of this ensemble is that the Green-Kubo relations for the viscosities become linear combinations of time correlation function integrals, whereas they are complicated rational functions in the conventional canonical ensemble. This facilitates the numerical evaluation of the viscosities by molecular dynamics simulations.
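The practical payoff described above, a viscosity written as the time integral of an equilibrium correlation function, has the generic Green-Kubo form sketched below. The exponential autocorrelation model and the choice V/(k_B T) = 1 are illustrative assumptions, not the paper's liquid-crystal expressions.

```python
import numpy as np

# Green-Kubo relations express a transport coefficient as the time integral
# of an equilibrium correlation function.  Model stress autocorrelation:
#   C(t) = C0 * exp(-t / tau), so the exact integral is C0 * tau = 1.0.
dt = 0.01                       # reduced time units
t = np.arange(0.0, 50.0, dt)
c0, tau = 2.0, 0.5
acf = c0 * np.exp(-t / tau)

# eta = (V / kB T) * integral_0^inf C(t) dt, with V / kB T taken as 1.
# Trapezoidal rule (the tail beyond t = 50 is negligible here).
eta = dt * (0.5 * acf[0] + acf[1:].sum())
```

In a simulation the model `acf` would be replaced by the measured correlation function, which is exactly why a linear combination of such integrals is easier to evaluate than a rational function of them.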

  7. Creating "Intelligent" Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, Noel; Taylor, Patrick

    2014-05-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is used to add value to individual model projections and construct a consensus projection. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, individual models reproduce certain climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. The intention is to produce improved ("intelligent") unequal-weight ensemble averages. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables—e.g., outgoing longwave radiation and surface temperature. Several climate process metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument in combination with surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing the equal-weighted ensemble average and an ensemble weighted using the process-based metric. 
Additionally, this study investigates the dependence of the metric weighting scheme on the climate state using a combination of model simulations including a non-forced preindustrial control experiment, historical simulations, and several radiative forcing Representative Concentration Pathway (RCP) scenarios. Ultimately, the goal of the framework is to advise better methods for ensemble averaging models and create better climate predictions.
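A process-based metric of the kind defined above (the relationship between two physically related variables) can be sketched as a regression slope, with models weighted by how closely they reproduce the observed slope. All data, slopes, and the inverse-error weighting rule below are hypothetical stand-ins, not the study's actual metrics or CERES values.

```python
import numpy as np

rng = np.random.default_rng(2)

def process_metric(olr, tsurf):
    """Slope of the OLR-vs-surface-temperature relationship (W m-2 K-1)."""
    return np.polyfit(tsurf, olr, 1)[0]

# Hypothetical "observed" relationship (standing in for CERES + surface
# temperature data): slope of about 2 W m-2 K-1 plus noise.
t_obs = rng.normal(288.0, 5.0, 200)
olr_obs = 2.0 * (t_obs - 288.0) + 240.0 + rng.normal(0.0, 1.0, 200)
obs_slope = process_metric(olr_obs, t_obs)

# Hypothetical model slopes; weight each model by closeness to observations.
model_slopes = np.array([1.8, 2.3, 1.2, 2.05])
err = np.abs(model_slopes - obs_slope)
weights = 1.0 / (err + 1e-6)
weights /= weights.sum()

# "Intelligent" unequal-weight average of a projected quantity
# (hypothetical per-model warming, K) versus the equal-weight average.
projections = np.array([3.1, 3.6, 2.4, 3.3])
weighted_mean = weights @ projections
equal_mean = projections.mean()
```

Comparing `weighted_mean` against `equal_mean` region by region is the kind of difference the study reports as significant.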

  8. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2014-11-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural vs. model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty is far more important than model parametric uncertainty to estimate irrigation water requirement. Using the Reliability Ensemble Averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.
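The exceedance-probability comparison described above can be sketched as follows. The member count, the synthetic requirements, and the consensus-distance weighting are illustrative assumptions; the real REA criterion also accounts for model performance and convergence.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical irrigation water requirement (mm) for 30 ET-model /
# crop-coefficient combinations over 500 locations or years.
iwr = rng.normal(420.0, 80.0, size=(30, 500))

# Equal-weighted ensemble average per location.
equal_avg = iwr.mean(axis=0)

# REA-style weights (simplified): down-weight members far from the
# ensemble consensus, then normalize to sum to one.
dist = np.abs(iwr.mean(axis=1) - iwr.mean())
w = 1.0 / (dist + 1e-6)
w /= w.sum()
rea_avg = w @ iwr

# How often a hypothetical 400 mm water-right limit would be exceeded
# under each averaging scheme.
p_equal = (equal_avg > 400.0).mean()
p_rea = (rea_avg > 400.0).mean()
```

The study's 66% vs. 45% contrast is precisely a comparison of quantities like `p_equal` and `p_rea` computed from its real ensemble.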

  9. EMC Global Climate And Weather Modeling Branch Personnel

    Science.gov Websites

    Comparison statistics, which include: NCEP raw and bias-corrected ensemble domain-averaged bias; NCEP raw and bias-corrected ensemble domain-averaged bias reduction (percent); CMC raw and bias-corrected control forecast domain-averaged bias; CMC raw and bias-corrected control forecast domain-averaged bias reduction.

  10. Ensemble Averaged Probability Density Function (APDF) for Compressible Turbulent Reacting Flows

    NASA Technical Reports Server (NTRS)

    Shih, Tsan-Hsing; Liu, Nan-Suey

    2012-01-01

    In this paper, we present a concept of the averaged probability density function (APDF) for studying compressible turbulent reacting flows. The APDF is defined as an ensemble average of the fine grained probability density function (FG-PDF) with a mass density weighting. It can be used to exactly deduce the mass density weighted, ensemble averaged turbulent mean variables. The transport equation for APDF can be derived in two ways. One is the traditional way that starts from the transport equation of FG-PDF, in which the compressible Navier-Stokes equations are embedded. The resulting transport equation of APDF is then in a traditional form that contains conditional means of all terms from the right-hand side of the Navier-Stokes equations except for the chemical reaction term. These conditional means are new unknown quantities that need to be modeled. Another way of deriving the transport equation of APDF is to start directly from the ensemble averaged Navier-Stokes equations. The resulting transport equation of APDF derived from this approach appears in a closed form without any need for additional modeling. The methodology of ensemble averaging presented in this paper can be extended to other averaging procedures: for example, the Reynolds time averaging for statistically steady flow and the Reynolds spatial averaging for statistically homogeneous flow. It can also be extended to a time or spatial filtering procedure to construct the filtered density function (FDF) for the large eddy simulation (LES) of compressible turbulent reacting flows.
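The density-weighted ensemble average described above can be written compactly in generic fine-grained-PDF notation (the symbols below are illustrative and not necessarily the paper's own):

```latex
% Fine-grained PDF of the flow variables \phi at (x, t):
%   f(\psi; x, t) = \delta\!\big(\psi - \phi(x, t)\big)
% Mass-density-weighted, ensemble-averaged PDF (APDF):
\widetilde{F}(\psi; x, t) =
  \frac{\big\langle \rho(x,t)\, \delta\!\big(\psi - \phi(x,t)\big) \big\rangle}
       {\big\langle \rho(x,t) \big\rangle},
% so that density-weighted (Favre) mean variables follow exactly:
\widetilde{Q}(x,t) = \int Q(\psi)\, \widetilde{F}(\psi; x, t)\, d\psi
  = \frac{\langle \rho\, Q(\phi) \rangle}{\langle \rho \rangle}.
```

The second identity is the sense in which the APDF "exactly deduces" the mass-density-weighted, ensemble-averaged mean variables.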

  11. DREAM: An Efficient Methodology for DSMC Simulation of Unsteady Processes

    NASA Astrophysics Data System (ADS)

    Cave, H. M.; Jermy, M. C.; Tseng, K. C.; Wu, J. S.

    2008-12-01

    A technique called the DSMC Rapid Ensemble Averaging Method (DREAM) for reducing the statistical scatter in the output from unsteady DSMC simulations is introduced. During post-processing by DREAM, the DSMC algorithm is re-run multiple times over a short period before the temporal point of interest thus building up a combination of time- and ensemble-averaged sampling data. The particle data is regenerated several mean collision times before the output time using the particle data generated during the original DSMC run. This methodology conserves the original phase space data from the DSMC run and so is suitable for reducing the statistical scatter in highly non-equilibrium flows. In this paper, the DREAM-II method is investigated and verified in detail. Propagating shock waves at high Mach numbers (Mach 8 and 12) are simulated using a parallel DSMC code (PDSC) and then post-processed using DREAM. The ability of DREAM to obtain the correct particle velocity distribution in the shock structure is demonstrated and the reduction of statistical scatter in the output macroscopic properties is measured. DREAM is also used to reduce the statistical scatter in the results from the interaction of a Mach 4 shock with a square cavity and for the interaction of a Mach 12 shock on a wedge in a channel.

  12. MSEBAG: a dynamic classifier ensemble generation based on `minimum-sufficient ensemble' and bagging

    NASA Astrophysics Data System (ADS)

    Chen, Lei; Kamel, Mohamed S.

    2016-01-01

    In this paper, we propose a dynamic classifier system, MSEBAG, which is characterised by searching for the 'minimum-sufficient ensemble' and bagging at the ensemble level. It adopts an 'over-generation and selection' strategy and aims to achieve a good bias-variance trade-off. In the training phase, MSEBAG first searches for the 'minimum-sufficient ensemble', which maximises the in-sample fitness with the minimal number of base classifiers. Then, starting from the 'minimum-sufficient ensemble', a backward stepwise algorithm is employed to generate a collection of ensembles. The objective is to create a collection of ensembles with a descending fitness on the data, as well as a descending complexity in the structure. MSEBAG dynamically selects the ensembles from the collection for the decision aggregation. The extended adaptive aggregation (EAA) approach, a bagging-style algorithm performed at the ensemble level, is employed for this task. EAA searches for the competent ensembles using a score function, which takes into consideration both the in-sample fitness and the confidence of the statistical inference, and averages the decisions of the selected ensembles to label the test pattern. The experimental results show that the proposed MSEBAG outperforms the benchmarks on average.

  13. Reduction of predictive uncertainty in estimating irrigation water requirement through multi-model ensembles and ensemble averaging

    NASA Astrophysics Data System (ADS)

    Multsch, S.; Exbrayat, J.-F.; Kirby, M.; Viney, N. R.; Frede, H.-G.; Breuer, L.

    2015-04-01

    Irrigation agriculture plays an increasingly important role in food supply. Many evapotranspiration models are used today to estimate the water demand for irrigation. They consider different stages of crop growth by empirical crop coefficients to adapt evapotranspiration throughout the vegetation period. We investigate the importance of the model structural versus model parametric uncertainty for irrigation simulations by considering six evapotranspiration models and five crop coefficient sets to estimate irrigation water requirements for growing wheat in the Murray-Darling Basin, Australia. The study is carried out using the spatial decision support system SPARE:WATER. We find that structural model uncertainty among reference ET is far more important than model parametric uncertainty introduced by crop coefficients. These crop coefficients are used to estimate irrigation water requirement following the single crop coefficient approach. Using the reliability ensemble averaging (REA) technique, we are able to reduce the overall predictive model uncertainty by more than 10%. The exceedance probability curve of irrigation water requirements shows that a certain threshold, e.g. an irrigation water limit due to water right of 400 mm, would be less frequently exceeded in case of the REA ensemble average (45%) in comparison to the equally weighted ensemble average (66%). We conclude that multi-model ensemble predictions and sophisticated model averaging techniques are helpful in predicting irrigation demand and provide relevant information for decision making.

  14. Quantifying rapid changes in cardiovascular state with a moving ensemble average.

    PubMed

    Cieslak, Matthew; Ryan, William S; Babenko, Viktoriya; Erro, Hannah; Rathbun, Zoe M; Meiring, Wendy; Kelsey, Robert M; Blascovich, Jim; Grafton, Scott T

    2018-04-01

    MEAP, the moving ensemble analysis pipeline, is a new open-source tool designed to perform multisubject preprocessing and analysis of cardiovascular data, including electrocardiogram (ECG), impedance cardiogram (ICG), and continuous blood pressure (BP). In addition to traditional ensemble averaging, MEAP implements a moving ensemble averaging method that allows for the continuous estimation of indices related to cardiovascular state, including cardiac output, preejection period, heart rate variability, and total peripheral resistance, among others. Here, we define the moving ensemble technique mathematically, highlighting its differences from fixed-window ensemble averaging. We describe MEAP's interface and features for signal processing, artifact correction, and cardiovascular-based fMRI analysis. We demonstrate the accuracy of MEAP's novel B point detection algorithm on a large collection of hand-labeled ICG waveforms. As a proof of concept, two subjects completed a series of four physical and cognitive tasks (cold pressor, Valsalva maneuver, video game, random dot kinetogram) on 3 separate days while ECG, ICG, and BP were recorded. Critically, the moving ensemble method reliably captures the rapid cyclical cardiovascular changes related to the baroreflex during the Valsalva maneuver and the classic cold pressor response. Cardiovascular measures were seen to vary considerably within repetitions of the same cognitive task for each individual, suggesting that a carefully designed paradigm could be used to capture fast-acting event-related changes in cardiovascular state. © 2017 Society for Psychophysiological Research.
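The contrast between fixed-window and moving ensemble averaging can be sketched with beat-aligned waveforms: the fixed scheme yields one averaged waveform per block of beats, while the moving scheme slides a window across beats to give a beat-by-beat estimate. The waveform shape, beat counts, and window half-width below are hypothetical, not MEAP's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical beat-aligned ICG waveforms: 120 heartbeats x 300 samples.
beats = np.sin(np.linspace(0.0, 2.0 * np.pi, 300)) \
        + rng.normal(0.0, 0.3, (120, 300))

# Traditional fixed-window ensemble average: one waveform per 30-beat block.
fixed = beats.reshape(4, 30, 300).mean(axis=1)      # 4 averaged waveforms

# Moving ensemble average: a sliding window of +/- k beats around each
# beat, giving a continuous, beat-by-beat waveform estimate.
k = 15
moving = np.stack([
    beats[max(0, i - k): i + k + 1].mean(axis=0)
    for i in range(len(beats))
])
```

Because `moving` has one averaged waveform per beat, indices derived from it (cardiac output, preejection period, and so on) can track the rapid changes that a fixed-window average smooths away.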

  15. Coherent transmission of an ultrasonic shock wave through a multiple scattering medium.

    PubMed

    Viard, Nicolas; Giammarinaro, Bruno; Derode, Arnaud; Barrière, Christophe

    2013-08-01

    We report measurements of the transmitted coherent (ensemble-averaged) wave resulting from the interaction of an ultrasonic shock wave with a two-dimensional random medium. Despite multiple scattering, the coherent waveform clearly shows the steepening that is typical of nonlinear harmonic generation. This is taken advantage of to measure the elastic mean free path and group velocity over a broad frequency range (2-15 MHz) in only one experiment. Experimental results are found to be in good agreement with a linear theoretical model taking into account spatial correlations between scatterers. These results show that nonlinearity and multiple scattering are both present, yet uncoupled.
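The mean-free-path estimate described above typically comes from the exponential decay of the coherent amplitude with propagation distance. The sketch below fits that decay on synthetic data; the slab thicknesses, the 4 mm mean free path, and the noise level are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(8)

# The ensemble-averaged (coherent) amplitude through a slab of thickness L
# decays as exp(-L / (2 * l_e)), with l_e the elastic mean free path.
# Hypothetical data: five slab thicknesses (mm), l_e = 4 mm, 5% noise.
L = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
l_e_true = 4.0
amp = np.exp(-L / (2.0 * l_e_true)) * (1.0 + 0.05 * rng.normal(size=L.size))

# Fit ln(amplitude) vs. thickness: slope = -1 / (2 * l_e).
slope, _ = np.polyfit(L, np.log(amp), 1)
l_e_est = -1.0 / (2.0 * slope)
```

Repeating this fit frequency band by frequency band is what yields the broadband (2-15 MHz) mean-free-path measurement from a single shock-wave experiment.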

  16. Impact of Bias-Correction Type and Conditional Training on Bayesian Model Averaging over the Northeast United States

    Treesearch

    Michael J. Erickson; Brian A. Colle; Joseph J. Charney

    2012-01-01

    The performance of a multimodel ensemble over the northeast United States is evaluated before and after applying bias correction and Bayesian model averaging (BMA). The 13-member Stony Brook University (SBU) ensemble at 0000 UTC is combined with the 21-member National Centers for Environmental Prediction (NCEP) Short-Range Ensemble Forecast (SREF) system at 2100 UTC....

  17. The Weighted-Average Lagged Ensemble.

    PubMed

    DelSole, T; Trenary, L; Tippett, M K

    2017-11-01

    A lagged ensemble is an ensemble of forecasts from the same model initialized at different times but verifying at the same time. The skill of a lagged ensemble mean can be improved by assigning weights to different forecasts in such a way as to maximize skill. If the forecasts are bias corrected, then an unbiased weighted lagged ensemble requires the weights to sum to one. Such a scheme is called a weighted-average lagged ensemble. In the limit of uncorrelated errors, the optimal weights are positive and decay monotonically with lead time, so that the least skillful forecasts have the least weight. In more realistic applications, the optimal weights do not always behave this way. This paper presents a series of analytic examples designed to illuminate conditions under which the weights of an optimal weighted-average lagged ensemble become negative or depend nonmonotonically on lead time. It is shown that negative weights are most likely to occur when the errors grow rapidly and are highly correlated across lead time. The weights are most likely to behave nonmonotonically when the mean square error is approximately constant over the range of forecasts included in the lagged ensemble. An extreme example of the latter behavior is presented in which the optimal weights vanish everywhere except at the shortest and longest lead times.
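With the weights constrained to sum to one, the weighted-mean error reduces to a weighted sum of the individual forecast errors, so the optimal weights minimize a quadratic form in the error covariance matrix. The sketch below solves that problem on synthetic data; the three-lag setup and the covariance values (errors growing and correlating with lead time) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical lagged ensemble: 3 forecasts (lags) of the same verification
# time over 400 cases, with errors that grow and correlate with lead time.
obs = rng.normal(0.0, 1.0, 400)
errs = rng.multivariate_normal(
    mean=[0.0, 0.0, 0.0],
    cov=[[0.4, 0.3, 0.2],
         [0.3, 0.7, 0.5],
         [0.2, 0.5, 1.1]],
    size=400,
).T                                    # shape (3, 400)
fcst = obs + errs

# Because the weights sum to one, the weighted-mean error is w . e, so the
# optimal weights minimize w' M w with M the error covariance matrix:
#   w = M^{-1} 1 / (1' M^{-1} 1)
M = errs @ errs.T / errs.shape[1]
ones = np.ones(3)
w = np.linalg.solve(M, ones)
w /= ones @ w                          # sums to one; entries may be negative
weighted_mean = w @ fcst
```

Inspecting `w` for covariances like the one above (rapid error growth, high cross-lag correlation) is how the negative-weight regime discussed in the paper shows up numerically.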

  18. Ionospheric Storm Reconstructions with a Multimodel Ensemble Prediction System (MEPS) of Data Assimilation Models: Mid and Low Latitude Dynamics

    NASA Astrophysics Data System (ADS)

    Schunk, R. W.; Scherliess, L.; Eccles, V.; Gardner, L. C.; Sojka, J. J.; Zhu, L.; Pi, X.; Mannucci, A. J.; Komjathy, A.; Wang, C.; Rosen, G.

    2016-12-01

    As part of the NASA-NSF Space Weather Modeling Collaboration, we created a Multimodel Ensemble Prediction System (MEPS) for the Ionosphere-Thermosphere-Electrodynamics system that is based on Data Assimilation (DA) models. MEPS is composed of seven physics-based data assimilation models that cover the globe. Ensemble modeling can be conducted for the mid-low latitude ionosphere using the four GAIM data assimilation models, including the Gauss Markov (GM), Full Physics (FP), Band Limited (BL) and 4DVAR DA models. These models can assimilate Total Electron Content (TEC) from a constellation of satellites, bottom-side electron density profiles from digisondes, in situ plasma densities, occultation data and ultraviolet emissions. The four GAIM models were run for the March 16-17, 2013, geomagnetic storm period with the same data, but we also systematically added new data types and re-ran the GAIM models to see how the different data types affected the GAIM results, with the emphasis on elucidating differences in the underlying ionospheric dynamics and thermospheric coupling. Also, for each scenario the outputs from the four GAIM models were used to produce an ensemble mean for TEC, NmF2, and hmF2. A simple average of the models was used in the ensemble averaging to see if there was an improvement of the ensemble average over the individual models. For the scenarios considered, the ensemble average yielded better specifications than the individual GAIM models. The model differences and averages, and the consequent differences in ionosphere-thermosphere coupling and dynamics will be discussed.

  19. Role of pseudo-turbulent stresses in shocked particle clouds and construction of surrogate models for closure

    NASA Astrophysics Data System (ADS)

    Sen, O.; Gaul, N. J.; Davis, S.; Choi, K. K.; Jacobs, G.; Udaykumar, H. S.

    2018-05-01

    Macroscale models of shock-particle interactions require closure terms for unresolved solid-fluid momentum and energy transfer. These comprise the effects of mean as well as fluctuating fluid-phase velocity fields in the particle cloud. Mean drag and Reynolds stress equivalent terms (also known as pseudo-turbulent terms) appear in the macroscale equations. Closure laws for the pseudo-turbulent terms are constructed in this work from ensembles of high-fidelity mesoscale simulations. The computations are performed over a wide range of Mach numbers (M) and particle volume fractions (φ) and are used to explicitly compute the pseudo-turbulent stresses from the Favre average of the velocity fluctuations in the flow field. The computed stresses are then used as inputs to a Modified Bayesian Kriging method to generate surrogate models. The surrogates can be used as closure models for the pseudo-turbulent terms in macroscale computations of shock-particle interactions. It is found that the kinetic energy associated with the velocity fluctuations is comparable to that of the mean flow—especially for increasing M and φ. This work is a first attempt to quantify and evaluate the effect of velocity fluctuations for problems of shock-particle interactions.

  20. Role of pseudo-turbulent stresses in shocked particle clouds and construction of surrogate models for closure

    NASA Astrophysics Data System (ADS)

    Sen, O.; Gaul, N. J.; Davis, S.; Choi, K. K.; Jacobs, G.; Udaykumar, H. S.

    2018-02-01

    Macroscale models of shock-particle interactions require closure terms for unresolved solid-fluid momentum and energy transfer. These comprise the effects of mean as well as fluctuating fluid-phase velocity fields in the particle cloud. Mean drag and Reynolds stress equivalent terms (also known as pseudo-turbulent terms) appear in the macroscale equations. Closure laws for the pseudo-turbulent terms are constructed in this work from ensembles of high-fidelity mesoscale simulations. The computations are performed over a wide range of Mach numbers (M) and particle volume fractions (φ) and are used to explicitly compute the pseudo-turbulent stresses from the Favre average of the velocity fluctuations in the flow field. The computed stresses are then used as inputs to a Modified Bayesian Kriging method to generate surrogate models. The surrogates can be used as closure models for the pseudo-turbulent terms in macroscale computations of shock-particle interactions. It is found that the kinetic energy associated with the velocity fluctuations is comparable to that of the mean flow—especially for increasing M and φ. This work is a first attempt to quantify and evaluate the effect of velocity fluctuations for problems of shock-particle interactions.

  1. An interplanetary magnetic field ensemble at 1 AU

    NASA Technical Reports Server (NTRS)

    Matthaeus, W. H.; Goldstein, M. L.; King, J. H.

    1985-01-01

    A method for calculating ensemble averages from magnetic field data is described. A data set comprising approximately 16 months of nearly continuous ISEE-3 magnetic field data is used in this study. Individual subintervals of this data, ranging from 15 hours to 15.6 days, comprise the ensemble. The sole condition for including each subinterval in the averages is the degree to which it represents a weakly time-stationary process. Averages obtained by this method are appropriate for a turbulence description of the interplanetary medium. The ensemble average correlation length obtained from all subintervals is found to be 4.9 x 10^11 cm. The average values of the variances of the magnetic field components are in the approximate ratio 8:9:10, where the third component is the local mean field direction. The correlation lengths and variances are found to have a systematic variation with subinterval duration, reflecting the important role of low-frequency fluctuations in the interplanetary medium.
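A correlation length of the kind quoted above is commonly estimated by integrating the normalized autocorrelation of the field fluctuations over time lag and converting to distance with the mean solar wind speed (Taylor's hypothesis). The sketch below does this on a synthetic series; the cadence, the imposed correlation, the first-zero-crossing cutoff, and the 400 km/s speed are all hypothetical choices, not the paper's procedure in detail.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical single-component magnetic field series for one subinterval:
# 1-minute cadence, with correlation imposed by a moving average.
dt = 60.0                                           # s
b = rng.normal(0.0, 1.0, 5000)
b = np.convolve(b, np.ones(50) / 50, mode="same")
b -= b.mean()

# Normalized autocorrelation of the fluctuations (lags 0 .. n-1).
n = len(b)
acf = np.correlate(b, b, mode="full")[n - 1:] / (np.arange(n, 0, -1) * b.var())

# Correlation time: integrate the ACF out to its first zero crossing.
first_zero = int(np.argmax(acf <= 0.0))
tau_c = dt * acf[:first_zero].sum()                 # seconds

# Taylor's hypothesis converts time lag to distance using a hypothetical
# 400 km/s mean solar wind speed (4e7 cm/s).
corr_length_cm = tau_c * 4e7
```

Averaging `corr_length_cm` over many weakly stationary subintervals is the ensemble-average step the abstract describes.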

  2. Perceived Average Orientation Reflects Effective Gist of the Surface.

    PubMed

    Cha, Oakyoon; Chong, Sang Chul

    2018-03-01

    The human ability to represent ensemble visual information, such as average orientation and size, has been suggested as the foundation of gist perception. To effectively summarize different groups of objects into the gist of a scene, observers should form ensembles separately for different groups, even when objects have similar visual features across groups. We hypothesized that the visual system utilizes perceptual groups characterized by spatial configuration and represents separate ensembles for different groups. Therefore, participants could not integrate ensembles of different perceptual groups on a task basis. We asked participants to determine the average orientation of visual elements comprising a surface with a contour situated inside. Although participants were asked to estimate the average orientation of all the elements, they ignored orientation signals embedded in the contour. This constraint may help the visual system to keep the visual features of occluding objects separate from those of the occluded objects.

  3. Fluid mechanics experiments in oscillatory flow. Volume 2: Tabulated data

    NASA Technical Reports Server (NTRS)

    Seume, J.; Friedman, G.; Simon, T. W.

    1992-01-01

    Results of a fluid mechanics measurement program in oscillating flow within a circular duct are presented. The program began with a survey of transition behavior over a range of oscillation frequency and magnitude and continued with a detailed study at a single operating point. Such measurements were made in support of Stirling engine development. Values of three dimensionless parameters, Re_max, Re_w, and A_R, embody the velocity amplitude, frequency of oscillation, and mean fluid displacement of the cycle, respectively. Measurements were first made over a range of these parameters that are representative of the heat exchanger tubes in the heater section of NASA's Stirling cycle Space Power Research Engine (SPRE). Measurements were taken of the axial and radial components of ensemble-averaged velocity and rms velocity fluctuation, and of the dominant Reynolds shear stress, at various radial positions for each of four axial stations. In each run, transition from laminar to turbulent flow, and its reverse, were identified, and sufficient data were gathered to propose the transition mechanism. Volume 2 contains data reduction program listings and tabulated data (including its graphics).
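
    The ensemble-averaged velocity and rms fluctuation reported in such oscillatory-flow experiments are typically formed by phase averaging over many cycles. A minimal numpy sketch, assuming a hypothetical signal `u` sampled at a fixed number of points per cycle over an integer number of cycles:

```python
import numpy as np

def phase_average(u, samples_per_cycle):
    """Ensemble (phase) average of a periodic signal: reshape into
    (cycles, phase) and average over cycles. Returns the phase-resolved
    mean velocity and the rms fluctuation about that mean."""
    cycles = u.reshape(-1, samples_per_cycle)
    return cycles.mean(axis=0), cycles.std(axis=0)
```

    For a noise-free periodic signal the phase average reproduces one cycle exactly and the rms fluctuation vanishes.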

  4. Velocity Deficits in the Wake of Model Lemon Shark Dorsal Fins Measured with Particle Image Velocimetry

    NASA Astrophysics Data System (ADS)

    Terry, K. N.; Turner, V.; Hackett, E.

    2017-12-01

    Aquatic animals' morphology provides inspiration for human technological developments, as their bodies have evolved and become adapted for efficient swimming. Lemon sharks exhibit a uniquely large second dorsal fin that is nearly the same size as the first fin, the hydrodynamic role of which is unknown. This experimental study looks at the drag forces on a scale model of the Lemon shark's unique two-fin configuration in comparison to drag forces on a more typical one-fin configuration. The experiments were performed in a recirculating water flume, where the wakes behind the scale models are measured using particle image velocimetry. The experiments are performed at three different flow speeds for both fin configurations. The measured instantaneous 2D distributions of the streamwise and wall-normal velocity components are ensemble averaged to generate streamwise velocity vertical profiles. In addition, velocity deficit profiles are computed from the difference between these mean streamwise velocity profiles and the free stream velocity, which is computed based on measured flow rates during the experiments. Results show that the mean velocities behind the fin and near the fin tip are smallest and increase as the streamwise distance from the fin tip increases. The magnitude of velocity deficits increases with increasing flow speed for both fin configurations, but at all flow speeds, the two-fin configurations generate larger velocity deficits than the one-fin configurations. Because the velocity deficit is directly proportional to the drag force, these results suggest that the two-fin configuration produces more drag.
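
    The velocity-deficit computation described above reduces to a short numpy sketch; `u_frames` (instantaneous PIV streamwise velocity fields) and `u_infty` (the flow-rate-derived free-stream speed) are hypothetical names:

```python
import numpy as np

def velocity_deficit(u_frames, u_infty):
    """Velocity deficit profile from instantaneous PIV fields.
    u_frames has shape (n_frames, n_z, n_x): ensemble-average over
    frames and streamwise positions to get a vertical profile of the
    mean streamwise velocity, then subtract it from the free stream."""
    mean_profile = u_frames.mean(axis=(0, 2))
    return u_infty - mean_profile
```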

  5. Simulation studies of the fidelity of biomolecular structure ensemble recreation

    NASA Astrophysics Data System (ADS)

    Lätzer, Joachim; Eastwood, Michael P.; Wolynes, Peter G.

    2006-12-01

    We examine the ability of Bayesian methods to recreate structural ensembles for partially folded molecules from averaged data. Specifically, we test the ability of various algorithms to recreate different transition state ensembles for folding proteins using a multiple replica simulation algorithm with input from "gold standard" reference ensembles that were first generated with a Gō-like Hamiltonian having nonpairwise additive terms. A set of low resolution data, which function as the "experimental" ϕ values, were first constructed from this reference ensemble. The resulting ϕ values were then treated as one would treat laboratory experimental data and were used as input in the replica reconstruction algorithm. The resulting ensembles of structures obtained by the replica algorithm were compared to the gold standard reference ensemble, from which those "data" were, in fact, obtained. It is found that for a unimodal transition state ensemble with a low barrier, the multiple replica algorithm does recreate the reference ensemble fairly successfully when no experimental error is assumed. The Kolmogorov-Smirnov test as well as principal component analysis show that the overlap of the recovered and reference ensembles is significantly enhanced when multiple replicas are used. Reduction of the multiple replica ensembles by clustering successfully yields subensembles with close similarity to the reference ensembles. On the other hand, for a high barrier transition state with two distinct transition state ensembles, the single replica algorithm only samples a few structures of one of the reference ensemble basins. This is due to the fact that the ϕ values are intrinsically ensemble averaged quantities. The replica algorithm with multiple copies does sample both reference ensemble basins. In contrast to the single replica case, the multiple replicas are constrained to reproduce the average ϕ values, but allow fluctuations in ϕ for each individual copy. 
These fluctuations facilitate a more faithful sampling of the reference ensemble basins. Finally, we test how robustly the reconstruction algorithm can function by introducing errors in ϕ comparable in magnitude to those suggested by some authors. In this circumstance we observe that the chances of ensemble recovery with the replica algorithm are poor using a single replica, but are improved when multiple copies are used. A multimodal transition state ensemble, however, turns out to be more sensitive to large errors in ϕ (if appropriately gauged) and attempts at successful recreation of the reference ensemble with simple replica algorithms can fall short.

  6. Cooperation and Defection at the Crossroads

    PubMed Central

    Abramson, Guillermo; Semeshenko, Viktoriya; Iglesias, José Roberto

    2013-01-01

    We study a simple traffic model with a non-signalized road intersection. In this model the car arriving from the right has precedence. The vehicle dynamics far from the crossing are governed by the rules introduced by Nagel and Paczuski, which define how drivers behave when braking or accelerating. We measure the average velocity of the ensemble of cars and its flow as a function of the density of cars on the roadway. An additional set of rules is defined to describe the dynamics at the intersection assuming a fraction of drivers that do not obey the rule of precedence. This problem is treated within a game-theory framework, where the drivers that obey the rule are cooperators and those who ignore it are defectors. We study the consequences of these behaviors as a function of the fraction of cooperators and defectors. The results show that cooperation is the best strategy because it maximizes the flow of vehicles and minimizes the number of accidents. A rather paradoxical effect is observed: for any percentage of defectors the number of accidents is larger when the density of cars is low because of the higher average velocity. PMID:23610596
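
    A minimal single-lane sketch of how such an ensemble-average velocity is measured, using Nagel-Schreckenberg-style update rules as a simplified stand-in for the Nagel-Paczuski dynamics (no intersection, so the cooperation/defection rules are not modeled here):

```python
import random

def traffic_step(pos, vel, road_len, vmax=5, p_slow=0.3):
    """One update of a circular single-lane road: accelerate toward vmax,
    brake to keep a safe gap to the car ahead, randomly slow down, then
    move. Returns the ensemble-average velocity of the cars."""
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    for k, i in enumerate(order):
        ahead = order[(k + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len
        vel[i] = min(vel[i] + 1, vmax, gap)      # accelerate, keep safe gap
        if vel[i] > 0 and random.random() < p_slow:
            vel[i] -= 1                          # random braking
    for i in range(n):
        pos[i] = (pos[i] + vel[i]) % road_len
    return sum(vel) / n
```

    Sweeping the car density and recording this average velocity traces out the flow-density relation discussed in the abstract.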

  7. Implicit ligand theory for relative binding free energies

    NASA Astrophysics Data System (ADS)

    Nguyen, Trung Hai; Minh, David D. L.

    2018-03-01

    Implicit ligand theory enables noncovalent binding free energies to be calculated based on an exponential average of the binding potential of mean force (BPMF)—the binding free energy between a flexible ligand and rigid receptor—over a precomputed ensemble of receptor configurations. In the original formalism, receptor configurations were drawn from or reweighted to the apo ensemble. Here we show that BPMFs averaged over a holo ensemble yield binding free energies relative to the reference ligand that specifies the ensemble. When using receptor snapshots from an alchemical simulation with a single ligand, the new statistical estimator outperforms the original.
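
    The exponential average at the heart of this formalism can be sketched directly; the snapshot BPMF values and the choice kT ≈ 0.593 kcal/mol (room temperature) are illustrative:

```python
import numpy as np

def exponential_average_free_energy(bpmf, kT=0.593):
    """Free energy from an exponential average over receptor snapshots:
    dG = -kT * ln( mean( exp(-B_i / kT) ) ),
    computed with a max-shift for numerical stability."""
    b = -np.asarray(bpmf, dtype=float) / kT
    m = b.max()
    return -kT * (m + np.log(np.exp(b - m).mean()))
```

    The average is dominated by the most favorable (lowest) BPMF values: the result always lies between min(B) and min(B) + kT ln N.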

  8. Reproducing the Ensemble Average Polar Solvation Energy of a Protein from a Single Structure: Gaussian-Based Smooth Dielectric Function for Macromolecular Modeling.

    PubMed

    Chakravorty, Arghya; Jia, Zhe; Li, Lin; Zhao, Shan; Alexov, Emil

    2018-02-13

    Typically, the ensemble-average polar component of solvation energy (ΔG_solv^polar) of a macromolecule is computed by using molecular dynamics (MD) or Monte Carlo (MC) simulations to generate a conformational ensemble and then performing a single-conformation (rigid) solvation energy calculation on each snapshot. The primary objective of this work is to demonstrate that a Poisson-Boltzmann (PB)-based approach using a Gaussian-based smooth dielectric function for macromolecular modeling, previously developed by us (Li et al. J. Chem. Theory Comput. 2013, 9 (4), 2126-2136), can reproduce that ensemble average (⟨ΔG_solv^polar⟩) of a protein from a single structure. We show that the Gaussian-based dielectric model reproduces ⟨ΔG_solv^polar⟩ from an energy-minimized structure of a protein regardless of the minimization environment (structure minimized in vacuo, in implicit or explicit water, or the crystal structure); the best case, however, is when it is paired with an in vacuo-minimized structure. In the other minimization environments (implicit or explicit water or the crystal structure), the traditional two-dielectric model can still be selected, with which the model produces correct solvation energies. Our observations from this work reflect how the ability to appropriately mimic the motion of residues, especially the salt-bridge residues, influences a dielectric model's ability to reproduce the ensemble-average value of the polar solvation free energy from a single in vacuo-minimized structure.

  9. Field measurement of velocity time series in the center of Sequim Bay

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Samuel F.; Harker-Klimes, Genevra EL

    A 600 kHz RDI Workhorse was installed in the center of Sequim Bay from 15:04 June 23, 2017 to 09:34 August 24, 2017 at a depth of 25.9 m from MLLW. The instrument was configured to record the flow velocity in vertical cells of 1.0 m in 10-minute ensembles. Each ensemble was calculated as the mean of 24 pings, sampled with an interval of 5.0 s. A burst of increased sampling rate (1200 samples at 2 Hz) was recorded to characterize the wave climate on an hourly basis. The peak depth-averaged flow speed for the deployment was recorded during the flood tide on June 24, 2017 with a magnitude of 0.34 m/s. The peak flow speed in a single bin was recorded during the same tide at a location of 11.6 m from the seabed with a magnitude of 0.46 m/s. The velocity direction was observed to be relatively constant as a function of depth for the higher flow velocities (flood tides) but highly variable during times of slower flow (ebb tides). A peak significant wave height of 0.36 m was recorded on June 30, 2017 at 18:54. The measured waves showed no indication of a prevalent wave direction during this deployment. The wave record of the fetch-limited site during this deployment approaches the lower limit of the wave measurement resolution. The water temperature fluctuated over a range of 1.7°C during the deployment duration. The mean pitch of the instrument was -1.2° and the mean roll angle of the instrument was 0.3°. The low pitch and roll angles are important factors in the accurate measurement of the wave activity at the surface.
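
    A depth-averaged flow speed of the kind quoted above is obtained by averaging the per-bin horizontal speed across the water column; a small numpy sketch with hypothetical array names (`east`, `north` hold the velocity components per ensemble and depth bin):

```python
import numpy as np

def depth_averaged_speed(east, north):
    """Depth-averaged horizontal speed for each ensemble.
    east, north: arrays of shape (n_ensembles, n_bins) in m/s;
    NaN marks bins without a valid return."""
    return np.nanmean(np.hypot(east, north), axis=1)
```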

  10. Ergodicity Breaking in Geometric Brownian Motion

    NASA Astrophysics Data System (ADS)

    Peters, O.; Klein, W.

    2013-03-01

    Geometric Brownian motion (GBM) is a model for systems as varied as financial instruments and populations. The statistical properties of GBM are complicated by nonergodicity, which can lead to ensemble averages exhibiting exponential growth while any individual trajectory collapses according to its time average. A common tactic for bringing time averages closer to ensemble averages is diversification. In this Letter, we study the effects of diversification using the concept of ergodicity breaking.
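
    The ensemble/time-average split is easy to reproduce numerically: for GBM the ensemble average grows at rate mu, while the time-average (typical-trajectory) growth rate is mu - sigma^2/2, which is negative for the illustrative parameters below.

```python
import numpy as np

def gbm_paths(mu, sigma, n_paths, n_steps, dt, seed=0):
    """Simulate geometric Brownian motion with x(0) = 1:
    x(t+dt) = x(t) * exp((mu - sigma^2/2) dt + sigma dW)."""
    rng = np.random.default_rng(seed)
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    log_x = np.cumsum((mu - 0.5 * sigma**2) * dt + sigma * dW, axis=1)
    return np.exp(log_x)

T = 10.0
x = gbm_paths(mu=0.05, sigma=0.4, n_paths=20000, n_steps=100, dt=0.1)
ensemble_rate = np.log(x[:, -1].mean()) / T   # ~ mu: exponential growth
typical_rate = np.log(x[:, -1]).mean() / T    # ~ mu - sigma^2/2: decay
```

    With mu = 0.05 and sigma = 0.4 the ensemble average grows while the typical trajectory decays, which is exactly the ergodicity breaking the Letter discusses.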

  11. Prediction of dosage-based parameters from the puff dispersion of airborne materials in urban environments using the CFD-RANS methodology

    NASA Astrophysics Data System (ADS)

    Efthimiou, G. C.; Andronopoulos, S.; Bartzis, J. G.

    2018-02-01

    One of the key issues of recent research on dispersion inside complex urban environments is the ability to predict dosage-based parameters from the puff release of an airborne material from a point source in the atmospheric boundary layer inside the built-up area. The present work addresses the question of whether the computational fluid dynamics (CFD)-Reynolds-averaged Navier-Stokes (RANS) methodology can be used to predict ensemble-average dosage-based parameters related to puff dispersion. RANS simulations with the ADREA-HF code were, therefore, performed, with a single puff released in each case. The present method is validated against data sets from two wind-tunnel experiments. In each experiment, more than 200 puffs were released, from which ensemble-averaged dosage-based parameters were calculated and compared to the model's predictions. The performance of the model was evaluated using scatter plots and three validation metrics: fractional bias, normalized mean square error, and factor of two. The model presented better performance for the temporal parameters (i.e., ensemble-average times of puff arrival, peak, leaving, duration, ascent, and descent) than for the ensemble-average dosage and peak concentration. The majority of the obtained values of the validation metrics were inside established acceptance limits. Based on the obtained model performance indices, the CFD-RANS methodology as implemented in the code ADREA-HF is able to predict the ensemble-average temporal quantities related to transient emissions of airborne material in urban areas within the range of the model performance acceptance criteria established in the literature. The CFD-RANS methodology as implemented in ADREA-HF is also able to predict the ensemble-average dosage, although the dosage results should be treated with some caution, as in one case the observed ensemble-average dosage was underestimated slightly beyond the acceptance limits.
    Ensemble-average peak concentration was systematically underpredicted by the model, to a degree beyond that allowed by the acceptance criteria, in one of the two wind-tunnel experiments. The model performance depended on the positions of the examined sensors relative to the emission source and the building configuration. The work presented in this paper was carried out (partly) within the scope of COST Action ES1006 "Evaluation, improvement, and guidance for the use of local-scale emergency prediction and response tools for airborne hazards in built environments".
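
    The three validation metrics named above have standard definitions; a numpy sketch (`obs` and `pred` would hold paired ensemble-averaged observed and predicted quantities, assumed positive):

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = 2(<obs> - <pred>) / (<obs> + <pred>); 0 is perfect."""
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def nmse(obs, pred):
    """NMSE = <(obs - pred)^2> / (<obs> <pred>); 0 is perfect."""
    return ((obs - pred) ** 2).mean() / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    ratio = pred / obs
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))
```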

  12. Reprint of "Investigating ensemble perception of emotions in autistic and typical children and adolescents".

    PubMed

    Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

    2018-01-01

    Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.

  13. Ensemble perception of emotions in autistic and typical children and adolescents.

    PubMed

    Karaminis, Themelis; Neil, Louise; Manning, Catherine; Turi, Marco; Fiorentini, Chiara; Burr, David; Pellicano, Elizabeth

    2017-04-01

    Ensemble perception, the ability to assess automatically the summary of large amounts of information presented in visual scenes, is available early in typical development. This ability might be compromised in autistic children, who are thought to present limitations in maintaining summary statistics representations for the recent history of sensory input. Here we examined ensemble perception of facial emotional expressions in 35 autistic children, 30 age- and ability-matched typical children and 25 typical adults. Participants received three tasks: a) an 'ensemble' emotion discrimination task; b) a baseline (single-face) emotion discrimination task; and c) a facial expression identification task. Children performed worse than adults on all three tasks. Unexpectedly, autistic and typical children were, on average, indistinguishable in their precision and accuracy on all three tasks. Computational modelling suggested that, on average, autistic and typical children used ensemble-encoding strategies to a similar extent; but ensemble perception was related to non-verbal reasoning abilities in autistic but not in typical children. Eye-movement data also showed no group differences in the way children attended to the stimuli. Our combined findings suggest that the abilities of autistic and typical children for ensemble perception of emotions are comparable on average. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  14. Supermodeling With A Global Atmospheric Model

    NASA Astrophysics Data System (ADS)

    Wiegerinck, Wim; Burgers, Willem; Selten, Frank

    2013-04-01

    In weather and climate prediction studies it often turns out that the multi-model ensemble-mean prediction has the best prediction skill scores. One possible explanation is that the major part of the model error is random and is averaged out in the ensemble mean. In the standard multi-model ensemble approach, the models are integrated in time independently and the predicted states are combined a posteriori. Recently an alternative ensemble prediction approach has been proposed in which the models exchange information during the simulation and synchronize on a common solution that is closer to the truth than any of the individual model solutions in the standard multi-model ensemble approach, or a weighted average of these. This approach is called the supermodeling approach (SUMO). The potential of the SUMO approach has been demonstrated in the context of simple, low-order, chaotic dynamical systems. The information exchange takes the form of linear nudging terms in the dynamical equations that nudge the solution of each model toward the solutions of all other models in the ensemble. With a suitable choice of the connection strengths, the models synchronize on a common solution that is indeed closer to the true system than any of the individual model solutions without nudging. This approach is called connected SUMO. An alternative approach is to integrate a weighted-average model (weighted SUMO): at each time step all models in the ensemble calculate their tendencies, these tendencies are averaged with weights, and the state is integrated one time step into the future with this weighted-average tendency. It was shown that when the connected SUMO synchronizes perfectly, it follows the weighted-average trajectory and both approaches yield the same solution. 
In this study we pioneer both approaches in the context of a global, quasi-geostrophic, three-level atmosphere model that is capable of simulating quite realistically the extra-tropical circulation in the Northern Hemisphere winter.
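
    The weighted-SUMO update (advance a single state with the weighted average of all member models' tendencies) can be sketched generically; the forward-Euler step and the `tendency_fns` interface are illustrative simplifications:

```python
def weighted_sumo_step(state, tendency_fns, weights, dt):
    """One step of a weighted supermodel: each member model supplies a
    tendency f_i(state); the shared state is integrated forward with
    the weighted average of those tendencies."""
    avg_tendency = sum(w * f(state) for w, f in zip(weights, tendency_fns))
    return state + dt * avg_tendency
```

    With two damped linear "models" f1(x) = -x and f2(x) = -3x and equal weights, the supermodel behaves like dx/dt = -2x.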

  15. Estimation of Uncertainties in the Global Distance Test (GDT_TS) for CASP Models.

    PubMed

    Li, Wenlin; Schaeffer, R Dustin; Otwinowski, Zbyszek; Grishin, Nick V

    2016-01-01

    The Critical Assessment of techniques for protein Structure Prediction (or CASP) is a community-wide blind test experiment to reveal the best accomplishments of structure modeling. Assessors have been using the Global Distance Test (GDT_TS) measure to quantify prediction performance since CASP3 in 1998. However, identifying significant score differences between close models is difficult because of the lack of uncertainty estimations for this measure. Here, we utilized the atomic fluctuations caused by structure flexibility to estimate the uncertainty of GDT_TS scores. Structures determined by nuclear magnetic resonance are deposited as ensembles of alternative conformers that reflect the structural flexibility, whereas standard X-ray refinement produces the static structure averaged over time and space for the dynamic ensembles. To recapitulate the structurally heterogeneous ensemble in the crystal lattice, we performed time-averaged refinement for X-ray datasets to generate structural ensembles for our GDT_TS uncertainty analysis. Using those generated ensembles, our study demonstrates that the time-averaged refinements produced structure ensembles in better agreement with the experimental datasets than the averaged X-ray structures with B-factors. The uncertainty of the GDT_TS scores, quantified by their standard deviations (SDs), increases for scores lower than 50 and 70, with maximum SDs of 0.3 and 1.23 for X-ray and NMR structures, respectively. We also applied our procedure to the high-accuracy version of the GDT-based score and produced similar results with slightly higher SDs. To facilitate score comparisons by the community, we developed a user-friendly web server that produces structure ensembles for NMR and X-ray structures and is accessible at http://prodata.swmed.edu/SEnCS. Our work helps to identify the significance of GDT_TS score differences, as well as to provide structure ensembles for estimating SDs of any scores.

  16. Creating "Intelligent" Climate Model Ensemble Averages Using a Process-Based Framework

    NASA Astrophysics Data System (ADS)

    Baker, N. C.; Taylor, P. C.

    2014-12-01

    The CMIP5 archive contains future climate projections from over 50 models provided by dozens of modeling centers from around the world. Individual model projections, however, are subject to biases created by structural model uncertainties. As a result, ensemble averaging of multiple models is often used to add value to model projections: consensus projections have been shown to consistently outperform individual models. Previous reports for the IPCC establish climate change projections based on an equal-weighted average of all model projections. However, certain models reproduce climate processes better than other models. Should models be weighted based on performance? Unequal ensemble averages have previously been constructed using a variety of mean state metrics. What metrics are most relevant for constraining future climate projections? This project develops a framework for systematically testing metrics in models to identify optimal metrics for unequally weighting multi-model ensembles. A unique aspect of this project is the construction and testing of climate process-based model evaluation metrics. A climate process-based metric is defined as a metric based on the relationship between two physically related climate variables, e.g., outgoing longwave radiation and surface temperature. Metrics are constructed using high-quality Earth radiation budget data from NASA's Clouds and Earth's Radiant Energy System (CERES) instrument and surface temperature data sets. It is found that regional values of tested quantities can vary significantly when comparing weighted and unweighted model ensembles. For example, one tested metric weights the ensemble by how well models reproduce the time-series probability distribution of the cloud forcing component of reflected shortwave radiation. 
The weighted ensemble for this metric indicates lower simulated precipitation (up to 0.7 mm/day) in tropical regions than the unweighted ensemble: since CMIP5 models have been shown to overproduce precipitation, this result could indicate that the metric is effective in identifying models that simulate more realistic precipitation. Ultimately, the goal of the framework is to identify performance metrics for advising better methods for ensemble averaging models and to create better climate predictions.
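
    A performance-weighted ensemble mean of the kind tested in this framework reduces to a normalized weighted average over the model axis; a minimal numpy sketch in which the skill scores are hypothetical (higher taken to mean better):

```python
import numpy as np

def weighted_ensemble_mean(projections, skill_scores):
    """Weight model projections by normalized skill scores.
    projections: array of shape (n_models, ...); returns the weighted
    mean field with the leading model axis contracted away."""
    w = np.asarray(skill_scores, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, projections, axes=(0, 0))
```

    Equal scores recover the standard unweighted multi-model mean.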

  17. Ensemble coding remains accurate under object and spatial visual working memory load.

    PubMed

    Epstein, Michael L; Emmanouil, Tatiana A

    2017-10-01

    A number of studies have provided evidence that the visual system statistically summarizes large amounts of information that would exceed the limitations of attention and working memory (ensemble coding). However the necessity of working memory resources for ensemble coding has not yet been tested directly. In the current study, we used a dual task design to test the effect of object and spatial visual working memory load on size averaging accuracy. In Experiment 1, we tested participants' accuracy in comparing the mean size of two sets under various levels of object visual working memory load. Although the accuracy of average size judgments depended on the difference in mean size between the two sets, we found no effect of working memory load. In Experiment 2, we tested the same average size judgment while participants were under spatial visual working memory load, again finding no effect of load on averaging accuracy. Overall our results reveal that ensemble coding can proceed unimpeded and highly accurately under both object and spatial visual working memory load, providing further evidence that ensemble coding reflects a basic perceptual process distinct from that of individual object processing.

  18. Estimates of peak flood discharge for 21 sites in the Front Range in Colorado in response to extreme rainfall in September 2013

    USGS Publications Warehouse

    Moody, John A.

    2016-03-21

    Extreme rainfall in September 2013 caused destructive floods in part of the Front Range in Boulder County, Colorado. Erosion from these floods cut roads and isolated mountain communities for several weeks, and large volumes of eroded sediment were deposited downstream, which caused further damage to property and infrastructure. Estimates of peak discharge for these floods and the associated rainfall characteristics will aid land and emergency managers in the future. Several methods (an ensemble) were used to estimate peak discharge at 21 measurement sites, and the ensemble average and standard deviation provided a final estimate of peak discharge and its uncertainty. Because of the substantial erosion and deposition of sediment, an additional estimate of peak discharge was made based on the flow resistance caused by sediment transport effects. Although the synoptic-scale rainfall was extreme (recurrence interval greater than 1,000 years, about 450 millimeters in 7 days) for these mountains, the resulting peak discharges were not. Ensemble average peak discharges per unit drainage area (unit peak discharge, Qu) for the floods were 1–2 orders of magnitude less than those for the maximum worldwide floods with similar drainage areas and had a wide range of values (0.21–16.2 cubic meters per second per square kilometer [m^3 s^-1 km^-2]). One possible explanation for these differences was that the band of high-accumulation, high-intensity rainfall was narrow (about 50 kilometers wide), oriented nearly perpendicular to the predominant drainage pattern of the mountains, and therefore entire drainage areas were not subjected to the same range of extreme rainfall. 
A linear relation (coefficient of determination R^2 = 0.69) between Qu and the rainfall intensity ITc (computed for a time interval equal to the time of concentration for the drainage area upstream from each site) had the form Qu = 0.26(ITc - 8.6), where the coefficient 0.26 can be considered an area-averaged peak runoff coefficient for the September 2013 rain storms in Boulder County, and 8.6 millimeters per hour is the rainfall intensity corresponding to a soil moisture threshold that controls the soil infiltration rate. Peak discharge estimates based on the sediment transport effects were generally less than the ensemble average and indicated that sediment transport may be a mechanism that limits velocities in these types of mountain streams, such that the Froude number fluctuates about 1, suggesting that this type of flood flow can be approximated as critical flow.
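
    For illustration, the reported fit can be wrapped in a small helper; clipping the result at zero below the threshold intensity is our assumption, not part of the published relation:

```python
def unit_peak_discharge(i_tc, c=0.26, i0=8.6):
    """Qu = C (I_Tc - I0): Qu in m^3 s^-1 km^-2, I_Tc the rainfall
    intensity (mm/h) over the time of concentration, I0 the
    infiltration-threshold intensity; clipped at zero (assumption)."""
    return max(0.0, c * (i_tc - i0))
```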

  19. Influences of Atmospheric Stability State on Wind Turbine Aerodynamic Loadings

    NASA Astrophysics Data System (ADS)

    Vijayakumar, Ganesh; Lavely, Adam; Brasseur, James; Paterson, Eric; Kinzel, Michael

    2011-11-01

    Wind turbine power and loadings are influenced by the structure of atmospheric turbulence and thus by the stability state of the atmosphere. Statistical differences in loadings with atmospheric stability could impact controls, blade design, etc. Large-eddy simulations (LES) of the neutral and moderately convective atmospheric boundary layers (NBL, MCBL) are used as inflow to the NREL FAST advanced blade-element momentum theory code to predict wind turbine rotor power, sectional lift and drag, blade bending moments, and shaft torque. Using horizontal homogeneity, we combine time and ensemble averages to obtain converged statistics equivalent to "infinite" time averages over a single turbine. The MCBL required longer effective time periods to obtain converged statistics than the NBL. Variances and correlation coefficients among wind velocities, turbine power, and blade loadings were higher in the MCBL than in the NBL. We conclude that the stability state of the ABL strongly influences wind turbine performance. Supported by NSF and DOE.

  20. Thermal Aging of Oceanic Asthenosphere

    NASA Astrophysics Data System (ADS)

    Paulson, E.; Jordan, T. H.

    2013-12-01

    To investigate the depth extent of mantle thermal aging beneath ocean basins, we project 3D Voigt-averaged S-velocity variations from an ensemble of global tomographic models onto a 1x1 degree age-based regionalization and average over bins delineated by equal increments in the square-root of crustal age. From comparisons among the bin-averaged S-wave profiles, we estimate age-dependent convergence depths (minimum depths where the age variations become statistically insignificant) as well as S travel times from these depths to a shallow reference surface. Using recently published techniques (Jordan & Paulson, JGR, doi:10.1002/jgrb.50263, 2013), we account for the aleatory variability in the bin-averaged S-wave profiles using the angular correlation functions of the individual tomographic models, we correct the convergence depths for vertical-smearing bias using their radial correlation functions, and we account for epistemic uncertainties through Bayesian averaging over the tomographic model ensemble. From this probabilistic analysis, we can assert with 90% confidence that the age-correlated variations in Voigt-averaged S velocities persist to depths greater than 170 km; i.e., more than 100 km below the mean depth of the G discontinuity (~70 km). Moreover, the S travel time above the convergence depth decays almost linearly with the square-root of crustal age out to 200 Ma, consistent with a half-space cooling model. Given the strong evidence that the G discontinuity approximates the lithosphere-asthenosphere boundary (LAB) beneath ocean basins, we conclude that the upper (and probably weakest) part of the oceanic asthenosphere, like the oceanic lithosphere, participates in the cooling that forms the kinematic plates, or tectosphere. In other words, the thermal boundary layer of a mature oceanic plate appears to be more than twice the thickness of its mechanical boundary layer. 
We do not discount the possibility that small-scale convection creates heterogeneities in the oceanic upper mantle; however, the large-scale flow evidently advects these small-scale heterogeneities along with the plates, allowing the upper part of the asthenosphere to continue cooling with lithospheric age. The dominance of this large-scale horizontal flow may be related to the high stresses associated with its channelization in a thin (~100 km) asthenosphere, as well as the possible focusing of the subtectospheric strain in a low-viscosity channel immediately above the 410-km discontinuity. These speculations aside, the observed thermal aging of oceanic asthenosphere is inconsistent with a tenet of plate tectonics, the LAB hypothesis, which states that lithospheric plates are decoupled from deeper mantle flow by a shear zone in the upper part of the asthenosphere.

  1. Collective effects in force generation by multiple cytoskeletal filaments pushing an obstacle

    NASA Astrophysics Data System (ADS)

    Aparna, J. S.; Das, Dipjyoti; Padinhateeri, Ranjith; Das, Dibyendu

    2015-09-01

    We report here recent findings that multiple cytoskeletal filaments (assumed rigid) pushing an obstacle typically generate more force than the sum of the forces due to the individual filaments. This interesting phenomenon, due to the hydrolysis process being out of equilibrium, escaped attention in the previous experimental and theoretical literature. We first demonstrate this numerically within a constant force ensemble, for a well-known model of cytoskeletal filament dynamics with a random mechanism of hydrolysis. Two methods of detecting the departure from additivity of the collective stall force, namely from the force-velocity curve in the growing phase and from the average collapse time versus force curve in the bounded phase, are discussed. Since experiments have already been done for a similar system of multiple microtubules in a harmonic optical trap, we study the problem theoretically under harmonic force. We show that within the varying harmonic force ensemble too, the mean collective stall force of N filaments is greater than N times the mean stall force of a single filament; the actual extent of the departure is a function of the monomer concentration.

  2. Equilibrium energy spectrum of point vortex motion with remarks on ensemble choice and ergodicity

    NASA Astrophysics Data System (ADS)

    Esler, J. G.

    2017-01-01

    The dynamics and statistical mechanics of N chaotically evolving point vortices in the doubly periodic domain are revisited. The selection of the correct microcanonical ensemble for the system is first investigated. The numerical results of Weiss and McWilliams [Phys. Fluids A 3, 835 (1991), 10.1063/1.858014], who argued that the point vortex system with N =6 is nonergodic because of an apparent discrepancy between ensemble averages and dynamical time averages, are shown to be due to an incorrect ensemble definition. When the correct microcanonical ensemble is sampled, accounting for the vortex momentum constraint, time averages obtained from direct numerical simulation agree with ensemble averages within the sampling error of each calculation, i.e., there is no numerical evidence for nonergodicity. Further, in the N →∞ limit it is shown that the vortex momentum no longer constrains the long-time dynamics and therefore that the correct microcanonical ensemble for statistical mechanics is that associated with the entire constant energy hypersurface in phase space. Next, a recently developed technique is used to generate an explicit formula for the density of states function for the system, including for arbitrary distributions of vortex circulations. Exact formulas for the equilibrium energy spectrum, and for the probability density function of the energy in each Fourier mode, are then obtained. Results are compared with a series of direct numerical simulations with N =50 and excellent agreement is found, confirming the relevance of the results for interpretation of quantum and classical two-dimensional turbulence.

  3. Ensemble coding of face identity is present but weaker in congenital prosopagnosia.

    PubMed

    Robson, Matthew K; Palermo, Romina; Jeffery, Linda; Neumann, Markus F

    2018-03-01

    Individuals with congenital prosopagnosia (CP) are impaired at identifying individual faces but do not appear to show impairments in extracting the average identity from a group of faces (known as ensemble coding). However, possible deficits in ensemble coding in a previous study (CPs n = 4) may have been masked because CPs relied on pictorial (image) cues rather than identity cues. Here we asked whether a larger sample of CPs (n = 11) would show intact ensemble coding of identity when the availability of image cues was minimised. Participants viewed a "set" of four faces and then judged whether a subsequent individual test face, either an exemplar or a "set average", was in the preceding set. Ensemble coding occurred when matching (vs. mismatching) averages were mistakenly endorsed as set members. We assessed both image- and identity-based ensemble coding by varying whether test faces were the same or different images of the identities in the set. CPs showed significant ensemble coding in both tasks, indicating that their performance was independent of image cues. As a group, CPs' ensemble coding was weaker than that of controls in both tasks, consistent with evidence that perceptual processing of face identity is disrupted in CP. This effect was driven by CPs (n = 3) who, in addition to having impaired face memory, also performed particularly poorly on a measure of face perception (CFPT). Future research, using larger samples, should examine whether deficits in ensemble coding may be restricted to CPs who also have substantial face perception deficits. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. Measurement of Flow Pattern Within a Rotating Stall Cell in an Axial Compressor

    NASA Technical Reports Server (NTRS)

    Lepicovsky, Jan; Braunscheidel, Edward P.

    2006-01-01

    Effective active control of rotating stall in axial compressors requires detailed understanding of the flow instabilities associated with this compressor regime. Newly designed miniature high-frequency-response total and static pressure probes as well as commercial thermoanemometric probes are suitable tools for this task. However, during the rotating stall cycle the probes are subjected to flow direction changes that are far larger than the range of probe incidence acceptance, and therefore probe data without a proper correction would misrepresent unsteady variations of flow parameters. A methodology, based on ensemble averaging, is proposed to circumvent this problem. In this approach the ensemble-averaged signals acquired for various probe setting angles are segmented, and only the sections for probe setting angles close to the actual flow angle are used for signal recombination. The methodology was verified by excellent agreement between velocity distributions obtained from pressure probe data and data measured with thermoanemometric probes. Vector plots of unsteady flow behavior during the rotating stall regime indicate reversed flow within the rotating stall cell that spreads over to adjacent rotor blade channels. Results of this study confirmed that the NASA Low Speed Axial Compressor (LSAC), while in a rotating stall regime at rotor design speed, exhibits one stall cell that rotates at a speed equal to 50.6 percent of the rotor shaft speed.

  5. Bayesian ensemble refinement by replica simulations and reweighting.

    PubMed

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-28

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  6. Bayesian ensemble refinement by replica simulations and reweighting

    NASA Astrophysics Data System (ADS)

    Hummer, Gerhard; Köfinger, Jürgen

    2015-12-01

    We describe different Bayesian ensemble refinement methods, examine their interrelation, and discuss their practical application. With ensemble refinement, the properties of dynamic and partially disordered (bio)molecular structures can be characterized by integrating a wide range of experimental data, including measurements of ensemble-averaged observables. We start from a Bayesian formulation in which the posterior is a functional that ranks different configuration space distributions. By maximizing this posterior, we derive an optimal Bayesian ensemble distribution. For discrete configurations, this optimal distribution is identical to that obtained by the maximum entropy "ensemble refinement of SAXS" (EROS) formulation. Bayesian replica ensemble refinement enhances the sampling of relevant configurations by imposing restraints on averages of observables in coupled replica molecular dynamics simulations. We show that the strength of the restraints should scale linearly with the number of replicas to ensure convergence to the optimal Bayesian result in the limit of infinitely many replicas. In the "Bayesian inference of ensembles" method, we combine the replica and EROS approaches to accelerate the convergence. An adaptive algorithm can be used to sample directly from the optimal ensemble, without replicas. We discuss the incorporation of single-molecule measurements and dynamic observables such as relaxation parameters. The theoretical analysis of different Bayesian ensemble refinement approaches provides a basis for practical applications and a starting point for further investigations.

  7. Decadal climate prediction in the large ensemble limit

    NASA Astrophysics Data System (ADS)

    Yeager, S. G.; Rosenbloom, N. A.; Strand, G.; Lindsay, K. T.; Danabasoglu, G.; Karspeck, A. R.; Bates, S. C.; Meehl, G. A.

    2017-12-01

    In order to quantify the benefits of initialization for climate prediction on decadal timescales, two parallel sets of historical simulations are required: one "initialized" ensemble that incorporates observations of past climate states and one "uninitialized" ensemble whose internal climate variations evolve freely and without synchronicity. In the large ensemble limit, ensemble averaging isolates potentially predictable forced and internal variance components in the "initialized" set, but only the forced variance remains after averaging the "uninitialized" set. The ensemble size needed to achieve this variance decomposition, and to robustly distinguish initialized from uninitialized decadal predictions, remains poorly constrained. We examine a large ensemble (LE) of initialized decadal prediction (DP) experiments carried out using the Community Earth System Model (CESM). This 40-member CESM-DP-LE set of experiments represents the "initialized" complement to the CESM large ensemble of 20th century runs (CESM-LE) documented in Kay et al. (2015). Both simulation sets share the same model configuration, historical radiative forcings, and large ensemble sizes. The twin experiments afford an unprecedented opportunity to explore the sensitivity of DP skill assessment, and in particular the skill enhancement associated with initialization, to ensemble size. This talk will highlight the benefits of a large ensemble size for initialized predictions of seasonal climate over land in the Atlantic sector as well as predictions of shifts in the likelihood of climate extremes that have large societal impact.

  8. Relativistic Light Sails

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kipping, David, E-mail: dkipping@astro.columbia.edu

    One proposed method for spacecraft to reach nearby stars is by accelerating sails using either solar radiation pressure or directed energy. This idea constitutes the thesis behind the Breakthrough Starshot project, which aims to accelerate a gram-mass spacecraft up to one-fifth the speed of light toward Proxima Centauri. For such a case, the combination of the sail’s low mass and relativistic velocity renders previous treatments incorrect at the 10% level, including that of Einstein himself in his seminal 1905 paper introducing special relativity. To address this, we present formulae for a sail’s acceleration, first in response to a single photon and then extended to an ensemble. We show how the sail’s motion in response to an ensemble of incident photons is equivalent to that of a single photon of energy equal to that of the ensemble. We use this principle of ensemble equivalence for both perfect and imperfect mirrors, enabling a simple analytic prediction of the sail’s velocity curve. Using our results and adopting putative parameters for Starshot, we estimate that previous relativistic treatments underestimate the spacecraft’s terminal velocity by ∼10% for the same incident energy. Additionally, we use a simple model to predict the sail’s temperature and diffraction beam losses during the laser firing period; this allows us to estimate that, for firing times of a few minutes and operating temperatures below 300°C (573 K), Starshot will require a sail that absorbs less than one in 260,000 photons.

  9. Relativistic Light Sails

    NASA Astrophysics Data System (ADS)

    Kipping, David

    2017-06-01

    One proposed method for spacecraft to reach nearby stars is by accelerating sails using either solar radiation pressure or directed energy. This idea constitutes the thesis behind the Breakthrough Starshot project, which aims to accelerate a gram-mass spacecraft up to one-fifth the speed of light toward Proxima Centauri. For such a case, the combination of the sail’s low mass and relativistic velocity renders previous treatments incorrect at the 10% level, including that of Einstein himself in his seminal 1905 paper introducing special relativity. To address this, we present formulae for a sail’s acceleration, first in response to a single photon and then extended to an ensemble. We show how the sail’s motion in response to an ensemble of incident photons is equivalent to that of a single photon of energy equal to that of the ensemble. We use this principle of ensemble equivalence for both perfect and imperfect mirrors, enabling a simple analytic prediction of the sail’s velocity curve. Using our results and adopting putative parameters for Starshot, we estimate that previous relativistic treatments underestimate the spacecraft’s terminal velocity by ∼10% for the same incident energy. Additionally, we use a simple model to predict the sail’s temperature and diffraction beam losses during the laser firing period; this allows us to estimate that, for firing times of a few minutes and operating temperatures below 300°C (573 K), Starshot will require a sail that absorbs less than one in 260,000 photons.

  10. Interpolation of property-values between electron numbers is inconsistent with ensemble averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miranda-Quintana, Ramón Alain; Department of Chemistry and Chemical Biology, McMaster University, Hamilton, Ontario L8S 4M1; Ayers, Paul W.

    2016-06-28

    In this work we explore the physical foundations of models that study the variation of the ground state energy with respect to the number of electrons (E vs. N models), in terms of general grand-canonical (GC) ensemble formulations. In particular, we focus on E vs. N models that interpolate the energy between states with integer number of electrons. We show that if the interpolation of the energy corresponds to a GC ensemble, it is not differentiable. Conversely, if the interpolation is smooth, then it cannot be formulated as any GC ensemble. This proves that interpolation of electronic properties between integer electron numbers is inconsistent with any form of ensemble averaging. This emphasizes the role of derivative discontinuities and the critical role of a subsystem’s surroundings in determining its properties.

  11. The random coding bound is tight for the average code.

    NASA Technical Reports Server (NTRS)

    Gallager, R. G.

    1973-01-01

    The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upper-bounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second, lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upper-bounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.

  12. Hydrostratigraphy characterization of the Floridan aquifer system using ambient seismic noise

    NASA Astrophysics Data System (ADS)

    James, Stephanie R.; Screaton, Elizabeth J.; Russo, Raymond M.; Panning, Mark P.; Bremner, Paul M.; Stanciu, A. Christian; Torpey, Megan E.; Hongsresawat, Sutatcha; Farrell, Matthew E.

    2017-05-01

    We investigated a new technique for aquifer characterization that uses cross-correlation of ambient seismic noise to determine the seismic velocity structure of the Floridan aquifer system (FAS). Accurate characterization of aquifer systems is vital to hydrogeological research and groundwater management but is difficult due to limited subsurface data and heterogeneity. Previous research on the carbonate FAS found that confining units and high-permeability flow zones have distinct seismic velocities. We deployed an array of 9 short-period seismometers from 11/2013 to 3/2014 in Indian Lake State Forest near Ocala, Florida, to image the hydrostratigraphy of the aquifer system using ambient seismic noise. We find that interstation distance strongly influences the upper and lower frequency limits of the data set. Paths between 1.5 and 7 wavelengths in length were optimal for reliable group velocity measurements, so both an upper and a lower wavelength threshold were used. A minimum of 100-250 hr of signal was needed to maximize the signal-to-noise ratio and to allow cross-correlation convergence. We averaged measurements of group velocity between station pairs at each frequency band to create a network average dispersion curve. A family of 1-D shear-wave velocity profiles that best represents the network average dispersion was then generated using a Markov chain Monte Carlo (MCMC) algorithm. The MCMC algorithm was implemented either with a fixed number of layers, or as transdimensional, in which the number of layers was a free parameter. Results from both algorithms require a prominent velocity increase at ∼200 m depth. A shallower velocity increase at ∼60 m depth was also observed, but only in model ensembles created by collecting models with the lowest overall misfit to the observed data.
A final round of modelling with additional prior constraints based on initial results and well logs produced a mean shear-wave velocity profile taken as the preferred solution for the study site. The velocity increases at ∼200 and ∼60 m depth are consistent with the top surfaces of two semi-confining units of the study area and the depths of high-resistivity dolomite units seen in geophysical logs and cores from the study site. Our results suggest that correlation of ambient seismic noise holds promise for hydrogeological investigations. However, complexities in the cross-correlations at high frequencies and short traveltimes at low frequencies added uncertainty to the data set.
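
    The path-selection and network-averaging step described above can be sketched as follows. The array layout, the function signature, and the use of a NaN-mean over rejected paths are our own assumptions for illustration, not details from the study.

```python
import numpy as np

def network_average_dispersion(group_vels, distances, freqs,
                               min_wavelengths=1.5, max_wavelengths=7.0):
    """Average inter-station group-velocity measurements into a single
    network dispersion curve, keeping only paths whose length lies
    between 1.5 and 7 wavelengths at each frequency.

    group_vels : (n_pairs, n_freqs) group velocities [km/s]
    distances  : (n_pairs,) inter-station distances [km]
    freqs      : (n_freqs,) frequencies [Hz]
    """
    group_vels = np.asarray(group_vels, float)
    distances = np.asarray(distances, float)[:, None]
    freqs = np.asarray(freqs, float)[None, :]
    wavelength = group_vels / freqs          # per-path, per-frequency [km]
    n_wl = distances / wavelength            # path length in wavelengths
    keep = (n_wl >= min_wavelengths) & (n_wl <= max_wavelengths)
    masked = np.where(keep, group_vels, np.nan)
    return np.nanmean(masked, axis=0)        # one average per frequency
```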

  13. On the Likely Utility of Hybrid Weights Optimized for Variances in Hybrid Error Covariance Models

    NASA Astrophysics Data System (ADS)

    Satterfield, E.; Hodyss, D.; Kuhl, D.; Bishop, C. H.

    2017-12-01

    Because of imperfections in ensemble data assimilation schemes, one cannot assume that the ensemble covariance is equal to the true error covariance of a forecast. Previous work demonstrated how information about the distribution of true error variances given an ensemble sample variance can be revealed from an archive of (observation-minus-forecast, ensemble-variance) data pairs. Here, we derive a simple and intuitively compelling formula to obtain the mean of this distribution of true error variances given an ensemble sample variance from (observation-minus-forecast, ensemble-variance) data pairs produced by a single run of a data assimilation system. This formula takes the form of a Hybrid weighted average of the climatological forecast error variance and the ensemble sample variance. Here, we test the extent to which these readily obtainable weights can be used to rapidly optimize the covariance weights used in Hybrid data assimilation systems that employ weighted averages of static covariance models and flow-dependent ensemble based covariance models. Univariate data assimilation and multi-variate cycling ensemble data assimilation are considered. In both cases, it is found that our computationally efficient formula gives Hybrid weights that closely approximate the optimal weights found through the simple but computationally expensive process of testing every plausible combination of weights.
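
    The hybrid weighted average described above can be sketched as follows. The abstract does not give the formula explicitly, so the moment-matching fit of the static weight below is our own simplification, made under the further assumption that observation error is negligible; only the weighted-average form itself comes from the record.

```python
import numpy as np

def hybrid_variance_estimate(ens_var, clim_var, alpha):
    """Mean of the true error variance given an ensemble sample
    variance, modelled as a hybrid weighted average; alpha is the
    weight on the climatological (static) variance."""
    return alpha * clim_var + (1.0 - alpha) * ens_var

def fit_static_weight(innovations, ens_vars):
    """Estimate alpha from an archive of (observation-minus-forecast,
    ensemble-variance) pairs by matching the mean squared innovation:
    mean(d^2) = alpha * clim_var + (1 - alpha) * mean(s^2).
    This single-moment match (with the innovation variance used as a
    proxy for the climatological variance) is an assumed shortcut, not
    the procedure of the cited work.
    """
    innovations = np.asarray(innovations, float)
    ens_vars = np.asarray(ens_vars, float)
    clim_var = innovations.var()
    mean_d2 = np.mean(innovations ** 2)
    mean_s2 = np.mean(ens_vars)
    if np.isclose(clim_var, mean_s2):
        return 0.5  # degenerate case: weights are unidentifiable
    alpha = (mean_d2 - mean_s2) / (clim_var - mean_s2)
    return float(np.clip(alpha, 0.0, 1.0))
```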

  14. GEMINI SPECTROSCOPY OF ULTRACOMPACT DWARFS IN THE FOSSIL GROUP NGC 1132

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Madrid, Juan P.; Donzelli, Carlos J.

    2013-06-20

    A spectroscopic follow-up of ultracompact dwarf (UCD) candidates in the fossil group NGC 1132 is undertaken with the Gemini Multi Object Spectrograph. These new Gemini spectra prove the presence of six UCDs in the fossil group NGC 1132 at a distance of D ~ 100 Mpc and a recessional velocity of v_r = 6935 ± 11 km s^-1. The brightest and largest member of the UCD population is an M32 analog with a size of 77.1 pc and a magnitude of M_V = -14.8 mag, with characteristics in between those of the brightest UCDs and compact elliptical galaxies. The ensemble of UCDs has an average radial velocity of <v_r> = 6966 ± 208 km s^-1 and a velocity dispersion of σ_v = 169 ± 18 km s^-1, similar to that of poor galaxy groups. This work shows that UCDs can be used as test particles to determine the dynamical properties of galaxy groups. The presence of UCDs in the fossil group environment is confirmed, as is the fact that UCDs can form under diverse evolutionary conditions.

  15. Experimental investigation of supersonic flow over elliptic surface

    NASA Astrophysics Data System (ADS)

    Zhang, Qinghu; Yi, Shihe; He, Lin; Zhu, Yangzhu; Chen, Zhi

    2013-11-01

    The coherent structures of flow over a compression elliptic surface are experimentally investigated in a supersonic low-noise wind tunnel at Mach number 3 using nano-tracer planar laser scattering (NPLS) and particle image velocimetry (PIV) techniques. High spatial resolution images and the average velocity profiles of both laminar inflow and turbulent inflow over the test model were captured. From statistically significant ensembles, a spatial correlation analysis of both cases is performed to quantify the mean size and orientation of the large structures. The results indicate that the mean structure is elliptical in shape and that structure angles in the separated region are slightly smaller for laminar inflow than for turbulent inflow. Moreover, in both cases the structure angle increases with distance from the wall. POD analysis of the velocity and vorticity fields is performed for both cases. The energy portion of the first mode for the velocity data is much larger than that for the vorticity field. For the vorticity decompositions, the contribution from the first mode for laminar inflow is slightly larger than that for turbulent inflow, and the cumulative contributions for laminar inflow converge slightly faster than those for turbulent inflow.

  16. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and an enhanced signal-to-noise ratio.
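
    The ASA selection-and-averaging step might look like the sketch below. The abstract only states that the ensemble spread is the selection criterion, so the concrete rule used here (a quantile cutoff on the spread) is our assumption.

```python
import numpy as np

def adaptive_spatial_average(posterior_params, ensemble_spreads,
                             spread_quantile=0.5):
    """Adaptive spatial average (ASA) sketch: from spatially varying
    posterior parameter estimates, keep the grid points whose ensemble
    spread is smallest (at or below the given quantile) and average
    those 'good' values into one global uniform posterior parameter.

    posterior_params : (n_points,) posterior parameter per grid point
    ensemble_spreads : (n_points,) ensemble spread per grid point
    """
    params = np.asarray(posterior_params, float)
    spreads = np.asarray(ensemble_spreads, float)
    cutoff = np.quantile(spreads, spread_quantile)  # assumed criterion
    good = spreads <= cutoff
    return float(params[good].mean())
```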

  17. Application Bayesian Model Averaging method for ensemble system for Poland

    NASA Astrophysics Data System (ADS)

    Guzikowski, Jakub; Czerwinska, Agnieszka

    2014-05-01

    The aim of the project is to evaluate methods for generating numerical ensemble weather predictions using meteorological data from the Weather Research & Forecasting (WRF) model and calibrating these data by means of the Bayesian Model Averaging (WRF BMA) approach. We construct high-resolution short-range ensemble forecasts using meteorological data (temperature) generated by nine WRF model runs. The WRF models have 35 vertical levels and 2.5 km x 2.5 km horizontal resolution. The key point is that the ensemble members use different parameterizations of the physical phenomena occurring in the boundary layer. To calibrate the ensemble forecast we use the Bayesian Model Averaging (BMA) approach. The BMA predictive probability density function (PDF) is a weighted average of the predictive PDFs associated with the individual ensemble members, with weights that reflect each member's relative skill. As a test we chose a case with a heat wave and convective weather conditions over Poland from 23 July to 1 August 2013. From 23 July to 29 July 2013 the temperature oscillated around 30 degrees Celsius at many meteorological stations and new temperature records were set. During this time an increase in hospitalized patients with cardiovascular problems was registered. On 29 July 2013 an advection of moist tropical air masses over Poland caused a strong convective event with a mesoscale convective system (MCS). The MCS caused local flooding, damage to transport infrastructure, destroyed buildings and trees, injuries, and a direct threat to life. The meteorological data from the ensemble system are compared with data recorded at 74 weather stations in Poland. We prepare a set of model-observation pairs; the data from the individual ensemble members and the median from the WRF BMA system are then evaluated using the deterministic error statistics root mean square error (RMSE) and mean absolute error (MAE). 
To evaluate the probabilistic data, the Brier score (BS) and the continuous ranked probability score (CRPS) were used. Finally, a comparison between the BMA-calibrated data and the data from the individual ensemble members is presented.
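
    The BMA predictive PDF described above, a weighted average of member PDFs, can be sketched with Gaussian member kernels (a common choice for temperature). The Gaussian form, the function signature, and the parameter names are assumptions for illustration, not details from the abstract.

```python
import numpy as np

def bma_predictive_pdf(y, member_forecasts, weights, sigmas):
    """BMA predictive PDF: a weighted average of the predictive PDFs of
    the individual ensemble members, here taken to be Gaussians centred
    on each member's (possibly bias-corrected) forecast.

    y                : value(s) at which to evaluate the PDF
    member_forecasts : (K,) forecasts of the K ensemble members
    weights          : (K,) BMA weights (members' relative skill),
                       assumed to sum to 1
    sigmas           : (K,) standard deviations of the member PDFs
    """
    y = np.atleast_1d(np.asarray(y, float))[:, None]
    f = np.asarray(member_forecasts, float)[None, :]
    w = np.asarray(weights, float)[None, :]
    s = np.asarray(sigmas, float)[None, :]
    kernels = np.exp(-0.5 * ((y - f) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return (w * kernels).sum(axis=1)  # mixture density at each y
```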

  18. Characterizing RNA ensembles from NMR data with kinematic models

    PubMed Central

    Fonseca, Rasmus; Pachov, Dimitar V.; Bernauer, Julie; van den Bedem, Henry

    2014-01-01

    Functional mechanisms of biomolecules often manifest themselves precisely in transient conformational substates. Researchers have long sought to structurally characterize dynamic processes in non-coding RNA, combining experimental data with computer algorithms. However, adequate exploration of conformational space for these highly dynamic molecules, starting from static crystal structures, remains challenging. Here, we report a new conformational sampling procedure, KGSrna, which can efficiently probe the native ensemble of RNA molecules in solution. We found that KGSrna ensembles accurately represent the conformational landscapes of 3D RNA encoded by NMR proton chemical shifts. KGSrna resolves motionally averaged NMR data into structural contributions; when coupled with residual dipolar coupling data, a KGSrna ensemble revealed a previously uncharacterized transient excited state of the HIV-1 trans-activation response element stem–loop. Ensemble-based interpretations of averaged data can aid in formulating and testing dynamic, motion-based hypotheses of functional mechanisms in RNAs with broad implications for RNA engineering and therapeutic intervention. PMID:25114056

  19. Ensemble Mean Density and its Connection to Other Microphysical Properties of Falling Snow as Observed in Southern Finland

    NASA Technical Reports Server (NTRS)

    Tiira, Jussi; Moisseev, Dmitri N.; Lerber, Annakaisa von; Ori, Davide; Tokay, Ali; Bliven, Larry F.; Petersen, Walter

    2016-01-01

    In this study measurements collected during winters 2013/2014 and 2014/2015 at the University of Helsinki measurement station in Hyytiala are used to investigate connections between ensemble mean snow density, particle fall velocity and parameters of the particle size distribution (PSD). The density of snow is derived from measurements of particle fall velocity and PSD, provided by a particle video imager, and weighing gauge measurements of precipitation rate. Validity of the retrieved density values is checked against snow depth measurements. A relation retrieved for the ensemble mean snow density and median volume diameter is in general agreement with previous studies, but it is observed to vary significantly from one winter to the other. From these observations, characteristic mass-dimensional relations of snow are retrieved. For snow rates above 0.2 mm/h, a correlation between the intercept parameter of the normalized gamma PSD and the median volume diameter was observed.
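The basic flux-matching idea behind such a retrieval can be sketched as follows: the gauge supplies the liquid-equivalent mass flux, the imager supplies the spherical-equivalent particle volume flux, and their ratio is an ensemble mean density. This is a simplified illustration, not the authors' exact retrieval; the sample values, function name, and one-minute sampling interval are assumptions.

```python
import math

RHO_WATER = 1000.0  # kg m^-3

def ensemble_mean_density(precip_rate_mmh, diameters_mm, counts, area_m2, dt_s):
    """Ensemble mean snow density as liquid-equivalent mass flux (weighing
    gauge) divided by spherical-equivalent particle volume flux (particle
    video imager), for all particles crossing area_m2 during dt_s seconds."""
    # gauge: liquid-equivalent mass through the sample area, kg s^-1
    mass_flux = precip_rate_mmh / 1000.0 / 3600.0 * RHO_WATER * area_m2
    # imager: total spherical-equivalent particle volume per second, m^3 s^-1
    vol_flux = sum(n * math.pi / 6.0 * (d / 1000.0) ** 3
                   for n, d in zip(counts, diameters_mm)) / dt_s
    return mass_flux / vol_flux

# Hypothetical one-minute sample: 1 mm/h liquid equivalent, three size bins
rho = ensemble_mean_density(1.0, [1.0, 2.0, 3.0], [100, 50, 10], 0.01, 60.0)
```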

  20. Ensemble mean density and its connection to other microphysical properties of falling snow as observed in Southern Finland

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tiira, Jussi; Moisseev, Dmitri N.; von Lerber, Annakaisa

    In this study measurements collected during winters 2013/2014 and 2014/2015 at the University of Helsinki measurement station in Hyytiala are used to investigate connections between ensemble mean snow density, particle fall velocity and parameters of the particle size distribution (PSD). The density of snow is derived from measurements of particle fall velocity and PSD, provided by a particle video imager, and weighing gauge measurements of precipitation rate. Validity of the retrieved density values is checked against snow depth measurements. Here, a relation retrieved for the ensemble mean snow density and median volume diameter is in general agreement with previous studies, but it is observed to vary significantly from one winter to the other. From these observations, characteristic mass-dimensional relations of snow are retrieved. For snow rates above 0.2 mm/h, a correlation between the intercept parameter of the normalized gamma PSD and the median volume diameter was observed.

  1. Ensemble mean density and its connection to other microphysical properties of falling snow as observed in Southern Finland

    DOE PAGES

    Tiira, Jussi; Moisseev, Dmitri N.; von Lerber, Annakaisa; ...

    2016-09-28

    In this study measurements collected during winters 2013/2014 and 2014/2015 at the University of Helsinki measurement station in Hyytiala are used to investigate connections between ensemble mean snow density, particle fall velocity and parameters of the particle size distribution (PSD). The density of snow is derived from measurements of particle fall velocity and PSD, provided by a particle video imager, and weighing gauge measurements of precipitation rate. Validity of the retrieved density values is checked against snow depth measurements. Here, a relation retrieved for the ensemble mean snow density and median volume diameter is in general agreement with previous studies, but it is observed to vary significantly from one winter to the other. From these observations, characteristic mass-dimensional relations of snow are retrieved. For snow rates above 0.2 mm/h, a correlation between the intercept parameter of the normalized gamma PSD and the median volume diameter was observed.

  2. Determination of ensemble-average pairwise root mean-square deviation from experimental B-factors.

    PubMed

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-03-03

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of the global structural similarity of macromolecules. Experimental x-ray B-factors, on the other hand, are frequently used to study local structural heterogeneity and dynamics in macromolecules, since they provide direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble average of the all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor (⟨B⟩) and ⟨RMSF²⟩^(1/2). We demonstrate this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of the villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying the global structural diversity of macromolecules in crystals directly from x-ray experiments, and we demonstrate this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is approximately 1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability.
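The chain of relations can be sketched numerically. The first step uses the standard crystallographic identity B = (8π²/3)⟨Δr²⟩; the second step, relating the ensemble-average pairwise RMSD to √2 times the RMSF, is a simplified version of the paper's derivation that assumes independent draws from the same fluctuation distribution, so the function names and factor here are illustrative assumptions rather than the authors' exact formula.

```python
import math

def rmsf_from_b(b_factor):
    """Isotropic RMSF (in Angstroms) implied by a B-factor (in Angstroms^2),
    via the standard relation B = (8*pi^2/3) * <dr^2>."""
    return math.sqrt(3.0 * b_factor / (8.0 * math.pi ** 2))

def pairwise_rmsd_from_b(mean_b):
    """Ensemble-average pairwise RMSD estimate, <RMSD^2>^(1/2) = sqrt(2)*RMSF,
    assuming B-factors report conformational variability only and atoms
    fluctuate independently between any two ensemble members."""
    return math.sqrt(2.0) * rmsf_from_b(mean_b)

# A typical backbone <B> near 16 A^2 gives a pairwise RMSD near 1.1 A
estimate = pairwise_rmsd_from_b(15.9)
```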

  3. Determination of Ensemble-Average Pairwise Root Mean-Square Deviation from Experimental B-Factors

    PubMed Central

    Kuzmanic, Antonija; Zagrovic, Bojan

    2010-01-01

    Root mean-square deviation (RMSD) after roto-translational least-squares fitting is a commonly used measure of the global structural similarity of macromolecules. Experimental x-ray B-factors, on the other hand, are frequently used to study local structural heterogeneity and dynamics in macromolecules, since they provide direct information about root mean-square fluctuations (RMSF) that can also be calculated from molecular dynamics simulations. We provide a mathematical derivation showing that, given a set of conservative assumptions, the root mean-square ensemble average of the all-against-all distribution of pairwise RMSD for a single molecular species, ⟨RMSD²⟩^(1/2), is directly related to the average B-factor (⟨B⟩) and ⟨RMSF²⟩^(1/2). We demonstrate this relationship and explore its limits of validity on a heterogeneous ensemble of structures taken from molecular dynamics simulations of the villin headpiece generated using distributed-computing techniques and the Folding@Home cluster. Our results provide a basis for quantifying the global structural diversity of macromolecules in crystals directly from x-ray experiments, and we demonstrate this on a large set of structures taken from the Protein Data Bank. In particular, we show that the ensemble-average pairwise backbone RMSD for a microscopic ensemble underlying a typical protein x-ray structure is ∼1.1 Å, under the assumption that the principal contribution to experimental B-factors is conformational variability. PMID:20197040

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hou Fengji; Hogg, David W.; Goodman, Jonathan

    Markov chain Monte Carlo (MCMC) proves to be powerful for Bayesian inference and in particular for exoplanet radial velocity fitting because MCMC provides more statistical information and makes better use of data than common approaches like chi-square fitting. However, the nonlinear density functions encountered in these problems can make MCMC time-consuming. In this paper, we apply an ensemble sampler respecting affine invariance to orbital parameter extraction from radial velocity data. This new sampler has only one free parameter, and does not require much tuning for good performance, which is important for automation. The autocorrelation time of this sampler is approximately the same for all parameters and far smaller than that of Metropolis-Hastings, which means it requires many fewer function calls to produce the same number of independent samples. The affine-invariant sampler speeds up MCMC by hundreds of times compared with Metropolis-Hastings in the same computing situation. This novel sampler would be ideal for projects involving large data sets such as statistical investigations of planet distribution. The biggest obstacle to ensemble samplers is the existence of multiple local optima; we present a clustering technique to deal with local optima by clustering based on the likelihood of the walkers in the ensemble. We demonstrate the effectiveness of the sampler on real radial velocity data.
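The affine-invariant ensemble sampler can be sketched with its core "stretch move": each walker proposes a point on the line through itself and a randomly chosen partner, with stretch factor z drawn from g(z) ∝ 1/√z on [1/a, a]. This is a minimal serial variant on a toy 1-D Gaussian target, not the paper's implementation (which, in practice, corresponds to the emcee package).

```python
import numpy as np

def stretch_move_sweep(walkers, log_prob, a=2.0, rng=None):
    """One sweep of the affine-invariant stretch move: each walker moves
    along the line to a randomly chosen partner, accepted with probability
    min(1, z^(dim-1) * p(proposal)/p(current))."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = walkers.shape
    out = walkers.copy()
    for k in range(n):
        j = rng.integers(n - 1)
        if j >= k:
            j += 1                                   # partner != walker k
        z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a  # g(z) ~ 1/sqrt(z)
        prop = out[j] + z * (out[k] - out[j])
        log_r = (dim - 1) * np.log(z) + log_prob(prop) - log_prob(out[k])
        if np.log(rng.random()) < log_r:
            out[k] = prop
    return out

# Sample a 1-D standard normal with 20 walkers, discarding burn-in.
rng = np.random.default_rng(0)
log_prob = lambda x: -0.5 * float(np.sum(x ** 2))
walkers = rng.normal(size=(20, 1))
samples = []
for i in range(800):
    walkers = stretch_move_sweep(walkers, log_prob, rng=rng)
    if i >= 300:
        samples.append(walkers.ravel().copy())
samples = np.concatenate(samples)
```

Note that the only tuning knob is the stretch scale a, which matches the abstract's point that the sampler has a single free parameter.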

  5. Variety and volatility in financial markets

    NASA Astrophysics Data System (ADS)

    Lillo, Fabrizio; Mantegna, Rosario N.

    2000-11-01

    We study the price dynamics of stocks traded in a financial market by considering the statistical properties both of a single time series and of an ensemble of stocks traded simultaneously. We use the n stocks traded on the New York Stock Exchange to form a statistical ensemble of daily stock returns. For each trading day in our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists on most trading days, with the exception of crash and rally days and of the days following these extreme events. We analyze each ensemble return distribution by extracting its first two central moments. We observe that these moments fluctuate in time and are themselves stochastic processes. We characterize the statistical properties of the ensemble return distribution's central moments by investigating their probability density functions and temporal correlation properties. In general, time-averaged and portfolio-averaged price returns have different statistical properties. From these differences we infer information about the relative strength of correlation between stocks and between different trading days. Finally, we compare our empirical results with those predicted by the single-index model and conclude that this simple model cannot explain the statistical properties of the second moment of the ensemble return distribution.
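The two central moments of the cross-sectional (ensemble) return distribution can be computed directly, one pair per trading day. The synthetic data below is an assumed single-index-like toy model for illustration, not the paper's NYSE dataset.

```python
import numpy as np

def daily_ensemble_moments(returns):
    """First two central moments of the cross-sectional (ensemble) return
    distribution. returns: shape (n_days, n_stocks), one row per day."""
    mean = returns.mean(axis=1)              # ensemble-average return per day
    variety = returns.std(axis=1, ddof=0)    # cross-sectional spread per day
    return mean, variety

# Synthetic example: 250 trading days, 100 stocks, a common market factor
# plus idiosyncratic noise of standard deviation 0.02.
rng = np.random.default_rng(2)
market = rng.normal(0.0, 0.01, size=(250, 1))
returns = market + rng.normal(0.0, 0.02, size=(250, 100))
mu, variety = daily_ensemble_moments(returns)
```

Both moments are themselves time series, which is exactly the sense in which the abstract treats them as stochastic processes.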

  6. Applications of Bayesian Procrustes shape analysis to ensemble radar reflectivity nowcast verification

    NASA Astrophysics Data System (ADS)

    Fox, Neil I.; Micheas, Athanasios C.; Peng, Yuqiang

    2016-07-01

    This paper introduces the use of Bayesian full Procrustes shape analysis in object-oriented meteorological applications. In particular, the Procrustes methodology is used to generate mean forecast precipitation fields from a set of ensemble forecasts. This approach has advantages over other ensemble averaging techniques in that it can produce a forecast that retains the morphological features of the precipitation structures and present the range of forecast outcomes represented by the ensemble. The production of the ensemble mean avoids the problems of smoothing that result from simple pixel or cell averaging, while producing credible sets that retain information on ensemble spread. Also in this paper, the full Bayesian Procrustes scheme is used as an object verification tool for precipitation forecasts. This is an extension of a previously presented Procrustes shape analysis based verification approach into a full Bayesian format designed to handle the verification of precipitation forecasts that match objects from an ensemble of forecast fields to a single truth image. The methodology is tested on radar reflectivity nowcasts produced in the Warning Decision Support System - Integrated Information (WDSS-II) by varying parameters in the K-means cluster tracking scheme.

  7. Weak ergodicity breaking, irreproducibility, and ageing in anomalous diffusion processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metzler, Ralf

    2014-01-14

    Single particle traces are standardly evaluated in terms of time averages of the second moment of the position time series r(t). For ergodic processes, one can interpret such results in terms of the known theories for the corresponding ensemble averaged quantities. In anomalous diffusion processes, that are widely observed in nature over many orders of magnitude, the equivalence between (long) time and ensemble averages may be broken (weak ergodicity breaking), and these time averages may no longer be interpreted in terms of ensemble theories. Here we detail some recent results on weakly non-ergodic systems with respect to the time averaged mean squared displacement, the inherent irreproducibility of individual measurements, and methods to determine the exact underlying stochastic process. We also address the phenomenon of ageing, the dependence of physical observables on the time span between initial preparation of the system and the start of the measurement.
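The time-averaged mean squared displacement discussed here (and in the time-averaged diffusivity work above) is the standard single-trajectory estimator: the squared displacement over a lag Δ, averaged over all start times t along one trace. A minimal sketch, checked on an ordinary Brownian trajectory where it grows linearly in the lag:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trajectory:
    delta2(lag) = mean over t of [x(t + lag) - x(t)]^2."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Ordinary (ergodic) Brownian motion with unit-variance steps:
# the time-averaged MSD matches the ensemble law <x^2> = t.
rng = np.random.default_rng(3)
traj = np.cumsum(rng.normal(0.0, 1.0, size=100_000))
msd = time_averaged_msd(traj, [1, 10, 100])
```

For the weakly non-ergodic processes of the abstract, this same estimator would scatter from trajectory to trajectory instead of converging to the ensemble curve.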

  8. Ensemble flood simulation for a small dam catchment in Japan using 10 and 2 km resolution nonhydrostatic model rainfalls

    NASA Astrophysics Data System (ADS)

    Kobayashi, Kenichiro; Otsuka, Shigenori; Apip; Saito, Kazuo

    2016-08-01

    This paper presents a study on short-term ensemble flood forecasting specifically for small dam catchments in Japan. Numerical ensemble simulations of rainfall from the Japan Meteorological Agency nonhydrostatic model (JMA-NHM) are used as the input data to a rainfall-runoff model for predicting river discharge into a dam. The ensemble weather simulations use conventional 10 km and high-resolution 2 km spatial resolutions. A distributed rainfall-runoff model is constructed for the Kasahori dam catchment (approx. 70 km2) and applied with the ensemble rainfalls. The results show that the hourly maximum and cumulative catchment-average rainfalls of the 2 km resolution JMA-NHM ensemble simulation are more appropriate than the 10 km resolution rainfalls. All the simulated inflows based on the 2 and 10 km rainfalls become larger than the flood discharge of 140 m3 s-1, a threshold value for flood control. The inflows with the 10 km resolution ensemble rainfall are all considerably smaller than the observations, while at least one simulated discharge out of 11 ensemble members with the 2 km resolution rainfalls reproduces the first peak of the inflow at the Kasahori dam with similar amplitude to observations, although there are spatiotemporal lags between simulation and observation. To take positional lags into account in the ensemble discharge simulation, the rainfall distribution in each ensemble member is shifted so that the catchment-averaged cumulative rainfall over the Kasahori dam catchment is maximized. The runoff simulation with the position-shifted rainfalls shows much better results than the original ensemble discharge simulations.

  9. Turbulence intensity in a region of interest 2 cm distal to the carotid bifurcation in a family of seven anthropomorphic flow phantoms

    NASA Astrophysics Data System (ADS)

    Powell, Janet L.; Poepping, Tamie L.

    2011-03-01

    An in vitro flow system has been used to assess the flow disturbances downstream of the stenosis in a family of seven carotid bifurcation phantoms modelling varying plaque build-up both axially symmetrically (concentrically) and asymmetrically (eccentrically). Radio frequency data were collected for 10 s at each of over 1000 sites within each model, and a sliding 1024-point FFT is applied to the data to extract the Doppler spectrum every 12 ms. From this, the ensemble average over 10 cardiac cycles of the spectral mean velocity, and the root mean square over these same 10 cardiac cycles - the turbulence intensity (TI) - can be obtained as a function of an ensemble averaged cardiac cycle at each spatial point in all phantoms. TI was investigated by looking at the average over a 25 mm2 square region of interest in the ICA centered 2 cm distal to the apex of the bifurcation. TI in the region of interest increased with stenosis severity; at 23 ms following peak systole, the time point when TI was maximal for the majority of models, this ranged from 2.4±0.1 cm/s in the non-diseased model to 6.6±0.3, 16.0±1.4 and 26.1±1.3 cm/s in the 30, 50 and 70% concentrically stenosed (by NASCET criteria) models, respectively. Similarly, TI was 8.3±0.7, 19.9±1.1, and 26.2±1.2 cm/s in the 30, 50 and 70% eccentrically stenosed models, respectively. Differences in TI between models, both in increasing stenosis severity and between eccentricities, were statistically different except between the 70% concentric and eccentric models.
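The two quantities extracted at each spatial point, the phase-averaged cycle and the turbulence intensity, amount to a mean and an RMS across cardiac cycles at each phase. A minimal sketch on synthetic data (the waveform and noise amplitude below are illustrative assumptions, not the phantom measurements):

```python
import numpy as np

def phase_averaged_ti(velocity):
    """Ensemble-averaged cardiac cycle and turbulence intensity (TI).
    velocity: shape (n_cycles, n_times), spectral mean velocity at one
    spatial point, one row per cardiac cycle."""
    ensemble_mean = velocity.mean(axis=0)   # phase-averaged waveform
    ti = velocity.std(axis=0, ddof=0)       # RMS fluctuation about it
    return ensemble_mean, ti

# Synthetic check: a pulsatile waveform plus cycle-to-cycle fluctuations
# of RMS amplitude 2 cm/s over 10 cycles.
rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 80)
waveform = 30.0 + 20.0 * np.sin(2.0 * np.pi * t)
velocity = waveform + rng.normal(0.0, 2.0, size=(10, 80))
mean_cycle, ti = phase_averaged_ti(velocity)
```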

  10. Ensemble Kalman filter for the reconstruction of the Earth's mantle circulation

    NASA Astrophysics Data System (ADS)

    Bocher, Marie; Fournier, Alexandre; Coltice, Nicolas

    2018-02-01

    Recent advances in mantle convection modeling led to the release of a new generation of convection codes, able to self-consistently generate plate-like tectonics at their surface. Those models physically link mantle dynamics to surface tectonics. Combined with plate tectonic reconstructions, they have the potential to produce a new generation of mantle circulation models that use data assimilation methods and in which uncertainties in plate tectonic reconstructions are taken into account. We provided a proof of this concept by applying a suboptimal Kalman filter to the reconstruction of mantle circulation (Bocher et al., 2016). Here, we propose to go one step further and apply the ensemble Kalman filter (EnKF) to this problem. The EnKF is a sequential Monte Carlo method particularly adapted to solving high-dimensional data assimilation problems with nonlinear dynamics. We tested the EnKF using synthetic observations consisting of surface velocity and heat flow measurements on a 2-D spherical annulus model and compared it with the method developed previously. The EnKF performs better on average and is more stable than the former method. Fewer than 300 ensemble members are sufficient to reconstruct an evolution. We use covariance adaptive inflation and localization to correct for sampling errors. We show that the EnKF results are robust over a wide range of covariance localization parameters. The reconstruction is associated with an estimation of the error, and provides valuable information on where the reconstruction is to be trusted or not.
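The core of any EnKF application is the analysis step, in which each ensemble member is nudged toward the observations using a Kalman gain built from the ensemble's own sample covariances. A minimal stochastic-EnKF sketch on a toy two-variable state (without the inflation and localization used in the paper):

```python
import numpy as np

def enkf_analysis(ensemble, obs, H, R, rng):
    """Stochastic EnKF analysis step: update each member toward a perturbed
    observation with the sample-covariance Kalman gain.
    ensemble: (n_members, n_state); H: (n_obs, n_state); R: (n_obs, n_obs)."""
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)        # state anomalies
    HX = ensemble @ H.T                          # members in observation space
    HXp = HX - HX.mean(axis=0)                   # observed anomalies
    P_HT = X.T @ HXp / (n - 1)                   # state-obs cross covariance
    S = HXp.T @ HXp / (n - 1) + R                # innovation covariance
    K = P_HT @ np.linalg.inv(S)                  # Kalman gain
    y_pert = rng.multivariate_normal(obs, R, size=n)  # perturbed observations
    return ensemble + (y_pert - HX) @ K.T

# Toy example: 2-D state with prior ~ N(0, I); only component 0 is observed.
rng = np.random.default_rng(5)
prior = rng.normal(size=(500, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.01]])
posterior = enkf_analysis(prior, np.array([5.0]), H, R, rng)
```

With an accurate observation, the observed component collapses onto the measurement while the unobserved component moves only through the sampled cross-covariance.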

  11. Toward an Accurate Theoretical Framework for Describing Ensembles for Proteins under Strongly Denaturing Conditions

    PubMed Central

    Tran, Hoang T.; Pappu, Rohit V.

    2006-01-01

    Our focus is on an appropriate theoretical framework for describing highly denatured proteins. In high concentrations of denaturants, proteins behave like polymers in a good solvent and ensembles for denatured proteins can be modeled by ignoring all interactions except excluded volume (EV) effects. To assay conformational preferences of highly denatured proteins, we quantify a variety of properties for EV-limit ensembles of 23 two-state proteins. We find that modeled denatured proteins can be best described as follows. Average shapes are consistent with prolate ellipsoids. Ensembles are characterized by large correlated fluctuations. Sequence-specific conformational preferences are restricted to local length scales that span five to nine residues. Beyond local length scales, chain properties follow well-defined power laws that are expected for generic polymers in the EV limit. The average available volume is filled inefficiently, and cavities of all sizes are found within the interiors of denatured proteins. All properties characterized from simulated ensembles match predictions from rigorous field theories. We use our results to resolve between conflicting proposals for structure in ensembles for highly denatured states. PMID:16766618

  12. Prediction of S-wave velocity using complete ensemble empirical mode decomposition and neural networks

    NASA Astrophysics Data System (ADS)

    Gaci, Said; Hachay, Olga; Zaourar, Naima

    2017-04-01

    One of the key elements in hydrocarbon reservoir characterization is the S-wave velocity (Vs). Since traditional estimation methods often fail to accurately predict this physical parameter, a new approach that takes into account its non-stationary and non-linear properties is needed. With this in view, a prediction model based on complete ensemble empirical mode decomposition (CEEMD) and a multilayer perceptron artificial neural network (MLP ANN) is suggested to compute Vs from the P-wave velocity (Vp). Using a fine-to-coarse reconstruction algorithm based on CEEMD, the Vp log data are decomposed into a high-frequency (HF) component, a low-frequency (LF) component and a trend component. Different combinations of these components are then used as inputs to the MLP ANN algorithm for estimating the Vs log. Applications to well logs taken from different geological settings illustrate that the Vs values predicted using the MLP ANN with combinations of HF, LF and trend components as inputs are more accurate than those obtained with the traditional estimation methods. Keywords: S-wave velocity, CEEMD, multilayer perceptron neural networks.

  13. Enhanced Sampling in the Well-Tempered Ensemble

    NASA Astrophysics Data System (ADS)

    Bonomi, M.; Parrinello, M.

    2010-05-01

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  14. Enhanced sampling in the well-tempered ensemble.

    PubMed

    Bonomi, M; Parrinello, M

    2010-05-14

    We introduce the well-tempered ensemble (WTE) which is the biased ensemble sampled by well-tempered metadynamics when the energy is used as collective variable. WTE can be designed so as to have approximately the same average energy as the canonical ensemble but much larger fluctuations. These two properties lead to an extremely fast exploration of phase space. An even greater efficiency is obtained when WTE is combined with parallel tempering. Unbiased Boltzmann averages are computed on the fly by a recently developed reweighting method [M. Bonomi, J. Comput. Chem. 30, 1615 (2009)]. We apply WTE and its parallel tempering variant to the 2d Ising model and to a Gō model of HIV protease, demonstrating in these two representative cases that convergence is accelerated by orders of magnitude.

  15. Inhomogeneous diffusion and ergodicity breaking induced by global memory effects

    NASA Astrophysics Data System (ADS)

    Budini, Adrián A.

    2016-11-01

    We introduce a class of discrete random-walk models driven by global memory effects. At any time, the right-left transitions depend on the whole previous history of the walker, being defined by an urn-like memory mechanism. The characteristic function is calculated exactly, which allows us to demonstrate that the ensemble of realizations is ballistic. Asymptotically, each realization is equivalent to that of a biased Markovian diffusion process with transition rates that differ strongly from one trajectory to another. Using this "inhomogeneous diffusion" feature, the ergodic properties of the dynamics are studied analytically through the time-averaged moments. Even in the long-time regime, they remain random objects. While their average over realizations recovers the corresponding ensemble averages, the departure between time and ensemble averages is shown explicitly through their probability densities. For the density of the second time-averaged moment, the ergodic limit and the limit of infinite lag times do not commute. All these effects are induced by the memory effects. A generalized Einstein fluctuation-dissipation relation is also obtained for the time-averaged moments.
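The urn-like memory mechanism can be illustrated with a Pólya-urn walk: the probability of a right step is proportional to the number of right steps taken so far. This is a simplified stand-in for the paper's model, but it shows the same signature: each realization locks into its own effective bias, so the time-averaged drift stays random across trajectories even though the ensemble mean vanishes.

```python
import numpy as np

def polya_walk(T, rng):
    """Discrete walk with urn-like global memory: the probability of a
    right step after t steps with r right steps so far is (r+1)/(t+2),
    so the whole history biases every transition."""
    x, right = 0, 0
    traj = np.empty(T)
    for t in range(T):
        if rng.random() < (right + 1) / (t + 2):
            x += 1
            right += 1
        else:
            x -= 1
        traj[t] = x
    return traj

# Time-averaged drift x_T / T for 200 independent realizations: the
# distribution stays broad (approximately uniform on [-1, 1]) while the
# average over realizations is near zero.
rng = np.random.default_rng(1)
drifts = np.array([polya_walk(2000, rng)[-1] / 2000 for _ in range(200)])
```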

  16. Constructing optimal ensemble projections for predictive environmental modelling in Northern Eurasia

    NASA Astrophysics Data System (ADS)

    Anisimov, Oleg; Kokorev, Vasily

    2013-04-01

    Large uncertainties in climate impact modelling are associated with the forcing climate data. This study is targeted at the evaluation of the quality of GCM-based climatic projections in the specific context of predictive environmental modelling in Northern Eurasia. To accomplish this task, we used the output from 36 CMIP5 GCMs from the IPCC AR5 database for the control period 1975-2005 and calculated several climatic characteristics and indexes that are most often used in the impact models, i.e. the summer warmth index, duration of the vegetation growth period, precipitation sums, dryness index, thawing degree-day sums, and the annual temperature amplitude. We used data from 744 weather stations in Russia and neighbouring countries to analyze the spatial patterns of modern climatic change and to delineate 17 large regions with coherent temperature changes in the past few decades. GCM results and observational data were averaged over the coherent regions and compared with each other. Ultimately, we evaluated the skills of individual models, ranked them in the context of regional impact modelling and identified top-end GCMs that "better than average" reproduce modern regional changes of the selected meteorological parameters and climatic indexes. Selected top-end GCMs were used to compose several ensembles, each combining results from a different number of models. Ensembles were ranked using the same algorithm and outliers eliminated. We then used data from top-end ensembles for the 2000-2100 period to construct the climatic projections that are likely to be "better than average" in predicting climatic parameters that govern the state of the environment in Northern Eurasia. The ultimate conclusions of our study are the following. 
• High-end GCMs that demonstrate excellent skills in conventional atmospheric model intercomparison experiments are not necessarily the best at replicating climatic characteristics that govern the state of the environment in Northern Eurasia, and independent model evaluation at the regional level is necessary to identify "better than average" GCMs.
• Each ensemble combining results from several "better than average" models replicates the selected meteorological parameters and climatic indexes better than any single GCM. The ensemble skills are parameter-specific and depend on the models the ensemble comprises. The best results are not necessarily those based on the ensemble comprising all "better than average" models.
• Comprehensive evaluation of climatic scenarios using specific criteria narrows the range of uncertainties in environmental projections.

  17. Vertical Motion Changes Related to North-East Brazil Rainfall Variability: a GCM Simulation

    NASA Astrophysics Data System (ADS)

    Roucou, Pascal; Oribe Rocha de Aragão, José; Harzallah, Ali; Fontaine, Bernard; Janicot, Serge

    1996-08-01

    The atmospheric structure over north-east Brazil during anomalous rainfall years is studied using the 11-level output of the Laboratoire de Météorologie Dynamique atmospheric general circulation model (LMD AGCM). Seven 19-year simulations were performed using observed sea-surface temperature (SST) corresponding to the period 1970-1988. The ensemble mean is calculated for each month of the period, leading to an ensemble-averaged simulation. The simulated March-April rainfall is in good agreement with observations. Correlations of simulated rainfall and three SST indices relative to the equatorial Pacific and the northern and southern parts of the Atlantic Ocean exhibit stronger relationships in the simulation than in the observations. This is particularly true for the SST gradient in the Atlantic (Atlantic dipole). Analyses of 200 hPa velocity potential, vertical velocity, and the vertical integral of the zonal component of mass flux are performed for years of abnormal rainfall and positive/negative SST anomalies in the Pacific and Atlantic oceans in March-April during the rainy season over the Nordeste region. The results at 200 hPa show a convergence anomaly over Nordeste and a divergence anomaly over the Pacific concomitant with dry seasons associated with warm SST anomalies in the Pacific and warm (cold) waters in the North (South) Atlantic. During drought years convection inside the ITCZ, indicated by the vertical velocity, exhibits a displacement of the convection zone corresponding to a northward migration of the ITCZ. The east-west circulation depicted by the zonal divergent mass flux shows subsiding motion over Nordeste and ascending motion over the Pacific in drought years, accompanied by warm waters in the eastern Pacific and warm/cold waters in the northern/southern Atlantic. Nordeste rainfall variability is linked mainly to vertical motion and SST variability through the migration of the ITCZ and the east-west circulation.

  18. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast and whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palenstrophy norm, and significantly more efficient than that for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows whereas the palenstrophy singular vector forecast leads sometimes to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.

  19. Quantum canonical ensemble: A projection operator approach

    NASA Astrophysics Data System (ADS)

    Magnus, Wim; Lemmens, Lucien; Brosens, Fons

    2017-09-01

    Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function Z_N and the Helmholtz free energy F_N as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from F_{N+1} - F_N, as illustrated for a two-dimensional fermion gas.
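The angular integration mentioned in the abstract is the standard particle-number projection identity; for noninteracting fermions with single-particle energies ε_k the projected trace factorizes over single-particle states. This is the textbook form of the projection, written here for illustration and not necessarily in the paper's exact notation:

```latex
Z_N \;=\; \frac{1}{2\pi}\int_0^{2\pi} d\varphi\, e^{-iN\varphi}\,
\mathrm{Tr}\!\left[e^{i\varphi \hat{N}}\, e^{-\beta \hat{H}}\right]
\;=\; \frac{1}{2\pi}\int_0^{2\pi} d\varphi\, e^{-iN\varphi}
\prod_k \left(1 + e^{i\varphi - \beta\varepsilon_k}\right)
```

The phase factor e^{-iNφ} picks out exactly the N-particle sector of the otherwise unconstrained Fock-space trace, which is how the constraint on the occupation numbers is traded for a single angular integral.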

  20. Exact solutions for kinetic models of macromolecular dynamics.

    PubMed

    Chemla, Yann R; Moffitt, Jeffrey R; Bustamante, Carlos

    2008-05-15

    Dynamic biological processes such as enzyme catalysis, molecular motor translocation, and protein and nucleic acid conformational dynamics are inherently stochastic processes. However, when such processes are studied on a nonsynchronized ensemble, the inherent fluctuations are lost, and only the average rate of the process can be measured. With the recent development of methods of single-molecule manipulation and detection, it is now possible to follow the progress of an individual molecule, measuring not just the average rate but the fluctuations in this rate as well. These fluctuations can provide a great deal of detail about the underlying kinetic cycle that governs the dynamical behavior of the system. However, extracting this information from experiments requires the ability to calculate the general properties of arbitrarily complex theoretical kinetic schemes. We present here a general technique that determines the exact analytical solution for the mean velocity and for measures of the fluctuations. We adopt a formalism based on the master equation and show how the probability density for the position of a molecular motor at a given time can be solved exactly in Fourier-Laplace space. With this analytic solution, we can then calculate the mean velocity and fluctuation-related parameters, such as the randomness parameter (a dimensionless ratio of the diffusion constant and the velocity) and the dwell time distributions, which fully characterize the fluctuations of the system, both commonly used kinetic parameters in single-molecule measurements. Furthermore, we show that this formalism allows calculation of these parameters for a much wider class of general kinetic models than demonstrated with previous methods.
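For the special case of an irreversible, strictly sequential kinetic cycle, the mean velocity and the randomness parameter have simple closed forms that the Fourier-Laplace machinery reproduces; a sketch of that limiting case (the general scheme with reversible steps and branches needs the full formalism described in the paper):

```python
def cycle_stats(rates, d=1.0):
    """Irreversible sequential cycle: rates are the forward rate constants of
    each exponential step, d is the step size (e.g. motor step in nm).
    Returns (mean velocity, randomness parameter r = Var(tau)/<tau>^2)."""
    mean_dwell = sum(1.0 / k for k in rates)       # <tau> = sum_i 1/k_i
    var_dwell = sum(1.0 / k**2 for k in rates)     # independent exponential steps
    v = d / mean_dwell
    r = var_dwell / mean_dwell**2                  # equals 2D/(v d) for a renewal process
    return v, r

# A single Poisson step gives r = 1; n identical sequential steps give r = 1/n,
# which is how dwell-time fluctuations reveal hidden kinetic intermediates.
v1, r1 = cycle_stats([2.0])
v4, r4 = cycle_stats([2.0] * 4)
assert abs(r1 - 1.0) < 1e-12 and abs(r4 - 0.25) < 1e-12
```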

  1. A virtual pebble game to ensemble average graph rigidity.

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2015-01-01

    The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is an MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where the integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability of finding a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate that the VPG quantitatively estimates the ensemble-averaged PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies. 
The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
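The contrast between MCC and the mean-field idea behind the VPG can be made concrete with a toy count: MCC subtracts every constraint from the total DOF, while the VPG-style treatment replaces each fluctuating interaction by its expected bar multiplicity. A sketch with illustrative numbers (not the published algorithm):

```python
def maxwell_count(n_bodies, n_bars):
    """Maxwell constraint counting for a body-bar network: each rigid body
    carries 6 DOF, each bar removes at most one, and the 6 global rigid-body
    motions are discounted. Returns a lower bound on internal DOF."""
    return max(6 * n_bodies - n_bars - 6, 0)

def expected_bars(edges):
    """Mean-field replacement of fluctuating constraints: edges is a list of
    (bar multiplicity, probability the interaction is present) pairs."""
    return sum(m * p for m, p in edges)

# 10 bodies with 30 certain bars leave at least 24 internal DOF.
assert maxwell_count(10, 30) == 24
# A 5-bar hydrogen bond present half the time counts as 2.5 bars on average.
assert abs(expected_bars([(5, 0.5), (5, 1.0)]) - 7.5) < 1e-12
```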

  2. Creation of the BMA ensemble for SST using a parallel processing technique

    NASA Astrophysics Data System (ADS)

    Kim, Kwangjin; Lee, Yang Won

    2013-10-01

    Although they serve the same purpose, satellite products differ in value because each carries unavoidable uncertainty. These products have also accumulated over a long period and are numerous and varied, so efforts to reduce the uncertainty and to handle such large data volumes are necessary. In this paper, we create an ensemble Sea Surface Temperature (SST) product using MODIS Aqua, MODIS Terra, and COMS (Communication, Ocean and Meteorological Satellite). We used Bayesian Model Averaging (BMA) as the ensemble method. The principle of BMA is to synthesize the conditional probability density functions (PDFs) using posterior probabilities as weights; the posterior probabilities are estimated with the EM algorithm, and the BMA PDF is obtained as the weighted average. As a result, the ensemble SST showed the lowest RMSE and MAE, which demonstrates the applicability of BMA to satellite data ensembles. As future work, parallel processing techniques based on the Hadoop framework will be adopted for more efficient computation of very large satellite datasets.
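A minimal sketch of Gaussian BMA fitted with EM, assuming a shared kernel width and synthetic stand-ins for the satellite products (the operational SST processing is not reproduced here):

```python
import numpy as np

def bma_em(preds, obs, iters=200):
    """preds: (K, T) model/product predictions; obs: (T,) reference values.
    Returns (weights, sigma) of a Gaussian BMA mixture fitted by EM."""
    K, T = preds.shape
    w = np.full(K, 1.0 / K)
    sigma = np.std(preds - obs)
    for _ in range(iters):
        # E-step: posterior probability that each member generated each obs
        pdf = np.exp(-0.5 * ((obs - preds) / sigma) ** 2) / sigma
        z = w[:, None] * pdf
        z /= z.sum(axis=0, keepdims=True)
        # M-step: update the weights and the shared kernel spread
        w = z.mean(axis=1)
        sigma = np.sqrt((z * (obs - preds) ** 2).sum() / T)
    return w, sigma

rng = np.random.default_rng(0)
truth = rng.normal(20.0, 2.0, 500)            # synthetic "true" SST values
good = truth + rng.normal(0, 0.2, 500)        # low-error product
bad = truth + rng.normal(0, 2.0, 500)         # high-error product
w, sigma = bma_em(np.stack([good, bad]), truth)
assert abs(w.sum() - 1.0) < 1e-9 and w[0] > w[1]
```

The BMA ensemble mean is then `w @ preds`, i.e. the posterior-weighted average of the member products.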

  3. Blending of Radial HF Radar Surface Current and Model Using ETKF Scheme For The Sunda Strait

    NASA Astrophysics Data System (ADS)

    Mujiasih, Subekti; Riyadi, Mochammad; Wandono, Dr; Wayan Suardana, I.; Nyoman Gede Wiryajaya, I.; Nyoman Suarsa, I.; Hartanto, Dwi; Barth, Alexander; Beckers, Jean-Marie

    2017-04-01

    A preliminary study of surface-current data blending for the Sunda Strait, Indonesia, has been carried out using the analysis scheme of the Ensemble Transform Kalman Filter (ETKF). The method is used to combine radial velocities from HF radar with the u and v velocity components of the global Copernicus Marine Environment Monitoring Service (CMEMS) model. The initial ensemble is based on the time variability of the CMEMS model result. The data tested come from two CODAR SeaSonde radar sites in the Sunda Strait on two dates, 09 September 2013 and 08 February 2016, at 12:00 UTC. The radial HF radar data have hourly temporal resolution, a 20-60 km spatial range, 3 km range resolution, 5 degree angular resolution, and an 11.5-14 MHz frequency range. The u and v components of the model velocity represent a daily mean with 1/12 degree spatial resolution. The radial data from one HF radar site are analyzed, and the result is compared to the equivalent radial velocity from CMEMS for the second HF radar site. Errors are quantified by the root mean squared error (RMSE). The ensemble analysis and ensemble mean are calculated using the Sangoma software package. The tested observation error covariance matrix R is diagonal, with diagonal elements equal to 0.05, 0.5, or 1.0 m2/s2. The initial ensemble members come from a model simulation spanning one month (September 2013 or February 2016), one year (2013), or four years (2013-2016). The spatial distribution of the radial current is analyzed, and the RMSE values obtained from the independent HF radar station are optimized. It was verified that the analysis reproduces well the structure contained in the analyzed HF radar data. More importantly, the analysis also improved relative to the second, independent HF radar site: its RMSE is lower than that of the analysis of the first HF radar site alone. The best result of the blending exercise was obtained for an observation error variance of 0.05 m2/s2. 
This study is still a preliminary step, but it gives promising results for larger data volumes, for combining other models, and for further development. Keywords: HF Radar, Sunda Strait, ETKF, CMEMS
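The ETKF analysis step used in such blending can be sketched compactly in the common ensemble-transform formulation (the toy state, observation operator, and error variances below are illustrative, not the study's configuration, and the Sangoma package is not used):

```python
import numpy as np

def etkf_analysis(X, y, H, R):
    """ETKF analysis step. X: (n, m) state ensemble; y: (p,) observations;
    H: (p, n) observation operator; R: (p, p) observation error covariance."""
    n, m = X.shape
    xb = X.mean(axis=1)
    Xp = X - xb[:, None]                       # background perturbations
    Yp = H @ Xp                                # observation-space perturbations
    Rinv = np.linalg.inv(R)
    A = (m - 1) * np.eye(m) + Yp.T @ Rinv @ Yp
    evals, evecs = np.linalg.eigh(A)
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T            # weight covariance
    wa = Pa @ Yp.T @ Rinv @ (y - H @ xb)                   # mean-update weights
    Wa = evecs @ np.diag(np.sqrt((m - 1) / evals)) @ evecs.T  # ensemble transform
    return xb[:, None] + Xp @ (wa[:, None] + Wa)

rng = np.random.default_rng(1)
X = 2.0 + 0.5 * rng.standard_normal((3, 20))   # background ensemble around 2
H = np.array([[1.0, 0.0, 0.0]])                # observe the first component only
Xa = etkf_analysis(X, np.array([3.0]), H, np.array([[0.01]]))
# With a small observation error, the analysis mean moves close to the observation.
assert abs(Xa[0].mean() - 3.0) < 0.2
```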

  4. Experimental investigation of turbulent diffusion of slightly buoyant droplets in locally isotropic turbulence

    NASA Astrophysics Data System (ADS)

    Gopalan, Balaji; Malkiel, Edwin; Katz, Joseph

    2008-09-01

    High-speed inline digital holographic cinematography is used for studying turbulent diffusion of slightly buoyant 0.5-1.2 mm diameter diesel droplets and 50 μm diameter neutral density particles. Experiments are performed in a 50×50×70 mm3 sample volume in a controlled, nearly isotropic turbulence facility, which is characterized by two dimensional particle image velocimetry. An automated tracking program has been used for measuring velocity time history of more than 17 000 droplets and 15 000 particles. For most of the present conditions, rms values of horizontal droplet velocity exceed those of the fluid. The rms values of droplet vertical velocity are higher than those of the fluid only for the highest turbulence level. The turbulent diffusion coefficient is calculated by integration of the ensemble-averaged Lagrangian velocity autocovariance. Trends of the asymptotic droplet diffusion coefficient are examined by noting that it can be viewed as a product of a mean square velocity and a diffusion time scale. To compare the effects of turbulence and buoyancy, the turbulence intensity (ui') is scaled by the droplet quiescent rise velocity (Uq). The droplet diffusion coefficients in horizontal and vertical directions are lower than those of the fluid at low normalized turbulence intensity, but exceed it with increasing normalized turbulence intensity. For most of the present conditions the droplet horizontal diffusion coefficient is higher than the vertical diffusion coefficient, consistent with trends of the droplet velocity fluctuations and in contrast to the trends of the diffusion timescales. The droplet diffusion coefficients scaled by the product of turbulence intensity and an integral length scale are a monotonically increasing function of ui'/Uq.
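The diffusion-coefficient estimate described above, integrating the ensemble-averaged Lagrangian velocity autocovariance, can be sketched with synthetic Ornstein-Uhlenbeck "tracks" standing in for the measured droplet trajectories (parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau, urms = 0.01, 0.5, 1.0        # time step, integral time scale, rms velocity
nsteps, ntracks = 4000, 200

# Synthetic Lagrangian velocity tracks: discretized Ornstein-Uhlenbeck process
u = np.zeros((ntracks, nsteps))
for i in range(1, nsteps):
    u[:, i] = u[:, i - 1] * (1 - dt / tau) \
              + urms * np.sqrt(2 * dt / tau) * rng.standard_normal(ntracks)

# Ensemble-averaged velocity autocovariance R(k dt), then D = integral R dt
lags = 600
acov = np.array([(u[:, :nsteps - k] * u[:, k:]).mean() for k in range(lags)])
D = acov.sum() * dt
# The exact asymptotic value for this process is urms^2 * tau.
assert abs(D - urms**2 * tau) / (urms**2 * tau) < 0.3
```

This mirrors the decomposition noted in the abstract: D is the product of a mean-square velocity (acov[0]) and a diffusion time scale (the integral of the normalized autocovariance).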

  5. Experimental Demonstration of Quantum Stationary Light Pulses in an Atomic Ensemble

    NASA Astrophysics Data System (ADS)

    Park, Kwang-Kyoon; Cho, Young-Wook; Chough, Young-Tak; Kim, Yoon-Ho

    2018-04-01

    We report an experimental demonstration of the nonclassical stationary light pulse (SLP) in a cold atomic ensemble. A single collective atomic excitation is created and heralded by detecting a Stokes photon in the spontaneous Raman scattering process. The heralded single atomic excitation is converted into a single stationary optical excitation or the single-photon SLP, whose effective group velocity is zero, effectively forming a trapped single-photon pulse within the cold atomic ensemble. The single-photon SLP is then released from the atomic ensemble as an anti-Stokes photon after a specified trapping time. The second-order correlation measurement between the Stokes and anti-Stokes photons reveals the nonclassical nature of the single-photon SLP. Our work paves the way toward quantum nonlinear optics without a cavity.

  6. How well the Reliable Ensemble Averaging Method (REA) for 15 CMIP5 GCMs simulations works for Mexico?

    NASA Astrophysics Data System (ADS)

    Colorado, G.; Salinas, J. A.; Cavazos, T.; de Grau, P.

    2013-05-01

    15 CMIP5 GCM precipitation simulations were combined into a weighted ensemble using the Reliability Ensemble Averaging (REA) method, which yields a weight for each model. This was done for a historical period (1961-2000) and for future emissions based on low (RCP4.5) and high (RCP8.5) radiative forcing for the period 2075-2099. The annual cycles of the simple ensemble of the historical GCM simulations, the historical REA average, and the Climate Research Unit (CRU TS3.1) database were compared in four zones of Mexico. For precipitation, the improvements from the REA method are evident, especially in the two northern zones of Mexico, where the REA average is closer to the observations (CRU) than the simple average. In the southern zones there is also an improvement, but it is not as marked as in the north; in particular, in the southeast the REA average reproduces the annual cycle qualitatively well but greatly underestimates the mid-summer drought. The main reason is that precipitation is underestimated by all the models, and the mid-summer drought does not even exist in some of them. In the REA average of the future scenarios, as expected, the most drastic decrease in precipitation was simulated under RCP8.5, especially in the monsoon area and in the south of Mexico in summer and winter. In the center and south of Mexico, however, the same scenario simulates an increase of precipitation in autumn.
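The REA weighting can be sketched in simplified form: each model's weight combines a bias criterion (distance from observations) and a convergence criterion (distance from the weighted consensus), iterated to a fixed point. The specific factors and the tolerance ε below are illustrative, not the study's exact configuration:

```python
import numpy as np

def rea_average(models, obs_mean, eps, iters=50):
    """Simplified REA: models are scalar climate statistics (one per GCM),
    obs_mean is the observed value, eps a natural-variability tolerance."""
    models = np.asarray(models, dtype=float)
    # Bias criterion: penalize models far from the observed climate
    rb = np.minimum(1.0, eps / (np.abs(models - obs_mean) + 1e-12))
    avg = models.mean()
    for _ in range(iters):
        # Convergence criterion: penalize models far from the weighted consensus
        rd = np.minimum(1.0, eps / (np.abs(models - avg) + 1e-12))
        w = rb * rd
        avg = (w * models).sum() / w.sum()
    return avg, w / w.sum()

# Three models: two near the observed climate, one outlier
avg, w = rea_average([1.0, 1.2, 4.0], obs_mean=1.1, eps=0.5)
assert w[2] < w[0] and w[2] < w[1]   # the outlier is down-weighted
assert 1.0 < avg < 1.5               # the REA average stays near the observations
```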

  7. Coaxial volumetric velocimetry

    NASA Astrophysics Data System (ADS)

    Schneiders, Jan F. G.; Scarano, Fulvio; Jux, Constantin; Sciacchitano, Andrea

    2018-06-01

    This study describes the working principles of the coaxial volumetric velocimeter (CVV) for wind tunnel measurements. The measurement system is derived from the concept of tomographic PIV in combination with recent developments in Lagrangian particle tracking. The main characteristic of the CVV is its small tomographic aperture and the coaxial arrangement between the illumination and imaging directions. The system consists of a multi-camera arrangement subtending a solid angle of only a few degrees, with a long focal depth. Contrary to established PIV practice, laser illumination is provided along the same direction as that of the camera views, reducing the optical access requirements to a single viewing direction. The laser light is expanded to illuminate the full field of view of the cameras. Such illumination and imaging conditions along a deep measurement volume dictate the use of tracer particles with a large scattering area. In the present work, helium-filled soap bubbles are used. The fundamental principles of the CVV in terms of dynamic velocity and spatial range are discussed. The maximum particle image density is shown to limit the tracer seeding concentration and instantaneous spatial resolution. Time-averaged flow fields can be obtained at high spatial resolution by ensemble averaging. The use of the CVV for time-averaged measurements is demonstrated in two wind tunnel experiments. After comparing the CVV measurements with the potential flow in front of a sphere, the near-surface flow around a complex wind tunnel model of a cyclist is measured. The measurements yield the volumetric time-averaged velocity and vorticity fields. The measured streamlines in proximity of the surface give an indication of the skin-friction line pattern, which is of use in the interpretation of the surface flow topology.

  8. Quantifying nonergodicity in nonautonomous dissipative dynamical systems: An application to climate change

    NASA Astrophysics Data System (ADS)

    Drótos, Gábor; Bódai, Tamás; Tél, Tamás

    2016-08-01

    In nonautonomous dynamical systems, like in climate dynamics, an ensemble of trajectories initiated in the remote past defines a unique probability distribution, the natural measure of a snapshot attractor, for any instant of time, but this distribution typically changes in time. In cases with an aperiodic driving, temporal averages taken along a single trajectory would differ from the corresponding ensemble averages even in the infinite-time limit: ergodicity does not hold. It is worth considering this difference, which we call the nonergodic mismatch, by taking time windows of finite length for temporal averaging. We point out that the probability distribution of the nonergodic mismatch is qualitatively different in ergodic and nonergodic cases: its average is zero and typically nonzero, respectively. A main conclusion is that the difference of the average from zero, which we call the bias, is a useful measure of nonergodicity, for any window length. In contrast, the standard deviation of the nonergodic mismatch, which characterizes the spread between different realizations, exhibits a power-law decrease with increasing window length in both ergodic and nonergodic cases, and this implies that temporal and ensemble averages differ in dynamical systems with finite window lengths. It is the average modulus of the nonergodic mismatch, which we call the ergodicity deficit, that represents the expected deviation from fulfilling the equality of temporal and ensemble averages. As an important finding, we demonstrate that the ergodicity deficit cannot be reduced arbitrarily in nonergodic systems. We illustrate via a conceptual climate model that the nonergodic framework may be useful in Earth system dynamics, within which we propose the measure of nonergodicity, i.e., the bias, as an order-parameter-like quantifier of climate change.
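The bias and ergodicity-deficit diagnostics can be illustrated on a toy nonautonomous ensemble whose mean drifts in time (a crude stand-in for a changing climate; the drift and noise model are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_real, n_time = 2000, 500
drift = 0.002 * np.arange(n_time)              # slow trend: the aperiodic driving
traj = drift + rng.standard_normal((n_real, n_time))

ens_avg = traj.mean(axis=0)     # ensemble average at each instant
time_avg = traj.mean(axis=1)    # finite-window temporal average per realization
# Nonergodic mismatch: temporal average vs. ensemble average at the window's end
mismatch = time_avg - ens_avg[-1]
bias = mismatch.mean()            # clearly nonzero: the hallmark of nonergodicity
deficit = np.abs(mismatch).mean() # ergodicity deficit; bounded below by |bias|
assert bias < -0.3
assert deficit >= abs(bias)
```

For an ergodic (trend-free) system the bias would fluctuate around zero while the deficit would shrink with window length; here the deficit cannot fall below the drift-induced bias, matching the abstract's main point.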

  9. Ensemble representations: effects of set size and item heterogeneity on average size perception.

    PubMed

    Marchant, Alexander P; Simons, Daniel J; de Fockert, Jan W

    2013-02-01

    Observers can accurately perceive and evaluate the statistical properties of a set of objects, forming what is now known as an ensemble representation. The accuracy and speed with which people can judge the mean size of a set of objects have led to the proposal that ensemble representations of average size can be computed in parallel when attention is distributed across the display. Consistent with this idea, judgments of mean size show little or no decrement in accuracy when the number of objects in the set increases. However, the lack of a set size effect might result from the regularity of the item sizes used in previous studies. Here, we replicate these previous findings, but show that judgments of mean size become less accurate when set size increases and the heterogeneity of the item sizes increases. This pattern can be explained by assuming that average size judgments are computed using a limited capacity sampling strategy, and it does not necessitate an ensemble representation computed in parallel across all items in a display. Copyright © 2012 Elsevier B.V. All rights reserved.
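The limited-capacity sampling account can be simulated directly: an observer who averages only a handful of randomly attended items makes errors that grow with item heterogeneity. A sketch with illustrative parameters (not a fitted model of the experiments):

```python
import numpy as np

rng = np.random.default_rng(5)

def mean_size_error(item_sd, set_size, sample_k=4, trials=20000):
    """Limited-capacity observer: estimate the mean size of a display from a
    random subsample of sample_k items; return the mean absolute error."""
    sizes = rng.normal(10.0, item_sd, (trials, set_size))
    idx = rng.integers(0, set_size, (trials, sample_k))   # attended items
    est = np.take_along_axis(sizes, idx, axis=1).mean(axis=1)
    return np.abs(est - sizes.mean(axis=1)).mean()

# With homogeneous items the subsample is nearly unbiased and precise;
# heterogeneous items make the same small sample much less reliable.
low = mean_size_error(item_sd=0.5, set_size=12)
high = mean_size_error(item_sd=2.0, set_size=12)
assert high > low
```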

  10. Inertial measurement unit pre-processors and post-flight STS-1 comparisons

    NASA Technical Reports Server (NTRS)

    Findlay, J. T.; Mcconnell, J. G.

    1981-01-01

    The flight results show that the relative tri-redundant Inertial Measurement Unit (IMU) performance throughout the entire entry flight was within the expected accuracy. Comparisons are presented which show differences in the accumulated sensed velocity changes as measured by the tri-redundant IMUs (in Mean Equator and Equinox of 1950.0), differences in the equivalent inertial Euler angles as measured with respect to the M50 system, and finally, preliminary instrument calibrations determined relative to the ensemble average measurement set. Also, differences in the derived body axes rates and accelerations are presented. Because of the excellent performance of the IMUs during the STS-1 entry, the selection as to which particular IMU would best serve as the dynamic data source for entry reconstruction is arbitrary.

  11. Field Measurements to Characterize Turbulent Inflow for Marine Hydrokinetic Devices - Marrowstone Island, WA

    NASA Astrophysics Data System (ADS)

    Richmond, M. C.; Thomson, J. M.; Durgesh, V.; Polagye, B. L.

    2011-12-01

    Field measurements are essential for developing an improved understanding of turbulent inflow conditions that affect the design and operation of marine and hydrokinetic (MHK) devices. The Marrowstone Island site in Puget Sound, Washington State is a potential location for installing MHK devices, as it experiences strong tides and associated currents. Here, field measurements from Nodule Point on the eastern side of Marrowstone Island are used to characterize the turbulence in terms of velocity variance as a function of length and time scales. The field measurements were performed using Acoustic Doppler Velocimetry (ADV) and Acoustic Doppler Current Profiler (ADCP) instruments. Both were deployed on a bottom-mounted tripod at the site by the Applied Physics Lab at the University of Washington (APL-UW). The ADV acquired single point, temporally resolved velocity data from 17-21 Feb 2011, at a height of 4.6 m above the seabed at a sampling frequency of 32 Hz. The ADCP measured the velocity profile over the water column from a height of 2.6 m above the seabed up to the sea-surface in 36 bins, with each bin 0.5 m in size. The ADCP acquired data from 11-27 Feb 2011 at a sampling frequency of 2 Hz. Analysis of the ADV measurements shows distinct dynamic regions by scale: anisotropic eddies at large scales, an isotropic turbulent cascade (-5/3 slope in frequency spectra) at mesoscales, and contamination by Doppler noise at small scales. While Doppler noise is an order of magnitude greater for the ADCP measurements, the turbulence bulk statistics are consistent between the two instruments. There are significant variations in turbulence statistics with the stage of the tidal currents (i.e., from slack to non-slack tidal conditions); however, an average turbulence intensity of 10% is a robust, canonical value for this site. 
The ADCP velocity profiles are useful in quantifying the variability in velocity along the water column, and the ensemble averaged velocity profiles may be described by a power law, commonly used to characterize boundary layers.
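Fitting the power-law profile mentioned above is a one-line least-squares problem in log-log coordinates; a sketch with a synthetic 1/7th-law profile (the exponent, bin heights, and reference values are illustrative, not the measured ones):

```python
import numpy as np

def fit_power_law(z, u):
    """Fit u(z) = u1 * z**b by least squares in log-log space.
    Returns (u1, b), where b plays the role of 1/a in u ~ (z/z0)**(1/a)."""
    b, log_u1 = np.polyfit(np.log(z), np.log(u), 1)
    return np.exp(log_u1), b

z = np.linspace(2.6, 18.0, 32)            # ADCP-like bin heights above the seabed, m
u = 1.2 * (z / 10.0) ** (1.0 / 7.0)       # synthetic 1/7th power-law profile
u1, exponent = fit_power_law(z, u)
assert abs(exponent - 1.0 / 7.0) < 1e-10  # the fit recovers the exponent
```

In practice the fit would be applied to the ensemble-averaged profile for a given tidal stage, with near-surface bins excluded where wave contamination dominates.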

  12. Not a Copernican observer: biased peculiar velocity statistics in the local Universe

    NASA Astrophysics Data System (ADS)

    Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej

    2017-05-01

    We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ˜160 h-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.

  13. Experimental validation benchmark data for CFD of transient convection from forced to natural with flow reversal on a vertical flat plate

    DOE PAGES

    Lance, Blake W.; Smith, Barton L.

    2016-06-23

    Transient convection has been investigated experimentally for the purpose of providing Computational Fluid Dynamics (CFD) validation benchmark data. A specialized facility for validation benchmark experiments called the Rotatable Buoyancy Tunnel was used to acquire thermal and velocity measurements of flow over a smooth, vertical heated plate. The initial condition was forced convection downward with subsequent transition to mixed convection, ending with natural convection upward after a flow reversal. Data acquisition through the transient was repeated for ensemble-averaged results. With simple flow geometry, validation data were acquired at the benchmark level. All boundary conditions (BCs) were measured and their uncertainties quantified. Temperature profiles on all four walls and the inlet were measured, as well as the as-built test-section geometry. Inlet velocity profiles and turbulence levels were quantified using Particle Image Velocimetry. System Response Quantities (SRQs) were measured for comparison with CFD outputs and include velocity profiles, wall heat flux, and wall shear stress. Extra effort was invested in documenting and preserving the validation data. Details about the experimental facility, instrumentation, experimental procedure, materials, BCs, and SRQs are made available through this paper; the latter two are available for download, and the other details are included in this work.

  14. Application of Generalized Feynman-Hellmann Theorem in Quantization of LC Circuit in Thermo Bath

    NASA Astrophysics Data System (ADS)

    Fan, Hong-Yi; Tang, Xu-Bing

    For the quantized LC electric circuit, when taking the Joule thermal effect into account, we argue that physical observables should be evaluated as ensemble averages. We then use the generalized Feynman-Hellmann theorem for ensemble averages to calculate them, which proves convenient. The fluctuations of observables in various LC electric circuits in the presence of a thermal bath are shown to grow with temperature.

  15. Calculating ensemble averaged descriptions of protein rigidity without sampling.

    PubMed

    González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J

    2012-01-01

    Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, all possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative to understand the mechanical role that chemical interactions play in maintaining protein stability.

  16. Boltzmann equations for a binary one-dimensional ideal gas.

    PubMed

    Boozer, A D

    2011-09-01

    We consider a time-reversal invariant dynamical model of a binary ideal gas of N molecules in one spatial dimension. By making time-asymmetric assumptions about the behavior of the gas, we derive Boltzmann and anti-Boltzmann equations that describe the evolution of the single-molecule velocity distribution functions for an ensemble of such systems. We show that for a special class of initial states of the ensemble one can obtain an exact expression for the N-molecule velocity distribution function, and we use this expression to rigorously prove that the time-asymmetric assumptions needed to derive the Boltzmann and anti-Boltzmann equations hold in the limit of large N. Our results clarify some subtle issues regarding the origin of the time asymmetry of Boltzmann's H theorem.

  17. Diurnal cross-shore thermal exchange on a tropical forereef

    NASA Astrophysics Data System (ADS)

    Molina, L.; Pawlak, G.; Wells, J. R.; Monismith, S. G.; Merrifield, M. A.

    2014-09-01

    Observations of the velocity structure at the Kilo Nalu Observatory on the south shore of Oahu, Hawaii show that thermally driven baroclinic exchange is a dominant mechanism for cross-shore transport for this tropical forereef environment. Estimates of the exchange and net volume fluxes are comparable and show that the average residence time for the zone shoreward of the 12 m isobath is generally much less than 1 day. Although cross-shore wind stress influences the diurnal cross-shore exchange, surface heat flux is identified as the primary forcing mechanism from the phase relationships and from analysis of momentum and buoyancy balances for the record-averaged diurnal structure. Dynamic flow regimes are characterized based on a two-dimensional theoretical framework and the observations of the thermal structure at Kilo Nalu are shown to be in the unsteady temperature regime. Diurnal phasing and the cross-shore momentum balance suggest that turbulent stress divergence is an important driver of the baroclinic exchange. While the thermally driven exchange has a robust diurnal profile in the long term, there is high temporal variability on shorter time scales. Ensemble-averaged diurnal profiles indicate that the exchange is strongly modulated by surface heat flux, wind speed/direction, and alongshore velocity direction. The latter highlights the role of alongshore variability in the thermally driven exchange. Analysis of the thermal balance in the nearshore region indicates that the cross-shore exchange accounts for roughly 38% of the advective heat transport on a daily basis. This article was corrected on 10 OCT 2014. See the end of the full text for details.

  18. Plasticity of the Binding Site of Renin: Optimized Selection of Protein Structures for Ensemble Docking.

    PubMed

    Strecker, Claas; Meyer, Bernd

    2018-05-29

    Protein flexibility poses a major challenge to docking of potential ligands in that the binding site can adopt different shapes. Docking algorithms usually keep the protein rigid and only allow the ligand to be treated as flexible. However, a wrong assessment of the shape of the binding pocket can prevent a ligand from adopting a correct pose. Ensemble docking is a simple yet promising method to solve this problem: ligands are docked into multiple structures, and the results are subsequently merged. Selection of protein structures is a significant factor for this approach. In this work we perform a comprehensive and comparative study evaluating the impact of structure selection on ensemble docking. We perform ensemble docking with several crystal structures and with structures derived from molecular dynamics simulations of renin, an attractive target for antihypertensive drugs. Here, 500 ns of MD simulations revealed binding site shapes not found in any available crystal structure. We evaluate the importance of structure selection for ensemble docking by comparing binding pose prediction, the ability to rank actives above nonactives (screening utility), and scoring accuracy. As a result, for ensemble definition k-means clustering appears to be better suited than hierarchical clustering with average linkage. The best performing ensemble consists of four crystal structures and is able to reproduce the native ligand poses better than any individual crystal structure. Moreover, this ensemble outperforms 88% of all individual crystal structures in terms of screening utility as well as scoring accuracy. Similarly, ensembles of MD-derived structures perform on average better than 75% of the individual crystal structures in terms of scoring accuracy at all inspected ensemble sizes.
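Structure selection by k-means, the clustering favored here over hierarchical average linkage, can be sketched with a plain NumPy implementation on synthetic "binding-site coordinates"; real use would cluster MD frames on pocket RMSD-style features and dock into the frame nearest each cluster center:

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain k-means on frame features X: (n_frames, n_features)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

# Four synthetic pocket conformations, 50 frames each, in a 5-D feature space
rng = np.random.default_rng(6)
frames = np.vstack([rng.normal(c, 0.3, (50, 5)) for c in (0.0, 2.0, 4.0, 6.0)])
labels, centers = kmeans(frames, k=4)
# The docking ensemble: one representative frame per cluster
representatives = [int(np.argmin(((frames - c) ** 2).sum(axis=1))) for c in centers]
assert np.isfinite(centers).all() and len(representatives) == 4
```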

  19. Toward canonical ensemble distribution from self-guided Langevin dynamics simulation

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Brooks, Bernard R.

    2011-04-01

    This work derives a quantitative description of the conformational distribution in self-guided Langevin dynamics (SGLD) simulations. SGLD simulations employ guiding forces calculated from local average momentums to enhance low-frequency motion. This enhancement in low-frequency motion dramatically accelerates conformational search efficiency, but also induces certain perturbations in conformational distribution. Through the local averaging, we separate properties of molecular systems into low-frequency and high-frequency portions. The guiding force effect on the conformational distribution is quantitatively described using these low-frequency and high-frequency properties. This quantitative relation provides a way to convert between a canonical ensemble and a self-guided ensemble. Using example systems, we demonstrated how to utilize the relation to obtain canonical ensemble properties and conformational distributions from SGLD simulations. This development makes SGLD not only an efficient approach for conformational searching, but also an accurate means for conformational sampling.
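The separation into low- and high-frequency portions via local averaging can be illustrated with a simple exponential moving average standing in for SGLD's local momentum average (the signal and time scales are illustrative, and no actual Langevin dynamics is propagated):

```python
import numpy as np

def local_average(p, t_avg, dt):
    """Exponential moving average over a time span t_avg: a simple stand-in
    for the local average used to build SGLD guiding forces."""
    a = dt / t_avg
    out = np.empty_like(p)
    out[0] = p[0]
    for i in range(1, len(p)):
        out[i] = (1 - a) * out[i - 1] + a * p[i]
    return out

rng = np.random.default_rng(7)
dt = 0.001
t = np.arange(0.0, 10.0, dt)
# Slow conformational-like mode plus fast thermal-like noise
p = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.standard_normal(t.size)
slow = local_average(p, t_avg=0.5, dt=dt)   # the part guiding forces would enhance
fast = p - slow                             # the high-frequency remainder
assert slow.var() < p.var()                 # averaging strips the fast portion
```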

  20. Causal network in a deafferented non-human primate brain.

    PubMed

    Balasubramanian, Karthikeyan; Takahashi, Kazutaka; Hatsopoulos, Nicholas G

    2015-01-01

    De-afferented/efferented neural ensembles can undergo causal changes when interfaced to neuroprosthetic devices. These changes occur via recruitment or isolation of neurons, alterations in functional connectivity within the ensemble, and/or changes in the role of neurons, i.e., excitatory/inhibitory. In this work, the emergence of a causal network and changes in its dynamics are demonstrated for a deafferented brain region exposed to BMI (brain-machine interface) learning. The BMI controlled a robot for reach-and-grasp behavior. The motor cortical regions used for the BMI were deafferented due to chronic amputation, and ensembles of neurons were decoded for velocity control of the multi-DOF robot. A Granger causality technique based on a generalized linear model framework (GLM-GC) was used to estimate the ensemble connectivity. Model selection was based on the Akaike Information Criterion (AIC).
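    The core model-comparison step (a full model with the putative causal input versus a restricted model without it, scored by AIC) can be sketched in a Gaussian simplification. The paper's actual framework uses GLMs suited to spiking data, so the linear models, lag order, and names below are illustrative assumptions only.

    ```python
    import numpy as np

    def aic_linear(y, X):
        """AIC of an ordinary least-squares fit (Gaussian likelihood, up to a
        constant): n * log(mean squared residual) + 2 * (number of parameters)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return len(y) * np.log(np.mean(resid ** 2)) + 2 * X.shape[1]

    def granger_aic(x, y, p=2):
        """Compare AIC of y's autoregression with and without p lags of x.
        A lower AIC for the full model suggests x Granger-causes y."""
        lags = lambda s: np.column_stack(
            [s[p - i - 1:len(s) - i - 1] for i in range(p)])
        yt = y[p:]
        restricted = np.column_stack([np.ones_like(yt), lags(y)])
        full = np.column_stack([restricted, lags(x)])
        return aic_linear(yt, restricted), aic_linear(yt, full)

    # Toy pair: y is driven by lagged x, so the full model should win.
    rng = np.random.default_rng(3)
    x = rng.normal(size=500)
    y = np.concatenate([[0.0], 0.8 * x[:-1]]) + 0.1 * rng.normal(size=500)
    aic_r, aic_f = granger_aic(x, y)
    ```

    In an ensemble setting this comparison would be repeated for every ordered pair of neurons to build the causal network.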

  1. TESTING THE ASTEROSEISMIC SCALING RELATIONS FOR RED GIANTS WITH ECLIPSING BINARIES OBSERVED BY KEPLER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaulme, P.; McKeever, J.; Jackiewicz, J.

    2016-12-01

    Given the potential of ensemble asteroseismology for understanding fundamental properties of large numbers of stars, it is critical to determine the accuracy of the scaling relations on which these measurements are based. From several powerful validation techniques, all indications so far show that stellar radius estimates from the asteroseismic scaling relations are accurate to within a few percent. Eclipsing binary systems hosting at least one star with detectable solar-like oscillations constitute the ideal test objects for validating asteroseismic radius and mass inferences. By combining radial velocity (RV) measurements and photometric time series of eclipses, it is possible to determine the masses and radii of each component of a double-lined spectroscopic binary. We report the results of a four-year RV survey performed with the échelle spectrometer of the Astrophysical Research Consortium’s 3.5 m telescope and the APOGEE spectrometer at Apache Point Observatory. We compare the masses and radii of 10 red giants (RGs) obtained by combining radial velocities and eclipse photometry with the estimates from the asteroseismic scaling relations. We find that the asteroseismic scaling relations overestimate RG radii by about 5% on average and masses by about 15% for stars at various stages of RG evolution. Systematic overestimation of mass leads to underestimation of stellar age, which can have important implications for ensemble asteroseismology used for Galactic studies. As part of a second objective, where asteroseismology is used for understanding binary systems, we confirm that oscillations of RGs in close binaries can be suppressed enough to be undetectable, a hypothesis that was proposed in a previous work.
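    The scaling relations being tested here have a standard form that is easy to state in code. This is a sketch: the solar reference values below are common literature choices, not values taken from this record.

    ```python
    def scaling_mass_radius(nu_max, delta_nu, teff,
                            nu_max_sun=3090.0, dnu_sun=135.1, teff_sun=5777.0):
        """Standard asteroseismic scaling relations: (M/Msun, R/Rsun) from the
        frequency of maximum power nu_max (muHz), the large frequency
        separation delta_nu (muHz), and the effective temperature Teff (K)."""
        r = (nu_max / nu_max_sun) * (delta_nu / dnu_sun) ** -2 \
            * (teff / teff_sun) ** 0.5
        m = (nu_max / nu_max_sun) ** 3 * (delta_nu / dnu_sun) ** -4 \
            * (teff / teff_sun) ** 1.5
        return m, r

    # A typical red giant: nu_max ~ 30 muHz, delta_nu ~ 4 muHz, Teff ~ 4800 K.
    m, r = scaling_mass_radius(30.0, 4.0, 4800.0)
    ```

    By construction the relations return (1, 1) for solar inputs; a 5% radius overestimate of the kind reported above means the seismic r would sit about 5% above the dynamical (eclipse plus RV) value.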

  2. On the v-representability of ensemble densities of electron systems

    NASA Astrophysics Data System (ADS)

    Gonis, A.; Däne, M.

    2018-05-01

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines the external potential uniquely (and not just modulo a constant) acting on a system described by this thermodynamic potential or free energy. The paper describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. The main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.

  3. On the v-representability of ensemble densities of electron systems

    DOE PAGES

    Gonis, A.; Dane, M.

    2017-12-30

    Analogously to the case at zero temperature, where the density of the ground state of an interacting many-particle system determines uniquely (within an arbitrary additive constant) the external potential acting on the system, the thermal average of the density over an ensemble defined by the Boltzmann distribution at the minimum of the thermodynamic potential, or the free energy, determines the external potential uniquely (and not just modulo a constant) acting on a system described by this thermodynamic potential or free energy. The study describes a formal procedure that generates the domain of a constrained search over general ensembles (at zero or elevated temperatures) that lead to a given density, including as a special case a density thermally averaged at a given temperature, and in the case of a v-representable density determines the external potential leading to the ensemble density. As an immediate consequence of the general formalism, the concept of v-representability is extended beyond the hitherto discussed case of ground state densities to encompass excited states as well. Specific application to thermally averaged densities solves the v-representability problem in connection with the Mermin functional in a manner analogous to that in which this problem was recently settled with respect to the Hohenberg and Kohn functional. Finally, the main formalism is illustrated with numerical results for ensembles of one-dimensional, non-interacting systems of particles under a harmonic potential.

  4. Post-processing method for wind speed ensemble forecast using wind speed and direction

    NASA Astrophysics Data System (ADS)

    Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin

    2017-04-01

    Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85 % of the sites the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor compared to only using wind speed. On average the improvements were about 5 %, but mainly for moderate to strong wind situations. For weak wind speeds adding wind direction had a more or less neutral impact.

  5. Using Bayesian Model Averaging (BMA) to calibrate probabilistic surface temperature forecasts over Iran

    NASA Astrophysics Data System (ADS)

    Soltanzadeh, I.; Azadi, M.; Vakili, G. A.

    2011-07-01

    Using Bayesian Model Averaging (BMA), an attempt was made to obtain calibrated probabilistic numerical forecasts of 2-m temperature over Iran. The ensemble employs three limited area models (WRF, MM5 and HRM), with WRF used with five different configurations. Initial and boundary conditions for MM5 and WRF are obtained from the National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS), and for HRM the initial and boundary conditions come from the analysis of the Global Model Europe (GME) of the German Weather Service. The resulting ensemble of seven members was run for a period of 6 months (from December 2008 to May 2009) over Iran. The 48-h raw ensemble outputs were calibrated using the BMA technique for 120 days using a 40-day training sample of forecasts and the corresponding verification data. The calibrated probabilistic forecasts were assessed using rank histograms and attributes diagrams. Results showed that application of BMA improved the reliability of the raw ensemble. Using the weighted ensemble mean forecast as a deterministic forecast, it was found that the deterministic-style BMA forecasts usually performed better than the best member's deterministic forecast.
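    The deterministic-style BMA forecast mentioned above reduces to a weighted mean of bias-corrected member forecasts. The sketch below assumes a simple linear bias correction per member; the weights, coefficients, and member values are invented for illustration, and a full BMA implementation would also carry a predictive PDF for each member.

    ```python
    import numpy as np

    def bma_deterministic_forecast(members, weights, a, b):
        """Deterministic-style BMA forecast: the BMA-weight-weighted mean of
        linearly bias-corrected member forecasts a + b * f_k."""
        members, weights = np.asarray(members, float), np.asarray(weights, float)
        corrected = a + b * members
        return float(np.sum(weights * corrected) / np.sum(weights))

    # Seven hypothetical ensemble members (deg C); one shared bias correction
    # for brevity (BMA normally fits a_k, b_k per member on the training set).
    f = [12.1, 13.0, 11.5, 12.8, 12.2, 13.4, 11.9]
    w = [0.25, 0.20, 0.10, 0.15, 0.10, 0.10, 0.10]
    t2m = bma_deterministic_forecast(f, w, a=0.3, b=0.98)
    ```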

  6. Langevin equation with fluctuating diffusivity: A two-state model

    NASA Astrophysics Data System (ADS)

    Miyaguchi, Tomoshige; Akimoto, Takuma; Yamamoto, Eiji

    2016-07-01

    Recently, anomalous subdiffusion, aging, and scatter of the diffusion coefficient have been reported in many single-particle-tracking experiments, though the origins of these behaviors are still elusive. Here, as a model to describe such phenomena, we investigate a Langevin equation with diffusivity fluctuating between a fast and a slow state. Namely, the diffusivity follows a dichotomous stochastic process. We assume that the sojourn time distributions of these two states are given by power laws. It is shown that, for a nonequilibrium ensemble, the ensemble-averaged mean-square displacement (MSD) shows transient subdiffusion. In contrast, the time-averaged MSD shows normal diffusion, but an effective diffusion coefficient transiently shows aging behavior. The propagator is non-Gaussian at short times and converges to a Gaussian distribution in the long-time limit; this convergence to a Gaussian is extremely slow for some parameter values. For equilibrium ensembles, both ensemble-averaged and time-averaged MSDs show only normal diffusion, and thus we cannot detect any traces of the fluctuating diffusivity with these MSDs. Therefore, as an alternative approach to characterizing the fluctuating diffusivity, the relative standard deviation (RSD) of the time-averaged MSD is utilized, and it is shown that the RSD exhibits slow relaxation as a signature of the long-time correlation in the fluctuating diffusivity. Furthermore, it is shown that the RSD is related to a non-Gaussian parameter of the propagator. To obtain these theoretical results, we develop a two-state renewal theory as an analytical tool.
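    The two-state model, and the time-averaged MSD used to probe it, can be sketched as follows. Parameter values and the Pareto sojourn-time choice are illustrative assumptions; the paper treats power-law sojourn densities analytically via renewal theory rather than by simulation.

    ```python
    import numpy as np

    def two_state_trajectory(n_steps, dt=1e-2, d_fast=1.0, d_slow=0.01,
                             alpha=0.7, seed=0):
        """Overdamped Langevin dynamics with the diffusivity switching between
        a fast and a slow state; sojourn times are drawn from a shifted Pareto
        (power-law) density with exponent alpha."""
        rng = np.random.default_rng(seed)
        x = np.zeros(n_steps)
        d_now, t_left = d_fast, rng.pareto(alpha) + 1.0
        for i in range(1, n_steps):
            x[i] = x[i - 1] + np.sqrt(2.0 * d_now * dt) * rng.normal()
            t_left -= dt
            if t_left <= 0.0:                      # dichotomous switch
                d_now = d_slow if d_now == d_fast else d_fast
                t_left = rng.pareto(alpha) + 1.0
        return x

    def time_averaged_msd(x, lag):
        """Time-averaged MSD at a given lag (in steps) over one trajectory."""
        disp = x[lag:] - x[:-lag]
        return float(np.mean(disp ** 2))

    x = two_state_trajectory(20000)
    msd10 = time_averaged_msd(x, 10)
    ```

    Repeating this over many trajectories would give the scatter of the time-averaged MSD, and hence the RSD discussed in the abstract.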

  7. Statistical properties of a cloud ensemble - A numerical study

    NASA Technical Reports Server (NTRS)

    Tao, Wei-Kuo; Simpson, Joanne; Soong, Su-Tzai

    1987-01-01

    The statistical properties of cloud ensembles under a specified large-scale environment, such as mass flux by cloud drafts and vertical velocity as well as the condensation and evaporation associated with these cloud drafts, are examined using a three-dimensional numerical cloud ensemble model described by Soong and Ogura (1980) and Tao and Soong (1986). The cloud drafts are classified as active and inactive, and separate contributions to cloud statistics in areas of different cloud activity are then evaluated. The model results compare well with results obtained from aircraft measurements of a well-organized ITCZ rainband that occurred on August 12, 1974, during the Global Atmospheric Research Program's Atlantic Tropical Experiment.

  8. Cloudy Windows: What GCM Ensembles, Reanalyses and Observations Tell Us About Uncertainty in Greenland's Future Climate and Surface Melting

    NASA Astrophysics Data System (ADS)

    Reusch, D. B.

    2016-12-01

    Any analysis that wants to use a GCM-based scenario of future climate benefits from knowing how much uncertainty the GCM's inherent variability adds to the development of climate change predictions. This is especially relevant in the polar regions due to the potential of global impacts (e.g., sea level rise) from local (ice sheet) climate changes such as more frequent/intense surface melting. High-resolution, regional-scale models using GCMs for boundary/initial conditions in future scenarios inherit a measure of GCM-derived externally-driven uncertainty. We investigate these uncertainties for the Greenland ice sheet using the 30-member CESM1.0-CAM5-BGC Large Ensemble (CESMLE) for recent (1981-2000) and future (2081-2100, RCP 8.5) decades. Recent simulations are skill-tested against the ERA-Interim reanalysis and AWS observations with results informing future scenarios. We focus on key variables influencing surface melting through decadal climatologies, nonlinear analysis of variability with self-organizing maps (SOMs), regional-scale modeling (Polar WRF), and simple melt models. Relative to the ensemble average, spatially averaged climatological July temperature anomalies over a Greenland ice-sheet/ocean domain are mostly between +/- 0.2 °C. The spatial average hides larger local anomalies of up to +/- 2 °C. The ensemble average itself is 2 °C cooler than ERA-Interim. SOMs extend our diagnostics by providing a concise, objective summary of model variability as a set of generalized patterns. For CESMLE, the SOM patterns summarize the variability of multiple realizations of climate. Changes in pattern frequency by ensemble member show the influence of initial conditions. For example, basic statistical analysis of pattern frequency yields interquartile ranges of 2-4% for individual patterns across the ensemble.
In climate terms, this tells us about climate state variability through the range of the ensemble, a potentially significant source of melt-prediction uncertainty. SOMs can also capture the different trajectories of climate due to intramodel variability over time. Polar WRF provides higher resolution regional modeling with improved, polar-centric model physics. Simple melt models allow us to characterize impacts of the upstream uncertainties on estimates of surface melting.

  9. Direct shear mapping - a new weak lensing tool

    NASA Astrophysics Data System (ADS)

    de Burgh-Day, C. O.; Taylor, E. N.; Webster, R. L.; Hopkins, A. M.

    2015-08-01

    We have developed a new technique called direct shear mapping (DSM) to measure gravitational lensing shear directly from observations of a single background source. The technique assumes the velocity map of an unlensed, stably rotating galaxy will be rotationally symmetric. Lensing distorts the velocity map, making it asymmetric. The degree of lensing can be inferred by determining the transformation required to restore axisymmetry. This technique is in contrast to traditional weak lensing methods, which require averaging an ensemble of background galaxy ellipticity measurements to obtain a single shear measurement. We have tested the efficacy of our fitting algorithm with a suite of systematic tests on simulated data. We demonstrate that we are in principle able to measure shears as small as 0.01. In practice, we have fitted for the shear in very low redshift (and hence unlensed) velocity maps, and have obtained a null result with an error of ±0.01. This high sensitivity results from analysing spatially resolved spectroscopic images (i.e. 3D data cubes), including not just shape information (as in traditional weak lensing measurements) but velocity information as well. Spirals and rotating ellipticals are ideal targets for this new technique. Data from any large Integral Field Unit (IFU) or radio telescope is suitable, or indeed any instrument with spatially resolved spectroscopy, such as the Sydney-Australian-Astronomical Observatory Multi-Object Integral Field Spectrograph (SAMI), the Atacama Large Millimeter/submillimeter Array (ALMA), the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) and the Square Kilometer Array (SKA).

  10. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ˜ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
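    The simple averaging branch (runs weighted by their aggregate model-data misfit score) can be sketched as follows. The exponential weighting kernel and all numbers are illustrative assumptions, not the paper's exact scheme.

    ```python
    import numpy as np

    def score_weighted_average(runs, scores):
        """Ensemble average of model-run time series weighted by an aggregate
        model-data misfit score (lower misfit -> higher weight)."""
        runs, scores = np.asarray(runs, float), np.asarray(scores, float)
        w = np.exp(-scores / (2.0 * scores.mean()))   # one simple weighting choice
        w /= w.sum()
        return (w[:, None] * runs).sum(axis=0), w

    # Toy "equivalent sea-level rise" curves (metres) from 4 hypothetical runs.
    runs = [[0.0, 1.0, 3.0], [0.0, 1.2, 3.4], [0.0, 0.8, 2.6], [0.0, 2.0, 5.0]]
    scores = [1.0, 1.5, 1.2, 8.0]          # run 4 fits the data poorly
    avg, w = score_weighted_average(runs, scores)
    ```

    The poorly fitting run receives the smallest weight, so the averaged envelope is pulled toward the well-calibrated runs, which is the behavior the simple method relies on when the parameter sampling is full-factorial.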

  11. Aerodynamic Surface Stress Intermittency and Conditionally Averaged Turbulence Statistics

    NASA Astrophysics Data System (ADS)

    Anderson, W.

    2015-12-01

    Aeolian erosion of dry, flat, semi-arid landscapes is induced (and sustained) by kinetic energy fluxes in the aloft atmospheric surface layer. During saltation -- the mechanism responsible for surface fluxes of dust and sediment -- briefly suspended sediment grains undergo a ballistic trajectory before impacting and `splashing' smaller-diameter (dust) particles vertically. Conceptual models typically indicate that sediment flux, q (via saltation or drift), scales with imposed aerodynamic (basal) stress raised to some exponent, n, where n > 1. Since basal stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that q ~ u^2n (where u is some relevant component of the above flow field, u(x,t)). Thus, even small (turbulent) deviations of u from its time-averaged value may play an enormously important role in aeolian activity on flat, dry landscapes. The importance of this argument is further augmented given that turbulence in the atmospheric surface layer exhibits maximum Reynolds stresses in the fluid immediately above the landscape. In order to illustrate the importance of surface stress intermittency, we have used conditional averaging predicated on aerodynamic surface stress during large-eddy simulation of atmospheric boundary layer flow over a flat landscape with momentum roughness length appropriate for the Llano Estacado in west Texas (a flat agricultural region that is notorious for dust transport). By using data from a field campaign to measure diurnal variability of aeolian activity and prevailing winds on the Llano Estacado, we have retrieved the threshold friction velocity (which can be used to compute threshold surface stress under the geostrophic balance with the Monin-Obukhov similarity theory). This averaging procedure provides an ensemble-mean visualization of flow structures responsible for erosion `events'. 
Preliminary evidence indicates that surface stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. We will characterize geometric attributes of such structures and explore streamwise and vertical vorticity distribution within the conditionally averaged flow field.

  12. Training set extension for SVM ensemble in P300-speller with familiar face paradigm.

    PubMed

    Li, Qi; Shi, Kaiyang; Gao, Ning; Li, Jian; Bai, Ou

    2018-03-27

    P300-spellers are brain-computer interface (BCI)-based character input systems. Support vector machine (SVM) ensembles are trained with large-scale training sets and used as classifiers in these systems. However, the required large-scale training data necessitate a prolonged collection time for each subject, which results in data collected toward the end of the period being contaminated by the subject's fatigue. This study aimed to develop a method for acquiring more training data based on a collected small training set. A new method was developed in which two corresponding training datasets in two sequences are superposed and averaged to extend the training set. The proposed method was tested offline on a P300-speller with the familiar face paradigm. The SVM ensemble with extended training set achieved 85% classification accuracy for the averaged results of four sequences, and 100% for 11 sequences in the P300-speller. In contrast, the conventional SVM ensemble with non-extended training set achieved only 65% accuracy for four sequences, and 92% for 11 sequences. The SVM ensemble with extended training set achieves higher classification accuracies than the conventional SVM ensemble, which verifies that the proposed method effectively improves the classification performance of BCI P300-spellers, thus enhancing their practicality.
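    The superpose-and-average extension can be sketched as follows. Epoch shapes, channel counts, and labels are hypothetical; in the paper the averaged epochs come from corresponding stimuli in two recorded sequences, and the extended set then trains the SVM ensemble.

    ```python
    import numpy as np

    def extend_training_set(seq_a, seq_b, labels):
        """Extend a P300 training set by averaging corresponding epochs from
        two stimulation sequences; averaged epochs keep the original labels."""
        seq_a, seq_b = np.asarray(seq_a), np.asarray(seq_b)
        averaged = 0.5 * (seq_a + seq_b)       # superpose and average
        X = np.concatenate([seq_a, seq_b, averaged], axis=0)
        y = np.concatenate([labels, labels, labels])
        return X, y

    # 10 hypothetical epochs per sequence, each 8 channels x 50 samples.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(10, 8, 50))
    b = rng.normal(size=(10, 8, 50))
    labels = np.array([1, 0, 0, 0, 0, 1, 0, 0, 0, 1])
    X, y = extend_training_set(a, b, labels)
    ```

    Averaging also attenuates uncorrelated noise, which is one plausible reason the extended set helps the classifier.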

  13. Comparison of projection skills of deterministic ensemble methods using pseudo-simulation data generated from multivariate Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Oh, Seok-Geun; Suh, Myoung-Seok

    2017-07-01

    The projection skills of five ensemble methods were analyzed according to simulation skills, training period, and ensemble members, using 198 sets of pseudo-simulation data (PSD) produced by random number generation assuming the simulated temperature of regional climate models. The PSD sets were classified into 18 categories according to the relative magnitude of bias, variance ratio, and correlation coefficient, where each category had 11 sets (including 1 truth set) with 50 samples. The ensemble methods used were as follows: equal weighted averaging without bias correction (EWA_NBC), EWA with bias correction (EWA_WBC), weighted ensemble averaging based on root mean square errors and correlation (WEA_RAC), WEA based on the Taylor score (WEA_Tay), and multivariate linear regression (Mul_Reg). The projection skills of the ensemble methods generally improved as compared with the best member for each category. However, their projection skills are significantly affected by the simulation skills of the ensemble members. The weighted ensemble methods showed better projection skills than non-weighted methods, in particular for the PSD categories having systematic biases and various correlation coefficients. The EWA_NBC showed considerably lower projection skills than the other methods, in particular for the PSD categories with systematic biases. Although Mul_Reg showed relatively good skills, it showed strong sensitivity to the PSD categories, training periods, and number of members. On the other hand, the WEA_Tay and WEA_RAC showed relatively superior skills in both accuracy and reliability for all the sensitivity experiments. This indicates that WEA_Tay and WEA_RAC are applicable even for simulation data with systematic biases, a short training period, and a small number of ensemble members.
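    A WEA_Tay-style combination can be sketched as follows. This uses one common form of the Taylor skill score (with the maximum attainable correlation R0 set to 1) and toy data; the paper's exact scoring formula and weighting may differ.

    ```python
    import numpy as np

    def taylor_score(sim, ref, r0=1.0):
        """One common Taylor skill score: S = 4(1 + R) /
        [(s + 1/s)**2 * (1 + R0)], with s = sigma_sim / sigma_ref."""
        sim, ref = np.asarray(sim, float), np.asarray(ref, float)
        r = np.corrcoef(sim, ref)[0, 1]
        s = sim.std() / ref.std()
        return 4.0 * (1.0 + r) / ((s + 1.0 / s) ** 2 * (1.0 + r0))

    def wea_taylor(members, ref):
        """Weighted ensemble average with Taylor scores (computed against the
        reference over a training period) as weights."""
        w = np.array([taylor_score(m, ref) for m in members])
        w /= w.sum()
        return np.sum(w[:, None] * np.asarray(members, float), axis=0), w

    # Toy members: the same signal plus increasing amounts of noise.
    rng = np.random.default_rng(2)
    ref = np.sin(np.linspace(0.0, 6.0, 50))
    members = [ref + rng.normal(0.0, s, 50) for s in (0.1, 0.3, 0.6)]
    blend, w = wea_taylor(members, ref)
    ```

    A member that reproduces both the correlation and the variance of the reference gets a score near 1, so the least noisy member dominates the blend.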

  14. Self-averaging and weak ergodicity breaking of diffusion in heterogeneous media

    NASA Astrophysics Data System (ADS)

    Russian, Anna; Dentz, Marco; Gouze, Philippe

    2017-08-01

    Diffusion in natural and engineered media is quantified in terms of stochastic models for the heterogeneity-induced fluctuations of particle motion. However, fundamental properties such as ergodicity and self-averaging and their dependence on the disorder distribution are often not known. Here, we investigate these questions for diffusion in quenched disordered media characterized by spatially varying retardation properties, which account for particle retention due to physical or chemical interactions with the medium. We link self-averaging and ergodicity to the disorder sampling efficiency Rn, which quantifies the number of disorder realizations a noise ensemble may sample in a single disorder realization. Diffusion for disorder scenarios characterized by a finite mean transition time is ergodic and self-averaging for any dimension. The strength of the sample to sample fluctuations decreases with increasing spatial dimension. For an infinite mean transition time, particle motion is weakly ergodicity breaking in any dimension because single particles cannot sample the heterogeneity spectrum in finite time. However, even though the noise ensemble is not representative of the single-particle time statistics, subdiffusive motion in q ≥2 dimensions is self-averaging, which means that the noise ensemble in a single realization samples a representative part of the heterogeneity spectrum.

  15. Boundary Layer Control of a Circular Cylinder Using a Synthetic Jet

    DTIC Science & Technology

    2005-06-01

    [Extraction residue from the report's list of figures; only the figure titles are recoverable] Figure 53: Average Velocity at 0.375 Hz; Figure 54: Average Velocity at 0.45 Hz; Figure 55: Average Velocity (title truncated).

  16. Correlation Scales of the Turbulent Cascade at 1 au

    NASA Astrophysics Data System (ADS)

    Smith, Charles W.; Vasquez, Bernard J.; Coburn, Jesse T.; Forman, Miriam A.; Stawarz, Julia E.

    2018-05-01

    We examine correlation functions of the mixed, third-order expressions that, when ensemble-averaged, describe the cascade of energy in the inertial range of magnetohydrodynamic turbulence. Unlike the correlation function of primitive variables such as the magnetic field, solar wind velocity, temperature, and density, the third-order expressions decorrelate at a scale that is approximately 20% of the lag. This suggests the nonlinear dynamics decorrelate in less than one wavelength. Therefore, each scale can behave differently from one wavelength to the next. In the same manner, different scales within the inertial range can behave independently at any given time or location. With such a cascade that can be strongly patchy and highly variable, it is often possible to obtain negative cascade rates for short periods of time, as reported earlier for individual samples of data.

  17. Ensemble Weight Enumerators for Protograph LDPC Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush

    2006-01-01

    Recently, LDPC codes with projected graph, or protograph, structures have been proposed. In this paper, finite-length ensemble weight enumerators for LDPC codes with protograph structures are obtained. Asymptotic results are derived as the block size goes to infinity. In particular, we are interested in obtaining ensemble average weight enumerators for protograph LDPC codes whose minimum distance grows linearly with block size. As with irregular ensembles, the linear minimum distance property is sensitive to the proportion of degree-2 variable nodes. The derived results on ensemble weight enumerators show that the linear minimum distance condition on the degree distribution of unstructured irregular LDPC codes is a sufficient but not a necessary condition for protograph LDPC codes.

  18. Effect of land model ensemble versus coupled model ensemble on the simulation of precipitation climatology and variability

    NASA Astrophysics Data System (ADS)

    Wei, Jiangfeng; Dirmeyer, Paul A.; Yang, Zong-Liang; Chen, Haishan

    2017-10-01

    Through a series of model simulations with an atmospheric general circulation model coupled to three different land surface models, this study investigates the impacts of land model ensembles and coupled model ensemble on precipitation simulation. It is found that coupling an ensemble of land models to an atmospheric model has a very minor impact on the improvement of precipitation climatology and variability, but a simple ensemble average of the precipitation from three individually coupled land-atmosphere models produces better results, especially for precipitation variability. The generally weak impact of land processes on precipitation should be the main reason that the land model ensembles do not improve precipitation simulation. However, if there are big biases in the land surface model or land surface data set, correcting them could improve the simulated climate, especially for well-constrained regional climate simulations.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ortoleva, Peter J.

    Illustrative embodiments of systems and methods for the deductive multiscale simulation of macromolecules are disclosed. In one illustrative embodiment, a deductive multiscale simulation method may include (i) constructing a set of order parameters that model one or more structural characteristics of a macromolecule, (ii) simulating an ensemble of atomistic configurations for the macromolecule using instantaneous values of the set of order parameters, (iii) simulating thermal-average forces and diffusivities for the ensemble of atomistic configurations, and (iv) evolving the set of order parameters via Langevin dynamics using the thermal-average forces and diffusivities.
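    Steps (i)-(iv) culminate in a Langevin update of the order parameters. A minimal sketch of that update follows, with a hypothetical harmonic stand-in for the thermal-average force; in the actual method the force and diffusivity at each step come from the simulated atomistic ensemble.

    ```python
    import numpy as np

    def evolve_order_parameters(phi, thermal_force, diffusivity, dt, kT=1.0,
                                rng=None):
        """One overdamped Langevin step for coarse order parameters phi:
        dphi = (D / kT) * f(phi) * dt + sqrt(2 D dt) * noise, where f is the
        thermal-average force and D the diffusivity."""
        rng = rng or np.random.default_rng()
        drift = diffusivity / kT * thermal_force(phi) * dt
        noise = np.sqrt(2.0 * diffusivity * dt) * rng.normal(size=phi.shape)
        return phi + drift + noise

    # Toy stand-in: a harmonic thermal-average force pulls phi toward zero.
    rng = np.random.default_rng(4)
    phi = np.array([2.0, -1.5])
    for _ in range(2000):
        phi = evolve_order_parameters(phi, lambda p: -p, 0.5, 1e-2, rng=rng)
    ```

    In the full multiscale loop, each Langevin step would be followed by re-simulating the atomistic ensemble at the new order-parameter values.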

  20. How Interplanetary Scintillation Data Can Improve Modeling of Coronal Mass Ejection Propagation

    NASA Astrophysics Data System (ADS)

    Taktakishvili, A.; Mays, M. L.; Manoharan, P. K.; Rastaetter, L.; Kuznetsova, M. M.

    2017-12-01

    Coronal mass ejections (CMEs) can have a significant impact on the Earth's magnetosphere-ionosphere system, cause widespread anomalies for satellites from geosynchronous to low-Earth orbit, and produce effects such as geomagnetically induced currents. At the NASA/GSFC Community Coordinated Modeling Center we have been using ensemble modeling of CMEs since 2012. In this presentation we demonstrate that using interplanetary scintillation (IPS) observations from the Ooty Radio Telescope facility in India can help to track CME propagation and improve ensemble forecasting of CMEs. The observations of solar wind density and velocity using IPS from hundreds of distant sources in ensemble modeling of CMEs can be a game-changing improvement to the current state of the art in CME forecasting.

  1. Optimal averaging of soil moisture predictions from ensemble land surface model simulations

    USDA-ARS?s Scientific Manuscript database

    The correct interpretation of ensemble information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an instrumental variabl...

  2. Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Yawen; Zhang, Kai; Qian, Yun

    Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol–climate model Community Atmosphere Model version 5 (CAM5). Unlike previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmosphere state. Daily mean emissions from three fire inventories were used to consider the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show the model has reasonably good predicting skills. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show a strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of –3.0 W m^-2. Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (–0.56 W m^-2). For the 10-day average, we found a large ensemble spread of regional mean shortwave cloud radiative effect over southern Mexico (15.6 % of the corresponding ensemble mean) and the central US (64.3 %), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages. 
In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.
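As a side note, the ensemble-spread statistic quoted above (the spread of the regional-mean SCRE expressed as a percentage of the ensemble mean) is straightforward to compute; the member values below are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical 10-day regional-mean SCRE (W m^-2) from a small nudged
# hindcast ensemble; the six values are illustrative, not from the paper.
scre_members = np.array([-2.4, -3.1, -2.8, -3.5, -2.9, -3.3])

ens_mean = scre_members.mean()
# Spread reported as a percentage of the ensemble mean, here taken as
# the sample standard deviation across members.
spread_pct = 100.0 * scre_members.std(ddof=1) / abs(ens_mean)

print(f"ensemble mean SCRE: {ens_mean:.2f} W m^-2")
print(f"spread: {spread_pct:.1f}% of the ensemble mean")
```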

  3. Investigation of short-term effective radiative forcing of fire aerosols over North America using nudged hindcast ensembles

    DOE PAGES

    Liu, Yawen; Zhang, Kai; Qian, Yun; ...

    2018-01-03

Aerosols from fire emissions can potentially have a large impact on clouds and radiation. However, fire aerosol sources are often intermittent, and their effect on weather and climate is difficult to quantify. Here we investigated the short-term effective radiative forcing of fire aerosols using the global aerosol–climate model Community Atmosphere Model version 5 (CAM5). Unlike previous studies, we used nudged hindcast ensembles to quantify the forcing uncertainty due to the chaotic response to small perturbations in the atmospheric state. Daily mean emissions from three fire inventories were used to consider the uncertainty in emission strength and injection heights. The simulated aerosol optical depth (AOD) and mass concentrations were evaluated against in situ measurements and reanalysis data. Overall, the results show the model has reasonably good predictive skill. Short (10-day) nudged ensemble simulations were then performed with and without fire emissions to estimate the effective radiative forcing. Results show fire aerosols have large effects on both liquid and ice clouds over the two selected regions in April 2009. Ensemble mean results show a strong negative shortwave cloud radiative effect (SCRE) over almost the entirety of southern Mexico, with a 10-day regional mean value of –3.0 W m^-2. Over the central US, the SCRE is positive in the north but negative in the south, and the regional mean SCRE is small (–0.56 W m^-2). For the 10-day average, we found a large ensemble spread of the regional mean shortwave cloud radiative effect over southern Mexico (15.6 % of the corresponding ensemble mean) and the central US (64.3 %), despite the regional mean AOD time series being almost indistinguishable during the 10-day period. Moreover, the ensemble spread is much larger when using daily averages instead of 10-day averages. 
In conclusion, this demonstrates the importance of using a large ensemble of simulations to estimate the short-term aerosol effective radiative forcing.

  4. Performance analysis of a Principal Component Analysis ensemble classifier for Emotiv headset P300 spellers.

    PubMed

    Elsawy, Amr S; Eldawlatly, Seif; Taher, Mohamed; Aly, Gamal M

    2014-01-01

    The current trend to use Brain-Computer Interfaces (BCIs) with mobile devices mandates the development of efficient EEG data processing methods. In this paper, we demonstrate the performance of a Principal Component Analysis (PCA) ensemble classifier for P300-based spellers. We recorded EEG data from multiple subjects using the Emotiv neuroheadset in the context of a classical oddball P300 speller paradigm. We compare the performance of the proposed ensemble classifier to the performance of traditional feature extraction and classifier methods. Our results demonstrate the capability of the PCA ensemble classifier to classify P300 data recorded using the Emotiv neuroheadset with an average accuracy of 86.29% on cross-validation data. In addition, offline testing of the recorded data reveals an average classification accuracy of 73.3% that is significantly higher than that achieved using traditional methods. Finally, we demonstrate the effect of the parameters of the P300 speller paradigm on the performance of the method.
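The paper's exact pipeline is not reproduced here, but the general idea of an ensemble of classifiers operating on PCA-reduced features can be sketched on synthetic data. Everything in this sketch is an illustrative assumption: the synthetic "EEG" arrays, the nearest-centroid base classifier, and the choice of subspace dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for epoched EEG: trials x (channels*samples) features.
# Class 1 ("target") epochs receive a small added deflection, like a P300.
n, d = 200, 64
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.5

def pca_fit(X, k):
    # Principal components via SVD of the mean-centered data matrix.
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k].T

def nearest_mean_predict(Z, Z_train, y_train):
    # Minimal base classifier: assign the class with the closer centroid.
    m0 = Z_train[y_train == 0].mean(axis=0)
    m1 = Z_train[y_train == 1].mean(axis=0)
    d0 = np.linalg.norm(Z - m0, axis=1)
    d1 = np.linalg.norm(Z - m1, axis=1)
    return (d1 < d0).astype(int)

# Ensemble: several PCA subspaces of different dimension, majority vote.
votes = []
for k in (2, 4, 8):
    mu, W = pca_fit(X, k)
    Z = (X - mu) @ W
    votes.append(nearest_mean_predict(Z, Z, y))
pred = (np.mean(votes, axis=0) >= 0.5).astype(int)
print("training accuracy:", (pred == y).mean())
```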

  5. Real-Time Ensemble Forecasting of Coronal Mass Ejections Using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; Odstrcil, D.; MacNeice, P. J.; Rastaetter, L.; LaSota, J. A.

    2014-12-01

Ensemble forecasting of coronal mass ejections (CMEs) is valuable in that it provides an estimate of the spread, or uncertainty, in CME arrival time predictions. Real-time ensemble modeling of CME propagation is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL+cone model available at the Community Coordinated Modeling Center (CCMC). To estimate the effect of uncertainties in determining CME input parameters on arrival time predictions, a distribution of n (routinely n=48) CME input parameter sets is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest, including a probability distribution of CME arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). We present the results of ensemble simulations for a total of 38 CME events in 2013-2014. Of the 28 ensemble runs containing hits, the observed CME arrival fell within the range of ensemble arrival time predictions for 14 runs (half). The average arrival time prediction was computed for each of the 28 ensembles predicting hits; compared with the actual arrival times, an average absolute error of 10.0 hours (RMSE = 11.4 hours) was found across all 28 ensembles, which is comparable to current forecasting errors. Considerations for the accuracy of ensemble CME arrival time predictions include the initial distribution of CME input parameters, particularly its mean and spread. When the observed arrival is not within the predicted range, prediction errors caused by the tested CME input parameters can still be ruled out. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations. 
Additionally, the ensemble modeling system was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to free parameters of the ambient solar wind model and of the CME. The parameter sensitivity study suggests future directions for the system, such as running ensembles using various magnetogram inputs to the WSA model.
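The arrival-time error statistics quoted above (average absolute error and RMSE) can be reproduced mechanically; the error values below are invented placeholders, not the 28 events from the study.

```python
import numpy as np

# Illustrative predicted-minus-observed CME arrival-time errors (hours);
# these are made-up numbers, not results from the paper.
errors = np.array([3.5, -12.0, 8.2, -6.1, 15.4, -9.8, 4.0, -11.3])

mae = np.mean(np.abs(errors))         # average absolute error
rmse = np.sqrt(np.mean(errors ** 2))  # root-mean-square error
print(f"MAE = {mae:.1f} h, RMSE = {rmse:.1f} h")
```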

  6. Assessing the impact of land use change on hydrology by ensemble modeling (LUCHEM) III: Scenario analysis

    USGS Publications Warehouse

    Huisman, J.A.; Breuer, L.; Bormann, H.; Bronstert, A.; Croke, B.F.W.; Frede, H.-G.; Graff, T.; Hubrechts, L.; Jakeman, A.J.; Kite, G.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Viney, N.R.; Willems, P.

    2009-01-01

An ensemble of 10 hydrological models was applied to the same set of land use change scenarios. There was general agreement about the direction of changes in the mean annual discharge and 90% discharge percentile predicted by the ensemble members, although a considerable range in the magnitude of predictions for the scenarios and catchments under consideration was evident. Differences in the magnitude of the increase were attributed to the different mean annual actual evapotranspiration rates for each land use type. The ensemble of model runs was further analyzed with deterministic and probabilistic ensemble methods. The deterministic ensemble method, based on a trimmed mean, resulted in a single, somewhat more reliable scenario prediction. The probabilistic reliability ensemble averaging (REA) method allowed a quantification of the model structure uncertainty in the scenario predictions. It was concluded that the use of a model ensemble has greatly increased our confidence in the reliability of the model predictions. © 2008 Elsevier Ltd.
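A trimmed-mean deterministic ensemble estimate of the kind described above can be sketched as follows; the member predictions are invented, and the 10% symmetric trim fraction is an assumption, not the value used in the study.

```python
import numpy as np

def trimmed_mean(x, trim_frac=0.1):
    """Symmetric trimmed mean: drop the lowest and highest trim_frac of
    values before averaging, damping the influence of outlier members."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(len(x) * trim_frac)
    return x[k:len(x) - k].mean()

# Hypothetical change in mean annual discharge (%) predicted by a
# 10-member model ensemble for one land use scenario (invented values).
predictions = [4.1, 5.2, 3.8, 4.6, 12.0, 4.9, 5.5, 3.2, 4.4, -1.0]
print("plain mean:  ", np.mean(predictions))
print("trimmed mean:", trimmed_mean(predictions, 0.1))
```

Trimming discards the two extreme members (12.0 and -1.0 here), so the trimmed mean is less sensitive to a single outlying model than the plain ensemble mean.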

  7. Scaling of peak flows with constant flow velocity in random self-similar networks

    USGS Publications Warehouse

    Troutman, Brent M.; Mantilla, Ricardo; Gupta, Vijay K.

    2011-01-01

    A methodology is presented to understand the role of the statistical self-similar topology of real river networks on scaling, or power law, in peak flows for rainfall-runoff events. We created Monte Carlo generated sets of ensembles of 1000 random self-similar networks (RSNs) with geometrically distributed interior and exterior generators having parameters pi and pe, respectively. The parameter values were chosen to replicate the observed topology of real river networks. We calculated flow hydrographs in each of these networks by numerically solving the link-based mass and momentum conservation equation under the assumption of constant flow velocity. From these simulated RSNs and hydrographs, the scaling exponents β and φ characterizing power laws with respect to drainage area, and corresponding to the width functions and flow hydrographs respectively, were estimated. We found that, in general, φ > β, which supports a similar finding first reported for simulations in the river network of the Walnut Gulch basin, Arizona. Theoretical estimation of β and φ in RSNs is a complex open problem. Therefore, using results for a simpler problem associated with the expected width function and expected hydrograph for an ensemble of RSNs, we give heuristic arguments for theoretical derivations of the scaling exponents β(E) and φ(E) that depend on the Horton ratios for stream lengths and areas. These ratios in turn have a known dependence on the parameters of the geometric distributions of RSN generators. Good agreement was found between the analytically conjectured values of β(E) and φ(E) and the values estimated by the simulated ensembles of RSNs and hydrographs. The independence of the scaling exponents φ(E) and φ with respect to the value of flow velocity and runoff intensity implies an interesting connection between unit hydrograph theory and flow dynamics. 
Our results provide a reference framework to study scaling exponents under more complex scenarios of flow dynamics and runoff generation processes using ensembles of RSNs.
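Estimating a scaling exponent of the kind discussed above amounts to a log-log regression of peak flow against drainage area. The sketch below recovers an assumed exponent from synthetic power-law data; the exponent value (0.6), the prefactor, and the scatter level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "peak flow vs. drainage area" data following Q = c * A^phi
# with multiplicative lognormal scatter; phi = 0.6 is an invented value.
phi_true = 0.6
area = np.logspace(0, 4, 200)                       # drainage area
q_peak = 2.0 * area ** phi_true * rng.lognormal(0.0, 0.1, area.size)

# The scaling exponent is the slope of the log-log regression line.
slope, intercept = np.polyfit(np.log(area), np.log(q_peak), 1)
print(f"estimated phi = {slope:.3f}")
```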

  8. Quantifying Nucleic Acid Ensembles with X-ray Scattering Interferometry.

    PubMed

    Shi, Xuesong; Bonilla, Steve; Herschlag, Daniel; Harbury, Pehr

    2015-01-01

The conformational ensemble of a macromolecule is the complete description of the macromolecule's solution structures and can reveal important aspects of macromolecular folding, recognition, and function. However, most experimental approaches determine an average or predominant structure, or follow transitions between states that can each only be described by an average structure. Ensembles have been extremely difficult to characterize experimentally. We present the unique advantages and capabilities of a new biophysical technique, X-ray scattering interferometry (XSI), for probing and quantifying structural ensembles. XSI measures the interference of scattered waves from two heavy metal probes attached site-specifically to a macromolecule. A Fourier transform of the interference pattern gives the fractional abundance of different probe separations, directly representing the multiple conformational states populated by the macromolecule. These probe-probe distance distributions can then be used to define the structural ensemble of the macromolecule. XSI provides accurate, calibrated distances in a model-independent fashion with angstrom-scale sensitivity. XSI data can be compared in a straightforward manner to atomic coordinates determined experimentally or predicted by molecular dynamics simulations. We describe the conceptual framework for XSI and provide a detailed protocol for carrying out an XSI experiment. © 2015 Elsevier Inc. All rights reserved.

  9. The mean and turbulent flow structure of a weak hydraulic jump

    NASA Astrophysics Data System (ADS)

    Misra, S. K.; Kirby, J. T.; Brocchini, M.; Veron, F.; Thomas, M.; Kambhamettu, C.

    2008-03-01

The turbulent air-water interface and flow structure of a weak, turbulent hydraulic jump are analyzed in detail using particle image velocimetry measurements. The study is motivated by the need to understand the detailed dynamics of turbulence generated in steady spilling breakers and the relative importance of the reverse-flow and breaker shear layer regions, with attention to their topology, mean flow, and turbulence structure. The intermittency factor derived from turbulent fluctuations of the air-water interface in the breaker region is found to fit theoretical distributions of turbulent interfaces well. A conditional averaging technique is used to calculate ensemble-averaged properties of the flow. The computed mean velocity field accurately satisfies mass conservation. A thin, curved shear layer oriented parallel to the surface is responsible for most of the turbulence production; the turbulence intensity decays rapidly away from the toe of the breaker (the location of largest surface curvature) with both increasing depth and downstream distance. The reverse-flow region, localized about the ensemble-averaged free surface, is characterized by a weak downslope mean flow and entrainment of water from below. The Reynolds shear stress is negative in the breaker shear layer, which shows that momentum diffuses upward into the shear layer from the flow underneath; it is positive just below the mean surface, indicating a downward flux of momentum from the reverse-flow region into the shear layer. The turbulence structure of the breaker shear layer resembles that of a mixing layer originating from the toe of the breaker, and the streamwise variations of the length scale and growth rate are found to be in good agreement with observed values in typical mixing layers. All evidence suggests that breaking is driven by a surface-parallel adverse pressure gradient and a streamwise flow deceleration at the toe of the breaker. 
Both effects force the shear layer to thicken rapidly, thereby inducing a sharp free surface curvature change at the toe.

  10. Optimal averaging of soil moisture predictions from ensemble land surface model simulations

    USDA-ARS?s Scientific Manuscript database

The correct interpretation of ensemble soil moisture information obtained from the parallel implementation of multiple land surface models (LSMs) requires information concerning the LSM ensemble’s mutual error covariance. Here we propose a new technique for obtaining such information using an inst...

  11. Classifier ensemble construction with rotation forest to improve medical diagnosis performance of machine learning algorithms.

    PubMed

    Ozcift, Akin; Gulten, Arif

    2011-12-01

Improving the accuracy of machine learning algorithms is vital in designing high-performance computer-aided diagnosis (CADx) systems. Research has shown that a base classifier's performance might be enhanced by ensemble classification strategies. In this study, we construct rotation forest (RF) ensemble classifiers of 30 machine learning algorithms to evaluate their classification performances using Parkinson's, diabetes, and heart disease datasets from the literature. In the experiments, first the feature dimension of the three datasets is reduced using the correlation-based feature selection (CFS) algorithm. Second, the classification performances of the 30 machine learning algorithms are calculated for the three datasets. Third, 30 classifier ensembles are constructed based on the RF algorithm to assess the performances of the respective classifiers on the same disease data. All experiments are carried out with a leave-one-out validation strategy, and the performances of the 60 algorithms are evaluated using three metrics: classification accuracy (ACC), kappa error (KE), and area under the receiver operating characteristic (ROC) curve (AUC). Base classifiers achieved average accuracies of 72.15%, 77.52%, and 84.43% for the diabetes, heart, and Parkinson's datasets, respectively. The RF classifier ensembles produced average accuracies of 74.47%, 80.49%, and 87.13% for the respective diseases. RF, a newly proposed classifier ensemble algorithm, might be used to improve the accuracy of miscellaneous machine learning algorithms in the design of advanced CADx systems. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  12. Experiments on waves under impulsive wind forcing in view of the Phillips (1957) theory

    NASA Astrophysics Data System (ADS)

    Shemer, Lev; Zavadsky, Andrey

    2016-11-01

Only limited information is currently available on the initial stages of wind-wave growth from rest under sudden wind forcing; the mechanisms leading to the appearance of waves are still not well understood. In the present work, waves emerging in a small-scale laboratory facility under the action of step-like turbulent wind forcing are studied using capacitance and laser slope gauges. Measurements are performed at a number of fetches and for a range of wind velocities. Taking advantage of the fully automated experimental procedure, at least 100 independent realizations are recorded for each wind velocity at every fetch. The accumulated data sets allow calculating ensemble-averaged values of the measured parameters as a function of the time elapsed from the blower activation. The accumulated results on the temporal variation of a wind-wave field initially at rest allow quantitative comparison with the theory of Phillips (1957). Following Phillips, the appearance of the initial detectable ripples was considered first, while the growth of short gravity waves at later times was analyzed separately. Good qualitative and partial quantitative agreement between the Phillips predictions and the measurements was obtained for both of these stages of the initial wind-wave field evolution.
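The ensemble-averaging step described above, averaging over realizations at each instant rather than over time, can be sketched as follows. The growth law, noise level, and realization count below are invented stand-ins, not the experimental values.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for 100 independent realizations of a growing wind-wave
# amplitude recorded at one fetch; t = time from blower activation.
n_real, n_t = 100, 500
t = np.linspace(0.0, 10.0, n_t)
growth = 1.0 - np.exp(-t / 3.0)                    # invented mean growth law
realizations = growth + 0.2 * rng.normal(size=(n_real, n_t))

# Ensemble average at each instant: average over realizations (axis 0),
# not over time, so the nonstationary growth is preserved.
ens_avg = realizations.mean(axis=0)
print("final ensemble-averaged amplitude:", float(ens_avg[-1]))
```

Averaging over 100 realizations reduces the noise by a factor of 10 while leaving the time dependence intact, which is why many automated repetitions were needed.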

  13. Experimental study of temporal evolution of waves under transient wind conditions

    NASA Astrophysics Data System (ADS)

    Zavadsky, Andrey; Shemer, Lev

    2016-11-01

Temporal variation of the waves excited by nearly sudden wind forcing over an initially still water surface is studied in a small wind-wave flume at Tel Aviv University for a variety of fetches and wind velocities. Simultaneous measurements of the surface elevation, using a conventional capacitance wave gauge, and of the surface slope in the along-wind and cross-wind directions, using a laser slope gauge, were performed. The variation with time of two components of the instantaneous surface velocity was measured by particle tracking velocimetry. The size of the experimental facility, and thus the relatively short characteristic time scales of the phenomena under investigation, together with an automated procedure controlling the experiments, made it possible to record a large number of independent realizations for each wind-fetch condition. Sufficient data were accumulated to compute reliable ensemble-averaged temporal variations of the governing wave parameters. The essentially three-dimensional structure of wind waves at all stages of evolution is demonstrated. The results obtained at each wind-fetch condition made it possible to characterize the major stages of the evolution of the wind-wave field and to suggest a plausible scenario for the initial growth of the wind waves.

  14. Rapid Geodetic Shortening Across the Eastern Cordillera of NW Argentina Observed by the Puna-Andes GPS Array

    NASA Astrophysics Data System (ADS)

    McFarland, Phillip K.; Bennett, Richard A.; Alvarado, Patricia; DeCelles, Peter G.

    2017-10-01

    We present crustal velocities for 29 continuously recording GPS stations from the southern central Andes across the Puna, Eastern Cordillera, and Santa Barbara system for the period between the 27 February 2010 Maule and 1 April 2014 Iquique earthquakes in a South American frame. The velocity field exhibits a systematic decrease in magnitude from 35 mm/yr near the trench to <1 mm/yr within the craton. We forward model loading on the Nazca-South America (NZ-SA) subduction interface using back slip on elastic dislocations to approximate a fully locked interface from 10 to 50 km depth. We generate an ensemble of models by iterating over the percentage of NZ-SA convergence accommodated at the subduction interface. Velocity residuals calculated for each model demonstrate that locking on the NZ-SA interface is insufficient to reproduce the observed velocities. We model deformation associated with a back-arc décollement using an edge dislocation, estimating model parameters from the velocity residuals for each forward model of the subduction interface ensemble using a Bayesian approach. We realize our best fit to the thrust-perpendicular velocity field with 70 ± 5% of NZ-SA convergence accommodated at the subduction interface and a slip rate of 9.1 ± 0.9 mm/yr on the fold-thrust belt décollement. We also estimate a locking depth of 14 ± 9 km, which places the downdip extent of the locked zone 135 ± 20 km from the thrust front. The thrust-parallel component of velocity is fit by a constant shear strain rate of -19 × 10-9 yr-1, equivalent to clockwise rigid block rotation of the back arc at a rate of 1.1°/Myr.
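As a quick consistency check on the last sentence, converting the quoted shear strain rate directly to an angular rate reproduces the stated back-arc rotation rate (this is a unit-conversion sketch, not the authors' inversion).

```python
import numpy as np

# Quoted thrust-parallel shear strain rate, magnitude 19e-9 per year,
# interpreted as a rigid-rotation angular rate of the back arc.
strain_rate = 19e-9                    # 1/yr
rad_per_myr = strain_rate * 1e6        # rad/Myr
deg_per_myr = np.degrees(rad_per_myr)  # ~1.09 deg/Myr, matching 1.1 deg/Myr
print(f"{deg_per_myr:.2f} deg/Myr")
```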

  15. Comparison of different assimilation schemes in an operational assimilation system with Ensemble Kalman Filter

    NASA Astrophysics Data System (ADS)

    Yan, Yajing; Barth, Alexander; Beckers, Jean-Marie; Candille, Guillem; Brankart, Jean-Michel; Brasseur, Pierre

    2016-04-01

In this paper, four assimilation schemes, including an intermittent assimilation scheme (INT) and three incremental assimilation schemes (IAU 0, IAU 50 and IAU 100), are compared in the same assimilation experiments with a realistic eddy-permitting primitive equation model of the North Atlantic Ocean using the Ensemble Kalman Filter. The three IAU schemes differ from each other in the position of the increment update window, which has the same size as the assimilation window; 0, 50 and 100 correspond to the degree of superposition of the increment update window on the current assimilation window. Sea surface height, sea surface temperature, and temperature profiles at depth collected between January and December 2005 are assimilated. Sixty ensemble members are generated by adding realistic noise to the forcing parameters related to the temperature. The ensemble is diagnosed and validated by comparison between the ensemble spread and the model/observation difference, as well as by rank histograms, before the assimilation experiments. The relevance of each assimilation scheme is evaluated through analyses of thermohaline variables and current velocities. The results of the assimilation are assessed according to both deterministic and probabilistic metrics with independent/semi-independent observations. For deterministic validation, the ensemble means, together with the ensemble spreads, are compared to the observations in order to diagnose the ensemble distribution properties in a deterministic way. For probabilistic validation, the continuous ranked probability score (CRPS) is used to evaluate the ensemble forecast system in terms of reliability and resolution. The reliability is further decomposed into bias and dispersion by the reduced centered random variable (RCRV) score in order to investigate the reliability properties of the ensemble forecast system.
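The CRPS used for probabilistic validation has a standard sample-based estimator for an ensemble forecast, E|X - y| - (1/2) E|X - X'|; a minimal sketch follows, where the 60-member forecast values are invented for illustration.

```python
import numpy as np

def crps_ensemble(members, obs):
    """Sample-based CRPS for one ensemble forecast and a scalar
    observation: mean|X - y| - 0.5 * mean|X - X'| over member pairs."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

# Hypothetical 60-member temperature forecast at one point vs. observation.
rng = np.random.default_rng(3)
members = rng.normal(18.0, 0.5, 60)
print("CRPS:", round(crps_ensemble(members, 18.2), 3))
```

A lower CRPS is better; a perfect deterministic forecast scores zero, and the score penalizes both bias and a spread that mismatches the forecast error.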

  16. Analyses and forecasts of a tornadic supercell outbreak using a 3DVAR system ensemble

    NASA Astrophysics Data System (ADS)

    Zhuang, Zhaorong; Yussouf, Nusrat; Gao, Jidong

    2016-05-01

    As part of NOAA's "Warn-On-Forecast" initiative, a convective-scale data assimilation and prediction system was developed using the WRF-ARW model and ARPS 3DVAR data assimilation technique. The system was then evaluated using retrospective short-range ensemble analyses and probabilistic forecasts of the tornadic supercell outbreak event that occurred on 24 May 2011 in Oklahoma, USA. A 36-member multi-physics ensemble system provided the initial and boundary conditions for a 3-km convective-scale ensemble system. Radial velocity and reflectivity observations from four WSR-88Ds were assimilated into the ensemble using the ARPS 3DVAR technique. Five data assimilation and forecast experiments were conducted to evaluate the sensitivity of the system to data assimilation frequencies, in-cloud temperature adjustment schemes, and fixed- and mixed-microphysics ensembles. The results indicated that the experiment with 5-min assimilation frequency quickly built up the storm and produced a more accurate analysis compared with the 10-min assimilation frequency experiment. The predicted vertical vorticity from the moist-adiabatic in-cloud temperature adjustment scheme was larger in magnitude than that from the latent heat scheme. Cycled data assimilation yielded good forecasts, where the ensemble probability of high vertical vorticity matched reasonably well with the observed tornado damage path. Overall, the results of the study suggest that the 3DVAR analysis and forecast system can provide reasonable forecasts of tornadic supercell storms.

  17. Understanding the Central Equatorial African long-term drought using AMIP-type simulations

    NASA Astrophysics Data System (ADS)

    Hua, Wenjian; Zhou, Liming; Chen, Haishan; Nicholson, Sharon E.; Jiang, Yan; Raghavendra, Ajay

    2018-02-01

Previous studies show that Indo-Pacific sea surface temperature (SST) variations may help to explain the observed long-term drought during April-May-June (AMJ) since the 1990s over Central equatorial Africa (CEA). However, the underlying physical mechanisms for this drought are still not clear due to observational limitations. Here we use AMIP-type simulations with 24 ensemble members forced by observed SSTs from the ECHAM4.5 model to explore the likely physical processes that determine the rainfall variations over CEA. We not only examine the ensemble mean (EM), but also compare the "good" and "poor" ensemble members to understand the intra-ensemble variability. In general, the EM and the "good" ensemble members can simulate the drought and the associated reduced vertical velocity and anomalous anti-cyclonic circulation in the lower troposphere. However, the "poor" ensemble members cannot simulate the drought and the associated circulation patterns. These contrasts indicate that the drought is tightly associated with the tropical Walker circulation and atmospheric teleconnection patterns. If the observed circulation patterns cannot be reproduced, the CEA drought will not be captured. Despite the large intra-ensemble spread, the model simulations indicate an essential role of SST forcing in causing the drought. These results suggest that the long-term drought may result from tropical Indo-Pacific SST variations associated with the enhanced and westward-extended tropical Walker circulation.

  18. Aerodynamic surface stress intermittency and conditionally averaged turbulence statistics

    NASA Astrophysics Data System (ADS)

    Anderson, William; Lanigan, David

    2015-11-01

Aeolian erosion is induced by the aerodynamic stress imposed by atmospheric winds. Erosion models prescribe that sediment flux, Q, scales with aerodynamic stress raised to an exponent n, where n > 1. Since stress (in fully rough, inertia-dominated flows) scales with the incoming velocity squared, u^2, it follows that Q ~ u^(2n) (where u is some relevant component of the flow). Thus, even small (turbulent) deviations of u from its time-mean may be important for aeolian activity. This rationale is augmented given that surface layer turbulence exhibits maximum Reynolds stresses in the fluid immediately above the landscape. To illustrate the importance of stress intermittency, we have used conditional averaging predicated on stress during large-eddy simulation of atmospheric boundary layer flow over an arid, bare landscape. Conditional averaging provides an ensemble-mean visualization of the flow structures responsible for erosion 'events'. Preliminary evidence indicates that surface stress peaks are associated with the passage of inclined, high-momentum regions flanked by adjacent low-momentum regions. We characterize the geometric attributes of such structures and explore the streamwise and vertical vorticity distribution within the conditionally averaged flow field. This work was supported by the National Science Foundation, Physical and Dynamic Meteorology Program (PM: Drs. N. Anderson, C. Lu, and E. Bensman) under Grant #1500224. Computational resources were provided by the Texas Advanced Computing Center at the University of Texas.

  19. Ensemble Deep Learning for Biomedical Time Series Classification

    PubMed Central

    2016-01-01

Ensemble learning has been proven, in both theory and practice, to improve generalization ability effectively. In this paper, we first briefly outline the current status of research on it. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database, which contains a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost. PMID:27725828

  20. On estimating attenuation from the amplitude of the spectrally whitened ambient seismic field

    NASA Astrophysics Data System (ADS)

    Weemstra, Cornelis; Westra, Willem; Snieder, Roel; Boschi, Lapo

    2014-06-01

    Measuring attenuation on the basis of interferometric, receiver-receiver surface waves is a non-trivial task: the amplitude, more than the phase, of ensemble-averaged cross-correlations is strongly affected by non-uniformities in the ambient wavefield. In addition, ambient noise data are typically pre-processed in ways that affect the amplitude itself. Some authors have recently attempted to measure attenuation in receiver-receiver cross-correlations obtained after the usual pre-processing of seismic ambient-noise records, including, most notably, spectral whitening. Spectral whitening replaces the cross-spectrum with a unit amplitude spectrum. It is generally assumed that cross-terms have cancelled each other prior to spectral whitening. Cross-terms are peaks in the cross-correlation due to simultaneously acting noise sources, that is, spurious traveltime delays due to constructive interference of signal coming from different sources. Cancellation of these cross-terms is a requirement for the successful retrieval of interferometric receiver-receiver signal and results from ensemble averaging. In practice, ensemble averaging is replaced by integrating over sufficiently long time or averaging over several cross-correlation windows. Contrary to the general assumption, we show in this study that cross-terms are not required to cancel each other prior to spectral whitening, but may also cancel each other after the whitening procedure. Specifically, we derive an analytic approximation for the amplitude difference associated with the reversed order of cancellation and normalization. Our approximation shows that an amplitude decrease results from the reversed order. This decrease is predominantly non-linear at small receiver-receiver distances: at distances smaller than approximately two wavelengths, whitening prior to ensemble averaging causes a significantly stronger decay of the cross-spectrum.
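The order-of-operations effect described above can be illustrated numerically: whitening each window before ensemble averaging still cancels cross-terms, but the recovered amplitude decays below unity, and more strongly as incoherent noise grows. The two-receiver noise model below is an illustrative assumption, not the paper's analytic derivation.

```python
import numpy as np

rng = np.random.default_rng(4)

def whitened_average(noise_level, n_win=2000, n_f=64):
    """Amplitude of the ensemble average of spectrally whitened
    cross-spectra, for two receivers recording a common random signal
    plus incoherent noise of the given relative level."""
    signal = rng.normal(size=n_f) + 1j * rng.normal(size=n_f)
    acc = np.zeros(n_f, dtype=complex)
    for _ in range(n_win):
        na = noise_level * (rng.normal(size=n_f) + 1j * rng.normal(size=n_f))
        nb = noise_level * (rng.normal(size=n_f) + 1j * rng.normal(size=n_f))
        x = (signal + na) * np.conj(signal + nb)
        acc += x / np.abs(x)            # whiten each window, then sum
    return np.abs(acc / n_win).mean()   # amplitude of the whitened average

# Incoherent cross-terms still average out after whitening, but the
# amplitude falls further below unity as the noise level grows.
low_noise = whitened_average(0.5)
high_noise = whitened_average(2.0)
print(low_noise, high_noise)
```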

  1. Equipartition terms in transition path ensemble: Insights from molecular dynamics simulations of alanine dipeptide.

    PubMed

    Li, Wenjin

    2018-02-28

The transition path ensemble consists of reactive trajectories and possesses all the information necessary for understanding the mechanism and dynamics of important condensed-phase processes. However, a quantitative description of the properties of the transition path ensemble is far from established. Here, with numerical calculations on a model system, the equipartition terms defined in thermal equilibrium were estimated for the first time in the transition path ensemble. It was not surprising to observe that the energy was not equally distributed among all the coordinates. However, the energies distributed on a pair of conjugate coordinates remained equal. Higher energies were observed on several coordinates that are strongly coupled to the reaction coordinate, while the rest were almost equally distributed. In addition, the ensemble-averaged energy on each coordinate as a function of time was also quantified. These quantitative analyses of energy distributions provide new insights into the transition path ensemble.

  2. Perception of ensemble statistics requires attention.

    PubMed

    Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A

    2017-02-01

    To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Genetic programming based ensemble system for microarray data classification.

    PubMed

    Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To

    2015-01-01

    Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.
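    The three combination operators named above can be sketched generically (assumed interfaces and toy numbers, not the authors' implementation): each operator reduces the stack of base-classifier probability outputs along the classifier axis before taking the arg-max class.

```python
import numpy as np

def combine(probs, op):
    """probs: (n_classifiers, n_samples, n_classes) predicted probabilities.
    Returns predicted class labels after applying the combination operator."""
    ops = {"min": np.min, "max": np.max, "average": np.mean}
    return np.argmax(ops[op](probs, axis=0), axis=1)

# Three base decision trees, two samples, two classes (illustrative numbers).
probs = np.array([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.6, 0.4], [0.3, 0.7]],
    [[0.8, 0.2], [0.7, 0.3]],
])
print(combine(probs, "average"))  # [0 1]
```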

  4. Genetic Programming Based Ensemble System for Microarray Data Classification

    PubMed Central

    Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To

    2015-01-01

    Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved. PMID:25810748

  5. Some far-field acoustics characteristics of the XV-15 tilt-rotor aircraft

    NASA Technical Reports Server (NTRS)

    Golub, Robert A.; Conner, David A.; Becker, Lawrence E.; Rutledge, C. Kendall; Smith, Rita A.

    1990-01-01

    Far-field acoustics tests have been conducted on an instrumented XV-15 tilt-rotor aircraft. The purpose of these acoustic measurements was to create an encompassing, high-confidence (90 percent), accurate (-1.4/+1.8 dB theoretical confidence interval) far-field acoustics data base to validate ROTONET and other current rotorcraft noise prediction computer codes. This paper describes the flight techniques used, with emphasis on the care taken to obtain high-quality far-field acoustic data. The quality and extensiveness of the data base collected are shown by presenting ground acoustic contours for level flyovers in the airplane flight mode and for several forward velocities and nacelle tilts in the transition and helicopter flight modes. Acoustic pressure time histories and fully analyzed, ensemble-averaged far-field spectra are shown for each of the ground contour cases.
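    Ensemble-averaged spectra of the kind reported above are, generically, periodogram averages over repeated measurement windows (Welch-style). A minimal sketch with a hypothetical sample rate and tone frequency (the actual XV-15 processing chain is described in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
fs, n_win, win_len = 1000.0, 50, 256
t = np.arange(win_len) / fs

# 50 records of a 100 Hz tone (a stand-in for a rotor harmonic) in noise.
records = np.sin(2 * np.pi * 100.0 * t) + rng.normal(0.0, 1.0, (n_win, win_len))

spectra = np.abs(np.fft.rfft(records, axis=1)) ** 2 / win_len
ensemble_avg = spectra.mean(axis=0)  # estimator variance drops roughly as 1/n_win

freqs = np.fft.rfftfreq(win_len, 1.0 / fs)
print(freqs[np.argmax(ensemble_avg)])  # peak near 100 Hz
```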

  6. The twin cell model and its excellence in determining the glass transition temperature of thin film metallic glass

    NASA Astrophysics Data System (ADS)

    Kanjilal, Baishali; Iram, Samreen; Das, Atreyee; Chakrabarti, Haimanti

    2018-05-01

    This work reports a novel two-dimensional approach to the theoretical computation of the glass transition temperature in simple hypothetical icosahedrally packed structures based on thin-film metallic glasses, using liquid-state theories in the realm of transport properties. The model starts from the Navier-Stokes equation and evaluates the statistical average velocity of each species of atom under the condition of ensemble equality to compute diffusion lengths and diffusion coefficients as a function of temperature. The additional correction brought in is that of the limited states due to the tethering of one nodule vis-à-vis the others. The movement of the molecules is described by our Twin Cell Model, a model pertinent for chain motions. A temperature-viscosity correction due to Cohen and Grest is included through the temperature dependence of the relaxation times for glass formers.

  7. Tube Visualization and Properties from Isoconfigurational Averaging

    NASA Astrophysics Data System (ADS)

    Qin, Jian; Bisbee, Windsor; Milner, Scott

    2012-02-01

    We introduce a simulation method to visualize the confining tube in polymer melts and measure its properties. We studied bead-spring ring polymers, which conveniently suppress constraint release and contour-length fluctuations. We allow molecules to cross and reach topologically equilibrated states by invoking various molecular rebridging moves in Monte Carlo simulations. To reveal the confining tube, we start with a well-equilibrated configuration, turn off rebridging moves, and run molecular dynamics simulations multiple times, each with different initial velocities. The resulting set of "movies" of molecular trajectories defines an isoconfigurational ensemble, with the bead positions at different times and in different "movies" giving rise to a cloud. The cloud shows the shape, range, and strength of the tube confinement, which enables us to study the statistical properties of the tube. Using this approach, we studied the effects of a free surface and found that the tube diameter near the surface is greater than the bulk value by about 25%.
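    The isoconfigurational recipe above can be sketched in miniature (a hedged toy: a 1-D harmonic chain with symplectic-Euler dynamics, not the authors' bead-spring ring-polymer MD). One fixed initial configuration is run many times with freshly drawn velocities, and the per-bead spread of visited positions plays the role of the tube cloud.

```python
import numpy as np

rng = np.random.default_rng(3)
n_beads, n_runs, n_steps, dt, k = 16, 30, 400, 0.01, 50.0

x0 = np.cumsum(rng.normal(0.0, 1.0, n_beads))  # the shared initial configuration

def run(x_init, v_init):
    x, v = x_init.copy(), v_init.copy()
    traj = np.empty((n_steps, n_beads))
    for s in range(n_steps):
        f = np.zeros_like(x)            # nearest-neighbour harmonic springs
        f[:-1] += -k * (x[:-1] - x[1:])
        f[1:] += -k * (x[1:] - x[:-1])
        v += f * dt                     # symplectic Euler: kick, then drift
        x += v * dt
        traj[s] = x
    return traj

# Each run re-draws initial velocities: the isoconfigurational ensemble.
clouds = np.array([run(x0, rng.normal(0.0, 1.0, n_beads)) for _ in range(n_runs)])
tube_width = clouds.reshape(-1, n_beads).std(axis=0)  # per-bead cloud width
print(tube_width.round(2))
```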

  8. Drift-wave turbulence and zonal flow generation.

    PubMed

    Balescu, R

    2003-10-01

    Drift-wave turbulence in a plasma is analyzed on the basis of the wave Liouville equation, describing the evolution of the distribution function of wave packets (quasiparticles) characterized by position x and wave vector k. A closed kinetic equation is derived for the ensemble-averaged part of this function by the methods of nonequilibrium statistical mechanics. It has the form of a non-Markovian advection-diffusion equation describing coupled diffusion processes in x and k spaces. General forms of the diffusion coefficients are obtained in terms of Lagrangian velocity correlations. The latter are calculated in the decorrelation trajectory approximation, a method recently developed for an accurate measure of the important trapping phenomena of particles in the rugged electrostatic potential. The analysis of individual decorrelation trajectories provides an illustration of the fragmentation of drift-wave structures in the radial direction and the generation of long-wavelength structures in the poloidal direction that are identified as zonal flows.
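    The route from Lagrangian velocity correlations to a diffusion coefficient invoked above is the Green-Kubo relation, D = ∫ <v(0)v(t)> dt. A hedged, generic numerical check (an Ornstein-Uhlenbeck velocity with known answer D = <v^2>/γ, not the plasma model of the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
gamma, sigma, dt, n, max_lag = 1.0, 1.0, 0.01, 400_000, 800

# Euler-Maruyama Ornstein-Uhlenbeck velocity: dv = -gamma v dt + sigma dW.
v = np.empty(n)
v[0] = 0.0
kicks = sigma * rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    v[i + 1] = v[i] - gamma * v[i] * dt + kicks[i]

# Lagrangian velocity autocorrelation up to lag max_lag * dt = 8 / gamma.
m = n - max_lag
acf = np.array([np.dot(v[:m], v[lag:lag + m]) / m for lag in range(max_lag)])

D_gk = dt * (acf.sum() - 0.5 * (acf[0] + acf[-1]))  # trapezoidal integral
D_exact = sigma**2 / (2.0 * gamma**2)               # <v^2>/gamma = 0.5
print(D_gk, D_exact)
```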

  9. Continuous centrifuge decelerator for polar molecules.

    PubMed

    Chervenkov, S; Wu, X; Bayerl, J; Rohlfes, A; Gantner, T; Zeppenfeld, M; Rempe, G

    2014-01-10

    Producing large samples of slow molecules from thermal-velocity ensembles is a formidable challenge. Here we employ a centrifugal force to produce a continuous molecular beam with a high flux at near-zero velocities. We demonstrate deceleration of three electrically guided molecular species, CH3F, CF3H, and CF3CCH, with input velocities of up to 200 m s^-1 to obtain beams with velocities below 15 m s^-1 and intensities of several 10^9 mm^-2 s^-1. The centrifuge decelerator is easy to operate and can, in principle, slow down any guidable particle. It has the potential to become a standard technique for continuous deceleration of molecules.

  10. Fidelity decay of the two-level bosonic embedded ensembles of random matrices

    NASA Astrophysics Data System (ADS)

    Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.

    2010-12-01

    We study the fidelity decay of the k-body embedded ensembles of random matrices for bosons distributed over two single-particle states. Fidelity is defined in terms of a reference Hamiltonian, which is a purely diagonal matrix consisting of a fixed one-body term and includes the diagonal of the perturbing k-body embedded ensemble matrix, and the perturbed Hamiltonian which includes the residual off-diagonal elements of the k-body interaction. This choice mimics the typical mean-field basis used in many calculations. We study separately the cases k = 2 and 3. We compute the ensemble-averaged fidelity decay as well as the fidelity of typical members with respect to an initial random state. Average fidelity displays a revival at the Heisenberg time, t = tH = 1, and a freeze in the fidelity decay, during which periodic revivals of period tH are observed. We obtain the relevant scaling properties with respect to the number of bosons and the strength of the perturbation. For certain members of the ensemble, we find that the period of the revivals during the freeze of fidelity occurs at fractional times of tH. These fractional periodic revivals are related to the dominance of specific k-body terms in the perturbation.
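    The fidelity defined above can be sketched generically (a hedged random-matrix toy with a diagonal reference Hamiltonian and symmetric perturbations, not the bosonic embedded ensembles of the paper): F(t) = |<ψ| e^{+iH0 t} e^{-iHt} |ψ>|^2 with H = H0 + ε V, averaged over an ensemble of perturbations V.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, eps, n_ens = 40, 0.05, 25

H0_diag = np.sort(rng.normal(size=dim))  # diagonal reference Hamiltonian
psi = np.ones(dim) / np.sqrt(dim)        # initial state

def fidelity_curve(ts):
    curves = []
    for _ in range(n_ens):
        a = rng.normal(size=(dim, dim))
        V = (a + a.T) / 2.0              # real symmetric perturbation
        w, Q = np.linalg.eigh(np.diag(H0_diag) + eps * V)
        c = Q.conj().T @ psi             # psi in the eigenbasis of H
        # amplitude <psi| e^{+iH0 t} e^{-iHt} |psi> for each time t
        curves.append([abs(np.vdot(psi * np.exp(-1j * H0_diag * t),
                                   Q @ (np.exp(-1j * w * t) * c))) ** 2
                       for t in ts])
    return np.mean(curves, axis=0)       # ensemble-averaged fidelity

ts = np.linspace(0.0, 10.0, 21)
F = fidelity_curve(ts)
print(F[0], F[-1])  # starts at 1, then decays
```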

  11. Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: Charge-bond resonance in monomethine cyanines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Olsen, Seth, E-mail: seth.olsen@uq.edu.au

    2015-01-28

    This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed (“microcanonical”) SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with “more diabatic than adiabatic” states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse “temperature,” unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint.
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler’s hydrol blue. The diabatic CASVB representation is shown to vary weakly for “temperatures” corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.

  12. Canonical-ensemble state-averaged complete active space self-consistent field (SA-CASSCF) strategy for problems with more diabatic than adiabatic states: charge-bond resonance in monomethine cyanines.

    PubMed

    Olsen, Seth

    2015-01-28

    This paper reviews basic results from a theory of the a priori classical probabilities (weights) in state-averaged complete active space self-consistent field (SA-CASSCF) models. It addresses how the classical probabilities limit the invariance of the self-consistency condition to transformations of the complete active space configuration interaction (CAS-CI) problem. Such transformations are of interest for choosing representations of the SA-CASSCF solution that are diabatic with respect to some interaction. I achieve the known result that a SA-CASSCF can be self-consistently transformed only within degenerate subspaces of the CAS-CI ensemble density matrix. For uniformly distributed ("microcanonical") SA-CASSCF ensembles, self-consistency is invariant to any unitary CAS-CI transformation that acts locally on the ensemble support. Most SA-CASSCF applications in current literature are microcanonical. A problem with microcanonical SA-CASSCF models for problems with "more diabatic than adiabatic" states is described. The problem is that not all diabatic energies and couplings are self-consistently resolvable. A canonical-ensemble SA-CASSCF strategy is proposed to solve the problem. For canonical-ensemble SA-CASSCF, the equilibrated ensemble is a Boltzmann density matrix parametrized by its own CAS-CI Hamiltonian and a Lagrange multiplier acting as an inverse "temperature," unrelated to the physical temperature. Like the convergence criterion for microcanonical-ensemble SA-CASSCF, the equilibration condition for canonical-ensemble SA-CASSCF is invariant to transformations that act locally on the ensemble CAS-CI density matrix. The advantage of a canonical-ensemble description is that more adiabatic states can be included in the support of the ensemble without running into convergence problems. The constraint on the dimensionality of the problem is relieved by the introduction of an energy constraint. 
The method is illustrated with a complete active space valence-bond (CASVB) analysis of the charge/bond resonance electronic structure of a monomethine cyanine: Michler's hydrol blue. The diabatic CASVB representation is shown to vary weakly for "temperatures" corresponding to visible photon energies. Canonical-ensemble SA-CASSCF enables the resolution of energies and couplings for all covalent and ionic CASVB structures contributing to the SA-CASSCF ensemble. The CASVB solution describes resonance of charge- and bond-localized electronic structures interacting via bridge resonance superexchange. The resonance couplings can be separated into channels associated with either covalent charge delocalization or chemical bonding interactions, with the latter significantly stronger than the former.
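    The canonical weighting idea in the two records above can be sketched with generic Boltzmann weights (an illustrative toy, not the SA-CASSCF working equations): w_i ∝ exp(-β E_i), which recovers the uniform "microcanonical" weights in the β → 0 limit.

```python
import numpy as np

def canonical_weights(energies, beta):
    """Boltzmann weights w_i ∝ exp(-beta * E_i), normalized to sum to 1."""
    e = np.asarray(energies, dtype=float)
    w = np.exp(-beta * (e - e.min()))  # shift by the minimum for stability
    return w / w.sum()

E = [0.00, 0.05, 0.12, 0.30]  # hypothetical state energies (arbitrary units)
print(canonical_weights(E, beta=0.0))   # uniform: [0.25 0.25 0.25 0.25]
print(canonical_weights(E, beta=10.0))  # low-lying states dominate
```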

  13. Decadal climate predictions improved by ocean ensemble dispersion filtering

    NASA Astrophysics Data System (ADS)

    Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.

    2017-06-01

    Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. In recent years, more precise initialization techniques for coupled Earth system models and increased ensemble sizes have improved decadal predictions. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, results in more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.

    Plain Language Summary: Decadal predictions aim to predict the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. The ocean memory, owing to its heat capacity, holds large potential skill. In recent years, more precise initialization techniques for coupled Earth system models (including atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: running several slightly perturbed predictions, which triggers the famous butterfly effect, results in an ensemble.
Evaluating the whole ensemble through its ensemble average, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Our study shows that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, which applies the average during the model run and is called the ensemble dispersion filter, yields more accurate results than the standard prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution.

  14. Real-time Ensemble Forecasting of Coronal Mass Ejections using the WSA-ENLIL+Cone Model

    NASA Astrophysics Data System (ADS)

    Mays, M. L.; Taktakishvili, A.; Pulkkinen, A. A.; MacNeice, P. J.; Rastaetter, L.; Kuznetsova, M. M.; Odstrcil, D.

    2013-12-01

    Ensemble forecasting of coronal mass ejections (CMEs) provides significant information in that it provides an estimation of the spread or uncertainty in CME arrival time predictions due to uncertainties in determining CME input parameters. Ensemble modeling of CME propagation in the heliosphere is performed by forecasters at the Space Weather Research Center (SWRC) using the WSA-ENLIL cone model available at the Community Coordinated Modeling Center (CCMC).
SWRC is an in-house research-based operations team at the CCMC which provides interplanetary space weather forecasting for NASA's robotic missions and performs real-time model validation. A distribution of n (routinely n=48) CME input parameters is generated using the CCMC Stereo CME Analysis Tool (StereoCAT), which employs geometrical triangulation techniques. These input parameters are used to perform n different simulations, yielding an ensemble of solar wind parameters at various locations of interest (satellites or planets), including a probability distribution of CME shock arrival times (for hits) and geomagnetic storm strength (for Earth-directed hits). Ensemble simulations have been performed experimentally in real time at the CCMC since January 2013. We present the results of ensemble simulations for a total of 15 CME events, 10 of which were performed in real time. The observed CME arrival was within the range of ensemble arrival time predictions for 5 out of the 12 ensemble runs containing hits. The average arrival time prediction was computed for each of the twelve ensembles predicting hits; using the actual arrival times, an average absolute error of 8.20 hours was found for all twelve ensembles, which is comparable to current forecasting errors. Some considerations for the accuracy of ensemble CME arrival time predictions include the importance of the initial distribution of CME input parameters, particularly the mean and spread. When the observed arrivals are not within the predicted range, this still allows the ruling out of prediction errors caused by the tested CME input parameters. Prediction errors can also arise from ambient model parameters, such as the accuracy of the solar wind background, and other limitations.
Additionally, the ensemble modeling setup was used to complete a parametric event case study of the sensitivity of the CME arrival time prediction to the free parameters of the ambient solar wind model and the CME.

  15. Robustness of the far-field response of nonlocal plasmonic ensembles.

    PubMed

    Tserkezis, Christos; Maack, Johan R; Liu, Zhaowei; Wubs, Martijn; Mortensen, N Asger

    2016-06-22

    Contrary to classical predictions, the optical response of few-nm plasmonic particles depends on particle size due to effects such as nonlocality and electron spill-out. Ensembles of such nanoparticles are therefore expected to exhibit a nonclassical inhomogeneous spectral broadening due to size distribution. For a normal distribution of free-electron nanoparticles, and within the simple nonlocal hydrodynamic Drude model, both the nonlocal blueshift and the plasmon linewidth are shown to be considerably affected by ensemble averaging. Size-variance effects tend however to conceal nonlocality to a lesser extent when the homogeneous size-dependent broadening of individual nanoparticles is taken into account, either through a local size-dependent damping model or through the Generalized Nonlocal Optical Response theory. The role of ensemble averaging is further explored in realistic distributions of isolated or weakly-interacting noble-metal nanoparticles, as encountered in experiments, while an analytical expression to evaluate the importance of inhomogeneous broadening through measurable quantities is developed.
    Our findings are independent of the specific nonclassical theory used, thus providing important insight into a large range of experiments on nanoscale and quantum plasmonics.

  16. Adaptive spectral filtering of PIV cross correlations

    NASA Astrophysics Data System (ADS)

    Giarra, Matthew; Vlachos, Pavlos; Aether Lab Team

    2016-11-01

    Using cross correlations (CCs) in particle image velocimetry (PIV) assumes that tracer particles in interrogation regions (IRs) move with the same velocity. But this assumption is nearly always violated because real flows exhibit velocity gradients, which degrade the signal-to-noise ratio (SNR) of the CC and are a major driver of error in PIV. Iterative methods help reduce these errors, but even they can fail when gradients are large within individual IRs. We present an algorithm to mitigate the effects of velocity gradients on PIV measurements. Our algorithm is based on a model of the CC, which predicts a relationship between the PDF of particle displacements and the variation of the correlation's SNR across the Fourier spectrum. We give an algorithm to measure this SNR from the CC, and use this insight to create a filter that suppresses the low-SNR portions of the spectrum. Our algorithm extends to the ensemble correlation, where it accelerates the convergence of the measurement and also reveals the PDF of displacements of the ensemble (and therefore of statistical metrics like diffusion coefficient).
    Finally, our model provides theoretical foundations for a number of "rules of thumb" in PIV, like the quarter-window rule.

  17. A model ensemble for projecting multi-decadal coastal cliff retreat during the 21st century

    USGS Publications Warehouse

    Limber, Patrick; Barnard, Patrick; Vitousek, Sean; Erikson, Li

    2018-01-01

    Sea cliff retreat rates are expected to accelerate with rising sea levels during the 21st century. Here we develop an approach for a multi-model ensemble that efficiently projects time-averaged sea cliff retreat over multi-decadal time scales and large (>50 km) spatial scales. The ensemble consists of five simple 1-D models adapted from the literature that relate sea cliff retreat to wave impacts, sea level rise (SLR), historical cliff behavior, and cross-shore profile geometry. Ensemble predictions are based on Monte Carlo simulations of each individual model, which account for the uncertainty of model parameters. The consensus of the individual models also weights uncertainty, such that uncertainty is greater when predictions from different models do not agree. A calibrated, but unvalidated, ensemble was applied to the 475 km-long coastline of Southern California (USA), with 4 SLR scenarios of 0.5, 0.93, 1.5, and 2 m by 2100. Results suggest that future retreat rates could increase relative to mean historical rates by more than two-fold for the higher SLR scenarios, causing an average total land loss of 19-41 m by 2100. However, model uncertainty ranges from +/- 5-15 m, reflecting the inherent difficulties of projecting cliff retreat over multiple decades.
    To enhance ensemble performance, future work could include weighting each model by its skill in matching observations in different morphological settings.

  18. An ensemble forecast of the South China Sea monsoon

    NASA Astrophysics Data System (ADS)

    Krishnamurti, T. N.; Tewari, Mukul; Bensman, Ed; Han, Wei; Zhang, Zhan; Lau, William K. M.

    1999-05-01

    This paper presents a generalized ensemble forecast procedure for the tropical latitudes. Here we propose an empirical orthogonal function-based procedure for the definition of a seven-member ensemble. The wind and the temperature fields are perturbed over the global tropics. Although the forecasts are made over the global belt with a high-resolution model, the emphasis of this study is on the South China Sea monsoon. The South China Sea domain includes the passage of Tropical Storm Gary, which moved eastwards north of the Philippines. The ensemble forecast handled the precipitation of this storm reasonably well. A global model at a resolution of triangular truncation at 126 waves is used to carry out these seven forecasts. The evaluation of the ensemble of forecasts is carried out via standard root mean square errors of the precipitation and the wind fields. The ensemble average is shown to have a higher skill compared to a control experiment, which was a first analysis based on operational data sets over both the global tropical and South China Sea domains. All of these experiments were subjected to physical initialization, which provides a spin-up of the model rain close to that obtained from satellite and gauge-based estimates.
    The results furthermore show that inherently much higher skill resides in the forecast precipitation fields if they are averaged over area elements of the order of 4° latitude by 4° longitude squares.

  19. Reliability ensemble averaging of 21st century projections of terrestrial net primary productivity reduces global and regional uncertainties

    NASA Astrophysics Data System (ADS)

    Exbrayat, Jean-François; Bloom, A. Anthony; Falloon, Pete; Ito, Akihiko; Smallman, T. Luke; Williams, Mathew

    2018-02-01

    Multi-model averaging techniques provide opportunities to extract additional information from large ensembles of simulations. In particular, present-day model skill can be used to evaluate their potential performance in future climate simulations. Multi-model averaging methods have been used extensively in climate and hydrological sciences, but they have not been used to constrain projected plant productivity responses to climate change, which is a major uncertainty in Earth system modelling. Here, we use three global observationally orientated estimates of current net primary productivity (NPP) to perform a reliability ensemble averaging (REA) method using 30 global simulations of the 21st century change in NPP based on the Inter-Sectoral Impact Model Intercomparison Project (ISIMIP) "business as usual" emissions scenario. We find that the three REA methods support an increase in global NPP by the end of the 21st century (2095-2099) compared to 2001-2005, which is 2-3 % stronger than the ensemble ISIMIP mean value of 24.2 Pg C y-1.
Using REA also leads to a 45-68 % reduction in the global uncertainty of 21st century NPP projection, which strengthens confidence in the resilience of the CO2 fertilization effect to climate change. This reduction in uncertainty is especially clear for boreal ecosystems although it may be an artefact due to the lack of representation of nutrient limitations on NPP in most models. Conversely, the large uncertainty that remains on the sign of the response of NPP in semi-arid regions points to the need for better observations and model development in these regions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015EGUGA..1712188O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015EGUGA..1712188O"><span>The total probabilities from high-resolution ensemble forecasting of floods</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian</p> <p>2015-04-01</p> <p>Ensemble forecasting has for a long time been used in meteorological modelling, to give an indication of the uncertainty of the forecasts. As meteorological ensemble forecasts often show some bias and dispersion errors, there is a need for calibration and post-processing of the ensembles. Typical methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). To make optimal predictions of floods along the stream network in hydrology, we can easily use the ensemble members as input to the hydrological models. 
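The Gaussian EMOS post-processing cited above (Gneiting et al., 2005) maps the raw ensemble mean and variance to a calibrated normal predictive distribution. A minimal sketch follows; the coefficients `a`, `b`, `c`, `d` are hypothetical stand-ins for values that would in practice be fitted by minimum-CRPS estimation over a training period.

```python
import numpy as np

def emos_predict(ens, a, b, c, d):
    """Gaussian EMOS: predictive distribution N(a + b*mean, c + d*var)
    built from the raw ensemble mean and sample variance."""
    m = ens.mean()
    v = ens.var(ddof=1)
    return a + b * m, float(np.sqrt(c + d * v))

# Hypothetical coefficients; real ones come from minimum-CRPS fitting.
ens = np.array([2.1, 2.4, 1.9, 2.6, 2.0])   # raw runoff ensemble (toy units)
mu, sigma = emos_predict(ens, a=0.1, b=1.0, c=0.2, d=1.5)
```

A positive `c` inflates the spread of an underdispersive raw ensemble, which is the typical correction needed for meteorological inputs to hydrological models.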
However, some of the post-processing methods will need modifications when regionalizing the forecasts outside the calibration locations, as done by Hemri et al. (2013). We present a method for spatial regionalization of the post-processed forecasts based on EMOS and top-kriging (Skøien et al., 2006). We will also look into different methods for handling the non-normality of runoff and the effect on forecasts skills in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005. Skøien, J. O., Merz, R. and Blöschl, G.: Top-kriging - Geostatistics on stream networks, Hydrol. Earth Syst. 
Sci., 10(2), 277-287, 2006.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_11 --> <div id="page_12" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="221"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3731171','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3731171"><span>Efficient and Unbiased Sampling of Biomolecular Systems in the Canonical Ensemble: A Review of Self-Guided Langevin Dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wu, Xiongwu; Damjanovic, Ana; Brooks, Bernard R.</p> <p>2013-01-01</p> <p>This review provides a comprehensive description of the self-guided Langevin dynamics (SGLD) and the 
self-guided molecular dynamics (SGMD) methods and their applications. Example systems are included to provide guidance on optimal application of these methods in simulation studies. SGMD/SGLD has an enhanced ability to overcome energy barriers and accelerate rare events to affordable time scales. It has been demonstrated that with moderate parameters, SGLD can routinely cross energy barriers of 20 kT at the rate at which molecular dynamics (MD) or Langevin dynamics (LD) crosses 10 kT barriers. The core of these methods is the use of local averages of forces and momenta in a direct manner that can preserve the canonical ensemble. The use of such local averages results in methods where low frequency motion “borrows” energy from high frequency degrees of freedom when a barrier is approached and then returns that excess energy after the barrier is crossed. This self-guiding effect also results in accelerated diffusion that enhances conformational sampling efficiency. The resulting ensemble with SGLD deviates only slightly from the canonical ensemble, and that deviation can be corrected with either an on-the-fly or a post-processing reweighting procedure that provides an excellent canonical ensemble for systems with a limited number of accelerated degrees of freedom. Since reweighting procedures are generally not size extensive, a newer method, SGLDfp, uses local averages of both momenta and forces to preserve the ensemble without reweighting. The SGLDfp approach is size extensive and can be used to accelerate low frequency motion in large systems, or in systems with explicit solvent where solvent diffusion is also to be enhanced. Since these methods are direct and straightforward, they can be used in conjunction with many other sampling methods or free energy methods by simply replacing the integration of degrees of freedom that are normally sampled by MD or LD. 
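The self-guiding idea of local momentum averages can be sketched in one dimension: an exponential moving average of the momentum supplies a guiding force `lam * gamma * p_avg` on top of ordinary Langevin friction and noise. All parameter names and values here are illustrative assumptions, not the SGLD implementation in CHARMM.

```python
import numpy as np

rng = np.random.default_rng(1)

def sgld_step(x, p, p_avg, force, dt=0.01, gamma=1.0, kT=1.0,
              lam=0.2, t_loc=0.5):
    """One self-guided Langevin step (illustrative parameters).

    p_avg is a local exponential moving average of the momentum; the
    guiding term lam * gamma * p_avg boosts slow collective motion while
    friction and noise keep fast motion near the target temperature.
    """
    p_avg = (1 - dt / t_loc) * p_avg + (dt / t_loc) * p
    noise = np.sqrt(2 * gamma * kT * dt) * rng.standard_normal()
    p = p + dt * (force(x) - gamma * p + lam * gamma * p_avg) + noise
    x = x + dt * p
    return x, p, p_avg

# Toy harmonic well F(q) = -q; the trajectory stays bounded and thermal.
x, p, p_avg = 1.0, 0.0, 0.0
for _ in range(1000):
    x, p, p_avg = sgld_step(x, p, p_avg, force=lambda q: -q)
```

Because the guiding force tracks only the slowly varying part of the momentum, fast vibrations feel almost no bias, which is the intuition behind the small, correctable deviation from the canonical ensemble noted above.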
PMID:23913991</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin-ensemble-averaged-structure-function-relationship-composite-nanocrystals-magnetic-bcc-fe-clusters-catalytically-active-fcc-pt-skin','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin-ensemble-averaged-structure-function-relationship-composite-nanocrystals-magnetic-bcc-fe-clusters-catalytically-active-fcc-pt-skin"><span>Ensemble averaged structure–function relationship for nanocrystals: effective superparamagnetic Fe clusters with catalytically active Pt skin [Ensemble averaged structure-function relationship for composite nanocrystals: magnetic bcc Fe clusters with catalytically active fcc Pt skin]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit</p> <p></p> <p>Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Moreover, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question of what atomic-scale basis should be used to quantify the structure–function relationship for metallic NCs precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. 
We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner. Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques provide a sound basis for both establishing and quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29276332','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29276332"><span>Beating time: How ensemble musicians' cueing gestures communicate beat position and tempo.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bishop, Laura; Goebl, Werner</p> <p>2018-01-01</p> <p>Ensemble musicians typically exchange visual cues to coordinate piece entrances. "Cueing-in" gestures indicate when to begin playing and at what tempo. This study investigated how timing information is encoded in musicians' cueing-in gestures. Gesture acceleration patterns were expected to indicate beat position, while gesture periodicity, duration, and peak gesture velocity were expected to indicate tempo. Same-instrument ensembles (e.g., piano-piano) were expected to synchronize more successfully than mixed-instrument ensembles (e.g., piano-violin). Duos performed short passages as their head and (for violinists) bowing hand movements were tracked with accelerometers and Kinect sensors. Performers alternated between leader/follower roles; leaders heard a tempo via headphones and cued in their partner nonverbally. 
Violin duos synchronized more successfully than either piano duos or piano-violin duos, possibly because violinists were more experienced in ensemble playing than pianists. Peak acceleration indicated beat position in leaders' head-nodding gestures. Gesture duration and periodicity in leaders' head and bowing hand gestures indicated tempo. The results show that the spatio-temporal characteristics of cueing-in gestures guide beat perception, enabling synchronization with visual gestures that follow a range of spatial trajectories.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148x1731M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148x1731M"><span>Refining Markov state models for conformational dynamics using ensemble-averaged data and time-series trajectories</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Matsunaga, Y.; Sugita, Y.</p> <p>2018-06-01</p> <p>A data-driven modeling scheme is proposed for conformational dynamics of biomolecules based on molecular dynamics (MD) simulations and experimental measurements. In this scheme, an initial Markov State Model (MSM) is constructed from MD simulation trajectories, and then, the MSM parameters are refined using experimental measurements through machine learning techniques. The second step can reduce the bias of MD simulation results due to inaccurate force-field parameters. Either time-series trajectories or ensemble-averaged data are available as a training data set in the scheme. Using a coarse-grained model of a dye-labeled polyproline-20, we compare the performance of machine learning estimations from the two types of training data sets. Machine learning from time-series data could provide the equilibrium populations of conformational states as well as their transition probabilities. 
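The initial MSM construction described above reduces, at its core, to counting lagged transitions between discrete conformational states and row-normalizing the count matrix. A minimal sketch with a toy two-state trajectory (the refinement against experimental data is omitted; state labels and lag are illustrative):

```python
import numpy as np

def estimate_msm(traj, n_states, lag=1):
    """Transition matrix of a Markov state model: count transitions at
    the given lag time, then row-normalize the count matrix."""
    counts = np.zeros((n_states, n_states))
    for i, j in zip(traj[:-lag], traj[lag:]):
        counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# Toy two-state trajectory; a real application would first discretize
# MD frames into conformational states, e.g. by clustering.
traj = [0, 0, 1, 0, 0, 0, 1, 1, 0, 0]
T = estimate_msm(traj, n_states=2)   # rows sum to 1
```

The refinement step in the abstract then adjusts the entries of `T` (and the state populations) so that observables computed from the model match the time-series or ensemble-averaged measurements.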
It estimates hidden conformational states more robustly than learning from ensemble-averaged data, although there are limitations in estimating the transition probabilities between minor states. We discuss how to use the machine learning scheme for various experimental measurements including single-molecule time-series trajectories.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JSWSC...7A..29W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JSWSC...7A..29W"><span>Forecasting Kp from solar wind data: input parameter study using 3-hour averages and 3-hour range values</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wintoft, Peter; Wik, Magnus; Matzka, Jürgen; Shprits, Yuri</p> <p>2017-11-01</p> <p>We have developed neural network models that predict Kp from upstream solar wind data. We study the importance of various input parameters, starting with the magnetic component Bz, particle density n, and velocity V, and then adding total field B and the By component. As we also notice a seasonal and UT variation in average Kp, we include functions of day-of-year and UT. Finally, as Kp is a global representation of the maximum range of geomagnetic variation over 3-hour UT intervals, we conclude that sudden changes in the solar wind can have a large effect on Kp, even though it is a 3-hour value. Therefore, 3-hour solar wind averages will not always appropriately represent the solar wind condition, and we introduce 3-hour maxima and minima values to address this problem to some degree. We find that introducing total field B and 3-hour maxima and minima, derived from 1-minute solar wind data, has a great influence on the performance. 
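Why 3-hour averages alone miss sudden solar wind changes is easy to see numerically: a brief southward Bz excursion barely shifts the 3-hour mean but is fully captured by the 3-hour minimum. A sketch of such feature construction (sample counts and field values are illustrative, not the paper's data pipeline):

```python
import numpy as np

def three_hour_features(bz_1min):
    """Collapse 180 one-minute Bz samples into mean, min, and max,
    so short-lived solar wind structures are not averaged away."""
    x = np.asarray(bz_1min, dtype=float)
    return {"mean": x.mean(), "min": x.min(), "max": x.max()}

# A 10-minute southward excursion to -15 nT barely moves the 3-hour
# mean (about -0.8 nT) but is fully captured by the minimum.
bz = np.zeros(180)
bz[60:70] = -15.0
f = three_hour_features(bz)
```

Feature dictionaries like `f` (for Bz, B, n, V, ...) would then form the input vector of each network in the forecast ensemble.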
Due to the low number of samples for high Kp values there can be considerable variation in predicted Kp for different networks with similar validation errors. We address this issue by using an ensemble of networks from which we use the median predicted Kp. The models (ensemble of networks) provide prediction lead times in the range 20-90 min given by the time it takes a solar wind structure to travel from L1 to Earth. Two models are implemented that can be run with real time data: (1) IRF-Kp-2017-h3 uses the 3-hour averages of the solar wind data and (2) IRF-Kp-2017 uses in addition to the averages, also the minima and maxima values. The IRF-Kp-2017 model has RMS error of 0.55 and linear correlation of 0.92 based on an independent test set with final Kp covering 2 years using ACE Level 2 data. The IRF-Kp-2017-h3 model has RMSE = 0.63 and correlation = 0.89. We also explore the errors when tested on another two-year period with real-time ACE data which gives RMSE = 0.59 for IRF-Kp-2017 and RMSE = 0.73 for IRF-Kp-2017-h3. The errors as function of Kp and for different years are also studied.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27739015','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27739015"><span>Summary statistics in the attentional blink.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>McNair, Nicolas A; Goodbourn, Patrick T; Shone, Lauren T; Harris, Irina M</p> <p>2017-01-01</p> <p>We used the attentional blink (AB) paradigm to investigate the processing stage at which extraction of summary statistics from visual stimuli ("ensemble coding") occurs. Experiment 1 examined whether ensemble coding requires attentional engagement with the items in the ensemble. 
Participants performed two sequential tasks on each trial: gender discrimination of a single face (T1) and estimating the average emotional expression of an ensemble of four faces (or of a single face, as a control condition) as T2. Ensemble coding was affected by the AB when the tasks were separated by a short temporal lag. In Experiment 2, the order of the tasks was reversed to test whether ensemble coding requires more working-memory resources, and therefore induces a larger AB, than estimating the expression of a single face. Each condition produced a similar magnitude AB in the subsequent gender-discrimination T2 task. Experiment 3 additionally investigated whether the previous results were due to participants adopting a subsampling strategy during the ensemble-coding task. Contrary to this explanation, we found different patterns of performance in the ensemble-coding condition and a condition in which participants were instructed to focus on only a single face within an ensemble. Taken together, these findings suggest that ensemble coding emerges automatically as a result of the deployment of attentional resources across the ensemble of stimuli, prior to information being consolidated in working memory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20060040320&hterms=Database+uses&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DDatabase%2Buses','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20060040320&hterms=Database+uses&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DDatabase%2Buses"><span>Companions to isolated elliptical galaxies: revisiting the Bothun-Sullivan (1977) sample using the NASA/IPAC extragalactic database</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Madore, B. F.; Freedman, W. L.; Bothun, G. 
D.</p> <p>2002-01-01</p> <p>We investigate the number of physical companion galaxies for a sample of relatively isolated elliptical galaxies. The NASA/IPAC Extragalactic Database (NED) has been used to reinvestigate the incidence of satellite galaxies for a sample of 34 elliptical galaxies, first investigated by Bothun & Sullivan (1977) using a visual inspection of Palomar Sky Survey prints out to a projected search radius of 75 kpc. We have repeated their original investigation using data cataloged in NED. Nine of these ellipticals appear to be members of galaxy clusters; the remaining sample of 25 galaxies reveals an average of +1.0 ± 0.5 apparent companions per galaxy within a projected search radius of 75 kpc, in excess of two equal-area comparison regions displaced by 150-300 kpc. This is nearly an order of magnitude larger than the +0.12 ± 0.42 companions/galaxy found by Bothun & Sullivan for the identical sample. Making use of published radial velocities, mostly available since the completion of the Bothun-Sullivan study, identifies the physical companions and gives a somewhat lower estimate of +0.4 companions per elliptical. This is still a factor of 3 larger than the original statistical study, but given the incomplete and heterogeneous nature of the survey redshifts in NED, it still yields a firm lower limit on the number (and identity) of physical companions. An expansion of the search radius out to 300 kpc, again restricted to sampling only those objects with known redshifts in NED, gives another lower limit of 4.3 physical companions per galaxy. (Excluding five elliptical galaxies in the Fornax cluster, this average drops to 3.5 companions per elliptical.) These physical companions are individually identified and listed, and the ensemble-averaged radial density distribution of these associated galaxies is presented. For the ensemble, the radial density distribution is found to have a fall-off consistent with ρ ∝ R^-0.5 out to approximately 150 kpc. 
For non-Fornax cluster companions the fall-off continues out to the 300-kpc limit of the survey. The velocity dispersion of these companions is found to be constant with projected radial distance from the central elliptical, holding at a value of approximately 300-350 km/s overall.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1225583','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1225583"><span>Random-matrix approach to the statistical compound nuclear reaction at low energies using the Monte-Carlo technique [PowerPoint]</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kawano, Toshihiko</p> <p>2015-11-10</p> <p>This theoretical treatment of low-energy compound nucleus reactions begins with the Bohr hypothesis, with corrections, and various statistical theories. The author investigates the statistical properties of the scattering matrix containing a Gaussian Orthogonal Ensemble (GOE) Hamiltonian in the propagator. The following conclusions are reached: For all parameter values studied, the numerical average of MC-generated cross sections coincides with the result of the Verbaarschot, Weidenmueller, Zirnbauer triple-integral formula. Energy average and ensemble average agree reasonably well when the width Γ is one or two orders of magnitude larger than the average resonance spacing d. In the strong-absorption limit, the channel degree of freedom ν_a is 2. 
The direct reaction increases the inelastic cross sections while the elastic cross section is reduced.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009EGUGA..1113498R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009EGUGA..1113498R"><span>The NRL relocatable ocean/acoustic ensemble forecast system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rowley, C.; Martin, P.; Cummings, J.; Jacobs, G.; Coelho, E.; Bishop, C.; Hong, X.; Peggion, G.; Fabre, J.</p> <p>2009-04-01</p> <p>A globally relocatable regional ocean nowcast/forecast system has been developed to support rapid implementation of new regional forecast domains. The system is in operational use at the Naval Oceanographic Office for a growing number of regional and coastal implementations. The new system is the basis for an ocean acoustic ensemble forecast and adaptive sampling capability. We present an overview of the forecast system and the ocean ensemble and adaptive sampling methods. The forecast system consists of core ocean data analysis and forecast modules, software for domain configuration, surface and boundary condition forcing processing, and job control, and global databases for ocean climatology, bathymetry, tides, and river locations and transports. The analysis component is the Navy Coupled Ocean Data Assimilation (NCODA) system, a 3D multivariate optimum interpolation system that produces simultaneous analyses of temperature, salinity, geopotential, and vector velocity using remotely-sensed SST, SSH, and sea ice concentration, plus in situ observations of temperature, salinity, and currents from ships, buoys, XBTs, CTDs, profiling floats, and autonomous gliders. The forecast component is the Navy Coastal Ocean Model (NCOM). 
The system supports one-way nesting and multiple assimilation methods. The ensemble system uses the ensemble transform technique with error variance estimates from the NCODA analysis to represent initial condition error. Perturbed surface forcing or an atmospheric ensemble is used to represent errors in surface forcing. The ensemble transform Kalman filter is used to assess the impact of adaptive observations on future analysis and forecast uncertainty for both ocean and acoustic properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3970899','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3970899"><span>Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Xue, Yi; Skrynnikov, Nikolai R</p> <p>2014-01-01</p> <p>Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. 
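A harmonic restraint aimed at the ensemble-average structure spreads its pull over all copies, which is why each individual molecule keeps near-native dynamics. A minimal sketch under that assumption (the quadratic restraint form, force constant, and array sizes are illustrative, not the authors' exact crystallographic protocol):

```python
import numpy as np

def ensemble_restraint_forces(coords, ref, k=100.0):
    """Forces from a harmonic restraint (k/2)*|<x> - ref|^2 applied to
    the ensemble-average structure.

    coords: (n_copies, n_atoms, 3) positions of the protein copies.
    Each copy feels only a 1/n_copies share of the restraint force,
    leaving its individual dynamics largely untouched.
    """
    mean = coords.mean(axis=0)
    f_each = -(k / coords.shape[0]) * (mean - ref)
    return np.broadcast_to(f_each, coords.shape)

rng = np.random.default_rng(2)
ref = rng.standard_normal((5, 3))                    # target mean structure
coords = ref + 0.1 * rng.standard_normal((4, 5, 3))  # 4 perturbed copies
forces = ensemble_restraint_forces(coords, ref)      # same force on every copy
```

As the number of copies grows, the per-copy restraint force shrinks, so the bias on any single trajectory becomes negligible while the average stays pinned to the reference.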
So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG41A0126R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG41A0126R"><span>Long-time Dynamics of Stochastic Wave Breaking</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Restrepo, J. M.; Ramirez, J. M.; Deike, L.; Melville, K.</p> <p>2017-12-01</p> <p>A stochastic parametrization is proposed for the dynamics of wave breaking of progressive water waves. The model is shown to agree with transport estimates, derived from the Lagrangian path of fluid parcels. These trajectories are obtained numerically and are shown to agree well with theory in the non-breaking regime. 
Of special interest is the impact of wave breaking on transport, momentum exchanges and energy dissipation, as well as dispersion of trajectories. The proposed model, ensemble averaged to larger time scales, is compared to ensemble averages of the numerically generated parcel dynamics, and is then used to capture energy dissipation and path dispersion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016ExFl...57..154M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016ExFl...57..154M"><span>Velocity field measurements in the wake of a propeller model</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mukund, R.; Kumar, A. Chandan</p> <p>2016-10-01</p> <p>Turboprop configurations are being revisited for the modern-day regional transport aircrafts for their fuel efficiency. The use of laminar flow wings is an effort in this direction. One way to further improve their efficiency is by optimizing the flow over the wing in the propeller wake. Previous studies have focused on improving the gross aerodynamic characteristics of the wing. It is known that the propeller slipstream causes early transition of the boundary layer on the wing. However, an optimized design of the propeller and wing combination could delay this transition and decrease the skin friction drag. Such a wing design would require the detailed knowledge of the development of the slipstream in isolated conditions. There are very few studies in the literature addressing the requirements of transport aircraft having six-bladed propeller and cruising at a high propeller advance ratio. Low-speed wind tunnel experiments have been conducted on a powered propeller model in isolated conditions, measuring the velocity field in the vertical plane behind the propeller using two-component hot-wire anemometry. 
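Hot-wire records in a propeller wake are commonly reduced by ensemble phase averaging: samples are binned by blade phase and averaged within each bin, separating the periodic blade-passing signature from broadband turbulence. A generic sketch with a synthetic signal (bin count, mean flow, and noise level are illustrative, not the experiment's data):

```python
import numpy as np

def phase_average(signal, phase, n_bins=36):
    """Ensemble phase average: bin samples by phase angle in [0, 2*pi)
    and average the signal within each bin."""
    bins = (phase / (2 * np.pi) * n_bins).astype(int) % n_bins
    return np.array([signal[bins == b].mean() for b in range(n_bins)])

# Synthetic hot-wire record: mean flow, blade-periodic part, noise.
rng = np.random.default_rng(3)
phase = rng.uniform(0, 2 * np.pi, 50_000)
u = 10.0 + np.sin(phase) + 0.5 * rng.standard_normal(phase.size)
u_phase = phase_average(u, phase)   # periodic component re-emerges
```

Subtracting `u_phase` (evaluated at each sample's phase) from `u` would then leave the turbulence residual whose intensity governs boundary-layer transition on the wing.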
The data obtained clearly resolved the mean velocity, the turbulence, the ensemble phase averages, and the structure and development of the tip vortex. The turbulence in the slipstream showed that transition could be close to the leading edge of the wing, making it a fine case for optimization. The development of the wake with distance shows some interesting flow features, and the data are valuable for flow computation and optimization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015FNL....1450033L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015FNL....1450033L"><span>Intelligent Ensemble Forecasting System of Stock Market Fluctuations Based on Symmetric and Asymmetric Wavelet Functions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lahmiri, Salim; Boukadoum, Mounir</p> <p>2015-08-01</p> <p>We present a new ensemble system for stock market returns prediction where continuous wavelet transform (CWT) is used to analyze return series and backpropagation neural networks (BPNNs) for processing CWT-based coefficients, determining the optimal ensemble weights, and providing final forecasts. Particle swarm optimization (PSO) is used for finding optimal weights and biases for each BPNN. To capture symmetry/asymmetry in the underlying data, three wavelet functions with different shapes are adopted. The proposed ensemble system was tested on three Asian stock markets: the Hang Seng, KOSPI, and Taiwan stock market data. Three statistical metrics were used to evaluate the forecasting accuracy: mean absolute error (MAE), root-mean-squared error (RMSE), and mean absolute deviation (MAD). Experimental results showed that our proposed ensemble system outperformed the individual CWT-ANN models, each with a different wavelet function. 
In addition, the proposed ensemble system outperformed the conventional autoregressive moving average process. As a result, the proposed ensemble system is well suited to capturing symmetry/asymmetry in financial data fluctuations for better prediction accuracy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1360800','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1360800"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Biedermann, G. W.; McGuinness, H. J.; Rakholia, A. V.</p> <p></p> <p>Here, we demonstrate matter-wave interference in a warm vapor of rubidium atoms. Established approaches to light-pulse atom interferometry rely on laser cooling to concentrate a large ensemble of atoms into a velocity class resonant with the atom optical light pulse. In our experiment, we show that clear interference signals may be obtained without laser cooling. This effect relies on the Doppler selectivity of the atom interferometer resonance. 
Lastly, this interferometer may be configured to measure accelerations, and we demonstrate that multiple interferometers may be operated simultaneously by addressing multiple velocity classes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JMOp...65..640H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JMOp...65..640H"><span>Velocity selection in a Doppler-broadened ensemble of atoms interacting with a monochromatic laser beam</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hughes, Ifan G.</p> <p>2018-03-01</p> <p>There is extensive use of monochromatic lasers to select atoms with a narrow range of velocities in many atomic physics experiments. For the commonplace situation of the inhomogeneous Doppler-broadened (Gaussian) linewidth exceeding the homogeneous (Lorentzian) natural linewidth by typically two orders of magnitude, a substantial narrowing of the velocity class of atoms interacting with the light can be achieved. However, this is not always the case, and here we show that for a certain parameter regime there is essentially no selection - all of the atoms interact with the light in accordance with the velocity probability density. 
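The competition between the homogeneous (Lorentzian) and Doppler (Gaussian) widths described above can be sketched numerically: the fraction of atoms addressed by an on-resonance laser follows from weighting the Maxwell-Boltzmann velocity distribution by a Lorentzian excitation profile. The numbers below are illustrative (Rb-D2-like), not taken from the paper.

```python
import numpy as np

# Illustrative sketch of velocity selection: atoms in a Doppler-broadened
# (Gaussian) vapor are excited with a Lorentzian probability in their
# Doppler-shifted detuning.  All numbers are illustrative, not from the paper.
k = 2 * np.pi / 780e-9               # laser wavenumber (1/m), Rb D2-like
sigma_v = 170.0                      # 1-D thermal velocity spread (m/s)
v = np.linspace(-1000.0, 1000.0, 4001)
dv = v[1] - v[0]
maxwell = np.exp(-v**2 / (2 * sigma_v**2))
maxwell /= maxwell.sum() * dv        # normalized velocity distribution

def interacting_fraction(gamma):
    """Fraction of the vapor addressed by an on-resonance laser whose
    homogeneous (Lorentzian) linewidth is gamma (FWHM, rad/s)."""
    lorentz = 1.0 / (1.0 + (k * v / (gamma / 2.0))**2)
    return float((maxwell * lorentz).sum() * dv)

# Natural linewidth << Doppler width: only a thin velocity class interacts.
narrow = interacting_fraction(2 * np.pi * 6e6)
# Linewidth comparable to the Doppler width: essentially no selection.
broad = interacting_fraction(2 * np.pi * 3e9)
print(round(narrow, 3), round(broad, 3))
```

The two regimes bracket the paper's point: selectivity is controlled by the ratio of the homogeneous width to the Doppler width.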
An explanation of this effect is provided, emphasizing the importance of the long tail of the constituent Lorentzian distribution in a Voigt profile.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28419025','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28419025"><span>Ensemble Methods for Classification of Physical Activities from Wrist Accelerometry.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Chowdhury, Alok Kumar; Tjondronegoro, Dian; Chandran, Vinod; Trost, Stewart G</p> <p>2017-09-01</p> <p>To investigate whether the use of ensemble learning algorithms improves physical activity recognition accuracy compared to single classifier algorithms, and to compare the classification accuracy achieved by three conventional ensemble machine learning methods (bagging, boosting, random forest) and a custom ensemble model comprising four algorithms commonly used for activity recognition (binary decision tree, k nearest neighbor, support vector machine, and neural network). The study used three independent data sets that included wrist-worn accelerometer data. For each data set, a four-step classification framework consisting of data preprocessing, feature extraction, normalization and feature selection, and classifier training and testing was implemented. For the custom ensemble, decisions from the single classifiers were aggregated using three decision fusion methods: weighted majority vote, naïve Bayes combination, and behavior knowledge space combination. Classifiers were cross-validated using leave-one-subject-out cross-validation and compared on the basis of average F1 scores. In all three data sets, ensemble learning methods consistently outperformed the individual classifiers. 
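One of the decision-fusion rules named above, weighted majority voting, can be sketched as follows; the votes and weights are illustrative, not taken from the study.

```python
import numpy as np

# Minimal sketch of weighted majority voting over binary classifier
# decisions.  The votes and weights below are illustrative, not from the
# study (weights might come from each classifier's validation accuracy).
def weighted_majority_vote(decisions, weights):
    """decisions: (n_classifiers, n_samples) array of 0/1 labels;
    weights: one weight per classifier."""
    d = np.asarray(decisions, dtype=float)
    w = np.asarray(weights, dtype=float)[:, None]
    score = (w * d).sum(axis=0) / w.sum()     # weighted fraction voting "1"
    return (score >= 0.5).astype(int)

votes = [[1, 0, 1, 0],     # e.g. decision tree
         [1, 1, 0, 0],     # e.g. k-NN
         [0, 1, 1, 1]]     # e.g. SVM
fused = weighted_majority_vote(votes, [0.9, 0.8, 0.6])
print(fused)               # the two higher-weighted voters dominate
```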
Among the conventional ensemble methods, random forest models provided consistently high activity recognition accuracy; however, the custom ensemble model using weighted majority voting demonstrated the highest classification accuracy in two of the three data sets. Combining multiple individual classifiers using conventional or custom ensemble learning methods can improve activity recognition accuracy from wrist-worn accelerometer data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018WtFor..33..369V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018WtFor..33..369V"><span>Skill of Global Raw and Postprocessed Ensemble Predictions of Rainfall over Northern Tropical Africa</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Vogel, Peter; Knippertz, Peter; Fink, Andreas H.; Schlueter, Andreas; Gneiting, Tilmann</p> <p>2018-04-01</p> <p>Accumulated precipitation forecasts are of high socioeconomic importance for agriculturally dominated societies in northern tropical Africa. In this study, we analyze the performance of nine operational global ensemble prediction systems (EPSs) relative to climatology-based forecasts for 1- to 5-day accumulated precipitation based on the monsoon seasons 2007-2014 for three regions within northern tropical Africa. To assess the full potential of raw ensemble forecasts across spatial scales, we apply state-of-the-art statistical postprocessing methods in the form of Bayesian Model Averaging (BMA) and Ensemble Model Output Statistics (EMOS), and verify against station and spatially aggregated, satellite-based gridded observations. Raw ensemble forecasts are uncalibrated, unreliable, and underperform relative to climatology, independently of region, accumulation time, monsoon season, and ensemble. 
Differences between raw ensemble and climatological forecasts are large, and partly stem from poor prediction for low precipitation amounts. BMA and EMOS postprocessed forecasts are calibrated, reliable, and strongly improve on the raw ensembles, but - somewhat disappointingly - typically do not outperform climatology. Most EPSs exhibit slight improvements over the period 2007-2014, but overall have little added value compared to climatology. We suspect that the parametrization of convection is a potential cause for the sobering lack of ensemble forecast skill in a region dominated by mesoscale convective systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2615214','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2615214"><span>Similarity Measures for Protein Ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Lindorff-Larsen, Kresten; Ferkinghoff-Borg, Jesper</p> <p>2009-01-01</p> <p>Analyses of similarities and changes in protein conformation can provide important information regarding protein function and evolution. Many scores, including the commonly used root mean square deviation, have therefore been developed to quantify the similarities of different protein conformations. However, instead of examining individual conformations it is in many cases more relevant to analyse ensembles of conformations that have been obtained either through experiments or from methods such as molecular dynamics simulations. We here present three approaches that can be used to compare conformational ensembles in the same way as the root mean square deviation is used to compare individual pairs of structures. 
The methods are based on the estimation of the probability distributions underlying the ensembles and subsequent comparison of these distributions. We first validate the methods using a synthetic example from molecular dynamics simulations. We then apply the algorithms to revisit the problem of ensemble averaging during structure determination of proteins, and find that an ensemble refinement method is able to recover the correct distribution of conformations better than standard single-molecule refinement. PMID:19145244</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1544146','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1544146"><span>Relation between native ensembles and experimental structures of proteins</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Best, Robert B.; Lindorff-Larsen, Kresten; DePristo, Mark A.; Vendruscolo, Michele</p> <p>2006-01-01</p> <p>Different experimental structures of the same protein or of proteins with high sequence similarity contain many small variations. Here we construct ensembles of “high-sequence similarity Protein Data Bank” (HSP) structures and consider the extent to which such ensembles represent the structural heterogeneity of the native state in solution. We find that different NMR measurements probing structure and dynamics of given proteins in solution, including order parameters, scalar couplings, and residual dipolar couplings, are remarkably well reproduced by their respective high-sequence similarity Protein Data Bank ensembles; moreover, we show that the effects of uncertainties in structure determination are insufficient to explain the results. 
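Comparing ensembles through their underlying probability distributions, rather than structure by structure, can be sketched with a histogram-based divergence. The toy below uses the Jensen-Shannon divergence on a synthetic 1-D coordinate; it illustrates the general idea, not any of the paper's specific scores.

```python
import numpy as np

# Sketch of distribution-based ensemble comparison: histogram a 1-D
# structural coordinate from each ensemble and compute the symmetric
# Jensen-Shannon divergence (0 = identical distributions, ln 2 = disjoint).
# The samples are synthetic stand-ins, e.g. a dihedral angle from two runs.
rng = np.random.default_rng(5)
ens_a = rng.normal(0.0, 1.0, 50_000)
ens_b = rng.normal(0.5, 1.2, 50_000)

edges = np.linspace(-6.0, 6.0, 61)
p, _ = np.histogram(ens_a, bins=edges, density=True)
q, _ = np.histogram(ens_b, bins=edges, density=True)
p, q = p / p.sum(), q / q.sum()

def kl(a, b):
    """Kullback-Leibler divergence between two discrete distributions."""
    mask = a > 0
    return float(np.sum(a[mask] * np.log(a[mask] / np.maximum(b[mask], 1e-12))))

m = 0.5 * (p + q)
js = 0.5 * kl(p, m) + 0.5 * kl(q, m)   # Jensen-Shannon divergence (nats)
print(round(js, 4))
```

Using the mixture `m` as the reference keeps the divergence finite even where one ensemble has no samples, which is why JS is a convenient choice for sampled ensembles.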
These results highlight the importance of accounting for native-state protein dynamics in making comparisons with ensemble-averaged experimental data and suggest that even a modest number of structures of a protein determined under different conditions, or with small variations in sequence, capture a representative subset of the true native-state ensemble. PMID:16829580</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1996NuPhS..47..405M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1996NuPhS..47..405M"><span>Renormalization of the Lattice Heavy Quark Classical Velocity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mandula, Jeffrey E.; Ogilvie, Michael C.</p> <p>1996-03-01</p> <p>In the lattice formulation of the Heavy Quark Effective Theory (LHQET), the "classical velocity" v becomes renormalized. The origin of this renormalization is the reduction of Lorentz (or O(4)) invariance to (hyper)cubic invariance. The renormalization is finite and depends on the form of the discretization of the reduced heavy quark Dirac equation. 
For the Forward-Time Centered-Space discretization, the renormalization is computed both perturbatively, to one loop, and non-perturbatively using two ensembles of lattices, one at β = 5.7 and the other at β = 6.1. The estimates agree, and indicate that for small classical velocities, v is reduced by about 25-30%.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_12 --> <div id="page_13" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="241"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AdSR...14..227L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AdSR...14..227L"><span>Wind power application research on the fusion of the determination and ensemble prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA 
Astrophysics Data System (ADS)</a></p> <p>Lan, Shi; Lina, Xu; Yuzhu, Hao</p> <p>2017-07-01</p> <p>The fused wind speed product for the wind farm is designed using wind speed products of ensemble prediction from the European Centre for Medium-Range Weather Forecasts (ECMWF) and professional numerical model products on wind power based on Mesoscale Model 5 (MM5) and Beijing Rapid Update Cycle (BJ-RUC), which are suitable for short-term wind power forecasting and electric dispatch. The single-valued forecast is formed by calculating the different ensemble statistics of the Bayesian probabilistic forecasting representing the uncertainty of ECMWF ensemble prediction. An autoregressive integrated moving average (ARIMA) model is used to improve the time resolution of the single-valued forecast, and based on Bayesian model averaging (BMA) and the deterministic numerical model prediction, the optimal wind speed forecasting curve and the confidence interval are provided. The results show that the fused forecast clearly improves accuracy relative to the existing numerical forecasting products. Compared with the 0-24 h existing deterministic forecast in the validation period, the mean absolute error (MAE) is decreased by 24.3 % and the correlation coefficient (R) is increased by 12.5 %. In comparison with the ECMWF ensemble forecast, the MAE is reduced by 11.7 %, and R is increased by 14.5 %. 
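The verification metrics quoted above, MAE and the correlation coefficient R, are computed as follows; the series are synthetic stand-ins for wind-speed data.

```python
import numpy as np

# Forecast verification metrics: mean absolute error (MAE) and the
# correlation coefficient (R) between forecast and observation.
# The two series below are illustrative, not the study's data.
obs = np.array([5.0, 6.2, 7.1, 4.8, 5.5, 6.9])    # observed wind speed (m/s)
fcst = np.array([5.3, 6.0, 6.8, 5.2, 5.4, 7.2])   # forecast wind speed (m/s)

mae = float(np.mean(np.abs(fcst - obs)))
r = float(np.corrcoef(fcst, obs)[0, 1])
print(round(mae, 3), round(r, 3))
```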
Additionally, the MAE did not grow with increasing forecast lead time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20002430-analysis-three-dimensional-structure-bubble-wake-using-piv-galilean-decomposition','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20002430-analysis-three-dimensional-structure-bubble-wake-using-piv-galilean-decomposition"><span>Analysis of the three-dimensional structure of a bubble wake using PIV and Galilean decomposition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Hassan, Y.A.; Schmidl, W.D.; Ortiz-Villafuerte, J.</p> <p>1999-07-01</p> <p>Bubbly flow plays a key role in a variety of natural and industrial processes. An accurate and complete description of the phase interactions in two-phase bubbly flow is not available at this time. These phase interactions are, in general, always three-dimensional and unsteady. Therefore, measurement techniques utilized to obtain qualitative and quantitative data from two-phase flow should be able to acquire transient and three-dimensional data, in order to provide information to test theoretical models and numerical simulations. Even for dilute bubble flows, in which bubble interaction is at a minimum, the turbulent motion of the liquid generated by the bubble is yet to be completely understood. For many years, the design of systems with bubbly flows was based primarily on empiricism. Dilute bubbly flows are an extension of single bubble dynamics, and therefore improvements in the description and modeling of single bubble motion, the flow field around the bubble, and the dynamical interactions between the bubble and the flow will consequently improve bubbly flow modeling. 
The improved understanding of the physical phenomena will have far-reaching benefits in upgrading the operation and efficiency of current processes and in supporting the development of new and innovative approaches. A stereoscopic particle image velocimetry measurement of the flow generated by the passage of a single air bubble rising in stagnant water in a circular pipe is presented. Three-dimensional velocity fields within the measurement zone were obtained. Ensemble-averaged instantaneous velocities for a specific bubble path were calculated and interpolated to obtain mean three-dimensional velocity fields. A Galilean velocity decomposition is used to study the vorticity generated in the flow.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18793021','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18793021"><span>Temporal correlation functions of concentration fluctuations: an anomalous case.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lubelski, Ariel; Klafter, Joseph</p> <p>2008-10-09</p> <p>We calculate, within the framework of the continuous time random walk (CTRW) model, multiparticle temporal correlation functions of concentration fluctuations (CCF) in systems that display anomalous subdiffusion. The subdiffusion stems from the nonstationary nature of the CTRW waiting times, which also lead to aging and ergodicity breaking. Due to aging, a system of diffusing particles tends to slow down as time progresses, and therefore, the temporal correlation functions strongly depend on the initial time of measurement. As a consequence, time averages of the CCF differ from ensemble averages, therefore displaying ergodicity breaking. 
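The aging-induced difference between time and ensemble averages can be illustrated with a toy CTRW simulation; this is a generic sketch with illustrative parameters, not the paper's CCF calculation.

```python
import numpy as np

# Toy CTRW illustrating weak ergodicity breaking: with power-law waiting
# times (exponent alpha < 1, infinite mean), the ensemble-averaged MSD
# grows sublinearly, while single-trajectory time averages scatter from
# realization to realization.  All parameters are illustrative.
rng = np.random.default_rng(1)
alpha, T, n_traj = 0.5, 10_000.0, 300
obs_times = np.linspace(1.0, T, 200)

def ctrw_positions(times):
    """Position of one CTRW (unit +/-1 jumps, heavy-tailed waits) at `times`."""
    t, x = 0.0, 0
    jump_t, jump_x = [], []
    while t < times[-1]:
        t += 1.0 + rng.pareto(alpha)          # heavy-tailed waiting time
        x += rng.choice((-1, 1))              # unit jump
        jump_t.append(t)
        jump_x.append(x)
    idx = np.searchsorted(jump_t, times, side="right")
    return np.where(idx > 0, np.array(jump_x)[idx - 1], 0)

X = np.array([ctrw_positions(obs_times) for _ in range(n_traj)])
ens_msd = (X ** 2).mean(axis=0)               # <x^2(t)> over the ensemble

lag = 50                                      # lag, in sample indices
ta_msd = ((X[:, lag:] - X[:, :-lag]) ** 2).mean(axis=1)  # one per trajectory
print(round(float(ens_msd[-1]), 1), round(float(ta_msd.mean()), 1),
      round(float(ta_msd.std()), 1))
```

The nonzero spread of `ta_msd` across trajectories, for one and the same lag, is the single-trajectory signature of the ergodicity breaking described in the abstract.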
We provide a simple example that demonstrates the difference between these two averages, a difference that might be amenable to experimental tests. We focus on the case of ensemble averaging and assume that the preparation time of the system coincides with the starting time of the measurement. Our analytical calculations are supported by computer simulations based on the CTRW model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3156487','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3156487"><span>The Upper and Lower Bounds of the Prediction Accuracies of Ensemble Methods for Binary Classification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Xueyi; Davidson, Nicholas J.</p> <p>2011-01-01</p> <p>Ensemble methods have been widely used to improve prediction accuracy over individual classifiers. In this paper, we establish several results about the prediction accuracies of ensemble methods for binary classification that were missed or misinterpreted in previous literature. First we show the upper and lower bounds of the prediction accuracies (i.e. the best and worst possible prediction accuracies) of ensemble methods. Next we show that an ensemble method can achieve > 0.5 prediction accuracy, while individual classifiers have < 0.5 prediction accuracies. Furthermore, for individual classifiers with different prediction accuracies, the average of the individual accuracies determines the upper and lower bounds. We perform two experiments to verify the results and show that it is hard to achieve the upper and lower bound accuracies with random individual classifiers, and that better algorithms need to be developed. 
PMID:21853162</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19830055988&hterms=disorder+stress&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Ddisorder%2Bstress','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19830055988&hterms=disorder+stress&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Ddisorder%2Bstress"><span>Unsteady behavior of a reattaching shear layer</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Driver, D. M.; Seegmiller, H. L.; Marvin, J.</p> <p>1983-01-01</p> <p>A detailed investigation of the unsteadiness in a reattaching, turbulent shear layer is reported. Laser-Doppler velocimeter measurements were conditionally sampled on the basis of instantaneous flow direction near reattachment. Conditions of abnormally short reattachment and abnormally long reattachment were considered. Ensemble-averaging of measurements made during these conditions was used to obtain mean velocities and Reynolds stresses. In the mean flow, conditional streamlines show a global change in flow pattern which correlates with wall-flow direction. This motion can loosely be described as a 'flapping' of the shear layer. Tuft probes show that the flow direction reversals occur quite randomly and are short-lived. Stresses also vary with the change in flow pattern. Yet, the global 'flapping' motion does not appear to contribute significantly to the stress in the flow. A second type of unsteady motion was identified. Spectral analysis of both wall static pressure and streamwise velocity shows that most of the energy in the flow resides in frequencies that are significantly lower than that of the turbulence. 
The dominant frequency is at a Strouhal number equal to 0.2, which is the characteristic frequency of roll-up and pairing of vortical structures seen in free shear layers. It is conjectured that the 'flapping' is a disorder of the roll-up and pairing process occurring in the shear layer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5936712','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5936712"><span>Single cardiac ventricular myosins are autonomous motors</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Wang, Yihua; Yuan, Chen-Ching; Kazmierczak, Katarzyna; Szczesna-Cordary, Danuta</p> <p>2018-01-01</p> <p>Myosin transduces ATP free energy into mechanical work in muscle. Cardiac muscle has dynamically wide-ranging power demands on the motor as the muscle changes modes in a heartbeat from relaxation, via auxotonic shortening, to isometric contraction. The cardiac power output modulation mechanism is explored in vitro by assessing single cardiac myosin step-size selection versus load. Transgenic mice express human ventricular essential light chain (ELC) in wild-type (WT), or hypertrophic cardiomyopathy-linked mutant forms, A57G or E143K, in a background of mouse α-cardiac myosin heavy chain. Ensemble motility and single myosin mechanical characteristics are consistent with an A57G that impairs ELC N-terminus actin binding and an E143K that impairs lever-arm stability, while both species down-shift average step-size with increasing load. Cardiac myosin in vivo down-shifts velocity/force ratio with increasing load by changed unitary step-size selections. Here, the loaded in vitro single myosin assay indicates quantitative complementarity with the in vivo mechanism. 
Both have two embedded regulatory transitions, one inhibiting ADP release and a second novel mechanism inhibiting actin detachment via strain on the actin-bound ELC N-terminus. Competing regulators filter unitary step-size selection to control force-velocity modulation without myosin integration into muscle. Cardiac myosin is muscle in a molecule. PMID:29669825</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19840037170&hterms=attention+pictures&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dattention%2Bpictures','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19840037170&hterms=attention+pictures&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3Dattention%2Bpictures"><span>An experimental study of entrainment and transport in the turbulent near wake of a circular cylinder</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cantwell, B.; Coles, D.</p> <p>1983-01-01</p> <p>Attention is given to an experimental investigation of transport processes in the near wake of a circular cylinder, for a Reynolds number of 140,000, in which an X-array of hot wire probes mounted on a pair of whirling arms was used for flow measurement. Rotation of the arms in a uniform flow applies a wide range of relative flow angles to these X-arrays, making them inherently self-calibrating in pitch. A phase signal synchronized with the vortex-shedding process allowed a sorting of the velocity data into 16 populations, each having essentially constant phase. An ensemble average for each population yielded a sequence of pictures of the instantaneous mean flow field in which the vortices are frozen, as they would be on a photograph. 
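The phase-sorting procedure described above, 16 constant-phase populations each ensemble averaged, can be sketched on a synthetic signal; the experiment used hot-wire data with a shedding-synchronized phase reference, whereas the signal below is purely illustrative.

```python
import numpy as np

# Minimal sketch of phase-locked ensemble averaging: velocity samples are
# sorted into 16 constant-phase bins using a phase signal (here an exact
# synthetic shedding phase), then averaged within each bin.
rng = np.random.default_rng(2)
n = 40_000
phase = rng.uniform(0.0, 2.0 * np.pi, n)                   # shedding phase
u = 10.0 + 2.0 * np.sin(phase) + rng.normal(0.0, 1.0, n)   # velocity samples

n_bins = 16
bins = (phase / (2.0 * np.pi) * n_bins).astype(int) % n_bins
phase_avg = np.array([u[bins == b].mean() for b in range(n_bins)])

# phase_avg traces the periodic mean flow with the random part averaged
# out, i.e. one "frozen" picture of the shedding cycle per phase bin.
print(np.round(phase_avg, 2))
```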
The measurements also yield nonsteady mean data for velocity, intermittency, vorticity, stress, and turbulent energy production, as a function of phase. In the discussion of the results, emphasis is given to the nonsteady mean flow, which emerges as a pattern of centers and saddles in a frame of reference that moves with the eddies. The kinematics of the vortex formation process are described in terms of the formation and evolution of saddle points between vortices in the first few diameters of the near wake.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H21D1482L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H21D1482L"><span>Enhancing Flood Prediction Reliability Using Bayesian Model Averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Z.; Merwade, V.</p> <p>2017-12-01</p> <p>Uncertainty analysis is an indispensable part of modeling the hydrology and hydrodynamics of non-idealized environmental systems. Compared to relying on the prediction from a single model simulation, using an ensemble of predictions that accounts for uncertainty from different sources is more reliable. In this study, Bayesian model averaging (BMA) is applied to the Black River watershed in Arkansas and Missouri by combining multi-model simulations to obtain reliable deterministic water stage and probabilistic inundation extent predictions. The simulation ensemble is generated from 81 LISFLOOD-FP subgrid model configurations that include uncertainty from channel shape, channel width, channel roughness and discharge. Model simulation outputs are trained with observed water stage data during one flood event, and BMA prediction ability is validated for another flood event. 
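The BMA combination step can be sketched as follows. Real BMA estimates member weights and variances by EM over the training event; this simplified stand-in weights members by their Gaussian likelihood on a training series, and every number below is illustrative.

```python
import numpy as np

# Simplified sketch of Bayesian model averaging (BMA): each configuration's
# forecast is treated as a Gaussian predictive density, and member weights
# are estimated from a training (calibration) flood event.  Real BMA fits
# weights and variances by EM; here the weights are just normalized Gaussian
# likelihoods, and all values are illustrative.
obs_train = np.array([2.1, 2.5, 3.0, 2.8])      # observed stage, event 1 (m)
members = np.array([[2.0, 2.4, 3.1, 2.9],       # model configuration A
                    [2.6, 3.1, 3.5, 3.2],       # model configuration B
                    [1.2, 1.8, 2.1, 2.0]])      # model configuration C
sigma = 0.3                                     # assumed predictive spread (m)

loglik = -0.5 * ((members - obs_train) ** 2).sum(axis=1) / sigma**2
w = np.exp(loglik - loglik.max())
w /= w.sum()                                    # posterior model weights

new_forecasts = np.array([3.3, 3.9, 2.4])       # members' forecasts, event 2
bma_mean = float(w @ new_forecasts)             # BMA deterministic forecast
print(np.round(w, 3), round(bma_mean, 2))
```

The weighted mean is the deterministic BMA forecast; keeping the full weighted mixture of densities instead gives the probabilistic product mentioned in the abstract.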
Results from this study indicate that BMA does not always outperform all members in the ensemble, but it provides relatively robust deterministic flood stage predictions across the basin. Station-based BMA (BMA_S) water stage prediction performs better than global BMA (BMA_G) prediction, which in turn is superior to the ensemble mean prediction. Additionally, the high-frequency flood inundation extent (probability greater than 60%) in the BMA_G probabilistic map is more accurate than the probabilistic flood inundation extent based on equal weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JChPh.146x4112D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JChPh.146x4112D"><span>Girsanov reweighting for path ensembles and Markov state models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Donati, L.; Hartmann, C.; Keller, B. G.</p> <p>2017-06-01</p> <p>The sensitivity of molecular dynamics on changes in the potential energy function plays an important role in understanding the dynamics and function of complex molecules. We present a method to obtain path ensemble averages of a perturbed dynamics from a set of paths generated by a reference dynamics. It is based on the concept of path probability measure and the Girsanov theorem, a result from stochastic analysis to estimate a change of measure of a path ensemble. Since Markov state models (MSMs) of the molecular dynamics can be formulated as a combined phase-space and path ensemble average, the method can be extended to reweight MSMs by combining it with a reweighting of the Boltzmann distribution. 
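The Boltzmann-reweighting component mentioned last can be sketched with importance weights; the dynamical (Girsanov) factor along each path is omitted here, since it additionally involves the trajectory increments. The potential and parameters below are illustrative.

```python
import numpy as np

# Sketch of the equilibrium part of the reweighting: samples drawn from the
# Boltzmann distribution of a reference potential V are reweighted with
# factors exp(-beta * U) to estimate averages under a perturbed potential
# V + U.  (The path-wise Girsanov factor is not shown.)
rng = np.random.default_rng(4)
beta = 1.0

# Reference: harmonic potential V(x) = x^2 / 2  ->  samples ~ N(0, 1/beta).
x = rng.normal(0.0, 1.0 / np.sqrt(beta), 200_000)

# Perturbation U(x) = 0.5 * x tilts the distribution toward negative x;
# V + U is a shifted harmonic well with exact mean -1/2 for beta = 1.
U = 0.5 * x
w = np.exp(-beta * U)
w /= w.sum()                        # normalized importance weights

mean_ref = float(x.mean())          # ~ 0 under the reference potential
mean_pert = float(w @ x)            # ~ -0.5 under the perturbed potential
print(round(mean_ref, 3), round(mean_pert, 3))
```

For small perturbations this importance-weighting step is cheap, since the reference samples are reused; the full method combines it with the "on the fly" path reweighting described next.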
We demonstrate how to efficiently implement the Girsanov reweighting in a molecular dynamics simulation program by calculating parts of the reweighting factor "on the fly" during the simulation, and we benchmark the method on test systems ranging from a two-dimensional diffusion process and an artificial many-body system to alanine dipeptide and valine dipeptide in implicit and explicit water. The method can be used to study the sensitivity of molecular dynamics on external perturbations as well as to reweight trajectories generated by enhanced sampling schemes to the original dynamics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhDT........42J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhDT........42J"><span>Advanced analysis of complex seismic waveforms to characterize the subsurface Earth structure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jia, Tianxia</p> <p>2011-12-01</p> <p>This thesis includes three major parts, (1) Body wave analysis of mantle structure under the Calabria slab, (2) Spatial Average Coherency (SPAC) analysis of microtremor to characterize the subsurface structure in urban areas, and (3) Surface wave dispersion inversion for shear wave velocity structure. Although these three projects apply different techniques and investigate different parts of the Earth, their aims are the same, which is to better understand and characterize the subsurface Earth structure by analyzing complex seismic waveforms that are recorded on the Earth surface. My first project is body wave analysis of mantle structure under the Calabria slab. Its aim is to better understand the subduction structure of the Calabria slab by analyzing seismograms generated by natural earthquakes. 
The rollback and subduction of the Calabrian Arc beneath the southern Tyrrhenian Sea is a case study of slab morphology and slab-mantle interactions at short spatial scale. I analyzed the seismograms traversing the Calabrian slab and upper mantle wedge under the southern Tyrrhenian Sea through body wave dispersion, scattering and attenuation, recorded during the PASSCAL CAT/SCAN experiment. Compressional body waves exhibit dispersion correlating with slab paths, with high-frequency components arriving delayed relative to low-frequency components. Body wave scattering and attenuation are also spatially correlated with slab paths. I used this correlation to estimate the positions of slab boundaries, and further suggested that the observed spatial variation in near-slab attenuation could be ascribed to mantle flow patterns around the slab. My second project is Spatial Average Coherency (SPAC) analysis of microtremors for subsurface structure characterization. Shear-wave velocity (Vs) information in soil and rock has been recognized as a critical parameter for site-specific ground motion prediction studies, which are highly necessary for urban areas located in seismically active zones. SPAC analysis of microtremors provides an efficient way to estimate Vs structure. Compared with other Vs estimating methods, SPAC is noninvasive and does not require any active sources, and therefore, it is especially useful in big cities. I applied the SPAC method in two urban areas. The first is the historic city of Charleston, South Carolina, where high levels of seismic hazard lead to great public concern. Accurate Vs information, therefore, is critical for seismic site classification and site response studies. The second SPAC study is in Manhattan, New York City, where depths of high velocity contrast and soil-to-bedrock are different along the island. 
The two experiments show that the Vs structure can be estimated with good accuracy using the SPAC method, as verified against borehole and other techniques. SPAC thus proves to be an effective technique for Vs estimation in urban areas. One important issue in seismology is the inversion of subsurface structures from surface recordings of seismograms. My third project focuses on solving this complex geophysical inverse problem, specifically the inversion of surface wave phase velocity dispersion curves for shear wave velocity. In addition to standard linear inversion, I developed advanced inversion techniques including joint inversion using borehole data as constraints, and nonlinear inversion using Monte Carlo and Simulated Annealing algorithms. One innovative way of solving the inverse problem is to make inferences from the ensemble of all acceptable models. The statistical features of the ensemble provide a better way to characterize the Earth model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA611630','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA611630"><span>Particle-Based Simulations of Microscopic Thermal Properties of Confined Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2014-11-01</p> <p>[Figure caption fragment] Velocity versus electric field in gallium arsenide (GaAs) computed with the original CMC table structure (squares) at temperature T=150K, and the new... Keywords: computer-aided design; Cellular Monte Carlo; Ensemble Monte Carlo; gallium arsenide; Heat Transport Equation; DARPA (Defense Advanced Research Projects Agency)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017E%26ES...58a2019R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017E%26ES...58a2019R"><span>Model Averaging for Predicting the Exposure to Aflatoxin B1 Using
DNA Methylation in White Blood Cells of Infants</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rahardiantoro, S.; Sartono, B.; Kurnia, A.</p> <p>2017-03-01</p> <p>In recent years, DNA methylation has become a key subject of study for revealing the patterns of many human diseases. Huge amounts of data are an inescapable feature of this setting. In addition, researchers are often interested in making predictions based on these huge data sets, especially using regression analysis, and the classical approach fails at this task. Model averaging by Ando and Li [1] offers an alternative approach to this problem. This research applied model averaging to obtain the best prediction in high-dimensional data. As a practical case study, model averaging was applied to the data of Vargas et al. [3] on exposure to aflatoxin B1 (AFB1) and DNA methylation in white blood cells of infants in The Gambia. The best ensemble model was selected based on the minimum MAPE, MAE, and MSE of the predictions. The result is an ensemble model obtained by model averaging with 15 predictors in each candidate model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017WRR....53.7521J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017WRR....53.7521J"><span>Remote determination of the velocity index and mean streamwise velocity profiles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Johnson, E. D.; Cowen, E. A.</p> <p>2017-09-01</p> <p>When determining volumetric discharge from surface measurements of currents in a river or open channel, the velocity index is typically used to convert surface velocities to depth-averaged velocities.
The velocity index is given by k = Ub/Usurf, where Ub is the depth-averaged velocity and Usurf is the local surface velocity. The USGS (United States Geological Survey) standard value for this coefficient, k = 0.85, was determined from a series of laboratory experiments and has been widely used in field and laboratory measurements of volumetric discharge despite evidence that the velocity index is site-specific. Numerous studies have documented that the velocity index varies with Reynolds number, flow depth, and relative bed roughness, and with the presence of secondary flows. A remote method of determining the depth-averaged velocity, and hence the velocity index, is developed here. The technique leverages the findings of Johnson and Cowen (2017) and permits remote determination of the velocity power-law exponent, thereby enabling remote prediction of the vertical structure of the mean streamwise velocity, the depth-averaged velocity, and the velocity index.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1990BoLMe..52..313G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1990BoLMe..52..313G"><span>Mesoscale model response to random, surface-based perturbations — A sea-breeze experiment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Garratt, J. R.; Pielke, R. A.; Miller, W. F.; Lee, T. J.</p> <p>1990-09-01</p> <p>The introduction into a mesoscale model of random (in space) variations in roughness length, or random (in space and time) surface perturbations of temperature and friction velocity, produces a measurable, but barely significant, response in the simulated flow dynamics of the lower atmosphere. The perturbations are an attempt to include the effects of sub-grid variability into the ensemble-mean parameterization schemes used in many numerical models.
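The velocity-index relation k = Ub/Usurf from the Johnson and Cowen record has a simple closed form if the vertical profile is assumed to follow a power law, u(z) = Usurf*(z/h)**(1/m); depth-averaging the profile then gives k = m/(m+1). A minimal sketch (the exponent values below are illustrative, not from the paper):

```python
def velocity_index(m):
    """Velocity index k = Ub/Usurf for a power-law profile u(z) = Usurf*(z/h)**(1/m).

    Depth-averaging the profile over 0 <= z <= h gives
    Ub = Usurf * m/(m + 1), so k depends only on the exponent m.
    """
    return m / (m + 1.0)

# The classical 1/6th-power profile (m = 6) gives k ~ 0.857, close to the
# USGS standard value of 0.85 (which corresponds to m ~ 5.7).
k = velocity_index(6.0)
```

This makes the site-specificity noted above concrete: any process that changes the effective exponent m (roughness, Reynolds number, secondary flows) changes k.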
Their magnitude is set in our experiments by appeal to real-world observations of the spatial variations in roughness length and daytime surface temperature over the land on horizontal scales of one to several tens of kilometers. With sea-breeze simulations, comparisons of a number of realizations forced by roughness-length and surface-temperature perturbations with the standard simulation reveal no significant change in ensemble mean statistics, and only small changes in the sea-breeze vertical velocity. Changes in the updraft velocity for individual runs, of up to several cm s-1 (compared to a mean of 14 cm s-1), are directly the result of prefrontal temperature changes of 0.1 to 0.2K, produced by the random surface forcing. The correlation and magnitude of the changes are entirely consistent with a gravity-current interpretation of the sea breeze.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28208482','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28208482"><span>Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Safdari, Hadiseh; Cherstvy, Andrey G; Chechkin, Aleksei V; Bodrova, Anna; Metzler, Ralf</p> <p>2017-01-01</p> <p>We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM).
We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. 
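The time-averaged MSD central to this record and to the Green-Kubo record above, delta2(Delta) = mean over t of [x(t+Delta) - x(t)]^2 for a single trajectory of total length T, can be estimated as follows. The sketch uses an ordinary Brownian walk, not UDSBM, purely to exercise the estimator:

```python
import numpy as np

def time_averaged_msd(x, lags):
    """Time-averaged MSD of a single trajectory x sampled at unit time steps:
    delta2(lag) = mean over t of (x[t + lag] - x[t])**2, using all
    overlapping windows up to the total measurement time."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

# Sanity check on ordinary Brownian motion with unit step variance, where
# both the ensemble- and time-averaged MSD grow linearly in the lag time.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(0.0, 1.0, 100_000))
lags = np.array([1, 2, 4, 8])
tamsd = time_averaged_msd(x, lags)
```

For the nonstationary processes discussed in these records the result additionally depends on the total time T and on the aging time, which is exactly the disparity between ensemble and time averages the papers quantify.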
The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvE..95a2120S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvE..95a2120S"><span>Aging underdamped scaled Brownian motion: Ensemble- and time-averaged particle displacements, nonergodicity, and the failure of the overdamping approximation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Safdari, Hadiseh; Cherstvy, Andrey G.; Chechkin, Aleksei V.; Bodrova, Anna; Metzler, Ralf</p> <p>2017-01-01</p> <p>We investigate both analytically and by computer simulations the ensemble- and time-averaged, nonergodic, and aging properties of massive particles diffusing in a medium with a time dependent diffusivity. We call this stochastic diffusion process the (aging) underdamped scaled Brownian motion (UDSBM). We demonstrate how the mean squared displacement (MSD) and the time-averaged MSD of UDSBM are affected by the inertial term in the Langevin equation, both at short, intermediate, and even long diffusion times. In particular, we quantify the ballistic regime for the MSD and the time-averaged MSD as well as the spread of individual time-averaged MSD trajectories. One of the main effects we observe is that, both for the MSD and the time-averaged MSD, for superdiffusive UDSBM the ballistic regime is much shorter than for ordinary Brownian motion. In contrast, for subdiffusive UDSBM, the ballistic region extends to much longer diffusion times. Therefore, particular care needs to be taken under what conditions the overdamped limit indeed provides a correct description, even in the long time limit. 
We also analyze to what extent ergodicity in the Boltzmann-Khinchin sense in this nonstationary system is broken, both for subdiffusive and superdiffusive UDSBM. Finally, the limiting case of ultraslow UDSBM is considered, with a mixed logarithmic and power-law dependence of the ensemble- and time-averaged MSDs of the particles. In the limit of strong aging, remarkably, the ordinary UDSBM and the ultraslow UDSBM behave similarly in the short time ballistic limit. The approaches developed here open ways for considering other stochastic processes under physically important conditions when a finite particle mass and aging in the system cannot be neglected.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21362155-schur-polynomials-biorthogonal-random-matrix-ensembles','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21362155-schur-polynomials-biorthogonal-random-matrix-ensembles"><span>Schur polynomials and biorthogonal random matrix ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Tierz, Miguel</p> <p></p> <p>The study of the average of Schur polynomials over a Stieltjes-Wigert ensemble has been carried out by Dolivet and Tierz [J. Math. Phys. 48, 023507 (2007); e-print arXiv:hep-th/0609167], where it was shown that it is equal to quantum dimensions. Using the same approach, we extend the result to the biorthogonal case. We also study, using the Littlewood-Richardson rule, some particular cases of the quantum dimension result. Finally, we show that the notion of Giambelli compatibility of Schur averages, introduced by Borodin et al. [Adv. Appl. Math. 
37, 209 (2006); e-print arXiv:math-ph/0505021], also holds in the biorthogonal setting.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhRvE..91d2107S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhRvE..91d2107S"><span>Aging scaled Brownian motion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Safdari, Hadiseh; Chechkin, Aleksei V.; Jafari, Gholamreza R.; Metzler, Ralf</p> <p>2015-04-01</p> <p>Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. 
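The scaled Brownian motion of the record above can be simulated directly: it is Langevin dynamics with a power-law time-dependent noise strength D(t) = alpha*d0*t**(alpha-1), giving the anomalous ensemble-averaged MSD <x^2>(t) = 2*d0*t**alpha. A minimal sketch (d0, alpha, and the discretization are illustrative choices, not from the paper):

```python
import numpy as np

def simulate_sbm(alpha, n_traj, n_steps, dt=1e-3, d0=0.5, seed=1):
    """Scaled Brownian motion: Gaussian increments whose variance is the
    exact integral of 2*D(t)*dt over each step with D(t) = alpha*d0*t**(alpha-1),
    i.e. 2*d0*(t_i**alpha - t_{i-1}**alpha).  Using the exact integral also
    avoids the singularity of D(t) at t = 0 for alpha < 1."""
    rng = np.random.default_rng(seed)
    t_edges = dt * np.arange(n_steps + 1)
    var = 2.0 * d0 * np.diff(t_edges ** alpha)   # per-step increment variance
    steps = rng.normal(0.0, 1.0, (n_traj, n_steps)) * np.sqrt(var)
    return t_edges[1:], np.cumsum(steps, axis=1)

# Subdiffusive case alpha = 0.5: the ensemble-averaged MSD should follow
# 2*d0*t**0.5, i.e. reach 1.0 at t = 1.
t, x = simulate_sbm(alpha=0.5, n_traj=5000, n_steps=1000)
emsd = np.mean(x ** 2, axis=0)
```

Aging enters by starting the measurement at some t_a > 0 rather than at the origin, which is where the weak nonergodicity discussed in the abstract shows up.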
Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25974439','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25974439"><span>Aging scaled Brownian motion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Safdari, Hadiseh; Chechkin, Aleksei V; Jafari, Gholamreza R; Metzler, Ralf</p> <p>2015-04-01</p> <p>Scaled Brownian motion (SBM) is widely used to model anomalous diffusion of passive tracers in complex and biological systems. It is a highly nonstationary process governed by the Langevin equation for Brownian motion, however, with a power-law time dependence of the noise strength. Here we study the aging properties of SBM for both unconfined and confined motion. Specifically, we derive the ensemble and time averaged mean squared displacements and analyze their behavior in the regimes of weak, intermediate, and strong aging. A very rich behavior is revealed for confined aging SBM depending on different aging times and whether the process is sub- or superdiffusive. We demonstrate that the information on the aging factorizes with respect to the lag time and exhibits a functional form that is identical to the aging behavior of scale-free continuous time random walk processes. While SBM exhibits a disparity between ensemble and time averaged observables and is thus weakly nonergodic, strong aging is shown to effect a convergence of the ensemble and time averaged mean squared displacement. 
Finally, we derive the density of first passage times in the semi-infinite domain that features a crossover defined by the aging time.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H41H1542A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H41H1542A"><span>Climatic Models Ensemble-based Mid-21st Century Runoff Projections: A Bayesian Framework</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Achieng, K. O.; Zhu, J.</p> <p>2017-12-01</p> <p>There are a number of North American Regional Climate Change Assessment Program (NARCCAP) climatic models that have been used to project surface runoff in the mid-21st century. Statistical model selection techniques are often used to select the model that best fits the data. However, model selection techniques often lead to different conclusions. In this study, ten models are averaged in a Bayesian paradigm to project runoff. Bayesian Model Averaging (BMA) is used to project runoff and to identify the effect of model uncertainty on future runoff projections. Baseflow separation - a two-parameter digital filter also called the Eckhardt filter - is used to separate USGS streamflow (total runoff) into two components: baseflow and surface runoff. We use this surface runoff as the a priori runoff when conducting BMA of runoff simulated from the ten RCM models. The primary objective of this study is to evaluate how well RCM multi-model ensembles simulate surface runoff, in a Bayesian framework. Specifically, we investigate and discuss the following questions: How well does the ensemble of ten RCM models jointly simulate surface runoff when averaging over all the models using BMA, given the a priori surface runoff?
What are the effects of model uncertainty on surface runoff simulation?</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_13 --> <div id="page_14" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="261"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25615090','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25615090"><span>Gaussian memory in kinematic matrix theory for self-propellers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nourhani, Amir; Crespi, Vincent H; Lammert, Paul E</p> <p>2014-12-01</p> <p>We extend the kinematic matrix ("kinematrix") formalism [Phys. Rev.
E 89, 062304 (2014)], which via simple matrix algebra accesses ensemble properties of self-propellers influenced by uncorrelated noise, to treat Gaussian correlated noises. This extension brings into reach many real-world biological and biomimetic self-propellers for which inertia is significant. Applying the formalism, we analyze in detail ensemble behaviors of a 2D self-propeller with velocity fluctuations and orientation evolution driven by an Ornstein-Uhlenbeck process. On the basis of exact results, a variety of dynamical regimes determined by the inertial, speed-fluctuation, orientational diffusion, and emergent disorientation time scales are delineated and discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvA..96f3832L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvA..96f3832L"><span>Single-photon superradiant beating from a Doppler-broadened ladder-type atomic ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, Yoon-Seok; Lee, Sang Min; Kim, Heonoh; Moon, Han Seb</p> <p>2017-12-01</p> <p>We report on heralded-single-photon superradiant beating in the spontaneous four-wave mixing process of Doppler-broadened ladder-type 87Rb atoms. When Doppler-broadened atoms contribute to two-photon coherence, the detection probability amplitudes of the heralded single photons are coherently superposed despite inhomogeneous broadened atomic media. Single-photon superradiant beating is observed, which constitutes evidence for the coherent superposition of two-photon amplitudes from different velocity classes in the Doppler-broadened atomic ensemble. 
We present a theoretical model in which the single-photon superradiant beating originates from the interference between wavelength-separated two-photon amplitudes via the reabsorption filtering effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1818469S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1818469S"><span>Ensemble hydro-meteorological forecasting for early warning of floods and scheduling of hydropower production</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Solvang Johansen, Stian; Steinsland, Ingelin; Engeland, Kolbjørn</p> <p>2016-04-01</p> <p>Running hydrological models with precipitation and temperature ensemble forcing to generate ensembles of streamflow is a commonly used method in operational hydrology. Evaluations of streamflow ensembles have however revealed that the ensembles are biased with respect to both mean and spread. Thus postprocessing of the ensembles is needed in order to improve the forecast skill. The aims of this study are (i) to evaluate how postprocessing of streamflow ensembles works for Norwegian catchments within different hydrological regimes and (ii) to demonstrate how postprocessed streamflow ensembles are used operationally by a hydropower producer. These aims were achieved by postprocessing forecasted daily discharge at 10 lead times for 20 catchments in Norway, using EPS forcing from ECMWF applied to the semi-distributed HBV model, with each catchment divided into 10 elevation zones. Statkraft Energi uses forecasts from these catchments for scheduling hydropower production. The catchments represent different hydrological regimes. Some catchments have stable winter conditions with winter low flow and a major flood event during spring or early summer caused by snow melting.
Others have a more mixed snow-rain regime, often with a secondary flood season during autumn; in the coastal areas, the streamflow is dominated by rain, and the main flood season is autumn and winter. For postprocessing, a Bayesian model averaging (BMA) model close to that of Kleiber et al. (2011) is used. The model creates a predictive PDF that is a weighted average of PDFs centered on the individual bias-corrected forecasts. The weights here are equal, since all ensemble members come from the same model and thus have the same probability. For modeling streamflow, the gamma distribution is chosen as the predictive PDF. The bias-correction parameters and the PDF parameters are estimated using a 30-day sliding-window training period. Preliminary results show that the improvement varies between catchments depending on their location and hydrological regime. There is an improvement in CRPS for all catchments compared to the raw EPS ensembles, persisting up to lead times of 5-7 days. The postprocessing also improves the MAE of the median of the predictive PDF compared to the median of the raw EPS, but less so than the CRPS, often only up to lead times of 2-3 days. The streamflow ensembles are to some extent used operationally at Statkraft Energi (a hydropower company in Norway) for early warning, risk assessment, and decision-making. Presently all forecasts used operationally for short-term scheduling are deterministic, but ensembles are used visually for expert assessment of risk in difficult situations where, e.g., there is a chance of overflow in a reservoir.
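The CRPS used in this record to score the postprocessed ensembles can be estimated directly from the ensemble members via the kernel identity CRPS = E|X - y| - (1/2) E|X - X'|, where y is the observation and X, X' are independent draws from the forecast. A minimal sketch with made-up forecast values:

```python
import numpy as np

def ensemble_crps(members, obs):
    """Kernel form of the CRPS for an ensemble forecast:
    CRPS = E|X - obs| - 0.5 * E|X - X'|, with both expectations
    estimated empirically over the ensemble members.  Lower is better."""
    m = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(m - obs))
    term2 = 0.5 * np.mean(np.abs(m[:, None] - m[None, :]))
    return term1 - term2

# A sharp, well-centred ensemble scores lower (better) than a biased one.
obs = 2.0
good = ensemble_crps([1.8, 1.9, 2.0, 2.1, 2.2], obs)  # centred on obs
bad = ensemble_crps([3.8, 3.9, 4.0, 4.1, 4.2], obs)   # same spread, biased +2
```

The second term rewards spread, which is why CRPS, unlike MAE of the median alone, penalizes both bias and over- or under-dispersion.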
However, there are plans to incorporate ensembles in the daily scheduling of hydropower production.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.fs.usda.gov/treesearch/pubs/42684','TREESEARCH'); return false;" href="https://www.fs.usda.gov/treesearch/pubs/42684"><span>Unlocking the climate riddle in forested ecosystems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.fs.usda.gov/treesearch/">Treesearch</a></p> <p>Greg C. Liknes; Christopher W. Woodall; Brian F. Walters; Sara A. Goeking</p> <p>2012-01-01</p> <p>Climate information is often used as a predictor in ecological studies, where temporal averages are typically based on climate normals (30-year means) or seasonal averages. While ensemble projections of future climate forecast a higher global average annual temperature, they also predict increased climate variability. It remains to be seen whether forest ecosystems...</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2003JApMe..42..308D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2003JApMe..42..308D"><span>Evaluation of an Ensemble Dispersion Calculation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Draxler, Roland R.</p> <p>2003-02-01</p> <p>A Lagrangian transport and dispersion model was modified to generate multiple simulations from a single meteorological dataset. Each member of the simulation was computed by assuming a ±1-gridpoint shift in the horizontal direction and a ±250-m shift in the vertical direction of the particle position, with respect to the meteorological data. The configuration resulted in 27 ensemble members. Each member was assumed to have an equal probability. 
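The 27-member construction described in the Draxler record above (every combination of a ±1 grid-point horizontal shift and a ±250 m vertical shift, with equal weights) can be enumerated directly; the dictionary layout below is illustrative, not from the paper:

```python
from itertools import product

# Enumerate the 27 equally weighted ensemble members: every combination of
# a -1/0/+1 grid-point shift in each horizontal direction and a -250/0/+250 m
# vertical shift of the particle position relative to the meteorological data.
DX = (-1, 0, 1)            # grid points, east-west
DY = (-1, 0, 1)            # grid points, north-south
DZ = (-250.0, 0.0, 250.0)  # metres, vertical

members = [{"dx": dx, "dy": dy, "dz": dz, "weight": 1.0 / 27.0}
           for dx, dy, dz in product(DX, DY, DZ)]
```

One member (dx = dy = 0, dz = 0) is the unperturbed run; the other 26 probe the model's sensitivity to meteorological-data placement.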
The model was tested by creating an ensemble of daily average air concentrations for 3 months at 75 measurement locations over the eastern half of the United States during the Across North America Tracer Experiment (ANATEX). Two generic graphical displays were developed to summarize the ensemble prediction and the resulting concentration probabilities for a specific event: a probability-exceed plot and a concentration-probability plot. Although a cumulative distribution of the ensemble probabilities compared favorably with the measurement data, the resulting distribution was not uniform. This result was attributed to release height sensitivity. The trajectory ensemble approach accounts for about 41%-47% of the variance in the measurement data. This residual uncertainty is caused by other model and data errors that are not included in the ensemble design.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29579536','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29579536"><span>Novel forecasting approaches using combination of machine learning and statistical models for flood susceptibility mapping.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Shafizadeh-Moghadam, Hossein; Valavi, Roozbeh; Shahabi, Himan; Chapi, Kamran; Shirzadi, Ataollah</p> <p>2018-07-01</p> <p>In this research, eight individual machine learning and statistical models are implemented and compared, and based on their results, seven ensemble models for flood susceptibility assessment are introduced. 
The individual models included artificial neural networks, classification and regression trees, flexible discriminant analysis, generalized linear model, generalized additive model, boosted regression trees, multivariate adaptive regression splines, and maximum entropy, and the ensemble models were Ensemble Model committee averaging (EMca), Ensemble Model confidence interval Inferior (EMciInf), Ensemble Model confidence interval Superior (EMciSup), Ensemble Model to estimate the coefficient of variation (EMcv), Ensemble Model to estimate the mean (EMmean), Ensemble Model to estimate the median (EMmedian), and Ensemble Model based on weighted mean (EMwmean). The data set covered 201 flood events in the Haraz watershed (Mazandaran province in Iran) and 10,000 randomly selected non-occurrence points. Among the individual models, the highest Area Under the Receiver Operating Characteristic curve (AUROC) belonged to boosted regression trees (0.975), and the lowest value was recorded for the generalized linear model (0.642). On the other hand, the proposed EMmedian resulted in the highest accuracy (0.976) among all models. In spite of the outstanding performance of some models, variability among the predictions of the individual models was considerable. Therefore, to reduce uncertainty and to create more generalizable, more stable, and less sensitive models, ensemble forecasting approaches, and in particular the EMmedian, are recommended for flood susceptibility assessment. Copyright © 2018 Elsevier Ltd.
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014EGUGA..16.2811P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014EGUGA..16.2811P"><span>Using Bayes Model Averaging for Wind Power Forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Preede Revheim, Pål; Beyer, Hans Georg</p> <p>2014-05-01</p> <p>For operational purposes, forecasts of the lumped output of groups of wind farms spread over larger geographic areas will often be of interest. A naive approach is to make forecasts for each individual site and sum them up to get the group forecast. It is however well documented that a better choice is to use a model that also takes advantage of spatial smoothing effects. It might however be the case that some sites tend to more accurately reflect the total output of the region, either in general or for certain wind directions. It is then of interest to give these a greater influence over the group forecast. Bayesian model averaging (BMA) is a statistical post-processing method for producing probabilistic forecasts from ensembles. Raftery et al. [1] show how BMA can be used for statistical post-processing of forecast ensembles, producing PDFs of future weather quantities. The BMA predictive PDF of a future weather quantity is a weighted average of the ensemble members' PDFs, where the weights can be interpreted as posterior probabilities and reflect the ensemble members' contribution to overall forecasting skill over a training period. In Revheim and Beyer [2] the BMA procedure used in Sloughter, Gneiting and Raftery [3] was found to produce fairly accurate PDFs for the future mean wind speed of a group of sites from the single-site wind speeds.
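The BMA predictive PDF described here, a weighted average of kernels centred on the bias-corrected member forecasts, can be sketched as a Gaussian mixture. All numbers below (weights, sigma, forecast values) are made up for illustration, and note that Sloughter, Gneiting and Raftery [3] actually use a gamma kernel for wind speed rather than a Gaussian:

```python
import numpy as np

def bma_pdf(x, forecasts, weights, sigma):
    """BMA predictive density evaluated on grid x: a weighted mixture of
    Gaussian kernels centred on the (already bias-corrected) member
    forecasts, with common kernel width sigma."""
    f = np.asarray(forecasts, dtype=float)
    w = np.asarray(weights, dtype=float)
    kernels = np.exp(-0.5 * ((x - f[:, None]) / sigma) ** 2) \
        / (sigma * np.sqrt(2.0 * np.pi))
    return w @ kernels  # same shape as x

# Hypothetical three-member group wind-speed forecast (m/s).
x = np.linspace(0.0, 20.0, 401)
pdf = bma_pdf(x, forecasts=[8.0, 10.0, 12.0], weights=[0.5, 0.3, 0.2], sigma=1.5)
```

In the operational setting the weights and sigma would be estimated over a training period (e.g. by EM, as in Raftery et al. [1]) rather than fixed by hand.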
However, when the procedure was applied to wind power, it resulted in either problems with the estimation of the parameters (mainly caused by longer consecutive periods of no power production) or severe underestimation (mainly caused by problems with reflecting the power curve). In this paper the problems that arose when applying BMA to wind power forecasting are addressed through two strategies. First, the BMA procedure is run with a combination of single-site wind speeds and single-site wind power production as input. This solves the problem with longer consecutive periods where the input data does not contain information, but it has the disadvantage of nearly doubling the number of model parameters to be estimated. Second, the BMA procedure is run with group mean wind power as the response variable instead of group mean wind speed. This also solves the problem with longer consecutive periods without information in the input data, but it leaves the power curve to be estimated from the data as well. [1] Raftery, A. E., et al. (2005). Using Bayesian Model Averaging to Calibrate Forecast Ensembles. Monthly Weather Review, 133, 1155-1174. [2] Revheim, P. P. and H. G. Beyer (2013). Using Bayesian Model Averaging for wind farm group forecasts. EWEA Wind Power Forecasting Technology Workshop, Rotterdam, 4-5 December 2013. [3] Sloughter, J. M., T. Gneiting and A. E. Raftery (2010). Probabilistic Wind Speed Forecasting Using Ensembles and Bayesian Model Averaging. Journal of the American Statistical Association, Vol. 105, No. 
489, 25-35</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4233720','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4233720"><span>The interplay between cooperativity and diversity in model threshold ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Cervera, Javier; Manzanares, José A.; Mafe, Salvador</p> <p>2014-01-01</p> <p>The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. 
PMID:25142516</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.7963E..15H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.7963E..15H"><span>Sampling-based ensemble segmentation against inter-operator variability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew</p> <p>2011-03-01</p> <p>Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated by a controlled experiment on 16 tumor cases from a multicenter drug trial. 
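Of the three fusion rules mentioned above, majority voting is the simplest to sketch: a pixel belongs to the consensus mask when more than half of the perturbed segmentations mark it. A minimal illustration with a hypothetical helper, not the study's actual code:

```python
import numpy as np

def majority_vote(segmentations):
    """Fuse binary segmentation masks into a consensus mask: a pixel is
    foreground when more than half of the ensemble marks it."""
    stack = np.stack([np.asarray(s, dtype=bool) for s in segmentations])
    return stack.sum(axis=0) > stack.shape[0] / 2.0
```

The same vote count divided by the ensemble size also gives the per-pixel agreement level, which averaging-style fusion thresholds instead of the raw votes.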
The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JGRD..123.3443T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JGRD..123.3443T"><span>A Simple Ensemble Simulation Technique for Assessment of Future Variations in Specific High-Impact Weather Events</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Taniguchi, Kenji</p> <p>2018-04-01</p> <p>To investigate future variations in high-impact weather events, numerous samples are required. For a detailed assessment in a specific region, a high spatial resolution is also required. A simple ensemble simulation technique is proposed in this paper. In the proposed technique, new ensemble members were generated from one basic state vector and two perturbation vectors, which were obtained by lagged average forecasting simulations. Sensitivity experiments with different numbers of ensemble members, different simulation lengths, and different perturbation magnitudes were performed. Experimental application to a global warming study was also implemented for a typhoon event. Ensemble-mean results and ensemble spreads of total precipitation and atmospheric conditions showed similar characteristics across the sensitivity experiments. The frequencies of the maximum total and hourly precipitation also showed similar distributions. These results indicate the robustness of the proposed technique. On the other hand, considerable ensemble spread was found in each ensemble experiment. In addition, the results of the application to a global warming study showed possible variations in the future. 
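The member-generation rule described above, one basic state plus scaled combinations of two perturbation vectors, can be illustrated as follows. This is a hedged sketch; the function name and coefficient choices are assumptions, not taken from the paper:

```python
import numpy as np

def make_ensemble(basic_state, pert1, pert2, coeffs):
    """Build ensemble initial states x_i = x_b + a_i * p1 + b_i * p2
    from one basic state vector and two perturbation vectors."""
    return [basic_state + a * pert1 + b * pert2 for a, b in coeffs]
```

Choosing the coefficients in symmetric plus/minus pairs keeps the ensemble mean centered on the basic state while still spanning both perturbation directions.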
These results indicate that the proposed technique is useful for investigating various meteorological phenomena and the impacts of global warming. The results of the ensemble simulations also enable the stochastic evaluation of differences in high-impact weather events. In addition, the impacts of a spectral nudging technique were also examined. The tracks of a typhoon were quite different between cases with and without spectral nudging; however, the ranges of the tracks among ensemble members were comparable. This indicates that spectral nudging does not necessarily suppress ensemble spread.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhA...50K5501M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhA...50K5501M"><span>A formal derivation of the local energy transfer (LET) theory of homogeneous turbulence</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McComb, W. D.; Yoffe, S. R.</p> <p>2017-09-01</p> <p>A statistical closure of the Navier-Stokes hierarchy which leads to equations for the two-point, two-time covariance of the velocity field for stationary, homogeneous isotropic turbulence is presented. It is a generalisation of the self-consistent field method due to Edwards (1964) for the stationary, single-time velocity covariance. The probability distribution functional P\left[\mathbf{u},t\right] is obtained, in the form of a series, from the Liouville equation by means of a perturbation expansion about a Gaussian distribution, which is chosen to give the exact two-point, two-time covariance. The triple moment is calculated in terms of an ensemble-averaged infinitesimal velocity-field propagator, and shown to yield the Edwards result as a special case. 
The use of a Gaussian zero-order distribution has been found to justify the introduction of a fluctuation-response relation, which is in accord with modern dynamical theories. In a sense this work completes the analogy drawn by Edwards between turbulence and Brownian motion. Originally Edwards had shown that the noise input was determined by the correlation of the velocity field with the externally applied stirring forces but was unable to determine the system response. Now we find that the system response is determined by the correlation of the velocity field with internal quasi-entropic forces. This analysis is valid to all orders of perturbation theory, and allows the recovery of the local energy transfer (LET) theory, which had previously been derived by more heuristic methods. The LET theory is known to be in good agreement with experimental results. It is also unique among two-point statistical closures in displaying an acceptable (i.e. non-Markovian) relationship between the transfer spectrum and the system response, in accordance with experimental results. As a result of the latter property, it is compatible with the Kolmogorov (K41) spectral phenomenology. In memory of Professor Sir Sam Edwards F.R.S. 1928-2015.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26473882','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26473882"><span>The quasi-biennial vertical oscillations at global GPS stations: identification by ensemble empirical mode decomposition.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Pan, Yuanjin; Shen, Wen-Bin; Ding, Hao; Hwang, Cheinway; Li, Jin; Zhang, Tengxu</p> <p>2015-10-14</p> <p>Modeling nonlinear vertical components of a GPS time series is critical to separating sources contributing to mass displacements. 
Improved vertical precision in GPS positioning at stations for velocity fields is key to resolving the mechanism of certain geophysical phenomena. In this paper, we use ensemble empirical mode decomposition (EEMD) to analyze the daily GPS time series at 89 continuous GPS stations, spanning from 2002 to 2013. EEMD decomposes a GPS time series into different intrinsic mode functions (IMFs), which are used to identify different kinds of signals and secular terms. Our study suggests that the GPS records contain not only the well-known signals (such as semi-annual and annual signals) but also the seldom-noted quasi-biennial oscillations (QBS). The quasi-biennial signals are explained by modeled loadings of atmosphere, non-tidal and hydrology that deform the surface around the GPS stations. In addition, the loadings derived from GRACE gravity changes are also consistent with the quasi-biennial deformations derived from the GPS observations. By removing the modeled components, the weighted root-mean-square (WRMS) variation of the GPS time series is reduced by 7.1% to 42.3%, and especially, after removing the seasonal and QBO signals, the average improvement percentages for seasonal and QBO signals are 25.6% and 7.5%, respectively, suggesting that it is significant to consider the QBS signals in the GPS records to improve the observed vertical deformations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4634412','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4634412"><span>The Quasi-Biennial Vertical Oscillations at Global GPS Stations: Identification by Ensemble Empirical Mode Decomposition</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Pan, Yuanjin; Shen, Wen-Bin; Ding, Hao; Hwang, Cheinway; Li, Jin; 
Zhang, Tengxu</p> <p>2015-01-01</p> <p>Modeling nonlinear vertical components of a GPS time series is critical to separating sources contributing to mass displacements. Improved vertical precision in GPS positioning at stations for velocity fields is key to resolving the mechanism of certain geophysical phenomena. In this paper, we use ensemble empirical mode decomposition (EEMD) to analyze the daily GPS time series at 89 continuous GPS stations, spanning from 2002 to 2013. EEMD decomposes a GPS time series into different intrinsic mode functions (IMFs), which are used to identify different kinds of signals and secular terms. Our study suggests that the GPS records contain not only the well-known signals (such as semi-annual and annual signals) but also the seldom-noted quasi-biennial oscillations (QBS). The quasi-biennial signals are explained by modeled loadings of atmosphere, non-tidal and hydrology that deform the surface around the GPS stations. In addition, the loadings derived from GRACE gravity changes are also consistent with the quasi-biennial deformations derived from the GPS observations. By removing the modeled components, the weighted root-mean-square (WRMS) variation of the GPS time series is reduced by 7.1% to 42.3%, and especially, after removing the seasonal and QBO signals, the average improvement percentages for seasonal and QBO signals are 25.6% and 7.5%, respectively, suggesting that it is significant to consider the QBS signals in the GPS records to improve the observed vertical deformations. 
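The WRMS reduction percentages quoted in this abstract reduce to a simple weighted statistic on the residual series before and after subtracting a modeled component. A minimal sketch with hypothetical helper names, not the authors' software:

```python
import numpy as np

def wrms(residuals, sigmas):
    """Weighted root-mean-square of a residual series, weights 1/sigma^2."""
    r = np.asarray(residuals, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    return float(np.sqrt(np.sum(w * r ** 2) / np.sum(w)))

def wrms_reduction_percent(raw, corrected, sigmas):
    """Percent drop in WRMS after subtracting a modeled component."""
    return 100.0 * (1.0 - wrms(corrected, sigmas) / wrms(raw, sigmas))
```

A positive reduction means the modeled loading (or the seasonal/QBO signal) explained part of the scatter in the vertical coordinate time series.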
PMID:26473882</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29564429','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29564429"><span>Cell-cell bioelectrical interactions and local heterogeneities in genetic networks: a model for the stabilization of single-cell states and multicellular oscillations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervera, Javier; Manzanares, José A; Mafe, Salvador</p> <p>2018-04-04</p> <p>Genetic networks operate in the presence of local heterogeneities in single-cell transcription and translation rates. Bioelectrical networks and spatio-temporal maps of cell electric potentials can influence multicellular ensembles. Could cell-cell bioelectrical interactions mediated by intercellular gap junctions contribute to the stabilization of multicellular states against local genetic heterogeneities? We theoretically analyze this question on the basis of two well-established experimental facts: (i) the membrane potential is a reliable read-out of the single-cell electrical state and (ii) when the cells are coupled together, their individual cell potentials can be influenced by ensemble-averaged electrical potentials. We propose a minimal biophysical model for the coupling between genetic and bioelectrical networks that associates the local changes occurring in the transcription and translation rates of an ion channel protein with abnormally low (depolarized) cell potentials. We then analyze the conditions under which the depolarization of a small region (patch) in a multicellular ensemble can be reverted by its bioelectrical coupling with the (normally polarized) neighboring cells. 
We show also that the coupling between genetic and bioelectric networks of non-excitable cells, modulated by average electric potentials at the multicellular ensemble level, can produce oscillatory phenomena. The simulations show the importance of single-cell potentials characteristic of polarized and depolarized states, the relative sizes of the abnormally polarized patch and the rest of the normally polarized ensemble, and intercellular coupling.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20160007389','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20160007389"><span>"Intelligent Ensemble" Projections of Precipitation and Surface Radiation in Support of Agricultural Climate Change Adaptation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Taylor, Patrick C.; Baker, Noel C.</p> <p>2015-01-01</p> <p>Earth's climate is changing and will continue to change into the foreseeable future. Expected changes in the climatological distribution of precipitation, surface temperature, and surface solar radiation will significantly impact agriculture. Adaptation strategies are, therefore, required to reduce the agricultural impacts of climate change. Climate change projections of precipitation, surface temperature, and surface solar radiation distributions are necessary input for adaption planning studies. These projections are conventionally constructed from an ensemble of climate model simulations (e.g., the Coupled Model Intercomparison Project 5 (CMIP5)) as an equal weighted average, one model one vote. Each climate model, however, represents the array of climate-relevant physical processes with varying degrees of fidelity influencing the projection of individual climate variables differently. 
Presented here is a new approach, termed the "Intelligent Ensemble," that constructs climate variable projections by weighting each model according to its ability to represent key physical processes, e.g., precipitation probability distribution. This approach provides added value over the equal-weighted-average method. Physical process metrics applied in the "Intelligent Ensemble" method are created using a combination of NASA and NOAA satellite and surface-based cloud, radiation, temperature, and precipitation data sets. The "Intelligent Ensemble" method is applied to the RCP4.5 and RCP8.5 anthropogenic climate forcing simulations within the CMIP5 archive to develop a set of climate change scenarios for precipitation, temperature, and surface solar radiation in each USDA Farm Resource Region for use in climate change adaptation studies.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AGUFMGC41D0850B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AGUFMGC41D0850B"><span>A short-term ensemble wind speed forecasting system for wind power applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Baidya Roy, S.; Traiteur, J. J.; Callicutt, D.; Smith, M.</p> <p>2011-12-01</p> <p>This study develops an adaptive, blended forecasting system to provide accurate wind speed forecasts 1 hour ahead of time for wind power applications. The system consists of an ensemble of 21 forecasts with different configurations of the Weather Research and Forecasting Single Column Model (WRFSCM) and a persistence model. The ensemble is calibrated against observations for a 2-month period (June-July, 2008) at a potential wind farm site in Illinois using the Bayesian Model Averaging (BMA) technique. 
The forecasting system is evaluated against observations for August 2008 at the same site. The calibrated ensemble forecasts significantly outperform the forecasts from the uncalibrated ensemble while significantly reducing forecast uncertainty under all environmental stability conditions. The system also generates significantly better forecasts than persistence, autoregressive (AR) and autoregressive moving average (ARMA) models during the morning transition and the diurnal convective regimes. This forecasting system is computationally more efficient than traditional numerical weather prediction models and can generate a calibrated forecast, including model runs and calibration, in approximately 1 minute. Currently, hour-ahead wind speed forecasts are almost exclusively produced using statistical models. However, numerical models have several distinct advantages over statistical models including the potential to provide turbulence forecasts. Hence, there is an urgent need to explore the role of numerical models in short-term wind speed forecasting. This work is a step in that direction and is likely to trigger a debate within the wind speed forecasting community.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70194115','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70194115"><span>Flow over bedforms in a large sand-bed river: A field investigation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Holmes, Robert R.; Garcia, Marcelo H.</p> <p>2008-01-01</p> <p>An experimental field study of flows over bedforms was conducted on the Missouri River near St. Charles, Missouri. Detailed velocity data were collected under two different flow conditions along bedforms in this sand-bed river. 
The large river-scale data reflect flow characteristics similar to those of laboratory-scale flows, with flow separation occurring downstream of the bedform crest and flow reattachment on the stoss side of the next downstream bedform. Wave-like responses of the flow to the bedforms were detected, with the velocity decreasing throughout the flow depth over bedform troughs, and the velocity increasing over bedform crests. Local and spatially averaged velocity distributions were logarithmic for both datasets. The reach-wise spatially averaged vertical-velocity profile from the standard velocity-defect model was evaluated. The vertically averaged mean flow velocities for the velocity-defect model were within 5% of the measured values and estimated spatially averaged point velocities were within 10% for the upper 90% of the flow depth. The velocity-defect model, neglecting the wake function, was evaluated and found to estimate the vertically averaged mean velocity within 1% of the measured values.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011PhRvE..83e6216B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011PhRvE..83e6216B"><span>Fidelity decay in interacting two-level boson systems: Freezing and revivals</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.</p> <p>2011-05-01</p> <p>We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, considering the reference or unperturbed Hamiltonian as the one-body terms and the diagonal part of the k-body embedded ensemble of random matrices and the perturbation as the residual off-diagonal part of the interaction. 
We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory to second order in the perturbation strength and demonstrate that it displays the freeze of the fidelity. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze of fidelity is an integer fraction of tH, thus relating the period of the revivals with the range of the interaction k of the perturbing terms. Numerical calculations confirm the analytical results.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFMPP41C1527T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFMPP41C1527T"><span>High northern latitude temperature extremes, 1400-1999</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tingley, M. P.; Huybers, P.; Hughen, K. A.</p> <p>2009-12-01</p> <p>There is often an interest in determining which interval features the most extreme value of a reconstructed climate field, such as the warmest year or decade in a temperature reconstruction. Previous approaches to this type of question have not fully accounted for the spatial and temporal covariance in the climate field when assessing the significance of extreme values. Here we present results from applying BARSAT, a new, Bayesian approach to reconstructing climate fields, to a 600 year multiproxy temperature data set that covers land areas between 45N and 85N. 
The end result of the analysis is an ensemble of spatially and temporally complete realizations of the temperature field, each of which is consistent with the observations and the estimated values of the parameters that define the assumed spatial and temporal covariance functions. In terms of the spatial average temperature, 1990-1999 was the warmest decade in the 1400-1999 interval in each of 2000 ensemble members, while 1995 was the warmest year in 98% of the ensemble members. A similar analysis at each node of a regular 5 degree grid gives insight into the spatial distribution of warm temperatures, and reveals that 1995 was anomalously warm in Eurasia, whereas 1998 featured extreme warmth in North America. In 70% of the ensemble members, 1601 featured the coldest spatial average, indicating that the eruption of Huaynaputina in Peru in 1600 (with a volcanic explosivity index of 6) had a major cooling impact on the high northern latitudes. Repeating this analysis at each node reveals the varying impacts of major volcanic eruptions on the distribution of extreme cooling. Finally, we use the ensemble to investigate extremes in the time evolution of centennial temperature trends, and find that in more than half the ensemble members, the greatest rate of change in the spatial mean time series was a cooling centered at 1600. 
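The ensemble statistics quoted here, such as the fraction of realizations in which a given year is the warmest, reduce to counting maxima across the ensemble of reconstructed series. An illustrative sketch with hypothetical names, not the BARSAT code:

```python
import numpy as np

def warmest_year_fraction(ensemble, years, year):
    """Fraction of ensemble realizations (rows = realizations,
    columns = years) in which `year` attains the series maximum."""
    ensemble = np.asarray(ensemble, dtype=float)
    idx = list(years).index(year)
    return float(np.mean(np.argmax(ensemble, axis=1) == idx))
```

Because each realization is a complete, covariance-consistent draw of the field, this empirical fraction is a posterior probability statement rather than a point-estimate comparison.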
The largest rate of centennial scale warming, however, occurred in the 20th Century in more than 98% of the ensemble members.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/20641092-group-velocity-probe-light-ensemble-lambda-atoms-under-two-photon-resonance','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/20641092-group-velocity-probe-light-ensemble-lambda-atoms-under-two-photon-resonance"><span></span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Li, Y.; Sun, C.P.</p> <p></p> <p>We study the propagation of a probe light in an ensemble of {lambda}-type atoms, utilizing the dynamic symmetry as recently discovered when the atoms are coupled to a classical control field and a quantum probe field [Sun et al., Phys. Rev. Lett. 91, 147903 (2003)]. Under two-photon resonance, we calculate the group velocity of the probe light with collective atomic excitations. Our result gives the dependence of the group velocity on the common one-photon detuning, and can be compared with the recent experiment of E. E. Mikhailov, Y. V. Rostovtsev, and G. R. 
Welch, e-print quant-ph/0309173.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_14 --> <div id="page_15" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="281"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1360800-atom-interferometry-warm-vapor','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1360800-atom-interferometry-warm-vapor"><span>Atom Interferometry in a Warm Vapor</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Biedermann, G. W.; McGuinness, H. J.; Rakholia, A. V.; ...</p> <p>2017-04-17</p> <p>Here, we demonstrate matter-wave interference in a warm vapor of rubidium atoms. 
Established approaches to light-pulse atom interferometry rely on laser cooling to concentrate a large ensemble of atoms into a velocity class resonant with the atom optical light pulse. In our experiment, we show that clear interference signals may be obtained without laser cooling. This effect relies on the Doppler selectivity of the atom interferometer resonance. Lastly, this interferometer may be configured to measure accelerations, and we demonstrate that multiple interferometers may be operated simultaneously by addressing multiple velocity classes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19940006497&hterms=Petit&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DPetit','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19940006497&hterms=Petit&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D20%26Ntt%3DPetit"><span>An ensemble pulsar time</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Petit, Gerard; Thomas, Claudine; Tavella, Patrizia</p> <p>1993-01-01</p> <p>Millisecond pulsars are galactic objects that exhibit a very stable spinning period. Several tens of these celestial clocks have now been discovered, which opens the possibility that an average time scale may be deduced through a long-term stability algorithm. Such an ensemble average makes it possible to reduce the level of the instabilities originating from the pulsars or from other sources of noise, which are unknown but independent. The basis for such an algorithm is presented and applied to real pulsar data. It is shown that pulsar time could shortly become more stable than the present atomic time, for averaging times of a few years. 
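The ensemble-averaging idea for pulsar time, combining several independent pulsar clocks so their individual instabilities partially cancel, can be sketched with an assumed inverse-variance weighting; this is an illustration, not the stability algorithm of the paper:

```python
import numpy as np

def ensemble_timescale(residuals, instabilities):
    """Weighted average of per-pulsar timing residuals
    (rows = pulsars, columns = epochs), each pulsar weighted by
    1/sigma^2 of its long-term instability, so that independent
    noise partially averages out in the combination."""
    r = np.asarray(residuals, dtype=float)
    w = 1.0 / np.asarray(instabilities, dtype=float) ** 2
    return (w[:, None] * r).sum(axis=0) / w.sum()
```

With n comparably stable, independent pulsars, the variance of the combined scale drops roughly as 1/n, which is the sense in which the ensemble is more stable than any single clock.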
Pulsar time can also be used as a flywheel to maintain the accuracy of atomic time in case of temporary failure of the primary standards, or to transfer the improved accuracy of future standards back to the present.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24163333','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24163333"><span>Hierarchical encoding makes individuals in a group seem more attractive.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Walker, Drew; Vul, Edward</p> <p>2014-01-01</p> <p>In the research reported here, we found evidence of the cheerleader effect-people seem more attractive in a group than in isolation. We propose that this effect arises via an interplay of three cognitive phenomena: (a) The visual system automatically computes ensemble representations of faces presented in a group, (b) individual members of the group are biased toward this ensemble average, and (c) average faces are attractive. Taken together, these phenomena suggest that individual faces will seem more attractive when presented in a group because they will appear more similar to the average group face, which is more attractive than group members' individual faces. We tested this hypothesis in five experiments in which subjects rated the attractiveness of faces presented either alone or in a group with the same gender. 
Our results were consistent with the cheerleader effect.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70024444','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70024444"><span>Seismic structure of the crust and uppermost mantle of North America and adjacent oceanic basins: A synthesis</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Chulick, G.S.; Mooney, W.D.</p> <p>2002-01-01</p> <p>We present a new set of contour maps of the seismic structure of North America and the surrounding ocean basins. These maps include the crustal thickness, whole-crustal average P-wave and S-wave velocity, and seismic velocity of the uppermost mantle, that is, Pn and Sn. We found the following: (1) The average thickness of the crust under North America is 36.7 km (standard deviation [s.d.] ±8.4 km), which is 2.5 km thinner than the world average of 39.2 km (s.d. ±8.5) for continental crust; (2) Histograms of whole-crustal P- and S-wave velocities for the North American crust are bimodal, with the lower peak occurring for crust without a high-velocity (6.9-7.3 km/sec) lower crustal layer; (3) Regions with anomalously high average crustal P-wave velocities correlate with Precambrian and Paleozoic orogens; low average crustal velocities are correlated with modern extensional regimes; (4) The average Pn velocity beneath North America is 8.03 km/sec (s.d. ±0.19 km/sec); (5) the well-known thin crust beneath the western United States extends into northwest Canada; (6) the average P-wave velocity of layer 3 of oceanic crust is 6.61 km/sec (s.d. ±0.47 km/sec). 
However, the average crustal P-wave velocity under the eastern Pacific seafloor is higher than that under the western Atlantic seafloor due to the thicker sediment layer on the older Atlantic seafloor.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhRvA..86e2324H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhRvA..86e2324H"><span>Ensembles of physical states and random quantum circuits on graphs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hamma, Alioscia; Santra, Siddhartha; Zanardi, Paolo</p> <p>2012-11-01</p> <p>In this paper we continue and extend the investigations of the ensembles of random physical states introduced in Hamma [Phys. Rev. Lett. 109, 040502 (2012)]. These ensembles are constructed by finite-length random quantum circuits (RQC) acting on the (hyper)edges of an underlying (hyper)graph structure. The latter encodes the locality structure associated with finite-time quantum evolutions generated by physical, i.e., local, Hamiltonians. Our goal is to analyze physical properties of typical states in these ensembles; in particular here we focus on proxies of quantum entanglement such as purity and α-Rényi entropies. The problem is formulated in terms of matrix elements of superoperators which depend on the graph structure, choice of probability measure over the local unitaries, and circuit length. In the α=2 case these superoperators act on a restricted multiqubit space generated by permutation operators associated to the subsets of vertices of the graph. For permutationally invariant interactions the dynamics can be further restricted to an exponentially smaller subspace. 
We consider different families of RQCs and study their typical entanglement properties for finite time as well as their asymptotic behavior. We find that the area law holds on average and that the volume law is a typical property (that is, it holds on average and the fluctuations around the average vanish for large systems) of physical states. The area law arises when the evolution time is O(1) with respect to the size L of the system, while the volume law is typical when the evolution time scales like O(L).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3996711','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3996711"><span>The Dropout Learning Algorithm</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Baldi, Pierre; Sadowski, Peter</p> <p>2014-01-01</p> <p>Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful for understanding the non-linear case. 
The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions and the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. 
PMID:24771879</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006JCAMD..20..263B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006JCAMD..20..263B"><span>RNA unrestrained molecular dynamics ensemble improves agreement with experimental NMR data compared to single static structure: a test case</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Beckman, Robert A.; Moreland, David; Louise-May, Shirley; Humblet, Christine</p> <p>2006-05-01</p> <p>Nuclear magnetic resonance (NMR) provides structural and dynamic information reflecting an average, often non-linear, of multiple solution-state conformations. Therefore, a single optimized structure derived from NMR refinement may be misleading if the NMR data actually result from averaging of distinct conformers. It is hypothesized that a conformational ensemble generated by a valid molecular dynamics (MD) simulation should be able to improve agreement with the NMR data set compared with the single optimized starting structure. Using a model system consisting of two sequence-related self-complementary ribonucleotide octamers for which NMR data was available, 0.3 ns particle mesh Ewald MD simulations were performed in the AMBER force field in the presence of explicit water and counterions. Agreement of the averaged properties of the molecular dynamics ensembles with NMR data such as homonuclear proton nuclear Overhauser effect (NOE)-based distance constraints, homonuclear proton and heteronuclear 1H-31P coupling constant ( J) data, and qualitative NMR information on hydrogen bond occupancy, was systematically assessed. Despite the short length of the simulation, the ensemble generated from it agreed with the NMR experimental constraints more completely than the single optimized NMR structure. 
This suggests that short unrestrained MD simulations may be of utility in interpreting NMR results. As expected, a 0.5 ns simulation utilizing a distance dependent dielectric did not improve agreement with the NMR data, consistent with its inferior exploration of conformational space as assessed by 2-D RMSD plots. Thus, ability to rapidly improve agreement with NMR constraints may be a sensitive diagnostic of the MD methods themselves.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19960016617','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19960016617"><span>Numerical Investigation of Two-Phase Flows With Charged Droplets in Electrostatic Field</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kim, Sang-Wook</p> <p>1996-01-01</p> <p>A numerical method to solve two-phase turbulent flows with charged droplets in an electrostatic field is presented. The ensemble-averaged Navier-Stokes equations and the electrostatic potential equation are solved using a finite volume method. The transitional turbulence field is described using multiple-time-scale turbulence equations. The equations of motion of droplets are solved using a Lagrangian particle tracking scheme, and the inter-phase momentum exchange is described by the Particle-In-Cell scheme. The electrostatic force caused by an applied electrical potential is calculated using the electrostatic field obtained by solving a Laplacian equation and the force exerted by charged droplets is calculated using the Coulombic force equation. The method is applied to solve electro-hydrodynamic sprays. The calculated droplet velocity distributions for droplet dispersions occurring in a stagnant surrounding are in good agreement with the measured data. 
For droplet dispersions occurring in a two-phase flow, the droplet trajectories are influenced by aerodynamic forces, the Coulombic force, and the applied electrostatic potential field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100005038','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100005038"><span>Intermittent Behavior of the Separated Boundary Layer along the Suction Surface of a Low Pressure Turbine Blade under Periodic Unsteady Flow Conditions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Oeztuerk, B; Schobeiri, M. T.; Ashpis, David E.</p> <p>2005-01-01</p> <p>The paper experimentally and theoretically studies the effects of periodic unsteady wake flow and aerodynamic characteristics on boundary layer development, separation and re-attachment along the suction surface of a low pressure turbine blade. The experiments were carried out at a Reynolds number of 110,000 (based on suction surface length and exit velocity). For one steady and two different unsteady inlet flow conditions with the corresponding passing frequencies, intermittency behaviors were experimentally and theoretically investigated. The current investigation attempts to extend the intermittency unsteady boundary layer transition model developed previously to LPT cases, where separation occurs on the suction surface at a low Reynolds number. The results of the unsteady boundary layer measurements and the intermittency analysis are presented in ensemble-averaged and contour-plot form. 
The analysis of the boundary layer experimental data with flow separation confirms the universal character of the relative intermittency function, which is described by a Gaussian function.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2014ShWav..24..489S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2014ShWav..24..489S"><span>Statistical behavior of post-shock overpressure past grid turbulence</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sasoh, Akihiro; Harasaki, Tatsuya; Kitamura, Takuya; Takagi, Daisuke; Ito, Shigeyoshi; Matsuda, Atsushi; Nagata, Kouji; Sakai, Yasuhiko</p> <p>2014-09-01</p> <p>When a shock wave ejected from the exit of a 5.4-mm inner diameter, stainless steel tube propagated through grid turbulence across a distance of 215 mm, which is 5-15 times larger than its integral length scale , and was normally incident onto a flat surface, the peak value of post-shock overpressure, , at a shock Mach number of 1.0009 on the flat surface experienced a standard deviation of up to about 9 % of its ensemble average. This value was more than 40 times larger than the dynamic pressure fluctuation corresponding to the maximum value of the root-mean-square velocity fluctuation, . By varying and , the statistical behavior of was obtained after at least 500 runs were performed for each condition. The standard deviation of due to the turbulence was almost proportional to . 
Although the overpressure modulations at two points 200 mm apart were independent of each other, we observed a weak positive correlation between the peak overpressure difference and the relative arrival time difference.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFDL32006Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFDL32006Y"><span>Numerical study of wind over breaking waves and generation of spume droplets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Zixuan; Tang, Shuai; Dong, Yu-Hong; Shen, Lian</p> <p>2017-11-01</p> <p>We present direct numerical simulation (DNS) results on wind over breaking waves. The air and water are simulated as a coherent system. The air-water interface is captured using a coupled level-set and volume-of-fluid method. The initial condition for the simulation is fully-developed wind turbulence over strongly-forced steep waves. Because wave breaking is an unsteady process, we use ensemble averaging of a large number of runs to obtain turbulence statistics. The generation and transport of spume droplets during wave breaking is also simulated. The trajectories of sea spray droplets are tracked using a Lagrangian particle tracking method. The generation of droplets is captured using a kinematic criterion based on the relative velocity of fluid particles of water with respect to the wave phase speed. 
From the simulation, we observe that the wave plunging generates a large vortex in air, which makes an important contribution to the suspension of sea spray droplets.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1313559-large-eddy-unsteady-rans-simulations-shock-accelerated-heavy-gas-cylinder','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1313559-large-eddy-unsteady-rans-simulations-shock-accelerated-heavy-gas-cylinder"><span>Large-eddy and unsteady RANS simulations of a shock-accelerated heavy gas cylinder</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Morgan, B. E.; Greenough, J. A.</p> <p>2015-04-08</p> <p>Two-dimensional numerical simulations of the Richtmyer–Meshkov unstable “shock-jet” problem are conducted using both large-eddy simulation (LES) and unsteady Reynolds-averaged Navier–Stokes (URANS) approaches in an arbitrary Lagrangian–Eulerian hydrodynamics code. Turbulence statistics are extracted from LES by running an ensemble of simulations with multimode perturbations to the initial conditions. Detailed grid convergence studies are conducted, and LES results are found to agree well with both experiment and high-order simulations conducted by Shankar et al. (Phys Fluids 23, 024102, 2011). URANS results using a k–L approach are found to be highly sensitive to initialization of the turbulence lengthscale L and to the time at which L becomes resolved on the computational mesh. 
As a result, it is observed that a gradient diffusion closure for turbulent species flux is a poor approximation at early times, and a new closure based on the mass-flux velocity is proposed for low-Reynolds-number mixing.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1987JAP....62.3825A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1987JAP....62.3825A"><span>Self-consistent Monte Carlo study of high-field carrier transport in graded heterostructures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Al-Omar, A.; Krusius, J. P.</p> <p>1987-11-01</p> <p>Hot-electron transport over graded heterostructures was investigated. A new formulation of the carrier transport, based on the effective mass theorem, a position-dependent Hamiltonian, scattering rates that included overlap integrals with correct symmetry, and ohmic contact models preserving the stochastic nature of carrier injection, was developed and implemented into the self-consistent ensemble Monte Carlo method. 
Hot-carrier transport in a graded Al(x)Ga(1-x)As device was explored with the following results: (1) the transport across compositionally graded semiconductor structures cannot be described with drift and diffusion concepts; (2) although heterostructure launchers generate a ballistic electron fraction as high as 15 percent and 40 percent of the total electron population for 300 and 77 K, respectively, they simultaneously reduce macroscopic average currents and carrier velocities; and (3) the width of the ballistic electron distribution and the magnitude of the ballistic fraction are primarily determined by material parameters and operating voltages rather than details of the device structure.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMIN21D0065T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMIN21D0065T"><span>The NASA Reanalysis Ensemble Service - Advanced Capabilities for Integrated Reanalysis Access and Intercomparison</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.</p> <p>2017-12-01</p> <p>NASA's efforts to advance climate analytics-as-a-service are making new capabilities available to the research community: (1) A full-featured Reanalysis Ensemble Service (RES) comprising monthly means data from multiple reanalysis data sets, accessible through an enhanced set of extraction, analytic, arithmetic, and intercomparison operations. The operations are made accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib; (2) A cloud-based, high-performance Virtual Real-Time Analytics Testbed supporting a select set of climate variables. 
This near real-time capability enables advanced technologies like Spark and Hadoop-based MapReduce analytics over native NetCDF files; and (3) A WPS-compliant Web service interface to our climate data analytics service that will enable greater interoperability with next-generation systems such as ESGF. The Reanalysis Ensemble Service includes the following: - New API that supports full temporal, spatial, and grid-based resolution services with sample queries - A Docker-ready RES application to deploy across platforms - Extended capabilities that enable single- and multiple reanalysis area average, vertical average, re-gridding, standard deviation, and ensemble averages - Convenient, one-stop shopping for commonly used data products from multiple reanalyses including basic sub-setting and arithmetic operations (e.g., avg, sum, max, min, var, count, anomaly) - Full support for the MERRA-2 reanalysis dataset in addition to ECMWF ERA-Interim, NCEP CFSR, JMA JRA-55, and NOAA/ESRL 20CR… - A Jupyter notebook-based distribution mechanism designed for client use cases that combines CDSlib documentation with interactive scenarios and personalized project management - Supporting analytic services for NASA GMAO Forward Processing datasets - Basic uncertainty quantification services that combine heterogeneous ensemble products with comparative observational products (e.g., reanalysis, observational, visualization) - The ability to compute and visualize multiple reanalyses for ease of inter-comparison - Automated tools to retrieve and prepare data collections for analytic processing</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1913036S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1913036S"><span>The impact of Surface Wind Velocity Data Assimilation on the Predictability of Plume Advection in the Lower Troposphere</span></a></p> <p><a target="_blank" rel="noopener 
noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sekiyama, Thomas; Kajino, Mizuo; Kunii, Masaru</p> <p>2017-04-01</p> <p>The authors investigated the impact of surface wind velocity data assimilation on the predictability of plume advection in the lower troposphere, exploiting the radioactive cesium emitted by the Fukushima nuclear accident in March 2011 as an atmospheric tracer. This tracer was suitable because the radioactive cesium plume was dispersed from a single point source, the Fukushima Daiichi Nuclear Power Plant, and its surface concentration was measured at many locations with high frequency and high accuracy. In this study, we used a non-hydrostatic regional weather prediction model with a horizontal resolution of 3 km, coupled with an ensemble Kalman filter data assimilation system, to simulate the wind velocity and plume advection. The main module of this weather prediction model has been developed and used operationally by the Japan Meteorological Agency (JMA) since before March 2011. The weather observation data assimilated into the model simulation were provided from two data sources: [#1] the JMA observation archives collected for numerical weather predictions (NWPs) and [#2] the land-surface wind velocity data archived by the JMA surface weather observation network. The former dataset [#1] does not contain land-surface wind velocity observations because their spatial representativeness is relatively small, so assimilating them normally degrades NWP performance at forecast ranges beyond one day. The latter dataset [#2] is usually used for real-time weather monitoring and is never used for data assimilation in NWPs beyond one day. We conducted two experiments (STD and TEST) to reproduce the radioactive cesium plume behavior for 48 hours from 12UTC 14 March to 12UTC 16 March 2011 over the land area of western Japan. 
The STD experiment replicated the operational NWP using only the #1 dataset, without assimilating land-surface wind observations. In contrast, the TEST experiment assimilated both the #1 dataset and the #2 dataset, including land-surface wind observations measured at more than 200 stations in the model domain. The meteorological boundary conditions for both experiments were imported from the JMA operational global NWP model results. The modeled radioactive cesium concentrations were examined for plume arrival timing at each observatory by comparison with the hourly-measured "suspended particulate matter" filter-tape cesium concentrations retrieved by Tsuruta et al. at more than 40 observatories. The average difference in plume arrival times at the 40 observatories between the observations and the STD experiment was 82.0 minutes; at this point, the forecast period was 13 hours on average. Meanwhile, the average difference for the TEST experiment was 72.8 minutes, smaller than that of the STD experiment with a statistical significance of 99.2 %. In summary, land-surface wind velocity data assimilation improves the predictability of plume advection in the lower troposphere, at least in the case of wintertime air pollution over complex terrain. 
We need more investigation into the data assimilation impact of land-surface weather observations on the predictability of pollutant dispersion, especially in the planetary boundary layer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27940377','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27940377"><span>Using simulation to interpret experimental data in terms of protein conformational ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Allison, Jane R</p> <p>2017-04-01</p> <p>In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averages or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling and comparison of the more diverse conformational ensembles thus generated have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference promise to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12929922','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12929922"><span>Training in cortical control of neuroprosthetic devices improves signal extraction from small neuronal ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Helms Tillery, S I; Taylor, D M; Schwartz, A B</p> <p>2003-01-01</p> <p>We have recently developed a closed-loop environment in which we can test the ability of primates to control the motion of a virtual device using ensembles of simultaneously recorded neurons /29/. Here we use a maximum likelihood method to assess the information about task performance contained in the neuronal ensemble. We trained two animals to control the motion of a computer cursor in three dimensions. Initially the animals controlled cursor motion using arm movements, but eventually they learned to drive the cursor directly from cortical activity. Using a population vector (PV) based upon the relation between cortical activity and arm motion, the animals were able to control the cursor directly from the brain in a closed-loop environment, but with difficulty. We added a supervised learning method that modified the parameters of the PV according to task performance (adaptive PV), and found that animals were able to exert much finer control over the cursor motion from brain signals. Here we describe a maximum likelihood method (ML) to assess the information about the target contained in neuronal ensemble activity. Using this method, we compared the information about the target contained in the ensemble during arm control, during brain control early in the adaptive PV, and during brain control after the adaptive PV had settled and the animal could drive the cursor reliably and with fine gradations. 
During the arm-control task, the ML was able to determine the target of the movement in as few as 10% of the trials, and as many as 75% of the trials, with an average of 65%. This average dropped when the animals used a population vector to control motion of the cursor. On average we could determine the target in around 35% of the trials. This low percentage was also reflected in poor control of the cursor, so that the animal was unable to reach the target in a large percentage of trials. Supervised adjustment of the population vector parameters produced new weighting coefficients and directional tuning parameters for many neurons. This produced a much better performance of the brain-controlled cursor motion. It was also reflected in the maximum likelihood measure of cell activity, producing the correct target based only on neuronal activity in over 80% of the trials on average. The changes in maximum likelihood estimates of target location based on ensemble firing show that an animal's ability to regulate the motion of a cortically controlled device is not crucially dependent on the experimenter's ability to estimate intention from neuronal activity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EGUGA..12.2601X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EGUGA..12.2601X"><span>Upgrades to the REA method for producing probabilistic climate change projections</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xu, Ying; Gao, Xuejie; Giorgi, Filippo</p> <p>2010-05-01</p> <p>We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. 
Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APS..SES.CA001W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APS..SES.CA001W"><span>Observing the conformation of individual SNARE proteins inside live cells</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Weninger, Keith</p> <p>2010-10-01</p> <p>Protein conformational dynamics are directly linked to function in many instances. Within living cells, protein dynamics are rarely synchronized so observing ensemble-averaged behaviors can hide details of signaling pathways. 
Here we present an approach using single molecule fluorescence resonance energy transfer (FRET) to observe the conformation of individual SNARE proteins as they fold to enter the SNARE complex in living cells. Proteins were recombinantly expressed, labeled with small-molecule fluorescent dyes and microinjected for in vivo imaging and tracking using total internal reflection microscopy. Observing single molecules avoids the difficulties of averaging over unsynchronized ensembles. Our approach is easily generalized to a wide variety of proteins in many cellular signaling pathways.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19950028426&hterms=1605&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3D%2526%25231605','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19950028426&hterms=1605&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D40%26Ntt%3D%2526%25231605"><span>Second-order closure PBL model with new third-order moments: Comparison with LES data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Canuto, V. M.; Minotti, F.; Ronchi, C.; Ypma, R. M.; Zeman, O.</p> <p>1994-01-01</p> <p>This paper contains two parts. In the first part, a new set of diagnostic equations is derived for the third-order moments for a buoyancy-driven flow, by exact inversion of the prognostic equations for the third-order moments in the stationary case. The third-order moments exhibit a universal structure: they all are a linear combination of the derivatives of all the second-order moments ⟨w²⟩, ⟨wθ⟩, ⟨θ²⟩, and ⟨q²⟩. Each term of the sum contains a turbulent diffusivity D_t, which also exhibits a universal structure of the form D_t = a ν_t + b ⟨wθ⟩. Since the sign of the convective flux changes depending on stable or unstable stratification, D_t varies according to the type of stratification. Here ν_t ≈ wl (l is a mixing length and w is an rms velocity) represents the 'mechanical' part, while the 'buoyancy' part is represented by the convective flux ⟨wθ⟩. The quantities a and b are functions of the variable (Nτ)², where N² = gα ∂Θ/∂z and τ is the turbulence time scale. The new expressions for the third-order moments generalize those of Zeman and Lumley, which were subsequently adopted by Sun and Ogura, Chen and Cotton, and Finger and Schmidt in their treatments of the convective boundary layer. In the second part, the new expressions for the third-order moments are used to solve the ensemble average equations describing a purely convective boundary layer heated from below at a constant rate. The computed second- and third-order moments are then compared with the corresponding Large Eddy Simulation (LES) results, most of which are obtained by running a new LES code, and part of which are taken from published results.
The ensemble average results compare favorably with the LES data.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_15 --> <div id="page_16" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="301"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27991626','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27991626"><span>Impact of distributions on the archetypes and prototypes in heterogeneous nanoparticle ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fernandez, Michael; Wilson, Hugh F; Barnard, Amanda S</p> <p>2017-01-05</p> <p>The magnitude and complexity of the structural and functional data available on nanomaterials require data analytics, statistical analysis and information technology
to drive discovery. We demonstrate that multivariate statistical analysis can recognise the sets of truly significant nanostructures and their most relevant properties in heterogeneous ensembles with different probability distributions. The prototypical and archetypal nanostructures of five virtual ensembles of Si quantum dots (SiQDs) with Boltzmann, frequency, normal, Poisson and random distributions are identified using clustering and archetypal analysis, where we find that their diversity is defined by size and shape, regardless of the type of distribution. At the convex hull of the SiQD ensembles, simple configuration archetypes can efficiently describe a large number of SiQDs, whereas more complex shapes are needed to represent the average ordering of the ensembles. This approach provides a route towards the characterisation of computationally intractable virtual nanomaterial spaces, which can convert big data into smart data, and significantly reduce the workload to simulate experimentally relevant virtual samples.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1986STIN...8712820S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1986STIN...8712820S"><span>Generalization of one-dimensional solute transport: A stochastic-convective flow conceptualization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Simmons, C. S.</p> <p>1986-04-01</p> <p>A stochastic-convective representation of one-dimensional solute transport is derived. It is shown to conceptually encompass solutions of the conventional convection-dispersion equation. This stochastic approach, however, does not rely on the assumption that dispersive flux satisfies Fick's diffusion law.
Observable values of solute concentration and flux, which together satisfy a conservation equation, are expressed as expectations over a flow velocity ensemble, representing the inherent random processes that govern dispersion. Solute concentration is determined by a Lagrangian pdf for random spatial displacements, while flux is determined by an equivalent Eulerian pdf for random travel times. A condition for such equivalence is derived for steady nonuniform flow, and it is proven that both Lagrangian and Eulerian pdfs are required to account for specified initial and boundary conditions on a global scale. Furthermore, simplified modeling of transport is justified by proving that an ensemble of effectively constant velocities always exists that constitutes an equivalent representation. An example of how a two-dimensional transport problem can be reduced to a single-dimensional stochastic viewpoint is also presented to further clarify concepts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70024656','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70024656"><span>Percolation flux and Transport velocity in the unsaturated zone, Yucca Mountain, Nevada</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Yang, I.C.</p> <p>2002-01-01</p> <p>The percolation flux for borehole USW UZ-14 was calculated from 14C residence times of pore water and water content of cores measured in the laboratory. Transport velocity is calculated from the depth interval between two points divided by the difference in 14C residence times. Two methods were used to calculate the flux and velocity. The first method uses the 14C data and cumulative water content data directly in the incremental intervals in the Paintbrush nonwelded unit and the Topopah Spring welded unit.
The second method uses the regression relation for 14C data and cumulative water content data for the entire Paintbrush nonwelded unit and the Topopah Spring Tuff/Topopah Spring welded unit. Using the first method, for the Paintbrush nonwelded unit in borehole USW UZ-14, percolation flux ranges from 2.3 to 41.0 mm/a. Transport velocity ranges from 1.2 to 40.6 cm/a. For the Topopah Spring welded unit percolation flux ranges from 0.9 to 5.8 mm/a in the 8 incremental intervals calculated. Transport velocity ranges from 1.4 to 7.3 cm/a in the 8 incremental intervals. Using the second method, average percolation flux in the Paintbrush nonwelded unit for 6 boreholes ranges from 0.9 to 4.0 mm/a at the 95% confidence level. Average transport velocity ranges from 0.6 to 2.6 cm/a. For the Topopah Spring welded unit and Topopah Spring Tuff, average percolation flux in 5 boreholes ranges from 1.3 to 3.2 mm/a. Average transport velocity ranges from 1.6 to 4.0 cm/a. Both the average percolation flux and average transport velocity in the PTn are smaller than in the TS/TSw. However, the average minimum and average maximum values for the percolation flux in the TS/TSw are within the PTn average range. Therefore, differences in the percolation flux in the two units are not significant. On the other hand, average, average minimum, and average maximum transport velocities in the TS/TSw unit are all larger than the PTn values, implying a larger transport velocity for the TS/TSw although there is a small overlap.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010EL.....9030004K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010EL.....9030004K"><span>Ergodicity of financial indices</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kolesnikov, A.
V.; Rühl, T.</p> <p>2010-05-01</p> <p>We introduce the concept of ensemble averaging for financial markets. We address the question of the equality of ensemble and time averaging and investigate whether these averagings are equivalent for a large number of equity indices and branches. We start with the model of Gaussian-distributed returns, equal-weighted stocks in each index and absence of correlations within a single day, and show that even this oversimplified model already captures the run of the corresponding index reasonably well due to its self-averaging properties. We introduce the concept of the instant cross-sectional volatility and discuss its relation to the ordinary time-resolved counterpart. The role of the cross-sectional volatility for the description of the corresponding index, as well as the role of correlations between the single stocks and the role of non-Gaussianity of stock distributions, is briefly discussed. Our model reveals quickly and efficiently some anomalies or bubbles in a particular financial market and gives an estimate of how large these effects can be and how quickly they disappear.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28522849','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28522849"><span>CarcinoPred-EL: Novel models for predicting the carcinogenicity of chemicals using molecular fingerprints and ensemble learning methods.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Li; Ai, Haixin; Chen, Wen; Yin, Zimo; Hu, Huan; Zhu, Junfeng; Zhao, Jian; Zhao, Qi; Liu, Hongsheng</p> <p>2017-05-18</p> <p>Carcinogenicity refers to a highly toxic end point of certain chemicals, and has become an important issue in the drug development process.
In this study, three novel ensemble classification models, namely Ensemble SVM, Ensemble RF, and Ensemble XGBoost, were developed to predict carcinogenicity of chemicals using seven types of molecular fingerprints and three machine learning methods based on a dataset containing 1003 diverse compounds with rat carcinogenicity. Among these three models, Ensemble XGBoost is found to be the best, giving an average accuracy of 70.1 ± 2.9%, sensitivity of 67.0 ± 5.0%, and specificity of 73.1 ± 4.4% in five-fold cross-validation and an accuracy of 70.0%, sensitivity of 65.2%, and specificity of 76.5% in external validation. In comparison with some recent methods, the ensemble models outperform some machine learning-based approaches and yield equal accuracy and higher specificity but lower sensitivity than rule-based expert systems. It is also found that the ensemble models could be further improved if more data were available. As an application, the ensemble models are employed to discover potential carcinogens in the DrugBank database. The results indicate that the proposed models are helpful in predicting the carcinogenicity of chemicals. 
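The pooling of base predictions behind models such as Ensemble XGBoost above can be illustrated with a plain majority vote; the three lambda "classifiers" below are hypothetical stand-ins for the fingerprint-based models, not the paper's actual ensemble:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label predicted by most base classifiers.

    predictions: list of per-classifier labels for one compound,
    e.g. 1 = carcinogen, 0 = non-carcinogen.
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

def ensemble_predict(classifiers, x):
    """Apply each base classifier to x and combine by majority vote."""
    return majority_vote([clf(x) for clf in classifiers])

# Three hypothetical base models disagreeing on one sample:
clfs = [lambda x: 1, lambda x: 1, lambda x: 0]
label = ensemble_predict(clfs, x=None)  # 2 votes for 1, 1 vote for 0
```

Real ensembles of this kind often average class probabilities instead of hard labels, which is what lets the combined specificity exceed that of the individual members.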
A web server called CarcinoPred-EL has been built for these models ( http://ccsipb.lnu.edu.cn/toxicity/CarcinoPred-EL/ ).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25142516','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25142516"><span>The interplay between cooperativity and diversity in model threshold ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Cervera, Javier; Manzanares, José A; Mafe, Salvador</p> <p>2014-10-06</p> <p>The interplay between cooperativity and diversity is crucial for biological ensembles because single molecule experiments show a significant degree of heterogeneity and also for artificial nanostructures because of the high individual variability characteristic of nanoscale units. We study the cross-effects between cooperativity and diversity in model threshold ensembles composed of individually different units that show a cooperative behaviour. The units are modelled as statistical distributions of parameters (the individual threshold potentials here) characterized by central and width distribution values. The simulations show that the interplay between cooperativity and diversity results in ensemble-averaged responses of interest for the understanding of electrical transduction in cell membranes, the experimental characterization of heterogeneous groups of biomolecules and the development of biologically inspired engineering designs with individually different building blocks. © 2014 The Author(s) Published by the Royal Society. 
All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011SPIE.7962E..2PH','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011SPIE.7962E..2PH"><span>Confidence-based ensemble for GBM brain tumor segmentation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Huo, Jing; van Rikxoort, Eva M.; Okada, Kazunori; Kim, Hyun J.; Pope, Whitney; Goldin, Jonathan; Brown, Matthew</p> <p>2011-03-01</p> <p>It is a challenging task to automatically segment glioblastoma multiforme (GBM) brain tumors on T1w post-contrast isotropic MR images. A semi-automated system using fuzzy connectedness has recently been developed for computing the tumor volume that reduces the cost of manual annotation. In this study, we propose an ensemble method that combines multiple segmentation results into a final ensemble result. The method is evaluated on a dataset of 20 cases from a multi-center pharmaceutical drug trial and compared to the fuzzy connectedness method. Three individual methods were used in the framework: fuzzy connectedness, GrowCut, and voxel classification. The combination method is a confidence map averaging (CMA) method. The CMA method shows an improved ROC curve compared to the fuzzy connectedness method (p < 0.001).
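In its simplest form, the confidence map averaging (CMA) step described above is a voxel-wise mean of the per-method confidence maps followed by a cut-off. The 1-D toy maps and the 0.5 threshold below are illustrative assumptions, not the paper's actual data:

```python
def cma_segment(conf_maps, threshold=0.5):
    """Average per-method confidence maps voxel-wise and threshold.

    conf_maps: list of equal-length sequences of confidences in [0, 1],
    one per segmentation method (e.g. fuzzy connectedness, GrowCut,
    voxel classification). Returns a binary mask.
    """
    n = len(conf_maps)
    averaged = [sum(vals) / n for vals in zip(*conf_maps)]
    return [1 if a >= threshold else 0 for a in averaged]

# Three methods' confidences over five voxels of a toy image:
mask = cma_segment([
    [0.9, 0.8, 0.4, 0.1, 0.0],
    [0.7, 0.6, 0.5, 0.2, 0.1],
    [0.8, 0.1, 0.3, 0.3, 0.0],
])
```

Sweeping the threshold over [0, 1] is what traces out the ROC curve used to compare the ensemble against the individual methods.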
The CMA ensemble result is more robust compared to the three individual methods.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JHyd..555..371A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JHyd..555..371A"><span>On the incidence of meteorological and hydrological processors: Effect of resolution, sharpness and reliability of hydrological ensemble forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Abaza, Mabrouk; Anctil, François; Fortin, Vincent; Perreault, Luc</p> <p>2017-12-01</p> <p>Meteorological and hydrological ensemble prediction systems are imperfect. Their outputs could often be improved through the use of a statistical processor, opening up the question of the necessity of using both processors (meteorological and hydrological), only one of them, or none. This experiment compares the predictive distributions from four hydrological ensemble prediction systems (H-EPS) utilising the Ensemble Kalman filter (EnKF) probabilistic sequential data assimilation scheme. They differ in the inclusion or not of the Distribution Based Scaling (DBS) method for post-processing meteorological forecasts and the ensemble Bayesian Model Averaging (ensemble BMA) method for hydrological forecast post-processing. The experiment is implemented on three large watersheds and relies on the combination of two meteorological reforecast products: the 4-member Canadian reforecasts from the Canadian Centre for Meteorological and Environmental Prediction (CCMEP) and the 10-member American reforecasts from the National Oceanic and Atmospheric Administration (NOAA), leading to 14 members at each time step. Results show that all four tested H-EPS lead to resolution and sharpness values that are quite similar, with an advantage to DBS + EnKF. 
The ensemble BMA is unable to compensate for any bias left in the precipitation ensemble forecasts. On the other hand, it succeeds in calibrating ensemble members that are otherwise under-dispersed. If reliability is preferred over resolution and sharpness, DBS + EnKF + ensemble BMA performs best, making use of both processors in the H-EPS system. Conversely, for enhanced resolution and sharpness, DBS is the preferred method.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AGUFM.A23F..03A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AGUFM.A23F..03A"><span>Ensemble Downscaling of Winter Seasonal Forecasts: The MRED Project</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Arritt, R. W.; Mred Team</p> <p>2010-12-01</p> <p>The Multi-Regional climate model Ensemble Downscaling (MRED) project is a multi-institutional project that is producing large ensembles of downscaled winter seasonal forecasts from coupled atmosphere-ocean seasonal prediction models. Eight regional climate models each are downscaling 15-member ensembles from the National Centers for Environmental Prediction (NCEP) Climate Forecast System (CFS) and the new NASA seasonal forecast system based on the GEOS5 atmospheric model coupled with the MOM4 ocean model. This produces 240-member ensembles, i.e., 8 regional models x 15 global ensemble members x 2 global models, for each winter season (December-April) of 1982-2003. Results to date show that combined global-regional downscaled forecasts have greatest skill for seasonal precipitation anomalies during strong El Niño events such as 1982-83 and 1997-98. 
Ensemble means of area-averaged seasonal precipitation for the regional models generally track the corresponding results for the global model, though there is considerable inter-model variability amongst the regional models. For seasons and regions where area mean precipitation is accurately simulated the regional models bring added value by extracting greater spatial detail from the global forecasts, mainly due to better resolution of terrain in the regional models. Our results also emphasize that an ensemble approach is essential to realizing the added value from the combined global-regional modeling system.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=334179','PESTICIDES'); return false;" href="https://cfpub.epa.gov/si/si_public_record_report.cfm?direntryid=334179"><span>Insights into the deterministic skill of air quality ensembles ...</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.epa.gov/pesticides/search.htm">EPA Pesticide Factsheets</a></p> <p></p> <p></p> <p>Simulations from chemical weather models are subject to uncertainties in the input data (e.g. emission inventory, initial and boundary conditions) as well as those intrinsic to the model (e.g. physical parameterization, chemical mechanism). Multi-model ensembles can improve the forecast skill, provided that certain mathematical conditions are fulfilled. In this work, four ensemble methods were applied to two different datasets, and their performance was compared for ozone (O3), nitrogen dioxide (NO2) and particulate matter (PM10). Apart from the unconditional ensemble average, the approach behind the other three methods relies on adding optimum weights to members or constraining the ensemble to those members that meet certain conditions in time or frequency domain. 
The two different datasets were created for the first and second phase of the Air Quality Model Evaluation International Initiative (AQMEII). The methods are evaluated against ground level observations collected from the EMEP (European Monitoring and Evaluation Programme) and AirBase databases. The goal of the study is to quantify to what extent we can extract predictable signals from an ensemble with superior skill over the single models and the ensemble mean. Verification statistics show that the deterministic models simulate better O3 than NO2 and PM10, linked to different levels of complexity in the represented processes. The unconditional ensemble mean achieves higher skill compared to each stati</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5135320','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5135320"><span>Transport efficiency of membrane-anchored kinesin-1 motors depends on motor density and diffusivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Grover, Rahul; Fischer, Janine; Schwarz, Friedrich W.; Walter, Wilhelm J.; Schwille, Petra; Diez, Stefan</p> <p>2016-01-01</p> <p>In eukaryotic cells, membranous vesicles and organelles are transported by ensembles of motor proteins. These motors, such as kinesin-1, have been well characterized in vitro as single molecules or as ensembles rigidly attached to nonbiological substrates. However, the collective transport by membrane-anchored motors, that is, motors attached to a fluid lipid bilayer, is poorly understood. Here, we investigate the influence of motors’ anchorage to a lipid bilayer on the collective transport characteristics. 
We reconstituted “membrane-anchored” gliding motility assays using truncated kinesin-1 motors with a streptavidin-binding peptide tag that can attach to streptavidin-loaded, supported lipid bilayers. We found that the diffusing kinesin-1 motors propelled the microtubules in the presence of ATP. Notably, we found the gliding velocity of the microtubules to be strongly dependent on the number of motors and their diffusivity in the lipid bilayer. The microtubule gliding velocity increased with increasing motor density and membrane viscosity, reaching up to the stepping velocity of single motors. This finding is in contrast to conventional gliding motility assays where the density of surface-immobilized kinesin-1 motors does not influence the microtubule velocity over a wide range. We reason that the transport efficiency of membrane-anchored motors is reduced because of their slippage in the lipid bilayer, an effect that we directly observed using single-molecule fluorescence microscopy. Our results illustrate the importance of motor–cargo coupling, which potentially provides cells with an additional means of regulating the efficiency of cargo transport. 
PMID:27803325</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70035825','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70035825"><span>Assessing the impact of land use change on hydrology by ensemble modelling (LUCHEM) II: Ensemble combinations and predictions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Viney, N.R.; Bormann, H.; Breuer, L.; Bronstert, A.; Croke, B.F.W.; Frede, H.; Graff, T.; Hubrechts, L.; Huisman, J.A.; Jakeman, A.J.; Kite, G.W.; Lanini, J.; Leavesley, G.; Lettenmaier, D.P.; Lindstrom, G.; Seibert, J.; Sivapalan, M.; Willems, P.</p> <p>2009-01-01</p> <p>This paper reports on a project to compare predictions from a range of catchment models applied to a mesoscale river basin in central Germany and to assess various ensemble predictions of catchment streamflow. The models encompass a large range in inherent complexity and input requirements. In approximate order of decreasing complexity, they are DHSVM, MIKE-SHE, TOPLATS, WASIM-ETH, SWAT, PRMS, SLURP, HBV, LASCAM and IHACRES. The models are calibrated twice using different sets of input data. The two predictions from each model are then combined by simple averaging to produce a single-model ensemble. The 10 resulting single-model ensembles are combined in various ways to produce multi-model ensemble predictions. Both the single-model ensembles and the multi-model ensembles are shown to give predictions that are generally superior to those of their respective constituent models, both during a 7-year calibration period and a 9-year validation period. This occurs despite a considerable disparity in performance of the individual models. Even the weakest of models is shown to contribute useful information to the ensembles they are part of. 
The best model combination methods are a trimmed mean (constructed using the central four or six predictions each day) and a weighted mean ensemble (with weights calculated from calibration performance) that places relatively large weights on the better performing models. Conditional ensembles, in which separate model weights are used in different system states (e.g. summer and winter, high and low flows), generally yield little improvement over the weighted mean ensemble. However, a conditional ensemble that discriminates between rising and receding flows shows moderate improvement. An analysis of ensemble predictions shows that the best ensembles are not necessarily those containing the best individual models. Conversely, it appears that some models that predict well individually do not necessarily combine well with other models in multi-model ensembles. The reasons behind these observations may relate to the effects of the weighting schemes, non-stationarity of the climate series and possible cross-correlations between models. Crown Copyright © 2008.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010JCAMD..24..675Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010JCAMD..24..675Y"><span>Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yongye, Austin B.; Bender, Andreas; Martínez-Mayorga, Karina</p> <p>2010-08-01</p> <p>Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds.
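The two combination schemes highlighted in the LUCHEM abstract above — a trimmed mean over the central daily predictions and a conditional ensemble that switches weight sets between rising and receding flows — can be sketched as follows. The toy streamflow predictions and weight vectors are assumptions for illustration:

```python
def trimmed_mean(predictions, keep=4):
    """Average the central `keep` of the sorted daily predictions,
    discarding the extremes on both sides."""
    s = sorted(predictions)
    drop = (len(s) - keep) // 2
    central = s[drop:drop + keep]
    return sum(central) / len(central)

def conditional_ensemble(predictions, prev_obs, obs, w_rising, w_receding):
    """Choose a weight set by system state (rising vs. receding flow)
    and form the weighted mean; weights are assumed to sum to 1."""
    weights = w_rising if obs > prev_obs else w_receding
    return sum(w * p for w, p in zip(weights, predictions))

daily = [1.0, 2.0, 2.2, 2.4, 2.6, 9.0]   # six models, one high outlier
tm = trimmed_mean(daily)                  # mean of the central four

# Flow rose since yesterday, so the 'rising' weight set is used:
ce = conditional_ensemble(daily, prev_obs=1.0, obs=2.0,
                          w_rising=[0.0, 0.5, 0.5, 0.0, 0.0, 0.0],
                          w_receding=[1.0 / 6] * 6)
```

The trimmed mean makes the combination robust to a single badly biased model without requiring any calibration-based weights at all.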
Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1-4), medium (5-9) and high (10-15) numbers of rotatable bonds, respectively.
The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1312046-near-optimal-protocols-complex-nonequilibrium-transformations','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1312046-near-optimal-protocols-complex-nonequilibrium-transformations"><span>Near-optimal protocols in complex nonequilibrium transformations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Gingrich, Todd R.; Rotskoff, Grant M.; Crooks, Gavin E.; ...</p> <p>2016-08-29</p> <p>The development of sophisticated experimental means to control nanoscale systems has motivated efforts to design driving protocols that minimize the energy dissipated to the environment. Computational models are a crucial tool in this practical challenge. In this paper, we describe a general method for sampling an ensemble of finite-time, nonequilibrium protocols biased toward a low average dissipation. In addition, we show that this scheme can be carried out very efficiently in several limiting cases. As an application, we sample the ensemble of low-dissipation protocols that invert the magnetization of a 2D Ising model and explore how the diversity of the protocols varies in response to constraints on the average dissipation.
In this example, we find that there is a large set of protocols with average dissipation close to the optimal value, which we argue is a general phenomenon.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3580869','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3580869"><span>Automated Delineation of Lung Tumors from CT Images Using a Single Click Ensemble Segmentation Approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Gu, Yuhua; Kumar, Virendra; Hall, Lawrence O; Goldgof, Dmitry B; Li, Ching-Yen; Korn, René; Bendtsen, Claus; Velazquez, Emmanuel Rios; Dekker, Andre; Aerts, Hugo; Lambin, Philippe; Li, Xiuli; Tian, Jie; Gatenby, Robert A; Gillies, Robert J</p> <p>2012-01-01</p> <p>A single click ensemble segmentation (SCES) approach based on an existing “Click&Grow” algorithm is presented. The SCES approach requires only one operator-selected seed point, as compared with the multiple operator inputs that are typically needed. This facilitates processing large numbers of cases. The approach was evaluated on a set of 129 CT lung tumor images using a similarity index (SI). The average SI is above 93% using 20 different start seeds, showing stability. The average SI for 2 different readers was 79.53%. We then compared the SCES algorithm with the two readers, the level set algorithm and the skeleton graph cut algorithm, obtaining average SIs of 78.29%, 77.72%, 63.77% and 63.76%, respectively. We can conclude that the newly developed automatic lung lesion segmentation algorithm is stable, accurate and automated.
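Overlap between two segmentations is commonly scored with a Dice-style similarity index; the sketch below assumes that definition, since the abstract does not spell out which SI was used:

```python
import numpy as np

def similarity_index(mask_a, mask_b):
    """Dice-style similarity index between two binary segmentation masks:
    SI = 2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks are trivially identical
    return 2.0 * np.logical_and(a, b).sum() / denom
```

Comparing the masks produced from 20 different start seeds (or from two readers) with this index gives the kind of stability numbers quoted above.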
PMID:23459617</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25913899','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25913899"><span>Real time detection of farm-level swine mycobacteriosis outbreak using time series modeling of the number of condemned intestines in abattoirs.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Adachi, Yasumoto; Makita, Kohei</p> <p>2015-09-01</p> <p>Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer so that corrective actions can be taken when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis would therefore be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for the 2 years between 2011 and 2012. For the modeling, periodicities were first checked using the fast Fourier transform, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of the minimum Akaike information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times.
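The weekly-ensemble-average threshold can be sketched as follows. This simplified version omits the ARIMA model that the paper fits to the residuals and uses a crude Gaussian 95% band around the weekday profile instead:

```python
import numpy as np

def weekly_outbreak_flags(daily_counts, z=1.96):
    """Flag days whose condemnation count exceeds the upper 95% band of the
    weekday profile (the "weekly ensemble average"). `daily_counts` must
    have a length that is a multiple of 7 and start on a fixed weekday."""
    x = np.asarray(daily_counts, dtype=float).reshape(-1, 7)
    profile = x.mean(axis=0)               # weekly ensemble average
    resid = x - profile                    # residuals around the profile
    upper = profile + z * resid.std()      # crude 95% upper band
    return (x > upper).flatten()
```

In the paper the residuals are modeled with an ARIMA process, so the band varies over time; the fixed band here only illustrates the thresholding step.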
All of these events were associated with the slaughtering of pigs from the three producers with the highest rates of condemnation due to mycobacteriosis.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5030652','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5030652"><span>Transport Phenomena of Water in Molecular Fluidic Channels</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Vo, Truong Quoc; Kim, BoHung</p> <p>2016-01-01</p> <p>In molecular-level fluidic transport, where the discrete characteristics of a molecular system are not negligible (in contrast to a continuum description), the response of the molecular water system might still be similar to the continuum description if the time and ensemble averages satisfy the ergodic hypothesis and the scale of the average is sufficient to recover the classical thermodynamic properties. However, even in such cases, the continuum description breaks down at material interfaces. In short, molecular-level liquid flows exhibit substantially different physics from classical fluid transport theories because of (i) the interface/surface force field, (ii) thermal/velocity slip, (iii) the discreteness of fluid molecules at the interface and (iv) local viscosity. Therefore, in this study, we present the results of our investigations using molecular dynamics (MD) simulations with continuum-based energy equations and check the validity and limitations of the continuum hypothesis. Our study shows that when the continuum description is subjected to the proper treatment of the interface effects via modified boundary conditions, the so-called continuum-based modified-analytical solutions, they can adequately predict nanoscale fluid transport phenomena.
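As a concrete example of a continuum solution modified by a slip boundary condition (the classic Navier slip model, not necessarily the authors' exact treatment), planar Couette flow has a closed form:

```python
def couette_with_slip(y, height, wall_speed, slip_length):
    """Velocity profile of planar Couette flow (lower wall fixed, upper wall
    moving at U = wall_speed) with identical Navier slip lengths L_s at both
    walls: u(y) = U * (y + L_s) / (H + 2 * L_s).
    With L_s = 0 this reduces to the no-slip linear profile u = U * y / H."""
    return wall_speed * (y + slip_length) / (height + 2.0 * slip_length)
```

The formula follows from a linear profile u = a*y + b with the Navier conditions u(0) = L_s * du/dy and U - u(H) = L_s * du/dy; a nonzero slip length shifts the fluid velocity at the walls away from the wall velocities, which is the kind of interface correction the modified boundary conditions above capture.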
The findings in this work have broad implications for overcoming current limitations in modeling and predicting the fluid behavior of molecular fluidic devices. PMID:27650138</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5325197','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5325197"><span>Clustering cancer gene expression data by projective clustering ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yu, Xianxue; Yu, Guoxian</p> <p>2017-01-01</p> <p>Gene expression data analysis has paramount implications for gene treatments, cancer diagnosis and other domains. Clustering is an important and promising tool for analyzing gene expression data. Gene expression data are often characterized by a large number of genes but a limited number of samples, so various projective clustering techniques and ensemble techniques have been suggested to combat these challenges. However, it is rather challenging to synergize these two kinds of techniques to avoid the curse of dimensionality and to boost the performance of gene expression data clustering. In this paper, we employ a projective clustering ensemble (PCE) to integrate the advantages of projective clustering and ensemble clustering, and to avoid the dilemma of combining multiple projective clusterings. Our experimental results on publicly available cancer gene expression data show that PCE can improve the quality of clustering gene expression data by at least 4.5% (on average) compared with other related techniques, including dimensionality-reduction-based single clustering and ensemble approaches.
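A common way to combine multiple base clusterings, in the spirit of ensemble clustering (though not the PCE algorithm itself), is a co-association matrix; a minimal sketch:

```python
import numpy as np

def co_association(labelings):
    """Fraction of base clusterings that place each pair of samples together."""
    labelings = np.asarray(labelings)          # shape: (n_clusterings, n_samples)
    n = labelings.shape[1]
    co = np.zeros((n, n))
    for labels in labelings:
        co += (labels[:, None] == labels[None, :])
    return co / len(labelings)

def consensus_clusters(labelings, threshold=0.5):
    """Merge samples into consensus clusters wherever their co-association
    exceeds `threshold` (connected components of the thresholded matrix)."""
    co = co_association(labelings) > threshold
    n = co.shape[0]
    assignment = [-1] * n
    cluster = 0
    for i in range(n):
        if assignment[i] < 0:
            stack = [i]
            while stack:                       # flood-fill one component
                j = stack.pop()
                if assignment[j] < 0:
                    assignment[j] = cluster
                    stack.extend(np.nonzero(co[j])[0].tolist())
            cluster += 1
    return assignment
```

For projective clustering ensembles the base clusterings would come from different feature subspaces, which is precisely what this simple consensus step does not model; it only illustrates the aggregation idea.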
The empirical study demonstrates that, to further boost the performance of clustering cancer gene expression data, it is necessary and promising to synergize projective clustering with ensemble clustering. PCE can serve as an effective alternative technique for clustering gene expression data. PMID:28234920</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ExFl...58..119F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ExFl...58..119F"><span>Non-iterative double-frame 2D/3D particle tracking velocimetry</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fuchs, Thomas; Hain, Rainer; Kähler, Christian J.</p> <p>2017-09-01</p> <p>In recent years, the detection of individual particle images and their tracking over time to determine the local flow velocity has become quite popular for planar and volumetric measurements. Particle tracking velocimetry has strong advantages compared to the statistical analysis of an ensemble of particle images by means of cross-correlation approaches, such as particle image velocimetry. Tracking individual particles does not suffer from spatial averaging, and therefore bias errors can be avoided. Furthermore, the spatial resolution can be increased up to the sub-pixel level for mean fields. A maximization of the spatial resolution for instantaneous measurements requires high seeding concentrations. However, it is still challenging to track particles at high seeding concentrations, if no time series is available. Tracking methods used under these conditions are typically very complex iterative algorithms, which require expert knowledge due to the large number of adjustable parameters.
To overcome these drawbacks, a new non-iterative tracking approach is introduced in this letter, which automatically analyzes the motion of the neighboring particles without requiring the user to specify any parameters except the displacement limits. This makes the algorithm very user-friendly and also enables inexperienced users to apply and implement particle tracking. In addition, the algorithm enables measurements of high-speed flows using standard double-pulse equipment and estimates the flow velocity reliably even at large particle image densities.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19830004698','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19830004698"><span>Tone-excited jet: Theory and experiments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ahuja, K. K.; Lepicovsky, J.; Tam, C. K. W.; Morris, P. J.; Burrin, R. H.</p> <p>1982-01-01</p> <p>A detailed study to understand the phenomenon of broadband jet-noise amplification produced by upstream discrete-tone sound excitation has been carried out. This has been achieved by simultaneous acquisition of the acoustic, mean velocity, turbulence intensity, and instability-wave pressure data. A 5.08 cm diameter jet has been tested for this purpose under static and also flight-simulation conditions. An open-jet wind tunnel has been used to simulate the flight effects. Limited data on heated jets have also been obtained. To improve the physical understanding of the flow modifications brought about by the upstream discrete-tone excitation, ensemble-averaged schlieren photographs of the jets have also been taken. Parallel to the experimental study, a mathematical model of the processes that lead to broadband-noise amplification by upstream tones has been developed.
Excitation of large-scale turbulence by upstream tones is first calculated. A model to predict the changes in small-scale turbulence is then developed. By numerically integrating the resultant set of equations, the enhanced small-scale turbulence distribution in a jet under various excitation conditions is obtained. The resulting changes in small-scale turbulence have been attributed to broadband amplification of jet noise. Excellent agreement has been found between the theory and the experiments. It has also been shown that the relative velocity effects are the same for the excited and the unexcited jets.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_16 --> <div id="page_17" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="321"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG34A..05R','NASAADS'); return
false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG34A..05R"><span>Dynamics Under Location Uncertainty: Model Derivation, Modified Transport and Uncertainty Quantification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Resseguier, V.; Memin, E.; Chapron, B.; Fox-Kemper, B.</p> <p>2017-12-01</p> <p>In order to better observe and predict geophysical flows, ensemble-based data assimilation methods are of high importance. In such methods, an ensemble of random realizations represents the variety of the simulated flow's likely behaviors. For this purpose, randomness needs to be introduced in a suitable way, and physically-based stochastic subgrid parametrizations are promising paths. This talk will propose a new kind of such a parametrization, referred to as modeling under location uncertainty. The fluid velocity is decomposed into a resolved large-scale component and an aliased small-scale one. The first component is possibly random but time-correlated, whereas the second is white-in-time but spatially-correlated and possibly inhomogeneous and anisotropic. With such a velocity, the material derivative of any (possibly active) tracer is modified. Three new terms appear: a correction of the large-scale advection, a multiplicative noise and a possibly heterogeneous and anisotropic diffusion. This parameterization naturally ensures attractive properties such as energy conservation for each realization. Additionally, this stochastic material derivative and the associated Reynolds' transport theorem offer a systematic method to derive stochastic models. In particular, we will discuss the consequences of the Quasi-Geostrophic assumptions in our framework. Depending on the amount of turbulence, different models with different physical behaviors are obtained.
Under strong turbulence assumptions, a simplified diagnosis of frontolysis and frontogenesis at the surface of the ocean is possible in this framework. A Surface Quasi-Geostrophic (SQG) model with a weaker noise influence has also been simulated. A single realization better represents small scales than a deterministic SQG model at the same resolution. Moreover, an ensemble accurately predicts extreme events, bifurcations, as well as the amplitudes and the positions of the simulation errors. Figure 1 highlights this last result and compares it to the strong error underestimation of an ensemble simulated from the deterministic dynamics with random initial conditions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvF...2e4606D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvF...2e4606D"><span>Self-preservation relation to the Kolmogorov similarity hypotheses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Djenidi, Lyazid; Antonia, Robert A.; Danaila, Luminita</p> <p>2017-05-01</p> <p>The relation between self-preservation (SP) and the Kolmogorov similarity hypotheses (Kolmogorov, The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers, Dokl. Akad. Nauk SSSR 30, 301 (1941) [Proc. R. Soc. London A 434, 9 (1991), 10.1098/rspa.1991.0075]) is investigated through the transport equations for the second- and third-order moments of the longitudinal velocity increments [δu(r,t) = u(x,t) - u(x+r,t), where x, t, and r are the spatial position, the time, and the longitudinal separation between two points, respectively].
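The increment moments just defined can be estimated directly from a sampled one-dimensional velocity signal, with spatial averaging standing in for the ensemble average; a minimal numpy sketch (the Kolmogorov scales built from the viscosity and mean dissipation rate are included for completeness):

```python
import numpy as np

def increment_moments(u, r, orders=(2, 3)):
    """Moments S_n(r) = <(delta_u)^n> of longitudinal velocity increments,
    using the abstract's sign convention delta_u = u(x) - u(x + r);
    the mean over all positions x stands in for the ensemble average."""
    u = np.asarray(u, dtype=float)
    du = u[:-r] - u[r:]                  # separation of r samples
    return {n: float(np.mean(du ** n)) for n in orders}

def kolmogorov_scales(nu, eps_mean):
    """Kolmogorov length and velocity scales, eta = (nu^3 / eps)^(1/4)
    and v_K = (nu * eps)^(1/4), from the viscosity and the mean
    turbulent kinetic energy dissipation rate."""
    return (nu ** 3 / eps_mean) ** 0.25, (nu * eps_mean) ** 0.25
```

In practice `u` would be a measured velocity trace (wake, jet, or grid turbulence), and S_2 and S_3 at a range of separations r are the quantities whose transport equations the analysis above examines.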
It is shown that the fluid viscosity ν and the mean turbulent kinetic energy dissipation rate ɛ̄ (the overbar represents an ensemble average) emerge naturally from the equations of motion as controlling parameters for the velocity increment moments when SP is assumed. Consequently, the Kolmogorov length scale η [≡ (ν³/ɛ̄)^(1/4)] and velocity scale v_K [≡ (νɛ̄)^(1/4)] also emerge as natural scaling parameters in conformity with SP, indicating that Kolmogorov's first hypothesis is subsumed under the more general hypothesis of SP. Further, the requirement for a very large Reynolds number is also relaxed, at least for the first similarity hypothesis. This requirement however is still necessary to derive the two-thirds law (or the four-fifths law) from the analysis. These analytical results are supported by experimental data in wake, jet, and grid turbulence. An expression for the ensemble-averaged fourth-order moment of the longitudinal velocity increments, (δu)⁴, is derived from the analysis carried out in the inertial range. The expression, which involves the product of (δu)² and ∂δp/∂x, does not require the use of the volume-averaged dissipation ɛ̄_r, introduced by Oboukhov [Oboukhov, Some specific features of atmospheric turbulence, J. Fluid Mech. 13, 77 (1962), 10.1017/S0022112062000506] on a phenomenological basis and used by Kolmogorov to derive his refined similarity hypotheses [Kolmogorov, A refinement of previous hypotheses concerning the local structure of turbulence in a viscous incompressible fluid at high Reynolds number, J. Fluid Mech.
13, 82 (1962), 10.1017/S0022112062000518], suggesting that ɛ̄_r is not, like ɛ̄, a quantity issuing from the Navier-Stokes equations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JChPh.148l3329Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JChPh.148l3329Z"><span>Inferring properties of disordered chains from FRET transfer efficiencies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zheng, Wenwei; Zerze, Gül H.; Borgia, Alessandro; Mittal, Jeetain; Schuler, Benjamin; Best, Robert B.</p> <p>2018-03-01</p> <p>Förster resonance energy transfer (FRET) is a powerful tool for elucidating both structural and dynamic properties of unfolded or disordered biomolecules, especially in single-molecule experiments. However, the key observables, namely, the mean transfer efficiency and fluorescence lifetimes of the donor and acceptor chromophores, are averaged over a broad distribution of donor-acceptor distances. The inferred average properties of the ensemble therefore depend on the form of the model distribution chosen to describe the distance, as has been widely recognized. In addition, while the distribution for one type of polymer model may be appropriate for a chain under a given set of physico-chemical conditions, it may not be suitable for the same chain in a different environment, so that even an apparently consistent application of the same model over all conditions may distort the apparent changes in chain dimensions with variation of temperature or solution composition. Here, we present an alternative and straightforward approach to determining ensemble properties from FRET data, in which the polymer scaling exponent is allowed to vary with solution conditions.
In its simplest form, it requires either the mean FRET efficiency or fluorescence lifetime information. In order to test the accuracy of the method, we have utilized both synthetic FRET data from implicit and explicit solvent simulations for 30 different protein sequences, and experimental single-molecule FRET data for an intrinsically disordered and a denatured protein. In all cases, we find that the inferred radii of gyration are within 10% of the true values, thus providing higher accuracy than simpler polymer models. In addition, the scaling exponents obtained by our procedure are in good agreement with those determined directly from the molecular ensemble. Our approach can in principle be generalized to treating other ensemble-averaged functions of intramolecular distances from experimental data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ThApC.tmp..394S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ThApC.tmp..394S"><span>Simulation of tropical cyclone activity over the western North Pacific based on CMIP5 models</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Haibo; Zhou, Weican; Zhao, Haikun</p> <p>2017-09-01</p> <p>Based on the Coupled Model Inter-comparison Project 5 (CMIP5) models, the tropical cyclone (TC) activity in the summers of 1965-2005 over the western North Pacific (WNP) is simulated by a TC dynamical downscaling system. In consideration of the diversity among climate models, Bayesian model averaging (BMA) and equal-weighted model averaging (EMA) methods are applied to produce the ensemble large-scale environmental factors from the CMIP5 model outputs. The environmental factors generated by the BMA and EMA methods are compared, as well as the corresponding TC simulations by the downscaling system.
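The difference between equal-weight and skill-weighted model averaging can be sketched as follows. The exp(-MSE) weights are a crude stand-in for the posterior model weights that a full BMA estimation (typically via the EM algorithm) would produce:

```python
import numpy as np

def ensemble_average(forecasts, obs=None):
    """Combine model forecasts. With no observations: equal-weight model
    averaging (EMA). With a training observation series: weight each model
    by exp(-MSE), a simplistic proxy for BMA posterior weights."""
    forecasts = np.asarray(forecasts, dtype=float)   # (n_models, n_times)
    if obs is None:
        weights = np.full(forecasts.shape[0], 1.0 / forecasts.shape[0])
    else:
        mse = np.mean((forecasts - np.asarray(obs, dtype=float)) ** 2, axis=1)
        weights = np.exp(-mse)
        weights /= weights.sum()                     # normalize to sum to 1
    return weights @ forecasts, weights
```

The point of such skill-based weighting, as in the study above, is that poorly performing models receive small weights and therefore cannot degrade the ensemble much.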
Results indicate that the BMA method shows a significant advantage over the EMA. In addition, the impact of model selection on the BMA method is examined. For each factor, the ten best-performing models are selected from the 30 CMIP5 models and BMA is then conducted. The resulting ensemble environmental factors and simulated TC activity are similar to the results from the 30 models' BMA, which confirms that the BMA method assigns each model in the ensemble a weight according to the model's predictive skill. Thus, the presence of poorly performing models does not unduly affect the BMA effectiveness, and the ensemble outcomes are improved. Finally, based upon the BMA method and the downscaling system, we analyze the sensitivity of TC activity to three important environmental factors, i.e., sea surface temperature (SST), large-scale steering flow, and vertical wind shear. Among the three factors, SST and large-scale steering flow greatly affect TC tracks, while the average intensity distribution is sensitive to all three environmental factors. Moreover, SST and vertical wind shear jointly play a critical role in the inter-annual variability of TC lifetime maximum intensity and the frequency of intense TCs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1514107S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1514107S"><span>Synchronization Experiments With A Global Coupled Model of Intermediate Complexity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Selten, Frank; Hiemstra, Paul; Shen, Mao-Lin</p> <p>2013-04-01</p> <p>In the super modeling approach an ensemble of imperfect models is connected through nudging terms that nudge the solution of each model toward the solution of all other models in the ensemble.
The goal is to obtain a synchronized state, through a proper choice of connection strengths, that closely tracks the trajectory of the true system. For the super modeling approach to be successful, the connections should be dense and strong enough for synchronization to occur. In this study we analyze the behavior of an ensemble of connected global atmosphere-ocean models of intermediate complexity. All atmosphere models are connected to the same ocean model through the surface fluxes of heat, water and momentum; the ocean is integrated using weighted-averaged surface fluxes. In particular we analyze the degree of synchronization between the atmosphere models and the characteristics of the ensemble mean solution. The results are interpreted using a low-order atmosphere-ocean toy model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.4273A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.4273A"><span>Machine Learning Predictions of a Multiresolution Climate Model Ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anderson, Gemma J.; Lucas, Donald D.</p> <p>2018-05-01</p> <p>Statistical models of high-resolution climate models are useful for many purposes, including sensitivity and uncertainty analyses, but building them can be computationally prohibitive. We generated a unique multiresolution perturbed parameter ensemble of a global climate model. We use a novel application of a machine learning technique known as random forests to train a statistical model on the ensemble to make high-resolution model predictions of two important quantities: global mean top-of-atmosphere energy flux and precipitation.
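The nudging connections in the super modeling entry above can be illustrated with two Lorenz-63 systems, a standard low-order stand-in for imperfect atmosphere models (this toy uses identical model equations, unlike a real super model); with sufficiently strong mutual nudging their trajectories synchronize:

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz-63 system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def nudged_pair(k=20.0, dt=0.001, n_steps=20000):
    """Integrate two Lorenz-63 'models' from different initial states,
    each nudged toward the other with strength k (forward Euler steps).
    Returns the initial and final state separations."""
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([5.0, -5.0, 20.0])
    d0 = np.linalg.norm(a - b)
    for _ in range(n_steps):
        da = lorenz_rhs(a) + k * (b - a)   # nudging terms connect the models
        db = lorenz_rhs(b) + k * (a - b)
        a, b = a + dt * da, b + dt * db
    return d0, np.linalg.norm(a - b)
```

With the nudging strength k set to zero the two chaotic trajectories diverge; with strong coupling the separation contracts toward zero, which is the synchronized state the super modeling approach relies on.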
The random forests leverage cheaper low-resolution simulations, greatly reducing the number of high-resolution simulations required to train the statistical model. We demonstrate that high-resolution predictions of these quantities can be obtained by training on an ensemble that includes only a small number of high-resolution simulations. We also find that global annually averaged precipitation is more sensitive to resolution changes than to any of the model parameters considered.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4980076','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4980076"><span>Bayesian Ensemble Trees (BET) for Clustering and Prediction in Heterogeneous Data</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Duan, Leo L.; Clancy, John P.; Szczesniak, Rhonda D.</p> <p>2016-01-01</p> <p>We propose a novel “tree-averaging” model that utilizes the ensemble of classification and regression trees (CART). Each constituent tree is estimated with a subset of similar data. We treat this grouping of subsets as Bayesian Ensemble Trees (BET) and model them as a Dirichlet process. We show that BET determines the optimal number of trees by adapting to the data heterogeneity. Compared with the other ensemble methods, BET requires far fewer trees and shows equivalent prediction accuracy using weighted averaging. Moreover, each tree in BET provides a variable selection criterion and interpretation for each subset. We developed an efficient estimation procedure with improved estimation strategies in both CART and mixture models.
We demonstrate these advantages of BET with simulations and illustrate the approach with a real-world data example involving regression of lung function measurements obtained from patients with cystic fibrosis. Supplemental materials are available online. PMID:27524872</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5024108','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5024108"><span>Controllable quantum dynamics of inhomogeneous nitrogen-vacancy center ensembles coupled to superconducting resonators</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Song, Wan-lu; Yang, Wan-li; Yin, Zhang-qi; Chen, Chang-yong; Feng, Mang</p> <p>2016-01-01</p> <p>We explore controllable quantum dynamics of a hybrid system, which consists of an array of mutually coupled superconducting resonators (SRs) with each containing a nitrogen-vacancy center spin ensemble (NVE) in the presence of inhomogeneous broadening. We focus on a three-site model, which, compared with the two-site case, shows more complicated and richer dynamical behavior, and displays a series of damped oscillations under various experimental situations, reflecting the intricate balance and competition between the NVE-SR collective coupling and the adjacent-site photon hopping. Particularly, we find that the inhomogeneous broadening of the spin ensemble can suppress the population transfer between the SR and the local NVE. In this context, although the inhomogeneous broadening of the spin ensemble diminishes entanglement among the NVEs, optimal entanglement, characterized by averaging the lower bound of concurrence, could be achieved through accurately adjusting the tunable parameters.
PMID:27627994</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24110485','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24110485"><span>An ensemble rank learning approach for gene prioritization.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Po-Feng; Soo, Von-Wun</p> <p>2013-01-01</p> <p>Several different computational approaches have been developed to solve the gene prioritization problem. We use ensemble boosting learning techniques to combine various computational approaches for gene prioritization in order to improve the overall performance. In particular, we add a heuristic weighting function to the RankBoost algorithm according to: 1) the absolute ranks generated by the adopted methods for a certain gene, and 2) the ranking relationship between all gene pairs from each prioritization result. We select 13 known prostate cancer genes in the OMIM database as the training set and protein-coding gene data in the HGNC database as the test set. We adopt the leave-one-out strategy for the ensemble rank-boosting learning.
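Combining several prioritization rankings can be sketched with a weighted Borda count, a much simpler aggregation than the RankBoost-based scheme described above; the gene names below are placeholders, not genes from the study:

```python
def weighted_borda(rankings, weights):
    """Aggregate several ranked gene lists (best gene first) into a single
    consensus ranking via a weighted Borda count: each list awards
    (list length - position) points to a gene, scaled by the list's weight."""
    scores = {}
    for ranking, weight in zip(rankings, weights):
        for position, gene in enumerate(ranking):
            scores[gene] = scores.get(gene, 0.0) + weight * (len(ranking) - position)
    return sorted(scores, key=scores.get, reverse=True)
```

Here the weights would come from each base method's measured skill, which is the role the heuristic weighting function plays in the boosting scheme above.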
The experimental results show that our ensemble learning approach outperforms the four gene-prioritization methods in the ToppGene suite in the ranking results for the 13 known genes, in terms of mean average precision, ROC and AUC measures.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28036236','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28036236"><span>Ensemble Perception of Dynamic Emotional Groups.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Elias, Elric; Dyer, Michael; Sweeny, Timothy D</p> <p>2017-02-01</p> <p>Crowds of emotional faces are ubiquitous, so much so that the visual system utilizes a specialized mechanism known as ensemble coding to see them. In addition to being proximally close, members of emotional crowds, such as a laughing audience or an angry mob, often behave together. The manner in which crowd members behave, in sync or out of sync, may be critical for understanding their collective affect. Are ensemble mechanisms sensitive to these dynamic properties of groups? Here, observers estimated the average emotion of a crowd of dynamic faces. The members of some crowds changed their expressions synchronously, whereas individuals in other crowds acted asynchronously. Observers perceived the emotion of a synchronous group more precisely than the emotion of an asynchronous crowd or even a single dynamic face.
These results demonstrate that ensemble representation is particularly sensitive to coordinated behavior, and they suggest that shared behavior is critical for understanding emotion in groups.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1328565-stable-discrete-representation-relativistically-drifting-plasmas','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1328565-stable-discrete-representation-relativistically-drifting-plasmas"><span>Stable discrete representation of relativistically drifting plasmas</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Kirchen, M.; Lehe, R.; Godfrey, B. B.; ...</p> <p>2016-10-10</p> <p>Representing the electrodynamics of relativistically drifting particle ensembles in discrete, co-propagating Galilean coordinates enables the derivation of a Particle-In-Cell algorithm that is intrinsically free of the numerical Cherenkov instability for plasmas flowing at a uniform velocity. Application of the method is shown by modeling plasma accelerators in a Lorentz-transformed optimal frame of reference.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1328565-stable-discrete-representation-relativistically-drifting-plasmas','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1328565-stable-discrete-representation-relativistically-drifting-plasmas"><span>Stable discrete representation of relativistically drifting plasmas</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Kirchen, M.; Lehe, R.; Godfrey, B. 
B.</p> <p></p> <p>Representing the electrodynamics of relativistically drifting particle ensembles in discrete, co-propagating Galilean coordinates enables the derivation of a Particle-In-Cell algorithm that is intrinsically free of the numerical Cherenkov instability for plasmas flowing at a uniform velocity. Application of the method is shown by modeling plasma accelerators in a Lorentz-transformed optimal frame of reference.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29240972','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29240972"><span>Evidence for Dynamic Chemical Kinetics at Individual Molecular Ruthenium Catalysts.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Easter, Quinn T; Blum, Suzanne A</p> <p>2018-02-05</p> <p>Catalytic cycles are typically depicted as possessing time-invariant steps with fixed rates. Yet the true behavior of individual catalysts with respect to time is unknown, hidden by the ensemble averaging inherent to bulk measurements. Evidence is presented for variable chemical kinetics at individual catalysts, with a focus on ring-opening metathesis polymerization catalyzed by the second-generation Grubbs' ruthenium catalyst. Fluorescence microscopy is used to probe the chemical kinetics of the reaction because the technique possesses sufficient sensitivity for the detection of single chemical reactions. Insertion reactions in submicron regions likely occur at groups of many (not single) catalysts, yet not so many that their unique kinetic behavior is ensemble averaged. © 2018 Wiley-VCH Verlag GmbH & Co. 
KGaA, Weinheim.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1393517','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1393517"><span>A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr</p> <p></p> <p>We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy. 
This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1393517-performance-analysis-ensemble-averaging-high-fidelity-turbulence-simulations-strong-scaling-limit','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1393517-performance-analysis-ensemble-averaging-high-fidelity-turbulence-simulations-strong-scaling-limit"><span>A performance analysis of ensemble averaging for high fidelity turbulence simulations at the strong scaling limit</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Makarashvili, Vakhtang; Merzari, Elia; Obabko, Aleksandr; ...</p> <p>2017-06-07</p> <p>We analyze the potential performance benefits of estimating expected quantities in large eddy simulations of turbulent flows using true ensembles rather than ergodic time averaging. Multiple realizations of the same flow are simulated in parallel, using slightly perturbed initial conditions to create unique instantaneous evolutions of the flow field. Each realization is then used to calculate statistical quantities. Provided each instance is sufficiently de-correlated, this approach potentially allows considerable reduction in the time to solution beyond the strong scaling limit for a given accuracy.
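The ensemble-averaging strategy just described (many slightly perturbed realizations run in parallel, statistics pooled across them) can be sketched with a toy stochastic process. This is an illustration of the idea only, not Nek5000: the discrete Ornstein-Uhlenbeck-style update and all parameters are arbitrary stand-ins for a turbulent flow field.

```python
# Toy ensemble averaging: M perturbed realizations of the same random
# process are generated independently and their statistics pooled,
# instead of time-averaging one very long run.  All parameters are
# illustrative assumptions.
import random

def realization(steps, seed, x0=0.0, theta=0.2, sigma=0.5):
    rng = random.Random(seed)
    x = x0 + 1e-6 * seed          # slightly perturbed initial condition
    samples = []
    for _ in range(steps):
        x += -theta * x + sigma * rng.gauss(0.0, 1.0)
        samples.append(x)
    return samples

M, steps = 64, 200
ensemble = [realization(steps, seed) for seed in range(M)]
# Pool the (assumed de-correlated) realizations for the mean statistic:
pooled_mean = sum(sum(r) for r in ensemble) / (M * steps)
print(round(pooled_mean, 2))  # near the true mean of 0 for large M
```

Because the M realizations run concurrently, the wall-clock time to a given statistical accuracy can drop below what a single ergodic time average allows, which is the performance argument the record makes.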
This study focuses on the theory and implementation of the methodology in Nek5000, a massively parallel open-source spectral element code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19760010865','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19760010865"><span>A strictly Markovian expansion for plasma turbulence theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jones, F. C.</p> <p>1976-01-01</p> <p>The collision operator that appears in the equation of motion for a particle distribution function that was averaged over an ensemble of random Hamiltonians is non-Markovian. It is non-Markovian in that it involves a propagated integral over the past history of the ensemble averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. An expansion is derived for the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest nontrivial order.
The validity of this expansion is seen to be the same as that of the standard quasilinear expansion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015GMDD....8.9925P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015GMDD....8.9925P"><span>Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.</p> <p>2015-11-01</p> <p>A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. 
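The "simple averaging weighted by the aggregate score" used in the ice-sheet large-ensemble record above reduces to a score-weighted mean over runs. The sketch below uses invented scores and sea-level values purely for illustration; the real analysis weights 625 model runs by their model-data misfit score.

```python
# Score-weighted ensemble averaging, as described for the ice-sheet
# large ensemble: each run's predicted sea-level-rise contribution is
# weighted by how well that run fits the calibration data.
# Scores and sea-level values below are hypothetical.
runs = [
    {"score": 0.9, "slr_m": 3.2},
    {"score": 0.4, "slr_m": 5.1},
    {"score": 0.7, "slr_m": 3.8},
    {"score": 0.1, "slr_m": 8.0},
]
total = sum(r["score"] for r in runs)
weighted_slr = sum(r["score"] * r["slr_m"] for r in runs) / total
print(round(weighted_slr, 2))  # → 3.99
```

Note how the poorly fitting run (score 0.1) barely moves the estimate; the record's point is that this simple scheme agrees with Gaussian-process emulation and MCMC calibration when the ensemble samples the parameter space densely enough.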
In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23094935','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23094935"><span>Improved momentum-transfer theory for ion mobility. 1. Derivation of the fundamental equation.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Siems, William F; Viehland, Larry A; Hill, Herbert H</p> <p>2012-11-20</p> <p>For the first time the fundamental ion mobility equation is derived by a bottom-up procedure, with N real atomic ion-atomic neutral collisions replaced by N repetitions of an average collision. Ion drift velocity is identified as the average of all pre- and postcollision velocities in the field direction. To facilitate velocity averaging, collisions are sorted into classes that "cool" and "heat" the ion. Averaging over scattering angles establishes mass-dependent relationships between pre- and postcollision velocities for the cooling and heating classes, and a combined expression for drift velocity is obtained by weighted addition according to relative frequencies of the cooling and heating encounters. At zero field this expression becomes identical to the fundamental low-field ion mobility equation. The bottom-up derivation identifies the low-field drift velocity as 3/4 of the average precollision ion velocity in the field direction and associates the passage from low-field to high-field conditions with the increasing dominance of "cooling" collisions over "heating" collisions. 
Most significantly, the analysis provides a direct path for generalization to fields of arbitrary strength.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015APS..DMP.Q1087L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015APS..DMP.Q1087L"><span>Photon number dependent group velocity in vacuum induced transparency</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lauk, Nikolai; Fleischhauer, Michael</p> <p>2015-05-01</p> <p>Vacuum induced transparency (VIT) is an effect which occurs in an ensemble of three level atoms in a Λ configuration that interact with two quantized fields. Coupling of one transition to a cavity mode induces transparency for the second field on the otherwise opaque transition similar to the well known EIT effect. In the strong coupling regime even an empty cavity leads to transparency, in contrast to EIT where the presence of a strong control field is required. This transparency is accompanied by a reduction of the group velocity for the propagating field. However, unlike in EIT the group velocity in VIT depends on the number of incoming photons, i.e. different photon number components propagate with different velocities. Here we investigate the possibility of using this effect to spatially separate different photon number components of an initially coherent pulse. 
We present the results of our calculations and discuss a possible experimental realization.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA569591','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA569591"><span>A Regional Seismic Travel Time Model for North America</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2010-09-01</p> <p>velocity at the Moho, the mantle velocity gradient, and the average crustal velocity. After tomography across Eurasia, rigorous tests find that Pn travel time residuals are reduced...and S-wave velocity in the crustal layers and in the upper mantle. A good prior model is essential because the RSTT tomography inversion is invariably</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_17 --> <div id="page_18" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="341"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvE..97c2205L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvE..97c2205L"><span>Transition from normal to ballistic diffusion in a one-dimensional impact system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Livorati, André L. P.; Kroetz, Tiago; Dettmann, Carl P.; Caldas, Iberê L.; Leonel, Edson D.</p> <p>2018-03-01</p> <p>We characterize a transition from normal to ballistic diffusion in a bouncing ball dynamics. The system is composed of a particle, or an ensemble of noninteracting particles, experiencing elastic collisions with a heavy and periodically moving wall under the influence of a constant gravitational field. The dynamics lead to a mixed phase space where chaotic orbits have a free path to move along the velocity axis, presenting a normal diffusion behavior. Depending on the control parameter, one can observe the presence of featured resonances, known as accelerator modes, that lead to a ballistic growth of velocity. Through statistical and numerical analysis of the velocity of the particle, we are able to characterize a transition between the two regimes, where transport properties were used to characterize the scenario of the ballistic regime.
Also, in an analysis of the probability of an orbit to reach an accelerator mode as a function of the velocity, we observe a competition between the normal and ballistic transport in the midrange velocity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29776143','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29776143"><span>Transition from normal to ballistic diffusion in a one-dimensional impact system.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Livorati, André L P; Kroetz, Tiago; Dettmann, Carl P; Caldas, Iberê L; Leonel, Edson D</p> <p>2018-03-01</p> <p>We characterize a transition from normal to ballistic diffusion in a bouncing ball dynamics. The system is composed of a particle, or an ensemble of noninteracting particles, experiencing elastic collisions with a heavy and periodically moving wall under the influence of a constant gravitational field. The dynamics lead to a mixed phase space where chaotic orbits have a free path to move along the velocity axis, presenting a normal diffusion behavior. Depending on the control parameter, one can observe the presence of featured resonances, known as accelerator modes, that lead to a ballistic growth of velocity. Through statistical and numerical analysis of the velocity of the particle, we are able to characterize a transition between the two regimes, where transport properties were used to characterize the scenario of the ballistic regime. 
Also, in an analysis of the probability of an orbit to reach an accelerator mode as a function of the velocity, we observe a competition between the normal and ballistic transport in the midrange velocity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19950017027','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19950017027"><span>A time-accurate finite volume method valid at all flow velocities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kim, S.-W.</p> <p>1993-01-01</p> <p>A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. Comparisons of three generally accepted time-advancing schemes, i.e., Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) scheme, are made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. Calculated results show that the ITA is the most stable numerically and yields the most accurate results. The SMAC is the most efficient computationally and is as stable as the ITA. It is shown that the PISO is the most weakly convergent and it exhibits an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step that causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations.
The present numerical method that incorporates the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on the Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil. The calculated streaklines are in very good comparison with the experimentally obtained smoke picture. The calculated turbulent viscosity contours show that the transition from laminar to turbulent state and the relaminarization occur widely in space as well as in time. The ensemble-averaged velocity profiles are also in good agreement with the measured data and the good comparison indicates that the numerical method as well as the multiple-time-scale turbulence equations successfully predict the unsteady transitional turbulence field. The chemical reactions for the hydrogen in the vitiated supersonic airstream are described using 9 chemical species and 48 reaction-steps. A fast-chemistry model cannot describe the fine details (such as the instability) of chemically reacting flows, while reduced chemical kinetics cannot be used confidently due to the uncertainty contained in the reaction mechanisms. However, the use of a detailed finite rate chemistry may make it difficult to obtain a fully converged solution due to the coupling between the large number of flow, turbulence, and chemical equations. The numerical results obtained in the present study are in good agreement with the measured data.
The good comparison is attributed to the numerical method that can yield strongly converged results for the reacting flow and to the use of the multiple-time-scale turbulence equations that can accurately describe the mixing of the fuel and the oxidant.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20020023957','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20020023957"><span>Experimental Investigation of Transition to Turbulence as Affected By Passing Wakes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Kaszeta, Richard W.; Ashpis, David E.; Simon, Terrence W.</p> <p>2001-01-01</p> <p>This paper presents experimental results from a study of the effects of periodically passing wakes upon laminar-to-turbulent transition and separation in a low-pressure turbine passage. The test section geometry is designed to simulate unsteady wakes in turbine engines for studying their effects on boundary layers and separated flow regions over the suction surface by using a single suction surface and a single pressure surface to simulate a single turbine blade passage. Single-wire, thermal anemometry techniques are used to measure time-resolved and phase averaged, wall-normal profiles of velocity, turbulence intensity and intermittency at multiple streamwise locations over the turbine airfoil suction surface. These data are compared to steady-state wake-free data collected in the same geometry to identify the effects of wakes upon laminar-to-turbulent transition. Results are presented for flows with a Reynolds number based on suction surface length and stage exit velocity of 50,000 and an approach flow turbulence intensity of 2.5%. 
While both existing design and experimental data are primarily concerned with higher Reynolds number flows (Re greater than 100,000), recent advances in gas turbine engines, and the accompanying increase in laminar and transitional flow effects, have made low-Re research increasingly important. From the presented data, the effects of passing wakes on transition and separation in the boundary layer, due to both increased turbulence levels and varying streamwise pressure gradients are presented. The results show how the wakes affect transition. The wakes affect the flow by virtue of their difference in turbulence levels and scales from those of the free-stream and by virtue of their ensemble-averaged velocity deficits, relative to the free-stream velocity, and the concomitant changes in angle of attack and temporal pressure gradients. The relationships between the velocity oscillations in the freestream and the unsteady velocity profile shapes in the near-wall flow are described. In this discussion is support for the theory that bypass transition is a response of the near-wall viscous layer to pressure fluctuations imposed upon it from the free-stream flow. Recent transition models are based on that premise. The data also show a significant lag between when the wake is present over the surface and when transition begins.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3524795','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3524795"><span>Modelling dynamics in protein crystal structures by ensemble refinement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Burnley, B Tom; Afonine, Pavel V; Adams, Paul D; Gros, Piet</p> <p>2012-01-01</p> <p>Single-structure models derived from X-ray data do not adequately account for the inherent, functionally important dynamics of protein molecules. We generated ensembles of structures by time-averaged refinement, where local molecular vibrations were sampled by molecular-dynamics (MD) simulation whilst global disorder was partitioned into an underlying overall translation–libration–screw (TLS) model. Modeling of 20 protein datasets at 1.1–3.1 Å resolution reduced cross-validated Rfree values by 0.3–4.9%, indicating that ensemble models fit the X-ray data better than single structures. The ensembles revealed that, while most proteins display a well-ordered core, some proteins exhibit a ‘molten core’ likely supporting functionally important dynamics in ligand binding, enzyme activity and protomer assembly. Order–disorder changes in HIV protease indicate a mechanism of entropy compensation for ordering the catalytic residues upon ligand binding by disordering specific core residues. Thus, ensemble refinement extracts dynamical details from the X-ray data that allow a more comprehensive understanding of structure–dynamics–function relationships.
DOI: http://dx.doi.org/10.7554/eLife.00311.001 PMID:23251785</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1334906-selecting-classification-ensemble-detecting-process-drift-evolving-data-stream','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1334906-selecting-classification-ensemble-detecting-process-drift-evolving-data-stream"><span>Selecting a Classification Ensemble and Detecting Process Drift in an Evolving Data Stream</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Heredia-Langner, Alejandro; Rodriguez, Luke R.; Lin, Andy</p> <p>2015-09-30</p> <p>We characterize the commercial behavior of a group of companies in a common line of business using a small ensemble of classifiers on a stream of records containing commercial activity information. This approach is able to effectively find a subset of classifiers that can be used to predict company labels with reasonable accuracy. Performance of the ensemble, its error rate under stable conditions, can be characterized using an exponentially weighted moving average (EWMA) statistic. The behavior of the EWMA statistic can be used to monitor a record stream from the commercial network and determine when significant changes have occurred. Resultsmore » indicate that larger classification ensembles may not necessarily be optimal, pointing to the need to search the combinatorial classifier space in a systematic way. Results also show that current and past performance of an ensemble can be used to detect when statistically significant changes in the activity of the network have occurred. 
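The EWMA monitoring scheme described in the record above can be sketched in a few lines. The error stream, smoothing constant, baseline, and control limit below are illustrative choices, not the paper's values: under stable conditions the statistic hovers near the baseline error rate, and a sustained shift pushes it past the limit.

```python
# Minimal EWMA monitor of a classifier-ensemble error-rate stream, in the
# spirit of the record above.  lambda, baseline, and limit are assumptions.
def ewma_drift(errors, lam=0.2, baseline=0.1, limit=0.2):
    """Return the first index where the EWMA exceeds `limit`, else None."""
    z = baseline
    for i, e in enumerate(errors):
        z = lam * e + (1 - lam) * z   # exponentially weighted moving average
        if z > limit:
            return i
    return None

stable = [0.1] * 30            # error rate matching the baseline: no alarm
drifted = stable + [0.5] * 10  # process change: error rate jumps to 50%
assert ewma_drift(stable) is None
print(ewma_drift(drifted))     # → 31 (alarm fires shortly after the change)
```

The lag of one or two observations after the change is the usual EWMA trade-off: smaller lambda smooths noise but delays detection.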
The dataset used in this work contains tens of thousands of high level commercial activity records with continuous and categorical variables and hundreds of labels, making classification challenging.« less</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4948663','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4948663"><span>A benchmark for reaction coordinates in the transition path ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2016-01-01</p> <p>The molecular mechanism of a reaction is embedded in its transition path ensemble, the complete collection of reactive trajectories. Utilizing the information in the transition path ensemble alone, we developed a novel metric, which we termed the emergent potential energy, for distinguishing reaction coordinates from the bath modes. The emergent potential energy can be understood as the average energy cost for making a displacement of a coordinate in the transition path ensemble. Where displacing a bath mode invokes essentially no cost, it costs significantly to move the reaction coordinate. Based on some general assumptions of the behaviors of reaction and bath coordinates in the transition path ensemble, we proved theoretically with statistical mechanics that the emergent potential energy could serve as a benchmark of reaction coordinates and demonstrated its effectiveness by applying it to a prototypical system of biomolecular dynamics. Using the emergent potential energy as guidance, we developed a committor-free and intuition-independent method for identifying reaction coordinates in complex systems. We expect this method to be applicable to a wide range of reaction processes in complex biomolecular systems. 
PMID:27059559</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1814752W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1814752W"><span>Statistical uncertainty of extreme wind storms over Europe derived from a probabilistic clustering technique</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Walz, Michael; Leckebusch, Gregor C.</p> <p>2016-04-01</p> <p>Extratropical wind storms pose one of the most dangerous and loss-intensive natural hazards for Europe. However, due to only 50 years of high-quality observational data, it is difficult to assess the statistical uncertainty of these sparse events just based on observations. Over the last decade seasonal ensemble forecasts have become indispensable in quantifying the uncertainty of weather prediction on seasonal timescales. In this study seasonal forecasts are used in a climatological context: By making use of the up to 51 ensemble members, a broad and physically consistent statistical base can be created. This base can then be used to assess the statistical uncertainty of extreme wind storm occurrence more accurately. In order to determine the statistical uncertainty of storms with different paths of progression, a probabilistic clustering approach using regression mixture models is used to objectively assign storm tracks (either based on core pressure or on extreme wind speeds) to different clusters. The advantage of this technique is that the entire lifetime of a storm is considered for the clustering algorithm. Quadratic curves are found to describe the storm tracks most accurately. Three main clusters (diagonal, horizontal or vertical progression of the storm track) can be identified, each of which has its own particular features.
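The track-clustering idea in the record above (summarize each storm track by a fitted quadratic, then group tracks by shape) can be illustrated with a deliberately simplified sketch. A real regression mixture model fits cluster-specific quadratics probabilistically over whole lifetimes; here we only extract crude per-track slope and curvature from finite differences, and the two synthetic tracks are invented.

```python
# Simplified stand-in for regression-mixture clustering of storm tracks:
# summarize each track (latitude vs. time, say) by slope and curvature,
# then separate track shapes by those coefficients.  Tracks are synthetic.
def quad_coeffs(ys):
    """Average first and second finite differences ~ slope and curvature."""
    n = len(ys)
    slope = (ys[-1] - ys[0]) / (n - 1)
    curv = sum(ys[i - 1] - 2 * ys[i] + ys[i + 1] for i in range(1, n - 1)) / (n - 2)
    return slope, curv

horizontal = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1]   # little net latitude change
diagonal = [0.0, 1.0, 2.1, 3.0, 4.2, 5.0]      # steady poleward drift
s_h, _ = quad_coeffs(horizontal)
s_d, _ = quad_coeffs(diagonal)
print(abs(s_h) < 0.5 < s_d)  # → True: slope separates the two track shapes
```

Fitting whole-lifetime quadratics, as the record does, keeps a storm's entire path in the cluster assignment rather than a single snapshot of its position.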
Basic storm features like average velocity and duration are calculated and compared for each cluster. The main benefit of this clustering technique, however, is to evaluate whether the clusters show different degrees of uncertainty, e.g. more (less) spread for tracks approaching Europe horizontally (diagonally). This statistical uncertainty is compared for different seasonal forecast products.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1999PhDT.......136C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1999PhDT.......136C"><span>Experimental study of heat and mass transfer in a buoyant countercurrent exchange flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Conover, Timothy Allan</p> <p></p> <p>Buoyant countercurrent exchange flow occurs in a vertical vent through which two miscible fluids communicate, with the higher-density fluid residing above the lower-density fluid, separated by the vented partition. The buoyancy-driven zero net volumetric flow through the vent transports any passive scalars, such as heat and toxic fumes, between the two compartments as the fluids seek thermodynamic and gravitational equilibrium. The plume rising from the vent into the top compartment resembles a pool fire plume. In some circumstances both countercurrent flows and pool fires can "puff" periodically, with distinct frequencies. One experimental test section containing fresh water in the top compartment and brine (NaCl solution) in the bottom compartment provided a convenient, idealized flow for study. This brine flow decayed in time as the concentrations approached equilibrium. A second test section contained fresh water that was cooled by heat exchangers above and heated by electrical elements below and operated steadily, allowing more time for data acquisition. 
Brine transport was reduced to a buoyancy-scaled flow coefficient, Q*, and heat transfer was reduced to an analogous coefficient, H*. Results for vent diameter D = 5.08 cm were consistent between test sections and with the literature. Some results for D = 2.54 cm were inconsistent, suggesting viscosity and/or molecular diffusion of heat become important at smaller scales. Laser Doppler Velocimetry was used to measure velocity fields in both test sections, and in thermal flow a small thermocouple measured temperature simultaneously with velocity. Measurement fields were restricted to the plume base region, above the vent proper. In baseline periodic flow, instantaneous velocity and temperature were ensemble averaged, producing a movie of the average variation of each measure during a puffing flow cycle. The temperature movie revealed the previously unknown cold core of the puff during its early development. The renewal-length model for puffing frequency of pool fire plumes was extended to puffing countercurrent flows by estimating inflow dilution. Puffing frequencies at several conditions were reduced to a Strouhal number based on dilute plume density. Results for D = 5.08 cm compared favorably to published measurements of puffing pool fires, suggesting that the two different flows obey the same periodic dynamic process.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2001APS..MARW32012B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2001APS..MARW32012B"><span>Variety of Behavior of Equity Returns in Financial Markets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bonanno, Giovanni; Lillo, Fabrizio; Mantegna, Rosario N.</p> <p>2001-03-01</p> <p>The price dynamics of a set of equities traded in an efficient market is quite complex. 
It consists of almost nonredundant time series which have (i) long-range correlated volatility and (ii) cross-correlation between each pair of equities. We perform a study of the statistical properties of an ensemble of equity returns which helps to elucidate the nature and role of time and ensemble correlation. Specifically, we investigate a statistical ensemble of daily returns of n equities traded in United States financial markets. For each trading day of our database, we study the ensemble return distribution. We find that a typical ensemble return distribution exists in most of the trading days [1], with the exception of crash and rally days and of the days following these extreme events [2]. We analyze each ensemble return distribution by extracting its first two central moments. We call the second moment of the ensemble return distribution the variety of the market. We choose this term because high variety implies varied behavior of the equity returns in the considered day. We observe that the mean return and the variety fluctuate in time and are stochastic processes themselves. The variety is a long-range correlated stochastic process. Customary time-averaged statistical properties of time series of stock returns are also considered. In general, time-averaged and portfolio-averaged returns have different statistical properties [1]. We infer from these differences information about the relative strength of correlation between equities and between different trading days. We also compare our empirical results with those predicted by the single-index model and we conclude that this simple model is unable to explain the statistical properties of the second moment of the ensemble return distribution. Correlations between pairs of equities are continuously present in the dynamics of a stock portfolio. Hence, it is relevant to investigate pair correlation in an efficient and original way. 
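The first two central moments just described are straightforward to compute from a day-by-equity return matrix; a minimal sketch (with a synthetic matrix standing in for the real database):

```python
import numpy as np

def mean_and_variety(returns):
    # returns: (n_days, n_equities) matrix of daily returns.
    # For each trading day, the ensemble return distribution is
    # summarized by its first two central moments: the mean return
    # and the "variety" (the cross-sectional standard deviation).
    mean_return = returns.mean(axis=1)
    variety = returns.std(axis=1)
    return mean_return, variety
```

A day on which all equities move together has low variety; a day on which they disperse widely has high variety, matching the usage in the abstract.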
We propose to investigate these correlations at daily and intradaily time horizons with a method based on concepts of random frustrated systems. Specifically, a hierarchical organization of the investigated equities is obtained by determining a metric distance between stocks and by investigating the properties of the subdominant ultrametric associated with it [3]. The high-frequency cross-correlation existing between pairs of equities is investigated in a set of 100 stocks traded in US equity markets. The decrease of the cross-correlation between the equity returns observed for diminishing time horizons progressively changes the nature of the hierarchical structure associated with each time horizon [4]. The nature of the correlation present between pairs of time series of equity returns collected in a portfolio has a strong influence on the variety of the market. We finally discuss the relation between pair correlation and the variety of an ensemble return distribution. References [1] Fabrizio Lillo and Rosario N. Mantegna, Variety and volatility in financial markets, Phys. Rev. E 62, 6126-6134 (2000). [2] Fabrizio Lillo and Rosario N. Mantegna, Symmetry alteration of ensemble return distribution in crash and rally days of financial market, Eur. Phys. J. B 15, 603-606 (2000). [3] Rosario N. Mantegna, Hierarchical structure in financial markets, Eur. Phys. J. B 11, 193-197 (1999). [4] Giovanni Bonanno, Fabrizio Lillo, and Rosario N. 
Mantegna, High-frequency cross-correlation in a set of stocks, Quantitative Finance (in press).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2006AGUFM.H43A0474M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2006AGUFM.H43A0474M"><span>Ensemble Solute Transport in 2-D Operator-Stable Random Fields</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Monnig, N. D.; Benson, D. A.</p> <p>2006-12-01</p> <p>The heterogeneous velocity field that exists at many scales in an aquifer will typically cause a dissolved solute plume to grow at a rate faster than Fick's Law predicts. Some statistical model must be adopted to account for the aquifer structure that engenders the velocity heterogeneity. A fractional Brownian motion (fBm) model has been shown to create the long-range correlation that can produce continually faster-than-Fickian plume growth. Previous fBm models have assumed isotropic scaling (defined here by a scalar Hurst coefficient). Motivated by field measurements of aquifer hydraulic conductivity, recent techniques were developed to construct random fields with anisotropic scaling, with a self-similarity parameter that is defined by a matrix. The growth of ensemble plumes is analyzed for transport through 2-D "operator-stable" fBm hydraulic conductivity (K) fields. Both the longitudinal and transverse Hurst coefficients are important to plume growth rates and to the timing and duration of breakthrough. Smaller Hurst coefficients in the transverse direction lead to more "continuity" or stratification in the direction of transport. The result is continually faster-than-Fickian growth rates, highly non-Gaussian ensemble plumes, and a longer tail early in the breakthrough curve. 
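For intuition, an fBm-like ln(K) field can be synthesized by spectral (Fourier-filtering) methods. The operator-stable, matrix-Hurst construction of the abstract is more involved, so the sketch below is limited to the scalar-Hurst, isotropic case that it generalizes:

```python
import numpy as np

def fbm_like_field(n, hurst, seed=0):
    # Approximate 2-D self-similar Gaussian field: white noise shaped
    # in Fourier space by a power-law amplitude |k|^-(H + d/2), d = 2.
    rng = np.random.default_rng(seed)
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.hypot(kx, ky)
    k[0, 0] = np.inf                      # suppress the zero mode
    amplitude = k ** (-(hurst + 1.0))     # exponent H + d/2 with d = 2
    noise = rng.standard_normal((n, n))
    field = np.real(np.fft.ifft2(amplitude * np.fft.fft2(noise)))
    # Normalize to a zero-mean, unit-variance ln(K) field.
    return (field - field.mean()) / field.std()
```

Exponentiating the result gives a lognormal K field; an anisotropic variant would shape kx and ky with different Hurst exponents, which is the direction the operator-stable construction formalizes.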
Contrary to some analytic stochastic theories for monofractal K fields, the plume growth rate never exceeds Mercado's [1967] purely stratified aquifer growth rate of plume apparent dispersivity proportional to mean distance. Apparent super-Mercado growth must be the result of other factors, such as larger plumes corresponding to either a larger initial plume size or greater variance of the ln(K) field.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018E%26ES..115a2016S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018E%26ES..115a2016S"><span>Modelling the average velocity of propagation of the flame front in a gasoline engine with hydrogen additives</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Smolenskaya, N. M.; Smolenskii, V. V.</p> <p>2018-01-01</p> <p>The paper presents models for calculating the average velocity of propagation of the flame front, obtained from the results of experimental studies. Experimental studies were carried out on a single-cylinder gasoline engine UIT-85 with hydrogen additives of up to 6% of the mass of fuel. The article shows the influence of hydrogen addition on the average propagation velocity of the flame front in the main combustion phase. The dependences of the turbulent propagation velocity of the flame front in the second combustion phase on the composition of the mixture and the operating modes are also presented. 
The article shows the influence of the normal combustion rate on the average flame propagation velocity in the third combustion phase.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010AIPC.1207...66G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010AIPC.1207...66G"><span>Stochastic simulation of the spray formation assisted by a high pressure</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gorokhovski, M.; Chtab-Desportes, A.; Voloshina, I.; Askarova, A.</p> <p>2010-03-01</p> <p>The stochastic model of spray formation in the vicinity of the injector and in the far field has been described and assessed by comparison with measurements in Diesel-like conditions. In the proposed mesh-free approach, the 3D configuration of the continuous liquid core is simulated stochastically by an ensemble of spatial trajectories of specifically introduced stochastic particles. The parameters of the stochastic process are presumed from the physics of primary atomization. The spray formation model consists of computing the spatial distribution of the probability of finding the non-fragmented liquid jet in the near-to-injector region. This model is combined with the KIVA II computation of an atomizing Diesel spray in two ways. First, simultaneously with the gas-phase RANS computation, the ensemble of stochastic particles is tracked and the probability field of their positions is calculated, which is used for sampling the initial locations of primary blobs. Second, the velocity increment of the gas due to the liquid injection is computed from the mean volume fraction of the simulated liquid core. Two novelties are proposed in the secondary atomization modeling. The first one is due to unsteadiness of the injection velocity. 
When the injection velocity increment in time is decreasing, supplementary breakup may be induced; the critical Weber number is therefore based on this increment. Second, a new stochastic model of the secondary atomization is proposed, in which the intermittent turbulent stretching is taken into account as the main mechanism. The measurements reported by Arcoumanis et al. (time history of the mean axial centre-line droplet velocity and of the centre-line Sauter Mean Diameter) are compared with computations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29201495','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29201495"><span>Determination of velocity correction factors for real-time air velocity monitoring in underground mines.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhou, Lihong; Yuan, Liming; Thomas, Rick; Iannacchione, Anthony</p> <p>2017-12-01</p> <p>When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. 
The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer ® . The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5709814','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5709814"><span>Determination of velocity correction factors for real-time air velocity monitoring in underground mines</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yuan, Liming; Thomas, Rick; Iannacchione, Anthony</p> <p>2017-01-01</p> <p>When there are installations of air velocity sensors in the mining industry for real-time airflow monitoring, a problem exists with how the monitored air velocity at a fixed location corresponds to the average air velocity, which is used to determine the volume flow rate of air in an entry with the cross-sectional area. Correction factors have been practically employed to convert a measured centerline air velocity to the average air velocity. However, studies on the recommended correction factors of the sensor-measured air velocity to the average air velocity at cross sections are still lacking. A comprehensive airflow measurement was made at the Safety Research Coal Mine, Bruceton, PA, using three measuring methods including single-point reading, moving traverse, and fixed-point traverse. The air velocity distribution at each measuring station was analyzed using an air velocity contour map generated with Surfer®. The correction factors at each measuring station for both the centerline and the sensor location were calculated and are discussed. 
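The conversion described in these records reduces to simple arithmetic; a sketch with illustrative numbers, not the measured factors from the Safety Research Coal Mine:

```python
def correction_factor(v_average, v_sensor):
    # Factor k that converts a fixed-point (e.g. centerline) sensor
    # reading into the cross-sectional average air velocity:
    #     v_average = k * v_sensor
    return v_average / v_sensor

def volume_flow_rate(v_sensor, k, area):
    # Volume flow rate of air through the entry (m^3/s) from the
    # sensor reading, the correction factor, and the cross-sectional
    # area of the entry (m^2).
    return k * v_sensor * area
```

In practice k is calibrated per measuring station from a traverse-averaged velocity, as the study describes, and then applied to subsequent real-time sensor readings.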
PMID:29201495</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFM.H13J1551C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFM.H13J1551C"><span>Short-term ensemble radar rainfall forecasts for hydrological applications</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Codo de Oliveira, M.; Rico-Ramirez, M. A.</p> <p>2016-12-01</p> <p>Flooding is a very common natural disaster around the world, putting local populations and economies at risk. Forecasting floods several hours ahead and issuing warnings are of major importance to permit a proper response in emergency situations. However, it is important to know the uncertainties related to the rainfall forecasting in order to produce more reliable forecasts. Nowcasting models (short-term rainfall forecasts) are able to produce high spatial and temporal resolution predictions that are useful in hydrological applications. Nonetheless, they are subject to uncertainties mainly due to the nowcasting model used, errors in radar rainfall estimation, the temporal development of the velocity field, and the fact that precipitation processes such as growth and decay are not taken into account. In this study an ensemble generation scheme using rain gauge data as a reference to estimate radar errors is used to produce forecasts with up to 3 h lead time. The ensembles try to assess in a realistic way the residual uncertainties that remain even after correction algorithms are applied to the radar data. The ensembles produced are compared with those from a stochastic ensemble generator. Furthermore, the rainfall forecast output was used as an input to a hydrodynamic sewer network model and also to a hydrological model for catchments of different sizes in northern England. 
A comparative analysis was carried out to assess how the radar uncertainties propagate into these models. The first named author is grateful to CAPES - Ciencia sem Fronteiras for funding this PhD research.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29342958','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29342958"><span>An Enhanced Method to Estimate Heart Rate from Seismocardiography via Ensemble Averaging of Body Movements at Six Degrees of Freedom.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lee, Hyunwoo; Lee, Hana; Whang, Mincheol</p> <p>2018-01-15</p> <p>Continuous cardiac monitoring has been developed to evaluate cardiac activity outside of clinical environments due to the advancement of novel instruments. Seismocardiography (SCG) is one of the vital components that could form such a monitoring system. Although SCG has been presented with a lower accuracy, this novel cardiac indicator has been steadily proposed as an alternative to traditional methods such as electrocardiography (ECG). Thus, it is necessary to develop an enhanced method by combining the significant cardiac indicators. In this study, the six-axis signals of an accelerometer and gyroscope were measured and integrated by the L2 normalization and multi-dimensional kineticardiography (MKCG) approaches, respectively. The waveforms of the accelerometer and gyroscope were standardized and combined via ensemble averaging, and the heart rate was calculated from the dominant frequency. Thirty participants (15 females) were asked to stand or sit in relaxed and aroused conditions. Their SCG was measured during the task. As a result, the proposed method showed higher accuracy than traditional SCG methods in all measurement conditions. 
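A much-simplified sketch of the fuse-then-dominant-frequency idea: an L2-norm combination of the six axes followed by a spectral peak search in a plausible cardiac band. The band limits and the simple magnitude fusion are assumptions here, not the paper's exact MKCG pipeline:

```python
import numpy as np

def heart_rate_bpm(acc, gyr, fs):
    # acc, gyr: (n, 3) accelerometer and gyroscope signals at fs Hz.
    # Fuse the six axes into one standardized trace, then read the
    # heart rate off the dominant spectral peak within 0.7-3.0 Hz
    # (42-180 beats per minute, an assumed physiological band).
    fused = np.linalg.norm(acc, axis=1) + np.linalg.norm(gyr, axis=1)
    fused = (fused - fused.mean()) / fused.std()
    freqs = np.fft.rfftfreq(len(fused), 1.0 / fs)
    power = np.abs(np.fft.rfft(fused)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

The paper additionally ensemble averages standardized heartbeat waveforms before the frequency estimate, which this sketch omits.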
The three main contributions are as follows: (1) the ensemble averaging enhanced heart rate estimation with the benefits of the six-axis signals; (2) the proposed method was compared with the previous SCG method that employs fewer axes; and (3) the method was tested in various measurement conditions for a more practical application.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013PhRvE..87e2713K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013PhRvE..87e2713K"><span>Improved estimation of anomalous diffusion exponents in single-particle tracking experiments</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kepten, Eldad; Bronshtein, Irena; Garini, Yuval</p> <p>2013-05-01</p> <p>The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. 
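The time-averaged mean square displacement and its log-log slope, the quantities at the heart of this procedure, can be sketched as follows (a minimal estimator that ignores the measurement-noise and heterogeneity corrections the paper develops):

```python
import numpy as np

def tamsd(x, lags):
    # Time-averaged mean square displacement of one trajectory x(t):
    #     delta2(lag) = <(x(t + lag) - x(t))^2>_t
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2) for lag in lags])

def anomalous_exponent(x, lags):
    # Log-log slope of the TAMSD over the given lags: ~1 for normal
    # diffusion, <1 for subdiffusion, >1 for superdiffusion.
    slope, _ = np.polyfit(np.log(lags), np.log(tamsd(x, lags)), 1)
    return slope
```

Ensemble averaging the per-trajectory exponents, as the paper does, then characterizes the whole particle distribution.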
The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011AIPC.1376...87C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011AIPC.1376...87C"><span>Velocity-Vorticity Correlation Structure in Turbulent Channel Flow</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, J.; Pei, J.; She, Z. S.; Hussain, F.</p> <p>2011-09-01</p> <p>We present a new definition of statistical structure — velocity-vorticity correlation structure (VVCS) — based on amplitude distributions of the tensor field of normalized velocity-vorticity correlation (uiωj), and show that it displays the geometry of the statistical structure relevant to a given reference point, and it effectively captures coherent motions in inhomogeneous shear flows. The variation of the extracted objects moving with the reference point yr+ then presents a full picture of statistical structures for the flow, which goes beyond the traditional view of searching for reference-independent structures. Application to turbulent channel flow simulation data at Reτ = 180 demonstrates that the VVCS successfully captures, qualitatively and quantitatively, the near-wall streaks, the streamwise vortices [1,2], and their extensions up to yr+ = 110 with variations of their length and inclination angle. 
More interestingly, the VVCS associated with the streamwise velocity component (particularly ⟨uωx⟩ and ⟨uωz⟩) displays topological change at four distances from the wall (with transitions at yr+≈20, 40, 60, 110), giving rise to a geometrical interpretation of the multi-layer structure of wall-bounded turbulence. Specifically, we find that the VVCS of ⟨uωz⟩ bifurcates at yr+ = 40, with one part attached to the wall and the other near the reference location. The VVCS of ⟨uωx⟩ is blob-like in the center region, quite different from the pair of elongated and inclined objects near the wall. The propagation speeds of the velocity components in the near-wall region, y+ ≤ 10, are found to be characterized by the same streamwise correlation structures of ⟨uωx⟩ and ⟨uωz⟩, whose core is located at y+≈20. As a result, the convection of the velocity fluctuations always reveals constant propagation speeds in the near-wall region. The coherent motions parallel to the wall play an important role in determining the propagation of the velocity fluctuations. This study suggests that a variable set of geometrical structures should be invoked for the study of turbulence structures and for modeling mean flow properties in terms of structures. 
The method and the concept presented here are general for the study of other flow systems (like boundary or mixing layer), as long as ensemble averaging is well-defined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/6912541-interactions-between-moist-heating-dynamics-atmospheric-predictability','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/6912541-interactions-between-moist-heating-dynamics-atmospheric-predictability"><span>Interactions between moist heating and dynamics in atmospheric predictability</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Straus, D.M.; Huntley, M.A.</p> <p>1994-02-01</p> <p>The predictability properties of a fixed heating version of a GCM in which the moist heating is specified beforehand are studied in a series of identical twin experiments. Comparison is made to an identical set of experiments using the control GCM, a five-level R30 version of the COLA GCM. The experiments each contain six ensembles, with a single ensemble consisting of six 30-day integrations starting from slightly perturbed Northern Hemisphere wintertime initial conditions. The moist heating from each integration within a single control ensemble was averaged over the ensemble. This averaged heating (a function of three spatial dimensions and time) was used as the prespecified heating in each member of the corresponding fixed heating ensemble. The errors grow less rapidly in the fixed heating case. The most rapidly growing scales at small times (global wavenumber 6) have doubling times of 3.2 days compared to 2.4 days for the control experiments. The predictability times for the most energetic scales (global wavenumbers 9-12) are about two weeks for the fixed heating experiments, compared to 9 days for the control. 
The ratio of error energy in the fixed heating to the control case falls below 0.5 by day 8, and then gradually increases as the error growth slows in the control case. The growth of errors is described in terms of budgets of error kinetic energy (EKE) and error available potential energy (EAPE) developed in terms of global wavenumber n. The diabatic generation of EAPE (G_APE) is positive in the control case and is dominated by midlatitude heating errors after day 2. The fixed heating G_APE is negative at all times due to longwave radiative cooling. 36 refs., 9 figs., 1 tab.</p> </li> </ol> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_16");'>16</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li class="active"><span>18</span></li> <li><a href="#" onclick='return showDiv("page_19");'>19</a></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_18 --> <div id="page_19" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <div class="pull-right"> <ul class="pagination"> <li><a href="#" onclick='return showDiv("page_1");'>«</a></li> <li><a href="#" onclick='return showDiv("page_17");'>17</a></li> <li><a href="#" onclick='return showDiv("page_18");'>18</a></li> <li class="active"><span>19</span></li> <li><a href="#" onclick='return showDiv("page_20");'>20</a></li> <li><a href="#" onclick='return showDiv("page_21");'>21</a></li> <li><a href="#" onclick='return showDiv("page_25");'>»</a></li> </ul> </div> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="361"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JGRD..117.5309L','NASAADS'); return false;" 
href="http://adsabs.harvard.edu/abs/2012JGRD..117.5309L"><span>Simultaneous assimilation of AIRS Xco2 and meteorological observations in a carbon climate model with an ensemble Kalman filter</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Liu, Junjie; Fung, Inez; Kalnay, Eugenia; Kang, Ji-Sun; Olsen, Edward T.; Chen, Luke</p> <p>2012-03-01</p> <p>This study is our first step toward the generation of 6 hourly 3-D CO2 fields that can be used to validate CO2 forecast models by combining CO2 observations from multiple sources using ensemble Kalman filtering. We discuss a procedure to assimilate Atmospheric Infrared Sounder (AIRS) column-averaged dry-air mole fraction of CO2 (Xco2) in conjunction with meteorological observations with the coupled Local Ensemble Transform Kalman Filter (LETKF)-Community Atmospheric Model version 3.5. We examine the impact of assimilating AIRS Xco2 observations on CO2 fields by comparing the results from the AIRS-run, which assimilates both AIRS Xco2 and meteorological observations, to those from the meteor-run, which only assimilates meteorological observations. We find that assimilating AIRS Xco2 results in a surface CO2 seasonal cycle and the N-S surface gradient closer to the observations. When taking account of the CO2 uncertainty estimation from the LETKF, the CO2 analysis brackets the observed seasonal cycle. Verification against independent aircraft observations shows that assimilating AIRS Xco2 improves the accuracy of the CO2 vertical profiles by about 0.5-2 ppm depending on location and altitude. The results show that the CO2 analysis ensemble spread at AIRS Xco2 space is between 0.5 and 2 ppm, and the CO2 analysis ensemble spread around the peak level of the averaging kernels is between 1 and 2 ppm. 
This uncertainty estimation is consistent with the magnitude of the CO2 analysis error verified against AIRS Xco2 observations and the independent aircraft CO2 vertical profiles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24483403','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24483403"><span>Transient aging in fractional Brownian and Langevin-equation motion.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kursawe, Jochen; Schulz, Johannes; Metzler, Ralf</p> <p>2013-12-01</p> <p>Stochastic processes driven by stationary fractional Gaussian noise, that is, fractional Brownian motion and fractional Langevin-equation motion, are usually considered to be ergodic in the sense that, after an algebraic relaxation, time and ensemble averages of physical observables coincide. Recently it was demonstrated that fractional Brownian motion and fractional Langevin-equation motion under external confinement are transiently nonergodic (time and ensemble averages behave differently) from the moment when the particle starts to sense the confinement. Here we show that these processes also exhibit transient aging, that is, physical observables such as the time-averaged mean-squared displacement depend on the time lag between the initiation of the system at time t=0 and the start of the measurement at the aging time t_a. In particular, it turns out that for fractional Langevin-equation motion the aging dependence on t_a is different between the cases of free and confined motion. 
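The aging dependence can be probed numerically by simply starting the time average at the aging time instead of at t = 0; a sketch:

```python
import numpy as np

def aged_tamsd(x, lag, t_a):
    # Time-averaged mean-squared displacement when the measurement
    # starts only at the aging time t_a (in sampling steps) after the
    # system was initiated, rather than at t = 0.
    segment = x[t_a:]
    return np.mean((segment[lag:] - segment[:-lag]) ** 2)
```

Comparing this quantity across several t_a values for a simulated confined trajectory would reveal the transient aging the abstract describes; for a stationary process it is independent of t_a.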
We obtain explicit analytical expressions for the aged moments of the particle position as well as the time-averaged mean-squared displacement and present a numerical analysis of this transient aging phenomenon.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/5848410','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/5848410"><span>Stresses and elastic constants of crystalline sodium, from molecular dynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Schiferl, S.K.</p> <p>1985-02-01</p> <p>The stresses and the elastic constants of bcc sodium are calculated by molecular dynamics (MD) for temperatures to T = 340K. The total adiabatic potential of a system of sodium atoms is represented by a pseudopotential model. The resulting expression has two terms: a large, strictly volume-dependent potential, plus a sum over ion pairs of a small, volume-dependent two-body potential. The stresses and the elastic constants are given as strain derivatives of the Helmholtz free energy. The resulting expressions involve canonical ensemble averages (and fluctuation averages) of the position and volume derivatives of the potential. An ensemble correction relates the results to MD equilibrium averages. Evaluation of the potential and its derivatives requires the calculation of integrals with infinite upper limits of integration, and integrand singularities. Methods for calculating these integrals and estimating the effects of integration errors are developed. A method is given for choosing initial conditions that relax quickly to a desired equilibrium state. Statistical methods developed earlier for MD data are extended to evaluate uncertainties in fluctuation averages, and to test for symmetry.
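Elastic-constant estimators of the kind this record describes combine plain canonical-ensemble averages with fluctuation averages of strain derivatives of the potential. A generic sketch of the two ingredients (an illustration of the estimator structure, not Schiferl's actual code):

```python
import numpy as np

def ensemble_average(samples):
    """Canonical-ensemble average approximated by an average over MD
    equilibrium samples (rows = snapshots)."""
    return np.mean(np.asarray(samples, float), axis=0)

def fluctuation_average(a, b):
    """Fluctuation term <A B> - <A><B> of two observables sampled along
    the same MD trajectory; terms of this form enter elastic-constant
    estimators alongside the plain averages."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    return float(np.mean(a * b) - np.mean(a) * np.mean(b))
```

In practice A and B would be strain derivatives of the potential evaluated at each snapshot; the fluctuation term vanishes only if the two observables are uncorrelated.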
45 refs., 10 figs., 4 tabs.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70033898','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70033898"><span>Sediment transport under wave groups: Relative importance between nonlinear waveshape and nonlinear boundary layer streaming</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Yu, X.; Hsu, T.-J.; Hanes, D.M.</p> <p>2010-01-01</p> <p>Sediment transport under nonlinear waves in a predominately sheet flow condition is investigated using a two-phase model. Specifically, we study the relative importance between the nonlinear waveshape and nonlinear boundary layer streaming on cross-shore sand transport. Terms in the governing equations because of the nonlinear boundary layer process are included in this one-dimensional vertical (1DV) model by simplifying the two-dimensional vertical (2DV) ensemble-averaged two-phase equations with the assumption that waves propagate without changing their form. The model is first driven by measured time series of near-bed flow velocity because of a wave group during the SISTEX99 large wave flume experiment and validated with the measured sand concentration in the sheet flow layer. Additional studies are then carried out by including and excluding the nonlinear boundary layer terms. It is found that for the grain diameter (0.24 mm) and high-velocity skewness wave condition considered here, nonlinear waveshape (e.g., skewness) is the dominant mechanism causing net onshore transport and nonlinear boundary layer streaming effect only causes an additional 36% onshore transport. However, for conditions of relatively low-wave skewness and a stronger offshore directed current, nonlinear boundary layer streaming plays a more critical role in determining the net transport.
Numerical experiments further suggest that the nonlinear boundary layer streaming effect becomes increasingly important for finer grains. When the numerical model is driven by measured near-bed flow velocity in a more realistic surf zone setting, model results suggest that nonlinear boundary layer processes may nearly double the onshore transport caused purely by nonlinear waveshape. Copyright 2010 by the American Geophysical Union.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100015673','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100015673"><span>Effect of Reynolds Number and Periodic Unsteady Wake Flow Condition on Boundary Layer Development, Separation, and Intermittency Behavior Along the Suction Surface of a Low Pressure Turbine Blade</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Schobeiri, M. T.; Ozturk, B.; Ashpis, David E.</p> <p>2007-01-01</p> <p>The paper experimentally studies the effects of periodic unsteady wake flow and different Reynolds numbers on boundary layer development, separation and re-attachment along the suction surface of a low pressure turbine blade. The experimental investigations were performed on a large scale, subsonic unsteady turbine cascade research facility at Turbomachinery Performance and Flow Research Laboratory (TPFL) of Texas A&M University. The experiments were carried out at Reynolds numbers of 110,000 and 150,000 (based on suction surface length and exit velocity). One steady and two different unsteady inlet flow conditions with the corresponding passing frequencies, wake velocities, and turbulence intensities were investigated. The reduced frequencies chosen cover the operating range of LP turbines. In addition to the unsteady boundary layer measurements, surface pressure measurements were performed.
Information on the inception, onset, and extent of the separation bubble collected from the pressure measurements was compared with the hot wire measurements. The results, presented in ensemble-averaged and contour-plot forms, help to clarify the physics of the separation phenomenon under periodic unsteady wake flow and different Reynolds numbers. It was found that the suction surface displayed a strong separation bubble for these three different reduced frequencies. For each condition, the locations defining the separation bubble were determined by carefully analyzing and examining the pressure and mean velocity profile data. The location of the boundary layer separation was dependent on the Reynolds number. It is observed that the starting point of the separation bubble and the re-attachment point move further downstream with increasing Reynolds number from 110,000 to 150,000. Also, at Re=150,000 the separation bubble is smaller than at Re=110,000.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APS..SES.JA004Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APS..SES.JA004Y"><span>μ-PIV/Shadowgraphy measurements to elucidate dynamic physicochemical interactions in a multiphase model of pulmonary airway reopening</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yamaguchi, Eiichiro</p> <p>2010-10-01</p> <p>We employ micro-particle image velocimetry (μ-PIV) and shadowgraphy to measure the ensemble-averaged fluid-phase velocity field and interfacial geometry during pulsatile bubble propagation that includes a reverse-flow phase under the influence of exogenous lung surfactant (Infasurf).
Disease states such as respiratory distress syndrome (RDS) are characterized by insufficient pulmonary surfactant concentrations that enhance airway occlusion and collapse. Subsequent airway reopening, driven by mechanical ventilation, may generate damaging stresses that cause ventilator-induced lung injury (VILI). It is hypothesized that reverse flow may enhance surfactant uptake and protect the lung from VILI. The microscale observations conducted in this study will provide us with a significant understanding of dynamic physicochemical interactions that can be manipulated to reduce the magnitude of this damaging mechanical stimulus during airway reopening. Bubble propagation through a liquid-occluded fused glass capillary tube is controlled by linear-motor-driven syringe pumps that provide mean and sinusoidal velocity components. A translating microscope stage mechanically subtracts the mean velocity of the bubble tip in order to hold the progressing bubble tip in the microscope field of view. To optimize the signal-to-noise ratio near the bubble tip, μ-PIV and shadow images are recorded in separate trials and then combined during post-processing with the help of a custom-designed microscale marker. Non-specific binding of Infasurf proteins to the channel wall is controlled by oxidation and chemical treatment of the glass surface. The colloidal stability and dynamic/static surface properties of the Infasurf-PIV particle solution are carefully adjusted based on Langmuir trough measurements.
The Finite Time Lyapunov Exponent (FTLE) is computed to provide a Lagrangian perspective for comparison with our boundary element predictions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1356855-characterization-drop-aerodynamic-fragmentation-bag-sheet-thinning-regimes-crossed-beam-two-view-digital-line-holography','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1356855-characterization-drop-aerodynamic-fragmentation-bag-sheet-thinning-regimes-crossed-beam-two-view-digital-line-holography"><span>Characterization of drop aerodynamic fragmentation in the bag and sheet-thinning regimes by crossed-beam, two-view, digital in-line holography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>Guildenbecher, Daniel R.; Gao, Jian; Chen, Jun; ...</p> <p>2017-04-19</p> <p>When a spherical liquid drop is subjected to a step change in relative gas velocity, aerodynamic forces lead to drop deformation and possible breakup into a number of secondary fragments. In order to investigate this flow, a digital in-line holography (DIH) diagnostic is proposed which enables rapid quantification of spatial statistics with limited experimental repetition. To overcome the high uncertainty in the depth direction experienced in previous applications of DIH, a crossed-beam, two-view configuration is introduced. With appropriate calibration, this diagnostic is shown to provide accurate quantification of fragment sizes, three-dimensional positions and three-component velocities in a large measurement volume. We apply these capabilities in order to investigate the aerodynamic breakup of drops at two non-dimensional Weber numbers, We, corresponding to the bag (We = 14) and sheet-thinning (We = 55) regimes. Ensemble average results show the evolution of fragment size and velocity statistics during the course of breakup.
Our results indicate that mean fragment sizes increase throughout the course of breakup. For the bag breakup case, the evolution of a multi-mode fragment size probability density is observed. This is attributed to separate fragmentation mechanisms for the bag and rim structures. In contrast, for the sheet-thinning case, the fragment size probability density shows only one distinct peak indicating a single fragmentation mechanism. Compared to previous related investigations of this flow, many orders of magnitude more fragments are measured per condition, resulting in a significant improvement in data fidelity. For this reason, this experimental dataset is likely to provide new opportunities for detailed validation of analytic and computational models of this flow.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100015672','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100015672"><span>Effect of Reynolds Number and Periodic Unsteady Wake Flow Condition on Boundary Layer Development, Separation, and Re-attachment along the Suction Surface of a Low Pressure Turbine Blade</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ozturk, B.; Schobeiri, M. T.; Ashpis, David E.</p> <p>2005-01-01</p> <p>The paper experimentally studies the effects of periodic unsteady wake flow and different Reynolds numbers on boundary layer development, separation and re-attachment along the suction surface of a low pressure turbine blade. The experimental investigations were performed on a large scale, subsonic unsteady turbine cascade research facility at Turbomachinery Performance and Flow Research Laboratory (TPFL) of Texas A&M University. The experiments were carried out at Reynolds numbers of 110,000 and 150,000 (based on suction surface length and exit velocity).
One steady and two different unsteady inlet flow conditions with the corresponding passing frequencies, wake velocities, and turbulence intensities were investigated. The reduced frequencies chosen cover the operating range of LP turbines. In addition to the unsteady boundary layer measurements, surface pressure measurements were performed. Information on the inception, onset, and extent of the separation bubble collected from the pressure measurements was compared with the hot wire measurements. The results, presented in ensemble-averaged and contour-plot forms, help to clarify the physics of the separation phenomenon under periodic unsteady wake flow and different Reynolds numbers. It was found that the suction surface displayed a strong separation bubble for these three different reduced frequencies. For each condition, the locations defining the separation bubble were determined by carefully analyzing and examining the pressure and mean velocity profile data. The location of the boundary layer separation was dependent on the Reynolds number. It is observed that the starting point of the separation bubble and the re-attachment point move further downstream with increasing Reynolds number from 110,000 to 150,000. Also, at Re=150,000 the separation bubble is smaller than at Re=110,000.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19920015729','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19920015729"><span>Fluid mechanics experiments in oscillatory flow. Volume 1: Report</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Seume, J.; Friedman, G.; Simon, T. W.</p> <p>1992-01-01</p> <p>Results of a fluid mechanics measurement program in oscillating flow within a circular duct are presented.
The program began with a survey of transition behavior over a range of oscillation frequency and magnitude and continued with a detailed study at a single operating point. Such measurements were made in support of Stirling engine development. Values of three dimensionless parameters, Re(sub max), Re(sub w), and A(sub R), embody the velocity amplitude, frequency of oscillation and mean fluid displacement of the cycle, respectively. Measurements were first made over a range of these parameters which included operating points of all Stirling engines. Next, a case was studied with values of these parameters that are representative of the heat exchanger tubes in the heater section of NASA's Stirling cycle Space Power Research Engine (SPRE). Measurements were taken of the axial and radial components of ensemble-averaged velocity and rms-velocity fluctuation and the dominant Reynolds shear stress, at various radial positions for each of four axial stations. In each run, transition from laminar to turbulent flow, and its reverse, were identified and sufficient data was gathered to propose the transition mechanism. Models of laminar and turbulent boundary layers were used to process the data into wall coordinates and to evaluate skin friction coefficients. Such data aids in validating computational models and is useful in comparing oscillatory flow characteristics to those of fully-developed steady flow. Data were taken with a contoured entry to each end of the test section and with flush square inlets so that the effects of test section inlet geometry on transition and turbulence are documented. Volume 1 contains the text of the report including figures and supporting appendices. 
Volume 2 contains data reduction program listings and tabulated data (including its graphical presentation).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28219010','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28219010"><span>Dynamics of Polarons in Organic Conjugated Polymers with Side Radicals.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, J J; Wei, Z J; Zhang, Y L; Meng, Y; Di, B</p> <p>2017-03-16</p> <p>Based on the one-dimensional tight-binding Su-Schrieffer-Heeger (SSH) model, and using the molecular dynamics method, we discuss the dynamics of electron and hole polarons propagating along a polymer chain, as a function of the distance between side radicals and the magnitude of the transfer integrals between the main chain and the side radicals. We first discuss the average velocities of electron and hole polarons as a function of the distance between side radicals. It is found that the average velocities of the electron polarons remain almost unchanged, while the average velocities of hole polarons decrease significantly when the radical distance is comparable to the polaron width. Second, we have found that the average velocities of electron polarons decrease with increasing transfer integral, but the average velocities of hole polarons increase. 
These results may provide a theoretical basis for understanding carrier transport properties in polymer chains with side radicals.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015IJMPC..2650094Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015IJMPC..2650094Y"><span>An improved car-following model with two preceding cars' average speed</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yu, Shao-Wei; Shi, Zhong-Ke</p> <p>2015-01-01</p> <p>To better describe cooperative car-following behaviors under intelligent transportation circumstances and increase roadway traffic mobility, the data of three successive following cars at a signalized intersection in Jinan, China, were obtained and employed to explore the linkage between two preceding cars' average speed and car-following behaviors. The results indicate that two preceding cars' average velocity has significant effects on the following car's motion.
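A hedged sketch of the kind of extension this abstract describes: a full-velocity-difference-type acceleration in which the velocity-difference term uses the average speed of the two preceding cars. The optimal-velocity function and all parameter values below are illustrative assumptions, not the calibrated model from the paper.

```python
import numpy as np

def optimal_velocity(headway, v_max=15.0, h_c=4.0):
    """Bando-type optimal velocity function (illustrative parameters)."""
    return 0.5 * v_max * (np.tanh(headway - h_c) + np.tanh(h_c))

def fvd_two_predecessor_accel(v, x, kappa=0.41, lam=0.5, n=1):
    """Acceleration of car n in an FVD-type model where the velocity-difference
    term uses the average speed of the two preceding cars (n+1 and n+2);
    v and x are speeds and positions ordered from follower to leader."""
    headway = x[n + 1] - x[n]
    v_avg = 0.5 * (v[n + 1] + v[n + 2])
    return kappa * (optimal_velocity(headway) - v[n]) + lam * (v_avg - v[n])
```

In the homogeneous steady state (equal spacing, every car at the optimal velocity for its headway) the acceleration vanishes, which is the baseline around which stability analyses of such models are carried out.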
An improved car-following model considering two preceding cars' average velocity was then proposed and calibrated on the basis of the full velocity difference model, and numerical simulations were carried out to study how two preceding cars' average speed affects the starting process and the evolution of traffic flow with an initial small disturbance. The results indicate that the improved car-following model can qualitatively describe the impacts of two preceding cars' average velocity on traffic flow, and that taking two preceding cars' average velocity into account in designing the control strategy for a cooperative adaptive cruise control system can improve the stability of traffic flow, suppress the appearance of traffic jams, and increase the capacity of signalized intersections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.P51B2059B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.P51B2059B"><span>NanoRocks: A Long-Term Microgravity Experiment to Study Planet Formation and Planetary Ring Particles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Brisset, J.; Colwell, J. E.; Dove, A.; Maukonen, D.; Brown, N.; Lai, K.; Hoover, B.</p> <p>2015-12-01</p> <p>We report on the results of the NanoRocks experiment on the International Space Station (ISS), which simulates collisions that occur in protoplanetary disks and planetary ring systems. A critical stage of the process of early planet formation is the growth of solid bodies from mm-sized chondrules and aggregates to km-sized planetesimals. To characterize the collision behavior of dust in protoplanetary conditions, experimental data are required, working hand in hand with models and numerical simulations.
In addition, the collisional evolution of planetary rings takes place in the same collisional regime. The objective of the NanoRocks experiment is to study low-energy collisions of mm-sized particles of different shapes and materials. An aluminum tray (~8x8x2cm) divided into eight sample cells holding different types of particles is shaken every 60 s, providing particles with initial velocities of a few cm/s. In September 2014, NanoRocks reached the ISS, and 220 video files, each covering one shaking cycle, have already been downloaded from the Station. The data analysis is focused on the dynamical evolution of the multi-particle systems and on the formation of clusters. We track the particles down to mean relative velocities of less than 1 mm/s, where we observe cluster formation. The mean velocity evolution after each shaking event allows for a determination of the mean coefficient of restitution for each particle set. These values can be used as input into protoplanetary disk and planetary ring simulations. In addition, the cluster analysis allows for a determination of the mean final cluster size and the average particle velocity at the onset of clustering. The size and shape of these particle clumps is crucial to understanding the first stages of planet formation inside protoplanetary disks as well as many features of Saturn's rings.
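The mean coefficient of restitution inferred from the decay of the mean relative velocity between sampling times can be sketched as follows. This is an illustration, not the NanoRocks analysis pipeline; it assumes, as a simplification, a fixed number of collisions per sampling interval.

```python
import numpy as np

def restitution_from_velocity_decay(v_mean, collisions_per_step=1):
    """Estimate a mean coefficient of restitution e from successive
    ensemble-mean relative velocities, assuming v_{k+1} = e * v_k
    per collision and a fixed number of collisions per sample."""
    v = np.asarray(v_mean, float)
    ratios = (v[1:] / v[:-1]) ** (1.0 / collisions_per_step)
    return float(ratios.mean())
```

With a geometric velocity decay the estimator recovers the restitution coefficient exactly; with noisy data it returns the mean per-collision damping.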
We report on the results from the ensemble of these collision experiments and discuss applications to planetesimal formation and planetary ring evolution.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018TCry...12.1715K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018TCry...12.1715K"><span>Deriving micro- to macro-scale seismic velocities from ice-core c axis orientations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kerch, Johanna; Diez, Anja; Weikusat, Ilka; Eisen, Olaf</p> <p>2018-05-01</p> <p>One of the great challenges in glaciology is the ability to estimate the bulk ice anisotropy in ice sheets and glaciers, which is needed to improve our understanding of ice-sheet dynamics. We investigate the effect of crystal anisotropy on seismic velocities in glacier ice and revisit the framework which is based on fabric eigenvalues to derive approximate seismic velocities by exploiting the assumed symmetry. In contrast to previous studies, we calculate the seismic velocities using the exact c axis angles describing the orientations of the crystal ensemble in an ice-core sample. We apply this approach to fabric data sets from an alpine and a polar ice core. Our results provide a quantitative evaluation of the earlier approximative eigenvalue framework. For near-vertical incidence our results differ by up to 135 m s-1 for P-wave and 200 m s-1 for S-wave velocity compared to the earlier framework (estimated 1 % difference in average P-wave velocity at the bedrock for the short alpine ice core). We quantify the influence of shear-wave splitting at the bedrock as 45 m s-1 for the alpine ice core and 59 m s-1 for the polar ice core. At non-vertical incidence we obtain differences of up to 185 m s-1 for P-wave and 280 m s-1 for S-wave velocities. 
Additionally, our findings highlight the variation in seismic velocity at non-vertical incidence as a function of the horizontal azimuth of the seismic plane, which can be significant for non-symmetric orientation distributions and results in a strong azimuth-dependent shear-wave splitting of max. 281 m s-1 at some depths. For a given incidence angle and depth we estimated changes in phase velocity of almost 200 m s-1 for P wave and more than 200 m s-1 for S wave and shear-wave splitting under a rotating seismic plane. We assess for the first time the change in seismic anisotropy that can be expected on a short spatial (vertical) scale in a glacier due to strong variability in crystal-orientation fabric (±50 m s-1 per 10 cm). Our investigation of seismic anisotropy based on ice-core data contributes to advancing the interpretation of seismic data, with respect to extracting bulk information about crystal anisotropy, without having to drill an ice core and with special regard to future applications employing ultrasonic sounding.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1223012','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=1223012"><span>Cell population modelling of yeast glycolytic oscillations.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Henson, Michael A; Müller, Dirk; Reuss, Matthias</p> <p>2002-01-01</p> <p>We investigated a cell-population modelling technique in which the population is constructed from an ensemble of individual cell models. The average value or the number distribution of any intracellular property captured by the individual cell model can be calculated by simulation of a sufficient number of individual cells. 
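The ensemble-of-cells idea just described is easy to sketch: simulate many individual cell models with random cell-to-cell variation and average the property of interest over the ensemble. The damped-oscillator "cell" below is a toy stand-in for the glycolysis model, chosen only to show the population-averaging structure; all names and parameters are illustrative.

```python
import numpy as np

def simulate_population(n_cells=1000, n_steps=200, dt=0.05, rng=0):
    """Toy cell-population ensemble: each 'cell' is an oscillator for one
    intracellular property, with random cell-to-cell variation in frequency
    (structure) and phase (state). Returns the population-average trajectory."""
    rng = np.random.default_rng(rng)
    omega = rng.normal(1.0, 0.1, n_cells)          # structural variability
    phase = rng.uniform(0.0, 2 * np.pi, n_cells)   # state variability
    t = np.arange(n_steps) * dt
    x = np.cos(omega[:, None] * t[None, :] + phase[:, None])  # (cells, time)
    return x.mean(axis=0)
```

With random phases the unsynchronized population average stays near zero (amplitude ~ 1/sqrt(n_cells)); modeling the synchronizing action of an excreted metabolite, as in the paper, would couple the cells and let a macroscopic oscillation emerge.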
The proposed method is applied to a simple model of yeast glycolytic oscillations where synchronization of the cell population is mediated by the action of an excreted metabolite. We show that smooth one-dimensional distributions can be obtained with ensembles comprising 1000 individual cells. Random variations in the state and/or structure of individual cells are shown to produce complex dynamic behaviours which cannot be adequately captured by small ensembles. PMID:12206713</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EGUGA..14.9945C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EGUGA..14.9945C"><span>Rupture process of the 2009 L'Aquila, central Italy, earthquake, from the separate and joint inversion of Strong Motion, GPS and DInSAR data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cirella, A.; Piatanesi, A.; Tinti, E.; Chini, M.; Cocco, M.</p> <p>2012-04-01</p> <p>In this study, we investigate the rupture history of the April 6th 2009 (Mw 6.1) L'Aquila normal faulting earthquake by using a nonlinear inversion of strong motion, GPS and DInSAR data. We use a two-stage non-linear inversion technique. During the first stage, an algorithm based on the heat-bath simulated annealing generates an ensemble of models that efficiently sample the good data-fitting regions of parameter space. In the second stage the algorithm performs a statistical analysis of the ensemble, providing the best-fitting model, the average model, the associated standard deviation and coefficient of variation. This technique, rather than simply looking at the best model, extracts the most stable features of the earthquake rupture that are consistent with the data and gives an estimate of the variability of each model parameter.
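In its simplest reading, the second-stage statistics described above reduce to summary statistics over the sampled model ensemble. A minimal sketch (ignoring any misfit weighting the authors may apply in their actual analysis):

```python
import numpy as np

def ensemble_statistics(models, misfits):
    """Summarize an ensemble of inversion models (rows = models,
    columns = model parameters): best-fitting model, ensemble mean,
    standard deviation, and coefficient of variation per parameter."""
    models = np.asarray(models, float)
    misfits = np.asarray(misfits, float)
    best = models[np.argmin(misfits)]          # model with the lowest misfit
    mean = models.mean(axis=0)                 # "average model"
    std = models.std(axis=0)                   # parameter-wise spread
    cv = std / np.abs(mean)                    # coefficient of variation
    return best, mean, std, cv
```

The point made in the abstract is visible here: the mean and the coefficient of variation characterize which parameters are stable across the well-fitting ensemble, information the single best model cannot provide.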
The application to the 2009 L'Aquila main-shock shows that both the separate and joint inversion solutions reveal a complex rupture process and a heterogeneous slip distribution. Slip is concentrated in two main asperities: a smaller shallow patch of slip located up-dip from the hypocenter and a second deeper and larger asperity located southeastward along strike direction. The key feature of the source process emerging from our inverted models concerns the rupture history, which is characterized by two distinct stages. The first stage begins with rupture initiation and with a modest moment release lasting nearly 0.9 seconds, which is followed by a sharp increase in slip velocity and rupture speed located 2 km up-dip from the nucleation. During this first stage the rupture front propagated up-dip from the hypocenter at relatively high (˜ 4.0 km/s), but still sub-shear, rupture velocity. The second stage starts nearly 2 seconds after nucleation and it is characterized by the along strike rupture propagation. The largest and deeper asperity fails during this stage of the rupture process. The rupture velocity is larger in the up-dip than in the along-strike direction. The up-dip and along-strike rupture propagation are separated in time and associated with a Mode II and a Mode III crack, respectively. 
Our results show that the 2009 L'Aquila earthquake featured a very complex rupture, with strong spatial and temporal heterogeneities suggesting a strong frictional and/or structural control of the rupture process.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012JMagR.216...88K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012JMagR.216...88K"><span>Velocity of mist droplets and suspending gas imaged separately</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kuethe, Dean O.; McBride, Amber; Altobelli, Stephen A.</p> <p>2012-03-01</p> <p>Nuclear Magnetic Resonance Images (MRIs) of the velocity of water droplets and velocity of the suspending gas, hexafluoroethane, are presented for a vertical and horizontal mist pipe flow. In the vertical flow, the upward velocity of the droplets is clearly slower than the upward velocity of the gas. The average droplet size calculated from the average falling velocity in the upward flow is larger than the average droplet size of mist drawn from the top of the pipe measured with a multi-stage aerosol impactor. Vertical flow concentrates larger particles because they have a longer transit time through the pipe. In the horizontal flow there is a gravity-driven circulation with high-velocity mist in the lower portion of the pipe and low-velocity gas in the upper portion. MRI has the advantages that it can image both phases and that it is unperturbed by optical opacity. A drawback is that the droplet phase of mist is difficult to image because of low average spin density and because the signal from water coalesced on the pipe walls is high. 
To our knowledge these are the first NMR images of mist.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930044337&hterms=rust&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Drust','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930044337&hterms=rust&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D30%26Ntt%3Drust"><span>Two-dimensional velocity, optical risetime, and peak current estimates for natural positive lightning return strokes</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Mach, Douglas M.; Rust, W. D.</p> <p>1993-01-01</p> <p>Velocities, optical risetimes, and transmission line model peak currents for seven natural positive return strokes are reported. The average 2D positive return stroke velocity for channel segments of less than 500 m in length starting near the base of the channel is 0.8 +/- 0.3 x 10 exp 8 m/s, which is slower than the present corresponding average velocity for natural negative first return strokes of 1.7 +/- 0.7 x 10 exp 8 m/s. It is inferred that positive stroke peak currents in the literature, which assume the same velocity as negative strokes, are low by a factor of 2. The average 2D positive return stroke velocity for channel segments of greater than 500 m starting near the base of the channel is 0.9 +/- 0.4 x 10 exp 8 m/s. The corresponding average velocity for the present natural negative first strokes is 1.2 +/- 0.6 x 10 exp 8 m/s.
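The factor-of-2 inference quoted above follows directly from the standard transmission-line model, in which the far-field peak electric field is proportional to the product of peak current and return-stroke speed, so the current inferred from a measured field scales inversely with the assumed speed. A sketch of that relation (illustrative inputs, not the paper's data):

```python
import math

EPS0 = 8.854e-12   # vacuum permittivity, F/m
C = 2.998e8        # speed of light, m/s

def tlm_peak_current(e_peak, distance, v):
    """Transmission-line-model peak current inferred from the peak radiated
    E-field at range D (far-field approximation):
    I = 2*pi*eps0*c^2 * D * E_peak / v."""
    return 2.0 * math.pi * EPS0 * C**2 * distance * e_peak / v
```

Since the measured positive-stroke speed is roughly half the speed traditionally assumed from negative strokes, the inferred positive peak currents roughly double, as the abstract states.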
No significant velocity change with height is found for positive return strokes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28708399','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28708399"><span>Development and Validation of a Computational Model Ensemble for the Early Detection of BCRP/ABCG2 Substrates during the Drug Design Stage.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Gantner, Melisa E; Peroni, Roxana N; Morales, Juan F; Villalba, María L; Ruiz, María E; Talevi, Alan</p> <p>2017-08-28</p> <p>Breast Cancer Resistance Protein (BCRP) is an ATP-dependent efflux transporter linked to the multidrug resistance phenomenon in many diseases such as epilepsy and cancer and a potential source of drug interactions. For these reasons, the early identification of substrates and nonsubstrates of this transporter during the drug discovery stage is of great interest. We have developed a computational nonlinear model ensemble based on conformation-independent molecular descriptors using a combined strategy of genetic algorithms, J48 decision tree classifiers, and data fusion. The best model ensemble consists of averaging the rankings of the 12 decision trees that showed the best performance on the training set; it also demonstrated good performance on the test set. It was experimentally validated using the ex vivo everted rat intestinal sac model. Five anticonvulsant drugs classified as nonsubstrates for BCRP by the model ensemble were experimentally evaluated, and none of them proved to be a BCRP substrate under the experimental conditions used, thus confirming the predictive ability of the model ensemble. 
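Rank averaging of the kind used for the 12-tree ensemble above can be sketched as follows. The items and scores are hypothetical, and the published models are J48 decision trees rather than the plain score dictionaries used here:

```python
def average_rank_fusion(score_lists):
    """Fuse per-model scores by averaging each item's rank across models.

    score_lists: list of dicts {item: score}, higher score = stronger
    substrate call. Returns items sorted by mean rank (best first).
    Hypothetical data-fusion sketch of rank averaging.
    """
    items = list(score_lists[0])
    mean_rank = {it: 0.0 for it in items}
    for scores in score_lists:
        ranked = sorted(items, key=lambda it: -scores[it])
        for pos, it in enumerate(ranked, start=1):
            mean_rank[it] += pos / len(score_lists)
    return sorted(items, key=lambda it: mean_rank[it])

# Three hypothetical classifiers scoring three compounds:
models = [{"a": 0.9, "b": 0.4, "c": 0.1},
          {"a": 0.6, "b": 0.8, "c": 0.2},
          {"a": 0.7, "b": 0.3, "c": 0.5}]
consensus = average_rank_fusion(models)  # 'a' ranks best on average
```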
The model ensemble reported here is a potentially valuable tool to be used as an in silico ADME filter in computer-aided drug discovery campaigns intended to overcome BCRP-mediated multidrug resistance issues and to prevent drug-drug interactions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.S51B2677V','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.S51B2677V"><span>Sequential Data Assimilation for Seismicity: a Proof of Concept</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>van Dinther, Y.; Fichtner, A.; Kuensch, H. R.</p> <p>2015-12-01</p> <p>Our physical understanding and probabilistic forecasting ability of earthquakes is significantly hampered by limited indications of the state of stress and strength on faults and their governing parameters. Using the sequential data assimilation framework developed in meteorology and oceanography (e.g., Evensen, JGR, 1994) and a seismic cycle forward model based on Navier-Stokes Partial Differential Equations (van Dinther et al., JGR, 2013), we show that such information with its uncertainties is within reach, at least for laboratory setups. We aim to provide the first, thorough proof of concept for seismicity-related PDE applications via a perfect model test of seismic cycles in a simplified wedge-like subduction setup. By evaluating the performance with respect to known numerical input and output, we aim to answer whether there is any probabilistic forecast value for this laboratory-like setup, which and how many parameters can be constrained, and how much data in both space and time would be needed to do so. 
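A single-observation ensemble Kalman update of the kind used in such sequential data assimilation (Evensen, 1994) can be sketched as follows. This is a textbook perturbed-observation formulation, not the authors' implementation; the state layout and observation operator are placeholders:

```python
import random

def enkf_scalar_update(states, obs_fn, y_obs, obs_var):
    """One ensemble Kalman filter update with a single scalar observation.

    states:  list of state vectors (lists), e.g. [stress, strength, ...]
    obs_fn:  maps a state vector to its predicted scalar observation
             (e.g. a surface velocity)
    y_obs, obs_var: observed value and its error variance.
    Returns the updated ensemble (perturbed-observations EnKF sketch).
    """
    n = len(states)
    preds = [obs_fn(s) for s in states]
    pbar = sum(preds) / n
    var_p = sum((p - pbar) ** 2 for p in preds) / (n - 1)
    dim = len(states[0])
    means = [sum(s[i] for s in states) / n for i in range(dim)]
    # Cross-covariance of each state component with the predicted observation:
    # this is what lets a surface observation update unobserved fault state.
    cov = [sum((s[i] - means[i]) * (p - pbar)
               for s, p in zip(states, preds)) / (n - 1)
           for i in range(dim)]
    gain = [c / (var_p + obs_var) for c in cov]   # Kalman gain per component
    updated = []
    for s in states:
        y_pert = y_obs + random.gauss(0.0, obs_var ** 0.5)  # perturbed obs
        innov = y_pert - obs_fn(s)
        updated.append([s[i] + gain[i] * innov for i in range(dim)])
    return updated
```

Because the gain is built from the ensemble cross-covariance, components never observed directly (stress, strength) are still updated whenever they co-vary with the observed velocity across the ensemble.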
Thus far our implementation of an Ensemble Kalman Filter demonstrated that probabilistic estimates of both the state of stress and strength on a megathrust fault can be obtained and utilized even when assimilating surface velocity data at a single point in time and space. An ensemble-based error covariance matrix containing velocities, stresses and pressure links surface velocity observations to fault stresses and strengths well enough to update fault coupling accordingly. Depending on what synthetic data show, coseismic events can then be triggered or inhibited.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3992658','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3992658"><span>From a structural average to the conformational ensemble of a DNA bulge</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Shi, Xuesong; Beauchamp, Kyle A.; Harbury, Pehr B.; Herschlag, Daniel</p> <p>2014-01-01</p> <p>Direct experimental measurements of conformational ensembles are critical for understanding macromolecular function, but traditional biophysical methods do not directly report the solution ensemble of a macromolecule. Small-angle X-ray scattering interferometry has the potential to overcome this limitation by providing the instantaneous distance distribution between pairs of gold-nanocrystal probes conjugated to a macromolecule in solution. Our X-ray interferometry experiments reveal an increasing bend angle of DNA duplexes with bulges of one, three, and five adenosine residues, consistent with previous FRET measurements, and further reveal an increasingly broad conformational ensemble with increasing bulge length. 
The distance distributions for the AAA bulge duplex (3A-DNA) with six different Au-Au pairs provide strong evidence against a simple elastic model in which fluctuations occur about a single conformational state. Instead, the measured distance distributions suggest a 3A-DNA ensemble with multiple conformational states predominantly across a region of conformational space with bend angles between 24 and 85 degrees and characteristic bend directions and helical twists and displacements. Additional X-ray interferometry experiments revealed perturbations to the ensemble from changes in ionic conditions and the bulge sequence, effects that can be understood in terms of electrostatic and stacking contributions to the ensemble and that demonstrate the sensitivity of X-ray interferometry. Combining X-ray interferometry ensemble data with molecular dynamics simulations gave atomic-level models of representative conformational states and of the molecular interactions that may shape the ensemble, and fluorescence measurements with 2-aminopurine-substituted 3A-DNA provided initial tests of these atomistic models. More generally, X-ray interferometry will provide powerful benchmarks for testing and developing computational methods. 
PMID:24706812</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_19 --> <div id="page_20" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="381"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24667482','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24667482"><span>NIMEFI: gene regulatory network inference using multiple ensemble feature importance algorithms.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan</p> <p>2014-01-01</p> <p>One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. 
Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene and a high feature importance is considered putative evidence of a regulatory link existing between both genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that it outperforms all individual methods in general, although on a specific network a single method can perform better. 
An implementation of NIMEFI has been made publicly available.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28469742','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28469742"><span>Muscle Force-Velocity Relationships Observed in Four Different Functional Tests.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zivkovic, Milena Z; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan</p> <p>2017-02-01</p> <p>The aims of the present study were to investigate the shape and strength of the force-velocity relationships observed in different functional movement tests and explore the parameters depicting force, velocity and power producing capacities of the tested muscles. Twelve subjects were tested on maximum performance in vertical jumps, cycling, bench press throws, and bench pulls performed against different loads. Thereafter, both the averaged and maximum force and velocity variables recorded from individual trials were used for force-velocity relationship modeling. The observed individual force-velocity relationships were exceptionally strong (median correlation coefficients ranged from r = 0.930 to r = 0.995) and approximately linear independently of the test and variable type. Most of the relationship parameters observed from the averaged and maximum force and velocity variable types were strongly related in all tests (r = 0.789-0.991), except for those in vertical jumps (r = 0.485-0.930). However, the generalizability of the force-velocity relationship parameters depicting maximum force, velocity and power of the tested muscles across different tests was inconsistent and on average moderate. 
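A linear force-velocity model of the kind fitted in the study above can be sketched with an ordinary least-squares line; F0 (force intercept), v0 (velocity intercept) and the implied maximum power F0·v0/4 are the standard parameters of a linear F-V relationship. The data points below are synthetic:

```python
def fv_parameters(forces, velocities):
    """Least-squares line F = F0 - a*v through force-velocity data.

    Returns (F0, v0, Pmax): force intercept, velocity intercept, and the
    maximum power F0*v0/4 implied by a linear F-V relationship.
    Illustrative sketch with synthetic data, not the study's pipeline.
    """
    n = len(forces)
    vbar = sum(velocities) / n
    fbar = sum(forces) / n
    sxx = sum((v - vbar) ** 2 for v in velocities)
    sxy = sum((v - vbar) * (f - fbar) for v, f in zip(velocities, forces))
    slope = sxy / sxx            # negative for a descending F-V line
    f0 = fbar - slope * vbar     # force intercept at v = 0
    v0 = -f0 / slope             # velocity intercept at F = 0
    return f0, v0, f0 * v0 / 4.0

# Perfectly linear toy data following F = 1000 - 500*v:
f0, v0, pmax = fv_parameters([1000.0, 750.0, 500.0, 250.0],
                             [0.0, 0.5, 1.0, 1.5])
```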
We concluded that the linear force-velocity relationship model based on either maximum or averaged force-velocity data could provide the outcomes depicting force, velocity and power generating capacity of the tested muscles, although such outcomes can only be partially generalized across different muscles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5384051','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5384051"><span>Muscle Force-Velocity Relationships Observed in Four Different Functional Tests</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Zivkovic, Milena Z.; Djuric, Sasa; Cuk, Ivan; Suzovic, Dejan; Jaric, Slobodan</p> <p>2017-01-01</p> <p>Abstract The aims of the present study were to investigate the shape and strength of the force-velocity relationships observed in different functional movement tests and explore the parameters depicting force, velocity and power producing capacities of the tested muscles. Twelve subjects were tested on maximum performance in vertical jumps, cycling, bench press throws, and bench pulls performed against different loads. Thereafter, both the averaged and maximum force and velocity variables recorded from individual trials were used for force–velocity relationship modeling. The observed individual force-velocity relationships were exceptionally strong (median correlation coefficients ranged from r = 0.930 to r = 0.995) and approximately linear independently of the test and variable type. Most of the relationship parameters observed from the averaged and maximum force and velocity variable types were strongly related in all tests (r = 0.789-0.991), except for those in vertical jumps (r = 0.485-0.930). 
However, the generalizability of the force-velocity relationship parameters depicting maximum force, velocity and power of the tested muscles across different tests was inconsistent and on average moderate. We concluded that the linear force-velocity relationship model based on either maximum or averaged force-velocity data could provide the outcomes depicting force, velocity and power generating capacity of the tested muscles, although such outcomes can only be partially generalized across different muscles. PMID:28469742</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=20110013169&hterms=Coding&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DCoding','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=20110013169&hterms=Coding&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D10%26Ntt%3DCoding"><span>The Limits of Coding with Joint Constraints on Detected and Undetected Error Rates</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dolinar, Sam; Andrews, Kenneth; Pollara, Fabrizio; Divsalar, Dariush</p> <p>2008-01-01</p> <p>We develop a remarkably tight upper bound on the performance of a parameterized family of bounded angle maximum-likelihood (BA-ML) incomplete decoders. The new bound for this class of incomplete decoders is calculated from the code's weight enumerator, and is an extension of Poltyrev-type bounds developed for complete ML decoders. This bound can also be applied to bound the average performance of random code ensembles in terms of an ensemble average weight enumerator. We also formulate conditions defining a parameterized family of optimal incomplete decoders, defined to minimize both the total codeword error probability and the undetected error probability for any fixed capability of the decoder to detect errors. 
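One ingredient of such weight-enumerator calculations is the classical undetected-error probability of a code used for error detection over a binary symmetric channel. The sketch below shows that standard formula, not the Poltyrev-type BA-ML bound of the abstract; the Hamming(7,4) enumerator is a textbook example:

```python
def undetected_error_prob(weight_enum, p):
    """Probability that a BSC(p) error pattern equals a nonzero codeword,
    and therefore passes undetected when the code is used for detection.

    weight_enum: dict {weight w: number of codewords A_w}, zero codeword
    excluded; the block length n is taken as the maximum listed weight.
    """
    n = max(weight_enum)
    return sum(a * p**w * (1 - p) ** (n - w) for w, a in weight_enum.items())

# Hamming(7,4): A(x) = 1 + 7x^3 + 7x^4 + x^7 (nonzero-weight terms only).
hamming74 = {3: 7, 4: 7, 7: 1}
p_ud = undetected_error_prob(hamming74, 0.01)   # dominated by the 7 weight-3 words
```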
We illustrate the gap between optimal and BA-ML incomplete decoding via simulation of a small code.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19780063736&hterms=self+expansion+theory&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dself%2Bexpansion%2Btheory','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19780063736&hterms=self+expansion+theory&qs=N%3D0%26Ntk%3DAll%26Ntx%3Dmode%2Bmatchall%26Ntt%3Dself%2Bexpansion%2Btheory"><span>A strictly Markovian expansion for plasma turbulence theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Jones, F. C.</p> <p>1978-01-01</p> <p>The collision operator that appears in the equation of motion for a particle distribution function that has been averaged over an ensemble of random Hamiltonians is non-Markovian. It is non-Markovian in that it involves a propagated integral over the past history of the ensemble averaged distribution function. All formal expansions of this nonlinear collision operator to date preserve this non-Markovian character term by term yielding an integro-differential equation that must be converted to a diffusion equation by an additional approximation. In this note we derive an expansion of the collision operator that is strictly Markovian to any finite order and yields a diffusion equation as the lowest non-trivial order. 
The validity of this expansion is seen to be the same as that of the standard quasi-linear expansion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1997NuPhS..53..395M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1997NuPhS..53..395M"><span>Non-Perturbative Renormalization of the Lattice Heavy Quark Classical Velocity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Mandula, Jeffrey E.; Ogilvie, Michael C.</p> <p>1997-02-01</p> <p>We discuss the renormalization of the lattice formulation of the Heavy Quark Effective Theory (LHQET). In addition to wave function and composite operator renormalizations, on the lattice the classical velocity is also renormalized. The origin of this renormalization is the reduction of Lorentz (or O(4)) invariance to (hyper)cubic invariance. We present results of a new, direct lattice simulation of this finite renormalization, and compare the results to the perturbative (one loop) result. The simulation results are obtained with the use of a variationally optimized heavy-light meson operator, using an ensemble of lattices provided by the Fermilab ACP-MAPS collaboration.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018GeoRL..45.4429Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018GeoRL..45.4429Y"><span>Medium-Range Forecast Skill for Extraordinary Arctic Cyclones in Summer of 2008-2016</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yamagami, Akio; Matsueda, Mio; Tanaka, Hiroshi L.</p> <p>2018-05-01</p> <p>Arctic cyclones (ACs) are a severe atmospheric phenomenon that affects the Arctic environment. 
This study assesses the forecast skill of five leading operational medium-range ensemble forecasts for 10 extraordinary ACs that occurred in summer during 2008-2016. Average existence probability of the predicted ACs was >0.9 at lead times of ≤3.5 days. Average central position error of the predicted ACs was less than half of the mean radius of the 10 ACs (469.1 km) at lead times of 2.5-4.5 days. Average central pressure error of the predicted ACs was 5.5-10.7 hPa at such lead times. Therefore, the operational ensemble prediction systems generally predict the position of ACs within 469.1 km 2.5-4.5 days before they mature. The forecast skill for the extraordinary ACs is lower than that for midlatitude cyclones in the Northern Hemisphere but similar to that in the Southern Hemisphere.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvE..97a2502K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvE..97a2502K"><span>Shear-stress fluctuations and relaxation in polymer glasses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kriuchevskyi, I.; Wittmer, J. P.; Meyer, H.; Benzerara, O.; Baschnagel, J.</p> <p>2018-01-01</p> <p>We investigate by means of molecular dynamics simulation a coarse-grained polymer glass model focusing on (quasistatic and dynamical) shear-stress fluctuations as a function of temperature T and sampling time Δ t . The linear response is characterized using (ensemble-averaged) expectation values of the contributions (time averaged for each shear plane) to the stress-fluctuation relation μsf for the shear modulus and the shear-stress relaxation modulus G (t ) . Using 100 independent configurations, we pay attention to the respective standard deviations. 
While the ensemble-averaged modulus μsf(T ) decreases continuously with increasing T for all Δ t sampled, its standard deviation δ μsf(T ) is nonmonotonic with a striking peak at the glass transition. The question of whether the shear modulus is continuous or has a jump singularity at the glass transition is thus ill posed. Confirming the effective time-translational invariance of our systems, the Δ t dependence of μsf and related quantities can be understood using a weighted integral over G (t ).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017JPhCS.894a2077P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017JPhCS.894a2077P"><span>Detonation velocity in poorly mixed gas mixtures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Prokhorov, E. S.</p> <p>2017-10-01</p> <p>A technique is proposed for computing the average velocity of a plane detonation wave front in a poorly mixed mixture of gaseous hydrocarbon fuel and oxygen. It is assumed that, along the direction of detonation propagation, the chemical composition of the mixture exhibits periodic fluctuations caused, for example, by layered stratification of the gas charge. The technique is based on an analysis of the functional dependence of the ideal (Chapman-Jouguet) detonation velocity on the mole fraction of the fuel. It is shown that the average detonation velocity can be significantly (by more than 10%) lower than the ideal detonation velocity. 
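Part of such a velocity deficit can be seen from kinematics alone: a front crossing equal-thickness layers at locally ideal velocities propagates at the harmonic mean of those velocities, which is always below their arithmetic mean. This is a simplified illustration under assumed layer velocities, not the paper's analysis, which rests on the nonlinear dependence of the Chapman-Jouguet velocity on fuel fraction:

```python
def path_average_velocity(layer_velocities):
    """Average front velocity across equal-thickness layers: total distance
    over total transit time, i.e. the harmonic mean of the local velocities.
    Simplified kinematic sketch; the result never exceeds the arithmetic mean.
    """
    n = len(layer_velocities)
    return n / sum(1.0 / v for v in layer_velocities)

# Hypothetical rich/lean layering with local CJ velocities of 2800 and 2000 m/s:
v_avg = path_average_velocity([2800.0, 2000.0])   # harmonic mean
v_arith = (2800.0 + 2000.0) / 2                   # arithmetic mean, 2400 m/s
```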
A dependence is established that makes it possible to estimate the degree of mixing of the gas mixture from measurements of the average detonation velocity.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19593731','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19593731"><span>Changes in blood velocity following microvascular free tissue transfer.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Hanasono, Matthew M; Ogunleye, Olubunmi; Yang, Justin S; Hartley, Craig J; Miller, Michael J</p> <p>2009-09-01</p> <p>Understanding how pedicle blood velocities change after free tissue transfer may enable microvascular surgeons to predict when thrombosis is most likely to occur. A 20-MHz Doppler probe was used to measure arterial and venous blood velocities prior to pedicle division and 20 minutes after anastomosis in 32 microvascular free flaps. An implantable Doppler probe was then used to measure arterial and venous blood velocities daily for 5 days. Peak arterial blood velocity averaged 30.6 cm/s prior to pedicle division and increased to 36.5 cm/s 20 minutes after anastomosis (P < 0.05). Peak venous blood velocity averaged 7.6 cm/s prior to pedicle division and increased to 12.4 cm/s 20 minutes after anastomosis (P < 0.05). Peak arterial blood velocities averaged 34.0, 37.7, 43.8, 37.9, and 37.6 cm/s on postoperative days (PODs) 1 through 5, respectively. Peak venous blood velocities averaged 11.9, 14.5, 18.2, 16.8, and 17.7 cm/s on PODs 1 through 5, respectively. The peak arterial blood velocity on POD 3, and peak venous blood velocities on PODs 2, 3, and 5, were significantly higher than 20 minutes after anastomosis (P < 0.05). 
Arterial and venous blood velocities increase for the first 3 postoperative days, potentially contributing to the declining risk for pedicle thrombosis during this time period.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018HESS...22.2007D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018HESS...22.2007D"><span>Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Dib, Alain; Kavvas, M. Levent</p> <p>2018-03-01</p> <p>The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. 
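The Monte Carlo side of such a comparison can be sketched with a deliberately simplified stand-in: sampling an uncertain Manning roughness coefficient and accumulating ensemble statistics of the resulting steady-flow velocity. The study itself propagates uncertainty through the full unsteady Saint-Venant equations; the distribution bounds and channel geometry below are assumed:

```python
import random

def manning_velocity(n_rough, radius=2.0, slope=0.001):
    """Steady uniform-flow velocity from Manning's equation (SI units):
    V = R^(2/3) * S^(1/2) / n."""
    return (radius ** (2.0 / 3.0)) * (slope ** 0.5) / n_rough

def mc_ensemble_stats(n_samples=5000, seed=42):
    """Monte Carlo ensemble mean and variance of velocity under an uncertain
    roughness coefficient n ~ U(0.02, 0.04). Illustrative stand-in for the
    unsteady Saint-Venant MC runs described in the abstract."""
    rng = random.Random(seed)
    vels = [manning_velocity(rng.uniform(0.02, 0.04)) for _ in range(n_samples)]
    mean = sum(vels) / n_samples
    var = sum((v - mean) ** 2 for v in vels) / (n_samples - 1)
    return mean, var

mean_v, var_v = mc_ensemble_stats()
```

This is the many-simulation baseline that an evolutionary-PDF method such as the FPE approach replaces with a single deterministic solve.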
Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013EGUGA..1513090D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013EGUGA..1513090D"><span>Interactive vs. Non-Interactive Ensembles for Weather Prediction and Climate Projection</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duane, Gregory</p> <p>2013-04-01</p> <p>If the members of an ensemble of different models are allowed to interact with one another in run time, predictive skill can be improved as compared to that of any individual model or any average of indvidual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting ``supermodel" synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model "observation error") as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training of the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. 
Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections. We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, as arising from sub-gridscale parameterizations, that affect overall model behavior. Otherwise the usual ex post facto averaging will probably suffice. Previous results from an ENSO-prediction supermodel [Kirtman et al.] 
are re-examined in light of the hypothesis about the importance of qualitative inter-model differences.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2901495','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=2901495"><span>Dynamic clustering threshold reduces conformer ensemble size while maintaining a biologically relevant ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Yongye, Austin B.; Bender, Andreas</p> <p>2010-01-01</p> <p>Representing the 3D structures of ligands in virtual screenings via multi-conformer ensembles can be computationally intensive, especially for compounds with a large number of rotatable bonds. Thus, reducing the size of multi-conformer databases and the number of query conformers, while simultaneously reproducing the bioactive conformer with good accuracy, is of crucial interest. While clustering and RMSD filtering methods are employed in existing conformer generators, the novelty of this work is the inclusion of a clustering scheme (NMRCLUST) that does not require a user-defined cut-off value. This algorithm simultaneously optimizes the number and the average spread of the clusters. Here we describe and test four inter-dependent approaches for selecting computer-generated conformers, namely: OMEGA, NMRCLUST, RMS filtering and averaged-RMS filtering. The bioactive conformations of 65 selected ligands were extracted from the corresponding protein:ligand complexes from the Protein Data Bank, including eight ligands that adopted dissimilar bound conformations within different receptors. We show that NMRCLUST can be employed to further filter OMEGA-generated conformers while maintaining biological relevance of the ensemble. 
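A greedy RMS-filtering pass of the kind compared in the conformer-selection study can be sketched as follows. Conformers are represented here as pre-aligned flat coordinate lists, a simplification of real conformer handling (no superposition step):

```python
import math

def rms_filter(conformers, threshold):
    """Greedy RMSD filter: keep a conformer only if its RMSD to every
    previously kept conformer exceeds the threshold.

    conformers: list of flat [x1, y1, z1, x2, ...] coordinate lists of equal
    length, assumed pre-aligned. Sketch of the filtering idea only.
    """
    def rmsd(a, b):
        n_atoms = len(a) // 3
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / n_atoms)
    kept = []
    for c in conformers:
        if all(rmsd(c, k) > threshold for k in kept):
            kept.append(c)
    return kept

# Toy single-atom "conformers": the second is redundant at a 0.5 A threshold.
confs = [[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [2.0, 0.0, 0.0]]
kept = rms_filter(confs, 0.5)
```

Rotor-dependent thresholds like those proposed in the abstract (0.8, 1.0, 1.4 Å) would simply be passed as `threshold` per compound.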
It was observed that NMRCLUST (containing on average 10 times fewer conformers per compound) performed nearly as well as OMEGA, and both outperformed RMS filtering and averaged-RMS filtering in terms of identifying the bioactive conformations with excellent and good matches (0.5 < RMSD < 1.0 Å). Furthermore, we propose thresholds for OMEGA root-mean square filtering depending on the number of rotors in a compound: 0.8, 1.0 and 1.4 for structures with low (1–4), medium (5–9) and high (10–15) numbers of rotatable bonds, respectively. The protocol employed is general and can be applied to reduce the number of conformers in multi-conformer compound collections and alleviate the complexity of downstream data processing in virtual screening experiments. Electronic supplementary material The online version of this article (doi:10.1007/s10822-010-9365-1) contains supplementary material, which is available to authorized users. PMID:20499135</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3575305','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3575305"><span>A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p></p> <p>2012-01-01</p> <p>Background Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. 
Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? Results The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Conclusion Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway. 
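The two aggregation rules named above (Average Probability and Vote Threshold) can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the 0.5 probability cutoff and the example numbers are assumptions.

```python
import numpy as np

def average_probability(probs, cutoff=0.5):
    """Average Probability aggregation: average the per-classifier
    rejection probabilities, then threshold.  probs has shape
    (n_classifiers, n_samples); returns 0/1 predictions."""
    return (probs.mean(axis=0) >= cutoff).astype(int)

def vote_threshold(probs, min_votes=1, cutoff=0.5):
    """Vote Threshold aggregation: each classifier votes 'rejection'
    when its probability exceeds the cutoff; predict rejection when at
    least min_votes classifiers agree."""
    votes = (probs >= cutoff).sum(axis=0)
    return (votes >= min_votes).astype(int)

# Three hypothetical classifiers scoring four samples.
p = np.array([[0.9, 0.4, 0.2, 0.6],
              [0.8, 0.3, 0.6, 0.4],
              [0.7, 0.2, 0.1, 0.3]])
avg = average_probability(p)           # [1 0 0 0]
vote = vote_threshold(p, min_votes=1)  # [1 0 1 1]
```

With `min_votes=1` a single confident classifier suffices to flag a sample, which illustrates why the Vote Threshold rule tends to raise sensitivity at the cost of specificity.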
PMID:23216969</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/23216969','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/23216969"><span>A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Günther, Oliver P; Chen, Virginia; Freue, Gabriela Cohen; Balshaw, Robert F; Tebbutt, Scott J; Hollander, Zsuzsanna; Takhar, Mandeep; McMaster, W Robert; McManus, Bruce M; Keown, Paul A; Ng, Raymond T</p> <p>2012-12-08</p> <p>Biomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble? The first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. 
Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity. Proteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.H41A1417L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.H41A1417L"><span>Multi-model analysis in hydrological prediction</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lanthier, M.; Arsenault, R.; Brissette, F.</p> <p>2017-12-01</p> <p>Hydrologic modelling, by nature, is a simplification of the real-world hydrologic system. 
Ensemble hydrological predictions obtained in this way therefore do not present the full range of possible streamflow outcomes, producing ensembles that exhibit errors in variance such as under-dispersion. Past studies show that lumped models used in prediction mode can return satisfactory results, especially when there is not enough information available on the watershed to run a distributed model. But all lumped models greatly simplify the complex processes of the hydrologic cycle. To generate more spread in the hydrologic ensemble predictions, multi-model ensembles have been considered. In this study, the aim is to propose and analyse a method that gives an ensemble streamflow prediction that properly represents the forecast probabilities with reduced ensemble bias. To achieve this, three simple lumped models are used to generate an ensemble. These will also be combined using multi-model averaging techniques, which generally generate a more accurate hydrograph than the best of the individual models in simulation mode. This new predictive combined hydrograph is added to the ensemble, thus creating a large ensemble which may improve the variability while also improving the ensemble mean bias. The quality of the predictions is then assessed on different periods: 2 weeks, 1 month, 3 months and 6 months using a PIT Histogram of the percentiles of the real observation volumes with respect to the volumes of the ensemble members. Initially, the models were run using historical weather data to generate synthetic flows. This worked for individual models, but not for the multi-model and for the large ensemble. Consequently, by performing data assimilation at each prediction period and thus adjusting the initial states of the models, the PIT Histogram could be constructed using the observed flows while allowing the use of the multi-model predictions. The under-dispersion has been largely corrected on short-term predictions.
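A PIT histogram of the kind described above can be computed directly from ensemble members and observations. A minimal NumPy sketch; the toy volumes are invented for illustration:

```python
import numpy as np

def pit_values(ensemble, observations):
    """Probability Integral Transform values: for each forecast case,
    the fraction of ensemble members that fall below the observation.
    A flat histogram of these values indicates a well-dispersed
    ensemble; a U-shape indicates under-dispersion."""
    ensemble = np.asarray(ensemble, float)          # (n_cases, n_members)
    obs = np.asarray(observations, float)[:, None]  # (n_cases, 1)
    return (ensemble < obs).mean(axis=1)

# Toy example: 3 forecast cases with 4-member ensembles of volumes.
ens = np.array([[10., 12., 14., 16.],
                [20., 22., 24., 26.],
                [ 5.,  6.,  7.,  8.]])
obs = np.array([15., 19., 8.5])
pit = pit_values(ens, obs)                   # [0.75, 0.0, 1.0]
counts, _ = np.histogram(pit, bins=4, range=(0.0, 1.0))
```

Observations that frequently fall outside the ensemble envelope (PIT values of 0 or 1, as in the second and third cases here) pile up in the outer histogram bins, which is the signature of under-dispersion.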
For the longer term, the addition of the multi-model member has been beneficial to the quality of the predictions, although it is too early to determine whether the gain is related simply to the addition of a member or whether the multi-model member has added value in itself.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1916810O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1916810O"><span>Total probabilities of ensemble runoff forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian</p> <p>2017-04-01</p> <p>Ensemble forecasting has a long history in meteorological modelling as an indication of the uncertainty of the forecasts. However, it is necessary to calibrate and post-process the ensembles as they often exhibit both bias and dispersion errors. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these approaches (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework which can estimate post-processing parameters varying in space and time, while giving a spatially and temporally consistent output. However, their method is too computationally complex for our large number of stations, which makes it unsuitable for our purpose. Our post-processing method for the ensembles is developed in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu), where we make forecasts for the whole of Europe based on observations from around 700 catchments.
As the target is flood forecasting, we are also more interested in improving the forecast skill for high flows than in a good prediction of the entire flow regime. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different meteorological forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to estimate the total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, but we are adding a spatial penalty in the calibration process to force a spatial correlation of the parameters. The penalty takes distance, stream-connectivity and size of the catchment areas into account. This can in some cases have a slight negative impact on the calibration error, but avoids large differences between parameters of nearby locations, whether stream connected or not. The spatial calibration also makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H.
and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005. Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017ChA%26A..41..430Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017ChA%26A..41..430Y"><span>Ensemble Pulsar Time Scale</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yin, Dong-shan; Gao, Yu-ping; Zhao, Shu-hong</p> <p>2017-07-01</p> <p>Millisecond pulsars can generate another type of time scale that is totally independent of the atomic time scale, because the physical mechanisms of the pulsar time scale and the atomic time scale are quite different from each other. Usually the pulsar timing observations are not evenly sampled, and the intervals between two data points range from several hours to more than half a month. Furthermore, these data sets are sparse. All this makes it difficult to generate an ensemble pulsar time scale. Hence, a new algorithm to calculate the ensemble pulsar time scale is proposed. Firstly, a cubic spline interpolation is used to densify the data set, and make the intervals between data points uniform. Then, the Vondrak filter is employed to smooth the data set, and get rid of the high-frequency noise, and finally the weighted average method is adopted to generate the ensemble pulsar time scale.
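The three-step algorithm just described (densify, smooth, weighted-average) can be sketched as follows. This is a schematic, not the authors' code: `np.interp` (linear) stands in for the cubic spline, a 3-point moving average stands in for the Vondrak filter, and the pulsar residuals and weights are invented.

```python
import numpy as np

def ensemble_timescale(times_list, residuals_list, weights, grid):
    # Step 1: densify each pulsar's unevenly sampled timing residuals
    # onto a uniform grid (linear stand-in for the cubic spline).
    densified = np.array([np.interp(grid, t, r)
                          for t, r in zip(times_list, residuals_list)])
    # Step 2: suppress high-frequency noise (moving average as a crude
    # stand-in for the Vondrak filter).
    kernel = np.ones(3) / 3.0
    smoothed = np.array([np.convolve(d, kernel, mode='same')
                         for d in densified])
    # Step 3: weighted average across pulsars.
    w = np.asarray(weights, float)
    w = w / w.sum()
    return w @ smoothed

# Two hypothetical pulsars with unevenly sampled residuals (microseconds).
t1, r1 = [0., 2., 5., 9.], [0.0, 0.4, 0.1, 0.3]
t2, r2 = [0., 3., 7., 9.], [0.2, 0.0, 0.2, 0.1]
grid = np.linspace(0., 9., 10)
ets = ensemble_timescale([t1, t2], [r1, r2], weights=[2., 1.], grid=grid)
```

In practice the weights would reflect each pulsar's timing stability, so the most stable clocks dominate the ensemble.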
The newly released NANOGRAV (North American Nanohertz Observatory for Gravitational Waves) 9-year data set is used to generate the ensemble pulsar time scale. This data set includes the 9-year observational data of 37 millisecond pulsars observed by the 100-meter Green Bank telescope and the 305-meter Arecibo telescope. It is found that the algorithm used in this paper can effectively reduce the influence of noise in the pulsar timing residuals, and improve the long-term stability of the ensemble pulsar time scale. Results indicate that the long-term (> 1 yr) stability of the ensemble pulsar time scale is better than 3.4 × 10^-15.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/12366212','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/12366212"><span>Fracture of disordered solids in compression as a critical phenomenon. I. Statistical mechanics formalism.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Toussaint, Renaud; Pride, Steven R</p> <p>2002-09-01</p> <p>This is the first of a series of three articles that treats fracture localization as a critical phenomenon. This first article establishes a statistical mechanics based on ensemble averages when fluctuations through time play no role in defining the ensemble. Ensembles are obtained by dividing a huge rock sample into many mesoscopic volumes. Because rocks are a disordered collection of grains in cohesive contact, we expect that once shear strain is applied and cracks begin to arrive in the system, the mesoscopic volumes will have a wide distribution of different crack states. These mesoscopic volumes are the members of our ensembles.
We determine the probability of observing a mesoscopic volume to be in a given crack state by maximizing Shannon's measure of the emergent-crack disorder subject to constraints coming from the energy balance of brittle fracture. The laws of thermodynamics, the partition function, and the quantification of temperature are obtained for such cracking systems.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005ASAJ..118Q1918L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005ASAJ..118Q1918L"><span>Air flow measurement techniques applied to noise reduction of a centrifugal blower</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Laage, John W.; Armstrong, Ashli J.; Eilers, Daniel J.; Olsen, Michael G.; Mann, J. Adin</p> <p>2005-09-01</p> <p>The air flow in a centrifugal blower was studied using a variety of flow and sound measurement techniques. The flow measurement techniques employed included Particle Image Velocimetry (PIV), pitot tubes, and a five-hole spherical probe. PIV was used to measure instantaneous and ensemble-averaged velocity fields over a large area of the outlet duct as a function of fan position, allowing for the visualization of the flow as it left the fan blades and progressed downstream. The results from the flow measurements were reviewed alongside the results of the sound measurements with the goal of identifying sources of noise and inefficiencies in flow performance. The radiated sound power was divided into broadband and tonal noise and related to measures of the flow. The changes in the tonal and broadband sound were compared to changes in flow quantities such as the turbulent kinetic energy and Reynolds stress. Results for each method will be presented to demonstrate the strengths of each flow measurement technique as well as their limitations.
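The ensemble-averaged velocity fields and the turbulence quantities mentioned above (turbulent kinetic energy, Reynolds stress) follow from a Reynolds decomposition of the phase-locked PIV snapshots. A minimal NumPy sketch, with invented 2x2 toy fields:

```python
import numpy as np

def ensemble_stats(u_fields, v_fields):
    """Reynolds decomposition of N phase-locked PIV snapshots.
    u_fields, v_fields: (N, ny, nx) instantaneous velocity components.
    Returns the ensemble-averaged fields, the two-component turbulent
    kinetic energy, and the Reynolds shear stress per unit density."""
    u = np.asarray(u_fields, float)
    v = np.asarray(v_fields, float)
    u_mean, v_mean = u.mean(axis=0), v.mean(axis=0)
    up, vp = u - u_mean, v - v_mean              # fluctuating parts
    tke = 0.5 * (np.mean(up**2, axis=0) + np.mean(vp**2, axis=0))
    reynolds_shear = -np.mean(up * vp, axis=0)   # -<u'v'>
    return u_mean, v_mean, tke, reynolds_shear

# Two toy 2x2 snapshots (identical u and v purely for illustration).
u = [[[1., 2.], [3., 4.]],
     [[3., 2.], [1., 4.]]]
v = u
u_mean, v_mean, tke, tau = ensemble_stats(u, v)
```

Because planar PIV captures only two velocity components, the TKE here is the two-component estimate; the third component is unavailable without stereoscopic measurements.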
Finally, the role that each played in identifying noise sources is described.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_20 --> <div id="page_21" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="401"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2011JNEng...8a6002W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2011JNEng...8a6002W"><span>State-space decoding of primary afferent neuron firing rates</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wagenaar, J. B.; Ventura, V.; Weber, D. J.</p> <p>2011-02-01</p> <p>Kinematic state feedback is important for neuroprostheses to generate stable and adaptive movements of an extremity.
State information, represented in the firing rates of populations of primary afferent (PA) neurons, can be recorded at the level of the dorsal root ganglia (DRG). Previous work in cats showed the feasibility of using DRG recordings to predict the kinematic state of the hind limb using reverse regression. Although accurate decoding results were attained, reverse regression does not make efficient use of the information embedded in the firing rates of the neural population. In this paper, we present decoding results based on state-space modeling, and show that it is a more principled and more efficient method for decoding the firing rates in an ensemble of PA neurons. In particular, we show that we can extract confounded information from neurons that respond to multiple kinematic parameters, and that including velocity components in the firing rate models significantly increases the accuracy of the decoded trajectory. We show that, on average, state-space decoding is twice as efficient as reverse regression for decoding joint and endpoint kinematics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhRvF...2k4604M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhRvF...2k4604M"><span>Comparison of forcing functions in magnetohydrodynamics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>McKay, Mairi E.; Linkmann, Moritz; Clark, Daniel; Chalupa, Adam A.; Berera, Arjun</p> <p>2017-11-01</p> <p>Results are presented of direct numerical simulations of incompressible, homogeneous magnetohydrodynamic turbulence without a mean magnetic field, subject to different mechanical forcing functions commonly used in the literature. 
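State-space decoding of the kind described in the preceding abstract is commonly implemented with a linear-Gaussian (Kalman) filter. The following is a generic sketch under that assumption, not the authors' fitted model; the matrices and the firing-rate sequence are invented for illustration.

```python
import numpy as np

def kalman_decode(rates, A, C, Q, R, x0, P0):
    """Generic linear-Gaussian Kalman filter: the kinematic state x_t
    (here position and velocity) evolves as x_t = A x_{t-1} + w,
    w ~ N(0, Q), and firing rates are modelled as y_t = C x_t + v,
    v ~ N(0, R).  Returns the filtered state estimate at each step."""
    x, P = np.asarray(x0, float), np.asarray(P0, float)
    decoded = []
    for y in np.asarray(rates, float):
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the observed firing rates
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (y - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        decoded.append(x.copy())
    return np.array(decoded)

# Toy setup: 1-D limb position + velocity, two "neurons" whose rates
# happen to read out position and velocity directly (an assumption
# made purely to keep the example small).
A = np.array([[1., 1.],
              [0., 1.]])        # constant-velocity kinematic model
C = np.eye(2)                   # observation model
Q = 0.01 * np.eye(2)            # process noise
R = 0.1 * np.eye(2)             # observation noise
rates = np.array([[1., 1.], [2., 1.], [3., 1.], [4., 1.], [5., 1.]])
xs = kalman_decode(rates, A, C, Q, R, x0=np.zeros(2), P0=np.eye(2))
```

Including velocity in the state, as the abstract notes, lets the filter exploit the smoothness of limb movement rather than treating each time bin independently.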
Specifically, the forces are negative damping (which uses the large-scale velocity field as a forcing function), a nonhelical random force, and a nonhelical static sinusoidal force (analogous to helical ABC forcing). The time evolution of the three ideal invariants (energy, magnetic helicity, and cross helicity), the time-averaged energy spectra, the energy ratios, and the dissipation ratios are examined. All three forcing functions produce qualitatively similar steady states with regard to the time evolution of the energy and magnetic helicity. However, differences in the cross-helicity evolution are observed, particularly in the case of the static sinusoidal method of energy injection. Indeed, an ensemble of sinusoidally forced simulations with identical parameters shows significant variations in the cross helicity over long time periods, casting some doubt on the validity of the principle of ergodicity in systems in which the injection of helicity cannot be controlled. Cross helicity can unexpectedly enter the system through the forcing function and must be carefully monitored.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1987icap.conf..243A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1987icap.conf..243A"><span>Evaluation of dual polarization scattering matrix radar rain backscatter measurements in the X- and Q-bands</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Agrawal, A. P.; Carnegie, D. W.; Boerner, W.-M.</p> <p></p> <p>This paper presents an evaluation of polarimetric rain backscatter measurements collected with coherent dual polarization radar systems in the X (8.9 GHz) and Q (45 GHz) bands, the first being operated in a pulsed mode and the second being an FM-CW system.
The polarimetric measurement data consisted, for each band, of fifty files of time-sequential scattering matrix measurements expressed in terms of a linear (H, V) antenna polarization state basis. The rain backscattering takes place in a rain cell defined by the beam widths and down-range distances of 275 ft through 325 ft, and the scattering matrices were measured far below the hydrometeoric scattering center decorrelation time so that ensemble averaging of time-sequential scattering matrices may be applied. In the data evaluation great care was taken in determining: (1) polarimetric Doppler velocities associated with the motion of descending oscillating raindrops and/or eddies within the moving swaths of coastal rain showers, and (2) the properties of the associated co/cross-polarization rain clutter nulls and their distributions on the Poincare polarization sphere.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/465806','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/465806"><span>Output characteristics of SASE-driven short wavelength FEL's</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Fawley, W.M.</p> <p></p> <p>This paper investigates various properties of the "microspikes" associated with self-amplified spontaneous emission (SASE) in a short wavelength free-electron laser (FEL). Using results from the 2-D numerical simulation code GINGER, we confirm theoretical predictions such as the convective group velocity in the exponential gain regime. In the saturated gain regime beyond the initial saturation, we find that the average radiation power continues to grow with an approximately linear dependence upon undulator length.
Moreover, the spectrum significantly broadens and shifts in wavelength to the redward direction, with P(ω) approaching a constant, asymptotic value. This is in marked contrast to the exponential gain regime, where the spectrum steadily narrows, P(ω) grows, and the central wavelength remains constant with z. Via use of a spectrogram diagnostic S(ω,t), it appears that the radiation pattern in the saturated gain regime is composed of an ensemble of distinct "sinews" whose widths remain approximately constant but whose central wavelengths can "chirp" by a small extent with t.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018SSEle.143...49D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018SSEle.143...49D"><span>Multi-Subband Ensemble Monte Carlo simulations of scaled GAA MOSFETs</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Donetti, L.; Sampedro, C.; Ruiz, F. G.; Godoy, A.; Gamiz, F.</p> <p>2018-05-01</p> <p>We developed a Multi-Subband Ensemble Monte Carlo simulator for non-planar devices, taking into account two-dimensional quantum confinement. It self-consistently couples the solution of the 3D Poisson equation, the 2D Schrödinger equation, and the 1D Boltzmann transport equation with the Ensemble Monte Carlo method. This simulator was employed to study MOS devices based on ultra-scaled Gate-All-Around Si nanowires with diameters from 4 nm to 8 nm and gate lengths from 8 nm to 14 nm. We studied the output and transfer characteristics, interpreting the behavior in the sub-threshold region and in the ON state in terms of the spatial charge distribution and the mobility computed with the same simulator.
We analyzed the results, highlighting the contribution of different valleys and subbands and the effect of the gate bias on the energy and velocity profiles. Finally, the scaling behavior was studied, showing that only the devices with D = 4 nm maintain good control of the short-channel effects down to a gate length of 8 nm.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017RScI...88j3507K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017RScI...88j3507K"><span>Ring-averaged ion velocity distribution function probe for laboratory magnetized plasma experiment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kawamori, Eiichirou; Chen, Jinting; Lin, Chiahsuan; Lee, Zongmau</p> <p>2017-10-01</p> <p>The ring-averaged velocity distribution function of ions at a fixed guiding-center position is a fundamental quantity in gyrokinetic plasma physics. We have developed a diagnostic tool for the ring-averaged velocity distribution function of ions for laboratory plasma experiments, named the ring-averaged ion distribution function probe (RIDFP). The RIDFP is a set of ion collectors for different velocities. It is designed to be immersed in magnetized plasmas and achieves momentum selection of incoming ions by selection of their Larmor radii. To nullify the influence of the sheath potential surrounding the RIDFP on the orbits of the incoming ions, the electrostatic potential of the RIDFP body is automatically adjusted to coincide with the space potential of the target plasma with the use of an emissive probe and a voltage follower.
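Momentum selection by Larmor radius rests on the relation r_L = m v_perp / (q B): a collector placed at a given radius samples ions of one perpendicular speed. A small sketch, assuming a hydrogen plasma and hypothetical collector positions (the real RIDFP geometry is not specified here):

```python
import numpy as np

# Physical constants; a hydrogen (proton) plasma is assumed.
M_ION = 1.67262192e-27   # proton mass [kg]
Q_E   = 1.602176634e-19  # elementary charge [C]

def vperp_for_radius(r_larmor, B, mass=M_ION, charge=Q_E):
    """Perpendicular ion speed selected by a collector sitting at
    Larmor radius r_larmor in field B, from r_L = m * v_perp / (q * B)."""
    return charge * B * np.asarray(r_larmor, float) / mass

# Hypothetical collector radii (1, 2 and 4 cm) in a 0.1 T field.
radii = np.array([0.01, 0.02, 0.04])
v = vperp_for_radius(radii, B=0.1)   # v[0] is roughly 9.6e4 m/s
```

The selected speed scales linearly with collector radius, so an array of collectors at graded radii samples the distribution function point by point.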
The developed RIDFP successfully measured the equilibrium ring-averaged velocity distribution function of a laboratory magnetized plasma, which was in accordance with a Maxwellian distribution having an ion temperature of 0.2 eV.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29092513','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29092513"><span>Ring-averaged ion velocity distribution function probe for laboratory magnetized plasma experiment.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Kawamori, Eiichirou; Chen, Jinting; Lin, Chiahsuan; Lee, Zongmau</p> <p>2017-10-01</p> <p>The ring-averaged velocity distribution function of ions at a fixed guiding-center position is a fundamental quantity in gyrokinetic plasma physics. We have developed a diagnostic tool for the ring-averaged velocity distribution function of ions for laboratory plasma experiments, named the ring-averaged ion distribution function probe (RIDFP). The RIDFP is a set of ion collectors for different velocities. It is designed to be immersed in magnetized plasmas and achieves momentum selection of incoming ions by selection of their Larmor radii. To nullify the influence of the sheath potential surrounding the RIDFP on the orbits of the incoming ions, the electrostatic potential of the RIDFP body is automatically adjusted to coincide with the space potential of the target plasma with the use of an emissive probe and a voltage follower.
The developed RIDFP successfully measured the equilibrium ring-averaged velocity distribution function of a laboratory magnetized plasma, which was in accordance with the Maxwellian distribution having an ion temperature of 0.2 eV.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28268573','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28268573"><span>Deep neural ensemble for retinal vessel segmentation in fundus images towards achieving label-free angiography.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lahiri, A; Roy, Abhijit Guha; Sheet, Debdoot; Biswas, Prabir Kumar</p> <p>2016-08-01</p> <p>Automated segmentation of retinal blood vessels in label-free fundus images entails a pivotal role in computed aided diagnosis of ophthalmic pathologies, viz., diabetic retinopathy, hypertensive disorders and cardiovascular diseases. The challenge remains active in medical image analysis research due to varied distribution of blood vessels, which manifest variations in their dimensions of physical appearance against a noisy background. In this paper we formulate the segmentation challenge as a classification task. Specifically, we employ unsupervised hierarchical feature learning using ensemble of two level of sparsely trained denoised stacked autoencoder. First level training with bootstrap samples ensures decoupling and second level ensemble formed by different network architectures ensures architectural revision. We show that ensemble training of auto-encoders fosters diversity in learning dictionary of visual kernels for vessel segmentation. SoftMax classifier is used for fine tuning each member autoencoder and multiple strategies are explored for 2-level fusion of ensemble members. 
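One simple member-fusion strategy of the kind the retinal-vessel abstract explores is to average the per-pixel vessel probabilities of all ensemble members and threshold the result. The member outputs below are synthetic stand-ins, not the paper's networks:

```python
# Fuse an ensemble of classifiers by averaging their per-pixel vessel
# probabilities, then threshold the averaged map at 0.5.
def fuse_mean(member_probs):
    """Pixel-wise mean of the probability maps of all ensemble members."""
    n = len(member_probs)
    return [sum(m[i] for m in member_probs) / n
            for i in range(len(member_probs[0]))]

members = [
    [0.9, 0.2, 0.6],   # member 1: probability of "vessel" at 3 pixels
    [0.8, 0.1, 0.7],   # member 2
    [0.7, 0.3, 0.5],   # member 3
]
fused = fuse_mean(members)                       # roughly [0.8, 0.2, 0.6]
labels = [1 if p >= 0.5 else 0 for p in fused]   # binary vessel mask
```

Averaging probabilities before thresholding tends to suppress the idiosyncratic errors of individual members, which is the usual motivation for ensemble fusion.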
On the DRIVE dataset, we achieve a maximum average accuracy of 95.33% with an impressively low standard deviation of 0.003 and a Kappa agreement coefficient of 0.708. Comparison with other major algorithms substantiates the high efficacy of our model.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26389618','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26389618"><span>Sensory processing patterns predict the integration of information held in visual working memory.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lowe, Matthew X; Stevenson, Ryan A; Wilson, Kristin E; Ouslis, Natasha E; Barense, Morgan D; Cant, Jonathan S; Ferber, Susanne</p> <p>2016-02-01</p> <p>Given the limited resources of visual working memory, multiple items may be remembered as an averaged group or ensemble. As a result, local information may be ill-defined, but these ensemble representations provide accurate diagnostics of the natural world by combining gist information with item-level information held in visual working memory. Some neurodevelopmental disorders are characterized by sensory processing profiles that predispose individuals to avoid or seek out sensory stimulation, fundamentally altering their perceptual experience. Here, we report that such processing styles affect the computation of ensemble statistics in the general population. We identified stable adult sensory processing patterns to demonstrate that individuals with low sensory thresholds, who show a greater proclivity to engage in active response strategies to prevent sensory overstimulation, are less likely to integrate mean-size information across a set of similar items and are therefore more likely to be biased away from the mean size representation of an ensemble display. 
We therefore propose that the study of ensemble processing should extend beyond the statistics of the display and also consider the statistics of the observer.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://eric.ed.gov/?q=radiation+AND+electromagnetic&id=EJ830492','ERIC'); return false;" href="https://eric.ed.gov/?q=radiation+AND+electromagnetic&id=EJ830492"><span>How to Explain the Non-Zero Mass of Electromagnetic Radiation Consisting of Zero-Mass Photons</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.eric.ed.gov/ERICWebPortal/search/extended.jsp?_pageLabel=advanced">ERIC Educational Resources Information Center</a></p> <p>Gabovich, Alexander M.; Gabovich, Nadezhda A.</p> <p>2007-01-01</p> <p>The mass of electromagnetic radiation in a cavity is considered using the correct relativistic approach based on the concept of a scalar mass not dependent on the particle (system) velocity. It is shown that due to the non-additivity of mass in the special theory of relativity the ensemble of chaotically propagating mass-less photons in the cavity…</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA103575','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA103575"><span>Multisite Testing of the Discrete Address Beacon System (DABS).</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1981-07-01</p> <p>downlink messages from an airborne transponder in addition to performing the lockout function. ... a distributed computer system containing 36 minicomputers, most of which are organized into groups (or ensembles) of four computers interfaced to a local data bus. ... Each sensor may provide surveillance and ... position and velocity. 
Depending upon the means used for scenario generation, a computer subsystem monitors in real time all communication and aircraft</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012PhDT........43R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012PhDT........43R"><span>Unsteady specific work and isentropic efficiency of a radial turbine driven by pulsed detonations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rouser, Kurt P.</p> <p></p> <p>There has been longstanding government and industry interest in pressure-gain combustion for use in Brayton-cycle-based engines. Theoretically, pressure-gain combustion allows heat addition with reduced entropy loss. The pulsed detonation combustor (PDC) is a device that can provide such pressure-gain combustion and possibly replace typical steady deflagration combustors. The PDC is inherently unsteady, however, and comparisons with conventional steady deflagration combustors must be based upon time-integrated performance variables. In this study, the radial turbine of a Garrett automotive turbocharger was coupled directly to and driven, full admission, by a PDC in experiments fueled by hydrogen or ethylene. Data included pulsed cycle time histories of turbine inlet and exit temperature, pressure, velocity, mass flow, and enthalpy. The unsteady inlet flowfield showed momentary reverse flow, and thus unsteady accumulation and expulsion of mass and enthalpy within the device. The coupled turbine-driven compressor provided a time-resolved measure of turbine power. Peak power increased with PDC fill fraction, and duty cycle increased with PDC frequency. Cycle-averaged unsteady specific work increased with fill fraction and frequency. 
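The ensemble averaging over repeated combustor cycles used in the pulsed-detonation turbine study can be sketched as a phase-lock average. The waveform, noise level, and cycle count below are invented for illustration:

```python
import math
import random

def ensemble_average_cycles(samples, samples_per_cycle):
    """Average each phase index across all complete cycles (phase-lock average)."""
    n_cycles = len(samples) // samples_per_cycle
    avg = [0.0] * samples_per_cycle
    for c in range(n_cycles):
        for i in range(samples_per_cycle):
            avg[i] += samples[c * samples_per_cycle + i]
    return [a / n_cycles for a in avg]

random.seed(0)
spc = 50           # samples per cycle (assumed)
n_cycles = 200     # number of recorded cycles (assumed)
# Synthetic pulsed signal: a periodic waveform plus measurement noise.
signal = [math.sin(2 * math.pi * i / spc) + random.gauss(0.0, 0.3)
          for i in range(spc * n_cycles)]
avg = ensemble_average_cycles(signal, spc)
# Cycle-to-cycle noise shrinks roughly as 1/sqrt(n_cycles):
max_dev = max(abs(avg[i] - math.sin(2 * math.pi * i / spc)) for i in range(spc))
```

Averaging many cycles at a fixed phase is what turns an inherently unsteady signal into a repeatable cycle-resolved profile.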
An unsteady turbine efficiency formulation is proposed, including heat transfer effects, an enthalpy-flux-weighted total pressure ratio, and ensemble averaging over multiple cycles. Turbine efficiency increased with frequency but was lower than the manufacturer-reported conventional steady turbine efficiency.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1983PhyBC.120..255F','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1983PhyBC.120..255F"><span>Neutron scattering on solitons in quasi-one-dimensional systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Fedyanin, V. K.</p> <p>1983-05-01</p> <p>In the framework of the model of an ideal lattice gas of solitons, we obtain the following general formula for the dynamic neutron scattering form factor: S(q, w) = N̄S₁(q, w), with S₁(q, w) = [p′(ν₀)δ²(ν₀)/(2πqZ₁h)] f²(qδ(ν₀)). Here q = k′ − k and w = E′ − E are the neutron momentum and energy transfer, respectively, ν₀ = w q⁻¹, δ(ν) is the width of a soliton of velocity ν, p′(ν₀) = dp/dν|ν₀, p(ν) is the soliton momentum, E(p(ν)) is the soliton energy, and N̄ is the average number of solitons at θ = k_B T = β⁻¹, constructed from the soliton non-linear differential equations. The derivation of the formula is essentially based on (i) the specific dependence of these solutions on ξ = x − vt, and (ii) a generalization of the averaging over the soliton ensemble proposed in ref. [1]. The specific properties of the scattering spectra of polypeptides, DNA molecules and magnetics as functions of the temperature and interaction parameters and of external fields are discussed on the basis of this formula. The contribution to S(q, w) for “slow” solitons in magnetics has been calculated in [2, 3]. 
(For each concrete model the authors were forced to reformulate the method of calculation anew, to assume small ν, etc.)</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017PhyA..487..215S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017PhyA..487..215S"><span>Generalized ensemble theory with non-extensive statistics</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Shen, Ke-Ming; Zhang, Ben-Wei; Wang, En-Ke</p> <p>2017-12-01</p> <p>The non-extensive canonical ensemble theory is reconsidered with the method of Lagrange multipliers by maximizing the Tsallis entropy, with the constraint that the normalizing term of the Tsallis q-average of physical quantities, the sum ∑_j p_j^q, is independent of the probability p_i for Tsallis parameter q. The self-referential problem in the deduced probability and thermal quantities in non-extensive statistics is thus avoided, and thermodynamical relationships are obtained in a consistent and natural way. We also extend the study to the non-extensive grand canonical ensemble theory and obtain the q-deformed Bose-Einstein distribution as well as the q-deformed Fermi-Dirac distribution. 
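The Tsallis entropy maximized in the abstract above, S_q = (1 − ∑_j p_j^q)/(q − 1), reduces to the Shannon entropy as q → 1; a minimal numerical check of that limit:

```python
import math

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_j p_j^q) / (q - 1); Shannon entropy at q = 1."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pj * math.log(pj) for pj in p if pj > 0.0)
    return (1.0 - sum(pj ** q for pj in p)) / (q - 1.0)

p = [0.5, 0.3, 0.2]            # an arbitrary normalized distribution
shannon = tsallis_entropy(p, 1.0)
near_one = tsallis_entropy(p, 1.0 + 1e-6)   # approaches the Shannon value
```

For q ≠ 1 the entropy is non-additive over independent subsystems, which is the defining "non-extensive" feature the abstract builds on.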
The theory is further applied to the generalized Planck law to demonstrate the distinct behaviors of the various generalized q-distribution functions discussed in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19730002072','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19730002072"><span>Interplanetary double-shock ensembles with anomalous electrical conductivity</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Dryer, M.</p> <p>1972-01-01</p> <p>Similarity theory is applied to the case of constant-velocity, piston-driven shock waves. This family of solutions, incorporating the interplanetary magnetic field for the case of infinite electric conductivity, represents one class of experimentally observed, flare-generated shock waves. This paper discusses the theoretical extension to flows with finite conductivity (presumably caused by unspecified modes of wave-particle interactions). Solutions, including reverse shocks, are found for a wide range of magnetic Reynolds numbers from one to infinity. Consideration of a zero and nonzero ambient flowing solar wind (together with removal of magnetic considerations) enables the recovery of earlier similarity solutions as well as numerical simulations. 
A limited comparison with observations suggests that flare energetics can be reasonably estimated once the shock velocity, ambient solar wind velocity and density, and ambient azimuthal Alfven Mach number are known.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2009AGUFM.S31D..06B','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2009AGUFM.S31D..06B"><span>Transdimensional Seismic Tomography</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bodin, T.; Sambridge, M.</p> <p>2009-12-01</p> <p>In seismic imaging the degree of model complexity is usually determined by manually tuning damping parameters within a fixed parameterization chosen in advance. Here we present an alternative methodology for seismic travel time tomography where the model complexity is controlled automatically by the data. In particular we use a variable parametrization consisting of Voronoi cells with mobile geometry, shape and number, all treated as unknowns in the inversion. The reversible jump algorithm is used to sample the transdimensional model space within a Bayesian framework which avoids global damping procedures and the need to tune regularisation parameters. The method is an ensemble inference approach, as many potential solutions are generated with variable numbers of cells. Information is extracted from the ensemble as a whole by performing Monte Carlo integration to produce the expected Earth model. The ensemble of models can also be used to produce velocity uncertainty estimates and experiments with synthetic data suggest they represent actual uncertainty surprisingly well. In a transdimensional approach, the level of data uncertainty directly determines the model complexity needed to satisfy the data. 
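Extracting an expected Earth model and an uncertainty estimate from an ensemble of candidate solutions, as in the transdimensional tomography abstract above, amounts to Monte Carlo integration over the ensemble. The synthetic velocity model and noise level here are invented:

```python
import math
import random

random.seed(1)
n_models, n_points = 500, 10
true_model = [3.0 + 0.1 * i for i in range(n_points)]   # km/s, illustrative
# Each "posterior sample" is the true model plus noise, standing in for
# the variability of the sampled variable-cell models:
ensemble = [[v + random.gauss(0.0, 0.2) for v in true_model]
            for _ in range(n_models)]

# Expected model: pointwise ensemble mean (Monte Carlo integration).
expected = [sum(m[i] for m in ensemble) / n_models for i in range(n_points)]
# Uncertainty: pointwise ensemble standard deviation.
uncert = [math.sqrt(sum((m[i] - expected[i]) ** 2 for m in ensemble)
                    / (n_models - 1)) for i in range(n_points)]
max_err = max(abs(expected[i] - true_model[i]) for i in range(n_points))
```

The same pointwise statistics are how "velocity uncertainty estimates" are read off an ensemble of tomographic solutions.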
Intriguingly, the Bayesian formulation can be extended to the case where the data uncertainty is itself uncertain. Experiments show that it is possible to recover the data noise estimate while at the same time controlling model complexity in an automated fashion. The method is tested on synthetic data in a 2-D application and compared with a more standard matrix-based inversion scheme. The method has also been applied to real data obtained from cross-correlation of ambient noise, where little is known about the size of the errors associated with the travel times. As an example, a tomographic image of Rayleigh wave group velocity for the Australian continent is constructed from 5 s data together with uncertainty estimates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4764389','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4764389"><span>In Vitro and In Vivo Single Myosin Step-Sizes in Striated Muscle a</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Burghardt, Thomas P.; Sun, Xiaojing; Wang, Yihua; Ajtai, Katalin</p> <p>2016-01-01</p> <p>Myosin in muscle transduces ATP free energy into the mechanical work of moving actin. It has a motor domain transducer containing ATP and actin binding sites, and mechanical elements coupling motor impulse to the myosin filament backbone, providing transduction/mechanical-coupling. The mechanical coupler is a lever-arm stabilized by bound essential and regulatory light chains. The lever-arm rotates cyclically to impel bound filamentous actin. Linear actin displacement due to lever-arm rotation is the myosin step-size. 
A high-throughput quantum-dot-labeled actin in vitro motility assay (Qdot assay) measures motor step-size in the context of an ensemble of actomyosin interactions. The ensemble context imposes a constant-velocity constraint on myosins interacting with one actin filament. For a cardiac myosin producing multiple step-sizes, a “second characterization” is step-frequency, which pairs a longer step-size with a lower frequency, maintaining a linear actin velocity identical to that of a shorter step-size, higher-frequency actomyosin cycle. The step-frequency characteristic involves and integrates myosin enzyme kinetics, mechanical strain, and other ensemble-affected characteristics. The high-throughput Qdot assay suits a new paradigm calling for wide surveillance of the vast number of disease- or aging-relevant myosin isoforms, in contrast with the alternative model calling for exhaustive research on a tiny subset of myosin forms. The zebrafish embryo assay (Z assay) performs single-myosin step-size and step-frequency assaying in vivo, combining single-myosin mechanical and whole-muscle physiological characterizations in one model organism. The Qdot and Z assays cover “bottom-up” and “top-down” assaying of myosin characteristics. PMID:26728749</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70042009','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70042009"><span>Seismic structure of the crust and uppermost mantle of South America and surrounding oceanic basins</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Chulick, Gary S.; Detweiler, Shane; Mooney, Walter D.</p> <p>2013-01-01</p> <p>We present a new set of contour maps of the seismic structure of South America and the surrounding ocean basins. 
These maps include new data, helping to constrain crustal thickness, whole-crustal average P-wave and S-wave velocity, and the seismic velocity of the uppermost mantle (Pn and Sn). We find that: (1) The weighted average thickness of the crust under South America is 38.17 km (standard deviation, s.d. ±8.7 km), which is ∼1 km thinner than the global average of 39.2 km (s.d. ±8.5 km) for continental crust. (2) Histograms of whole-crustal P-wave velocities for the South American crust are bi-modal, with the lower peak occurring for crust that appears to be missing a high-velocity (6.9–7.3 km/s) lower crustal layer. (3) The average P-wave velocity of the crystalline crust (Pcc) is 6.47 km/s (s.d. ±0.25 km/s). This is essentially identical to the global average of 6.45 km/s. (4) The average Pn velocity beneath South America is 8.00 km/s (s.d. ±0.23 km/s), slightly lower than the global average of 8.07 km/s. (5) A region across northern Chile and northeast Argentina has anomalously low P- and S-wave velocities in the crust. Geographically, this corresponds to the shallowly-subducted portion of the Nazca plate (the Pampean flat slab first described by Isacks et al., 1968), which is also a region of crustal extension. (6) The thick crust of the Brazilian craton appears to extend into Venezuela and Colombia. (7) The crust in the Amazon basin and along the western edge of the Brazilian craton may be thinned by extension. 
(8) The average crustal P-wave velocity under the eastern Pacific seafloor is higher than under the western Atlantic seafloor, most likely due to the thicker sediment layer on the older Atlantic seafloor.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMNG33A1864M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMNG33A1864M"><span>Assimilating every-30-second 100-m-mesh radar observations for convective weather: implications to non-Gaussian PDF</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Miyoshi, T.; Teramura, T.; Ruiz, J.; Kondo, K.; Lien, G. Y.</p> <p>2016-12-01</p> <p>Convective weather is known to be highly nonlinear and chaotic, and it is hard to predict their location and timing precisely. Our Big Data Assimilation (BDA) effort has been exploring to use dense and frequent observations to avoid non-Gaussian probability density function (PDF) and to apply an ensemble Kalman filter under the Gaussian error assumption. The phased array weather radar (PAWR) can observe a dense three-dimensional volume scan with 100-m range resolution and 100 elevation angles in only 30 seconds. The BDA system assimilates the PAWR reflectivity and Doppler velocity observations every 30 seconds into 100 ensemble members of storm-scale numerical weather prediction (NWP) model at 100-m grid spacing. The 30-second-update, 100-m-mesh BDA system has been quite successful in multiple case studies of local severe rainfall events. However, with 1000 ensemble members, the reduced-resolution BDA system at 1-km grid spacing showed significant non-Gaussian PDF with every-30-second updates. With a 10240-member ensemble Kalman filter with a global NWP model at 112-km grid spacing, we found roughly 1000 members satisfactory to capture the non-Gaussian error structures. 
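The Gaussian analysis step that an ensemble Kalman filter of the kind described above applies can be sketched for a single scalar state. The ensemble size, error levels, and the perturbed-observation form below are illustrative assumptions, not the BDA system's configuration:

```python
import random

random.seed(2)
n = 100                # ensemble size (assumed)
truth = 5.0            # true state, arbitrary units
obs_err = 0.5          # observation error standard deviation (assumed)

background = [truth + random.gauss(0.0, 1.0) for _ in range(n)]  # prior members
y_obs = truth + random.gauss(0.0, obs_err)                       # one observation

xb_mean = sum(background) / n
var_b = sum((x - xb_mean) ** 2 for x in background) / (n - 1)
gain = var_b / (var_b + obs_err ** 2)   # Kalman gain for a directly observed state

# Stochastic (perturbed-observation) update of each member:
analysis = [x + gain * (y_obs + random.gauss(0.0, obs_err) - x)
            for x in background]
xa_mean = sum(analysis) / n
var_a = sum((x - xa_mean) ** 2 for x in analysis) / (n - 1)  # shrinks below var_b
```

The update is optimal only under the Gaussian error assumption, which is why dense, frequent observations that keep the ensemble near-Gaussian matter in the abstract's argument.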
With these in mind, we explore how the density of observations in space and time affects the non-Gaussianity in an ensemble Kalman filter with a simple toy model. In this presentation, we will present the most up-to-date results of the BDA research, as well as the investigation with the toy model on the non-Gaussianity with dense and frequent observations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19740018956','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19740018956"><span>On the error probability of general tree and trellis codes with applications to sequential decoding</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Johannesson, R.</p> <p>1973-01-01</p> <p>An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. 
These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_21 --> <div id="page_22" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="421"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2010APExp...3i2801K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2010APExp...3i2801K"><span>Optical Rabi Oscillations in a Quantum Dot Ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kujiraoka, Mamiko; Ishi-Hayase, Junko; Akahane, Kouichi; Yamamoto, Naokatsu; Ema, Kazuhiro; Sasaki, Masahide</p> <p>2010-09-01</p> <p>We have investigated Rabi 
oscillations of exciton polarization in a self-assembled InAs quantum dot ensemble. The four-wave mixing signals, measured as a function of the average pulse area, showed a large in-plane anisotropy and nonharmonic oscillations. The experimental results can be well reproduced by a two-level model calculation including three types of inhomogeneities without any fitting parameters. The large anisotropy can be well explained by the anisotropic dipole moments. We also find that the nonharmonic behaviors partly originate from polarization interference.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24853864','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24853864"><span>A random matrix approach to credit risk.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas</p> <p>2014-01-01</p> <p>We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. 
Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA478634','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA478634"><span>ensembleBMA: An R Package for Probabilistic Forecasting using Ensembles and Bayesian Model Averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2007-08-15</p> <p>library is used to allow addition of the legend and map outline to the plot. 
> bluescale <- function(n) hsv(4/6, s = seq(from = 1/8, to = 1, length = n), v = 1) > plotBMAforecast( probFreeze290104, lon=srftGridData$lon, lat=srftGridData$lat, type="image", col=bluescale(100)) > title("Probability of...probPrecip130103) # used to determine zlim in plots [1] 0.02832709 0.99534860 > plotBMAforecast( probPrecip130103[,Ŕ"], lon=prcpGridData$lon, lat</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017IJT....38..149S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017IJT....38..149S"><span>Establishment of a New National Reference Ensemble of Water Triple Point Cells</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Senn, Remo</p> <p>2017-10-01</p> <p>The results of the Bilateral Comparison EURAMET.T-K3.5 (w/VSL, The Netherlands), with the goal of linking Switzerland's ITS-90 realization (Ar to Al) to the latest key comparisons, gave strong indications of a discrepancy in the realization of the triple point of water. Given the age of the cells, about twenty years, it was decided to replace the complete reference ensemble with new "state-of-the-art" cells. Three new water triple point cells from three different suppliers were purchased, as well as a new maintenance bath for an additional improvement of the realization. In several measurement loops, each cell of both ensembles was intercompared and the deviations and characteristics determined. The measurements show a significantly lower average value for the old ensemble, 0.59 ± 0.25 mK (k=2), in comparison with the new one. Likewise, the behavior of the old cells is very unstable, with a downward drift during realization of the triple point. 
Based on these results, the impact of the new ensemble on the ITS-90 realization from Ar to Al was calculated and set in the context of past calibrations and their associated uncertainties. This paper presents the instrumentation, cells, measurement procedure, results, uncertainties and impact of the new national reference ensemble of water triple point cells on the current ITS-90 realization in Switzerland.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMNG33A0195L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMNG33A0195L"><span>Multi-objective optimization for generating a weighted multi-model ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lee, H.</p> <p>2017-12-01</p> <p>Many studies have demonstrated that multi-model ensembles generally show better skill than each ensemble member. When generating weighted multi-model ensembles, the first step is measuring the performance of individual model simulations using observations. When considering only one evaluation metric, there is a consensus on the assignment of weighting factors: the weighting factor for each model is proportional to a performance score or inversely proportional to an error for the model. While this conventional approach can provide appropriate combinations of multiple models, it faces a significant challenge when there are multiple metrics under consideration, since simply averaging multiple performance scores or model ranks does not address the trade-off problem between conflicting metrics. So far, there seems to be no best method to generate weighted multi-model ensembles based on multiple performance metrics. 
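The single-metric convention described in the abstract above, with weights proportional to a performance score, can be sketched in a few lines. The model values and skill scores are invented:

```python
# Skill-weighted multi-model ensemble mean versus a plain arithmetic mean.
# Weights are proportional to each model's performance score; all numbers
# here are illustrative, not from the study.
models = {"m1": 2.0, "m2": 2.4, "m3": 3.5}   # model projections (arbitrary units)
scores = {"m1": 0.9, "m2": 0.8, "m3": 0.2}   # skill scores, higher is better

total = sum(scores.values())
weights = {name: s / total for name, s in scores.items()}
weighted_mean = sum(weights[name] * models[name] for name in models)
arithmetic_mean = sum(models.values()) / len(models)
# The low-skill outlier m3 pulls the arithmetic mean more than the weighted one.
```

Multi-objective optimization generalizes this step by choosing the weights from a set of trade-off solutions over several such scores rather than from a single metric.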
The current study applies multi-objective optimization, a mathematical process that provides a set of optimal trade-off solutions across a range of evaluation metrics, to combine multiple performance metrics for global climate models and their dynamically downscaled regional climate simulations over North America, and to generate a weighted multi-model ensemble. NASA satellite data and the Regional Climate Model Evaluation System (RCMES) software toolkit are used for assessment of the climate simulations. Overall, the performance of each model differs markedly with strong seasonal dependence. Because of the considerable variability across the climate simulations, it is important to evaluate models systematically and make future projections by assigning optimized weighting factors to the models with relatively good performance. Our results indicate that the optimally weighted multi-model ensemble always shows better performance than an arithmetic ensemble mean and may provide reliable future projections.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5839517','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5839517"><span>Avoided climate impacts of urban and rural heat and cold waves over the U.S. using large climate model ensembles for RCP8.5 and RCP4.5</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Anderson, G.B.; Jones, B.; McGinnis, S.A.; Sanderson, B.</p> <p>2015-01-01</p> <p>Previous studies examining future changes in heat/cold waves using climate model ensembles have been limited to grid cell-average quantities.
Here, we make use of an urban parameterization in the Community Earth System Model (CESM) that represents the urban heat island effect, which can exacerbate extreme heat but may ameliorate extreme cold in urban relative to rural areas. Heat/cold wave characteristics are derived for U.S. regions from a bias-corrected CESM 30-member ensemble for climate outcomes driven by the RCP8.5 forcing scenario and a 15-member ensemble driven by RCP4.5. Significant differences are found between urban and grid cell-average heat/cold wave characteristics. Most notably, urban heat waves for 1981–2005 are more intense than grid cell-average by 2.1°C (southeast) to 4.6°C (southwest), while cold waves are less intense. We assess the avoided climate impacts of urban heat/cold waves in 2061–2080 when following the lower forcing scenario. Urban heat wave days per year increase from 6 in 1981–2005 to up to 92 (southeast) in RCP8.5. Following RCP4.5 reduces heat wave days by about 50%. Large avoided impacts are demonstrated for individual communities; e.g., the longest heat wave for Houston in RCP4.5 is 38 days while in RCP8.5 there is one heat wave per year that is longer than a month with some lasting the entire summer. Heat waves also start later in the season in RCP4.5 (earliest are in early May) than RCP8.5 (mid-April), compared to 1981–2005 (late May). In some communities, cold wave events decrease from 2 per year for 1981–2005 to one-in-five year events in RCP4.5 and one-in-ten year events in RCP8.5. 
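A heat-wave-day count of the kind reported above can be sketched as a run-length scan over a daily temperature series. The 35 °C threshold and three-day minimum run are hypothetical choices for illustration; the study's actual definitions may differ.

```python
def heat_wave_days(tmax, threshold=35.0, min_run=3):
    """Count days belonging to heat waves, defined here (as an assumed
    definition) as runs of at least `min_run` consecutive days whose
    daily maximum temperature exceeds `threshold`."""
    total = run = 0
    for t in list(tmax) + [float("-inf")]:  # sentinel flushes the last run
        if t > threshold:
            run += 1
        else:
            if run >= min_run:
                total += run
            run = 0
    return total
```

For example, a week with a three-day exceedance followed by a two-day exceedance contributes only the three-day run to the count.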
PMID:29520121</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...854..167G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...854..167G"><span>Shaken Snow Globes: Kinematic Tracers of the Multiphase Condensation Cascade in Massive Galaxies, Groups, and Clusters</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gaspari, M.; McDonald, M.; Hamer, S. L.; Brighenti, F.; Temi, P.; Gendron-Marsolais, M.; Hlavacek-Larrondo, J.; Edge, A. C.; Werner, N.; Tozzi, P.; Sun, M.; Stone, J. M.; Tremblay, G. R.; Hogan, M. T.; Eckert, D.; Ettori, S.; Yu, H.; Biffi, V.; Planelles, S.</p> <p>2018-02-01</p> <p>We propose a novel method to constrain turbulence and bulk motions in massive galaxies, galaxy groups, and clusters, exploring both simulations and observations. As has emerged in the recent picture of top-down multiphase condensation, hot gaseous halos are tightly linked to all other phases in terms of cospatiality and thermodynamics. While hot halos (∼10⁷ K) are perturbed by subsonic turbulence, warm (∼10⁴ K) ionized and neutral filaments condense out of the turbulent eddies. The peaks condense into cold molecular clouds (<100 K) raining in the core via chaotic cold accretion (CCA). We show that all phases are tightly linked in terms of the ensemble (wide-aperture) velocity dispersion along the line of sight. The correlation arises in complementary long-term AGN feedback simulations and high-resolution CCA runs, and is corroborated by the combined Hitomi and new Integral Field Unit measurements in the Perseus cluster. The ensemble multiphase gas distributions (from the UV to the radio band) are characterized by substantial spectral line broadening (σ_v,los ≈ 100–200 km s⁻¹) with a mild line shift.
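The ensemble (wide-aperture) line-of-sight velocity dispersion quoted above can be sketched as an emission-weighted second moment over gas parcels, with the condensation criterion as a simple ratio; the parcel velocities and weights below are hypothetical.

```python
import math

def los_dispersion(v_los, weights=None):
    """Emission-weighted line-of-sight velocity dispersion of an
    ensemble of gas parcels (a wide-aperture measurement)."""
    w = weights or [1.0] * len(v_los)
    wsum = sum(w)
    mean = sum(wi * vi for wi, vi in zip(w, v_los)) / wsum
    var = sum(wi * (vi - mean) ** 2 for wi, vi in zip(w, v_los)) / wsum
    return math.sqrt(var)

def condensation_criterion(t_cool, t_eddy):
    """C = t_cool / t_eddy; values near 1 are reported above to best
    trace the presence of multiphase gas."""
    return t_cool / t_eddy
```

A pencil-beam measurement would instead sample a few parcels, giving a smaller dispersion but possibly a large line shift, as the abstract notes.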
On the other hand, pencil-beam detections (such as H I absorption against the AGN backlight) sample the small-scale clouds, displaying smaller broadening and significant line shifts of up to several hundred km s⁻¹ (for those falling toward the AGN), with increased scatter due to the turbulence intermittency. We present new ensemble σ_v,los of the warm Hα+[N II] gas in 72 observed cluster/group cores: the constraints are consistent with the simulations and can be used as robust proxies for the turbulent velocities, in particular for the challenging hot plasma (otherwise requiring extremely long X-ray exposures). Finally, we show that the physically motivated criterion C ≡ t_cool/t_eddy ≈ 1 best traces the extent of the condensation region and the presence of multiphase gas in observed clusters and groups. The ensemble method can be applied to many available spectroscopic data sets and can substantially advance our understanding of multiphase halos in light of the next-generation multiwavelength missions.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015PhyA..419..221H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015PhyA..419..221H"><span>Variable diffusion in stock market fluctuations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hua, Jia-Chen; Chen, Lijian; Falcon, Liberty; McCauley, Joseph L.; Gunaratne, Gemunu H.</p> <p>2015-02-01</p> <p>We analyze intraday fluctuations in several stock indices to investigate the underlying stochastic processes using techniques appropriate for processes with nonstationary increments. The five most actively traded stocks each contain two time intervals during the day where the variance of increments can be fit by power law scaling in time.
The fluctuations in return within these intervals follow asymptotic bi-exponential distributions. The autocorrelation function for increments vanishes rapidly, but decays slowly for absolute and squared increments. Based on these results, we propose an intraday stochastic model with a linear variable diffusion coefficient as a lowest-order approximation to the real dynamics of financial markets, and use it to test the effects of time averaging techniques typically used for financial time series analysis. We find that our model replicates major stylized facts associated with empirical financial time series. We also find that ensemble averaging techniques can be used to identify the underlying dynamics correctly, whereas time averages fail in this task. Our work indicates that ensemble average approaches will yield new insight into the study of financial markets' dynamics, and our proposed model provides new insight into the modeling of financial market dynamics at microscopic time scales.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2007AdWR...30.1371D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2007AdWR...30.1371D"><span>Multi-model ensemble hydrologic prediction using Bayesian model averaging</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duan, Qingyun; Ajami, Newsha K.; Gao, Xiaogang; Sorooshian, Soroosh</p> <p>2007-05-01</p> <p>A multi-model ensemble strategy is a means to exploit the diversity of skillful predictions from different models. This paper studies the use of a Bayesian model averaging (BMA) scheme to develop more skillful and reliable probabilistic hydrologic predictions from multiple competing predictions made by several hydrologic models.
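The contrast between ensemble and time averages emphasized in the stock-market study above (and in the time-averaged diffusivity work in the header) can be illustrated on ordinary Brownian motion, where the two happen to agree; the trajectory generator and parameters below are illustrative, not the paper's model.

```python
import random

def trajectory(n_steps, dt=1.0, seed=None):
    """Simple Brownian path: cumulative sum of Gaussian steps."""
    rng = random.Random(seed)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += rng.gauss(0.0, dt ** 0.5)
        path.append(x)
    return path

def ensemble_msd(paths, t):
    """Ensemble-averaged mean-squared displacement <x^2(t)> over realizations."""
    return sum(p[t] ** 2 for p in paths) / len(paths)

def time_avg_msd(path, lag):
    """Time-averaged MSD of one long trajectory at a given lag."""
    n = len(path) - lag
    return sum((path[i + lag] - path[i]) ** 2 for i in range(n)) / n

paths = [trajectory(1000, seed=s) for s in range(200)]
```

For anomalous, nonstationary processes of the kind discussed above, the two estimators would instead diverge, which is the point of the ensemble-based analysis.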
BMA is a statistical procedure that infers consensus predictions by weighting individual predictions based on their probabilistic likelihood measures, with the better performing predictions receiving higher weights than the worse performing ones. Furthermore, BMA provides a more reliable description of the total predictive uncertainty than the original ensemble, leading to a sharper and better calibrated probability density function (PDF) for the probabilistic predictions. In this study, a nine-member ensemble of hydrologic predictions was used to test and evaluate the BMA scheme. This ensemble was generated by calibrating three different hydrologic models using three distinct objective functions. These objective functions were chosen in a way that forces the models to capture certain aspects of the hydrograph well (e.g., peaks, mid-flows and low flows). Two sets of numerical experiments were carried out on three test basins in the US to explore the best way of using the BMA scheme. In the first set, a single set of BMA weights was computed to obtain BMA predictions, while the second set employed multiple sets of weights, with distinct sets corresponding to different flow intervals. In both sets, the streamflow values were transformed using a Box-Cox transformation to ensure that the probability distribution of the prediction errors is approximately Gaussian. A split sample approach was used to obtain and validate the BMA predictions. The test results showed that the BMA scheme has the advantage of generating more skillful and equally reliable probabilistic predictions than the original ensemble. The performance of the expected BMA predictions in terms of daily root mean square error (DRMS) and daily absolute mean error (DABS) is generally superior to that of the best individual predictions.
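The BMA combination described above amounts to a weighted mixture of each member's conditional PDF. A minimal sketch, assuming Gaussian member densities (the hydrologic study uses Box-Cox-transformed flows, so Gaussian components are a simplification):

```python
import math

def bma_pdf(y, means, sigmas, weights):
    """BMA predictive density: a weighted mixture of each member's
    conditional PDF, here assumed Gaussian for simplicity."""
    def gauss(y, m, s):
        return math.exp(-0.5 * ((y - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return sum(w * gauss(y, m, s) for w, m, s in zip(weights, means, sigmas))

def bma_mean(means, weights):
    """Expected BMA prediction: the weight-averaged member means."""
    return sum(w * m for w, m in zip(weights, means))
```

In practice the weights and component spreads are fit by maximum likelihood (typically via EM) on a training period; that fitting step is omitted here.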
Furthermore, the BMA predictions employing multiple sets of weights are generally better than those using a single set of weights.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.er.usgs.gov/publication/70159444','USGSPUBS'); return false;" href="https://pubs.er.usgs.gov/publication/70159444"><span>Validation of a spatial model used to locate fish spawning reef construction sites in the St. Clair–Detroit River system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Fischer, Jason L.; Bennion, David; Roseman, Edward F.; Manny, Bruce A.</p> <p>2015-01-01</p> <p>Lake sturgeon (Acipenser fulvescens) populations have suffered precipitous declines in the St. Clair–Detroit River system, following the removal of gravel spawning substrates and overfishing in the late 1800s to mid-1900s. To assist the remediation of lake sturgeon spawning habitat, three hydrodynamic models were integrated into a spatial model to identify areas in two large rivers where water velocities were appropriate for the restoration of lake sturgeon spawning habitat. Here we use water velocity data collected with an acoustic Doppler current profiler (ADCP) to assess the ability of the spatial model and its sub-models to correctly identify areas where water velocities were deemed suitable for restoration of fish spawning habitat. ArcMap 10.1 was used to create raster grids of water velocity data from model estimates and ADCP measurements, which were compared to determine the percentage of cells similarly classified as unsuitable, suitable, or ideal for fish spawning habitat remediation. The spatial model categorized 65% of the raster cells the same as depth-averaged water velocity measurements from the ADCP and 72% of the raster cells the same as surface water velocity measurements from the ADCP.
Sub-models focused on depth-averaged velocities categorized the greatest percentage of cells similarly to the ADCP measurements, with 74% and 76% of cells matching the depth-averaged water velocity measurements. Our results indicate that integrating depth-averaged and surface water velocity hydrodynamic models may have biased the spatial model and overestimated suitable spawning habitat. A model solely integrating depth-averaged velocity models could improve identification of areas suitable for restoration of fish spawning habitat.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://pubs.usgs.gov/of/2016/1208/ofr20161208.pdf','USGSPUBS'); return false;" href="https://pubs.usgs.gov/of/2016/1208/ofr20161208.pdf"><span>Seismic velocity site characterization of 10 Arizona strong-motion recording stations by spectral analysis of surface wave dispersion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://pubs.er.usgs.gov/pubs/index.jsp?view=adv">USGS Publications Warehouse</a></p> <p>Kayen, Robert E.; Carkin, Brad A.; Corbett, Skye C.</p> <p>2017-10-19</p> <p>Vertical one-dimensional shear wave velocity (VS) profiles are presented for strong-motion sites in Arizona for a suite of stations surrounding the Palo Verde Nuclear Generating Station. The purpose of the study is to determine the detailed site velocity profile, the average velocity in the upper 30 meters of the profile (VS30), the average velocity for the entire profile (VSZ), and the National Earthquake Hazards Reduction Program (NEHRP) site classification. The VS profiles are estimated using a non-invasive continuous-sine-wave method for gathering the dispersion characteristics of surface waves.
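The VS30 quantity used above is conventionally a travel-time average, not an arithmetic mean of layer velocities: VS30 = 30 / Σ(hᵢ/vᵢ) over the top 30 m. A minimal sketch for a layered profile (the layer values in the test are hypothetical):

```python
def vs30(thicknesses, velocities):
    """Time-averaged shear-wave velocity over the top 30 m of a layered
    profile: VS30 = 30 / sum(h_i / v_i), truncating at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for h, v in zip(thicknesses, velocities):
        use = min(h, 30.0 - depth)  # clip the layer at 30 m depth
        if use <= 0:
            break
        travel_time += use / v
        depth += use
    if depth < 30.0:  # extend the deepest layer if the profile is shallow
        travel_time += (30.0 - depth) / velocities[-1]
    return 30.0 / travel_time
```

Because slow near-surface layers dominate the travel time, VS30 is pulled toward the slowest layers, which is why it is the standard NEHRP site-class metric.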
Shear wave velocity profiles were inverted from the averaged dispersion curves using three independent methods for comparison, and the root-mean-square combined coefficient of variation (COV) of the dispersion and inversion calculations is estimated for each site.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016EGUGA..1813618O','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016EGUGA..1813618O"><span>Total probabilities of ensemble runoff forecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Olav Skøien, Jon; Bogner, Konrad; Salamon, Peter; Smith, Paul; Pappenberger, Florian</p> <p>2016-04-01</p> <p>Ensemble forecasting has for a long time been used as a method in meteorological modelling to indicate the uncertainty of the forecasts. However, as the ensembles often exhibit both bias and dispersion errors, it is necessary to calibrate and post-process them. Two of the most common methods for this are Bayesian Model Averaging (Raftery et al., 2005) and Ensemble Model Output Statistics (EMOS) (Gneiting et al., 2005). There are also methods for regionalizing these methods (Berrocal et al., 2007) and for incorporating the correlation between lead times (Hemri et al., 2013). Engeland and Steinsland (2014) developed a framework that can estimate post-processing parameters that differ in space and time but still give a spatially and temporally consistent output. However, their method is computationally complex for our large number of stations, and cannot directly be regionalized in the way we would like, so we suggest a different path below.
The target of our work is to create a mean forecast with uncertainty bounds for a large number of locations in the framework of the European Flood Awareness System (EFAS - http://www.efas.eu). We are therefore more interested in improving the forecast skill for high flows than for lower runoff levels. EFAS uses a combination of ensemble forecasts and deterministic forecasts from different forecasters to force a distributed hydrologic model and to compute runoff ensembles for each river pixel within the model domain. Instead of showing the mean and the variability of each forecast ensemble individually, we will now post-process all model outputs to find a total probability, the post-processed mean and uncertainty of all ensembles. The post-processing parameters are first calibrated for each calibration location, while assuring that they have some spatial correlation by adding a spatial penalty to the calibration process. This can in some cases have a slight negative impact on the calibration error, but makes it easier to interpolate the post-processing parameters to uncalibrated locations. We also look into different methods for handling the non-normal distributions of runoff data and the effect of different data transformations on forecast skill in general and for floods in particular. Berrocal, V. J., Raftery, A. E. and Gneiting, T.: Combining Spatial Statistical and Ensemble Information in Probabilistic Weather Forecasts, Mon. Weather Rev., 135(4), 1386-1402, doi:10.1175/MWR3341.1, 2007. Engeland, K. and Steinsland, I.: Probabilistic postprocessing models for flow forecasts for a system of catchments and several lead times, Water Resour. Res., 50(1), 182-197, doi:10.1002/2012WR012757, 2014. Gneiting, T., Raftery, A. E., Westveld, A. H. and Goldman, T.: Calibrated Probabilistic Forecasting Using Ensemble Model Output Statistics and Minimum CRPS Estimation, Mon. Weather Rev., 133(5), 1098-1118, doi:10.1175/MWR2904.1, 2005.
Hemri, S., Fundel, F. and Zappa, M.: Simultaneous calibration of ensemble river flow predictions over an entire range of lead times, Water Resour. Res., 49(10), 6744-6755, doi:10.1002/wrcr.20542, 2013. Raftery, A. E., Gneiting, T., Balabdaoui, F. and Polakowski, M.: Using Bayesian Model Averaging to Calibrate Forecast Ensembles, Mon. Weather Rev., 133(5), 1155-1174, doi:10.1175/MWR2906.1, 2005.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..DFDQ35001W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..DFDQ35001W"><span>Upscaling the Navier-Stokes Equation for Turbulent Flows in Porous Media Using a Volume Averaging Method</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wood, Brian; He, Xiaoliang; Apte, Sourabh</p> <p>2017-11-01</p> <p>Turbulent flows through porous media are encountered in a number of natural and engineered systems. Many attempts to close the Navier-Stokes equation for this type of flow have been made, for example using RANS models and double averaging. On the other hand, Whitaker (1996) applied the volume averaging theorem to close the macroscopic N-S equation for low Re flow. In this work, the volume averaging theory is extended into the turbulent flow regime to posit a relationship between the macroscale velocities and the spatial velocity statistics in terms of the spatially averaged velocity only. Rather than developing a Reynolds stress model, we propose a simple algebraic closure, consistent with generalized effective viscosity models (Pope 1975), to represent the spatially fluctuating velocity and pressure, respectively. The coefficients (one 1st order, two 2nd order and one 3rd order tensor) of the linear functions depend on the averaged velocity and its gradient.
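The EMOS post-processing cited in the runoff-forecast record above (Gneiting et al., 2005) fits a Gaussian predictive distribution y ~ N(a + b·m, c + d·s²), where m and s² are the ensemble mean and variance. The sketch below fits the location parameters by least squares and crudely scales the spread from residuals; the published method instead minimizes the CRPS, so this is only a simplified stand-in.

```python
def emos_params(ens_means, ens_vars, obs):
    """Toy EMOS-style fit: regress observations on the ensemble mean
    for (a, b), then scale the ensemble variance by the residual ratio
    for (c, d). Assumes the model y ~ N(a + b*mean, c + d*var)."""
    n = len(obs)
    mx = sum(ens_means) / n
    my = sum(obs) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(ens_means, obs)) / n
    var = sum((x - mx) ** 2 for x in ens_means) / n
    b = cov / var
    a = my - b * mx
    resid2 = [(y - (a + b * x)) ** 2 for x, y in zip(ens_means, obs)]
    d = sum(resid2) / max(sum(ens_vars), 1e-12)  # crude spread scaling
    return a, b, 0.0, d
```

Regionalizing such parameters with a spatial penalty, as proposed above, would add a smoothness term to whatever objective replaces this least-squares fit.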
With the data set from DNS, performed with inertial and turbulent flows (pore Re of 300, 500 and 1000) through a periodic face centered cubic (FCC) unit cell, all the unknown coefficients can be computed and the closure is complete. The macroscopic quantity calculated from the averaging is then compared with DNS data to verify the upscaling. NSF Project Numbers 1336983, 1133363.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JCoPh.324..115X','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JCoPh.324..115X"><span>Quantifying and reducing model-form uncertainties in Reynolds-averaged Navier-Stokes simulations: A data-driven, physics-informed Bayesian approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xiao, H.; Wu, J.-L.; Wang, J.-X.; Sun, R.; Roy, C. J.</p> <p>2016-11-01</p> <p>Despite their well-known limitations, Reynolds-Averaged Navier-Stokes (RANS) models are still the workhorse tools for turbulent flow simulations in today's engineering analysis, design and optimization. While the predictive capability of RANS models depends on many factors, for many practical flows the turbulence models are by far the largest source of uncertainty. As RANS models are used in the design and safety evaluation of many mission-critical systems such as airplanes and nuclear power plants, quantifying their model-form uncertainties has significant implications in enabling risk-informed decision-making. In this work we develop a data-driven, physics-informed Bayesian framework for quantifying model-form uncertainties in RANS simulations. Uncertainties are introduced directly to the Reynolds stresses and are represented with compact parameterization accounting for empirical prior knowledge and physical constraints (e.g., realizability, smoothness, and symmetry). 
An iterative ensemble Kalman method is used to assimilate the prior knowledge and observation data in a Bayesian framework, and to propagate them to posterior distributions of velocities and other Quantities of Interest (QoIs). We use two representative cases, the flow over periodic hills and the flow in a square duct, to evaluate the performance of the proposed framework. Both cases are challenging for standard RANS turbulence models. Simulation results suggest that, even with very sparse observations, the obtained posterior mean velocities and other QoIs have significantly better agreement with the benchmark data compared to the baseline results. At most locations the posterior distribution adequately captures the true model error within the developed model form uncertainty bounds. The framework is a major improvement over existing black-box, physics-neutral methods for model-form uncertainty quantification, where prior knowledge and details of the models are not exploited. This approach has potential implications in many fields in which the governing equations are well understood but the model uncertainty comes from unresolved physical processes.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20100039445','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20100039445"><span>Stretch Reflex as a Simple Measure to Evaluate the Efficacy of Potential Flight Countermeasures Using the Bed Rest Environment</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Cerisano, J. M.; Reschke, M. F.; Kofman, I. S.; Fisher, E. A.; Harm, D. L.</p> <p>2010-01-01</p> <p>INTRODUCTION: Spaceflight is acknowledged to have significant effects on the major postural muscles. 
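The ensemble Kalman assimilation step used above can be sketched in its stochastic scalar form: each member is nudged toward a perturbed observation by a gain estimated from ensemble covariances. This is a generic EnKF analysis step under stated simplifications (scalar state, scalar observation), not the paper's iterative, field-valued implementation; the prior ensemble and observation values are hypothetical.

```python
import random

def enkf_update(ensemble, h, y_obs, obs_var, seed=0):
    """One stochastic ensemble Kalman analysis step for a scalar state:
    x_a = x + K * (y + eps - h(x)), with K from ensemble covariances."""
    rng = random.Random(seed)
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    x_mean = sum(ensemble) / n
    hx_mean = sum(hx) / n
    cov_xh = sum((x - x_mean) * (z - hx_mean)
                 for x, z in zip(ensemble, hx)) / (n - 1)
    var_h = sum((z - hx_mean) ** 2 for z in hx) / (n - 1)
    gain = cov_xh / (var_h + obs_var)
    return [x + gain * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - z)
            for x, z in zip(ensemble, hx)]

# Toy prior ensemble far from the observation; identity observation operator.
prior = [0.1 * i - 5.0 for i in range(100)]
posterior = enkf_update(prior, lambda x: x, y_obs=5.0, obs_var=0.01)
```

With a precise observation, the posterior ensemble collapses toward the observed value, which is the mechanism by which sparse data sharpen the Reynolds-stress posterior in the study above.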
However, it has been difficult to separate the effects of ascending somatosensory changes caused by the unloading of these muscles during flight from changes in sensorimotor function caused by a descending vestibulo-cerebellar response to microgravity. It is hypothesized that bed rest is an adequate model to investigate postural muscle unloading given that spaceflight and bed rest may produce similar results in both nerve axon and muscle tissue. METHODS: To investigate this hypothesis, stretch reflexes were measured in 18 subjects who spent 60 to 90 days in continuous 6° head-down bed rest. Using a motorized system capable of rotating the foot around the ankle joint (dorsiflexion) through an angle of 10 deg at a peak velocity of approximately 250 deg/sec, a stretch reflex was recorded from the subject's left triceps surae muscle group. Using surface electromyography, about 300 reflex responses were obtained and ensemble-averaged on 3 separate days before bed rest, 3 to 4 times in bed, and 3 times after bed rest. The averaged responses for each test day were examined for reflex latency and conduction velocity (CV) across gender and compared with spaceflight data. RESULTS: Although no gender differences were found, bed rest induced changes in reflex latency and CV similar to the ones observed during spaceflight. Also, a relationship between CV and loss of muscle strength in the lower leg was observed for most bed rest subjects.
CONCLUSION: Even though bed rest (limb unloading) alone may not mimic all of the synaptic and muscle tissue loss that is observed as a result of spaceflight, it can serve as a working analog of flight for the evaluation of potential countermeasures that may be beneficial in mitigating unwanted changes in the major postural muscles that are observed post flight.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA622702','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA622702"><span>Adaptive Radar Data Quality Control and Ensemble-Based Assimilation for Analyzing and Forecasting High-Impact Weather</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>2014-05-22</p> <p>velocity prognostic equation was derived for the 3.5Var in a concise and accurate form by considering atmospheric refraction and earth curvature (Xu and... atmospheric refraction and earth curvature. J. Atmos. Sci., 70, 3328-3338. [published, refereed].</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA150355','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA150355"><span>Size and Velocity Distributions of Particles and Droplets in Spray Combustion Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1984-11-01</p> <p>constructed, calibrated, and successfully applied. Our efforts to verify the performance and accuracy of this diagnostic led to a parallel research...array will not be an acceptable detection system for size distribution measurements by this method. VI.
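The ensemble averaging of roughly 300 stimulus-aligned reflex responses described in the stretch-reflex study above can be sketched as a sample-by-sample mean across trials; the tiny traces in the test are hypothetical.

```python
def ensemble_average(trials):
    """Average stimulus-aligned traces sample-by-sample. With N trials,
    uncorrelated noise is suppressed by roughly a factor of sqrt(N),
    while the repeatable reflex waveform is preserved."""
    n_trials = len(trials)
    n_samples = len(trials[0])
    return [sum(trial[i] for trial in trials) / n_trials
            for i in range(n_samples)]
```

Latency and conduction velocity can then be read from the averaged waveform far more reliably than from any single noisy sweep.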
Conclusions This study has led to the following...radiation is also useful for particle size analysis by ensemble multiangle scattering. One problem for all multiwavelength or multiparticle diagnostics for</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://www.dtic.mil/docs/citations/ADA205462','DTIC-ST'); return false;" href="http://www.dtic.mil/docs/citations/ADA205462"><span>An Experimental Study of the Effect of Streamwise Vortices on Unsteady Turbulent Boundary-Layer Separation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.dtic.mil/">DTIC Science & Technology</a></p> <p></p> <p>1988-12-09</p> <p>Measurement of Second Order Statistics; Measurement of Triple Products; Uncertainty Analysis...deterministic fluctuations were 25 times larger than the mean fluctuations, there were no significant variations in the mean statistical ...input signals, the three velocity components are calculated, and individual phase ensembles are collected for the appropriate statistics</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25615196','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25615196"><span>Anomalous scaling of passive scalar fields advected by the Navier-Stokes velocity ensemble: effects of strong compressibility and large-scale anisotropy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Antonov, N V; Kostenko, M M</p> <p>2014-12-01</p> <p>The field theoretic renormalization group and the operator product expansion are applied to two models of passive scalar quantities (the density and the tracer fields) advected by a random turbulent velocity field.
The latter is governed by the Navier-Stokes equation for a compressible fluid, subject to external random force with the covariance ∝ δ(t−t′) k^(4−d−y), where d is the dimension of space and y is an arbitrary exponent. The original stochastic problems are reformulated as multiplicatively renormalizable field theoretic models; the corresponding renormalization group equations possess infrared attractive fixed points. It is shown that various correlation functions of the scalar field, its powers and gradients, demonstrate anomalous scaling behavior in the inertial-convective range already for small values of y. The corresponding anomalous exponents, identified with scaling (critical) dimensions of certain composite fields ("operators" in the quantum-field terminology), can be systematically calculated as series in y. The practical calculation is performed in the leading one-loop approximation, including exponents in anisotropic contributions. It should be emphasized that, in contrast to Gaussian ensembles with finite correlation time, the model and the perturbation theory presented here are manifestly Galilean covariant.
The validity of the one-loop approximation and comparison with Gaussian models are briefly discussed.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_22 --> <div id="page_23" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="441"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22253461-complex-quantum-hamilton-jacobi-equation-bohmian-trajectories-application-photodissociation-dynamics-nocl','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22253461-complex-quantum-hamilton-jacobi-equation-bohmian-trajectories-application-photodissociation-dynamics-nocl"><span>Complex quantum Hamilton-Jacobi equation with Bohmian trajectories: Application to the photodissociation dynamics of NOCl</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and
Technical Information (OSTI.GOV)</a></p> <p>Chou, Chia-Chun, E-mail: ccchou@mx.nthu.edu.tw</p> <p>2014-03-14</p> <p>The complex quantum Hamilton-Jacobi equation-Bohmian trajectories (CQHJE-BT) method is introduced as a synthetic trajectory method for integrating the complex quantum Hamilton-Jacobi equation for the complex action function by propagating an ensemble of real-valued correlated Bohmian trajectories. Substituting the wave function expressed in exponential form in terms of the complex action into the time-dependent Schrödinger equation yields the complex quantum Hamilton-Jacobi equation. We transform this equation into the arbitrary Lagrangian-Eulerian version with the grid velocity matching the flow velocity of the probability fluid. The resulting equation describing the rate of change in the complex action transported along Bohmian trajectories is simultaneously integrated with the guidance equation for Bohmian trajectories, and the time-dependent wave function is readily synthesized. The spatial derivatives of the complex action required for the integration scheme are obtained by solving one moving least squares matrix equation. In addition, the method is applied to the photodissociation of NOCl. The photodissociation dynamics of NOCl can be accurately described by propagating a small ensemble of trajectories. 
This study demonstrates that the CQHJE-BT method combines the considerable advantages of both the real and the complex quantum trajectory methods previously developed for wave packet dynamics.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ThApC.132.1057Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ThApC.132.1057Y"><span>Multi-criterion model ensemble of CMIP5 surface air temperature over China</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming</p> <p>2018-05-01</p> <p>Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes, and thereby supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCMs' configurations, module characteristics, and dynamic forcings vary from one to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R), and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, satisfying all of these statistics simultaneously cannot be guaranteed with many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. 
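As a rough illustration of the evaluation criteria named above, here is a minimal sketch of RMSE, correlation, and a weighted ensemble mean. The function names and the toy data are illustrative, not taken from the paper:

```python
from math import sqrt

def rmse(sim, obs):
    """Root-mean-square error between a simulated and an observed series."""
    return sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))

def corr(sim, obs):
    """Pearson correlation coefficient (the 'CC' criterion)."""
    n = len(obs)
    ms, mo = sum(sim) / n, sum(obs) / n
    cov = sum((s - ms) * (o - mo) for s, o in zip(sim, obs))
    vs = sum((s - ms) ** 2 for s in sim)
    vo = sum((o - mo) ** 2 for o in obs)
    return cov / sqrt(vs * vo)

def ensemble_mean(members, weights):
    """Weighted multi-model ensemble of equally long member series."""
    total = sum(weights)
    return [sum(w * m[i] for w, m in zip(weights, members)) / total
            for i in range(len(members[0]))]

# Two hypothetical GCM output series scored against observations:
obs = [1.0, 2.0, 3.0, 4.0]
members = [[0.8, 2.2, 2.9, 4.3], [1.4, 1.8, 3.3, 3.7]]
ens = ensemble_mean(members, [0.5, 0.5])
```

A multi-objective optimizer such as the MOSPD algorithm mentioned above would then search the weight space for Pareto-optimal trade-offs between these statistics.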
A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to different statistics. The comparison over the historical period (1900-2005) shows that the optimized solutions are superior to those obtained from a simple model average, as well as to any single GCM output. The statistical improvements vary across climatic regions of China. Future projections (2006-2100) with the proposed ensemble method identify that the largest (smallest) temperature changes will happen in the South Central China (the Inner Mongolia), the North Eastern China (the South Central China), and the North Western China (the South Central China), under RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3965471','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3965471"><span>NIMEFI: Gene Regulatory Network Inference using Multiple Ensemble Feature Importance Algorithms</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ruyssinck, Joeri; Huynh-Thu, Vân Anh; Geurts, Pierre; Dhaene, Tom; Demeester, Piet; Saeys, Yvan</p> <p>2014-01-01</p> <p>One of the long-standing open challenges in computational systems biology is the topology inference of gene regulatory networks from high-throughput omics data. 
Recently, two community-wide efforts, DREAM4 and DREAM5, have been established to benchmark network inference techniques using gene expression measurements. In these challenges the overall top performer was the GENIE3 algorithm. This method decomposes the network inference task into separate regression problems for each gene in the network, in which the expression values of a particular target gene are predicted using all other genes as possible predictors. Next, using tree-based ensemble methods, an importance measure for each predictor gene is calculated with respect to the target gene, and a high feature importance is considered putative evidence of a regulatory link between the two genes. The contribution of this work is twofold. First, we generalize the regression decomposition strategy of GENIE3 to other feature importance methods. We compare the performance of support vector regression, the elastic net, random forest regression, symbolic regression and their ensemble variants in this setting to the original GENIE3 algorithm. To create the ensemble variants, we propose a subsampling approach which allows us to cast any feature selection algorithm that produces a feature ranking into an ensemble feature importance algorithm. We demonstrate that the ensemble setting is key to the network inference task, as only ensemble variants achieve top performance. As a second contribution, we explore the effect of using rankwise averaged predictions of multiple ensemble algorithms as opposed to only one. We name this approach NIMEFI (Network Inference using Multiple Ensemble Feature Importance algorithms) and show that this approach outperforms all individual methods in general, although on a specific network a single method can perform better. An implementation of NIMEFI has been made publicly available. 
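The rank-wise averaging described above can be sketched in a few lines; the scores and function names below are hypothetical illustrations of the idea, not NIMEFI's actual implementation:

```python
def rank_scores(importances):
    """Convert raw importance scores into ranks (1 = most important)."""
    order = sorted(range(len(importances)), key=lambda i: -importances[i])
    ranks = [0] * len(importances)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def rankwise_average(importance_lists):
    """Aggregate several algorithms' importance scores by averaging ranks,
    so that differently scaled scores become comparable."""
    all_ranks = [rank_scores(imp) for imp in importance_lists]
    n = len(all_ranks)
    return [sum(rs[i] for rs in all_ranks) / n for i in range(len(all_ranks[0]))]

# Two hypothetical algorithms scoring three candidate regulators of one gene;
# a lower combined rank suggests a stronger putative regulatory link.
combined = rankwise_average([[0.9, 0.3, 0.1], [0.7, 0.8, 0.2]])
```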
PMID:24667482</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20050175886','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20050175886"><span>Experimental Investigation of the Differences Between Reynolds-Averaged and Favre-Averaged Velocity in Supersonic Jets</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Panda, J.; Seasholtz, R. G.</p> <p>2005-01-01</p> <p>Recent advances in the molecular Rayleigh scattering based technique allowed for simultaneous measurement of velocity and density fluctuations with high sampling rates. The technique was used to investigate unheated high subsonic and supersonic fully expanded free jets in the Mach number range of 0.8 to 1.8. The difference between the Favre averaged and Reynolds averaged axial velocity and axial component of the turbulent kinetic energy is found to be small. Estimates based on Morkovin's "Strong Reynolds Analogy" were found to provide lower values of turbulent density fluctuations than the measured data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2005APS..DPPRP1096E','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2005APS..DPPRP1096E"><span>Equilibrium statistical mechanics of self-consistent wave-particle system</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Elskens, Yves</p> <p>2005-10-01</p> <p>The equilibrium distribution of N particles and M waves (e.g. Langmuir) is analysed in the weak-coupling limit for the self-consistent Hamiltonian model H = ∑_r p_r^2/(2m) + ∑_j φ_j I_j + ε ∑_{r,j} (β_j/k_j) √(2I_j) cos(k_j x_r − θ_j) [1]. 
In the canonical ensemble, with temperature T and reservoir velocity v < φ_j/k_j, the wave intensities are almost independent and exponentially distributed, with expectation ⟨I_j⟩ = k_B T/(φ_j − k_j v). These equilibrium predictions are in agreement with Monte Carlo samplings [2] and with direct simulations of the dynamics, indicating equivalence between canonical and microcanonical ensembles. [1] Y. Elskens and D.F. Escande, Microscopic dynamics of plasmas and chaos (IoP publishing, Bristol, 2003). [2] M-C. Firpo and F. Leyvraz, 30th EPS conf. contr. fusion and plasma phys., P-2.8 (2003).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017APS..MARL52010S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017APS..MARL52010S"><span>Relaxation in a two-body Fermi-Pasta-Ulam system in the canonical ensemble</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Sen, Surajit; Barrett, Tyler</p> <p></p> <p>The study of the dynamics of the Fermi-Pasta-Ulam (FPU) chain remains a challenging problem. Inspired by the recent work of Onorato et al. on thermalization in the FPU system, we report a study of relaxation processes in a two-body FPU system in the canonical ensemble. The studies have been carried out using the Recurrence Relations Method introduced by Zwanzig, Mori, Lee and others. We have obtained exact analytical expressions for the first thirteen levels of the continued fraction representation of the Laplace transformed velocity autocorrelation function of the system. Using simple and reasonable extrapolation schemes and known limits we are able to estimate the relaxation behavior of the oscillators in the two-body FPU system and recover the expected behavior in the harmonic limit. 
Generalizations of the calculations to larger systems will be discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018PhRvE..97e3310C','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018PhRvE..97e3310C"><span>Generalized Green's function molecular dynamics for canonical ensemble simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Coluci, V. R.; Dantas, S. O.; Tewary, V. K.</p> <p>2018-05-01</p> <p>The need for small integration time steps (∼1 fs) in conventional molecular dynamics simulations is an important issue that inhibits the study of physical, chemical, and biological systems over realistic timescales. Additionally, to simulate those systems in contact with a thermal bath, thermostating techniques are usually applied. In this work, we generalize the Green's function molecular dynamics technique to allow simulations within the canonical ensemble. By applying this technique to one-dimensional systems, we were able to correctly describe important thermodynamic properties such as the temperature fluctuations, the temperature distribution, and the velocity autocorrelation function. We show that the proposed technique also allows the use of time steps one order of magnitude larger than those typically used in conventional molecular dynamics simulations. 
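The velocity autocorrelation function mentioned above has a standard time-origin-averaged estimator. A minimal one-dimensional sketch (the function name and normalization choice are illustrative, not from the paper):

```python
def vacf(v, max_lag):
    """Estimate the velocity autocorrelation <v(t) v(t+lag)> from a single
    velocity trace by averaging the product over all available time origins."""
    n = len(v)
    out = []
    for lag in range(max_lag + 1):
        out.append(sum(v[t] * v[t + lag] for t in range(n - lag)) / (n - lag))
    return out
```

In an MD analysis the same estimator is typically applied per particle and per Cartesian component, then averaged over the ensemble.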
We expect that this technique can be used in long-timescale molecular dynamics simulations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/19750023590','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/19750023590"><span>An investigation of turbulent transport in the extreme lower atmosphere</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Koper, C. A., Jr.; Sadeh, W. Z.</p> <p>1975-01-01</p> <p>A model is proposed in which the Lagrangian autocorrelation is expressed as a domain integral over a set of ordinary Eulerian autocorrelations acquired concurrently at all points within a turbulence box, along with a method for ascertaining the statistical stationarity of the turbulent velocity by creating an equivalent ensemble, in order to investigate flow in the extreme lower atmosphere. Simultaneous measurements of turbulent velocity on a turbulence line along the wake axis were carried out using a remotely operated longitudinal array of five hot-wire anemometers. The stationarity test revealed that the turbulent velocity can be approximated as a realization of a weakly self-stationary random process. 
Based on the Lagrangian autocorrelation it is found that: (1) large diffusion time predominated; (2) ratios of Lagrangian to Eulerian time and spatial scales were smaller than unity; and, (3) short and long diffusion time scales and diffusion spatial scales were constrained within their Eulerian counterparts.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20040121069','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20040121069"><span>Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.</p> <p>1990-01-01</p> <p>The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the calculation of the velocity:data rate correlation coefficient as a measure of statistical dependence and thus the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Full classic correction techniques over-compensated the data since the dependence was weak, even in the boundary layer and shear regions. The paper emphasizes the necessity to determine the degree of particle sampling bias for each measurement ensemble and not use generalized assumptions to correct the data. 
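The velocity:data rate correlation coefficient used above is, in essence, a Pearson correlation between each velocity sample and the local seed-particle data rate. A minimal sketch, under the assumption (ours, for illustration) that the local data rate is taken as the inverse inter-arrival time between successive particle measurements:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equally long samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def velocity_data_rate_correlation(velocities, arrival_times):
    """Correlate each velocity sample with the local data rate, taken here
    as the inverse inter-arrival time; a value near zero suggests negligible
    particle sampling bias, while larger magnitudes signal velocity bias."""
    rates = [1.0 / (t2 - t1) for t1, t2 in zip(arrival_times, arrival_times[1:])]
    return pearson(velocities[1:], rates)
```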
Further, it recommends the calculation of the velocity:data rate correlation coefficient become a standard statistical calculation in the analysis of all laser velocimeter data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018NJPh...20d2001P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018NJPh...20d2001P"><span>A Zeeman slower for diatomic molecules</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Petzold, M.; Kaebert, P.; Gersema, P.; Siercke, M.; Ospelkaus, S.</p> <p>2018-04-01</p> <p>We present a novel slowing scheme for beams of laser-coolable diatomic molecules reminiscent of Zeeman slowing of atomic beams. The scheme results in efficient compression of the one-dimensional velocity distribution to velocities trappable by magnetic or magneto-optical traps. We experimentally demonstrate our method in an atomic testbed and show an enhancement of flux below v = 35 m s‑1 by a factor of ≈20 compared to white light slowing. 3D Monte Carlo simulations performed to model the experiment show excellent agreement. We apply the same simulations to the prototype molecule 88Sr19F and expect 15% of the initial flux to be continuously compressed in a narrow velocity window at around 10 m s‑1. 
This is the first experimentally shown continuous and dissipative slowing technique in molecule-like level structures, promising to provide the missing link for the preparation of large ultracold molecular ensembles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/pages/biblio/1236477-quantitative-validation-carbon-fiber-laminate-low-velocity-impact-simulations','SCIGOV-DOEP'); return false;" href="https://www.osti.gov/pages/biblio/1236477-quantitative-validation-carbon-fiber-laminate-low-velocity-impact-simulations"><span>Quantitative validation of carbon-fiber laminate low velocity impact simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/pages">DOE PAGES</a></p> <p>English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.</p> <p>2015-09-26</p> <p>Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed and described in conjunction. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. 
The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/936447','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/936447"><span>Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ajami, N K; Duan, Q; Gao, X</p> <p>2005-04-11</p> <p>This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. 
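The two simplest combination techniques named above, SMA and WAM, reduce to a few lines. A sketch with illustrative names (the weighting scheme is an assumption; in practice WAM weights are fitted against observations):

```python
def simple_multimodel_average(predictions):
    """SMA: unweighted mean across member model predictions at each time step."""
    n = len(predictions)
    return [sum(p[i] for p in predictions) / n for i in range(len(predictions[0]))]

def weighted_average(predictions, weights):
    """WAM: weighted combination of member predictions; weights are assumed
    to sum to one (e.g. obtained by regression against observed streamflow)."""
    return [sum(w * p[i] for w, p in zip(weights, predictions))
            for i in range(len(predictions[0]))]
```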
Furthermore, more sophisticated multi-model combination techniques that incorporate bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/26844300','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/26844300"><span>Metainference: A Bayesian inference method for heterogeneous systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bonomi, Massimiliano; Camilloni, Carlo; Cavalli, Andrea; Vendruscolo, Michele</p> <p>2016-01-01</p> <p>Modeling a complex system is almost invariably a challenging task. The incorporation of experimental observations can be used to improve the quality of a model and thus to obtain better predictions about the behavior of the corresponding system. This approach, however, is affected by a variety of different errors, especially when a system simultaneously populates an ensemble of different states and experimental data are measured as averages over such states. To address this problem, we present a Bayesian inference method, called "metainference," that is able to deal with errors in experimental measurements and with experimental measurements averaged over multiple states. To achieve this goal, metainference models a finite sample of the distribution of models using a replica approach, in the spirit of the replica-averaging modeling based on the maximum entropy principle. To illustrate the method, we present its application to a heterogeneous model system and to the determination of an ensemble of structures corresponding to the thermal fluctuations of a protein molecule. 
Metainference thus provides an approach to modeling complex systems with heterogeneous components that interconvert between different states, by taking into account all possible sources of error.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EGUGA..1918455D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EGUGA..1918455D"><span>Synchronized Trajectories in a Climate "Supermodel"</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duane, Gregory; Schevenhoven, Francine; Selten, Frank</p> <p>2017-04-01</p> <p>Differences in climate projections among state-of-the-art models can be resolved by connecting the models at run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seem appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest, like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. 
In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930022704&hterms=Antarctic+icebergs&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DAntarctic%2Bicebergs','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930022704&hterms=Antarctic+icebergs&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D70%26Ntt%3DAntarctic%2Bicebergs"><span>Recent acceleration of Thwaites Glacier</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Ferrigno, J. G.</p> <p>1993-01-01</p> <p>The first velocity measurements for Thwaites Glacier were made by R. J. Allen in 1977. He compared features of Thwaites Glacier and Iceberg Tongue on aerial photography from 1947 and 1967 with 1972 Landsat images, and measured average annual displacements of 3.7 and 2.3 km/a. Using his photogrammetric experience and taking into consideration the lack of definable features and the poor control in the area, he estimated an average velocity of 2.0 to 2.9 km/a to be more accurate. In 1985, Lindstrom and Tyler also made velocity estimates for Thwaites Glacier. Using Landsat imagery from 1972 and 1983, their estimates of the velocities of 33 points ranged from 2.99 to 4.02 km/a, with an average of 3.6 km/a. The accuracy of their estimates is uncertain, however, because in the absence of fixed control points, they assumed that the velocities of icebergs in the fast ice were uniform. Using additional Landsat imagery in 1984 and 1990, accurate coregistration with the 1972 image was achieved based on fixed rock points. For the period 1972 to 1984, 25 points on the glacier surface ranged in average velocity from 2.47 to 2.76 km/a, with an overall average velocity of 2.62 +/- 0.02 km/a. 
For the period 1984 to 1990, 101 points ranged in velocity from 2.54 to 3.15 km/a, with an overall average of 2.84 km/a. During both time periods, the velocity pattern showed the same spatial relationship for three longitudinal paths. The 8-percent acceleration in a decade is significant. This recent acceleration may be associated with changes observed in this region since 1986. Fast ice melted and several icebergs calved from the base of the Iceberg Tongue and the terminus of Thwaites Glacier. However, as early as 1972, the Iceberg Tongue had very little contact with the glacier.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://ntrs.nasa.gov/search.jsp?R=19930062803&hterms=Chimera&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DChimera','NASA-TRS'); return false;" href="https://ntrs.nasa.gov/search.jsp?R=19930062803&hterms=Chimera&qs=Ntx%3Dmode%2Bmatchall%26Ntk%3DAll%26N%3D0%26No%3D80%26Ntt%3DChimera"><span>Effects of bleed-hole geometry and plenum pressure on three-dimensional shock-wave/boundary-layer/bleed interactions</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.</p> <p>1993-01-01</p> <p>A numerical study was performed to investigate 3D shock-wave/boundary-layer interactions on a flat plate with bleed through one or more circular holes that vent into a plenum. This study was focused on how bleed-hole geometry and pressure ratio across bleed holes affect the bleed rate and the physics of the flow in the vicinity of the holes. The aspects of the bleed-hole geometry investigated include angle of bleed hole and the number of bleed holes. The plenum/freestream pressure ratios investigated range from 0.3 to 1.7. 
This study is based on the ensemble-averaged, 'full compressible' Navier-Stokes (N-S) equations closed by the Baldwin-Lomax algebraic turbulence model. Solutions to the ensemble-averaged N-S equations were obtained by an implicit finite-volume method using the partially-split, two-factored algorithm of Steger on an overlapping Chimera grid.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/989792','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/989792"><span>Optimized nested Markov chain Monte Carlo sampling: theory</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Coe, Joshua D; Shaw, M Sam; Sewell, Thomas D</p> <p>2009-01-01</p> <p>Metropolis Monte Carlo sampling of a reference potential is used to build a Markov chain in the isothermal-isobaric ensemble. At the endpoints of the chain, the energy is reevaluated at a different level of approximation (the 'full' energy) and a composite move encompassing all of the intervening steps is accepted on the basis of a modified Metropolis criterion. By manipulating the thermodynamic variables characterizing the reference system we maximize the average acceptance probability of composite moves, lengthening significantly the random walk made between consecutive evaluations of the full energy at a fixed acceptance probability. This provides maximally decorrelated samples of the full potential, thereby lowering the total number required to build ensemble averages of a given variance. The efficiency of the method is illustrated using model potentials appropriate to molecular fluids at high pressure. 
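The nested-chain idea above can be sketched with toy one-dimensional potentials. This is an illustration of the composite-move acceptance (inner walk on a cheap reference energy, correction against the full energy), not the paper's implementation, and the potentials and step sizes are arbitrary:

```python
import math
import random

def accept(delta_e, beta):
    """Standard Metropolis acceptance test, guarded against exp overflow."""
    return delta_e <= 0 or random.random() < math.exp(-beta * delta_e)

def metropolis_chain(x0, energy, beta, steps, step_size=0.5):
    """Short Metropolis walk on the cheap reference potential."""
    x = x0
    for _ in range(steps):
        xp = x + random.uniform(-step_size, step_size)
        if accept(energy(xp) - energy(x), beta):
            x = xp
    return x

def nested_step(x, e_ref, e_full, beta, inner_steps=20):
    """One composite move: propagate on the reference potential, then accept
    the whole excursion with a modified Metropolis criterion that removes
    the reference-level bias relative to the full potential."""
    xp = metropolis_chain(x, e_ref, beta, inner_steps)
    delta = (e_full(xp) - e_full(x)) - (e_ref(xp) - e_ref(x))
    return xp if accept(delta, beta) else x
```

When the reference and full potentials coincide, the correction term vanishes and every composite move is accepted, recovering plain Metropolis sampling.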
Implications for ab initio or density functional theory (DFT) treatment are discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29363314','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29363314"><span>Life under the Microscope: Single-Molecule Fluorescence Highlights the RNA World.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ray, Sujay; Widom, Julia R; Walter, Nils G</p> <p>2018-04-25</p> <p>The emergence of single-molecule (SM) fluorescence techniques has opened up a vast new toolbox for exploring the molecular basis of life. The ability to monitor individual biomolecules in real time enables complex, dynamic folding pathways to be interrogated without the averaging effect of ensemble measurements. In parallel, modern biology has been revolutionized by our emerging understanding of the many functions of RNA. In this comprehensive review, we survey SM fluorescence approaches and discuss how the application of these tools to RNA and RNA-containing macromolecular complexes in vitro has yielded significant insights into the underlying biology. Topics covered include the three-dimensional folding landscapes of a plethora of isolated RNA molecules, their assembly and interactions in RNA-protein complexes, and the relation of these properties to their biological functions. 
In all of these examples, the use of SM fluorescence methods has revealed critical information beyond the reach of ensemble averages.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22479573-almost-sure-convergence-quantum-spin-glasses','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22479573-almost-sure-convergence-quantum-spin-glasses"><span>Almost sure convergence in quantum spin glasses</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu</p> <p>2015-12-15</p> <p>Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 
17(3-4), 441–464 (2014)].</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016JSMTE..11.3401T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016JSMTE..11.3401T"><span>Typical performance of approximation algorithms for NP-hard problems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Takabe, Satoshi; Hukushima, Koji</p> <p>2016-11-01</p> <p>Typical performance of approximation algorithms is studied for randomized minimum vertex cover problems. A wide class of random graph ensembles characterized by an arbitrary degree distribution is discussed with the presentation of a theoretical framework. Herein, three approximation algorithms are examined: linear-programming relaxation, loopy-belief propagation, and the leaf-removal algorithm. The former two algorithms are analyzed using a statistical-mechanical technique, whereas the average-case analysis of the last one is conducted using the generating function method. These algorithms have a threshold in the typical performance with increasing average degree of the random graph, below which they find true optimal solutions with high probability. Our study reveals that there exist only three cases, determined by the order of the typical performance thresholds.
In addition, we provide some conditions for classification of the graph ensembles and demonstrate explicit examples of the differences in thresholds.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_23 --> <div id="page_24" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> </div> </div> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="461"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/25516108','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/25516108"><span>Differences in single and aggregated nanoparticle plasmon spectroscopy.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Singh, Pushkar; Deckert-Gaudig, Tanja; Schneidewind, Henrik; Kirsch, Konstantin; van Schrojenstein Lantman, Evelien M; Weckhuysen, Bert M; Deckert, Volker</p> <p>2015-02-07</p>
<p>Vibrational spectroscopy usually provides structural information averaged over many molecules. We report a larger peak position variation and a reproducibly smaller FWHM of TERS spectra compared to SERS spectra, indicating that the number of molecules excited in a TERS experiment is extremely low. Thus, orientational averaging effects are suppressed and micro-ensembles are investigated. This is shown for thiophenol adsorbed on Au nanoplates and nanoparticles.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017MS%26E..238a2013H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017MS%26E..238a2013H"><span>A Study on The Development of Local Exhaust Ventilation System (LEV’s) for Installation of Laser Cutting Machine</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Harun, S. I.; Idris, S. R. A.; Tamar Jaya, N.</p> <p>2017-09-01</p> <p>Local exhaust ventilation (LEV) is an engineering system frequently used in the workplace to protect operators from hazardous substances. The objectives of this project are to design and fabricate a ventilation system for installation in the chamber room of a laser cutting machine and to simulate the air flow inside the chamber with the designed system. The fabricated LEV is rated at 10.8 V DC and 1.5 A, with a capacity of 600 ml, a continuous-use limit of approximately 12-15 minutes, an overall length of 966 mm, a net weight of 0.88 kg, and a maximum airflow of 1.3 cubic metres per minute. Simulating the air flow inside the chamber room with the designed and fabricated ventilation system identified two main gases: air and carbon dioxide.
For air, measured with an anemometer, the duct velocities used were the same as for carbon dioxide, ranging from 5 m/s to 10 m/s; 5 m/s and 10 m/s were taken as the minimum and maximum duct velocities for both gases. For air, the fabricated LEV captured an average velocity of 3.998 m/s from the 5 m/s duct velocity, an efficiency of 79.960%, and 7.667 m/s from the 10 m/s duct velocity, an efficiency of 76.665%. For carbon dioxide, it captured an average velocity of 3.674 m/s from the 5 m/s duct velocity, an efficiency of 73.480%, and 8.255 m/s from the 10 m/s duct velocity, an efficiency of 82.545%.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017NHESS..17.1795P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017NHESS..17.1795P"><span>Revisiting the synoptic-scale predictability of severe European winter storms using ECMWF ensemble reforecasts</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich</p> <p>2017-10-01</p> <p>New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead.
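The capture efficiencies reported in the LEV study above are just the ratio of captured velocity to duct velocity expressed in percent. A minimal sketch reproducing the reported figures (the helper name `capture_efficiency` is our own, not from the paper):

```python
def capture_efficiency(captured_ms, duct_ms):
    """LEV capture efficiency in percent: captured velocity / duct velocity."""
    return captured_ms / duct_ms * 100.0

# (captured m/s, duct m/s) pairs reported in the study
air = [(3.998, 5.0), (7.667, 10.0)]
co2 = [(3.674, 5.0), (8.255, 10.0)]

for gas, pairs in (("air", air), ("CO2", co2)):
    for captured, duct in pairs:
        print(gas, round(capture_efficiency(captured, duct), 3))
```

Running this recovers efficiencies close to the quoted 79.960%, 76.665%, 73.480%, and 82.545% (the last differs slightly from the ratio of the rounded averages, suggesting rounding in the source).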
At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A33B2345G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A33B2345G"><span>Single Aerosol Particle Studies Using Optical Trapping Raman And Cavity Ringdown Spectroscopy</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gong, Z.; Wang, C.; Pan, Y. L.; Videen, G.</p> <p>2017-12-01</p> <p>Due to the physical and chemical complexity of aerosol particles and the interdisciplinary nature of aerosol science that involves physics, chemistry, and biology, our knowledge of aerosol particles is rather incomplete; our current understanding of aerosol particles is limited by averaged (over size, composition, shape, and orientation) and/or ensemble (over time, size, and multi-particles) measurements. Physically, single aerosol particles are the fundamental units of any large aerosol ensembles. Chemically, single aerosol particles carry individual chemical components (properties and constituents) in particle ensemble processes. 
Therefore, the study of single aerosol particles can bridge the gap between aerosol ensembles and bulk/surface properties and provide a hierarchical progression from a simple benchmark single-component system to a mixed-phase multicomponent system. A single aerosol particle can be an effective reactor to study heterogeneous surface chemistry in multiple phases. Latest technological advances provide exciting new opportunities to study single aerosol particles and to further develop single aerosol particle instrumentation. We present updates on our recent studies of single aerosol particles optically trapped in air using the optical-trapping Raman and cavity ringdown spectroscopy.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2000Natur.405..567L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2000Natur.405..567L"><span>Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Laubach, Mark; Wessberg, Johan; Nicolelis, Miguel A. L.</p> <p>2000-06-01</p> <p>When an animal learns to make movements in response to different stimuli, changes in activity in the motor cortex seem to accompany and underlie this learning. The precise nature of modifications in cortical motor areas during the initial stages of motor learning, however, is largely unknown. Here we address this issue by chronically recording from neuronal ensembles located in the rat motor cortex, throughout the period required for rats to learn a reaction-time task. Motor learning was demonstrated by a decrease in the variance of the rats' reaction times and an increase in the time the animals were able to wait for a trigger stimulus. 
These behavioural changes were correlated with a significant increase in our ability to predict the correct or incorrect outcome of single trials based on three measures of neuronal ensemble activity: average firing rate, temporal patterns of firing, and correlated firing. This increase in prediction indicates that an association between sensory cues and movement emerged in the motor cortex as the task was learned. Such modifications in cortical ensemble activity may be critical for the initial learning of motor tasks.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://hdl.handle.net/2060/20000093260','NASA-TRS'); return false;" href="http://hdl.handle.net/2060/20000093260"><span>Decimated Input Ensembles for Improved Generalization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://ntrs.nasa.gov/search.jsp">NASA Technical Reports Server (NTRS)</a></p> <p>Tumer, Kagan; Oza, Nikunj C.; Norvig, Peter (Technical Monitor)</p> <p>1999-01-01</p> <p>Recently, many researchers have demonstrated that using classifier ensembles (e.g., averaging the outputs of multiple classifiers before reaching a classification decision) leads to improved performance for many difficult generalization problems. However, in many domains there are serious impediments to such "turnkey" classification accuracy improvements. Most notable among these is the deleterious effect of highly correlated classifiers on the ensemble performance. One particular solution to this problem is generating "new" training sets by sampling the original one. However, with finite number of patterns, this causes a reduction in the training patterns each classifier sees, often resulting in considerably worsened generalization performance (particularly for high dimensional data domains) for each individual classifier. 
Generally, this drop in the accuracy of the individual classifier performance more than offsets any potential gains due to combining, unless diversity among classifiers is actively promoted. In this work, we introduce a method that: (1) reduces the correlation among the classifiers; (2) reduces the dimensionality of the data, thus lessening the impact of the 'curse of dimensionality'; and (3) improves the classification performance of the ensemble.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4103595','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4103595"><span>CABS-flex predictions of protein flexibility compared with NMR ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian</p> <p>2014-01-01</p> <p>Motivation: Identification of flexible regions of protein structures is important for understanding of their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: the CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Results: Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated to those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes as well as the extent of such changes. 
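The "decimated input ensembles" entry above combines base classifiers that each see only a reduced feature subset, which both decorrelates the members and lowers dimensionality. A generic random-subspace sketch of that idea, assuming a toy dataset and a nearest-centroid base learner of our own choosing (not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data in a 50-dimensional space; only the first 5 features matter
n, d = 200, 50
X = rng.normal(size=(n, d))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def fit_centroids(X, y):
    """Fit a nearest-centroid base classifier: one mean vector per class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(centroids, X):
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Each base learner sees only a "decimated" random subset of the features,
# which reduces correlation among the ensemble members
n_models, k = 15, 10
votes = []
for _ in range(n_models):
    feats = rng.choice(d, size=k, replace=False)
    cents = fit_centroids(X[:, feats], y)
    votes.append(predict_centroids(cents, X[:, feats]))

# Combine the base classifiers by majority vote
ensemble_pred = (np.mean(votes, axis=0) > 0.5).astype(int)
accuracy = float((ensemble_pred == y).mean())
```

The vote-averaging step is where the ensemble gain appears: individually weak, decorrelated members tend to err on different samples.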
Availability and implementation: The CABS-flex is freely available to all users at http://biocomp.chem.uw.edu.pl/CABSflex. Contact: sekmi@chem.uw.edu.pl Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24735558</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/24735558','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/24735558"><span>CABS-flex predictions of protein flexibility compared with NMR ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Jamroz, Michal; Kolinski, Andrzej; Kmiecik, Sebastian</p> <p>2014-08-01</p> <p>Identification of flexible regions of protein structures is important for understanding of their biological functions. Recently, we have developed a fast approach for predicting protein structure fluctuations from a single protein model: the CABS-flex. CABS-flex was shown to be an efficient alternative to conventional all-atom molecular dynamics (MD). In this work, we evaluate CABS-flex and MD predictions by comparison with protein structural variations within NMR ensembles. Based on a benchmark set of 140 proteins, we show that the relative fluctuations of protein residues obtained from CABS-flex are well correlated to those of NMR ensembles. On average, this correlation is stronger than that between MD and NMR ensembles. In conclusion, CABS-flex is useful and complementary to MD in predicting protein regions that undergo conformational changes as well as the extent of such changes. The CABS-flex is freely available to all users at http://biocomp.chem.uw.edu.pl/CABSflex. sekmi@chem.uw.edu.pl Supplementary data are available at Bioinformatics online. © The Author 2014. 
Published by Oxford University Press.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29788510','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29788510"><span>Predicting drug-induced liver injury using ensemble learning methods and molecular fingerprints.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ai, Haixin; Chen, Wen; Zhang, Li; Huang, Liangchao; Yin, Zimo; Hu, Huan; Zhao, Qi; Zhao, Jian; Liu, Hongsheng</p> <p>2018-05-21</p> <p>Drug-induced liver injury (DILI) is a major safety concern in the drug-development process, and various methods have been proposed to predict the hepatotoxicity of compounds during the early stages of drug trials. In this study, we developed an ensemble model using three machine learning algorithms and 12 molecular fingerprints from a dataset containing 1,241 diverse compounds. The ensemble model achieved an average accuracy of 71.1±2.6%, sensitivity of 79.9±3.6%, specificity of 60.3±4.8%, and area under the receiver operating characteristic curve (AUC) of 0.764±0.026 in five-fold cross-validation and an accuracy of 84.3%, sensitivity of 86.9%, specificity of 75.4%, and AUC of 0.904 in an external validation dataset of 286 compounds collected from the Liver Toxicity Knowledge Base (LTKB). Compared with previous methods, the ensemble model achieved relatively high accuracy and sensitivity. We also identified several substructures related to DILI. 
In addition, we provide a web server offering access to our models (http://ccsipb.lnu.edu.cn/toxicity/HepatoPred-EL/).</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/1421971-ensemble-averaged-structurefunction-relationship-nanocrystals-effective-superparamagnetic-fe-clusters-catalytically-active-pt-skin"><span>Ensemble averaged structure–function relationship for nanocrystals: effective superparamagnetic Fe clusters with catalytically active Pt skin</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Petkov, Valeri; Prasai, Binay; Shastri, Sarvjit</p> <p>2017-09-12</p> <p>Practical applications require the production and usage of metallic nanocrystals (NCs) in large ensembles. Besides, due to their cluster-bulk solid duality, metallic NCs exhibit a large degree of structural diversity. This poses the question as to what atomic-scale basis is to be used when the structure–function relationship for metallic NCs is to be quantified precisely. In this paper, we address the question by studying bi-functional Fe core-Pt skin type NCs optimized for practical applications. In particular, the cluster-like Fe core and skin-like Pt surface of the NCs exhibit superparamagnetic properties and a superb catalytic activity for the oxygen reduction reaction, respectively. We determine the atomic-scale structure of the NCs by non-traditional resonant high-energy X-ray diffraction coupled to atomic pair distribution function analysis. Using the experimental structure data we explain the observed magnetic and catalytic behavior of the NCs in a quantitative manner.
Lastly, we demonstrate that NC ensemble-averaged 3D positions of atoms obtained by advanced X-ray scattering techniques are a sound basis for not only establishing but also quantifying the structure–function relationship for the increasingly complex metallic NCs explored for practical applications.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JPhCS1008a2019A','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JPhCS1008a2019A"><span>Ensemble averaging and stacking of ARIMA and GSTAR model for rainfall forecasting</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Anggraeni, D.; Kurnia, I. F.; Hadi, A. F.</p> <p>2018-04-01</p> <p>Unpredictable rainfall changes can affect human activities such as agriculture, aviation, and shipping, which depend on weather forecasts; forecasting tools with high accuracy for predicting future rainfall are therefore needed. This research focuses on local forecasting of rainfall at Jember from 2005 to 2016, using data from 77 rainfall stations. Rainfall there is related not only to previous occurrences at the same station but also to occurrences at other stations; this is called the spatial effect. The aim of this research is to apply the GSTAR model to determine whether there are such spatial correlations between stations. The GSTAR model is an extension of the space-time model that combines time-related effects, the effects of other locations (stations) on a time series, and the locations themselves. The GSTAR model is also compared to the ARIMA model, which completely ignores the independent variables. The forecasted values of the ARIMA and GSTAR models are then combined using ensemble forecasting techniques.
The averaging and stacking methods of ensemble forecasting used here provide the best model, with higher accuracy and a smaller RMSE (root mean square error). Finally, with the best model we can offer better local rainfall forecasts for Jember in the future.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012EJASP2012...14Y','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012EJASP2012...14Y"><span>A framework of multitemplate ensemble for fingerprint verification</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yin, Yilong; Ning, Yanbin; Ren, Chunxiao; Liu, Li</p> <p>2012-12-01</p> <p>How to improve performance of an automatic fingerprint verification system (AFVS) is always a big challenge in the biometric verification field. Recently, it has become popular to improve the performance of AFVS using an ensemble learning approach to fuse related information of fingerprints. In this article, we propose a novel framework of fingerprint verification which is based on the multitemplate ensemble method. This framework consists of three stages. In the first stage, the enrollment stage, we adopt an effective template selection method to select those fingerprints which best represent a finger; then, a polyhedron is created from the matching results of multiple template fingerprints, and a virtual centroid of the polyhedron is given. In the second stage, the verification stage, we measure the distance between the centroid of the polyhedron and a query image. In the final stage, a fusion rule is used to choose a proper distance from a distance set. The experimental results on the FVC2004 database demonstrate the improved effectiveness of the new framework in fingerprint verification.
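The averaging and stacking steps in the rainfall-forecasting entry above can be sketched generically: average the member forecasts with equal weights, or fit combination weights to the data, and compare RMSE. The synthetic series and the two stand-in "member" forecasts below are assumptions for illustration, not the paper's ARIMA/GSTAR output:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "observed" monthly rainfall (mm) and two imperfect member forecasts
actual = rng.gamma(shape=2.0, scale=50.0, size=120)
forecast_a = actual + rng.normal(0.0, 30.0, size=actual.size)
forecast_b = actual + rng.normal(0.0, 30.0, size=actual.size)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Ensemble averaging: equal-weight mean of the member forecasts
ensemble = 0.5 * (forecast_a + forecast_b)

# Stacking (simplest linear form): least-squares weights fitted to the data
members = np.column_stack([forecast_a, forecast_b])
weights, *_ = np.linalg.lstsq(members, actual, rcond=None)
stacked = members @ weights

print(rmse(forecast_a, actual), rmse(forecast_b, actual),
      rmse(ensemble, actual), rmse(stacked, actual))
```

With independent member errors of equal variance, the averaged forecast's error variance is halved, so its RMSE beats either member; the fitted stacking weights can only tie or improve on the equal-weight average in-sample.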
With a minutiae-based matching method, the average EER of four databases in FVC2004 drops from 10.85 to 0.88, and with a ridge-based matching method, the average EER of these four databases also decreases from 14.58 to 2.51.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A13E2121S','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A13E2121S"><span>Impacts of a Stochastic Ice Mass-Size Relationship on Squall Line Ensemble Simulations</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Stanford, M.; Varble, A.; Morrison, H.; Grabowski, W.; McFarquhar, G. M.; Wu, W.</p> <p>2017-12-01</p> <p>Cloud and precipitation structure, evolution, and cloud radiative forcing of simulated mesoscale convective systems (MCSs) are significantly impacted by ice microphysics parameterizations. Most microphysics schemes assume power law relationships with constant parameters for ice particle mass, area, and terminal fallspeed relationships as a function of size, despite observations showing that these relationships vary in both time and space. To account for such natural variability, a stochastic representation of ice microphysical parameters was developed using the Predicted Particle Properties (P3) microphysics scheme in the Weather Research and Forecasting model, guided by in situ aircraft measurements from a number of field campaigns. Here, the stochastic framework is applied to the "a" and "b" parameters of the unrimed ice mass-size (m-D) relationship (m=aDb) with co-varying "a" and "b" values constrained by observational distributions tested over a range of spatiotemporal autocorrelation scales. 
Diagnostically altering a-b pairs in three-dimensional (3D) simulations of the 20 May 2011 Midlatitude Continental Convective Clouds Experiment (MC3E) squall line suggests that these parameters impact many important characteristics of the simulated squall line, including reflectivity structure (particularly in the anvil region), surface rain rates, surface and top of atmosphere radiative fluxes, buoyancy and latent cooling distributions, and system propagation speed. The stochastic a-b P3 scheme is tested using two frameworks: (1) a large ensemble of two-dimensional idealized squall line simulations and (2) a smaller ensemble of 3D simulations of the 20 May 2011 squall line, for which simulations are evaluated using observed radar reflectivity and radial velocity at multiple wavelengths, surface meteorology, and surface and satellite measured longwave and shortwave radiative fluxes. Ensemble spreads are characterized and compared against initial condition ensemble spreads for a range of variables.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/27034973','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/27034973"><span>An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ranganayaki, V; Deepa, S N</p> <p>2016-01-01</p> <p>Various criteria are proposed to select the number of hidden neurons in artificial neural network (ANN) models and based on the criterion evolved an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications. 
The intelligent ensemble neural model for wind speed forecasting is designed by averaging the forecasted values from multiple neural network models, which include the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Random selection of the number of hidden neurons in an artificial neural network results in overfitting or underfitting problems, which this paper aims to avoid. The number of hidden neurons is selected here using 102 criteria, and these criteria are verified against the various computed error values. The proposed criteria for fixing the hidden neurons are validated using the convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The simulation results substantiate that the proposed ensemble model reduces the error to a minimum and enhances accuracy.
The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with that of the earlier models available in the literature.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/28716511','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/28716511"><span>An ensemble predictive modeling framework for breast cancer classification.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nagarajan, Radhakrishnan; Upreti, Meenakshi</p> <p>2017-12-01</p> <p>Molecular changes often precede clinical presentation of diseases and can be useful surrogates with potential to assist in informed clinical decision making. Recent studies have demonstrated the usefulness of modeling approaches such as classification that can predict the clinical outcomes from molecular expression profiles. While useful, a majority of these approaches implicitly use all molecular markers as features in the classification process often resulting in sparse high-dimensional projection of the samples often comparable to that of the sample size. In this study, a variant of the recently proposed ensemble classification approach is used for predicting good and poor-prognosis breast cancer samples from their molecular expression profiles. In contrast to traditional single and ensemble classifiers, the proposed approach uses multiple base classifiers with varying feature sets obtained from two-dimensional projection of the samples in conjunction with a majority voting strategy for predicting the class labels. In contrast to our earlier implementation, base classifiers in the ensembles are chosen based on maximal sensitivity and minimal redundancy by choosing only those with low average cosine distance. 
The resulting ensemble sets are subsequently modeled as undirected graphs. Performance of four different classification algorithms is shown to be better within the proposed ensemble framework in contrast to using them as traditional single classifier systems. Significance of a subset of genes with high-degree centrality in the network abstractions across the poor-prognosis samples is also discussed. Copyright © 2017 Elsevier Inc. All rights reserved.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013WRR....49.6744H','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013WRR....49.6744H"><span>Simultaneous calibration of ensemble river flow predictions over an entire range of lead times</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Hemri, S.; Fundel, F.; Zappa, M.</p> <p>2013-10-01</p> <p>Probabilistic estimates of future water levels and river discharge are usually simulated with hydrologic models using ensemble weather forecasts as main inputs. As hydrologic models are imperfect and the meteorological ensembles tend to be biased and underdispersed, the ensemble forecasts for river runoff typically are biased and underdispersed, too. Thus, in order to achieve both reliable and sharp predictions statistical postprocessing is required. In this work Bayesian model averaging (BMA) is applied to statistically postprocess ensemble runoff raw forecasts for a catchment in Switzerland, at lead times ranging from 1 to 240 h. The raw forecasts have been obtained using deterministic and ensemble forcing meteorological models with different forecast lead time ranges. First, BMA is applied based on mixtures of univariate normal distributions, subject to the assumption of independence between distinct lead times. 
Then, the independence assumption is relaxed in order to estimate multivariate runoff forecasts over the entire range of lead times simultaneously, based on a BMA version that uses multivariate normal distributions. Since river runoff is a highly skewed variable, Box-Cox transformations are applied in order to achieve approximate normality. Both univariate and multivariate BMA approaches are able to generate well calibrated probabilistic forecasts that are considerably sharper than climatological forecasts. Additionally, multivariate BMA provides a promising approach for incorporating temporal dependencies into the postprocessed forecasts. Its major advantage over univariate BMA is an increase in reliability when the composition of the forecast system changes due to model availability.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFMGC21E0980P','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFMGC21E0980P"><span>An 'Observational Large Ensemble' to compare observed and modeled temperature trend uncertainty due to internal variability.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Poppick, A. N.; McKinnon, K. A.; Dunn-Sigouin, E.; Deser, C.</p> <p>2017-12-01</p> <p>Initial condition climate model ensembles suggest that regional temperature trends can be highly variable on decadal timescales due to characteristics of internal climate variability. Accounting for trend uncertainty due to internal variability is therefore necessary to contextualize recent observed temperature changes. However, while the variability of trends in a climate model ensemble can be evaluated directly (as the spread across ensemble members), internal variability simulated by a climate model may be inconsistent with observations.
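The "spread across ensemble members" evaluation mentioned above can be sketched with a toy synthetic ensemble: fit a least-squares trend to each member's 50-year series, then take the standard deviation of the trends across members. The 40-member size, forced trend, and noise level below are made-up illustrative numbers, not LENS values:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(50)                      # a 50-year window, e.g. 1966-2015
forced = 0.02 * years                      # hypothetical forced signal, deg/yr slope
# Each member = shared forced signal + member-specific internal variability
members = forced + rng.normal(0.0, 0.5, size=(40, years.size))

# Least-squares trend (deg/yr) for each member; the spread across members
# measures trend uncertainty due to internal variability
trends = np.array([np.polyfit(years, m, 1)[0] for m in members])
trend_spread = trends.std()
```

Comparing such a model-derived spread with an observation-based (resampled) spread is the essence of the comparison the abstract describes.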
Observation-based methods for assessing the role of internal variability in trend uncertainty are therefore required. Here, we use a statistical resampling approach to assess trend uncertainty due to internal variability in historical 50-year (1966-2015) winter near-surface air temperature trends over North America. We compare this estimate of trend uncertainty to simulated trend variability in the NCAR CESM1 Large Ensemble (LENS), finding that uncertainty in wintertime temperature trends over North America due to internal variability is generally overestimated by CESM1, by about 32% on average. Our observation-based resampling approach is combined with the forced signal from LENS to produce an 'Observational Large Ensemble' (OLENS). The members of OLENS indicate a range of spatially coherent fields of temperature trends resulting from different sequences of internal variability consistent with observations. The smaller trend variability in OLENS suggests that uncertainty in the historical climate change signal in observations due to internal variability is less than suggested by LENS.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4791511','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4791511"><span>An Intelligent Ensemble Neural Network Model for Wind Speed Prediction in Renewable Energy Systems</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Ranganayaki, V.; Deepa, S. N.</p> <p>2016-01-01</p> <p>Various criteria are proposed for selecting the number of hidden neurons in artificial neural network (ANN) models, and based on the evolved criteria an intelligent ensemble neural network model is proposed to predict wind speed in renewable energy applications.
The intelligent ensemble neural model forecasts wind speed by averaging the forecast values from multiple neural network models, including the multilayer perceptron (MLP), multilayer adaptive linear neuron (Madaline), back-propagation neural network (BPN), and probabilistic neural network (PNN), so as to obtain better accuracy in wind speed prediction with minimum error. Randomly selecting the number of hidden neurons in an artificial neural network can lead to overfitting or underfitting; this paper aims to avoid both. The number of hidden neurons is selected here using 102 criteria, and the evolved criteria are verified against the computed error values. The proposed criteria for fixing the number of hidden neurons are validated using a convergence theorem. The proposed intelligent ensemble neural model is applied to wind speed prediction using real-time wind data collected from nearby locations. The obtained simulation results substantiate that the proposed ensemble model minimizes error and enhances accuracy. The computed results prove the effectiveness of the proposed ensemble neural network (ENN) model with respect to the considered error factors in comparison with that of the earlier models available in the literature.
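The ensemble step described above, averaging the forecasts of several base networks, amounts to a one-liner per time step. A minimal sketch with hypothetical per-model wind-speed forecasts (the numbers are illustrative, not from the paper, and the trained networks are replaced by fixed lists):

```python
# Hypothetical wind-speed forecasts (m/s) from four base networks,
# standing in for the paper's MLP, Madaline, BPN, and PNN models
forecasts = {
    "mlp":      [5.2, 6.1, 7.4],
    "madaline": [5.0, 6.3, 7.1],
    "bpn":      [5.4, 6.0, 7.6],
    "pnn":      [5.1, 6.2, 7.3],
}

def ensemble_average(forecasts):
    """Ensemble forecast: the per-time-step mean of all base-model forecasts."""
    models = list(forecasts.values())
    n_steps = len(models[0])
    return [sum(m[t] for m in models) / len(models) for t in range(n_steps)]

ens = ensemble_average(forecasts)
```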
PMID:27034973</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/21197855','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/21197855"><span>Torso undergarments: their merit for clothed and armored individuals in hot-dry conditions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Van den Heuvel, Anne M J; Kerry, Pete; Van der Velde, Jeroen H P M; Patterson, Mark J; Taylor, Nigel A S</p> <p>2010-12-01</p> <p>The aim of this study was to evaluate how the textile composition of torso undergarment fabrics may impact upon thermal strain, moisture transfer, and the thermal and clothing comfort of fully clothed, armored individuals working in a hot-dry environment (41.2 degrees C and 29.8% relative humidity). Five undergarment configurations were assessed using eight men who walked for 120 min (4 km x h(-1)), then alternated running (2 min at 10 km x h(-1)) and walking (2 min at 4 km x h(-1)) for 20 min. Trials differed only in the torso undergarments worn: no t-shirt (Ensemble A); 100% cotton t-shirt (Ensemble B); 100% woolen t-shirt (Ensemble C); synthetic t-shirt (Ensemble D: nylon, polyethylene, elastane); hybrid shirt (Ensemble E). Thermal and cardiovascular strain progressively increased throughout each trial, with the average terminal core temperature being 38.5 degrees C and heart rate peaking at 170 bpm across all trials. However, no significant between-trial separations were evident for core or mean skin temperatures, or for heart rate, sweat production, evaporation, the within-ensemble water vapor pressures, or for thermal or clothing discomfort. Thus, under these conditions, neither the t-shirt textile compositions, nor the presence or absence of an undergarment, offered any significant thermal, central cardiac, or comfort advantages. 
Furthermore, there was no evidence that any of these fabrics created a significantly drier microclimate next to the skin.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2013AGUFMNG23B..03D','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2013AGUFMNG23B..03D"><span>Interactive vs. Non-Interactive Multi-Model Ensembles</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Duane, G. S.</p> <p>2013-12-01</p> <p>If the members of an ensemble of different models are allowed to interact with one another at run time, predictive skill can be improved as compared to that of any individual model or any average of individual model outputs. Inter-model connections in such an interactive ensemble can be trained, using historical data, so that the resulting 'supermodel' synchronizes with reality when used in weather-prediction mode, where the individual models perform data assimilation from each other (with trainable inter-model 'observation error') as well as from real observations. In climate-projection mode, parameters of the individual models are changed, as might occur from an increase in GHG levels, and one obtains relevant statistical properties of the new supermodel attractor. In simple cases, it has been shown that training the inter-model connections with the old parameter values gives a supermodel that is still predictive when the parameter values are changed. Here we inquire as to the circumstances under which supermodel performance can be expected to exceed that of the customary weighted average of model outputs. We consider a supermodel formed from quasigeostrophic (QG) channel models with different forcing coefficients, and introduce an effective training scheme for the inter-model connections.
We show that the blocked-zonal index cycle is reproduced better by the supermodel than by any non-interactive ensemble in the extreme case where the forcing coefficients of the different models are very large or very small. With realistic differences in forcing coefficients, as would be representative of actual differences among IPCC-class models, the usual linearity assumption is justified and a weighted average of model outputs is adequate. It is therefore hypothesized that supermodeling is likely to be useful in situations where there are qualitative model differences, as arising from sub-gridscale parameterizations, that affect overall model behavior. Otherwise the usual ex post facto averaging will probably suffice. The advantage of supermodeling is seen in statistics such as the anticorrelation between blocking activity in the Atlantic and Pacific sectors, in the case of the QG channel model, rather than in overall blocking frequency. Likewise in climate models, the advantage of supermodeling is typically manifest in higher-order statistics rather than in quantities such as mean temperature.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25" class="hiddenDiv"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5754089','PMC'); return false;" href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=5754089"><span>A Kolmogorov-Smirnov test for the molecular clock based on Bayesian ensembles of phylogenies</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Antoneli, Fernando; Passos, Fernando M.; Lopes, Luciano R.</p> <p>2018-01-01</p> <p>Divergence date estimates are central to understanding evolutionary processes and depend, in the case of molecular phylogenies, on tests of molecular clocks. Here we propose two non-parametric tests of strict and relaxed molecular clocks built upon a framework that uses the empirical cumulative distribution (ECD) of branch lengths obtained from an ensemble of Bayesian trees and the well-known non-parametric (one-sample and two-sample) Kolmogorov-Smirnov (KS) goodness-of-fit tests. In the strict clock case, the method consists in using the one-sample Kolmogorov-Smirnov (KS) test to directly test whether the phylogeny is clock-like, in other words, whether it follows a Poisson law. The ECD is computed from the discretized branch lengths and the parameter λ of the expected Poisson distribution is calculated as the average branch length over the ensemble of trees.
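The strict-clock test just described can be sketched as follows. This is a simplified illustration: branch lengths are assumed already discretized to integer counts, λ is taken as the sample mean as in the text, and the lookup of critical values / p-values is omitted:

```python
import math
from collections import Counter

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam)."""
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(int(k) + 1))

def ks_statistic_poisson(counts):
    """One-sample KS distance between the empirical CDF of discretized
    branch lengths and a Poisson CDF with lam set to the sample mean."""
    n = len(counts)
    lam = sum(counts) / n
    ecdf = 0.0
    d = 0.0
    for k, c in sorted(Counter(counts).items()):
        ecdf += c / n
        d = max(d, abs(ecdf - poisson_cdf(k, lam)))
    return d
```

A small KS distance is consistent with clock-like (Poisson) branch lengths; in the paper the statistic is computed over an ensemble of Bayesian trees rather than a single tree.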
To compensate for the auto-correlation in the ensemble of trees and pseudo-replication we take advantage of thinning and effective sample size, two features provided by Bayesian inference MCMC samplers. Finally, it is observed that tree topologies with very long or very short branches lead to Poisson mixtures and in this case we propose the use of the two-sample KS test with samples from two continuous branch length distributions, one obtained from an ensemble of clock-constrained trees and the other from an ensemble of unconstrained trees. Moreover, in this second form the test can also be applied to test for relaxed clock models. The use of a statistically equivalent ensemble of phylogenies to obtain the branch lengths ECD, instead of one consensus tree, yields considerable reduction of the effects of small sample size and provides a gain of power. PMID:29300759</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29121946','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29121946"><span>Improving precision of glomerular filtration rate estimating model by ensemble learning.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Liu, Xun; Li, Ningshan; Lv, Linsheng; Fu, Yongmei; Cheng, Cailian; Wang, Caixia; Ye, Yuqiu; Li, Shaomin; Lou, Tanqi</p> <p>2017-11-09</p> <p>Accurate assessment of kidney function is clinically important, but estimates of glomerular filtration rate (GFR) by regression are imprecise. We hypothesized that ensemble learning could improve precision. A total of 1419 participants were enrolled, with 1002 in the development dataset and 417 in the external validation dataset. GFR was independently estimated from age, sex and serum creatinine using an artificial neural network (ANN), support vector machine (SVM), regression, and ensemble learning. 
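The ensemble step in the GFR study above can be sketched minimally, assuming (consistent with the abstract's description) that the ensemble output is the plain average of the ANN, SVM, and regression estimates, and that precision is the interquartile range of the estimation errors. The function names and numbers are ours:

```python
import statistics

def ensemble_estimate(ann, svm, reg):
    """Ensemble GFR estimate: the average of the three base-model outputs."""
    return [(a + s + r) / 3.0 for a, s, r in zip(ann, svm, reg)]

def precision_iqr(estimated, measured):
    """Precision metric: interquartile range of estimation errors
    (smaller IQR = more precise)."""
    errors = [e - m for e, m in zip(estimated, measured)]
    q1, _, q3 = statistics.quantiles(errors, n=4)
    return q3 - q1
```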
GFR was measured by 99mTc-DTPA renal dynamic imaging calibrated with dual plasma sample 99mTc-DTPA GFR. Mean measured GFRs were 70.0 ml/min/1.73 m² in the development and 53.4 ml/min/1.73 m² in the external validation cohorts. In the external validation cohort, precision was better in the ensemble model of the ANN, SVM and regression equation (IQR = 13.5 ml/min/1.73 m²) than in the new regression model (IQR = 14.0 ml/min/1.73 m², P < 0.001). The precision of ensemble learning was the best of the three models, but the models had similar bias and accuracy. The median difference ranged from 2.3 to 3.7 ml/min/1.73 m², 30% accuracy ranged from 73.1 to 76.0%, and P was > 0.05 for all comparisons of the new regression equation and the other new models. An ensemble learning model including three variables, the average ANN, SVM, and regression equation values, was more precise than the new regression model. A more complex ensemble learning strategy may further improve GFR estimates.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018ApJ...859...96Z','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018ApJ...859...96Z"><span>Stellar Velocity Dispersion: Linking Quiescent Galaxies to Their Dark Matter Halos</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Zahid, H. Jabran; Sohn, Jubee; Geller, Margaret J.</p> <p>2018-06-01</p> <p>We analyze the Illustris-1 hydrodynamical cosmological simulation to explore the stellar velocity dispersion of quiescent galaxies as an observational probe of dark matter halo velocity dispersion and mass. Stellar velocity dispersion is proportional to dark matter halo velocity dispersion for both central and satellite galaxies.
The dark matter halos of central galaxies are in virial equilibrium and thus the stellar velocity dispersion is also proportional to dark matter halo mass. This proportionality holds even when a line-of-sight aperture dispersion is calculated in analogy to observations. In contrast, at a given stellar velocity dispersion, the dark matter halo mass of satellite galaxies is smaller than virial equilibrium expectations. This deviation from virial equilibrium probably results from tidal stripping of the outer dark matter halo. Stellar velocity dispersion appears insensitive to tidal effects and thus reflects the correlation between stellar velocity dispersion and dark matter halo mass prior to infall. There is a tight relation (≲0.2 dex scatter) between line-of-sight aperture stellar velocity dispersion and dark matter halo mass suggesting that the dark matter halo mass may be estimated from the measured stellar velocity dispersion for both central and satellite galaxies. We evaluate the impact of treating all objects as central galaxies if the relation we derive is applied to a statistical ensemble. A large fraction (≳2/3) of massive quiescent galaxies are central galaxies and systematic uncertainty in the inferred dark matter halo mass is ≲0.1 dex thus simplifying application of the simulation results to currently available observations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/1999GeoJI.138..871R','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/1999GeoJI.138..871R"><span>Lithospheric structure of the Arabian Shield and Platform from complete regional waveform modelling and surface wave group velocities</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rodgers, Arthur J.; Walter, William R.; Mellors, Robert J.; Al-Amri, Abdullah M. 
S.; Zhang, Yu-Shen</p> <p>1999-09-01</p> <p>Regional seismic waveforms reveal significant differences in the structure of the Arabian Shield and the Arabian Platform. We estimate lithospheric velocity structure by modelling regional waveforms recorded by the 1995-1997 Saudi Arabian Temporary Broadband Deployment using a grid search scheme. We employ a new method whereby we narrow the waveform modelling grid search by first fitting the fundamental mode Love and Rayleigh wave group velocities. The group velocities constrain the average crustal thickness and velocities as well as the crustal velocity gradients. Because the group velocity fitting is computationally much faster than the synthetic seismogram calculation this method allows us to determine good average starting models quickly. Waveform fits of the Pn and Sn body wave arrivals constrain the mantle velocities. The resulting lithospheric structures indicate that the Arabian Platform has an average crustal thickness of 40 km, with relatively low crustal velocities (average crustal P- and S-wave velocities of 6.07 and 3.50 km s^-1 , respectively) without a strong velocity gradient. The Moho is shallower (36 km) and crustal velocities are 6 per cent higher (with a velocity increase with depth) for the Arabian Shield. Fast crustal velocities of the Arabian Shield result from a predominantly mafic composition in the lower crust. Lower velocities in the Arabian Platform crust indicate a bulk felsic composition, consistent with orogenesis of this former active margin. P- and S-wave velocities immediately below the Moho are slower in the Arabian Shield than in the Arabian Platform (7.9 and 4.30 km s^-1 , and 8.10 and 4.55 km s^-1 , respectively). This indicates that the Poisson's ratios for the uppermost mantle of the Arabian Shield and Platform are 0.29 and 0.27, respectively. 
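The quoted Poisson's ratios follow from the standard elastic relation sigma = (Vp^2 - 2 Vs^2) / (2 (Vp^2 - Vs^2)); a quick numerical check with the sub-Moho Pn/Sn velocities given above reproduces the 0.29 and 0.27 values:

```python
def poissons_ratio(vp, vs):
    """Poisson's ratio from P- and S-wave velocities (same units)."""
    return (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))

# Uppermost-mantle (Pn/Sn) velocities quoted in the abstract, km/s
shield = poissons_ratio(7.9, 4.30)     # Arabian Shield  -> ~0.29
platform = poissons_ratio(8.10, 4.55)  # Arabian Platform -> ~0.27
```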
The slower uppermost-mantle velocities and higher Poisson's ratio beneath the Arabian Shield probably arise from a partially molten mantle associated with Red Sea spreading and continental volcanism, although we cannot constrain the lateral extent of the zone of partially molten mantle.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017EPJWC.13703014N','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017EPJWC.13703014N"><span>Domain wall network as QCD vacuum: confinement, chiral symmetry, hadronization</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Nedelko, Sergei N.; Voronin, Vladimir V.</p> <p>2017-03-01</p> <p>An approach to the QCD vacuum as a medium describable in terms of a statistical ensemble of almost-everywhere homogeneous Abelian (anti-)self-dual gluon fields is reviewed. These fields play the role of the confining medium for color-charged fields and underlie the mechanism of realization of the chiral SUL(Nf) × SUR(Nf) and UA(1) symmetries. A hadronization formalism based on this ensemble leads to a manifestly defined quantum effective meson action. Strong, electromagnetic and weak interactions of mesons are represented in the action in terms of nonlocal n-point interaction vertices given by the quark-gluon loops averaged over the background ensemble. Systematic results for the mass spectrum and decay constants of radially excited light and heavy-light mesons and heavy quarkonia are presented.
Relationship of this approach to the results of functional renormalization group and Dyson-Schwinger equations, and the picture of harmonic confinement is briefly outlined.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/15114356','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/15114356"><span>Large-scale recording of neuronal ensembles.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Buzsáki, György</p> <p>2004-05-01</p> <p>How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. 
Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29725108','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29725108"><span>Fitting a function to time-dependent ensemble averaged data.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fogelmark, Karl; Lomholt, Michael A; Irbäck, Anders; Ambjörnsson, Tobias</p> <p>2018-05-03</p> <p>Time-dependent ensemble averages, i.e., trajectory-based averages of some observable, are of importance in many fields of science. A crucial objective when interpreting such data is to fit these averages (for instance, squared displacements) with a function and extract parameters (such as diffusion constants). A commonly overlooked challenge in such function-fitting procedures is that fluctuations around mean values, by construction, exhibit temporal correlations. We show that the only available general-purpose function-fitting methods, the correlated chi-square method and the weighted least squares method (which neglects correlation), fail at either robust parameter estimation or accurate error estimation. We remedy this by deriving a new closed-form error estimation formula for weighted least squares fitting. The new formula uses the full covariance matrix, i.e., it rigorously includes temporal correlations, but is free of the robustness issues inherent to the correlated chi-square method.
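For a model that is linear in its parameters, the closed-form idea can be sketched as follows: fit with plain diagonal-weight WLS, then propagate the full covariance matrix through the "sandwich" A C A^T to get parameter error bars. This is our simplified reading of the approach for the linear case; the paper's general formula and released software should be consulted for nonlinear fits:

```python
import numpy as np

def wls_with_cov_errors(X, y, C):
    """Weighted least squares fit using only diag(C) as weights (plain WLS),
    with parameter covariance computed from the full covariance matrix C:
    theta = A y,  Var(theta) = A C A^T,  A = (X^T W X)^{-1} X^T W."""
    W = np.diag(1.0 / np.diag(C))
    A = np.linalg.solve(X.T @ W @ X, X.T @ W)
    theta = A @ y
    var = A @ C @ A.T
    return theta, var

# Toy example: fit MSD(t) = 2 D t with true D = 1; uncorrelated errors
# here for simplicity, but C may contain off-diagonal correlations
t = np.arange(1.0, 6.0)
X = (2.0 * t)[:, None]        # design matrix, single parameter D
y = 2.0 * 1.0 * t             # noise-free data for the sanity check
C = 0.01 * np.eye(t.size)
theta, var = wls_with_cov_errors(X, y, C)
```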
We demonstrate its accuracy in four examples of importance in many fields: Brownian motion, damped harmonic oscillation, fractional Brownian motion and continuous time random walks. We also successfully apply our method, weighted least squares including correlation in error estimation (WLS-ICE), to particle tracking data. The WLS-ICE method is applicable to arbitrary fit functions, and we provide publicly available WLS-ICE software.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/541187-calibration-method-helps-seismic-velocity-interpretation','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/541187-calibration-method-helps-seismic-velocity-interpretation"><span>Calibration method helps in seismic velocity interpretation</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Guzman, C.E.; Davenport, H.A.; Wilhelm, R.</p> <p>1997-11-03</p> <p>Acoustic velocities derived from seismic reflection data, when properly calibrated to subsurface measurements, help interpreters make pure velocity predictions. A method of calibrating seismic to measured velocities has improved interpretation of subsurface features in the Gulf of Mexico. In this method, the interpreter in essence creates a kind of gauge. Properly calibrated, the gauge enables the interpreter to match predicted velocities to velocities measured at wells. Slow-velocity zones are of special interest because they sometimes appear near hydrocarbon accumulations. Changes in velocity vary in strength with location; the structural picture is hidden unless the variations are accounted for by mapping in depth instead of time. Preliminary observations suggest that the presence of hydrocarbons alters the lithology in the neighborhood of the trap; this hydrocarbon effect may be reflected in the rock velocity.
The effect indicates a direct use of seismic velocity in exploration. This article uses the terms seismic velocity and seismic stacking velocity interchangeably. It uses ground velocity, checkshot average velocity, and well velocity interchangeably. Interval velocities are derived from seismic stacking velocities or well average velocities; they refer to velocities of subsurface intervals or zones. Interval travel time (ITT) is the reciprocal of interval velocity in microseconds per foot.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/21528668-laser-cooling-molecules-zero-velocity-selection-single-spontaneous-emission','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/21528668-laser-cooling-molecules-zero-velocity-selection-single-spontaneous-emission"><span>Laser cooling of molecules by zero-velocity selection and single spontaneous emission</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Ooi, C. H. Raymond</p> <p>2010-11-15</p> <p>A laser-cooling scheme for molecules is presented based on a repeated cycle of zero-velocity selection, deceleration, and irreversible accumulation. Although this scheme also employs a single spontaneous emission as in [Raymond Ooi, Marzlin, and Audretsch, Eur. Phys. J. D 22, 259 (2003)], in order to circumvent the difficulty of maintaining closed pumping cycles in molecules, there are two distinct features which make the cooling process of this scheme faster and more practical. First, the zero-velocity selection creates a narrow velocity-width population with zero mean velocity, such that no further deceleration (with many stimulated Raman adiabatic passage (STIRAP) pulses) is required.
Second, only two STIRAP processes are required to decelerate the remaining hot molecular ensemble to create a finite population around zero velocity for the next cycle. We present a setup to realize the cooling process in one dimension with trapping in the other two dimensions using a Stark barrel. Numerical estimates of the cooling parameters and simulations with density matrix equations using OH molecules show the applicability of the cooling scheme. For a gas at temperature T=1 K, the estimated cooling time is only 2 ms, with phase-space density increased by about 30 times. The possibility of extension to three-dimensional cooling via thermalization is also discussed.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AAS...22543810W','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AAS...22543810W"><span>Super earth interiors and validity of Birch's Law for ultra-high pressure metals and ionic solids</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ware, Lucas Andrew</p> <p>2015-01-01</p> <p>Super Earths, recently detected by the Kepler Mission, expand the ensemble of known terrestrial planets beyond our Solar System's limited group. Birch's Law and velocity-density systematics have been crucial in constraining our knowledge of the composition of Earth's mantle and core. Recently published static diamond anvil cell experimental measurements of sound velocities in iron, a key deep element in most super Earth models, are inconsistent with each other with regard to the validity of Birch's Law.
We examine the range of validity of Birch's Law for several metallic elements, including iron, and ionic solids shocked with a two-stage light gas gun into the ultra-high pressure, temperature fluid state and make comparisons to the recent static data.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.S23A0774G','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.S23A0774G"><span>Accessing the uncertainties of seismic velocity and anisotropy structure of Northern Great Plains using a transdimensional Bayesian approach</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gao, C.; Lekic, V.</p> <p>2017-12-01</p> <p>Seismic imaging utilizing complementary seismic data provides unique insight on the formation, evolution and current structure of continental lithosphere. While numerous efforts have improved the resolution of seismic structure, the quantification of uncertainties remains challenging due to the non-linearity and the non-uniqueness of geophysical inverse problem. In this project, we use a reverse jump Markov chain Monte Carlo (rjMcMC) algorithm to incorporate seismic observables including Rayleigh and Love wave dispersion, Ps and Sp receiver function to invert for shear velocity (Vs), compressional velocity (Vp), density, and radial anisotropy of the lithospheric structure. The Bayesian nature and the transdimensionality of this approach allow the quantification of the model parameter uncertainties while keeping the models parsimonious. Both synthetic test and inversion of actual data for Ps and Sp receiver functions are performed. We quantify the information gained in different inversions by calculating the Kullback-Leibler divergence. Furthermore, we explore the ability of Rayleigh and Love wave dispersion data to constrain radial anisotropy. 
We show that when multiple types of model parameters (Vsv, Vsh, and Vp) are inverted simultaneously, the constraints on radial anisotropy are limited by relatively large data uncertainties and trade off strongly with Vp. We then perform joint inversion of the surface wave dispersion (SWD) and Ps, Sp receiver functions, and show that the constraints on both isotropic Vs and radial anisotropy are significantly improved. To achieve faster convergence of the rjMcMC, we propose a progressive inclusion scheme, and invert SWD measurements and receiver functions from about 400 USArray stations in the Northern Great Plains. We start by only using SWD data due to its fast convergence rate. We then use the average of the ensemble as a starting model for the joint inversion, which is able to resolve distinct seismic signatures of geological structures including the trans-Hudson orogen, Wyoming craton and Yellowstone hotspot. Various analyses are done to assess the uncertainties of the seismic velocities and Moho depths. We also address the importance of careful data processing of receiver functions by illustrating artifacts due to unmodelled sediment reverberations.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/servlets/purl/1406195','SCIGOV-STC'); return false;" href="https://www.osti.gov/servlets/purl/1406195"><span>Uncertainty Quantification of Multi-Phase Closures</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Nadiga, Balasubramanya T.; Baglietto, Emilio</p> <p></p> <p>In the ensemble-averaged dispersed phase formulation used for CFD of multiphase flows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty.
Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quantification (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of different phenomena. As such, this approach serves as a valuable aid in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion (forces that lead to transfer of momentum between the liquid and gas phases) is examined in the framework of validation of the adiabatic but turbulent experiments of Liu and Bankoff, 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of superficial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coefficients associated with the closures. We argue that such an extent of variation is a measure of the uncertainty induced by the chosen set of closures. We also find that while mean fluid velocity and void fraction profiles are properly fit, the fluctuating fluid velocity may or may not be properly fit. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and is undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL and which remove the ad-hoc elements.
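The Bayesian calibration framework described above can be illustrated with a minimal random-walk Metropolis sampler for a single closure coefficient. This is a sketch only: the toy drag-like model, the noise level, and the step size are assumptions for illustration, and the study itself used Dakota with STAR-CCM+ rather than hand-rolled MCMC.

```python
import numpy as np

def metropolis_calibrate(loglike, theta0, n_steps=5000, step=0.1, seed=0):
    """Minimal random-walk Metropolis sampler for a scalar coefficient."""
    rng = np.random.default_rng(seed)
    theta, ll = theta0, loglike(theta0)
    samples = np.empty(n_steps)
    for i in range(n_steps):
        prop = theta + step * rng.standard_normal()
        ll_prop = loglike(prop)
        if np.log(rng.random()) < ll_prop - ll:  # Metropolis acceptance
            theta, ll = prop, ll_prop
        samples[i] = theta
    return samples

# Toy calibration: recover a drag-like coefficient c in the model c*v from
# synthetic noisy data (true value 0.44, Gaussian noise sigma = 0.01).
v = np.linspace(1.0, 2.0, 20)
data = 0.44 * v + 0.01 * np.random.default_rng(1).standard_normal(20)
loglike = lambda c: -np.sum((data - c * v) ** 2) / (2 * 0.01 ** 2)
samples = metropolis_calibrate(loglike, theta0=1.0)
```

The posterior samples concentrate near the true coefficient; in the real study the same idea runs over several closure coefficients at once, which is what exposes their mutual interactions.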
We use non-intrusive methodologies for sensitivity analysis and calibration (using Dakota) to study sensitivities of the CFD representation (STARCCM+) of fluid velocity profiles and void fraction profiles in the context of the Shaver and Podowski, 2015 correction to lift, and the Lubchenko et al., 2017 formulation of wall lubrication.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2012CoPhC.183.1783U','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2012CoPhC.183.1783U"><span>Novel algorithm and MATLAB-based program for automated power law analysis of single particle, time-dependent mean-square displacement</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Umansky, Moti; Weihs, Daphne</p> <p>2012-08-01</p> <p>In many physical and biophysical studies, single-particle tracking is utilized to reveal interactions, diffusion coefficients, active modes of driving motion, dynamic local structure, micromechanics, and microrheology. The basic analysis applied to those data is to determine the time-dependent mean-square displacement (MSD) of particle trajectories and perform time- and ensemble-averaging of similar motions. The motion of particles typically exhibits time-dependent power-law scaling, and only trajectories with qualitatively and quantitatively comparable MSD should be ensembled. Ensemble averaging trajectories that arise from different mechanisms, e.g., actively driven and diffusive, is incorrect and can result in inaccurate correlations between structure, mechanics, and activity. We have developed an algorithm to automatically and accurately determine power-law scaling of experimentally measured single-particle MSD. Trajectories can then be categorized and grouped according to user-defined cutoffs of time, amplitudes, scaling exponent values, or combinations.
Power-law fits are then provided for each trajectory alongside categorized groups of trajectories, histograms of power laws, and the ensemble-averaged MSD of each group. The codes are designed to be easily incorporated into existing user codes. We expect that this algorithm and program will be invaluable to anyone performing single-particle tracking, be it in physical or biophysical systems. Catalogue identifier: AEMD_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMD_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 25 892 No. of bytes in distributed program, including test data, etc.: 5 572 780 Distribution format: tar.gz Programming language: MATLAB (MathWorks Inc.) version 7.11 (2010b) or higher, program should also be backwards compatible. Symbolic Math Toolbox (5.5) is required. The Curve Fitting Toolbox (3.0) is recommended. Computer: Tested on Windows only, yet should work on any computer running MATLAB. In Windows 7, it should be run as administrator; if the user is not the administrator, the program may not be able to save outputs and temporary outputs to all locations. Operating system: Any supporting MATLAB (MathWorks Inc.) v7.11 / 2010b or higher. Supplementary material: Sample output files (approx. 30 MBytes) are available. Classification: 12 External routines: Several MATLAB subfunctions (m-files), freely available on the web, were used as part of, and included in, this code: count, NaN suite, parseArgs, roundsd, subaxis, wcov, wmean, and the executable pdfTK.exe. Nature of problem: In many physical and biophysical areas employing single-particle tracking, having the time-dependent power-laws governing the time-averaged mean-square displacement (MSD) of a single particle is crucial.
Those power laws determine the mode-of-motion and hint at the underlying mechanisms driving motion. Accurate determination of the power laws that describe each trajectory will allow categorization into groups for further analysis of single trajectories or ensemble analysis, e.g. ensemble and time-averaged MSD. Solution method: The algorithm in the provided program automatically analyzes and fits time-dependent power laws to single particle trajectories, then groups particles according to user-defined cutoffs. It accepts time-dependent trajectories of several particles; each trajectory is run through the program, its time-averaged MSD is calculated, and power laws are determined in regions where the MSD is linear on a log-log scale. Our algorithm searches for high-curvature points in experimental data, here time-dependent MSD. Those serve as anchor points for determining the ranges of the power-law fits. Power-law scaling is then accurately determined and error estimations of the parameters and quality of fit are provided. After all single trajectory time-averaged MSDs are fit, we obtain cutoffs from the user to categorize and segment the power laws into groups; cutoffs are either in exponents of the power laws, time of appearance of the fits, or both together. The trajectories are sorted according to the cutoffs and the time- and ensemble-averaged MSD of each group is provided, with histograms of the distributions of the exponents in each group. The program then allows the user to generate new trajectory files with trajectories segmented according to the determined groups, for any further required analysis. Additional comments: A README file giving the names and a brief description of all the files that make up the package, along with clear instructions on the installation and execution of the program, is included in the distribution package.
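The core computation the program automates, a time-averaged MSD followed by a power-law fit on a log-log scale, can be sketched in a few lines. This is a minimal Python sketch with hypothetical function names; the distributed package itself is MATLAB and additionally handles anchor-point detection, segmentation, and error estimates.

```python
import numpy as np

def time_averaged_msd(x, max_lag):
    """Time-averaged MSD of a 1-D trajectory x for lags 1..max_lag."""
    return np.array([np.mean((x[lag:] - x[:-lag]) ** 2)
                     for lag in range(1, max_lag + 1)])

def fit_power_law(lags, msd):
    """Fit MSD ~ A * lag**alpha by linear regression in log-log space."""
    alpha, log_a = np.polyfit(np.log(lags), np.log(msd), 1)
    return np.exp(log_a), alpha

# Brownian trajectory: diffusive scaling, so alpha should come out close to 1.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(20000))
lags = np.arange(1, 101)
amplitude, alpha = fit_power_law(lags, time_averaged_msd(x, 100))
```

Sub- or superdiffusive trajectories would give alpha below or above 1, which is exactly the exponent the program uses for its cutoff-based grouping.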
Running time: On an i5 Windows 7 machine with 4 GB RAM the automated parts of the run (excluding data loading and user input) take less than 45 minutes to analyze and save all stages for an 844 trajectory file, including optional PDF save. Trajectory length did not affect run time (tested up to 3600 frames/trajectory), which was on average 3.2±0.4 seconds per trajectory.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2016AGUFMIN13A1649T','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2016AGUFMIN13A1649T"><span>Extending Climate Analytics as a Service to the Earth System Grid Federation Progress Report on the Reanalysis Ensemble Service</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Tamkin, G.; Schnase, J. L.; Duffy, D.; Li, J.; Strong, S.; Thompson, J. H.</p> <p>2016-12-01</p> <p>We are extending climate analytics-as-a-service, including: (1) A high-performance Virtual Real-Time Analytics Testbed supporting six major reanalysis data sets using advanced technologies like the Cloudera Impala-based SQL and Hadoop-based MapReduce analytics over native NetCDF files. (2) A Reanalysis Ensemble Service (RES) that offers a basic set of commonly used operations over the reanalysis collections that are accessible through NASA's climate data analytics Web services and our client-side Climate Data Services Python library, CDSlib. (3) An Open Geospatial Consortium (OGC) WPS-compliant Web service interface to CDSLib to accommodate ESGF's Web service endpoints. 
This presentation will report on the overall progress of this effort, with special attention to recent enhancements that have been made to the Reanalysis Ensemble Service, including the following: - A CDSlib Python library that supports full temporal, spatial, and grid-based resolution services - A new reanalysis collections reference model to enable operator design and implementation - An enhanced library of sample queries to demonstrate and develop use case scenarios - Extended operators that enable single- and multiple-reanalysis area average, vertical average, re-gridding, and trend, climatology, and anomaly computations - Full support for the MERRA-2 reanalysis and the initial integration of two additional reanalyses - A prototype Jupyter notebook-based distribution mechanism that combines CDSlib documentation with interactive use case scenarios and personalized project management - Prototyped uncertainty quantification services that combine ensemble products with comparative observational products - Convenient, one-stop shopping for commonly used data products from multiple reanalyses, including basic subsetting and arithmetic operations over the data and extractions of trends, climatologies, and anomalies - The ability to compute and visualize multiple reanalysis intercomparisons</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015CG.....84...37J','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015CG.....84...37J"><span>Ensemble of surrogates-based optimization for identifying an optimal surfactant-enhanced aquifer remediation strategy at heterogeneous DNAPL-contaminated sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Jiang, Xue; Lu, Wenxi; Hou, Zeyu; Zhao, Haiqing; Na, Jin</p> <p>2015-11-01</p> <p>The purpose of this study was to identify an
optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as these four stand-alone surrogate models was compared. The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites.
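The ensemble-surrogate combination described above can be sketched as a weighted average of the individual surrogate predictions. The inverse-error weighting below is an illustrative assumption, and the predictions and validation errors are made-up numbers; the paper's exact combination rule is not reproduced here.

```python
import numpy as np

def ensemble_predict(surrogate_preds, val_errors):
    """Combine surrogate predictions, weighting each surrogate
    inversely to its validation error (assumed weighting scheme)."""
    w = 1.0 / np.asarray(val_errors, dtype=float)
    w /= w.sum()
    return w @ np.asarray(surrogate_preds)

# Two surrogates (e.g., KELM- and KRG-like) predicting nitrobenzene
# removal rates at three candidate remediation designs.
preds = [[0.90, 0.70, 0.55],
         [0.94, 0.66, 0.57]]
combined = ensemble_predict(preds, val_errors=[0.02, 0.01])
```

In the optimization loop, a cheap combined predictor like this replaces the expensive SEAR simulation inside the GA's cost evaluation.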
This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2015AGUFM.H52B..02L','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2015AGUFM.H52B..02L"><span>Ensemble of Surrogates-based Optimization for Identifying an Optimal Surfactant-enhanced Aquifer Remediation Strategy at Heterogeneous DNAPL-contaminated Sites</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lu, W., Sr.; Xin, X.; Luo, J.; Jiang, X.; Zhang, Y.; Zhao, Y.; Chen, M.; Hou, Z.; Ouyang, Q.</p> <p>2015-12-01</p> <p>The purpose of this study was to identify an optimal surfactant-enhanced aquifer remediation (SEAR) strategy for aquifers contaminated by dense non-aqueous phase liquid (DNAPL) based on an ensemble of surrogates-based optimization technique. A saturated heterogeneous medium contaminated by nitrobenzene was selected as a case study. A new kind of surrogate-based SEAR optimization employing an ensemble surrogate (ES) model together with a genetic algorithm (GA) is presented. Four methods, namely radial basis function artificial neural network (RBFANN), kriging (KRG), support vector regression (SVR), and kernel extreme learning machines (KELM), were used to create four individual surrogate models, which were then compared. The comparison enabled us to select the two most accurate models (KELM and KRG) to establish an ES model of the SEAR simulation model, and the developed ES model as well as these four stand-alone surrogate models was compared.
The results showed that the average relative error of the average nitrobenzene removal rates between the ES model and the simulation model for 20 test samples was 0.8%, indicating high approximation accuracy and showing that the ES model provides more accurate predictions than the stand-alone surrogate models. Then, a nonlinear optimization model was formulated for the minimum cost, and the developed ES model was embedded into this optimization model as a constraint. In addition, GA was used to solve the optimization model to provide the optimal SEAR strategy. The developed ensemble surrogate-optimization approach was effective in seeking a cost-effective SEAR strategy for heterogeneous DNAPL-contaminated sites. This research is expected to enrich and develop the theoretical and technical implications for the analysis of remediation strategy optimization of DNAPL-contaminated aquifers.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22364774-dynamic-stability-solar-system-statistically-inconclusive-results-from-ensemble-integrations','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22364774-dynamic-stability-solar-system-statistically-inconclusive-results-from-ensemble-integrations"><span>DYNAMIC STABILITY OF THE SOLAR SYSTEM: STATISTICALLY INCONCLUSIVE RESULTS FROM ENSEMBLE INTEGRATIONS</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Zeebe, Richard E., E-mail: zeebe@soest.hawaii.edu</p> <p></p> <p>Due to the chaotic nature of the solar system, the question of its long-term stability can only be answered in a statistical sense, for instance, based on numerical ensemble integrations of nearby orbits.
Destabilization of the inner planets, leading to close encounters and/or collisions, can be initiated through a large increase in Mercury's eccentricity, with a currently assumed likelihood of ∼1%. However, little is known at present about the robustness of this number. Here I report ensemble integrations of the full equations of motion of the eight planets and Pluto over 5 Gyr, including contributions from general relativity. The results show that different numerical algorithms lead to statistically different results for the evolution of Mercury's eccentricity (e{sub M}). For instance, starting at present initial conditions (e{sub M}≃0.21), Mercury's maximum eccentricity achieved over 5 Gyr is, on average, significantly higher in symplectic ensemble integrations using heliocentric rather than Jacobi coordinates and stricter error control. In contrast, starting at a possible future configuration (e{sub M}≃0.53), Mercury's maximum eccentricity achieved over the subsequent 500 Myr is, on average, significantly lower using heliocentric rather than Jacobi coordinates. For example, the probability for e{sub M} to increase beyond 0.53 over 500 Myr is >90% (Jacobi) versus only 40%-55% (heliocentric). This poses a dilemma because the physical evolution of the real system—and its probabilistic behavior—cannot depend on the coordinate system or the numerical algorithm chosen to describe it.
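The ensemble probabilities quoted above (>90% versus 40%-55%) are binomial estimates from a finite number of realizations, so each carries sampling uncertainty. A normal-approximation confidence interval makes that resolution explicit; the sketch below, including the 45-of-100 count, is illustrative and not the author's method.

```python
import numpy as np

def ensemble_probability(n_events, n_runs, z=1.96):
    """Event probability estimated from ensemble counts, with a
    normal-approximation 95% confidence interval clipped to [0, 1]."""
    p = n_events / n_runs
    half = z * np.sqrt(p * (1.0 - p) / n_runs)
    return p, max(0.0, p - half), min(1.0, p + half)

# E.g., 45 of 100 runs pushing Mercury's eccentricity beyond 0.53:
p, lo, hi = ensemble_probability(45, 100)
```

With 100 members the interval spans roughly ±10 percentage points, which is why ensemble results in this probability range can remain statistically inconclusive.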
Some tests of the numerical algorithms suggest that symplectic integrators using heliocentric coordinates underestimate the odds for destabilization of Mercury's orbit at high initial e{sub M}.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2018JHyd..556..634M','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2018JHyd..556..634M"><span>Comprehensive evaluation of Ensemble Multi-Satellite Precipitation Dataset using the Dynamic Bayesian Model Averaging scheme over the Tibetan plateau</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Ma, Yingzhao; Yang, Yuan; Han, Zhongying; Tang, Guoqiang; Maguire, Lane; Chu, Zhigang; Hong, Yang</p> <p>2018-01-01</p> <p>The objective of this study is to comprehensively evaluate the new Ensemble Multi-Satellite Precipitation Dataset using the Dynamic Bayesian Model Averaging scheme (EMSPD-DBMA) at daily and 0.25° scales from 2001 to 2015 over the Tibetan Plateau (TP). Error analysis against gauge observations revealed that EMSPD-DBMA captured the spatiotemporal pattern of daily precipitation with an acceptable Correlation Coefficient (CC) of 0.53 and a Relative Bias (RB) of -8.28%. Moreover, EMSPD-DBMA outperformed IMERG and GSMaP-MVK in almost all metrics in the summers of 2014 and 2015, with the lowest RB and Root Mean Square Error (RMSE) values of -2.88% and 8.01 mm/d, respectively. It also better reproduced the Probability Density Function (PDF) in terms of daily rainfall amount and estimated moderate and heavy rainfall better than both IMERG and GSMaP-MVK.
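The Bayesian model averaging merge behind a product like EMSPD-DBMA can be sketched as a weighted combination of member satellite fields. The static Gaussian-likelihood weight update below is an illustrative simplification; the dynamic scheme updates weights in time and space, and all numbers here are made up.

```python
import numpy as np

def bma_weights(member_errors, sigma=1.0):
    """Weights proportional to the Gaussian likelihood of each member's
    recent error against gauge observations (simplified, static update)."""
    ll = -0.5 * (np.asarray(member_errors, dtype=float) / sigma) ** 2
    w = np.exp(ll - ll.max())  # subtract max for numerical stability
    return w / w.sum()

def bma_merge(members, weights):
    """Merged precipitation field as the weighted average of members."""
    return np.tensordot(weights, np.asarray(members), axes=1)

# Two members (e.g., IMERG-like and GSMaP-like fields) on a tiny 2x2 grid:
members = np.array([[[2.0, 0.5], [1.0, 4.0]],
                    [[2.4, 0.3], [1.2, 3.6]]])
w = bma_weights([0.5, 1.5])  # member 1 matched gauges better recently
merged = bma_merge(members, w)
```

The member with the smaller recent error dominates the merge, which is the mechanism that lets the blended product beat each input product individually.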
Further, hydrological evaluation with the Coupled Routing and Excess STorage (CREST) model in the Upper Yangtze River region indicated that the EMSPD-DBMA-forced simulation showed satisfactory hydrological performance in terms of streamflow prediction, with Nash-Sutcliffe Efficiency (NSE) values of 0.82 and 0.58, compared to the gauge-forced simulation (0.88 and 0.60), for the calibration and validation periods, respectively. EMSPD-DBMA also reproduced peak flows better than the new Multi-Source Weighted-Ensemble Precipitation Version 2 (MSWEP V2) product, indicating promising hydrological utility for ensemble satellite precipitation data. This study is among the first comprehensive evaluations of blended multi-satellite precipitation data across the TP and should help improve the DBMA algorithm in regions with complex terrain.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('http://adsabs.harvard.edu/abs/2017AGUFM.A21F2211K','NASAADS'); return false;" href="http://adsabs.harvard.edu/abs/2017AGUFM.A21F2211K"><span>Can decadal climate predictions be improved by ocean ensemble dispersion filtering?</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Kadow, C.; Illing, S.; Kröner, I.; Ulbrich, U.; Cubasch, U.</p> <p>2017-12-01</p> <p>Decadal predictions by Earth system models aim to capture the state and phase of the climate several years in advance. Atmosphere-ocean interaction plays an important role in such climate forecasts. While short-term weather forecasts represent an initial value problem and long-term climate projections represent a boundary condition problem, decadal climate prediction falls in between these two time scales. The ocean's memory, due to its heat capacity, holds great potential for skill on the decadal scale.
In recent years, more precise initialization techniques for coupled Earth system models (incl. atmosphere and ocean) have improved decadal predictions. Ensembles are another important aspect: applying slightly perturbed initial states yields an ensemble, and using and evaluating the whole ensemble or its ensemble average, instead of a single prediction, improves a prediction system. However, climate models in general start losing the initialized signal and its predictive skill from one forecast year to the next. Here we show that the climate prediction skill of an Earth system model can be improved by a shift of the ocean state toward the ensemble mean of its individual members at seasonal intervals. We found that this procedure, called the ensemble dispersion filter, yields more accurate results than the standard decadal prediction. Global mean and regional temperature, precipitation, and winter cyclone predictions show increased skill up to 5 years ahead. Furthermore, the novel technique outperforms predictions with larger ensembles and higher resolution. Our results demonstrate how decadal climate predictions benefit from ocean ensemble dispersion filtering toward the ensemble mean.
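The ensemble dispersion filter described above, a nudge of every ocean state toward the ensemble mean at seasonal intervals, preserves the ensemble mean while shrinking the spread. A minimal sketch follows; the relaxation strength is an assumed parameter, not a value from the study.

```python
import numpy as np

def dispersion_filter(states, strength=0.5):
    """Nudge every ensemble member toward the ensemble mean.
    states: array (n_members, n_points); strength in [0, 1],
    where 1 collapses all members onto the mean."""
    mean = states.mean(axis=0)
    return states + strength * (mean - states)

# Toy ocean-state ensemble: 10 members, 1000 grid points.
rng = np.random.default_rng(0)
ensemble = rng.standard_normal((10, 1000))
filtered = dispersion_filter(ensemble, strength=0.5)
```

Because the update is linear, the ensemble mean is unchanged while member deviations scale by (1 - strength), which is the spread reduction the filter relies on.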
This study is part of MiKlip (fona-miklip.de), a major project on decadal climate prediction in Germany. We focus on the Max-Planck-Institute Earth System Model in its low-resolution version (MPI-ESM-LR) and MiKlip's basic initialization strategy, as in the decadal climate forecast published in 2017: http://www.fona-miklip.de/decadal-forecast-2017-2026/decadal-forecast-for-2017-2026/ More information about this study in JAMES: DOI: 10.1002/2016MS000787</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/29080301','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/29080301"><span>Assessing uncertainties in crop and pasture ensemble model simulations of productivity and N2O emissions.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ehrhardt, Fiona; Soussana, Jean-François; Bellocchi, Gianni; Grace, Peter; McAuliffe, Russel; Recous, Sylvie; Sándor, Renáta; Smith, Pete; Snow, Val; de Antoni Migliorati, Massimiliano; Basso, Bruno; Bhatia, Arti; Brilli, Lorenzo; Doltra, Jordi; Dorich, Christopher D; Doro, Luca; Fitton, Nuala; Giacomini, Sandro J; Grant, Brian; Harrison, Matthew T; Jones, Stephanie K; Kirschbaum, Miko U F; Klumpp, Katja; Laville, Patricia; Léonard, Joël; Liebig, Mark; Lieffering, Mark; Martin, Raphaël; Massad, Raia S; Meier, Elizabeth; Merbold, Lutz; Moore, Andrew D; Myrgiotis, Vasileios; Newton, Paul; Pattey, Elizabeth; Rolinski, Susanne; Sharp, Joanna; Smith, Ward N; Wu, Lianhai; Zhang, Qing</p> <p>2018-02-01</p> <p>Simulation models are extensively used to predict agricultural productivity and greenhouse gas emissions. However, the uncertainties of (reduced) model ensemble simulations have not been assessed systematically for variables affecting food security and climate change mitigation, within multi-species agricultural contexts.
We report an international model comparison and benchmarking exercise, showing the potential of multi-model ensembles to predict productivity and nitrous oxide (N2O) emissions for wheat, maize, rice and temperate grasslands. Using a multi-stage modelling protocol, from blind simulations (stage 1) to partial (stages 2-4) and full calibration (stage 5), 24 process-based biogeochemical models were assessed individually or as an ensemble against long-term experimental data from four temperate grassland and five arable crop rotation sites spanning four continents. Comparisons were performed by reference to the experimental uncertainties of observed yields and N2O emissions. Results showed that across sites and crop/grassland types, 23%-40% of the uncalibrated individual models were within two standard deviations (SD) of observed yields, while 42% (rice) to 96% (grasslands) of the models were within 1 SD of observed N2O emissions. At stage 1, ensembles formed from the three models with the lowest prediction errors predicted both yields and N2O emissions within experimental uncertainties for 44% and 33% of the crop and grassland growth cycles, respectively. Partial model calibration (stages 2-4) markedly reduced prediction errors of the full model ensemble E-median for crop grain yields (from 36% at stage 1 down to 4% on average) and grassland productivity (from 44% to 27%), and to a lesser and more variable extent for N2O emissions. Yield-scaled N2O emissions (N2O emissions divided by crop yields) were ranked accurately by three-model ensembles across crop species and field sites. The potential of using process-based model ensembles to predict jointly productivity and N2O emissions at field scale is discussed.
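The ensemble statistics used in the benchmarking above, the E-median and small ensembles built from the lowest-error models, can be sketched as follows. The function names and the toy yield numbers are my own illustration, not data from the study.

```python
import numpy as np

def e_median(preds):
    """Ensemble median prediction across models (axis 0 = models)."""
    return np.median(preds, axis=0)

def lowest_error_ensemble(preds, obs, k=3):
    """Mean prediction of the k models with the lowest RMSE vs. observations."""
    rmse = np.sqrt(np.mean((preds - obs) ** 2, axis=1))
    best = np.argsort(rmse)[:k]
    return preds[best].mean(axis=0)

# Five hypothetical models predicting yields at four site-years:
preds = np.array([[6.0, 5.5, 7.0, 4.0],
                  [6.2, 5.0, 7.4, 4.1],
                  [5.1, 4.2, 6.0, 3.2],
                  [6.1, 5.4, 7.1, 4.0],
                  [8.0, 7.5, 9.5, 6.0]])
obs = np.array([6.1, 5.3, 7.2, 4.0])
med = e_median(preds)
best3 = lowest_error_ensemble(preds, obs, k=3)
```

The median damps the influence of outlier models (like the last row here), which is one reason the E-median is a robust ensemble summary for skewed multi-model errors.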
© 2017 John Wiley &amp; Sons Ltd.</p> </li> </ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container --> </body> </html>