Sample records for hamburg large-scale geostrophic

  1. Modernization of the graphics post-processors of the Hamburg German Climate Computer Center Carbon Cycle Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, E.J.; McNeilly, G.S.

    The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing an equivalent end result. Reducing the existing code from more than 50,000 lines to approximately 7,500 lines has made the new code much easier to maintain. The existing code in the Hamburg models uses legacy NCAR graphics (including even emulated CALCOMP subroutines) to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.

  2. Large-Scale Flows and Magnetic Fields Produced by Rotating Convection in a Quasi-Geostrophic Model of Planetary Cores

    NASA Astrophysics Data System (ADS)

    Guervilly, C.; Cardin, P.

    2017-12-01

    Convection is the main heat transport process in the liquid cores of planets. The convective flows are thought to be turbulent and constrained by rotation (corresponding to high Reynolds numbers Re and low Rossby numbers Ro). Under these conditions, and in the absence of magnetic fields, the convective flows can produce coherent Reynolds stresses that drive persistent large-scale zonal flows. The formation of large-scale flows has crucial implications for the thermal evolution of planets and the generation of large-scale magnetic fields. In this work, we explore this problem with numerical simulations using a quasi-geostrophic approximation to model convective and zonal flows at Re ~ 10^4 and Ro ~ 10^-4 for Prandtl numbers relevant for liquid metals (Pr ~ 0.1). The formation of intense multiple zonal jets strongly affects the convective heat transport, leading to the formation of a mean temperature staircase. We also study the generation of magnetic fields by the quasi-geostrophic flows at low magnetic Prandtl numbers.

  3. Representation of fine scale atmospheric variability in a nudged limited area quasi-geostrophic model: application to regional climate modelling

    NASA Astrophysics Data System (ADS)

    Omrani, H.; Drobinski, P.; Dubos, T.

    2009-09-01

    In this work, we consider the effect of indiscriminate nudging time on the large and small scales of an idealized limited-area model simulation. The limited-area model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by its "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. Compared to a previous study by Salameh et al. (2009), who investigated the existence of an optimal nudging time minimizing the error on both large and small scales in a linear model, we here use a fully non-linear model, which allows us to represent the chaotic nature of the atmosphere: given the perfect quasi-geostrophic model, errors in the initial conditions, concentrated mainly in the smaller scales of motion, amplify and cascade into the larger scales, eventually resulting in a prediction with low skill. To quantify the predictability of our quasi-geostrophic model, we measure the rate of divergence of the system trajectories in phase space (the Lyapunov exponent) from a set of simulations initiated with a perturbation of a reference initial state. Predictability of the "global", periodic model is mostly controlled by the beta effect. In the LAM, predictability decreases as the domain size increases. The effect of large-scale nudging is then studied by using the "perfect model" approach. Two sets of experiments were performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic LAM, where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. In the two sets of experiments, the best spatial correlation between the nudged simulation and the reference is observed with a nudging time close to the predictability time.
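    The rate-of-divergence measurement described above can be sketched with the standard Benettin renormalization procedure. The snippet below is a minimal illustration only: it uses the Lorenz-63 system as a stand-in chaotic model rather than the paper's two-layer quasi-geostrophic model, and all function names and parameters are assumptions for the sketch.

    ```python
    import numpy as np

    def lorenz(x, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        # Stand-in chaotic system (Lorenz-63), not the QG model itself.
        return np.array([sigma * (x[1] - x[0]),
                         x[0] * (rho - x[2]) - x[1],
                         x[0] * x[1] - beta * x[2]])

    def step(x, dt):
        # Fourth-order Runge-Kutta step.
        k1 = lorenz(x)
        k2 = lorenz(x + 0.5 * dt * k1)
        k3 = lorenz(x + 0.5 * dt * k2)
        k4 = lorenz(x + dt * k3)
        return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    def largest_lyapunov(x0, dt=0.01, n_steps=20000, d0=1e-8):
        # Benettin method: integrate a reference and a perturbed trajectory,
        # renormalize their separation back to d0 at every step, and average
        # the accumulated logarithmic growth rate.
        x = np.array(x0, dtype=float)
        xp = x + np.array([d0, 0.0, 0.0])
        log_sum = 0.0
        for _ in range(n_steps):
            x, xp = step(x, dt), step(xp, dt)
            d = np.linalg.norm(xp - x)
            log_sum += np.log(d / d0)
            xp = x + (xp - x) * (d0 / d)   # renormalize the separation
        return log_sum / (n_steps * dt)

    lam = largest_lyapunov([1.0, 1.0, 1.0])
    print(lam)  # positive exponent, ~0.9 for standard Lorenz-63 parameters
    ```

    The reciprocal of the estimated exponent gives the predictability time scale used in the abstract's nudging-time comparison.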

  4. Downscaling ocean conditions: Experiments with a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Katavouta, A.; Thompson, K. R.

    2013-12-01

    The predictability of small-scale ocean variability, given the time history of the associated large scales, is investigated using a quasi-geostrophic model of two wind-driven gyres separated by an unstable, mid-ocean jet. Motivated by the recent theoretical study of Henshaw et al. (2003), we propose a straightforward method for assimilating information on the large scale in order to recover the small-scale details of the quasi-geostrophic circulation. The similarity of this method to the spectral nudging of limited-area atmospheric models is discussed. Results from the spectral nudging of the quasi-geostrophic model, and an independent multivariate regression-based approach, show that important features of the ocean circulation, including the position of the meandering mid-ocean jet and the associated pinch-off eddies, can be recovered from the time history of a small number of large-scale modes. We next propose a hybrid approach for assimilating both the large scales and additional observed time series from a limited number of locations that alone are too sparse to recover the small scales using traditional assimilation techniques. The hybrid approach significantly improved the recovery of the small scales. The results highlight the importance of the coupling between length scales in downscaling applications, and the value of assimilating limited point observations after the large scales have been set correctly. The application of the hybrid and spectral nudging approaches to practical ocean forecasting, and to projecting changes in ocean conditions on climate time scales, is discussed briefly.
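    The spectral-nudging idea referred to here, relaxing only the large-scale part of a model field toward reference data while leaving the small scales free to evolve, can be sketched for a doubly periodic 2-D field as follows. The function and parameter names are illustrative assumptions, not taken from the paper.

    ```python
    import numpy as np

    def spectral_nudge(field, reference, k_cut, dt, tau):
        # Relax only the large-scale Fourier modes (|k| <= k_cut) of a
        # doubly periodic 2-D field toward a reference field, with
        # relaxation time tau; modes above k_cut are left untouched.
        fh = np.fft.fft2(field)
        rh = np.fft.fft2(reference)
        ny, nx = field.shape
        ky = np.fft.fftfreq(ny) * ny          # integer wavenumbers
        kx = np.fft.fftfreq(nx) * nx
        kmag = np.sqrt(ky[:, None] ** 2 + kx[None, :] ** 2)
        mask = kmag <= k_cut                  # large scales only
        fh[mask] += (dt / tau) * (rh[mask] - fh[mask])
        return np.real(np.fft.ifft2(fh))
    ```

    In a model loop this would be applied once per time step after the dynamics update, so the small-scale field stays free while its large-scale envelope tracks the driving data.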

  5. On Instability of Geostrophic Current with Linear Vertical Shear at Length Scales of Interleaving

    NASA Astrophysics Data System (ADS)

    Kuzmina, N. P.; Skorokhodov, S. L.; Zhurbas, N. V.; Lyzhkov, D. A.

    2018-01-01

    The instability of long-wave disturbances of a geostrophic current with linear velocity shear is studied with allowance for the diffusion of buoyancy. A detailed derivation of the model problem in dimensionless variables is presented, which is used for analyzing the dynamics of disturbances in a vertically bounded layer and for describing the formation of large-scale intrusions in the Arctic basin. The problem is solved numerically with a high-precision method developed for solving fourth-order differential equations. It is established that the spectrum contains an eigenvalue corresponding to unstable (growing with time) disturbances, which are characterized by a phase velocity exceeding the maximum velocity of the geostrophic flow. A discussion is presented to explain some features of the instability.

  6. Nonlinear Theory of The Geostrophic Adjustment

    NASA Astrophysics Data System (ADS)

    Zeitlin, V.

    Nonlinear geostrophic adjustment and the splitting of the fast and slow dynamical variables are analysed in the framework of multi-layer and continuously stratified primitive equations by means of multi-scale perturbation theory in the Rossby number applied to localized initial disturbances. Two basic dynamical regimes are considered: the quasi-geostrophic (QG) and the frontal geostrophic (FG), with small and large deviations of the isopycnal surfaces, respectively, and differences in the corresponding adjustment scenarios are displayed. Decoupling of the fast component of the flow is proven up to third order in the Rossby number, and long-time corrections to the standard balanced QG and FG models are found. Peculiarities of splitting in the FG regime due to quasi-inertial oscillations are displayed, and a Schrödinger-like modulation equation for the envelope of these oscillations is derived.

  7. Nudging and predictability in regional climate modelling: investigation in a nested quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Omrani, Hiba; Drobinski, Philippe; Dubos, Thomas

    2010-05-01

    In this work, we consider the effect of indiscriminate and spectral nudging on the large and small scales of an idealized model simulation. The model is a two-layer quasi-geostrophic model on the beta-plane driven at its boundaries by the "global" version with periodic boundary conditions. This setup mimics the configuration used for regional climate modelling. The effect of large-scale nudging is studied by using the "perfect model" approach. Two sets of experiments are performed: (1) the effect of nudging is investigated with a "global" high-resolution two-layer quasi-geostrophic model driven by a low-resolution two-layer quasi-geostrophic model; (2) similar simulations are conducted with the two-layer quasi-geostrophic Limited Area Model (LAM), where the size of the LAM domain comes into play in addition to the factors in the first set of simulations. The study shows that the indiscriminate nudging time that minimizes the error at both the large and small scales is close to the predictability time. For spectral nudging, the optimum nudging time should tend to zero, since the best large-scale dynamics is supposed to be given by the driving fields. However, because the driving large-scale fields are generally available at a much lower frequency than the model time step (e.g., 6-hourly analyses), with basic interpolation between fields, the optimum nudging time differs from zero while remaining smaller than the predictability time.

  8. Helicity, geostrophic balance and mixing in rotating stratified turbulence: a multi-scale problem

    NASA Astrophysics Data System (ADS)

    Pouquet, A.; Marino, R.; Mininni, P.; Rorai, C.; Rosenberg, D. L.

    2012-12-01

    Interactions between winds and waves have important roles in planetary and oceanic boundary layers, affecting momentum, heat, and CO2 transport. Within the abyssal Southern Ocean at mid-latitudes, this may result in a mixed layer that is too shallow in climate models, thereby affecting the overall evolution because of poor handling of wave breaking, as in Kelvin-Helmholtz instabilities: gravity waves couple nonlinearly on slow time scales and undergo steepening through resonant interactions, or due to the presence of shear. In the oceans, sub-mesoscale frontogenesis and significant departures from quasi-geostrophy can be seen as turbulence intensifies. The ensuing anomalous vertical dispersion may not be simply modeled by a random walk, due to intermittent structures, wave propagation, and their interactions. Conversely, the energy and seeds required for such intermittent events to occur, say in the stable planetary boundary layer, may come from the wave field that is perturbed, or from winds and the effect of topography. Under the assumption of stationarity, weak nonlinearities, dissipation, and forcing, one obtains large-scale geostrophic balance linking the pressure gradient, gravity, and the Coriolis force. The role of helicity (velocity-vorticity correlations) has not received as much attention outside the realm of astrophysics, where it is considered in the growth of large-scale magnetic fields. However, it is measured routinely in the atmosphere in order to gauge the likelihood of supercell convective storms strengthening, and it may be a factor to consider in the formation of hurricanes. In this context, we examine the transition from a wave-dominated regime to an isotropic small-scale turbulent one in rotating flows with helical forcing. Using a direct numerical simulation (DNS) on a 3072^3 grid with Rossby and

  9. On the coupled evolution of oceanic internal waves and quasi-geostrophic flow

    NASA Astrophysics Data System (ADS)

    Wagner, Gregory LeClaire

    Oceanic motion outside thin boundary layers is primarily a mixture of quasi-geostrophic flow and internal waves with either near-inertial frequencies or the frequency of the semidiurnal lunar tide. This dissertation seeks a deeper understanding of waves and flow through reduced models that isolate their nonlinear and coupled evolution from the Boussinesq equations. Three physical-space models are developed: an equation that describes quasi-geostrophic evolution in an arbitrary and prescribed field of hydrostatic internal waves; a three-component model that couples quasi-geostrophic flow to both near-inertial waves and the near-inertial second harmonic; and a model for the slow evolution of hydrostatic internal tides in quasi-geostrophic flow of near-arbitrary scale. This slow internal tide equation opens the path to a coupled model for the energetic interaction of quasi-geostrophic flow and oceanic internal tides. Four results emerge. First, the wave-averaged quasi-geostrophic equation reveals that finite-amplitude waves give rise to a mean flow that advects quasi-geostrophic potential vorticity. Second is the definition of a new material invariant: Available Potential Vorticity, or APV. APV isolates the part of Ertel potential vorticity available for balanced-flow evolution in Eulerian frames and proves necessary in separating waves from quasi-geostrophic flow. The third result, hashed out for near-inertial waves and quasi-geostrophic flow, is that wave-flow interaction leads to energy exchange even under conditions of weak nonlinearity. For storm-forced oceanic near-inertial waves the interaction often energizes waves at the expense of flow. We call this extraction of balanced quasi-geostrophic energy 'stimulated generation' since it requires externally forced rather than spontaneously generated waves. The fourth result is that quasi-geostrophic flow can encourage or 'catalyze' a nonlinear interaction between a near-inertial wave field and its second harmonic.

  10. Large Eddy Simulations of a Bottom Boundary Layer Under a Shallow Geostrophic Front

    NASA Astrophysics Data System (ADS)

    Bateman, S. P.; Simeonov, J.; Calantoni, J.

    2017-12-01

    The unstratified surf zone and the stratified shelf waters are often separated by dynamic fronts that can strongly impact the character of the Ekman bottom boundary layer. Here, we use large eddy simulations to study the turbulent bottom boundary layer associated with a geostrophic current on a stratified shelf of uniform depth. The simulations are initialized with a spatially uniform vertical shear that is in geostrophic balance with a pressure gradient due to a linear horizontal temperature variation. Superposed on the temperature front is a stable vertical temperature gradient. As turbulence develops near the bottom, the turbulence-induced mixing gradually erodes the initial uniform temperature stratification and a well-mixed layer grows in height until the turbulence becomes fully developed. The simulations provide the spatial distribution of the turbulent dissipation and the Reynolds stresses in the fully developed boundary layer. We vary the initial linear stratification and investigate its effect on the height of the bottom boundary layer and the turbulence statistics. The results are compared to previous models and simulations of stratified bottom Ekman layers.

  11. Quasi-geostrophic dynamo theory

    NASA Astrophysics Data System (ADS)

    Calkins, Michael A.

    2018-03-01

    The asymptotic theory of rapidly rotating, convection-driven dynamos in a plane layer is discussed. A key characteristic of these quasi-geostrophic dynamos is that the Lorentz force is comparable in magnitude to the ageostrophic component of the Coriolis force, rather than the leading order component that yields geostrophy. This characteristic is consistent with both observations of planetary dynamos and numerical dynamo investigations, where the traditional Elsasser number Λ_T = O(1). Thus, while numerical dynamo simulations currently cannot access the strongly turbulent flows that are thought to be characteristic of planetary interiors, it is argued that they are in the appropriate geostrophically balanced regime provided that inertial and viscous forces are both small relative to the leading order Coriolis force. Four distinct quasi-geostrophic dynamo regimes are discussed, with each regime characterized by a unique magnetic to kinetic energy density ratio and differing dynamics. The axial torque due to the Lorentz force is shown to be asymptotically small for such quasi-geostrophic dynamos, suggesting that 'Taylor's constraint' represents an ambiguous measure of the primary force balance in a rapidly rotating dynamo.
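    For reference, the traditional Elsasser number mentioned above is the textbook ratio of the Lorentz force to the Coriolis force (a standard definition, not specific to this paper):

    ```latex
    \Lambda_T \;=\; \frac{B^2}{\rho\,\mu_0\,\eta\,\Omega},
    ```

    where B is the magnetic field strength, ρ the fluid density, μ₀ the magnetic permeability, η the magnetic diffusivity, and Ω the rotation rate; Λ_T = O(1) then states that magnetic and rotational effects are comparable at this order.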

  12. Hamburger hazards and emotions.

    PubMed

    Olsen, Nina Veflen; Røssvoll, Elin; Langsrud, Solveig; Scholderer, Joachim

    2014-07-01

    Previous studies indicate that many consumers eat rare hamburgers and that information about microbiological hazards related to undercooked meat does not necessarily lead to more responsible behavior. With this study we aim to investigate whether consumers' willingness to eat hamburgers depends on the emotions they experience when confronted with the food. A representative sample of 1046 Norwegian consumers participated in an online experiment. In the first part, participants were randomly divided into two groups. One group was confronted with a picture of a rare hamburger, whereas the other group was confronted with a picture of a well-done hamburger. The respondents were instructed to imagine that they were served the hamburger in the picture and then to indicate which emotions they experienced: fear, disgust, surprise, interest, pleasure, or none of these. In part two, all respondents were confronted with four pictures of hamburgers cooked to different degrees of doneness (rare, medium rare, medium well-done, well-done), and were asked to state their likelihood of eating each. We analyzed the data by means of a multivariate probit model and two linear fixed-effect models. The results show that confrontation with rare hamburgers evokes more fear and disgust than confrontation with well-done hamburgers, that all hamburgers trigger pleasure and interest, and that a consumer's willingness to eat rare hamburgers depends on the particular type of emotion evoked. These findings indicate that emotions play an important role in a consumer's likelihood of eating risky food, and should be considered when developing food safety strategies. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Flooding near Hamburg, Iowa

    NASA Image and Video Library

    2017-12-08

    NASA image acquired July 17, 2011 In mid-July 2011, more than a month after the Missouri River broke through two levees and flooded fields near Hamburg, Iowa, muddy water lingered near the city. Hamburg residents were relieved, however, that a newly built levee had spared the town from flooding. On July 17, 2011, the Advanced Land Imager (ALI) on NASA’s Earth Observing-1 (EO-1) satellite captured this natural-color image. Compared to an image acquired on June 24, flooding has apparently receded slightly in some areas. Sediment-choked water nevertheless lingers on large swaths of land. On July 13, 2011, KETV of Omaha, Nebraska, reported that a newly built, 2-mile levee designed to protect Hamburg already exceeded federal standards. The U.S. Army Corps of Engineers handed control of the levee over to city officials on July 12. In the end, the levee was expected to cost the Army Corps $6 million, and the city of Hamburg about $800,000. On July 18, 2011, the Advanced Hydrological Prediction Service reported moderate flooding along the Missouri River not far from Hamburg, Iowa. In the northwest, the river reached 24.37 feet (7.43 meters) at Nebraska City. In the southeast, the river reached 38.98 feet (11.88 meters) at Brownville, Nebraska. NASA Earth Observatory image created by Jesse Allen and Robert Simmon, using EO-1 ALI data provided courtesy of the NASA EO-1 team. Caption by Michon Scott. Instrument: EO-1 - ALI Credit: NASA Earth Observatory

  14. A study of the adequacy of quasi-geostrophic dynamics for modeling the effect of frontal cyclones on the larger scale flow

    NASA Technical Reports Server (NTRS)

    Mudrick, S.

    1985-01-01

    The validity of quasi-geostrophic (QG) dynamics was tested against primitive equation (PE) dynamics for modeling the effect of cyclone waves on the larger-scale flow. The formation of frontal cyclones and the dynamics of occluded frontogenesis were studied. Surface friction runs with the PE model and the wavelength of maximum instability are described. A fine-resolution PE simulation of a polar low is also described.

  15. Large-scale Density Structures in Magneto-rotational Disk Turbulence

    NASA Astrophysics Data System (ADS)

    Youdin, Andrew; Johansen, A.; Klahr, H.

    2009-01-01

    Turbulence generated by the magneto-rotational instability (MRI) is a strong candidate to drive accretion flows in disks, including sufficiently ionized regions of protoplanetary disks. The MRI is often studied in local shearing boxes, which model a small section of the disk at high resolution. I will present simulations of large, stratified shearing boxes that extend up to 10 gas scale-heights across. These simulations are a useful bridge to fully global disk simulations. We find that MRI turbulence produces large-scale, axisymmetric density perturbations. These structures are part of a zonal flow --- analogous to the banded flow in Jupiter's atmosphere --- which survives in near geostrophic balance for tens of orbits. The launching mechanism is large-scale magnetic tension generated by an inverse cascade. We demonstrate the robustness of these results by careful study of various box sizes, grid resolutions, and microscopic diffusion parameterizations. These gas structures can trap solid material (in the form of large dust or ice particles) with important implications for planet formation. Resolved disk images at mm-wavelengths (e.g. from ALMA) will verify or constrain the existence of these structures.

  16. The Benign Hamburger.

    ERIC Educational Resources Information Center

    Peaslee, Graham; Lantz, Juliette M.; Walczak, Mary M.

    1998-01-01

    Uses a case study of food poisoning from hamburgers at the fictitious Jill-at-the-Grill to teach the nuclear science behind food irradiation. Includes case teaching notes on the benign hamburger. (ASK)

  17. Modelling the urban air quality in Hamburg with the new city-scale chemistry transport model CityChem

    NASA Astrophysics Data System (ADS)

    Karl, Matthias; Ramacher, Martin; Aulinger, Armin; Matthias, Volker; Quante, Markus

    2017-04-01

    Air quality modelling plays an important role by providing guidelines for efficient air pollution abatement measures. Currently, most urban dispersion models treat air pollutants as passive tracer substances or use highly simplified chemistry when simulating air pollutant concentrations on the city scale. The newly developed urban chemistry-transport model CityChem has the capability of modelling the photochemical transformation of multiple pollutants along with atmospheric diffusion to produce pollutant concentration fields for the entire city on a horizontal resolution of 100 m or even finer and a vertical resolution of 24 layers up to 4000 m height. CityChem is based on the Eulerian urban dispersion model EPISODE of the Norwegian Institute for Air Research (NILU). CityChem treats the complex photochemistry in cities using detailed EMEP chemistry on an Eulerian 3-D grid, while using simple photo-stationary equilibrium on a much higher resolution grid (receptor grid), i.e. close to industrial point sources and traffic sources. The CityChem model takes into account that long-range transport contributes to urban pollutant concentrations. This is done by using 3-D boundary concentrations for the city domain derived from chemistry-transport simulations with the regional air quality model CMAQ. For the study of the air quality in Hamburg, CityChem was set up with a main grid of 30×30 grid cells of 1×1 km2 each and a receptor grid of 300×300 grid cells of 100×100 m2. The CityChem model was driven with meteorological data generated by the prognostic meteorology component of the Australian chemistry-transport model TAPM. Bottom-up inventories of emissions from traffic, industry, and households were based on data of the municipality of Hamburg. Shipping emissions for the port of Hamburg were taken from the Clean North Sea Shipping project. Episodes with elevated ozone (O3) were of specific interest for this study, as these are associated with exceedances of the World

  18. Derivation of Inviscid Quasi-geostrophic Equation from Rotational Compressible Magnetohydrodynamic Flows

    NASA Astrophysics Data System (ADS)

    Kwon, Young-Sam; Lin, Ying-Chieh; Su, Cheng-Fang

    2018-04-01

    In this paper, we consider compressible models of magnetohydrodynamic flows, which give rise to a variety of mathematical problems in many areas. We derive a rigorous quasi-geostrophic equation governed by the magnetic field from rotational compressible magnetohydrodynamic flows with well-prepared initial data. This is the first derivation of a quasi-geostrophic equation governed by the magnetic field; the tool is the relative entropy method. This paper covers two results: the existence of a unique local strong solution of the quasi-geostrophic equation with good regularity, and the derivation of the quasi-geostrophic equation itself.

  19. Dynamical analysis of extreme precipitation in the US northeast based on large-scale meteorological patterns

    NASA Astrophysics Data System (ADS)

    Agel, Laurie; Barlow, Mathew; Colby, Frank; Binder, Hanin; Catto, Jennifer L.; Hoell, Andrew; Cohen, Judah

    2018-05-01

    Previous work has identified six large-scale meteorological patterns (LSMPs) of dynamic tropopause height associated with extreme precipitation over the Northeast US, with extreme precipitation defined as the top 1% of daily station precipitation. Here, we examine the three-dimensional structure of the tropopause LSMPs in terms of circulation and factors relevant to precipitation, including moisture, stability, and synoptic mechanisms associated with lifting. Within each pattern, the link between the different factors and extreme precipitation is further investigated by comparing the relative strength of the factors between days with and without the occurrence of extreme precipitation. The six tropopause LSMPs include two ridge patterns, two eastern US troughs, and two troughs centered over the Ohio Valley, with a strong seasonality associated with each pattern. Extreme precipitation in the ridge patterns is associated with both convective mechanisms (instability combined with moisture transport from the Great Lakes and Western Atlantic) and synoptic forcing related to Great Lakes storm tracks and embedded shortwaves. Extreme precipitation associated with eastern US troughs involves intense southerly moisture transport and strong quasi-geostrophic forcing of vertical velocity. Ohio Valley troughs are associated with warm fronts and intense warm conveyor belts that deliver large amounts of moisture ahead of storms, but little direct quasi-geostrophic forcing. Factors that show the largest difference between days with and without extreme precipitation include integrated moisture transport, low-level moisture convergence, warm conveyor belts, and quasi-geostrophic forcing, with the relative importance varying between patterns.

  20. A Theory For The Variability of The Baroclinic Quasi-geostrophic Wind Driven Circulation.

    NASA Astrophysics Data System (ADS)

    Ben Jelloul, M.; Huck, T.

    We propose a theory of the wind-driven circulation based on the large-scale (i.e. small Burger number) quasi-geostrophic assumptions retained in the Rhines and Young (1982) classical study of the steady baroclinic flow. We therefore use multiple-time-scale and asymptotic expansions to separate the steady and time-dependent components of the flow. The barotropic flow is given by the Sverdrup balance. At first order in the Burger number, the baroclinic flow can be decomposed into two parts. A steady contribution ensures no flow in the deep layer, which is at rest in the absence of dissipative processes. Since baroclinic instability is inhibited at large scale, a spectrum of neutral modes also arises. These are of three types: classical Rossby basin modes deformed through advection by the barotropic flow, recirculating modes localized in the recirculation gyre, and blocked modes corresponding to closed potential vorticity contours. At the next order in the Burger number, amplitude equations for the baroclinic modes are derived. If dissipative processes are included at this order, the system adjusts towards the Rhines and Young solution with a homogenized potential vorticity pool.
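    The Sverdrup balance invoked for the barotropic flow is the classical relation between depth-integrated meridional transport and wind-stress curl (standard textbook form, stated here for reference):

    ```latex
    \beta V \;=\; \frac{1}{\rho_0}\,\hat{\mathbf{z}}\cdot\left(\nabla\times\boldsymbol{\tau}\right),
    \qquad V = \int_{-H}^{0} v\,dz,
    ```

    where β is the meridional gradient of the Coriolis parameter, ρ₀ a reference density, and τ the surface wind stress.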

  1. Optimum use of CDOT French and Hamburg data (French and Hamburg tests).

    DOT National Transportation Integrated Search

    2013-11-01

    The Colorado Department of Transportation (CDOT) has been collecting data from the Hamburg Rutter and the French Rutter for over 20 years. No specifications have been written in that time for either the Hamburg Rutter or the French Rutter. This r...

  2. Geostrophic balance with a full Coriolis Force: implications for low latitude studies

    NASA Technical Reports Server (NTRS)

    Juarez, M. de la Torre

    2002-01-01

    In its standard form, geostrophic balance uses a partial representation of the Coriolis force. The resulting formulation has a singularity at the equator and violates mass and momentum conservation. When the horizontal projection of the planetary rotation vector is considered, the singularity at the equator disappears, continuity can be preserved, and quasigeostrophy can be formulated at planetary scale.
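    In the standard form referred to above, the balance reads (textbook relations with only the vertical component of the rotation vector; the paper's full-Coriolis generalization is not reproduced here):

    ```latex
    f\,u = -\frac{1}{\rho}\frac{\partial p}{\partial y}, \qquad
    f\,v = \frac{1}{\rho}\frac{\partial p}{\partial x}, \qquad
    f = 2\Omega\sin\varphi,
    ```

    so the diagnosed velocities diverge as f → 0 at the equator (φ → 0), which is the singularity the abstract describes.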

  3. Shallow Water Quasi-Geostrophic Theory on the Sphere

    NASA Astrophysics Data System (ADS)

    Schubert, Wayne H.; Taft, Richard K.; Silvers, Levi G.

    2009-02-01

    Quasi-geostrophic theory forms the basis for much of our understanding of mid-latitude atmospheric dynamics. The theory is typically presented in either its f-plane form or its β-plane form. However, for many applications, including diagnostic use in global climate modeling, a fully spherical version would be most useful. Such a global theory does in fact exist and has for many years, but few in the scientific community seem to have ever been aware of it. In the context of shallow water dynamics, it is shown that the spherical version of quasigeostrophic theory is easily derived (re-derived) based on a partitioning of the flow between nondivergent and irrotational components, as opposed to a partitioning between geostrophic and ageostrophic components. In this way, the invertibility principle is expressed as a relation between the streamfunction and the potential vorticity, rather than between the geopotential and the potential vorticity. This global theory is then extended by showing that the invertibility principle can be solved analytically using spheroidal harmonic transforms, an advancement that greatly improves the usefulness of this "forgotten" theory. When the governing equation for the time evolution of the potential vorticity is linearized about a state of rest, a simple Rossby-Haurwitz wave dispersion relation is derived and examined. These waves have a horizontal structure described by spheroidal harmonics, and the Rossby-Haurwitz wave frequencies are given in terms of the eigenvalues of the spheroidal harmonic operator. Except for sectoral harmonics with low zonal wavenumber, the quasi-geostrophic Rossby-Haurwitz frequencies agree very well with those calculated from the primitive equations. One of the many possible applications of spherical quasi-geostrophic theory is to the study of quasi-geostrophic turbulence on the sphere. In this context, the theory is used to derive an anisotropic Rhines barrier in three-dimensional wavenumber space.

  4. Use of surface drifters to increase resolution and accuracy of oceanic geostrophic circulation mapped from satellite only (altimetry and gravimetry)

    NASA Astrophysics Data System (ADS)

    Mulet, Sandrine; Rio, Marie-Hélène; Etienne, Hélène

    2017-04-01

    Strong improvements have been made in our knowledge of the surface ocean geostrophic circulation thanks to satellite observations. For instance, the use of the latest GOCE (Gravity field and steady-state Ocean Circulation Explorer) geoid model with altimetry data gives a good estimate of the mean oceanic circulation at spatial scales down to 125 km. However, surface drifters are essential to resolve smaller scales, so it is mandatory to carefully process drifter data and then to combine these different data sources. In this framework, the global 1/4° CNES-CLS13 Mean Dynamic Topography (MDT) and the associated mean geostrophic currents have been computed (Rio et al., 2014). First, a satellite-only MDT was computed from altimetric and gravimetric data. Then, an important part of the work was to pre-process the drifter data to extract only the geostrophic component, in order to be consistent with the physical content of the satellite-only MDT. This step includes estimating and removing the Ekman current and wind slippage. Finally, the drifters and the satellite-only MDT were combined. Similar approaches are used regionally to push toward higher resolution, for instance in the Agulhas Current or along the Brazilian coast. A case study in the Gulf of Mexico also uses drifters in the same way to improve weekly geostrophic current estimates.

  5. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
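
    The minimum action method mentioned above can be illustrated on the simplest bistable system, an overdamped double-well Langevin dynamics. The sketch below (a hypothetical minimal example, not the authors' code) discretizes the Freidlin-Wentzell action S[x] = (1/4) ∫ (dx/dt - b(x))² dt for the drift b(x) = x - x³ and minimizes it over paths joining the two attractors x = -1 and x = +1; with this normalization the minimal action should approach the potential barrier height V(0) - V(-1) = 1/4.

```python
import numpy as np
from scipy.optimize import minimize

def drift(x):
    """Drift b(x) = -V'(x) for the double well V(x) = (x^2 - 1)^2 / 4."""
    return x - x**3

def action(interior, x0, x1, dt):
    """Discretized Freidlin-Wentzell action S = (1/4) * sum (xdot - b)^2 * dt."""
    x = np.concatenate(([x0], interior, [x1]))
    xdot = np.diff(x) / dt
    xmid = 0.5 * (x[:-1] + x[1:])          # evaluate the drift at midpoints
    return 0.25 * np.sum((xdot - drift(xmid)) ** 2) * dt

T, n = 20.0, 100                            # transition time and number of steps
dt = T / n
x0, x1 = -1.0, 1.0                          # the two coexisting attractors
init = np.linspace(x0, x1, n + 1)[1:-1]     # straight-line initial guess

res = minimize(action, init, args=(x0, x1, dt), method="L-BFGS-B")
path = np.concatenate(([x0], res.x, [x1]))
print(f"minimal action = {res.fun:.3f}  (barrier height = 0.25)")
```

    For this gradient (equilibrium) case the minimizing path is the time-reversed relaxation path, which is the kind of analytical prediction the abstract refers to; in non-equilibrium regimes only the numerical minimization remains available.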

  6. Wave Response during Hydrostatic and Geostrophic Adjustment. Part I: Transient Dynamics.

    NASA Astrophysics Data System (ADS)

    Chagnon, Jeffrey M.; Bannon, Peter R.

    2005-05-01

    The adjustment of a compressible, stably stratified atmosphere to sources of hydrostatic and geostrophic imbalance is investigated using a linear model. Imbalance is produced by prescribed, time-dependent injections of mass, heat, or momentum that model those processes considered "external" to the scales of motion on which the linearization and other model assumptions are justifiable. Solutions are demonstrated in response to a localized warming characteristic of small isolated clouds, larger thunderstorms, and convective systems. For a semi-infinite atmosphere, solutions consist of a set of vertical modes of continuously varying wavenumber, each of which contains time dependencies classified as steady, acoustic wave, and buoyancy wave contributions. Additionally, a rigid lower-boundary condition implies the existence of a discrete mode (the Lamb mode) containing only a steady and an acoustic wave contribution. The forced solutions are generalized in terms of a temporal Green's function, which represents the response to an instantaneous injection. The response to an instantaneous warming with geometry representative of a small, isolated cloud takes place in two stages. Within the first few minutes, acoustic and Lamb waves accomplish an expansion of the heated region. Within the first quarter-hour, nonhydrostatic buoyancy waves accomplish an upward displacement inside of the heated region with inflow below, outflow above, and weak subsidence on the periphery, all mainly accomplished by the lowest vertical wavenumber modes, which have the largest horizontal group speed. More complicated transient patterns of inflow aloft and outflow along the lower boundary are accomplished by higher vertical wavenumber modes. Among these is an outwardly propagating rotor along the lower boundary that effectively displaces the low-level inflow upward and outward. A warming of 20 min duration with geometry representative of a large thunderstorm generates only a weak acoustic

  7. Nonlinear Cascades of Surface Oceanic Geostrophic Kinetic Energy in the Frequency Domain

    DTIC Science & Technology

    2012-09-01

    kinetic energy in wavenumber k space for surface ocean geostrophic flows have been computed from satellite altimetry data of sea surface height (Scott...≤ 0.65 k_N, where k_N corresponds to the Nyquist scale. The filter is applied to q̂1 and q̂2, the Fourier transforms of q1 and q2, at every time step

  8. Turbulent convection in geostrophic circulation with wind and buoyancy forcing

    NASA Astrophysics Data System (ADS)

    Sohail, Taimoor; Gayen, Bishakhdatta; Hogg, Andy

    2017-11-01

    We conduct a direct numerical simulation of geostrophic circulation forced by surface wind and buoyancy to model a circumpolar ocean. The imposed buoyancy forcing (represented by Rayleigh number) drives a zonal current and supports small-scale convection in the buoyancy destabilizing region. In addition, we observe eddy activity which transports heat southward, supporting a large amount of heat uptake. Increasing wind stress enhances the meridional buoyancy gradient, triggering more eddy activity inside the boundary layer. Therefore, heat uptake increases with higher wind stress. The majority of dissipation is confined within the surface boundary layer, while mixing is dominant inside the convective plume and the buoyancy destabilizing region of the domain. The relative strength of the mixing and dissipation in the system can be expressed by mixing efficiency. This study finds that mixing is much greater than viscous dissipation, resulting in higher values of mixing efficiency than previously used. Supported by Australian Research Council Grant DP140103706.

  9. Langevin Dynamics, Large Deviations and Instantons for the Quasi-Geostrophic Model and Two-Dimensional Euler Equations

    NASA Astrophysics Data System (ADS)

    Bouchet, Freddy; Laurie, Jason; Zaboronski, Oleg

    2014-09-01

    We investigate a class of simple models for Langevin dynamics of turbulent flows, including the one-layer quasi-geostrophic equation and the two-dimensional Euler equations. Starting from a path integral representation of the transition probability, we compute the most probable fluctuation paths from one attractor to any state within its basin of attraction. We prove that such fluctuation paths are the time reversed trajectories of the relaxation paths for a corresponding dual dynamics, which are also within the framework of quasi-geostrophic Langevin dynamics. Cases with or without detailed balance are studied. We discuss a specific example for which the stationary measure displays either a second order (continuous) or a first order (discontinuous) phase transition and a tricritical point. In situations where a first order phase transition is observed, the dynamics are bistable. Then, the transition paths between two coexisting attractors are instantons (fluctuation paths from an attractor to a saddle), which are related to the relaxation paths of the corresponding dual dynamics. For this example, we show how one can analytically determine the instantons and compute the transition probabilities for rare transitions between two attractors.

  10. Generation of large-scale intrusions at baroclinic fronts: an analytical consideration with a reference to the Arctic Ocean

    NASA Astrophysics Data System (ADS)

    Kuzmina, Natalia

    2016-12-01

    Analytical solutions are found for the problem of instability of a weak geostrophic flow with linear velocity shear accounting for vertical diffusion of buoyancy. The analysis is based on the potential-vorticity equation in a long-wave approximation when the horizontal scale of disturbances is considered much larger than the local baroclinic Rossby radius. It is hypothesized that the solutions found can be applied to describe stable and unstable disturbances of the planetary scale with respect, in particular, to the Arctic Ocean, where weak baroclinic fronts with typical temporal variability periods on the order of several years or more have been observed and the β effect is negligible. Stable (decaying with time) solutions describe disturbances that, in contrast to the Rossby waves, can propagate to both the west and east, depending on the sign of the linear shear of geostrophic velocity. The unstable (growing with time) solutions are applied to explain the formation of large-scale intrusions at baroclinic fronts under the stable-stable thermohaline stratification observed in the upper layer of the Polar Deep Water in the Eurasian Basin. The suggested mechanism of formation of intrusions can be considered a possible alternative to the mechanism of interleaving at the baroclinic fronts due to the differential mixing.

  11. The global reference atmospheric model, mod 2 (with two scale perturbation model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Hargraves, W. R.

    1976-01-01

    The Global Reference Atmospheric Model was improved to produce more realistic simulations of vertical profiles of atmospheric parameters. A revised two-scale random perturbation model, using perturbation magnitudes adjusted to conform to constraints imposed by the perfect gas law and the hydrostatic condition, is described. The two-scale perturbation model produces appropriately correlated (horizontally and vertically) small-scale and large-scale perturbations. These stochastically simulated perturbations are representative of the magnitudes and wavelengths of perturbations produced by tides and planetary-scale waves (large scale) and turbulence and gravity waves (small scale). Other new features of the model are: (1) a second-order geostrophic wind relation that, unlike the ordinary geostrophic relation, does not "blow up" at low latitudes; and (2) revised quasi-biennial amplitudes and phases and revised stationary perturbations, based on data through 1972.
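
    The "blow up" that feature (1) is designed to avoid follows directly from the ordinary geostrophic relation u_g = -(g/f) dZ/dy, since the Coriolis parameter f = 2Ω sin(φ) vanishes at the equator. A hypothetical illustration (an assumed height gradient, not the GRAM code):

```python
import numpy as np

OMEGA = 7.292e-5          # Earth's rotation rate, rad/s
G = 9.81                  # gravitational acceleration, m/s^2
DZDY = -5.0e-5            # assumed height gradient: a 50 m drop per 1000 km

def coriolis(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * np.sin(np.radians(lat_deg))

def geostrophic_u(lat_deg):
    """Ordinary geostrophic zonal wind u_g = -(g / f) * dZ/dy."""
    return -G * DZDY / coriolis(lat_deg)

for lat in (45.0, 10.0, 1.0):
    print(f"lat {lat:5.1f} deg: u_g = {geostrophic_u(lat):8.1f} m/s")
```

    At 45° the same height gradient implies a plausible wind of a few metres per second, but at 1° latitude it implies an unphysical wind of hundreds of metres per second, which is why a higher-order balance is needed near the equator.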

  12. Rapid detection of irradiated frozen hamburgers

    NASA Astrophysics Data System (ADS)

    Delincée, Henry

    2002-03-01

    The DNA comet assay can be employed as a rapid and inexpensive screening test to check whether frozen ground beef patties (hamburgers) have been irradiated as a means of increasing their safety by eliminating pathogenic bacteria, e.g. E. coli O157:H7. Such a detection procedure provides an additional check on compliance with existing regulations, e.g. enforcement of labelling and rules in international trade. Frozen ready-prepared hamburgers from the marketplace were electron-irradiated with doses of 0, 1.3, 2.7, 4.5 and 7.2 kGy, covering the range of potential commercial irradiation. DNA fragmentation in the hamburgers was made visible within a few hours using the comet assay, and non-irradiated hamburgers could easily be discerned from the irradiated ones. Even after 9 months of frozen storage, irradiated hamburgers could be identified. Since DNA fragmentation may also occur with other food processes (e.g. temperature abuse), positive screening tests should be confirmed using a validated method that specifically proves an irradiation treatment, e.g. EN 1784 or EN 1785.

  13. "History had taken such a large piece out of my life" - Neuroscientist refugees from Hamburg during National Socialism.

    PubMed

    Zeidman, Lawrence A; von Villiez, Anna; Stellmann, Jan-Patrick; van den Bussche, Hendrik

    2016-01-01

    Approximately 9,000 physicians were uprooted for so-called "racial" or "political" reasons by the Nazi regime and 6,000 fled Germany. These refugees are often seen as survivors who contributed to a "brain drain" from Germany. About 432 doctors (all specialties, private and academic) were dismissed from the major German city of Hamburg. Of these, 16 were Hamburg University faculty members dismissed from their government-supported positions for "racial" reasons, and, of these, five were neuroscientists. In a critical analysis, not comprehensively done previously, we will demonstrate that the brain drain did not equal a "brain gain." The annihilation of these five neuroscientists' careers under different but similar auspices, their shameful harassment and incarceration, financial expropriation by Nazi ransom techniques, forced migration, and roadblocks once reaching destination countries stalled and set back any hopes of research and quickly continuing once-promising careers. A major continuing challenge is finding ways to repair an open wound and obvious vacuum in the German neuroscience community created by the largely collective persecution of colleagues 80 years ago.

  14. The synoptic- and planetary-scale environments associated with significant 1000-hPa geostrophic wind events along the Beaufort Sea coast

    NASA Astrophysics Data System (ADS)

    Cooke, Melanie

    The substantial interannual variability and the observed warming trend of the Beaufort Sea region are important motivators for the study of regional climate and weather there. In an attempt to further our understanding of strong wind events, which can drive sea ice dynamics and storm surges, their characteristic environments at the synoptic and planetary scales are defined and analysed using global reanalysis data. A dependency on an enhanced or suppressed Aleutian low is found, producing either a strong southeasterly or a strong northwesterly 1000-hPa geostrophic wind event. The characteristic mid-tropospheric patterns for these two distinct event types show similarities to the positive and negative Pacific/North American teleconnection patterns, but their correlations have yet to be assessed.

  15. A generalized quasi-geostrophic core flow formalism

    NASA Astrophysics Data System (ADS)

    Amit, H.; Coutelier, M.

    2016-12-01

    The quasi-geostrophic formalism provides a theoretical coupling between toroidal and poloidal core flows. By enforcing an impermeable core-mantle boundary, conservation of mass and a linear variation of the axial flow along an axial column, this coupling can be written as div_h · u_h = c (tan θ/R) u_θ, where u_h is the tangential velocity at the top of the core, θ is co-latitude, R is the core radius and c = 2 (Amit and Olson, 2004; Amit and Pais, 2013). We extend this theory and develop this expression for different profiles of the axial flow. Our results show that the same expression holds, but the value of c may vary depending on the profile of the axial flow, including c = 1 as in the tangential geostrophy formalism. These results may therefore provide new constraints on quasi-geostrophic core flow inversions from geomagnetic secular variation.

  16. On the zero-Rossby limit for the primitive equations of the atmosphere*

    NASA Astrophysics Data System (ADS)

    Chen, Gui-Qiang; Zhang, Ping

    2001-09-01

    The zero-Rossby limit for the primitive equations governing atmospheric motions is analysed. The limit is important in geophysics for large-scale models (cf Lions 1996 Int. Conf. IAM 95 (Hamburg 1995) (Math. Res. vol 87) (Berlin: Akademie) pp 177-212) and is analogous to the zero relaxation limit for nonlinear partial differential equations (cf Chen et al 1994 Commun. Pure Appl. Math. 47 787-830). It is proved that, if the initial data appropriately approximate data of geostrophic type, the corresponding solutions of the simplified primitive equations approximate the solutions of the quasi-geostrophic equations with order ɛ accuracy as the Rossby number ɛ goes to zero.

  17. Sea level anomaly on the Patagonian continental shelf: Trends, annual patterns and geostrophic flows

    PubMed Central

    Saraceno, M.; Piola, A. R.; Strub, P. T.

    2016-01-01

    Abstract We study the annual patterns and linear trend of satellite sea level anomaly (SLA) over the southwest South Atlantic continental shelf (SWACS) between 54°S and 36°S. Results show that south of 42°S the thermal steric effect explains nearly 100% of the annual amplitude of the SLA, while north of 42°S it explains less than 60%. This difference is due to the halosteric contribution. The annual wind variability plays a minor role over the whole continental shelf. The temporal linear trend in SLA ranges between 1 and 5 mm/yr (95% confidence level). The largest linear trends are found north of 39°S, at 42°S and at 50°S. We propose that in the northern region the large positive linear trends are associated with local changes in the density field caused by advective effects in response to a southward displacement of the South Atlantic High. The causes of the relatively large SLA trends in the two southern coastal regions are discussed as a function of meridional wind stress and river discharge. Finally, we combined the annual cycle of SLA with the mean dynamic topography to estimate the absolute geostrophic velocities. This approach provides the first comprehensive description of the seasonal component of the SWACS circulation based on satellite observations. The general circulation of the SWACS is northeastward with stronger/weaker geostrophic currents in austral summer/winter. At all latitudes, geostrophic velocities are larger (up to 20 cm/s) close to the shelf‐break and decrease toward the coast. This spatio‐temporal pattern is more intense north of 45°S. PMID:27840784
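
    The last step of the abstract, absolute geostrophic velocities from dynamic topography, rests on the surface geostrophic balance u = -(g/f) ∂η/∂y, v = (g/f) ∂η/∂x. A minimal synthetic sketch (hypothetical grid and sea-surface high, not the authors' data):

```python
import numpy as np

G = 9.81
OMEGA = 7.292e-5
lat = -45.0                                   # a mid-latitude on the Patagonian shelf
f = 2.0 * OMEGA * np.sin(np.radians(lat))     # Coriolis parameter (negative in the SH)

# Synthetic absolute dynamic topography: a 10 cm Gaussian high on a 500 km grid.
n = 101
x = np.linspace(0.0, 5.0e5, n)                # metres
y = np.linspace(0.0, 5.0e5, n)
X, Y = np.meshgrid(x, y, indexing="xy")
eta = 0.10 * np.exp(-((X - 2.5e5) ** 2 + (Y - 2.5e5) ** 2) / (1.0e5) ** 2)

deta_dy, deta_dx = np.gradient(eta, y, x)     # axis 0 varies with y, axis 1 with x
u = -(G / f) * deta_dy                        # zonal geostrophic velocity
v = (G / f) * deta_dx                         # meridional geostrophic velocity

speed = np.hypot(u, v)
print(f"max geostrophic speed: {speed.max() * 100:.1f} cm/s")
```

    The resulting speeds of order 10 cm/s are comparable to the shelf-break currents quoted above, and the computed flow is everywhere perpendicular to the sea level gradient, as geostrophy requires.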

  18. Long-term variabilities of meridional geostrophic volume transport in the North Pacific Ocean

    NASA Astrophysics Data System (ADS)

    Zhou, H.; Yuan, D.; Dewar, W. K.

    2016-02-01

    The meridional geostrophic volume transport (MGVT) of the ocean plays a very important role in the climatic water mass and heat balance because of the ocean's large heat capacity, which enables it to store the large amount of radiation received in summer and to release it in winter. Better understanding of the role of the oceans in climate variability is essential to assess the likely range of future climate fluctuations. In the last century the North Pacific Ocean experienced considerable climate variability, especially on decadal time scales. Some studies have shown that the North Pacific Ocean is the origin of North Pacific multidecadal variability (Latif and Barnett, 1994; Barnett et al., 1999). These fluctuations were associated with large anomalies in sea level, temperature, storminess and rainfall; heat transport and other extremes are changing as well. If the MGVT of the ocean is well determined, it can be used as a test of the validity of numerical global climate models. In this paper, we investigate the long-term variability of the MGVT in the North Pacific Ocean based on 55 years of global ocean heat and salt content data (Levitus et al., 2012). Very clear inter-decadal variations can be seen in the tropical, subtropical and subpolar regions of the North Pacific Ocean. There are very consistent variations between the MGVT anomalies and the Interdecadal Pacific Oscillation (IPO) index in the tropical gyre, with the cold phase of the IPO corresponding to negative MGVT anomalies and the warm phase corresponding to positive anomalies. The subtropical gyre shows more complex variations, and the subpolar gyre shows a negative MGVT anomaly before the late 1970s and a positive anomaly after that time. The geostrophic velocities of the North Pacific Ocean show significantly different anomalies during the two IPO cold phases of 1955-1976 and 1999 to present, which suggests a different mechanism for the two cold phases. The long-term variations of the Sverdrup transport compare well

  19. Super Clausius-Clapeyron scaling of extreme hourly precipitation and its relation to large-scale atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley

    2017-04-01

    The increase in large-scale moisture convergence appears to be a consequence of latent heat release due to the convective activity, as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperatures, suggesting potentially strong impacts of climatic warming.

  20. The End of Hamburg's Anglophilia: Wilhelmine Hamburg Attitudes Viewed through School Examination Essays and a University Lecture (1912-1914)

    ERIC Educational Resources Information Center

    Gärtner, Niko

    2014-01-01

    Late nineteenth-century German-English rivalry changed attitudes in Hamburg. Previously, the once fiercely independent city and its burgeoning mercantile middle class had developed an Anglophilia that justified Hamburg being labelled a "London suburb" and "the most British town on the Continent". The affinity for all things…

  1. A Variational Formalism for the Radiative Transfer Equation and a Geostrophic, Hydrostatic Atmosphere: Prelude to Model 3

    NASA Technical Reports Server (NTRS)

    Achtemeier, Gary L.

    1991-01-01

    The second step in development of MODEL III is summarized. It combines the four radiative transfer equations of the first step with the equations for a geostrophic and hydrostatic atmosphere. This step is intended to bring radiance into a three dimensional balance with wind, height, and temperature. The use of the geostrophic approximation in place of the full set of primitive equations allows for an easier evaluation of how the inclusion of the radiative transfer equation increases the complexity of the variational equations. Seven different variational formulations were developed for geostrophic, hydrostatic, and radiative transfer equations. The first derivation was too complex to yield solutions that were physically meaningful. For the remaining six derivations, the variational method gave the same physical interpretation (the observed brightness temperatures could provide no meaningful input to a geostrophic, hydrostatic balance) at least through the problem solving methodology used in these studies. The variational method is presented and the Euler-Lagrange equations rederived for the geostrophic, hydrostatic, and radiative transfer equations.

  2. Biopreservation of hamburgers by essential oil of Zataria multiflora.

    PubMed

    Samadi, N; Sharifan, A; Emam-Djomeh, Z; Sormaghi, M H Salehi

    2012-01-01

    Hamburgers with high nutrient supply and a loosely-packed structure present favourable conditions for microbial growth. In this study, the chemical composition and antimicrobial activity of the essential oil of Zataria multiflora and its potential application as a natural preservative in reducing the indigenous microbial population of hamburgers were investigated. Carvacrol, thymol and linalool were found to be the most abundant constituents of the essential oil using GC-MS analysis. The essential oil exhibited strong antibacterial activity against Gram-positive and Gram-negative bacteria. Addition of Z. multiflora essential oil in concentrations higher than MIC values influenced the microbial population of hamburgers stored at 25°C, 4°C and -12°C. The significant results of this study are our observations that the use of Z. multiflora essential oil at 0.05% v/w increases the time needed for the natural microflora of hamburgers to reach concentrations able to produce a perceivable spoilage at refrigerator and room temperatures without any inverse effect on their sensory attributes. Freezing of essential oil-treated hamburgers may also reduce the risk of diseases associated with consumption of under-cooked hamburgers through significant microbial reduction by more than 3 log.

  3. On the Impact of Sea Level Fingerprints on the Estimation of the Meridional Geostrophic Transport in the Atlantic Basin

    NASA Astrophysics Data System (ADS)

    Hsu, C. W.; Velicogna, I.

    2017-12-01

    The mid-ocean geostrophic transport accounts for more than half of the seasonal and inter-annual variability in the Atlantic meridional overturning circulation (AMOC), based on the in-situ measurements from the RAPID MOC/MOCHA array since 2004. Here, we demonstrate that mid-ocean geostrophic transport estimates derived from ocean bottom pressure (OBP) are affected by the sea level fingerprint (SLF), a variation of the equi-geopotential height (relative sea level) due to rapid mass unloading of the entire Earth system, in particular from glaciers and ice sheets. This potential height change, although it alters the OBP, should not be included in the derivation of the mid-ocean geostrophic transport. This "pseudo" geostrophic transport due to the SLF is in phase with the seasonal and interannual signal in the upper mid-ocean geostrophic transport. The east-west SLF gradient across the Atlantic basin could be mistaken for a north-south geostrophic transport amounting to 54% of the seasonal variability and 20% of the inter-annual variability. This study demonstrates for the first time the importance of this pseudo transport in both the annual and interannual signals by comparing the SLF with the in-situ observations from the RAPID MOC/MOCHA array. The pseudo transport needs to be taken into account if OBP measurements and remote sensing are used to derive the mid-ocean geostrophic transport.
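
    The sensitivity described here can be made concrete with the relation used to derive mid-ocean transport from boundary pressure: the zonally integrated meridional geostrophic transport per unit depth is (p_E - p_W)/(ρf). The sketch below (hypothetical numbers, not the study's data) shows that an east-west bottom-pressure difference of only 1 cm of equivalent water, a plausible size for a sea level fingerprint gradient, maps onto several Sverdrups if applied over the full water column:

```python
import numpy as np

RHO = 1025.0            # reference seawater density, kg/m^3
G = 9.81
OMEGA = 7.292e-5
lat = 26.5              # approximate latitude of the RAPID array, deg N
f = 2.0 * OMEGA * np.sin(np.radians(lat))

d_eta = 0.01                      # assumed east-west SLF difference: 1 cm of water
dp = RHO * G * d_eta              # corresponding bottom-pressure difference, Pa

T_per_depth = dp / (RHO * f)      # geostrophic transport per unit depth, m^2/s
H = 4000.0                        # assumed depth over which it is applied, m
transport_Sv = T_per_depth * H / 1.0e6

print(f"pseudo-transport: {transport_Sv:.1f} Sv")
```

    A centimetre-scale pressure gradient is thus far from negligible, which is why the study argues the SLF contribution must be removed before interpreting OBP-derived transports.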

  4. Ocean data assimilation using optimal interpolation with a quasi-geostrophic model

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele M.; Miller, Robert N.

    1991-01-01

    A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
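
    The OI analysis step summarized in the abstract is the standard update x_a = x_b + K(y - H x_b) with gain K = B Hᵀ(H B Hᵀ + R)⁻¹, where the background error covariance B carries the assumed decorrelation scale. A self-contained 1-D sketch (hypothetical grid, observations and scales, not the paper's configuration):

```python
import numpy as np

def gaussian_cov(x, sigma2, L):
    """Background error covariance with Gaussian decorrelation scale L."""
    d = x[:, None] - x[None, :]
    return sigma2 * np.exp(-0.5 * (d / L) ** 2)

x = np.linspace(0.0, 100.0, 101)     # 1-D analysis grid
xb = np.zeros_like(x)                # background stream function (zero)
obs_idx = np.array([30, 70])         # two observation locations
y = np.array([1.0, -0.5])            # observed stream function values

H = np.zeros((2, x.size))            # observation operator: direct sampling
H[np.arange(2), obs_idx] = 1.0

B = gaussian_cov(x, sigma2=1.0, L=10.0)
R = 0.01 * np.eye(2)                 # observation error covariance

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal gain
xa = xb + K @ (y - H @ xb)                     # analysis
print(f"analysis at the obs points: {xa[30]:.3f}, {xa[70]:.3f}")
```

    Shrinking L makes the analysis collapse toward the background away from the observations, which is exactly the sensitivity to the assumed decorrelation scale that the hindcast experiments in the abstract probe.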

  5. Consumer preferences, internal color and reduction of shigatoxigenic Escherichia coli in cooked hamburgers.

    PubMed

    Røssvoll, Elin; Sørheim, Oddvin; Heir, Even; Møretrø, Trond; Olsen, Nina Veflen; Langsrud, Solveig

    2014-02-01

    The aim of this study was to relate consumer preferences and preparation of hamburgers to color change, internal temperature and reduction of shigatoxigenic Escherichia coli (STEC) serogroups O157 and the "Big Six" (O26, O45, O103, O111, O121, O145) under two ground beef packaging scenarios: 75% O2 MAP and vacuum. 75% O2 MAP hamburgers cooked to 60 °C core temperature appeared done and showed less internal red color (lower a*) than corresponding vacuum hamburgers. Similar STEC reduction (<4 log10) was found for both hamburgers at core temperatures ≤ 66 °C. In a representative survey (N=1046) most consumers reported to judge hamburger doneness by the color and many preferred undercooked hamburgers. Premature browning of 75% O2 MAP hamburgers represents a risk of foodborne illness, when considering consumers' food handling practices. The risk is even greater if such ground beef is prepared by consumers who prefer undercooked hamburgers and judge doneness by color. © 2013.

  6. The architecture of Hamburg-Bergedorf Observatory 1906 - 1912, compared with other observatories (German Title: Die Architektur der Hamburg-Bergedorfer Sternwarte 1906 - 1912 im Vergleich mit anderen Observatorien)

    NASA Astrophysics Data System (ADS)

    Müller, Peter

    The foundation of the astrophysical observatories in Potsdam-Telegrafenberg in 1874, in Meudon near Paris in 1875 and on Mount Hamilton in California in 1875 resulted in a complete change of observatory architecture. Astrometry had become irrelevant; meridian halls, i.e. an exact north-south orientation, were no longer necessary. The location in the centre of a (university) town was disadvantageous, due to vibrations caused by traffic and artificial light at night. New principles were defined: considerable distance from the city centre, a secluded and exposed position (on a mountain) and construction of pavilions: inside a park a pavilion was built for each instrument. Other observatories of this type are: Pic du Midi in the French Pyrenees, built from 1878 as the first permanent observatory in the high mountains; Nice, Mont Gros (1879); Brussels, Uccle (1883); Edinburgh, Blackford Hill (1892); Heidelberg, Königstuhl (1896); Barcelona, Tibidabo (1902). The original Hamburg Observatory was a modest rectangular building near the Millerntor; in 1833 it became a State institute. From 1906 onwards a spacious complex was erected in Bergedorf, 20 km northeast of the city centre. Except for the unavailable position on a mountain, this complex fulfilled all principles of a modern observatory: in a park, pavilion architecture in an elegant neo-baroque style designed by Albert Erbe (architect of the new Hamburger Kunsthalle with cupola). At the Hamburg Observatory the domed structures were cleverly hierarchised, leaving an open view to the south. At the beginning, astrometry and astrophysics were equally important; there was still a meridian circle. Apart from that, the instruments were manifold: a large 0.60 m refractor (installed by Repsold/Hamburg, 9 m focal length) and a large 1 m reflector (Zeiss/Jena, 3 m focal length). Both were the largest instruments of their kind in the German Empire. In addition, there was the Lippert Astrograph on an elegant polar

  7. A study on rate of infestation to Sarcocystis cysts in supplied raw hamburgers.

    PubMed

    Nematollahia, Ahmad; Khoshkerdar, Afsaneh; Helan, Javad Ashrafi; Shahbazi, Parisa; Hassanzadeh, Parviz

    2015-06-01

    This study was carried out to determine the presence of Sarcocystis cysts in raw hamburgers in Tabriz, northwest Iran. Ninety-six samples of industrial (70% meat content) and traditional (30% meat content) hamburgers (80 industrial and 16 traditional) were obtained from retail fast food stores. The samples were examined by gross examination and by microscopic methods consisting of impression smears and peptic digestion. Macroscopic cysts were not observed in any of the samples. Microscopic examination showed that 54 of the 96 samples (56.25%) were infected by at least one Sarcocystis bradyzoite. Of the 54 infected samples, 45 were industrial and nine were traditional hamburgers. Statistical analysis showed no significant difference between industrial and traditional hamburgers in Sarcocystis infection. Infestation of hamburgers with Sarcocystis was higher in summer than in other seasons, but this difference was not significant. In Iran, beef is used for the preparation of 70% of hamburgers, and infestation of cattle with sarcocystosis has been reported in many investigations. With regard to the high prevalence of Sarcocystis infection in meat products such as hamburgers found in this study, it is strongly recommended to avoid eating raw or under-cooked hamburgers, or to keep them at freezing temperature for at least 3-5 days.

  8. Frontal Generation of Waves: A Geostrophic Adjustment Interpretation of The Observations

    NASA Astrophysics Data System (ADS)

    Blumen, W.; Lundquist, J. K.

    motions and how much is associated with a geostrophically balanced state. It is not possible to separate waves from other types of motion in the observed energy spectrum, but there is evidence of a spectral peak in the range of 7 to 23 minutes in the 16 October energy spectrum. This peak is assumed to be associated with wave excitation by the frontal passage, although other types of motion may also be contributors. A model calculation reveals that the energy contained in this spectral peak represents about 10 to 15 percent of the energy contained in the initial state (time t = 0). This result, although based on crude estimates of the observed wave energy, is nevertheless in general agreement with the prediction of geostrophic theory: a relatively small amount of energy is expected to be associated with relatively high-frequency, small-scale gravity waves. Additional details regarding the geostrophic adjustment interpretation of the observations will be presented in the talk.
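
The band-energy estimate described above can be sketched numerically. The following is a toy illustration, not the observed data: a synthetic wind series with an assumed 15-minute wave is generated, and the fraction of spectral energy falling in the 7-23 minute band is computed from the periodogram.

```python
import numpy as np

# Toy illustration (assumed amplitudes, not the observed series): estimate
# what fraction of total energy lies in a 7-23 minute spectral band.
rng = np.random.default_rng(0)
dt = 30.0                                   # sampling interval, s
n = 2048
t = np.arange(n) * dt
# white background noise plus a 15-minute wave
u = 0.5 * rng.standard_normal(n) + 1.0 * np.sin(2 * np.pi * t / 900.0)

freq = np.fft.rfftfreq(n, dt)               # Hz
power = np.abs(np.fft.rfft(u)) ** 2         # raw periodogram

# periods between 7 and 23 minutes
band = (freq >= 1 / (23 * 60)) & (freq <= 1 / (7 * 60))
fraction = power[band].sum() / power[1:].sum()   # exclude the mean (bin 0)
print(f"fraction of energy in 7-23 min band: {fraction:.2f}")
```

With a dominant in-band wave, most of the variance falls inside the band; for observed spectra the same ratio gives the "energy contained in the spectral peak" relative to the total.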

  9. The nature of large-scale turbulence in the Jovian atmosphere

    NASA Technical Reports Server (NTRS)

    Mitchell, J. L.

    1982-01-01

    The energetics and spectral characteristics of quasi-geostrophic turbulence in Jupiter's atmosphere are examined using sequences of Voyager images and infrared temperature soundings. Using global wind measurements, momentum transports associated with zonally symmetric stresses and turbulent stresses are quantified. Though a strong up-gradient flux of momentum by eddies was observed, the measurements do not preclude the possibility that symmetric stresses play a critical role in maintaining the mean zonal circulation. A strong correlation between the observed meridional distribution of eddy-scale kinetic energy and available potential energy suggests coupling between the observed cloudtop turbulent motions and upper-tropospheric thermodynamics. An Oort energy budget for Jupiter's upper troposphere is formulated.

  10. Absolute geostrophic currents over the SR02 section south of Africa in December 2009

    NASA Astrophysics Data System (ADS)

    Tarakanov, Roman

    2017-04-01

    The structure of the absolute geostrophic currents is investigated on the basis of CTD, SADCP and LADCP data over the hydrographic section occupied south of Africa from the Cape of Good Hope to 57° S along the Prime Meridian, and on the basis of satellite data on absolute dynamic topography (ADT) produced by Ssalto/Duacs and distributed by Aviso, with support from Cnes (http://www.aviso.altimetry.fr/duacs/). The section thus crossed the subtropical zone (at the junction of the subtropical gyres of the Indian and Atlantic oceans) and the Antarctic Circumpolar Current (ACC), and terminated at the northern periphery of the Weddell Gyre. A total of 87 stations were occupied, with CTD and LADCP profiling of the entire water column. The distance between stations was 20 nautical miles. Absolute geostrophic currents were calculated between each pair of CTD stations with a barotropic correction based on two methods: SADCP data and the ADT at these stations. The subtropical part of the section crossed a large segment of the Agulhas meander, already separated from the current and disintegrating into individual eddies. In addition, smaller formed cyclones and anticyclones of the Agulhas Current were also observed in this zone. These structural elements of the upper layer of the ocean currents do not penetrate deeper than 1000-1500 m. Oppositely directed barotropic currents with velocities up to 30 cm/s were observed below these depths, extending to the ocean bottom. Such large velocities agree well with the bottom-tracking data of the lowered ADCP; these were the only reliable LADCP results, because of the high transparency of the deep waters of the subtropical zone. The total transport of absolute geostrophic currents across the section is estimated as 144 and 179 Sv to the east, based on the SADCP and ADT barotropic corrections, respectively. A transport of 4 (2) Sv to the east was observed on the northern periphery of the Weddell Gyre, 187 (182) Sv to
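
The geostrophic calculation between station pairs rests on the balance v = (g/f) dη/dx. Below is a minimal sketch of that balance, not the paper's processing chain; the ADT difference and latitude are assumed values, while the 20-nautical-mile station spacing comes from the text.

```python
import numpy as np

# Hypothetical illustration: cross-track surface geostrophic velocity
# from an ADT difference between two adjacent stations, v = (g/f) d(eta)/dx.
g = 9.81                      # gravitational acceleration, m/s^2
omega = 7.2921e-5             # Earth's rotation rate, rad/s
lat = -45.0                   # latitude of the station pair, deg (assumed)
f = 2.0 * omega * np.sin(np.radians(lat))   # Coriolis parameter, 1/s

d_eta = -0.15                 # ADT difference between stations, m (assumed)
dx = 20.0 * 1852.0            # station spacing: 20 nautical miles in metres

v_geo = (g / f) * (d_eta / dx)  # geostrophic velocity normal to the pair, m/s
print(f"geostrophic velocity: {v_geo:.2f} m/s")
```

In the Southern Hemisphere f is negative, so a sea level drop toward the south corresponds to eastward flow; the absolute currents in the paper additionally carry a barotropic correction from SADCP or ADT.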

  11. Upwelling Response to Hurricane Isaac in Geostrophic Oceanic Vortices

    NASA Astrophysics Data System (ADS)

    Jaimes, B.; Shay, L. K.; Brewster, J. K.; Schuster, R.

    2013-05-01

    As a tropical cyclone (TC) moves over the ocean, the cyclonic curl of the wind stress produces a region of upwelling under the TC center that is compensated by downwelling in regions outside the center. Direct measurements conducted during Hurricane Rita and recent numerical studies indicate that this is not necessarily the case when TCs move over geostrophic oceanic features, where the background relative vorticity affects wind-driven horizontal current divergence and the upwelling velocity. Modulation of the upwelling response in these energetic oceanic regimes affects vertical mixing across the base of the oceanic mixed layer, air-sea fluxes into the atmosphere, and ultimately storm intensity. As part of the NOAA Intensity Forecasting Experiment, an experiment was conducted during the passage of TC Isaac over the energetic geostrophic eddy field in the Gulf of Mexico in August 2012. Expendable bathythermographs, current profilers, and conductivity-temperature-depth probes were deployed in Isaac from NOAA WP-3D aircraft during four in-storm flights to measure oceanic variability and its impact on TC-driven upwelling and surface fluxes of heat and momentum. During intensification to hurricane strength, the cyclonic curl of Isaac's wind stress extended over a region more than 300 km in diameter (4 to 5 times the radius of maximum winds). Isaac's center moved over a cold cyclonic feature, while its right and left sides moved over warm anticyclones. Contrasting upwelling and downwelling regimes developed inside the region of cyclonic wind stress curl. Positive (upwelling) and negative (downwelling) vertical displacements of 40 and 60 m, respectively, were measured inside this region, 3 to 4 times larger than the vertical displacements predicted for a quiescent ocean from scaling arguments. Oceanic mixed layer (OML) currents of 0.2 to 0.7 m s-1 were measured, which are about 50% smaller than the

  12. Greater Role of Geostrophic Currents on Ekman Dynamics in the Western Arctic Ocean as a Mechanism for Beaufort Gyre Stabilization

    NASA Astrophysics Data System (ADS)

    Steele, M.; Zhong, W.; Zhang, J.; Zhao, J.

    2017-12-01

    Seven different methods, with and without including geostrophic currents, were used to explore Ekman dynamics in the western Arctic Ocean for the period 1992-2014. Results show that surface geostrophic currents have been increasing and are much stronger than Ekman layer velocities in recent years (2003-2014) when the oceanic Beaufort Gyre (BG) is spinning up in the region. The new methods that include geostrophic currents result in more realistic Ekman pumping velocities than a previous iterative method that does not consider geostrophic currents and therefore overestimates Ekman pumping velocities by up to 52% in the central area of the BG over the period 2003-2014. When the BG is spinning up as seen in recent years, geostrophic currents become stronger, which tend to modify the ice-ocean stress and to cause an Ekman divergence that counteracts wind-driven Ekman convergence in the Canada Basin. This is a mechanism we have identified to play an important and growing role in stabilizing the Ekman convergence and therefore the BG in recent years. This mechanism may be used to explain three scenarios that describe the interplay of changes in wind forcing, sea ice motion, and geostrophic currents that control the variability of the Ekman dynamics in the central BG during 1992-2014. Results also reveal several upwelling regions in the southern and northern Canada Basin and the Chukchi Abyssal Plain which may play a significant role in biological processes in these regions.

  13. Greater Role of Geostrophic Currents in Ekman Dynamics in the Western Arctic Ocean as a Mechanism for Beaufort Gyre Stabilization

    NASA Astrophysics Data System (ADS)

    Zhong, Wenli; Steele, Michael; Zhang, Jinlun; Zhao, Jinping

    2018-01-01

    Seven different methods, with and without including geostrophic currents, were used to explore Ekman dynamics in the western Arctic Ocean for the period 1992-2014. Results show that surface geostrophic currents have been increasing and are much stronger than Ekman layer velocities in recent years (2003-2014) when the oceanic Beaufort Gyre (BG) is spinning up in the region. The new methods that include geostrophic currents result in more realistic Ekman pumping velocities than a previous iterative method that does not consider geostrophic currents and therefore overestimates Ekman pumping velocities by up to 52% in the central area of the BG over the period 2003-2014. When the BG is spinning up as seen in recent years, geostrophic currents become stronger, which tend to modify the ice-ocean stress and moderate the wind-driven Ekman convergence in the Canada Basin. This is a mechanism we have identified to play an important and growing role in stabilizing the Ekman convergence and therefore the BG in recent years. This mechanism may be used to explain three scenarios that describe the interplay of changes in wind forcing, sea ice motion, and geostrophic currents that control the variability of the Ekman dynamics in the central BG during 1992-2014. Results also reveal several upwelling regions in the southern and northern Canada Basin and the Chukchi Abyssal Plain which may play a significant role in physical and biological processes in these regions.
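
The mechanism in entries 12-13 can be illustrated with a toy one-dimensional calculation (a sketch under assumed parameters, not the authors' method): Ekman pumping is estimated from the curl of the ice-ocean stress, w_e = curl(τ)/(ρf), computed once ignoring and once including the geostrophic current in the stress. When ice and geostrophic current move together, the stress and its curl weaken.

```python
import numpy as np

# Assumed parameters for a western-Arctic-like setting.
rho = 1027.0                 # seawater density, kg/m^3
f = 1.4e-4                   # Coriolis parameter near 75 N, 1/s
C_d = 5.5e-3                 # ice-ocean drag coefficient (assumed)

def ice_ocean_stress(u_ice, u_ocean):
    """Quadratic drag on the ice-ocean velocity difference (m/s -> N/m^2)."""
    du = u_ice - u_ocean
    return rho * C_d * np.abs(du) * du

# 1-D illustration: the curl reduces to d(tau_y)/dx on a transect.
x = np.linspace(0.0, 400e3, 201)              # 400 km transect
u_ice = 0.10 * np.sin(2 * np.pi * x / 400e3)  # ice drift, m/s (assumed)
u_geo = 0.05 * np.sin(2 * np.pi * x / 400e3)  # geostrophic current (assumed)

tau_no_geo = ice_ocean_stress(u_ice, 0.0)     # geostrophic current ignored
tau_geo = ice_ocean_stress(u_ice, u_geo)      # geostrophic current included

w_no_geo = np.gradient(tau_no_geo, x) / (rho * f)   # Ekman pumping, m/s
w_geo = np.gradient(tau_geo, x) / (rho * f)

# Including the geostrophic current weakens the stress curl and hence the
# pumping, consistent with the overestimate reported when it is ignored.
print(np.abs(w_no_geo).max(), np.abs(w_geo).max())
```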

  14. Nudging Satellite Altimeter Data Into Quasi-Geostrophic Ocean Models

    NASA Astrophysics Data System (ADS)

    Verron, Jacques

    1992-05-01

    This paper discusses the efficiency of several variants of the nudging technique (derived from the technique of the same name developed by meteorologists) for assimilating altimeter data into numerical ocean models based on quasi-geostrophic formulation. Assimilation experiments are performed with data simulated in the nominal sampling conditions of the Topex-Poseidon satellite mission. Under experimental conditions it is found that nudging on the altimetric sea level is as efficient as nudging on the vorticity (second derivative in space of the dynamic topography), the technique used thus far in studies of this type. The use of altimetric residuals only, instead of the total altimetric sea level signal, is also explored. The critical importance of having an adequate reference mean sea level is largely confirmed. Finally, the possibility of nudging only the signal of sea level tendency (i.e., the successive time differences of the sea level height) is examined. Apart from the barotropic mode, results are not very successful compared with those obtained by assimilating the residuals.
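
The nudging technique itself is Newtonian relaxation of the model state toward observations. The following is a minimal sketch with an assumed relaxation time and an idealized satellite-track mask, not Verron's quasi-geostrophic model.

```python
import numpy as np

# Nudging: d(psi)/dt = F(psi) - gamma * mask * (psi - psi_obs),
# i.e. the field is relaxed toward observations only where they exist.
gamma = 1.0 / (5 * 86400.0)   # relaxation time scale of 5 days (assumed)
dt = 3600.0                   # time step, s

def step(psi, psi_obs, obs_mask):
    """One forward-Euler step with nudging applied at observed points."""
    tendency = 0.0 * psi                        # stand-in for model dynamics
    tendency -= gamma * obs_mask * (psi - psi_obs)
    return psi + dt * tendency

psi = np.zeros(100)            # model state, e.g. streamfunction on a line
psi_obs = np.ones(100)         # "altimetric" target state (assumed)
mask = np.zeros(100)
mask[::10] = 1.0               # observations only along satellite tracks

for _ in range(24 * 30):       # integrate for 30 days
    psi = step(psi, psi_obs, mask)
print(psi[::10].mean())        # observed points are drawn toward psi_obs
```

After several relaxation times the observed points approach the target while unobserved points are only reached through the model dynamics (absent in this stand-in), which is why the choice of nudged variable (sea level, vorticity, or residuals) matters.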

  15. Epidemiological and Ecological Characterization of the EHEC O104:H4 Outbreak in Hamburg, Germany, 2011

    PubMed Central

    Tahden, Maike; Manitz, Juliane; Baumgardt, Klaus; Fell, Gerhard; Kneib, Thomas; Hegasy, Guido

    2016-01-01

    In 2011, a large outbreak of entero-hemorrhagic E. coli (EHEC) and hemolytic uremic syndrome (HUS) occurred in Germany. The City of Hamburg was the first focus of the epidemic and had the highest incidences among all 16 Federal States of Germany. In this article, we present epidemiological characteristics of the Hamburg notification data. Evaluating the epicurves retrospectively, we found that the first epidemiological signal of the outbreak, which took the form of a HUS case cluster, was received by local health authorities when 99 EHEC and 48 HUS patients had already experienced their first symptoms; however, only two EHEC and seven HUS patients had been notified by then. Middle-aged women had the highest risk of contracting the infection in Hamburg. Furthermore, we studied the timeliness of case notification in the course of the outbreak. To analyze the spatial distribution of EHEC/HUS incidences in 100 districts of Hamburg, we mapped cases' residential addresses using geographic information software. We then conducted an ecological study in order to find a statistical model identifying associations between local socio-economic factors and EHEC/HUS incidences in the epidemic. We employed a Bayesian Poisson model with covariates characterizing the Hamburg districts, incorporating structured and unstructured spatial effects. The Deviance Information Criterion was used for stepwise variable selection. We applied different modeling approaches using primary data, transformed data, and preselected subsets of transformed data in order to identify socio-economic factors characterizing districts where EHEC/HUS outbreak cases had their residence. PMID:27723830

  16. Synergistic benefits between stormwater management measures and a new pricing system for stormwater in the City of Hamburg.

    PubMed

    Bertram, N P; Waldhoff, A; Bischoff, G; Ziegler, J; Meinzinger, F; Skambraks, A-K

    2017-09-01

    Hamburg is a growing metropolitan city. The increase in sealed surfaces of about 0.36% per year and the resulting increase in runoff impact the city's wastewater infrastructure. Further potential risks to the drainage infrastructure arise from the effects of climate change, e.g. increased intensity and frequency of heavy rainfall. These challenges were addressed in the Rain InfraStructure Adaption (RISA) project, conducted from 2009 to 2015 by HAMBURG WASSER and the State Ministry for Environment and Energy, supported by several municipal stakeholders. RISA addressed intensifying conflicts in the context of urban development and stormwater management at that time. Major results of the project are improvements and recommendations for adequate consideration of stormwater management issues during urban planning, as well as new funding mechanisms for stormwater management measures. The latter topic resulted in the introduction of a separate stormwater charge based on the amount of sealed area connected to the sewer system of each property. For both undertakings - the RISA project and the introduction of the separate stormwater charge - a novel, comprehensive, digital database was built. Today, these geographical information system (GIS)-based data offer various scale-independent analysis and information opportunities, which facilitate the day-to-day business of HAMBURG WASSER and stormwater management practice in Hamburg.

  17. Quasi-Geostrophic Diagnosis of Mixed-Layer Dynamics Embedded in a Mesoscale Turbulent Field

    NASA Astrophysics Data System (ADS)

    Chavanne, C. P.; Klein, P.

    2016-02-01

    A new quasi-geostrophic model has been developed to diagnose the three-dimensional circulation, including the vertical velocity, in the upper ocean from high-resolution observations of sea surface height and buoyancy. The formulation for the adiabatic component departs from the classical surface quasi-geostrophic framework considered before, since it takes into account the stratification within the surface mixed layer, which is usually much weaker than that in the ocean interior. To achieve this, the model approximates the ocean with two constant-stratification layers: a finite-thickness surface layer (the mixed layer) and an infinitely deep interior layer. It is shown that the leading-order adiabatic circulation is entirely determined if both the surface streamfunction and buoyancy anomalies are considered. The surface layer further includes a diabatic dynamical contribution. The parameterization of diabatic vertical velocities is based on their restoring effect on the thermal-wind balance, which is perturbed by turbulent vertical mixing of momentum and buoyancy. The model's skill in reproducing the three-dimensional circulation in the upper ocean from surface data is checked against the output of a high-resolution primitive-equation numerical simulation. Correlations between simulated and diagnosed vertical velocities are significantly improved in the mixed layer for the new model compared to the classical surface quasi-geostrophic model, reaching 0.9 near the surface.
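
For context, the classical surface quasi-geostrophic (SQG) solution that this model generalizes gives, for a surface buoyancy anomaly of wavenumber k over constant stratification N, a streamfunction decaying exponentially with depth, ψ(z) ∝ exp(Nkz/f) for z ≤ 0. A small sketch of this textbook relation, with assumed stratification and horizontal scale (not the two-layer model of the paper):

```python
import numpy as np

# Classical SQG vertical structure for a single Fourier mode.
f = 1.0e-4            # Coriolis parameter, 1/s
N = 1.0e-2            # buoyancy frequency, 1/s (assumed)
L = 50e3              # horizontal scale of the anomaly, m (assumed)
k = 2 * np.pi / L     # horizontal wavenumber, 1/m
b_hat = 1.0e-3        # surface buoyancy amplitude, m/s^2 (assumed)

z = np.linspace(-2000.0, 0.0, 201)               # depth axis, m
psi = (b_hat / (N * k)) * np.exp(N * k * z / f)  # streamfunction amplitude

# The e-folding depth is f / (N k): smaller horizontal scales are trapped
# closer to the surface, which is why mixed-layer stratification matters.
print(f / (N * k))
```

For these values the e-folding depth is of order 100 m, comparable to mixed-layer depths, motivating the weaker-stratification surface layer in the new formulation.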

  18. Use of the quasi-geostrophic dynamical framework to reconstruct the 3-D ocean state in a high-resolution realistic simulation of North Atlantic.

    NASA Astrophysics Data System (ADS)

    Fresnay, Simon; Ponte, Aurélien

    2017-04-01

    The quasi-geostrophic (QG) framework has been, is, and will remain for years to come a cornerstone method linking observations with estimates of the ocean circulation and state. We have used the QG framework to reconstruct dynamical variables of the 3-D ocean in a state-of-the-art high-resolution (1/60 deg, 300 vertical levels) numerical simulation of the North Atlantic (NATL60). The work was carried out in 3 boxes of the simulation: Gulf Stream, Azores and Reykjanes Ridge. In a first part, general diagnostics describing the eddying dynamics were performed; they show that the QG scaling holds in general at depths away from the mixed layer and bathymetric gradients. Correlations with observable surface variables (e.g. temperature, sea level) were computed, and estimates of quasi-geostrophic potential vorticity (QGPV) were reconstructed by means of regression laws. It is shown that the reconstruction of QGPV exhibits valuable skill for a restricted scale range, mainly using sea level as the regression variable. Additional discussion is given, based on the flow balanced with QGPV. This work is part of the DIMUP project, which aims to improve our ability to operationally estimate the ocean state.

  19. Maximum entropy production principle for geostrophic turbulence

    NASA Astrophysics Data System (ADS)

    Sommeria, J.; Bouchet, F.; Chavanis, P. H.

    2003-04-01

    In 2D turbulence, complex stirring leads to the formation of steady organized states once fine-scale fluctuations have been filtered out. This self-organization can be explained in terms of statistical equilibrium for vorticity, as the most likely outcome of vorticity parcel rearrangements under the constraints of the conservation laws. A mixing entropy describing the vorticity rearrangements is introduced. An extension to the shallow water system has been proposed by Chavanis P.H. and Sommeria J. (2002), Phys. Rev. E. Generalization to multi-layer geostrophic flows is formally straightforward. Outside equilibrium, eddy fluxes should drive the system toward equilibrium, in the spirit of non-equilibrium linear thermodynamics. This can be formalized in terms of a principle of maximum entropy production (MEP), as shown by Robert and Sommeria (1991), Phys. Rev. Lett. 69. A parameterization of eddy fluxes is then obtained, involving an eddy diffusivity plus a drift term acting at larger scale. These two terms balance each other at equilibrium, resulting in a nontrivial steady flow, which is the mean state of the statistical equilibrium. Applications of this eddy parameterization will be presented in the context of oceanic circulation and Jupiter's Great Red Spot. Quantitative tests will be discussed, obtained by comparison with direct numerical simulations. Kinetic models, inspired by plasma physics, provide a more precise description of the relaxation toward equilibrium, as shown by Chavanis P.H. 2000, "Quasilinear theory of the 2D Euler equation", Phys. Rev. Lett. 84. This approach provides relaxation equations with a form similar to the MEP, but not identical. In conclusion, the MEP provides the right trends of the system, but its precise justification remains elusive.

  20. Recurrent hamburger thyrotoxicosis

    PubMed Central

    Parmar, Malvinder S.; Sturge, Cecil

    2003-01-01

    Recurrent episodes of spontaneously resolving hyperthyroidism may be caused by release of preformed hormone from the thyroid gland after it has been damaged by inflammation (recurrent silent thyroiditis) or by exogenous administration of thyroid hormone, which might be intentional or surreptitious (thyrotoxicosis factitia). Community-wide outbreaks of “hamburger thyrotoxicosis” resulting from inadvertent consumption of beef contaminated with bovine thyroid gland have been previously reported. Here we describe a single patient who experienced recurrent episodes of this phenomenon over an 11-year period and present an approach to systematically evaluating patients with recurrent hyperthyroidism. PMID:12952802

  1. A study of the adequacy of quasi-geostrophic dynamics for modeling the effect of frontal cyclones on the larger scale flow

    NASA Technical Reports Server (NTRS)

    Mudrick, Stephen

    1987-01-01

    The evolution of individual cyclone waves is studied in order to see how well quasi-geostrophic (QG) dynamics can simulate the behavior of primitive equations (PE) dynamics. This work is an extension of a similar study (Mudrick, 1982); emphasis is placed here on adding a frontal zone and other more diverse features to the basic states used. In addition, sets of PE integrations, with and without friction, are used to study the formation of surface occluded fronts within the evolving cyclones. Results of the study are summarized at the beginning of the report.

  2. First identification of Sarcocystis hominis in Iranian traditional hamburger.

    PubMed

    Ahmadi, M Moghaddam; Hajimohammadi, B; Eslami, G; Oryan, A; Yasini Ardakani, S A; Zohourtabar, A; Zare, S

    2015-12-01

    Zoonotic aspects of cattle sarcocystosis are important because humans are the final host for Sarcocystis hominis; meat products containing beef may therefore harbor sarcocysts, which endangers food safety. In this study, we describe the first molecular identification of S. hominis in Iranian traditional hamburgers using PCR-RFLP. During a pilot study carried out to set up a molecular approach for identifying Sarcocystis spp. by PCR-RFLP, a sample of raw Iranian traditional hamburger was purchased from a street food seller in Yazd, central Iran, in May 2013. DNA extraction was done by the salting-out method: briefly, the sample was lysed with NET buffer, and DNA purification and precipitation were then performed. Amplicon and digestion results were analyzed using agarose gel electrophoresis. The results showed a PCR product 926 bp in length after amplification, yielding fragments of 376 and 550 bp after digestion. This product was identified as S. hominis. To the best of our knowledge, this is the first report of S. hominis infection in Iranian hamburger.

  3. A study of the glow discharge plasma jet of the novel Hamburger-electrode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenzheng, E-mail: wzhliu@bjtu.edu.cn; Ma, Chuanlong, E-mail: 15121452@bjtu.edu.cn; Yang, Xiao

    2016-08-15

    To generate atmospheric pressure glow discharge plasma jets (APGDPJs), a novel Hamburger-electrode was proposed. Through a study of the electric field distributions, flow field distributions, and characteristics of the discharge and jet, we found that adopting dielectric barrier discharge with a non-uniform dielectric thickness made it easy to form strong electric field areas conducive to generating discharge, with field distributions of large intensity in the narrow gap and weak intensity in the wide gap that were not inclined to form a filament discharge. Using a structure of evenly distributed inner electrodes, it was easy to weaken the pressure in the strong electric field areas and to form flow field distributions beneficial for carrying out the high-density charged particles and generating APGDPJs. Stable APGDPJs in nitrogen, 3.5 mm in diameter and 9 mm in length, were formed using the novel Hamburger-electrode.

  4. Yacare caiman (Caiman yacare) trim hamburger and sausage subjected to different smoking techniques.

    PubMed

    Fernandes, Vitória Regina Takeuchi; Souza Franco, Maria Luiza Rodrigues; Mikcha, Jane Martha Graton; de Souza, Vera Lúcia Ferreira; Gasparino, Eliane; Coutinho, Marcos Eduardo; Tanamati, Augusto; Del Vesco, Ana Paula

    2014-02-01

    Caiman, as well as having skin that, after tanning, produces leather of high added value, exceptional quality and good market value, also possesses meat with a remarkably smooth taste and appearance. This study aimed to characterize hamburger and sausages made from Yacare caiman (Caiman yacare) meat trim. Hot-smoked products contained less moisture than unsmoked products. Protein and ash were higher, respectively, in the hot-smoked hamburger and sausage. Lipids were more abundant in the hot-smoked sausage (9.72%), whereas among the hamburgers they were higher in the liquid-smoked product (6.71%). The hot-smoked products had lower water activity. Hot-smoked products displayed less luminance, but the a* and b* chroma were higher in smoked hamburgers. Taste, texture and general acceptability differed significantly for the hamburger, whereas for the sausage there was a significant effect for texture, salt and purchase intent. For all products, hot smoking resulted in the lowest acceptability. © 2013 Society of Chemical Industry.

  5. New quasi-geostrophic flow estimations for the Earth's core

    NASA Astrophysics Data System (ADS)

    Pais, M. Alexandra

    2014-05-01

    Quasi-geostrophic (QG) flows have been reported in numerical dynamo studies that simulate Boussinesq convection of an electrically conducting fluid inside a rapidly rotating spherical shell. In these cases, the required condition for columnar convection seems to be that inertial waves propagate much faster in the medium than Alfvén waves. QG models are particularly appealing for studies where Earth's liquid core flows are assessed from information contained in geomagnetic data obtained at and above the Earth's surface. Here they make the difference between perceiving only the core-surface expression of the geodynamo and assessing the flow throughout the core interior. The QG approximation has now been used in different studies to invert geomagnetic field models, providing a different kinematic interpretation of the observed geomagnetic field secular variation (SV). Under this new perspective, a large eccentric jet flowing westward under the Atlantic Hemisphere and a cyclonic column under the Pacific were pointed out as interesting features of the flow. A large eccentric jet with similar characteristics has been explained in recent numerical geodynamo simulations in terms of dynamical coupling between the solid core, the liquid core and the mantle. Nonetheless, it requires inner-core crystallization on the eastern hemisphere, contrary to what has been proposed in recent dynamical models for the inner core. Some doubts remain, as we see, concerning the dynamics that can explain the radially outward flow in the eastern core hemisphere actually seen in inverted core flow models. This and other puzzling features justify a new assessment of core flows, taking full advantage of the recent geomagnetic field model COV-OBS and of the experience accumulated over the years on flow inversion. Assuming the QG approximation already eliminates a large part of the non-uniqueness in the inversion.
    Some important non-uniqueness still remains, inherent to the physical model, given

  6. [Habitus, capital and fields: the search for an acting head of the Hamburg Asylum Friedrichsberg in 1897].

    PubMed

    Sammet, Kai

    2005-01-01

    In 1897 Hamburg was searching for an Oberarzt for the asylum Friedrichsberg who would function as acting head under the director Wilhelm Reye (1833-1912). This search was part of an intended reform of the outmoded psychiatric care in Hamburg. During the application procedure the Hamburg Physikus John Wahncau examined all possible candidates and applicants. The article explores the selection process using sociological categories developed by Pierre Bourdieu (habitus, capital, field). The author argues that not only meritocratic attributes led to the choice of one candidate, but also his functional "fitting" into the field in Hamburg.

  7. Long term evolution of wind at the German coasts using newly digitized data of signal stations

    NASA Astrophysics Data System (ADS)

    Tinz, Birger; Wagner, Dörte; Feser, Frauke; Storch, Hans v.

    2017-04-01

    A long-overlooked source of synoptic data collected along the coast of Germany has been discovered and is presently being digitized. The data stem from warning posts in harbors along the coast, so-called "Signalstationen", which recorded estimated wind speed and direction, wave conditions, air pressure and precipitation. The first post began operating in 1877 and the last ceased operation in 1999. Signal stations were positioned close to the shore to convey severe weather warnings of the German Marine Observatory in Hamburg to ships and the coastal population. This was done by raising optical signals. Reports were prepared 3 to 9 times per day. These observations did not enter the regular weather analysis process of the weather service, but were later archived: about 800 handwritten journals are archived at the German Meteorological Service in Hamburg, and some are now available for further analysis. A first inspection of these data indicates a wealth of material well suited for high-resolution description of historical coastal events such as the storm surges in the southern Baltic Sea on 31 January 1913 or in the German Bight on 12 March 1906. The temporal homogeneity is sometimes compromised, and homogenization is required. Estimated wind conditions, available so far at the two stations Travemünde and Schleimünde for more than 100 years, allow for the first time an assessment of changing wind and storm conditions based on wind data (instead of proxies such as annual percentiles of geostrophic wind distributions). The pressure data may be used to generate fine-scale synoptic analyses and also geostrophic wind statistics on spatial scales much shorter than previously possible.

  8. Spread of Measles Virus D4-Hamburg, Europe, 2008–2011

    PubMed Central

    Mihneva, Zefira; Gold, Hermann; Baumgarte, Sigrid; Baillot, Armin; Helble, Rudolph; Roggendorf, Hedwig; Bosevska, Golubinka; Nedeljkovic, Jasminka; Makowka, Agata; Hutse, Veronik; Holzmann, Heidemarie; Aberle, Stefan W.; Cordey, Samuel; Necula, Gheorghe; Mentis, Andreas; Korukluoğlu, Gulay; Carr, Michael; Brown, Kevin E.; Hübschen, Judith M.; Muller, Claude P.; Mulders, Mick N.; Santibanez, Sabine

    2011-01-01

    A new strain of measles virus, D4-Hamburg, was imported from London to Hamburg in December 2008 and subsequently spread to Bulgaria, where an outbreak of >24,300 cases was observed. We analyzed spread of the virus to demonstrate the importance of addressing hard-to-reach communities within the World Health Organization European Region regarding access to medical care and vaccination campaigns. The D4-Hamburg strain appeared during 2009–2011 in Poland, Ireland, Northern Ireland, Austria, Greece, Romania, Turkey, Macedonia, Serbia, Switzerland, and Belgium and was repeatedly reimported to Germany. The strain was present in Europe for >27 months and led to >25,000 cases in 12 countries. Spread of the virus was prevalently but not exclusively associated with travel by persons in the Roma ethnic group; because this travel extends beyond the borders of any European country, measures to prevent the spread of measles should be implemented by the region as a whole. PMID:21801615

  9. A Unified Model of Geostrophic Adjustment and Frontogenesis

    NASA Astrophysics Data System (ADS)

    Taylor, John; Shakespeare, Callum

    2013-11-01

    Fronts, or regions with strong horizontal density gradients, are ubiquitous and dynamically important features of the ocean and atmosphere. In the ocean, fronts are associated with enhanced air-sea fluxes, turbulence, and biological productivity, while atmospheric fronts are associated with some of the most extreme weather events. Here, we describe a new mathematical framework for describing the formation of fronts, or frontogenesis. This framework unifies two classical problems in geophysical fluid dynamics, geostrophic adjustment and strain-driven frontogenesis, and provides a number of important extensions beyond previous efforts. The model solutions closely match numerical simulations during the early stages of frontogenesis, and provide a means to describe the development of turbulence at mature fronts.

  10. Meso-beta scale numerical simulation studies of terrain-induced jet streak mass and momentum perturbations

    NASA Technical Reports Server (NTRS)

    Lin, Yuh-Lang; Kaplan, Michael L.

    1994-01-01

    An in-depth analysis of observed gravity waves and their relationship to precipitation bands over the Montana mesonetwork during the 11-12 July 1981 CCOPE case study indicated two episodes of coherent waves. While geostrophic adjustment, shearing instability, and terrain were all implicated separately or in combination as possible wave generation mechanisms, the lack of upper-air data within the wave genesis region made it difficult to define the genesis processes from observations alone. The first part of this paper, 3D Numerical Modeling Studies of Terrain-Induced Mass/Momentum Perturbations, employs a mesoscale numerical model to help diagnose the intricate early wave generation mechanisms during the first observed gravity wave episode. The meso-beta scale numerical model is used in various simulations to study the role of multiple geostrophic adjustment processes in focusing a region for gravity wave genesis. The second part of this paper, Linear Theory and Theoretical Modeling, investigates the response of non-resting rotating homogeneous and continuously stratified Boussinesq models of the terrestrial atmosphere to temporally impulsive and uniformly propagating three-dimensional localized zonal momentum sources representative of midlatitude jet streaks. The methods of linear perturbation theory applied to the potential vorticity (PV) and wave field equations are used to study the geostrophic adjustment dynamics. The total zonal and meridional wind perturbations are separated into geostrophic and ageostrophic components in order to define and follow the evolution of both the primary and secondary mesocirculations accompanying midlatitude jetogenesis forced by geostrophic adjustment processes. This problem is addressed to help fill the gap in understanding the dynamics and structure of mesoscale inertia-gravity waves forced by geostrophic adjustment processes, both in simple two-dimensional quiescent current systems and in those produced by mesoscale numerical models.

  11. The Hamburger War. Instructor's Guide [and] Student Materials. Business Issues in the Classroom. Revised.

    ERIC Educational Resources Information Center

    Maxey, Phyllis F.; Meier, Stephen C.

    One of a series of units on business issues for high school students, this packet uses the example of hamburger wars ("price wars" between hamburger stands) to introduce students to the ways in which businesses operate in a competitive environment. A teacher's guide and student materials are provided in two separate sections. Following…

  12. Hetonic quartets in a two-layer quasi-geostrophic flow: V-states and stability

    NASA Astrophysics Data System (ADS)

    Reinaud, J. N.; Sokolovskiy, M. A.; Carton, X.

    2018-05-01

    We investigate families of finite core vortex quartets in mutual equilibrium in a two-layer quasi-geostrophic flow. The finite core solutions stem from known solutions for discrete (singular) vortex quartets. Two vortices lie in the top layer and two vortices lie in the bottom layer. Two vortices have a positive potential vorticity anomaly, while the two others have negative potential vorticity anomaly. The vortex configurations are therefore related to the baroclinic dipoles known in the literature as hetons. Two main branches of solutions exist depending on the arrangement of the vortices: the translating zigzag-shaped hetonic quartets and the rotating zigzag-shaped hetonic quartets. By addressing their linear stability, we show that while the rotating quartets can be unstable over a large range of the parameter space, most translating quartets are stable. This has implications on the longevity of such vortex equilibria in the oceans.

  13. Meridional overturning and large-scale circulation of the Indian Ocean

    NASA Astrophysics Data System (ADS)

    Ganachaud, Alexandre; Wunsch, Carl; Marotzke, Jochem; Toole, John

    2000-11-01

    The large-scale Indian Ocean circulation is estimated from a global hydrographic inverse geostrophic box model with a focus on the meridional overturning circulation (MOC). The global model is based on selected recent World Ocean Circulation Experiment (WOCE) sections which in the Indian Basin consist of zonal sections at 32°S, 20°S and 8°S, and a section between Bali and Australia from the Java-Australia Dynamic Experiment (JADE). The circulation is required to conserve mass, salinity, heat, silica and "PO" (170 PO4 + O2). Near-conservation is imposed within layers bounded by neutral surfaces, while permitting advective and diffusive exchanges between the layers. Conceptually, the derived circulation is an estimate of the average circulation for the period 1987-1995. A deep inflow into the Indian Basin of 11±4 Sv is found, which is in the lower range of previous estimates, but consistent with conservation requirements and the global data set. The Indonesian Throughflow (ITF) is estimated at 15±5 Sv. The flow in the Mozambique Channel is of the same magnitude, implying a weak net flow between Madagascar and Australia. A net evaporation of -0.6±0.4 Sv is found between 32°S and 8°S, consistent with independent estimates. No net heat gain is found over the Indian Basin (0.1±0.2 PW north of 32°S) as a consequence of the large warm-water influx from the ITF. Through the use of anomaly equations, the average dianeutral upwelling and diffusion between the sections are required and resolved, with values in the range 1-3×10^-5 cm s^-1 for the upwelling and 2-10 cm^2 s^-1 for the diffusivity.
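    The inverse box model described above solves for unknown reference velocities so that conservation constraints (mass, salt, heat, silica, "PO") are satisfied in a least-squares sense. A toy sketch of the mass-conservation step alone, with two sections, two layers, and entirely invented numbers (the real inversion uses many more constraints and a weighted, underdetermined system):

```python
import numpy as np

# Toy mass-conservation step of a geostrophic box inversion: relative
# transports from thermal wind are adjusted by unknown reference velocities
# b[j] at each section so that every layer conserves mass.
T_rel = np.array([[12.0, -9.0],     # layer 1 transports (Sv) through 2 sections
                  [ 4.0, -8.0]])    # layer 2 (numbers invented)
area = np.array([[2.0e3, 1.5e3],    # section area per layer (arbitrary units)
                 [3.0e3, 2.5e3]])

# each layer l must satisfy: sum_j (T_rel[l, j] + area[l, j] * b[j]) = 0
A = area
rhs = -T_rel.sum(axis=1)
b, *_ = np.linalg.lstsq(A, rhs, rcond=None)   # least-squares reference velocities

residual = T_rel.sum(axis=1) + A @ b          # layer imbalance after adjustment
```

After the adjustment the layer imbalances vanish; in the real model the corresponding residuals are only driven toward zero within prescribed uncertainties.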

  14. Long-term change of potential evapotranspiration over Southwest China and teleconnections with large-scale climate anomalies

    NASA Astrophysics Data System (ADS)

    Liu, B.; Chen, X.; Li, Y.; Chen, Z.

    2017-12-01

    Potential evapotranspiration (PET) is a sensitive factor for atmospheric and ecological systems over Southwest China, a region characterized by intensive karst geomorphology and a fragile environment. Based on daily meteorological data from 94 stations during 1961-2013, the spatiotemporal characteristics of PET are analyzed. The changing characteristics of local meteorological factors and large-scale climatic features are also investigated to explain the potential reasons for changing PET. Study results are as follows: (1) The high-value center of PET, with a mean value of 1097 mm/a, is located in the south, mainly resulting from the regional climatic features of higher air temperature (TEM), longer sunshine duration (SSD) and lower relative humidity (RHU); the low-value center, with a mean value of 831 mm/a, lies in the northeast, primarily attributable to higher RHU and weaker SSD. (2) Annual PET decreases at -10.04 mm decade^-1 before the year 2000 but increases at 50.65 mm decade^-1 thereafter; the dominant factors of PET change are SSD, RHU and wind speed (WIN), with relative contributions of 33.29%, 25.42% and 22.16%, respectively. (3) The abrupt change of PET in 2000 is strongly dominated by large-scale climatic anomalies. The strengthened 850 hPa geostrophic wind (0.51 m s^-1 decade^-1), weakened total cloud cover (-2.25% decade^-1) and reduced 500 hPa water vapor flux (-2.85% decade^-1) have provided advantageous dynamic, thermal and dry conditions for PET over Southwest China since the beginning of the 21st century.
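    Decadal rates such as "-10.04 mm decade^-1" are ordinarily obtained from an ordinary-least-squares trend scaled to ten years. A minimal sketch with synthetic data (the study's exact trend methodology is not specified in the abstract):

```python
import numpy as np

def decadal_trend(years, values):
    """Ordinary-least-squares linear trend, expressed per decade."""
    slope_per_year = np.polyfit(years, values, 1)[0]
    return 10.0 * slope_per_year

# synthetic check: a PET-like annual series rising exactly 5 units per decade
years = np.arange(1961, 2014)
pet = 900.0 + 0.5 * (years - 1961)   # +0.5 per year
trend = decadal_trend(years, pet)    # -> 5.0
```

Splitting the series at a change point (here, the year 2000) and fitting each segment separately recovers the pre- and post-2000 rates quoted above.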

  15. Using large-scale diagnostic quantities to investigate change in East Coast Lows

    NASA Astrophysics Data System (ADS)

    Ji, Fei; Evans, Jason P.; Argueso, Daniel; Fita, Lluis; Di Luca, Alejandro

    2015-11-01

    East Coast Lows (ECLs) are intense low-pressure systems that affect the eastern seaboard of Australia. They have attracted research interest for both their destructive nature and water supplying capability. Estimating the changes in ECLs in the future has a major impact on emergency response as well as water management strategies for the coastal communities on the east coast of Australia. In this study, ECLs were identified using two large-scale diagnostic quantities: isentropic potential vorticity (IPV) and geostrophic vorticity (GV), which were calculated from outputs of historical and future regional climate simulations from the NSW/ACT regional climate modelling (NARCliM) project. The diagnostic results for the historical period were evaluated against a subjective ECL event database. Future simulations using a high emission scenario were examined to estimate changes in frequency, duration, and intensity of ECLs. The use of a relatively high resolution regional climate model makes this the first study to examine future changes in ECLs while resolving the full range of ECL sizes which can be as small as 100-200 km in diameter. The results indicate that it is likely that there will be fewer ECLs, with weaker intensity in the future. There could also be a seasonal shift in ECLs from cool months to warm months. These changes have the potential to significantly impact the water security on the east coast of Australia.
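    Geostrophic vorticity (GV), one of the two diagnostics above, can be computed from a gridded pressure field as ζ_g = ∇²p/(ρf). A minimal finite-difference sketch on a periodic grid (the density, Coriolis parameter, and idealized pressure bowl below are assumed values, not NARCliM settings):

```python
import numpy as np

RHO = 1.25      # near-surface air density, kg/m^3 (assumed)
F_SH = -1.0e-4  # Coriolis parameter, southern mid-latitudes (assumed), s^-1

def geostrophic_vorticity(p, dx, dy, f=F_SH, rho=RHO):
    """zeta_g = laplacian(p) / (rho * f); centred differences, periodic edges."""
    d2x = (np.roll(p, -1, axis=1) - 2.0 * p + np.roll(p, 1, axis=1)) / dx**2
    d2y = (np.roll(p, -1, axis=0) - 2.0 * p + np.roll(p, 1, axis=0)) / dy**2
    return (d2x + d2y) / (rho * f)

# demo: an idealized low (quadratic pressure bowl) gives cyclonic
# (negative in the southern hemisphere) geostrophic vorticity at its centre
n, h = 21, 100e3                              # 21x21 grid, 100 km spacing
x = (np.arange(n) - n // 2) * h
X, Y = np.meshgrid(x, x)
p = 1.005e5 + 1.0e-9 * (X**2 + Y**2)          # curvature 2e-9 Pa/m^2 per axis
zeta = geostrophic_vorticity(p, h, h)         # centre value ~ -3.2e-5 s^-1
```

In a tracking scheme like the one described, closed regions where |ζ_g| exceeds a threshold would then be identified and followed in time as candidate ECLs.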

  16. Migrants' educational success through innovation: The case of the Hamburg bilingual schools

    NASA Astrophysics Data System (ADS)

    Duarte, Joana

    2011-12-01

    Although Germany has experienced net in-migration for the past five decades, this fact has only recently been officially acknowledged. Furthermore, Germany is marked by a general monolingual self-concept very much attached to the idea of a nation-state with one homogeneous language. However, in large urban areas of Germany about 35 per cent of the population has a migration background, as has almost every second child enrolling in primary school. Hence the country is marked by this dichotomy between a monolingual policy discourse and a multilingual society, manifested in everyday life and, as a consequence, in educational institutions. The fact is that this political attitude towards Germany's own migration history and migrants has led to an educational gap between students with a migration background and their monolingual peers. In 2000, a project was started in Hamburg, aiming to overcome this educational gap and involving the creation of bilingual schools for some of the largest migrant languages. Bilingual classes were thus set up for the following language combinations: German-Portuguese, German-Italian, German-Spanish and German-Turkish, and were evaluated by the University of Hamburg. This paper reports on the model used and the specific school outcomes of the students attending these classes.

  17. Learning by Doing: Science Education at the Hamburg Observatory

    ERIC Educational Resources Information Center

    Wolfschmidt, Gudrun

    2015-01-01

    In my contribution I would like to offer three different examples: the activities of the association "Förderverein Hamburger Sternwarte", science education in the "astronomy workshop", and the teaching of the history of science and technology for university students.

  18. Evaluation of bias in the Hamburg wheel tracking device.

    DOT National Transportation Integrated Search

    2013-09-01

    As the list of states adopting the Hamburg Wheel Tracking Device (HWTD) continues to grow, there is a need to evaluate how results are utilized. American Association of State Highway and Transportation Officials T 324 does not standardize the analysi...

  19. Effects of Geostrophic Kinetic Energy on the Distribution of Mesopelagic Fish Larvae in the Southern Gulf of California in Summer/Fall Stratified Seasons.

    PubMed

    Contreras-Catala, Fernando; Sánchez-Velasco, Laura; Beier, Emilio; Godínez, Victor M; Barton, Eric D; Santamaría-Del-Angel, Eduardo

    2016-01-01

    Effects of geostrophic kinetic energy flux on the three-dimensional distribution of fish larvae of mesopelagic species (Vinciguerria lucetia, Diogenichthys laternatus, Benthosema panamense and Triphoturus mexicanus) in the southern Gulf of California during summer and fall, the seasons of stronger stratification, were analyzed. The greatest larval abundance was found at sampling stations in geostrophic kinetic energy-poor areas (<7.5 J/m3), where the distribution of the dominant species tended to be stratified. Larvae of V. lucetia (average abundance of 318 larvae/10m2) and B. panamense (174 larvae/10m2) were mostly located in and above the pycnocline (typically ~40 m depth). In contrast, larvae of D. laternatus (60 larvae/10m2) were mainly located in and below the pycnocline. On the other hand, at sampling stations in geostrophic kinetic energy-rich areas (>21 J/m3), where mesoscale eddies were present, the larvae of the dominant species had low abundance and were spread more evenly through the water column, in spite of the water-column stratification. For example, in a cyclonic eddy, V. lucetia larvae (34 larvae/10m2) extended their distribution down to at least the 200 m sampling limit below the pycnocline, while D. laternatus larvae (29 larvae/10m2) were found right up to the surface, both probably as a consequence of mixing and secondary circulation in the eddy. Results showed that the level of the geostrophic kinetic energy flux affects the abundance and the three-dimensional distribution of mesopelagic fish larvae during the seasons of stronger stratification, indicating that areas with low geostrophic kinetic energy may be advantageous for the feeding and development of mesopelagic fish larvae because of greater water-column stability.

  1. Insecure Identities: Unaccompanied Minors as Refugees in Hamburg

    ERIC Educational Resources Information Center

    Schroeder, Joachim

    2012-01-01

    This paper analyses the financial circumstances and social income of nearly one hundred unaccompanied minors who have come to Hamburg as refugees from various regions of Africa. It is based on extensive qualitative surveys, analysing their objective conditions of life and in particular their legal situation. A wide range of interview material and…

  2. Energy transfers in large-scale and small-scale dynamos

    NASA Astrophysics Data System (ADS)

    Samtaney, Ravi; Kumar, Rohit; Verma, Mahendra

    2015-11-01

    We present the energy transfers, mainly energy fluxes and shell-to-shell energy transfers, in small-scale dynamo (SSD) and large-scale dynamo (LSD) regimes, using numerical simulations of MHD turbulence for Pm = 20 (SSD) and Pm = 0.2 (LSD) on a 1024^3 grid. For SSD, we demonstrate that the magnetic energy growth is caused by nonlocal energy transfers from the large-scale or forcing-scale velocity field to the small-scale magnetic field. The peak of these energy transfers moves towards lower wavenumbers as the dynamo evolves, which is the reason for the growth of the magnetic field at large scales. The energy transfers U2U (velocity to velocity) and B2B (magnetic to magnetic) are forward and local. For LSD, we show that the magnetic energy growth takes place via energy transfers from the large-scale velocity field to the large-scale magnetic field. We observe forward U2U and B2B energy fluxes, similar to SSD.
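    Shell-to-shell transfer diagnostics like these start from partitioning spectral energy into wavenumber shells. A minimal sketch of that shell binning for a 2-D velocity field (the actual analysis bins triadic transfer terms in 3-D; this only illustrates the shell decomposition itself):

```python
import numpy as np

def shell_spectrum(u, v):
    """Shell-binned kinetic energy spectrum of a doubly periodic 2-D field."""
    n = u.shape[0]
    uh = np.fft.fft2(u) / n**2                  # normalized Fourier amplitudes
    vh = np.fft.fft2(v) / n**2
    e = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2)   # energy per Fourier mode
    k = np.fft.fftfreq(n, d=1.0 / n)            # integer wavenumbers
    kmag = np.sqrt(k[:, None]**2 + k[None, :]**2)
    shells = np.rint(kmag).astype(int)          # nearest-integer shell index
    return np.bincount(shells.ravel(), weights=e.ravel())

# demo: a single cosine mode at wavenumber 4 puts all its energy
# (domain mean of u^2/2 = 0.25) into shell k = 4
n = 64
x = 2.0 * np.pi * np.arange(n) / n
u = np.cos(4.0 * x)[None, :] * np.ones((n, 1))
v = np.zeros((n, n))
E = shell_spectrum(u, v)
```

A transfer function T(k, q) then measures the rate at which shell q gives energy to shell k through the nonlinear terms; the forward/local versus nonlocal character quoted above refers to the structure of that function.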

  3. Study on the synergic effect of natural compounds on the microbial quality decay of packed fish hamburger.

    PubMed

    Corbo, M R; Speranza, B; Filippone, A; Granatiero, S; Conte, A; Sinigaglia, M; Del Nobile, M A

    2008-10-31

    The effectiveness of natural compounds in slowing down the microbial quality decay of refrigerated fish hamburgers is addressed in this study. In particular, the control of microbiological spoilage by the combined use of three antimicrobials, and the determination of their optimal composition to extend the fish hamburger microbiological acceptability limit (MAL), are the main objectives of this work. Thymol, grapefruit seed extract (GFSE) and lemon extract were tested by monitoring the cell growth of the main fish spoilage microorganisms (Pseudomonas fluorescens, Photobacterium phosphoreum and Shewanella putrefaciens) inoculated in fish hamburgers, as well as the growth of mesophilic and psychrotrophic bacteria. A Central Composite Design (CCD) was developed to highlight a possible synergic effect of the above natural compounds. Results showed an increase in the MAL value for hamburgers mixed with the antimicrobial compounds compared to the control sample. The optimal antimicrobial composition, which corresponds to the maximal MAL value determined in this study, is 110 mg L^-1 of thymol, 100 mg L^-1 of GFSE and 120 mg L^-1 of lemon extract. The presence of the natural compounds delays the sensorial quality decay without compromising the flavor of the fish hamburgers.

  4. Eddy Vertical Structure Observed by Deepgliders: Evidence for the Enstrophy Inertial Range Cascade in Geostrophic Turbulence

    NASA Astrophysics Data System (ADS)

    Eriksen, C. C.

    2016-12-01

    Full water column temperature and salinity profiles and estimates of average current collected with Deepgliders were used to analyze the vertical structure of mesoscale features in the western North Atlantic Ocean. Fortnightly repeat surveys over a 58 km by 58 km region centered at the Bermuda Atlantic Time Series (BATS) site southeast of Bermuda were carried out for 3 and 9 months in successive years. In addition, a section from Bermuda along Line W across the Gulf Stream to the New England Continental Slope and a pair of sections from Bermuda to the Bahamas were carried out. Absolute geostrophic current estimates constructed from these measurements and projected upon flat-bottom resting-ocean dynamic modes for the regions indicate nearly equal kinetic energy in the barotropic mode and the first baroclinic mode. An empirical orthogonal mode decomposition of dynamic mode amplitudes demonstrates strong coupling of the barotropic and first baroclinic modes, a result resembling those reported for the Polymode experiment three decades ago. Higher baroclinic modes are largely independent of one another. Energy in the baroclinic modes varies in inverse proportion to mode number cubed, a result predicted for an enstrophy inertial-range cascade of geostrophic turbulence and believed to be newly detected by these observations. This (mode number)^-3 dependence is found at BATS and across the Gulf Stream and Sargasso Sea. On two occasions, submesoscale anticyclones were detected at BATS whose vertical structure closely resembled the second baroclinic mode. Anomalously cold and fresh water within their cores (by as much as 3.5°C and 0.5 in salinity) suggests they were of subpolar (likely Labrador Sea) origin. These provided temporary perturbations to the vertical mode number energy spectrum.
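    The reported (mode number)^-3 dependence is the kind of power law usually estimated by a least-squares fit in log-log space. A minimal sketch with idealized modal energies (the amplitude coefficient is invented):

```python
import numpy as np

# Fit the spectral slope of modal energies in log-log space; the enstrophy
# inertial-range cascade predicts a slope of -3. Energies here are idealized.
modes = np.arange(1, 11).astype(float)
energy = 2.5 * modes**-3                 # exact (mode number)^-3 spectrum

slope, log_amp = np.polyfit(np.log(modes), np.log(energy), 1)
# slope is recovered as -3 for these idealized data
```

With real glider-derived modal energies the fitted slope carries an uncertainty, and comparing it against -3 is the test of the enstrophy-cascade prediction.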

  5. [Improving Mental Health Literacy and Mental Illness Stigma in the Population of Hamburg].

    PubMed

    Lambert, Martin; Härter, Martin; Arnold, Detlef; Dirmaier, Jörg; Tlach, Lisa; Liebherz, Sarah; Sänger, Sylvia; Karow, Anne; Brandes, Andreas; Sielaff, Gyöngyver; Bock, Thomas

    2015-07-01

    Evidence shows that poor mental health literacy and stigmatization have negative consequences for mental health. However, studies of interventions to improve both are often heterogeneous in methodology and results. The psychenet campaign in Hamburg was developed and implemented in collaboration with patients and relatives and comprised multidimensional interventions focusing on education and contact with patients. The main goals were the improvement of mental health literacy, destigmatization, and long-term implementation within Hamburg's mental health care system. © Georg Thieme Verlag KG Stuttgart · New York.

  6. Protracted outbreak of S. Enteritidis PT 21c in a large Hamburg nursing home

    PubMed Central

    Frank, Christina; Buchholz, Udo; Maaß, Monika; Schröder, Arthur; Bracht, Karl-Hans; Domke, Paul-Gerhard; Rabsch, Wolfgang; Fell, Gerhard

    2007-01-01

    Background During August 2006, a protracted outbreak of Salmonella (S.) Enteritidis infections in a large Hamburg nursing home was investigated. Methods A site visit of the home was conducted and food suppliers' premises were tested for Salmonella. Among nursing home residents a cohort study was carried out, focusing on foods consumed in the three days before the first part of the outbreak. Instead of relying on residents' memory, data from the home's patient food-ordering system were used as exposure data. S. Enteritidis isolates from patients and suspected food vehicles were phage typed and compared. Results Within a population of 822 nursing home residents, 94 case patients among residents (1 fatality) and 17 among staff members were counted from 6 through 29 August. The outbreak peaked 7 through 9 August, two days after a spell of very warm summer weather. S. Enteritidis was consistently recovered from patients' stools throughout the outbreak. Among the food items served 5 through 7 August, the cohort study pointed to the afternoon cake on all three days as a potential risk factor for disease. Investigation of the bakery supplying the cake yielded S. Enteritidis from cakes sampled 31 August. Comparison of the isolates by phage typing demonstrated both the isolates from patients and those from the cake to be the exceedingly rare phage type 21c. Conclusion Cakes (various types served on various days) contaminated with S. Enteritidis were the likely vehicle of the outbreak in the nursing home. While the cakes were probably contaminated with a low pathogen dose throughout the outbreak period, high ambient summer temperatures and failure to keep the cake refrigerated led to a high pathogen dose in cake on some days and in some of the housing units. This would explain the initial peak of cases, but also the drawn-out nature of the outbreak, with cases until the end of August. Suggestions are made to nursing homes to aid in outbreak prevention. Early outbreak detection is crucial, such that

  7. How different are the Liège and Hamburg atlases of the solar spectrum?

    NASA Astrophysics Data System (ADS)

    Doerr, H.-P.; Vitas, N.; Fabbian, D.

    2016-05-01

    Context. The high-fidelity solar spectral atlas prepared by Delbouille et al. (Liège atlas, 1973) and the atlas by Neckel (Hamburg atlas, 1999, Sol. Phys., 184, 421) are widely recognised as the most important collections of reference spectra of the Sun at disc centre in the visible wavelength range. The two datasets serve as fundamental resources for many researchers, in particular for chemical abundance analyses. But despite their similar published specifications (spectral resolution and noise level), the shapes of the spectral lines in the two atlases differ significantly and systematically. Aims: Knowledge of any instrumental degradations is imperative to fully exploit the information content of spectroscopic data. We seek to quantify these differences and explain their possible sources. We provide the wavelength-dependent correction parameters that need to be taken into account when the spectra are compared with synthetic data, for instance. Methods: A parametrically degraded version of the Hamburg spectrum was fitted to the Liège spectrum. The parameters of the model (wavelength shift, broadening, intensity scaling, and intensity offset) represent the different characteristics of the respective instruments, observational strategies, and data processing. Results: The wavelength scales of the Liège and Hamburg atlases differ on average by 0.5 mÅ with a standard deviation of ±2 mÅ, except for a peculiar region around 5500 Å. The continuum levels are offset by up to 18% below 5000 Å, but remain stable at a 0.8% difference towards the red. We find no evidence for spectral stray light in the Liège spectrum. Its resolving power is almost independent of wavelength but limited to about 216 000, which is two to six times lower than specified. When accounting for the degradations determined in this work, the spectra of the two

  8. Escherichia coli O157:H7 reduction in hamburgers with regard to premature browning of minced beef, colour score and method for determining doneness.

    PubMed

    Boqvist, Sofia; Fernström, Lise-Lotte; Alsanius, Beatrix W; Lindqvist, Roland

    2015-12-23

    This study investigated the effect of premature browning (PMB) on the survival of Escherichia coli O157:H7 in beef hamburgers after cooking, with respect to the interior colour of the hamburger and recommendations to cook hamburgers to a core temperature of 71 °C. Assessment of doneness by visual inspection or by measurement of internal temperature was compared in terms of survival, and the increased relative risk of illness due to PMB was estimated. On the last consume-by day, hamburgers made from minced meat packaged in 80/20 O2/CO2 (MAP hamburgers) and from meat minced at retail and packaged under atmospheric conditions (control hamburgers) were inoculated with a gfp-tagged strain of E. coli O157:H7 (E. coli O157:H7gfp+). Hamburgers were cooked for different times, with the core temperature assessed every 30 s, and were cut in half after cooking. Doneness was evaluated based on visual judgement of the internal colour using a score chart (C-score) from 'uncooked' (score 1) to 'tan with no evidence of pink' (score 5). An alternative five-point score chart (TCC-score), including the texture of the meat, the clarity of the meat juice and the internal colour, was also developed. Enumeration of viable E. coli O157:H7gfp+ in cooked hamburgers was based on fluorescent colonies recovered from plates. Results showed that MAP hamburgers developed PMB when compared with controls (P=0.0003) and that the shortest cooking time to reach the highest C-score was 6 min for MAP and 11 min for control hamburgers. The mean temperature in the MAP hamburger was then 60.3 °C. The TCC-score reduced the difference between MAP and control hamburgers. It was also shown that the survival of E. coli O157:H7gfp+ was highest in MAP hamburgers. The predicted absolute risks of illness were highest for MAP hamburgers at all C-scores, and the relative risk associated with PMB increased with doneness. For a C-score of 4 (slightly pink) the predicted relative risk of illness was 300 times higher for MAP hamburger than for

  9. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm in diameter, could be used to correctly simulate the overall or PNL noise of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5×10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large-jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a short, small-scale ejector, which led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.

  10. [Urban Health (StadtGesundheit): The Wider Perspective Exemplified by the City State of Hamburg].

    PubMed

    Fehr, R; Fertmann, R; Stender, K-P; Lettau, N; Trojan, A

    2016-09-01

    Public health and city planning have common roots, and in many places they are now reuniting under the heading of urban health. Organizing this field adequately requires a broad, integrative view of medical care, health promotion, and health in all urban policies. Given current crises and developments, including climate change and globalization, such a wider perspective should also be useful for Germany. Using the City State of Hamburg as an example and combining historic and systematic approaches, we explore the preconditions for in-depth analyses. Our results show that health is a significant topic in Hamburg urban policy, featuring a broad range of structures, processes and actors, both within the health sector and far beyond. Health promotion over the last 30 years evolved notably from a niche topic into an established field with remarkable cooperative structures. The tradition of comprehensive reporting on urban health in Hamburg, initiated more than 200 years ago, is no longer alive today. However, local health reporting continues to integrate a wide range of diverse topics. Communication among the Hamburg health actors - beyond straightforward medical quality assurance - does not seem to focus on critical evaluations, e.g. concerning social and ecological sustainability. A prerequisite for in-depth analyses, including external comparisons, is to secure permanent access to relevant sources; robust approaches to this end, however, seem to be lacking. © Georg Thieme Verlag KG Stuttgart · New York.

  11. Review and analysis of Hamburg Wheel Tracking device test data.

    DOT National Transportation Integrated Search

    2014-02-01

    The Hamburg Wheel Tracking Device (HWTD) test (TEX-242-F) and the Kansas Test Method KT-56 (KT-56), or modified Lottman test, have been used in Kansas for the last 10 years or so to predict rutting and moisture damage potential of Superpave mixes...

  12. Absolute Geostrophic Velocity Inverted from World Ocean Atlas 2013 (WOAV13) with the P-Vector Method

    DTIC Science & Technology

    2015-11-01

    The WOAV13 dataset comprises 3D global gridded climatological fields of absolute geostrophic velocity inverted from World Ocean Atlas 2013 (WOA13) temperature and salinity fields using the P-vector method. It provides a climatological velocity field that is... Dataset Identifier: gov.noaa.nodc:0121576. Creator: NOAP Lab, Department of Oceanography, Naval Postgraduate School, Monterey, CA.
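    The P-vector inversion used to build WOAV13 is beyond a short sketch, but the surface geostrophic balance it ultimately rests on is easy to illustrate. A minimal Python sketch (the function name, uniform grid spacing, and a single latitude per call are my assumptions, not part of the dataset documentation):

    ```python
    import numpy as np

    G = 9.81          # gravitational acceleration, m/s^2
    OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

    def geostrophic_velocity(eta, lat_deg, dx, dy):
        """Surface geostrophic velocity (u, v) from absolute dynamic
        topography eta (m) on a regular grid with spacings dx, dy (m):
            u = -(g/f) * d(eta)/dy,   v = (g/f) * d(eta)/dx,
        where f = 2*Omega*sin(latitude) is the Coriolis parameter.
        eta is indexed [y, x]; the balance degenerates near the
        equator, where f -> 0."""
        f = 2.0 * OMEGA * np.sin(np.radians(lat_deg))
        deta_dy, deta_dx = np.gradient(eta, dy, dx)
        return -(G / f) * deta_dy, (G / f) * deta_dx
    ```

    A sea surface sloping upward to the north then yields a purely westward flow in the northern hemisphere, as expected from geostrophy.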

  13. Hass avocado modulates postprandial vascular reactivity and postprandial inflammatory responses to a hamburger meal in healthy volunteers.

    PubMed

    Li, Zhaoping; Wong, Angela; Henning, Susanne M; Zhang, Yanjun; Jones, Alexis; Zerlin, Alona; Thames, Gail; Bowerman, Susan; Tseng, Chi-Hong; Heber, David

    2013-02-26

    Hass avocados are rich in monounsaturated fatty acids (oleic acid) and antioxidants (carotenoids, tocopherols, polyphenols) and are often eaten as a slice in a sandwich containing hamburger or other meats. Hamburger meat forms lipid peroxides during cooking. After ingestion, the stomach functions as a bioreactor generating additional lipid peroxides and this process can be inhibited when antioxidants are ingested together with the meat. The present pilot study was conducted to investigate the postprandial effect of the addition of 68 g of avocado to a hamburger on vasodilation and inflammation. Eleven healthy subjects on two separate occasions consumed either a 250 g hamburger patty alone (ca. 436 cal and 25 g fat) or together with 68 grams of avocado flesh (an additional 114 cal and 11 g of fat for a total of 550 cal and 36 g fat), a common culinary combination, to assess effects on vascular health. Using the standard peripheral arterial tonometry (PAT) method to calculate the PAT index, we observed significant vasoconstriction 2 hours following hamburger ingestion (2.19 ± 0.36 vs. 1.56 ± 0.21, p = 0.0007), which did not occur when the avocado flesh was ingested together with the burger (2.17 ± 0.57 vs. 2.08 ± 0.51, NS p = 0.68). Peripheral blood mononuclear cells were isolated from postprandial blood samples and the Ikappa-B alpha (IκBα) protein concentration was determined to assess effects on inflammation. At 3 hours, there was a significant preservation of IκBα (131% vs. 58%, p = 0.03) when avocado was consumed with the meat compared to meat alone, consistent with reduced activation of the NF-kappa B (NFκB) inflammatory pathway. IL-6 increased significantly at 4 hours in postprandial serum after consumption of the hamburger, but no change was observed when avocado was added. Postprandial serum triglyceride concentration increased, but did not further increase when avocado was ingested with the burger compared to burger alone despite the added fat and

  14. Poincare oscillations and geostrophic adjustment in a rotating paraboloid

    NASA Astrophysics Data System (ADS)

    Kalashnik, M.; Kakhiani, V.; Patarashvili, K.; Tsakadze, S.

    2009-10-01

    Free liquid oscillations (Poincare oscillations) in a rotating paraboloid are investigated theoretically and experimentally. Within the framework of shallow-water theory, with account taken of the centrifugal force, expressions for the free oscillation frequencies are obtained and corrections to the frequencies related to the finiteness of the liquid depth are found. It is shown that in the rotating liquid, apart from the wave modes of free oscillations, a stationary vortex mode is also generated, that is, a process of geostrophic adjustment takes place. Solutions of the shallow-water equations which describe the wave dynamics of the adjustment process are presented. In the experiments, the wave and vortex modes were excited by removing a previously immersed hemisphere from the central part of the paraboloid. Good agreement between theory and experiment was obtained.
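    The paper's frequency expressions include centrifugal and finite-depth corrections specific to the paraboloid; as a simpler, hedged reference point, the textbook dispersion relation for Poincare (inertia-gravity) waves in flat-bottom rotating shallow water (not the authors' corrected expressions) can be sketched as:

    ```python
    import numpy as np

    def poincare_frequency(f, H, k, l=0.0, g=9.81):
        """Poincare-wave dispersion relation in rotating shallow water
        of constant depth H:  omega^2 = f^2 + g*H*(k^2 + l^2),
        with Coriolis parameter f (1/s) and horizontal wavenumbers
        k, l (1/m). All wave frequencies are bounded below by |f|."""
        return np.sqrt(f**2 + g * H * (k**2 + l**2))
    ```

    In the long-wave limit (k, l -> 0) the frequency approaches the inertial frequency f; the steady, zero-frequency part of the initial disturbance cannot radiate away as waves, which is precisely the geostrophically adjusted vortex mode the experiments observe.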

  15. Large-Scale Disasters

    NASA Astrophysics Data System (ADS)

    Gad-El-Hak, Mohamed

    "Extreme" events - including climatic events, such as hurricanes, tornadoes, and drought - can cause massive disruption to society, including large death tolls and property damage in the billions of dollars. Events in recent years have shown the importance of being prepared and that countries need to work together to help alleviate the resulting pain and suffering. This volume presents a review of the broad research field of large-scale disasters. It establishes a common framework for predicting, controlling and managing both manmade and natural disasters. There is a particular focus on events caused by weather and climate change. Other topics include air pollution, tsunamis, disaster modeling, the use of remote sensing and the logistics of disaster management. It will appeal to scientists, engineers, first responders and health-care professionals, in addition to graduate students and researchers who have an interest in the prediction, prevention or mitigation of large-scale disasters.

  16. [Hamburger consumption patterns and exposure assessment for verocytotoxigenic Escherichia coli (VTEC): simulation model].

    PubMed

    Signorini, M L; Marín, V; Quinteros, C; Tarabla, H

    2009-01-01

    A quantitative risk assessment was developed for verocytotoxigenic Escherichia coli (VTEC) associated with hamburger consumption. The assessment (simulation model) considers the distribution, storage and consumption patterns of hamburgers. The prevalence and concentration of VTEC were modelled at various stages along the agri-food beef production system using input derived from Argentinean data, whenever possible. The model predicted an infection risk of 4.45 x 10(-4) per meal for adults. The risk values obtained for children were 2.6 x 10(-4), 1.38 x 10(-5) and 4.54 x 10(-7) for infection, Hemolytic Uremic Syndrome (HUS) and mortality, respectively. The risk of infection and HUS was positively correlated with bacterial concentration in meat (r = 0.664). There was a negative association between homemade hamburgers (r = -0.116) and the risk of illness; however, this association was attributed to differences between retail and household storage conditions (r = -0.567) rather than to intrinsic characteristics of the product. The most sensitive points of the production system were identified through the risk assessment; therefore, these can be utilized as a basis to apply different risk management policies in public health.
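    The published model is far more detailed (a full farm-to-fork chain with Argentinean input distributions), but the skeleton of such a quantitative risk assessment fits in a few lines of Python. All distributions and parameter values below are illustrative placeholders, not the authors' inputs:

    ```python
    import math
    import random

    def simulate_infection_risk(n_meals=100_000, prevalence=0.05,
                                mean_dose=2.0, r=1e-3, seed=1):
        """Monte Carlo estimate of per-meal infection risk:
        - a meal is contaminated with probability `prevalence`;
        - if contaminated, the ingested dose (CFU) is drawn from an
          exponential distribution with mean `mean_dose`;
        - infection follows the exponential dose-response model
          P(infection | dose) = 1 - exp(-r * dose)."""
        rng = random.Random(seed)
        total = 0.0
        for _ in range(n_meals):
            if rng.random() < prevalence:
                dose = rng.expovariate(1.0 / mean_dose)
                total += 1.0 - math.exp(-r * dose)
        return total / n_meals
    ```

    Sensitivity analysis then amounts to re-running the simulation while perturbing one input at a time, which is how the critical points of the production chain get ranked.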

  17. On the scaling of small-scale jet noise to large scale

    NASA Technical Reports Server (NTRS)

    Soderman, Paul T.; Allen, Christopher S.

    1992-01-01

    An examination was made of several published jet noise studies for the purpose of evaluating scale effects important to the simulation of jet aeroacoustics. Several studies confirmed that small conical jets, one as small as 59 mm diameter, could be used to correctly simulate the overall noise or perceived noise level (PNL) of large jets dominated by mixing noise. However, the detailed acoustic spectra of large jets are more difficult to simulate because of the lack of broad-band turbulence spectra in small jets. One study indicated that a jet Reynolds number of 5 x 10^6 based on exhaust diameter enabled the generation of broad-band noise representative of large jet mixing noise. Jet suppressor aeroacoustics is even more difficult to simulate at small scale because of the small mixer nozzles with flows sensitive to Reynolds number. Likewise, one study showed incorrect ejector mixing and entrainment using a small-scale, short ejector that led to poor acoustic scaling. Conversely, fairly good results were found with a longer ejector and, in a different study, with a 32-chute suppressor nozzle. Finally, it was found that small-scale aeroacoustic resonance produced by jets impacting ground boards does not reproduce at large scale.
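    The Reynolds-number criterion quoted above is easy to apply when sizing a model jet. A small Python check (the kinematic viscosity value and the example nozzle conditions are my assumptions):

    ```python
    def jet_reynolds_number(velocity, diameter, nu=1.5e-5):
        """Exit Reynolds number Re = U * D / nu for a round jet.
        nu defaults to the kinematic viscosity of ambient-temperature
        air (~1.5e-5 m^2/s); heated jets have higher nu and thus a
        lower Re for the same U and D."""
        return velocity * diameter / nu

    # a sonic cold jet from the 59 mm nozzle mentioned above
    re_small = jet_reynolds_number(340.0, 0.059)
    ```

    That works out to Re of roughly 1.3 x 10^6, well short of the 5 x 10^6 threshold cited for representative broad-band mixing noise, which illustrates why small nozzles struggle to reproduce detailed spectra.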

  18. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  19. Large Scale Metal Additive Techniques Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nycz, Andrzej; Adediran, Adeola I; Noakes, Mark W

    2016-01-01

    In recent years, additive manufacturing has made long strides toward becoming a mainstream production technology. Particularly strong progress has been made in large-scale polymer deposition. However, large-scale metal additive manufacturing has not yet reached parity with large-scale polymer deposition. This paper is a review study of metal additive techniques in the context of building large structures. Current commercial devices are capable of printing metal parts on the order of several cubic feet, compared to hundreds of cubic feet on the polymer side. In order to follow the polymer progress path, several factors are considered: potential to scale, economy, environmental friendliness, material properties, feedstock availability, robustness of the process, quality and accuracy, potential for defects, and post processing, as well as potential applications. This paper focuses on the current state of the art of large-scale metal additive technology with a focus on expanding the geometric limits.

  20. Climate Education at the University of Hamburg

    NASA Astrophysics Data System (ADS)

    Dilly, Oliver; Stammer, Detlef; Pfeiffer, Eva-Maria

    2010-05-01

    The new graduate School of Integrated Climate Sciences (www.sicss.de) at the KlimaCampus of the University of Hamburg opened on October 20, 2009 and comprises a 2-yr MSc program (120 ECTS, 30 compulsory, 90 elective) and a 3-yr doctoral program (12 ECTS). About 40 students were enrolled in early 2010. The interdisciplinary MSc program is based on a number of disciplines such as meteorology, geophysics, oceanography, geosciences and also economics and social sciences. These disciplines are required to address the key issues related to climate change effectively. The graduate school guides pupils and BSc students with competence in maths and physics on how to become climate experts. Recruitment is done internationally at fairs, university open days and directly at schools and institutions of higher education. A BSc degree in one of the disciplines listed above is a prerequisite for a successful application. Climate experts are needed both in research and in the professional world outside universities and research institutions. Accordingly, connections within and outside the university are continuously explored, and soft skills for communication with politics and the public are included in the MSc and PhD curricula. Since the graduate school was established within the cluster of excellence ‘Integrated Climate Analysis and Prediction' (www.clisap.de), this school represents a prototype for graduate programs at the University of Hamburg. Advantages and limitations of this Climate System School concept will be discussed.

  1. Skin Friction Reduction Through Large-Scale Forcing

    NASA Astrophysics Data System (ADS)

    Bhatt, Shibani; Artham, Sravan; Gnanamanickam, Ebenezer

    2017-11-01

    Flow structures in a turbulent boundary layer larger than an integral length scale (δ), referred to as large-scales, interact with the finer scales in a non-linear manner. By targeting these large-scales and exploiting this non-linear interaction, wall shear stress (WSS) reduction of over 10% has been achieved. The plane wall jet (PWJ), a boundary layer which has highly energetic large-scales that become turbulent independent of the near-wall finer scales, is the chosen model flow field. Its unique configuration allows for the independent control of the large-scales through acoustic forcing. Perturbation wavelengths from about 1 δ to 14 δ were considered, with a reduction in WSS for all wavelengths considered. This reduction, over a large subset of the wavelengths, scales with both inner and outer variables, indicating a mixed scaling of the underlying physics, while also showing dependence on the PWJ global properties. A triple decomposition of the velocity fields shows an increase in coherence due to forcing, with a clear organization of the small-scale turbulence with respect to the introduced large-scale. The maximum reduction in WSS occurs when the introduced large-scale acts in a manner so as to reduce the turbulent activity in the very near wall region. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-16-1-0194 monitored by Dr. Douglas Smith.

  2. Mapping sub-surface geostrophic currents from altimetry and a fleet of gliders

    NASA Astrophysics Data System (ADS)

    Alvarez, A.; Chiggiato, J.; Schroeder, K.

    2013-04-01

    Integrating the observations gathered by different platforms into a unique physical picture of the environment is a fundamental aspect of networked ocean observing systems. These are constituted by a spatially distributed set of sensors and platforms that simultaneously monitor a given ocean region. Remote sensing from satellites is an integral part of present ocean observing systems. Due to their autonomy, mobility and controllability, underwater gliders are envisioned to play a significant role in the development of networked ocean observatories. Exploiting synergism between remote sensing and underwater gliders is expected to result in a better characterization of the marine environment than using these observational sources individually. This study investigates a methodology to estimate the three dimensional distribution of geostrophic currents resulting from merging satellite altimetry and in situ samples gathered by a fleet of Slocum gliders. Specifically, the approach computes the volumetric or three dimensional distribution of absolute dynamic height (ADH) that minimizes the total energy of the system while being close to in situ observations and matching the absolute dynamic topography (ADT) observed from satellite at the sea surface. A three dimensional finite element technique is employed to solve the minimization problem. The methodology is validated making use of the dataset collected during the field experiment called Rapid Environmental Picture-2010 (REP-10) carried out by the NATO Undersea Research Center-NURC during August 2010. A marine region off-shore La Spezia (northwest coast of Italy) was sampled by a fleet of three coastal Slocum gliders. Results indicate that the geostrophic current field estimated from gliders and altimetry significantly improves the estimates obtained using only the data gathered by the glider fleet.
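    The actual method solves a 3-D finite-element minimization; a toy 1-D analogue conveys the structure of the cost function. Everything here (the weights, the smoothness term standing in for the energy minimization, the function name) is my own illustrative choice, not the authors' formulation:

    ```python
    import numpy as np

    def blend_profiles(glider_adh, satellite_adt,
                       w_glider=1.0, w_surface=4.0, w_smooth=10.0):
        """Toy 1-D merge of a glider-derived ADH column (surface at
        index 0) with an altimetric surface ADT value: least-squares
        fit of a column x that (a) stays close to the glider profile,
        (b) matches the satellite ADT at the surface, and (c) varies
        smoothly between adjacent levels."""
        n = len(glider_adh)
        rows, rhs = [], []
        for i, obs in enumerate(glider_adh):          # (a) in situ
            r = np.zeros(n); r[i] = w_glider
            rows.append(r); rhs.append(w_glider * obs)
        r = np.zeros(n); r[0] = w_surface             # (b) surface ADT
        rows.append(r); rhs.append(w_surface * satellite_adt)
        for i in range(n - 1):                        # (c) smoothness
            r = np.zeros(n); r[i] = w_smooth; r[i + 1] = -w_smooth
            rows.append(r); rhs.append(0.0)
        x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
        return x
    ```

    Raising w_surface pulls the surface level toward the altimetric ADT, and raising w_smooth propagates that surface information downward through the column, loosely mirroring the role the energy term plays in the real scheme.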

  3. Effects of Jet-Milled Defatted Soy Flour on the Physicochemical and Sensorial Properties of Hamburger Patties

    PubMed Central

    2017-01-01

    We investigated the physicochemical and sensorial properties of hamburger patties made with three different defatted soybean flour (DSF) preparations which differed in particle size. Coarse (Dv50=259.3±0.6 µm), fine (Dv50=91.5±0.5 µm), and superfine (Dv50=3.7±0.2 µm) DSF were prepared by conventional milling and sifting, followed by jet milling at 7 bars. Hamburger patties containing 5% of each DSF were prepared for a property analysis. The hamburger patties made with 5% superfine DSF showed the lowest cooking loss among the treatment groups (p<0.05). The patties with superfine DSF also retained the texture profile values of the control patties in terms of hardness, gumminess, springiness, and chewiness, while the addition of coarse and fine DSF increased the hardness and chewiness significantly (p<0.05). The sensorial results of quantitative descriptive analysis (QDA) indicate that the patties containing superfine DSF were softer and tenderer than the controls (p<0.05). Although the overall acceptability of the patties made with coarse and fine DSF was poor, the overall acceptability of the superfine DSF patty was the same as that of the control patty. These results suggest that superfine DSF is an excellent food material that can supply dietary fiber, while maintaining the physical characteristics and texture of hamburger patty. PMID:29725205

  4. Effects of Jet-Milled Defatted Soy Flour on the Physicochemical and Sensorial Properties of Hamburger Patties.

    PubMed

    Cho, Hyun-Woo; Jung, Young-Min; Auh, Joong-Hyuck; Lee, Dong-Un

    2017-01-01

    We investigated the physicochemical and sensorial properties of hamburger patties made with three different defatted soybean flour (DSF) preparations which differed in particle size. Coarse (Dv50=259.3±0.6 µm), fine (Dv50=91.5±0.5 µm), and superfine (Dv50=3.7±0.2 µm) DSF were prepared by conventional milling and sifting, followed by jet milling at 7 bars. Hamburger patties containing 5% of each DSF were prepared for a property analysis. The hamburger patties made with 5% superfine DSF showed the lowest cooking loss among the treatment groups (p<0.05). The patties with superfine DSF also retained the texture profile values of the control patties in terms of hardness, gumminess, springiness, and chewiness, while the addition of coarse and fine DSF increased the hardness and chewiness significantly (p<0.05). The sensorial results of quantitative descriptive analysis (QDA) indicate that the patties containing superfine DSF were softer and tenderer than the controls (p<0.05). Although the overall acceptability of the patties made with coarse and fine DSF was poor, the overall acceptability of the superfine DSF patty was the same as that of the control patty. These results suggest that superfine DSF is an excellent food material that can supply dietary fiber, while maintaining the physical characteristics and texture of hamburger patty.

  5. Modelling the emissions from ships in ports and their impact on air quality in the metropolitan area of Hamburg

    NASA Astrophysics Data System (ADS)

    Ramacher, Martin; Karl, Matthias; Aulinger, Armin; Bieser, Johannes; Matthias, Volker; Quante, Markus

    2016-04-01

    Exhaust emissions from shipping contribute significantly to the anthropogenic burden of air pollutants such as nitrogen oxides (NOx) and particulate matter (PM). Ships emit not only when sailing on open sea, but also when approaching harbors, during port maneuvers and at berth to produce electricity and heat for the ship's operations. This affects the population of harbor cities because long-term exposure to PM and NOx has significant effects on human health. The European Union has therefore set air quality standards for air pollutants. Many port cities have problems meeting these standards. The port of Hamburg, with around 10,000 ship calls per year, is Germany's largest seaport and Europe's second largest container port. Air quality standard reporting in Hamburg has revealed problems in meeting limits for NO2 and PM10. The amount and contribution of port related ship emissions (38% for NOx and 17% for PM10) to the overall emissions in the metropolitan area in 2005 [BSU Hamburg (2012): Luftreinhalteplan für Hamburg. 1. Fortschreibung 2012] has been modelled with a bottom-up approach by using statistical data of ship activities in the harbor, technical vessel information and specific emission algorithms [GAUSS (2008): Quantifizierung von gasförmigen Emissionen durch Maschinenanlagen der Seeschiffart an der deutschen Küste]. However, knowledge about the spatial distribution of the harbor ship emissions over the city area is crucial when it comes to air quality standards and policy decisions to protect human health. Hence, this model study examines the spatial distribution of harbor ship emissions (NOx, PM10) and their deposition in the Hamburg metropolitan area. The transport and chemical transformation of atmospheric pollutants is calculated with the well-established chemistry transport model TAPM (The Air Pollution Model). TAPM is a three-dimensional coupled prognostic meteorological and air pollution model with a condensed chemistry scheme including

  6. Large-scale neuromorphic computing systems

    NASA Astrophysics Data System (ADS)

    Furber, Steve

    2016-10-01

    Neuromorphic computing covers a diverse range of approaches to information processing all of which demonstrate some degree of neurobiological inspiration that differentiates them from mainstream conventional computing systems. The philosophy behind neuromorphic computing has its origins in the seminal work carried out by Carver Mead at Caltech in the late 1980s. This early work influenced others to carry developments forward, and advances in VLSI technology supported steady growth in the scale and capability of neuromorphic devices. Recently, a number of large-scale neuromorphic projects have emerged, taking the approach to unprecedented scales and capabilities. These large-scale projects are associated with major new funding initiatives for brain-related research, creating a sense that the time and circumstances are right for progress in our understanding of information processing in the brain. In this review we present a brief history of neuromorphic engineering then focus on some of the principal current large-scale projects, their main features, how their approaches are complementary and distinct, their advantages and drawbacks, and highlight the sorts of capabilities that each can deliver to neural modellers.

  7. Rye and Wheat Bran Extracts Isolated with Pressurized Solvents Increase Oxidative Stability and Antioxidant Potential of Beef Meat Hamburgers.

    PubMed

    Šulniūtė, Vaida; Jaime, Isabel; Rovira, Jordi; Venskutonis, Petras Rimantas

    2016-02-01

    Rye and wheat bran extracts containing phenolic compounds and demonstrating high DPPH• (2,2-diphenyl-1-picrylhydrazyl), ABTS(•+) (2,2'-azino-bis(3-ethylbenzthiazoline-6-sulphonic acid) scavenging and oxygen radical absorbance capacities (ORAC) were tested in beef hamburgers as possible functional ingredients. Bran extracts significantly increased the indicators of antioxidant potential of meat products and their global antioxidant response (GAR) during physiological in vitro digestion. The extracts also inhibited the formation of oxidation products, hexanal and malondialdehyde, of hamburgers during their storage; however, they did not have significant effect on the growth of microorganisms. Hamburgers with 0.8% wheat bran extract demonstrated the highest antioxidant potential. Some effects of bran extracts on other quality characteristics such as pH, color, formation of metmyoglobin were also observed, however, these effects did not have negative influence on the overall sensory evaluation score of hamburgers. Consequently, the use of bran extracts in meat products may be considered as promising means of increasing oxidative product stability and enriching with functional ingredients which might possess health benefits. © 2016 Institute of Food Technologists®

  8. Potential effects of the next 100 billion hamburgers sold by McDonald's.

    PubMed

    Spencer, Elsa H; Frank, Erica; McIntosh, Nichole F

    2005-05-01

    McDonald's has sold >100 billion beef-based hamburgers worldwide with a potentially considerable health impact. This paper explores whether there would be any advantages if the next 100 billion burgers were instead plant-based burgers. Nutrient composition of the beef hamburger patty and the McVeggie burger patty were obtained from the McDonald's website; sales data were obtained from the McDonald's customer service. Consuming 100 billion McDonald's beef burgers versus the same company's McVeggie burgers would provide, approximately, on average, an additional 550 million pounds of saturated fat and 1.2 billion total pounds of fat, as well as 1 billion fewer pounds of fiber, 660 million fewer pounds of protein, and no difference in calories. These data suggest that the McDonald's new McVeggie burger represents a less harmful fast-food choice than the beef burger.
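    The aggregate figures translate into modest but non-trivial per-burger differences; a quick arithmetic check (the pound-to-gram conversion is standard, the rounding is mine):

    ```python
    LB_TO_G = 453.592  # grams per pound
    BURGERS = 100e9    # the 100 billion burgers considered

    def per_burger_grams(total_pounds):
        """Convert an aggregate nutrient difference over all burgers
        into a per-burger difference in grams."""
        return total_pounds * LB_TO_G / BURGERS

    # aggregate beef-vs-veggie differences quoted in the abstract (lb)
    per_burger = {name: per_burger_grams(lb) for name, lb in [
        ("saturated fat", 550e6), ("total fat", 1.2e9),
        ("fiber", 1e9), ("protein", 660e6)]}
    ```

    Per burger this is roughly 2.5 g more saturated fat and 5.4 g more total fat for the beef patty, versus about 4.5 g less fiber and 3 g less protein.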

  9. Learning to Fly: Family-Oriented Literacy Education in Schools. Celebrating the Tenth Anniversary of Hamburg's Family Literacy Project 2004-2014

    ERIC Educational Resources Information Center

    Rabkin, Gabriele, Ed.; Roche, Stephen, Ed.

    2014-01-01

    This book was published to mark the tenth anniversary of Hamburg's award-winning Family Literacy project (FLY). It includes contributions from key stakeholders--academics, teachers, parents and children--participating in the conceptualization and implementation of FLY in the city of Hamburg. FLY mainly targets people from socially disadvantaged…

  10. Evaluation of Georgia asphalt mixture properties using a Hamburg wheel-tracking device.

    DOT National Transportation Integrated Search

    2017-05-01

    This study used a Hamburg Wheel-Tracking Device (HWTD) to evaluate the resistance of Georgia asphalt mixtures to rutting and stripping. It aimed to develop an HWTD test procedure and criteria aligned with GDOT's asphalt materials and mixture design...

  11. Multiple zonal jets and convective heat transport barriers in a quasi-geostrophic model of planetary cores

    NASA Astrophysics Data System (ADS)

    Guervilly, C.; Cardin, P.

    2017-10-01

    We study rapidly rotating Boussinesq convection driven by internal heating in a full sphere. We use a numerical model based on the quasi-geostrophic approximation for the velocity field, whereas the temperature field is 3-D. This approximation allows us to perform simulations for Ekman numbers down to 10^-8, Prandtl numbers relevant for liquid metals (~10^-1) and Reynolds numbers up to 3 × 10^4. Persistent zonal flows composed of multiple jets form as a result of the mixing of potential vorticity. For the largest Rayleigh numbers computed, the zonal velocity is larger than the convective velocity despite the presence of boundary friction. The convective structures and the zonal jets widen when the thermal forcing increases. Prograde and retrograde zonal jets are dynamically different: in the prograde jets (which correspond to weak potential vorticity gradients) the convection transports heat efficiently and the mean temperature tends to be homogenized; by contrast, in the cores of the retrograde jets (which correspond to steep gradients of potential vorticity) the dynamics is dominated by the propagation of Rossby waves, resulting in the formation of steep mean temperature gradients and the dominance of conduction in the heat transfer process. Consequently, in quasi-geostrophic systems, the width of the retrograde zonal jets controls the efficiency of the heat transfer.

  12. Large-Scale Outflows in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E. J. M.; Baum, S. A.

    1995-12-01

    Highly collimated outflows extend out to Mpc scales in many radio-loud active galaxies. In Seyfert galaxies, which are radio-quiet, the outflows extend out to kpc scales and do not appear to be as highly collimated. In order to study the nature of large-scale (≳1 kpc) outflows in Seyferts, we have conducted optical, radio and X-ray surveys of a distance-limited sample of 22 edge-on Seyfert galaxies. Results of the optical emission-line imaging and spectroscopic survey imply that large-scale outflows are present in ≳1/4 of all Seyferts. The radio (VLA) and X-ray (ROSAT) surveys show that large-scale radio and X-ray emission is present at about the same frequency. Kinetic luminosities of the outflows in Seyferts are comparable to those in starburst-driven superwinds. Large-scale radio sources in Seyferts appear diffuse, but do not resemble radio halos found in some edge-on starburst galaxies (e.g. M82). We discuss the feasibility of the outflows being powered by the active nucleus (e.g. a jet) or a circumnuclear starburst.

  13. Spectator Democracy: An Intersectional Analysis of Education Reform in Hamburg, Germany

    ERIC Educational Resources Information Center

    Bale, Jeff

    2016-01-01

    This article uses the theoretical framework of intersectionality to analyze a partially failed school reform measure in Hamburg, Germany and the political conflict over it between 2008 and 2010. The analysis focuses on "the extent to which" and the "mechanisms by which" the interests of marginalized members of the proreform…

  14. Synchronization of coupled large-scale Boolean networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Fangfei, E-mail: li-fangfei@163.com

    2014-03-15

    This paper investigates the complete synchronization and partial synchronization of two large-scale Boolean networks. First, the aggregation algorithm towards large-scale Boolean network is reviewed. Second, the aggregation algorithm is applied to study the complete synchronization and partial synchronization of large-scale Boolean networks. Finally, an illustrative example is presented to show the efficiency of the proposed results.
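    The aggregation machinery in the paper is involved, but the property being verified, complete synchronization, is easy to state on a small example. A toy drive-response pair of 2-node Boolean networks (entirely my own construction, not taken from the paper):

    ```python
    from itertools import product

    def step_drive(x):
        """Drive network: x0' = x1, x1' = NOT x0."""
        x0, x1 = x
        return (x1, 1 - x0)

    def step_response(y, x):
        """Response network, unidirectionally coupled to the drive:
        its first node copies the drive's second node; its second
        node runs the same local rule on the response's own state."""
        y0, _ = y
        return (x[1], 1 - y0)

    def synchronizes(steps=2):
        """Complete synchronization: from every joint initial state
        the two networks coincide after `steps` synchronous updates."""
        states = list(product((0, 1), repeat=2))
        for x in states:
            for y in states:
                a, b = x, y
                for _ in range(steps):
                    a, b = step_drive(a), step_response(b, a)
                if a != b:
                    return False
        return True
    ```

    Here every one of the 16 joint initial states reaches x = y within two synchronous updates; partial synchronization would require agreement only on a chosen subset of nodes.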

  15. The Comparative Effect of Carrot and Lemon Fiber as a Fat Replacer on Physico-chemical, Textural, and Organoleptic Quality of Low-fat Beef Hamburger.

    PubMed

    Soncu, Eda Demirok; Kolsarıcı, Nuray; Çiçek, Neslihan; Öztürk, Görsen Salman; Akoğlu, Ilker T; Arıcı, Yeliz Kaşko

    2015-01-01

    This study was designed to determine the usability of lemon fiber (LF-2%, 4%, 6%) and carrot fiber (CF-2%, 4%, 6%) to produce low-fat beef hamburgers. To that end, a certain amount of fat was replaced with each fiber. The proximate composition, pH value, cholesterol content, cooking characteristics, color, texture profile, and sensory properties of low-fat beef hamburgers were investigated. LF increased moisture content and cooking yield due to its better water binding properties, while CF caused higher fat and cholesterol contents owing to its higher fat absorption capacity (p<0.05). LF resulted in a lighter, redder, and more yellow color (p<0.05). Hardness, gumminess, springiness, and chewiness parameters decreased when the usage level of both fibers increased (p<0.05). However, more tender, gummy, springy, and smoother hamburgers were produced by the addition of CF in comparison with LF (p<0.05). Moreover, hamburgers including CF were rated with higher sensory scores (p<0.05). In conclusion, LF demonstrated better technological results in terms of cooking yield, shrinkage, moisture retention, and fat retention. However it is suggested that CF produces better low-fat hamburgers since up to 2% CF presented sensory and textural properties similar to those of regular hamburgers.

  16. The Comparative Effect of Carrot and Lemon Fiber as a Fat Replacer on Physico-chemical, Textural, and Organoleptic Quality of Low-fat Beef Hamburger

    PubMed Central

    Soncu, Eda Demirok; Kolsarıcı, Nuray; Çiçek, Neslihan; Öztürk, Görsen Salman; Akoğlu, ilker T.; Arıcı, Yeliz Kaşko

    2015-01-01

    This study was designed to determine the usability of lemon fiber (LF-2%, 4%, 6%) and carrot fiber (CF-2%, 4%, 6%) to produce low-fat beef hamburgers. To that end, a certain amount of fat was replaced with each fiber. The proximate composition, pH value, cholesterol content, cooking characteristics, color, texture profile, and sensory properties of low-fat beef hamburgers were investigated. LF increased moisture content and cooking yield due to its better water binding properties, while CF caused higher fat and cholesterol contents owing to its higher fat absorption capacity (p<0.05). LF resulted in a lighter, redder, and more yellow color (p<0.05). Hardness, gumminess, springiness, and chewiness parameters decreased when the usage level of both fibers increased (p<0.05). However, more tender, gummy, springy, and smoother hamburgers were produced by the addition of CF in comparison with LF (p<0.05). Moreover, hamburgers including CF were rated with higher sensory scores (p<0.05). In conclusion, LF demonstrated better technological results in terms of cooking yield, shrinkage, moisture retention, and fat retention. However, it is suggested that CF produces better low-fat hamburgers since up to 2% CF presented sensory and textural properties similar to those of regular hamburgers. PMID:26761851

  17. Dissecting the large-scale galactic conformity

    NASA Astrophysics Data System (ADS)

    Seo, Seongu

    2018-01-01

    Galactic conformity is the observed phenomenon whereby galaxies located in the same region have similar properties such as star formation rate, color, gas fraction, and so on. The conformity was first observed among galaxies within the same halos ("one-halo conformity"). The one-halo conformity can be readily explained by mutual interactions among galaxies within a halo. Recent observations, however, further revealed a puzzling connection among galaxies with no direct interaction. In particular, galaxies located within a sphere of ~5 Mpc radius tend to show similarities, even though the galaxies do not share common halos with each other ("two-halo conformity" or "large-scale conformity"). Using a cosmological hydrodynamic simulation, Illustris, we investigate the physical origin of the two-halo conformity and put forward two scenarios. First, back-splash galaxies are likely responsible for the large-scale conformity. They have evolved into red galaxies due to ram-pressure stripping in a given galaxy cluster and happen to reside now within a ~5 Mpc sphere. Second, galaxies in the strong tidal field induced by large-scale structure also seem to give rise to the large-scale conformity. The strong tides suppress star formation in the galaxies. We discuss the importance of the large-scale conformity in the context of galaxy evolution.

  18. The Large-scale Distribution of Galaxies

    NASA Astrophysics Data System (ADS)

    Flin, Piotr

    A review of the large-scale structure of the Universe is given. A connection is made with the titanic work by Johannes Kepler in many areas of astronomy and cosmology. Special attention is given to the spatial distribution of galaxies, voids, and walls (the cellular structure of the Universe). Finally, the author concludes that the large-scale structure of the Universe can be observed on a much greater scale than was thought twenty years ago.

  19. Generalization of the quasi-geostrophic Eliassen-Palm flux to include eddy forcing of condensation heating

    NASA Technical Reports Server (NTRS)

    Stone, P. H.; Salustri, G.

    1984-01-01

    A modified Eulerian form of the Eliassen-Palm flux, which includes the effect of eddy forcing on condensation heating, is defined. With the two-dimensional vector flux in the meridional plane, which is a function of the zonal-mean eddy fluxes, replaced by the modified flux, both the Eliassen-Palm theorem and a modified but more general form of the nonacceleration theorem for quasi-geostrophic motion still hold. Calculations of the divergence of the modified flux and of the eddy forcing of the moisture field are presented.
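    The standard quasi-geostrophic Eliassen-Palm diagnostics that such a modification builds on can be sketched numerically. The snippet below evaluates the dry EP flux components and their divergence from zonal-mean eddy fluxes on a (latitude, pressure) grid; the eddy-flux fields, grid, and static stability are synthetic stand-ins, and the condensation-heating term of the modified flux is not included.

```python
import numpy as np

# Dry quasi-geostrophic EP flux divergence from zonal-mean eddy fluxes on
# a (y, p) grid.  All fields here are synthetic stand-ins; the paper's
# condensation-heating modification is omitted.
f0 = 1e-4                            # Coriolis parameter, s^-1
ny, nlev = 16, 10
y = np.linspace(0.0, 5e6, ny)        # meridional coordinate, m
p = np.linspace(1e5, 1e4, nlev)      # pressure levels, Pa
Y, P = np.meshgrid(y, p, indexing="ij")

uv = 10.0 * np.sin(np.pi * Y / y[-1])        # [u'v'], m^2 s^-2 (synthetic)
vtheta = 5.0 * np.cos(np.pi * Y / y[-1])     # [v'theta'], K m s^-1 (synthetic)
dtheta_dp = np.full_like(P, -5e-4)           # mean static stability, K Pa^-1

F_y = -uv                                    # meridional EP flux component
F_p = f0 * vtheta / dtheta_dp                # vertical (pressure) component

# Divergence of the EP flux: the eddy forcing of the zonal-mean flow
div_F = np.gradient(F_y, y, axis=0) + np.gradient(F_p, p, axis=1)
```

    By the nonacceleration theorem, regions where this divergence vanishes exert no net eddy forcing on the zonal-mean flow.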

  20. The NASA/MSFC global reference atmospheric model: MOD 3 (with spherical harmonic wind model)

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Fletcher, G. R.; Gramling, F. E.; Pace, W. B.

    1980-01-01

    Improvements to the global reference atmospheric model are described. The basic model includes monthly mean values of pressure, density, temperature, and geostrophic winds, as well as quasi-biennial and small and large scale random perturbations. A spherical harmonic wind model for the 25 to 90 km height range is included. Below 25 km and above 90 km, the GRAM program uses the geostrophic wind equations and pressure data to compute the mean wind. At the altitudes where the geostrophic wind relations are used, an interpolation scheme is employed for estimating winds at low latitudes, where the geostrophic wind relations begin to break down. Several sample wind profiles are given, as computed by the spherical harmonic model. User and programmer manuals are presented.
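    The geostrophic wind relations that GRAM applies outside the 25-90 km range follow from balancing the Coriolis force against the horizontal pressure gradient. A minimal sketch with a toy pressure field and illustrative constants (not GRAM's data or its interpolation scheme):

```python
import numpy as np

# Geostrophic wind from a pressure field on a Cartesian grid (hypothetical
# values; GRAM's actual data handling is not reproduced here):
#   u_g = -(1/(rho*f)) dp/dy,   v_g = (1/(rho*f)) dp/dx
rho = 1.225          # air density, kg m^-3 (sea-level standard)
omega = 7.292e-5     # Earth's rotation rate, s^-1
lat = 45.0           # latitude, degrees
f = 2 * omega * np.sin(np.radians(lat))    # Coriolis parameter

# Toy pressure field with a uniform gradient of 1 Pa per km in y
nx, ny, dx, dy = 8, 8, 1e3, 1e3            # 1 km grid spacing
x = np.arange(nx) * dx
y = np.arange(ny) * dy
X, Y = np.meshgrid(x, y, indexing="ij")
p = 1e5 + 1e-3 * Y                         # pressure, Pa

dpdx = np.gradient(p, dx, axis=0)
dpdy = np.gradient(p, dy, axis=1)
u_g = -dpdy / (rho * f)                    # zonal geostrophic wind, m s^-1
v_g = dpdx / (rho * f)                     # meridional geostrophic wind, m s^-1
```

    Since f vanishes at the equator, u_g and v_g blow up there, which is why an interpolation scheme is needed at low latitudes.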

  1. Large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Doolin, B. F.

    1975-01-01

    Classes of large scale dynamic systems were discussed in the context of modern control theory. Specific examples were drawn from the technical fields of aeronautics, water resources, and electric power.

  2. Combined quantity management and biological treatment of sludge liquor at Hamburg's wastewater treatment plants--first experience in operation with the Store and Treat process.

    PubMed

    Laurich, F

    2004-01-01

    Store and Treat (SAT) is a new concept for the management of ammonium-rich process waste waters at wastewater treatment plants. It combines the advantages of quantity management and separate biological treatment, whereby both operations are carried out in the same tank. The first full-scale application of this method has now been realized in Hamburg. First operating experience shows that the process can help to increase nitrogen removal and reduce energy consumption.

  3. Transition from large-scale to small-scale dynamo.

    PubMed

    Ponty, Y; Plunian, F

    2011-04-15

    The dynamo equations are solved numerically with a helical forcing corresponding to the Roberts flow. In the fully turbulent regime the flow behaves as a Roberts flow on long time scales, plus turbulent fluctuations at short time scales. The dynamo onset is controlled by the long time scales of the flow, in agreement with the former Karlsruhe experimental results. The dynamo mechanism is governed by a generalized α effect, which includes both the usual α effect and turbulent diffusion, plus all higher order effects. Beyond the onset we find that this generalized α effect scales as O(Rm^-1), suggesting the takeover of small-scale dynamo action. This is confirmed by simulations in which dynamo occurs even if the large-scale field is artificially suppressed.

  4. Large-Scale Hybrid Motor Testing. Chapter 10

    NASA Technical Reports Server (NTRS)

    Story, George

    2006-01-01

    Hybrid rocket motors can be successfully demonstrated at a small scale virtually anywhere. There have been many suitcase-sized portable test stands assembled for demonstration of hybrids, which show audiences the safety of hybrid rockets. These small show motors and small laboratory-scale motors can give comparative burn rate data for development of different fuel/oxidizer combinations. However, the questions always asked when hybrids are mentioned for large-scale applications are: how do they scale, and has it been shown in a large motor? To answer those questions, large-scale motor testing is required to verify the hybrid motor at its true size. The necessity of conducting large-scale hybrid rocket motor tests to validate the burn rate from the small motors to application size has been documented in several places. Comparison of small-scale hybrid data to that of larger-scale data indicates that the fuel burn rate goes down with increasing port size, even with the same oxidizer flux. This trend holds for conventional hybrid motors with forward oxidizer injection and HTPB-based fuels. While the reason this occurs would make a great paper, study, or thesis, it is not thoroughly understood at this time. Potential causes include the fact that, since hybrid combustion is boundary-layer driven, larger port sizes reduce the interaction (radiation, mixing and heat transfer) from the core region of the port. This chapter focuses on some of the large, prototype-sized testing of hybrid motors. The largest motors tested have been AMROC's 250K-lbf thrust motor at Edwards Air Force Base and the Hybrid Propulsion Demonstration Program's 250K-lbf thrust motor at Stennis Space Center. Numerous smaller tests were performed to support the burn rate, stability and scaling concepts that went into the development of those large motors.

  5. Why small-scale cannabis growers stay small: five mechanisms that prevent small-scale growers from going large scale.

    PubMed

    Hammersvik, Eirik; Sandberg, Sveinung; Pedersen, Willy

    2012-11-01

    Over the past 15-20 years, domestic cultivation of cannabis has been established in a number of European countries. New techniques have made such cultivation easier; however, the bulk of growers remain small-scale. In this study, we explore the factors that prevent small-scale growers from increasing their production. The study is based on 1 year of ethnographic fieldwork and qualitative interviews conducted with 45 Norwegian cannabis growers, 10 of whom were growing on a large-scale and 35 on a small-scale. The study identifies five mechanisms that prevent small-scale indoor growers from going large-scale. First, large-scale operations involve a number of people, large sums of money, a high work-load and a high risk of detection, and thus demand a higher level of organizational skills than for small growing operations. Second, financial assets are needed to start a large 'grow-site'. Housing rent, electricity, equipment and nutrients are expensive. Third, to be able to sell large quantities of cannabis, growers need access to an illegal distribution network and knowledge of how to act according to black market norms and structures. Fourth, large-scale operations require advanced horticultural skills to maximize yield and quality, which demands greater skills and knowledge than does small-scale cultivation. Fifth, small-scale growers are often embedded in the 'cannabis culture', which emphasizes anti-commercialism, anti-violence and ecological and community values. Hence, starting up large-scale production will imply having to renegotiate or abandon these values. Going from small- to large-scale cannabis production is a demanding task: ideologically, technically, economically and personally. The many obstacles that small-scale growers face and the lack of interest and motivation for going large-scale suggest that the risk of a 'slippery slope' from small-scale to large-scale growing is limited. Possible political implications of the findings are discussed.

  6. Generation of Large-Scale Magnetic Fields by Small-Scale Dynamo in Shear Flows.

    PubMed

    Squire, J; Bhattacharjee, A

    2015-10-23

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  7. Large Scale Traffic Simulations

    DOT National Transportation Integrated Search

    1997-01-01

    Large scale microscopic (i.e. vehicle-based) traffic simulations pose high demands on computation speed in at least two application areas: (i) real-time traffic forecasting, and (ii) long-term planning applications (where repeated "looping" between t...

  8. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2015-10-20

    We propose a new mechanism for a turbulent mean-field dynamo in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of a large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the "shear-current" effect. Furthermore, given the inevitable existence of nonhelical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help explain the generation of large-scale magnetic fields across a wide range of astrophysical objects.

  9. Toward Increasing Fairness in Score Scale Calibrations Employed in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; von Davier, Matthias

    2014-01-01

    In this article, we investigate the creation of comparable score scales across countries in international assessments. We examine potential improvements to current score scale calibration procedures used in international large-scale assessments. Our approach seeks to improve fairness in scoring international large-scale assessments, which often…

  10. Very Large Scale Integration (VLSI).

    ERIC Educational Resources Information Center

    Yeaman, Andrew R. J.

    Very Large Scale Integration (VLSI), the state-of-the-art production techniques for computer chips, promises such powerful, inexpensive computing that, in the future, people will be able to communicate with computer devices in natural language or even speech. However, before full-scale VLSI implementation can occur, certain salient factors must be…

  11. Survey on large scale system control methods

    NASA Technical Reports Server (NTRS)

    Mercadal, Mathieu

    1987-01-01

    The problems inherent in large scale systems such as power networks, communication networks and economic or ecological systems were studied. The increase in size and flexibility of future spacecraft has put those dynamical systems into the category of large scale systems, and tools specific to the class of large systems are being sought to design control systems that can guarantee more stability and better performance. Among several survey papers, reference was found to a thorough investigation of decentralized control methods. Especially helpful was its classification of the different existing approaches to dealing with large scale systems. A very similar classification is used here, even though the papers surveyed are somewhat different from the ones reviewed in other papers. Special attention is brought to the applicability of the existing methods to controlling large mechanical systems like large space structures. Some recent developments are added to this survey.

  12. Large-Scale 3D Printing: The Way Forward

    NASA Astrophysics Data System (ADS)

    Jassmi, Hamad Al; Najjar, Fady Al; Ismail Mourad, Abdel-Hamid

    2018-03-01

    Research on small-scale 3D printing has rapidly evolved, where numerous industrial products have been tested and successfully applied. Nonetheless, research on large-scale 3D printing, directed to large-scale applications such as construction and automotive manufacturing, still demands a great deal of effort. Large-scale 3D printing is considered an interdisciplinary topic and requires establishing a blended knowledge base from numerous research fields including structural engineering, materials science, mechatronics, software engineering, artificial intelligence and architectural engineering. This review article summarizes key topics of relevance to new research trends on large-scale 3D printing, particularly pertaining to (1) technological solutions of additive construction (i.e. the 3D printers themselves), (2) materials science challenges, and (3) new design opportunities.

  13. Novel method to construct large-scale design space in lubrication process utilizing Bayesian estimation based on a small-scale design-of-experiment and small sets of large-scale manufacturing data.

    PubMed

    Maeda, Jin; Suzuki, Tatsuya; Takayama, Kozo

    2012-12-01

    A large-scale design space was constructed using a Bayesian estimation method with a small-scale design of experiments (DoE) and small sets of large-scale manufacturing data without enforcing a large-scale DoE. The small-scale DoE was conducted using various Froude numbers (X1) and blending times (X2) in the lubricant blending process for theophylline tablets. The response surfaces, design space, and their reliability of the compression rate of the powder mixture (Y1), tablet hardness (Y2), and dissolution rate (Y3) on a small scale were calculated using multivariate spline interpolation, a bootstrap resampling technique, and self-organizing map clustering. The constant Froude number was applied as a scale-up rule. Three experiments under an optimal condition and two experiments under other conditions were performed on a large scale. The response surfaces on the small scale were corrected to those on a large scale by Bayesian estimation using the large-scale results. Large-scale experiments under three additional sets of conditions showed that the corrected design space was more reliable than that on the small scale, even if there was some discrepancy in the pharmaceutical quality between the manufacturing scales. This approach is useful for setting up a design space in pharmaceutical development when a DoE cannot be performed at a commercial large manufacturing scale.
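    The core idea of correcting a small-scale prediction with a handful of large-scale runs can be illustrated with the simplest conjugate case: a normal prior (standing in for the small-scale response surface at one operating point) updated by normally distributed large-scale observations. The numbers below are invented for illustration, not the theophylline data, and the paper's spline/bootstrap machinery is not reproduced.

```python
import numpy as np

# Conjugate normal-normal Bayesian update: the small-scale prediction acts
# as the prior for a large-scale response (here, tablet hardness), and a
# few large-scale runs update it.  All numbers are illustrative assumptions.
prior_mean, prior_var = 80.0, 25.0     # prior from the small-scale DoE
obs = np.array([74.0, 76.0, 75.0])     # hypothetical large-scale runs
obs_var = 4.0                          # assumed observation variance

n = len(obs)
post_var = 1.0 / (1.0 / prior_var + n / obs_var)              # posterior variance
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)
# The posterior mean is pulled from the small-scale prediction (80) toward
# the large-scale data, and the posterior variance shrinks below the prior's.
```

    The same shrink-toward-the-data behavior is what allows a design space built from a small-scale DoE to be corrected with only a few large-scale batches.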

  14. Antibacterial Effect of Garlic Aqueous Extract on Staphylococcus aureus in Hamburger

    PubMed Central

    Mozaffari Nejad, Amir Sasan; Shabani, Shahrokh; Bayat, Mansour; Hosseini, Seyed Ebrahim

    2014-01-01

    Background: Using garlic is widespread in Iran and other countries as a medicine and a natural spice. Garlic is a potential inhibitor of food pathogens. Foods contaminated with pathogens pose a potential danger to the consumer’s health. The use of garlic can increase the shelf life and decrease the possibility of food poisoning and spoilage in processed foods. Objectives: The aim of this study was to investigate the antibacterial effect of garlic aqueous extract on the growth of Staphylococcus aureus bacteria. Materials and Methods: In this study, the garlic aqueous extract was prepared under sterile conditions and was added in 1, 2, and 3 mL amounts to 100 g hamburger samples. A group of samples was prepared to be used as the treatment sample, while groups were stored at 4°C and -18°C. The samples were kept in the refrigerator for one and two weeks or frozen for one, two, and three months, and then subjected to microbial tests. Results: Statistical evaluation of the first- and second-week samples indicated a significant growth decrease with all the 1, 2, and 3-mL extracts. In the one-, two- and three-month samples, the growth of S. aureus was significantly decreased by the 2 and 3-mL extracts. The 1-mL extract was effective in decreasing the growth, and a significant difference was observed in treatments with the 2 and 3-mL extracts. However, there was no significant difference between the two- and three-month samples, though they were significantly different from the one-month samples. After evaluation, treatment with the 2-mL extract was found to be the best one. Conclusions: Garlic aqueous extract has antibacterial properties against S. aureus present in hamburger. Moreover, garlic aqueous extract can be used not only as a flavor but also as a natural additive for hamburger. In addition, garlic has antibacterial properties against other Gram-positive and Gram-negative bacteria, which must be investigated in further studies. PMID:25774277

  15. South Atlantic Ocean circulation: Simulation experiments with a quasi-geostrophic model and assimilation of TOPEX/POSEIDON and ERS 1 altimeter data

    NASA Astrophysics Data System (ADS)

    Florenchie, P.; Verron, J.

    1998-10-01

    Simulation experiments of South Atlantic Ocean circulations are conducted with a 1/6°, four-layered, quasi-geostrophic model. By means of a simple nudging data assimilation procedure along satellite tracks, TOPEX/POSEIDON and ERS 1 altimeter measurements are introduced into the model to control the simulation of the basin-scale circulation for the period from October 1992 to September 1994. The model circulation appears to be strongly influenced by the introduction of altimeter data, offering a consistent picture of South Atlantic Ocean circulations. Comparisons with observations show that the assimilating model successfully simulates the kinematic behavior of a large number of surface circulation components. The assimilation procedure enables us to produce schematic diagrams of South Atlantic circulation in which patterns ranging from basin-scale currents to mesoscale eddies are portrayed in a realistic way, with respect to their complexity. The major features of the South Atlantic circulation are described and analyzed, with special emphasis on the Brazil-Malvinas Confluence region, the Subtropical Gyre with the formation of frontal structures, and the Agulhas Retroflection. The Agulhas eddy-shedding process has been studied extensively. Fourteen eddies appear to be shed during the 2-year experiment. Because of their strong surface topographic signature, Agulhas eddies have been tracked continuously during the assimilation experiment as they cross the South Atlantic basin westward. Other effects of the assimilation procedure are shown, such as the intensification of the Subtropical Gyre, the appearance of a strong seasonal cycle in the Brazil Current transport, and the increase of the mean Brazil Current transport. This last result, combined with the westward orientation of the Agulhas eddies' trajectories, leads to a southward transport of mean eddy kinetic energy across 30°S.

  16. A vectorized Poisson solver over a spherical shell and its application to the quasi-geostrophic omega-equation

    NASA Technical Reports Server (NTRS)

    Mullenmeister, Paul

    1988-01-01

    The quasi-geostrophic omega-equation in flux form is developed as an example of a Poisson problem over a spherical shell. Solutions of this equation are obtained by applying a two-parameter Chebyshev solver in vector layout for CDC 200 series computers. The performance of this vectorized algorithm greatly exceeds the performance of its scalar analog. The algorithm generates solutions of the omega-equation which are compared with the omega fields calculated with the aid of the mass continuity equation.
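    As a scalar illustration of the numerical problem being solved, the sketch below treats a 2-D Poisson equation on the unit square with zero boundary values, using plain Jacobi iteration and a manufactured solution; the paper's vectorized two-parameter Chebyshev solver over a spherical shell is far more elaborate, and nothing here reproduces its spherical geometry.

```python
import numpy as np

# Minimal 2-D Poisson solve, nabla^2 w = S, by Jacobi iteration on a
# uniform grid with homogeneous Dirichlet boundaries -- a scalar stand-in
# for the paper's vectorized Chebyshev solver on a spherical shell.
n, h = 33, 1.0 / 32
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Manufactured problem: choose S so the exact solution is known
w_exact = np.sin(np.pi * X) * np.sin(np.pi * Y)
S = -2.0 * np.pi**2 * w_exact

w = np.zeros((n, n))
for _ in range(4000):                      # fixed iteration budget
    # RHS is evaluated before assignment, so this is a Jacobi sweep
    w[1:-1, 1:-1] = 0.25 * (w[2:, 1:-1] + w[:-2, 1:-1]
                            + w[1:-1, 2:] + w[1:-1, :-2]
                            - h**2 * S[1:-1, 1:-1])

err = np.max(np.abs(w - w_exact))          # dominated by O(h^2) truncation
```

    Jacobi converges slowly (hence the large iteration budget); the Chebyshev acceleration of the paper exists precisely to cut that iteration count while remaining fully vectorizable.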

  17. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  18. Comparative analysis of European wide marine ecosystem shifts: a large-scale approach for developing the basis for ecosystem-based management.

    PubMed

    Möllmann, Christian; Conversi, Alessandra; Edwards, Martin

    2011-08-23

    Abrupt and rapid ecosystem shifts (where major reorganizations of food-web and community structures occur), commonly termed regime shifts, are changes between contrasting and persisting states of ecosystem structure and function. These shifts have been increasingly reported for exploited marine ecosystems around the world from the North Pacific to the North Atlantic. Understanding the drivers and mechanisms leading to marine ecosystem shifts is crucial in developing adaptive management strategies to achieve sustainable exploitation of marine ecosystems. An international workshop on a comparative approach to analysing these marine ecosystem shifts was held at Hamburg University, Institute for Hydrobiology and Fisheries Science, Germany on 1-3 November 2010. Twenty-seven scientists from 14 countries attended the meeting, representing specialists from seven marine regions, including the Baltic Sea, the North Sea, the Barents Sea, the Black Sea, the Mediterranean Sea, the Bay of Biscay and the Scotian Shelf off the Canadian East coast. The goal of the workshop was to conduct the first large-scale comparison of marine ecosystem regime shifts across multiple regional areas, in order to support the development of ecosystem-based management strategies.

  19. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.

  20. Hydrometeorological variability on a large french catchment and its relation to large-scale circulation across temporal scales

    NASA Astrophysics Data System (ADS)

    Massei, Nicolas; Dieppois, Bastien; Fritier, Nicolas; Laignel, Benoit; Debret, Maxime; Lavers, David; Hannah, David

    2015-04-01

    In the present context of global changes, considerable efforts have been deployed by the hydrological scientific community to improve our understanding of the impacts of climate fluctuations on water resources. Both observational and modeling studies have been extensively employed to characterize hydrological changes and trends, assess the impact of climate variability or provide future scenarios of water resources. With the aim of a better understanding of hydrological changes, it is of crucial importance to determine how and to what extent trends and long-term oscillations detectable in hydrological variables are linked to global climate oscillations. In this work, we develop an approach associating large-scale/local-scale correlation, empirical statistical downscaling and wavelet multiresolution decomposition of monthly precipitation and streamflow over the Seine river watershed, and the North Atlantic sea level pressure (SLP), in order to gain additional insights into the atmospheric patterns associated with the regional hydrology. We hypothesized that: i) atmospheric patterns may change according to the different temporal wavelengths defining the variability of the signals; and ii) definition of those hydrological/circulation relationships for each temporal wavelength may improve the determination of large-scale predictors of local variations. The results showed that the large-scale/local-scale links were not necessarily constant according to time-scale (i.e. for the different frequencies characterizing the signals), resulting in changing spatial patterns across scales. This was then taken into account by developing an empirical statistical downscaling (ESD) modeling approach which integrated discrete wavelet multiresolution analysis for reconstructing local hydrometeorological processes (predictand: precipitation and streamflow on the Seine river catchment) based on a large-scale predictor (SLP over the Euro-Atlantic sector) on a monthly time-step.
This approach
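    The wavelet multiresolution step can be illustrated with the simplest (Haar) decomposition: split a monthly series into detail components at dyadic time-scales plus a smooth residual, such that the components sum back to the original series. The data below are synthetic, not the Seine records, and operational ESD work would typically use a more sophisticated wavelet.

```python
import numpy as np

# Haar multiresolution decomposition of a monthly series into per-scale
# detail components plus a final smooth, with exact additive reconstruction.
# Synthetic data; length must be divisible by 2**levels.
rng = np.random.default_rng(0)
n = 256                                        # months
t = np.arange(n)
signal = np.sin(2 * np.pi * t / 64) + 0.3 * rng.standard_normal(n)

def haar_mra(x, levels):
    """Return [D1, D2, ..., A]: details at dyadic scales + final smooth."""
    comps, approx = [], x.astype(float)
    for _ in range(levels):
        smooth = approx.reshape(-1, 2).mean(axis=1)   # low-pass: pair means
        detail = approx - np.repeat(smooth, 2)        # high-pass residual
        comps.append(detail)
        approx = smooth
    comps.append(np.repeat(approx, 2 ** levels))      # smooth, upsampled to n
    return comps

components = haar_mra(signal, levels=4)
# Upsample every component back to full length; they then sum to the signal
full = [np.repeat(c, n // len(c)) for c in components]
recon = np.sum(full, axis=0)
```

    Each detail component isolates variability at one dyadic wavelength band, which is exactly the per-wavelength view needed to relate local hydrology to large-scale SLP predictors scale by scale.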

  1. The Challenge of Large-Scale Literacy Improvement

    ERIC Educational Resources Information Center

    Levin, Ben

    2010-01-01

    This paper discusses the challenge of making large-scale improvements in literacy in schools across an entire education system. Despite growing interest and rhetoric, there are very few examples of sustained, large-scale change efforts around school-age literacy. The paper reviews 2 instances of such efforts, in England and Ontario. After…

  2. Large-scale influences in near-wall turbulence.

    PubMed

    Hutchins, Nicholas; Marusic, Ivan

    2007-03-15

    Hot-wire data acquired in a high Reynolds number facility are used to illustrate the need for adequate scale separation when considering the coherent structure in wall-bounded turbulence. It is found that a large-scale motion in the log region becomes increasingly comparable in energy to the near-wall cycle as the Reynolds number increases. Through decomposition of fluctuating velocity signals, it is shown that this large-scale motion has a distinct modulating influence on the small-scale energy (akin to amplitude modulation). Reassessment of DNS data, in light of these results, shows similar trends, with the rate and intensity of production due to the near-wall cycle subject to a modulating influence from the largest-scale motions.
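    The decomposition described above can be mimicked on a synthetic signal: low-pass filter to isolate the large-scale component, form the envelope of the small-scale residual, and correlate the two. Everything below (the signal model, cutoff wavelength, and window length) is an invented illustration of the diagnostic, not the actual hot-wire processing of the paper.

```python
import numpy as np

# Scale decomposition of a synthetic velocity signal plus a simple
# amplitude-modulation diagnostic: correlation between the large-scale
# component and the running-RMS envelope of the small scales.
rng = np.random.default_rng(1)
n = 8192
t = np.arange(n)
large = np.sin(2 * np.pi * t / 1024)                 # slow, large-scale motion
carrier = rng.standard_normal(n)                     # broadband small scales
small = (1.0 + 0.5 * large) * carrier                # amplitude-modulated
u = large + 0.2 * small                              # composite fluctuation

# Fourier low-pass / high-pass split at a cutoff wavelength of 256 samples
U = np.fft.rfft(u)
freq = np.fft.rfftfreq(n, d=1.0)
keep = freq < 1.0 / 256
u_large = np.fft.irfft(np.where(keep, U, 0), n)
u_small = u - u_large

# Envelope of the small scales via running RMS, then correlation
win = 64
env = np.sqrt(np.convolve(u_small**2, np.ones(win) / win, mode="same"))
R = np.corrcoef(u_large, env)[0, 1]                  # modulation coefficient
```

    A positive R indicates that small-scale energy rises where the large-scale component is positive, i.e. the amplitude-modulation effect reported in the paper.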

  3. Generation of large-scale magnetic fields by small-scale dynamo in shear flows

    NASA Astrophysics Data System (ADS)

    Squire, Jonathan; Bhattacharjee, Amitava

    2015-11-01

    A new mechanism for turbulent mean-field dynamo is proposed, in which the magnetic fluctuations resulting from a small-scale dynamo drive the generation of large-scale magnetic fields. This is in stark contrast to the common idea that small-scale magnetic fields should be harmful to large-scale dynamo action. These dynamos occur in the presence of large-scale velocity shear and do not require net helicity, resulting from off-diagonal components of the turbulent resistivity tensor as the magnetic analogue of the ``shear-current'' effect. The dynamo is studied using a variety of computational and analytic techniques, both when the magnetic fluctuations arise self-consistently through the small-scale dynamo and in lower Reynolds number regimes. Given the inevitable existence of non-helical small-scale magnetic fields in turbulent plasmas, as well as the generic nature of velocity shear, the suggested mechanism may help to explain generation of large-scale magnetic fields across a wide range of astrophysical objects. This work was supported by a Procter Fellowship at Princeton University, and the US Department of Energy Grant DE-AC02-09-CH11466.

  4. PKI security in large-scale healthcare networks.

    PubMed

    Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos

    2012-06-01

    During the past few years, many PKI (Public Key Infrastructure) architectures have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI infrastructure to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI infrastructure addresses the trust issues that arise in a large-scale healthcare network comprising multiple PKI domains.
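    As a toy illustration of the hierarchical trust that such an infrastructure relies on, the sketch below models certificate-chain verification. It deliberately uses HMACs so it stays self-contained; a real healthcare PKI would use X.509 certificates with asymmetric signatures (RSA/ECDSA), and every name and key here is hypothetical.

```python
import hashlib, hmac, json

# Toy model of hierarchical PKI trust (illustration only: real PKIs use
# asymmetric signatures, not shared-secret HMACs).
def sign(parent_key, cert):
    payload = json.dumps(cert, sort_keys=True).encode()
    return hmac.new(parent_key, payload, hashlib.sha256).hexdigest()

def issue(parent_key, subject, key):
    """Issuer (holder of parent_key) signs a certificate for `subject`."""
    cert = {"subject": subject, "key": key.hex()}
    return {**cert, "sig": sign(parent_key, cert)}

def verify_chain(root_key, chain):
    """Walk from the trusted root CA down; each certificate must be
    signed by the key certified one level above it."""
    parent_key = root_key
    for cert in chain:
        body = {k: v for k, v in cert.items() if k != "sig"}
        if not hmac.compare_digest(sign(parent_key, body), cert["sig"]):
            return False
        parent_key = bytes.fromhex(cert["key"])
    return True

# Hypothetical two-level chain: root CA -> regional CA -> healthcare unit.
root = b"root-ca-secret"
ca_key, unit_key = b"regional-ca-key", b"clinic-key"
chain = [issue(root, "Regional Health CA", ca_key),
         issue(ca_key, "Clinic Unit 17", unit_key)]
print(verify_chain(root, chain))   # True
```

    A chain presented to a verifier that trusts a different root fails at the first link, which is exactly the cross-domain trust problem multi-domain healthcare PKIs must solve.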

  5. Dispersion and Cluster Scales in the Ocean

    NASA Astrophysics Data System (ADS)

    Kirwan, A. D., Jr.; Chang, H.; Huntley, H.; Carlson, D. F.; Mensa, J. A.; Poje, A. C.; Fox-Kemper, B.

    2017-12-01

    Ocean flow space scales range from centimeters to thousands of kilometers. Because of their large Reynolds number, these flows are considered turbulent. However, because of rotation and stratification constraints, they do not conform to classical turbulence scaling theory. Mesoscale and large-scale motions are well described by geostrophic or "2D turbulence" theory; however, extending this theory to submesoscales has proved problematic. One obvious reason is the difficulty of obtaining reliable data over many orders of magnitude of spatial scales in an ocean environment. The goal of this presentation is to provide a preliminary synopsis of two recent experiments that overcame these obstacles. The first experiment, the Grand LAgrangian Deployment (GLAD), was conducted during July 2012 in the eastern half of the Gulf of Mexico. Here approximately 300 GPS-tracked drifters were deployed with the primary goal of determining whether the relative dispersion of an initially densely clustered array was driven by processes acting at local pair-separation scales or by straining imposed by mesoscale motions. The second experiment was a component of the LAgrangian Submesoscale Experiment (LASER) conducted during the winter of 2016. Here thousands of bamboo plates were tracked optically from an aerostat. Together these two deployments provided an unprecedented data set on dispersion and clustering processes at scales from 1 to 10^6 m. Calculations of statistics such as two-point separations, structure functions, and scale-dependent relative diffusivities showed an inverse energy cascade, as expected, at scales above 10 km, and a forward energy cascade at scales below 10 km with a possible energy input at Langmuir circulation scales. We also find evidence from structure-function calculations for surface flow convergence at scales less than 10 km that accounts for material clustering at the ocean surface.
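    The pair-separation statistics mentioned above can be sketched on synthetic trajectories. The random-walk "drifters" and all parameters below are illustrative stand-ins for the GPS-tracked drifter data, not a reconstruction of the experiments.

```python
import numpy as np

def relative_dispersion(traj):
    """Mean-squared pair separation D2(t) over all drifter pairs.
    traj: array (n_drifters, n_times, 2) of positions in meters."""
    n = traj.shape[0]
    i, j = np.triu_indices(n, k=1)
    sep = traj[i] - traj[j]                      # (n_pairs, n_times, 2)
    return (sep ** 2).sum(axis=-1).mean(axis=0)  # (n_times,)

def relative_diffusivity(D2, dt):
    """Relative diffusivity K(t) = 0.5 * dD2/dt."""
    return 0.5 * np.gradient(D2, dt)

# Synthetic cluster of drifters doing independent random walks
# (hypothetical step size and sampling interval, illustration only).
rng = np.random.default_rng(1)
n_drifters, n_times, dt = 50, 200, 3600.0        # 1-h sampling
steps = rng.normal(0.0, 50.0, (n_drifters, n_times, 2))  # meters per step
traj = np.cumsum(steps, axis=1)

D2 = relative_dispersion(traj)
K = relative_diffusivity(D2, dt)
print(f"D2 grows from {D2[0]:.0f} to {D2[-1]:.0f} m^2")
```

    Plotting D2 (or K) against the separation scale, rather than time, is what reveals the scale-dependent cascade regimes discussed in the abstract.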

  6. Achievement Inequalities in Hamburg Schools: How Do They Change as Students Get Older?

    ERIC Educational Resources Information Center

    Caro, Daniel H.; Lehmann, Rainer

    2009-01-01

    A handful of studies have found evidence of a gap in academic achievement between students of high- and low-socioeconomic status (SES) families. Furthermore, some scholars argue that the gap tends to widen as students get older. Evidence is, however, inconclusive and relies mostly on limited methodological designs. Drawing on the Hamburg School…

  7. From the Quixotic to the Pragmatic: The "Hamburg Declaration", Adult Education, and Work

    ERIC Educational Resources Information Center

    Rose, Amy

    2013-01-01

    The "Hamburg Declaration" (UNESCO, 1997) is perhaps most quixotic and prescient in laying out the changing world of work as envisioned in 1997. It includes particular commitments to promote the rights to work and to work-related adult learning, to increase access to work-related adult learning for different target groups, and to…

  8. Large-scale velocities and primordial non-Gaussianity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Fabian

    2010-09-15

    We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (≈0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.

  9. Large-scale regions of antimatter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grobov, A. V., E-mail: alexey.grobov@gmail.com; Rubin, S. G., E-mail: sgrubin@mephi.ru

    2015-07-15

    A modified mechanism of the formation of large-scale antimatter regions is proposed. Antimatter appears owing to fluctuations of a complex scalar field that carries a baryon charge in the inflation era.

  10. Towards large scale multi-target tracking

    NASA Astrophysics Data System (ADS)

    Vo, Ba-Ngu; Vo, Ba-Tuong; Reuter, Stephan; Lam, Quang; Dietmayer, Klaus

    2014-06-01

    Multi-target tracking is intrinsically an NP-hard problem, and the complexity of multi-target tracking solutions usually does not scale gracefully with problem size. Multi-target tracking for on-line applications involving a large number of targets is extremely challenging. This article demonstrates the capability of the random finite set approach to provide large-scale multi-target tracking algorithms. In particular, it is shown that an approximate filter known as the labeled multi-Bernoulli filter can simultaneously track one thousand five hundred targets in clutter on a standard laptop computer.

  11. The Expanded Large Scale Gap Test

    DTIC Science & Technology

    1987-03-01

    NSWC TR 86-32, "The Expanded Large Scale Gap Test," by T. P. Liddiard and D. Price, Research and Technology Department, March 1987. Approved for public release. The expanded test is intended, as the need arises, to reduce the spread in the LSGT 50% gap value. The worst charges, such as those with the highest or lowest densities, the largest re-pressed...

  12. An informal paper on large-scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Ho, Y. C.

    1975-01-01

    Large scale systems are defined as systems requiring more than one decision maker to control the system. Decentralized control and decomposition are discussed for large scale dynamic systems. Information and many-person decision problems are analyzed.

  13. On large-scale dynamo action at high magnetic Reynolds number

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cattaneo, F.; Tobias, S. M., E-mail: smt@maths.leeds.ac.uk

    2014-07-01

    We consider the generation of magnetic activity—dynamo waves—in the astrophysical limit of very large magnetic Reynolds number. We consider kinematic dynamo action for a system consisting of helical flow and large-scale shear. We demonstrate that large-scale dynamo waves persist at high Rm if the helical flow is characterized by a narrow band of spatial scales and the shear is large enough. However, for a wide band of scales the dynamo becomes small scale with a further increase of Rm, with dynamo waves re-emerging only if the shear is then increased. We show that at high Rm, the key effect of the shear is to suppress small-scale dynamo action, allowing large-scale dynamo action to be observed. We conjecture that this supports a general "suppression principle": large-scale dynamo action can only be observed if there is a mechanism that suppresses the small-scale fluctuations.

  14. Large-scale dynamos in rapidly rotating plane layer convection

    NASA Astrophysics Data System (ADS)

    Bushby, P. J.; Käpylä, P. J.; Masada, Y.; Brandenburg, A.; Favier, B.; Guervilly, C.; Käpylä, M. J.

    2018-05-01

    Context. Convectively driven flows play a crucial role in the dynamo processes that are responsible for producing magnetic activity in stars and planets. It is still not fully understood why many astrophysical magnetic fields have a significant large-scale component. Aims: Our aim is to investigate the dynamo properties of compressible convection in a rapidly rotating Cartesian domain, focusing upon a parameter regime in which the underlying hydrodynamic flow is known to be unstable to a large-scale vortex instability. Methods: The governing equations of three-dimensional non-linear magnetohydrodynamics (MHD) are solved numerically. Different numerical schemes are compared and we propose a possible benchmark case for other similar codes. Results: In keeping with previous related studies, we find that convection in this parameter regime can drive a large-scale dynamo. The components of the mean horizontal magnetic field oscillate, leading to a continuous overall rotation of the mean field. Whilst the large-scale vortex instability dominates the early evolution of the system, the large-scale vortex is suppressed by the magnetic field and makes a negligible contribution to the mean electromotive force that is responsible for driving the large-scale dynamo. The cycle period of the dynamo is comparable to the ohmic decay time, with longer cycles for dynamos in convective systems that are closer to onset. In these particular simulations, large-scale dynamo action is found only when vertical magnetic field boundary conditions are adopted at the upper and lower boundaries. Strongly modulated large-scale dynamos are found at higher Rayleigh numbers, with periods of reduced activity (grand minima-like events) occurring during transient phases in which the large-scale vortex temporarily re-establishes itself, before being suppressed again by the magnetic field.

  15. Large-scale anisotropy of the cosmic microwave background radiation

    NASA Technical Reports Server (NTRS)

    Silk, J.; Wilson, M. L.

    1981-01-01

    Inhomogeneities in the large-scale distribution of matter inevitably lead to the generation of large-scale anisotropy in the cosmic background radiation. The dipole, quadrupole, and higher order fluctuations expected in an Einstein-de Sitter cosmological model have been computed. The dipole and quadrupole anisotropies are comparable to the measured values, and impose important constraints on the allowable spectrum of large-scale matter density fluctuations. A significant dipole anisotropy is generated by the matter distribution on scales greater than approximately 100 Mpc. The large-scale anisotropy is insensitive to the ionization history of the universe since decoupling, and cannot easily be reconciled with a galaxy formation theory that is based on primordial adiabatic density fluctuations.

  16. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs, optimized in capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  17. Problems of simulation of large, long-lived vortices in the atmospheres of the giant planets (jupiter, saturn, neptune)

    NASA Astrophysics Data System (ADS)

    Nezlin, Michael V.; Sutyrin, Georgi G.

    1994-01-01

    Large, long-lived vortices are abundant in the atmospheres of the giant planets. Some of them survive a few orders of magnitude longer than the dispersive linear Rossby wave packets, e.g. the Great Red Spot (GRS), Little Red Spot (LRS) and White Ovals (WO) of Jupiter, Big Bertha, Brown Spot and Anne's Spot of Saturn, and the Great Dark Spot (GDS) of Neptune. Nonlinear effects which prevent their dispersive spreading are the main subject of our consideration. Particular emphasis is placed on determining the dynamical processes which may explain the remarkable properties of the observed vortices, such as anticyclonic rotation in preference to cyclonic rotation, and the uniqueness of the GRS, the largest coherent vortex, along the perimeter of Jupiter at the corresponding latitude. We review recent experimental and theoretical studies of steadily translating solitary Rossby vortices (anticyclones) in a rotating shallow fluid. Two-dimensional monopolar solitary vortices trap fluid which is transported westward. These dualistic structures appear to be vortices, on the one hand, and solitary “waves”, on the other hand. Owing to the presence of the trapped fluid, such solitary structures collide inelastically and have a memory of the initial disturbance which is responsible for the formation of the structure. As a consequence, they have no definite relationship between the amplitude and characteristic size. Their vortical properties are connected with geostrophic advection of local vorticity. Their solitary properties (nonspreading and stationary translation) are due to a balance between Rossby wave dispersion and nonlinear effects which allow the anticyclones, with an elevation of a free surface, to propagate faster than the linear waves, without a resonance with linear waves, i.e. without wave radiation. On the other hand, cyclones, with a depression of a free surface, are dispersive and nonstationary features. This asymmetry in the dispersion-nonlinear properties of cyclones and anticyclones is thought to be one of the essential reasons for the observed predominance of anticyclones among long-lived vortices.

  18. Large-Scale Coronal Heating from the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, David A.; Moore, Ronald L.; Porter, Jason G.; Hathaway, David H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular. In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. The emission of the coronal network and bright points contributes only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the large-scale corona, the supergranular and larger-scale structure that we had previously treated as a background, and that emits 95% of the total Fe XII emission. We compare the dim and bright halves of the large-scale corona and find that the bright half is 1.5 times brighter than the dim half, has an order of magnitude greater area of bright point coverage, has three times brighter coronal network, and has about 1.5 times more magnetic flux than the dim half. These results suggest that the brightness of the large-scale corona is more closely related to the large-scale total magnetic flux than to bright point activity. We conclude that in the quiet Sun: (1) magnetic flux is modulated (concentrated/diluted) on size scales larger than supergranules; (2) the large-scale enhanced magnetic flux gives an enhanced, more active, magnetic network and an increased incidence of network bright point formation; (3) the heating of the large-scale corona is dominated by more widespread, but weaker, network activity than that which heats the bright points. This work was funded by the Solar Physics Branch of NASA's Office of Space Science through the SR&T Program and the SEC Guest Investigator Program.

  19. Large- and Very-Large-Scale Motions in Katabatic Flows Over Steep Slopes

    NASA Astrophysics Data System (ADS)

    Giometto, M. G.; Fang, J.; Salesky, S.; Parlange, M. B.

    2016-12-01

    Evidence of large- and very-large-scale motions populating the boundary layer in katabatic flows over steep slopes is presented via direct numerical simulations (DNSs). DNSs are performed at a modified Reynolds number (Rem = 967), considering four sloping angles (α = 60°, 70°, 80° and 90°). Large coherent structures prove to be strongly dependent on the inclination of the underlying surface. Spectra and co-spectra consistently show signatures of large-scale motions (LSMs), with streamwise extension on the order of the boundary layer thickness. A second low-wavenumber mode characterizes pre-multiplied spectra and co-spectra when the slope angle is below 70°, indicative of very-large-scale motions (VLSMs). In addition, conditional sampling and averaging shows how LSMs and VLSMs are induced by counter-rotating roll modes, in agreement with findings from canonical wall-bounded flows. VLSMs contribute to the streamwise velocity variance and shear stress in the above-jet regions by up to 30% and 45%, respectively, whereas both LSMs and VLSMs are inactive in the near-wall regions.

  20. Large Scale Processes and Extreme Floods in Brazil

    NASA Astrophysics Data System (ADS)

    Ribeiro Lima, C. H.; AghaKouchak, A.; Lall, U.

    2016-12-01

    Persistent large-scale anomalies in the atmospheric circulation and ocean state have been associated with heavy rainfall and extreme floods in water basins of different sizes across the world. Such studies have emerged in recent years as a new tool to improve the traditional, stationarity-based approach to flood frequency analysis and flood prediction. Here we seek to advance previous studies by evaluating the dominance of large-scale processes (e.g. atmospheric rivers/moisture transport) over local processes (e.g. local convection) in producing floods. We consider flood-prone regions in Brazil as case studies, and the role of large-scale climate processes in generating extreme floods in such regions is explored by means of observed streamflow, reanalysis data and machine learning methods. The dynamics of the large-scale atmospheric circulation in the days prior to the flood events are evaluated based on the vertically integrated moisture flux and its divergence field, which are interpreted in a low-dimensional space obtained by machine learning techniques, particularly supervised kernel principal component analysis. In this reduced-dimensional space, clusters are obtained in order to better understand the role of regional moisture recycling or teleconnected moisture in producing floods of a given magnitude. The convective available potential energy (CAPE) is also used as a measure of local convection activity. We investigate, for individual sites, the exceedance probability with which large-scale atmospheric fluxes dominate the flood process. Finally, we analyze regional patterns of floods and how the scaling law of floods with drainage area responds to changes in the climate forcing mechanisms (e.g. local vs. large scale).
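    The dimension reduction step can be sketched with a plain, unsupervised kernel PCA, a simplified stand-in for the supervised variant the abstract names. The RBF kernel, the two-ring toy data set and every parameter below are hypothetical illustrations, not the study's data or settings.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.1):
    """Unsupervised kernel PCA with an RBF kernel, implemented directly:
    build the kernel matrix, double-center it, and eigendecompose."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise |xi-xj|^2
    K = np.exp(-gamma * sq)                              # RBF kernel matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                       # double-centering
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.maximum(vals, 0.0))         # projected coords

# Two hypothetical circulation "regimes" as concentric rings: a nonlinear
# structure that linear PCA cannot unfold but a kernel method can.
rng = np.random.default_rng(4)
t = rng.uniform(0.0, 2 * np.pi, 200)
r = np.r_[np.full(100, 1.0), np.full(100, 3.0)] + rng.normal(0, 0.1, 200)
X = np.c_[r * np.cos(t), r * np.sin(t)]

Z = kernel_pca(X, n_components=2, gamma=1.0)
# Z can then be fed to a clustering step, as in the flood analysis above.
```

    In the study a supervised kernel and real moisture-flux fields would replace this toy setup; the mechanics of centering and eigendecomposition are the same.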

  1. Generation of large-scale density fluctuations by buoyancy

    NASA Technical Reports Server (NTRS)

    Chasnov, J. R.; Rogallo, R. S.

    1990-01-01

    The generation of fluid motion from a state of rest by buoyancy forces acting on a homogeneous isotropic small-scale density field is considered. Nonlinear interactions between the generated fluid motion and the initial isotropic small-scale density field are found to create an anisotropic large-scale density field with spectrum proportional to kappa(exp 4). This large-scale density field is observed to result in an increasing Reynolds number of the fluid turbulence in its final period of decay.

  2. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2009-09-30

    ...Modeling of Burning Emissions (FLAMBE) project, and other related parameters. Our plans to embed NAAPS inside NOGAPS may need to be put on hold... AOD, FLAMBE and FAROP at FNMOC are supported by 6.4 funding from PMW-120 for "Large-scale Atmospheric Models" and "Small-scale Atmospheric Models"...

  3. Information Tailoring Enhancements for Large-Scale Social Data

    DTIC Science & Technology

    2016-06-15

    Intelligent Automation Incorporated, Progress Report No. 3: Information Tailoring Enhancements for Large-Scale Social Data. Submitted in accordance with... Contents include: 1. Work Performed within This Reporting Period; 1.1 Enhanced Named Entity Recognition (NER)...

  4. Long-lived planetary vortices and their evolution: Conservative intermediate geostrophic model.

    PubMed

    Sutyrin, Georgi G.

    1994-06-01

    Large, long-lived vortices, surviving during many turnaround times and far longer than the dispersive linear Rossby wave packets, are abundant in planetary atmospheres and oceans. Nonlinear effects which prevent dispersive decay of intense cyclones and anticyclones and enable their self-propelled propagation are revisited here using shallow water equations and their balanced approximations. The main physical mechanism allowing vortical structures to be long-lived in a planetary fluid is the quick fluid rotation inside their cores, which prevents growth in the amplitude of the asymmetric circulation arising due to the beta-effect. Intense vortices of both signs survive essentially longer than the linear Rossby wave packet if their azimuthal velocity is much larger than the Rossby wave speed. However, in the long-time evolution, cyclonic and anticyclonic vortices behave essentially differently, as illustrated by the conservative intermediate geostrophic model. The asymmetric circulation governing vortex propagation is described by the azimuthal mode m=1 for the initial value problem as well as for steadily propagating solutions. Cyclonic vortices move west-poleward, decaying gradually due to Rossby wave radiation, while anticyclonic ones adjust to non-radiating solitary vortices. The slow weakening of an intense cyclone, with decreasing size and shrinking of the core, is described assuming zero azimuthal velocity outside the core while it drifts poleward. The poleward tendency of the cyclone motion relative to the steering flow corresponds to characteristic trajectories of tropical cyclones in the Earth's atmosphere. The asymmetry in the dispersion-nonlinear properties of cyclones and anticyclones is thought to be one of the essential reasons for the observed predominance of anticyclones among long-lived vortices in the atmospheres of the giant planets and also among intrathermoclinic eddies in the ocean.

  5. A bibliographical survey of large-scale systems

    NASA Technical Reports Server (NTRS)

    Corliss, W. R.

    1970-01-01

    A limited, partly annotated bibliography was prepared on the subject of large-scale system control. Approximately 400 references are divided into thirteen application areas, such as large societal systems and large communication systems. A first-author index is provided.

  6. Installation and performance of the Budapest Hamburg proton microprobe

    NASA Astrophysics Data System (ADS)

    Kovács, I.; Kocsonya, A.; Kostka, P.; Szőkefalvi-Nagy, Z.; Schrang, K.; Krüger, A.; Niecke, M.

    2005-04-01

    A new scanning proton microprobe has been installed at the 5 MV Van de Graaff accelerator of the KFKI Research Institute for Particle and Nuclear Physics. It is the energy-upgraded version of the Hamburg proton microprobe dismantled in 2001. The probe forming system includes a pair of focusing quadrupoles and an additional quadrupole pair in front of it, which is applied to increase the proton beam divergence. The average probe size at 2.5 MeV proton energy is 2.2 μm × 1.1 μm. The test results on stability and the preliminary experiments on cement corrosion and fish otoliths are also presented.

  7. Evolution of scaling emergence in large-scale spatial epidemic spreading.

    PubMed

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Zipf's law and Heaps' law are two representatives of the scaling concepts which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has still hardly been clarified. In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at later times, before reaching a stable state in which Heaps' law still holds while strict Zipf's law has disappeared. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results for pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing United States domestic air transportation and demographic data to construct a metapopulation model for simulating pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease.
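    The two scalings can be measured directly from any event stream. The sketch below fits both exponents on a synthetic token stream drawn from a Zipf distribution; the vocabulary size, sample size and distribution are invented for illustration and have nothing to do with the epidemic data in the study.

```python
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Fit log(frequency) vs log(rank): slope ~ -alpha under Zipf's law."""
    freqs = np.sort(np.array(list(Counter(tokens).values())))[::-1]
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

def heaps_curve(tokens):
    """Number of distinct types N(t) after t tokens (Heaps' law: N ~ t^beta)."""
    seen, growth = set(), []
    for tok in tokens:
        seen.add(tok)
        growth.append(len(seen))
    return np.array(growth)

# Hypothetical token stream from a Zipf(alpha = 1) distribution over a
# finite vocabulary (illustration only).
rng = np.random.default_rng(2)
V = 2000
p = 1.0 / np.arange(1, V + 1)
p /= p.sum()
tokens = rng.choice(V, size=50_000, p=p)

alpha = zipf_exponent(tokens)
N = heaps_curve(tokens)
beta, _ = np.polyfit(np.log(np.arange(1, len(N) + 1)), np.log(N), 1)
print(f"Zipf exponent ~ {alpha:.2f}, Heaps exponent ~ {beta:.2f}")
```

    Tracking how the fitted exponents drift as the stream grows is one simple way to observe the crossover and eventual breakdown of strict Zipf scaling that the abstract describes.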

  8. Moon-based Earth Observation for Large Scale Geoscience Phenomena

    NASA Astrophysics Data System (ADS)

    Guo, Huadong; Liu, Guang; Ding, Yixing

    2016-07-01

    The capability of Earth observation for large-global-scale natural phenomena needs to be improved, and new observing platforms are expected. We have studied the concept of the Moon as an Earth-observation platform in recent years. Compared with man-made satellite platforms, Moon-based Earth observation can obtain multi-spherical, full-band, active and passive information, which has the following advantages: large observation range, variable view angle, long-term continuous observation and an extra-long life cycle, with the characteristics of longevity, consistency, integrity, stability and uniqueness. Moon-based Earth observation is suitable for monitoring large-scale geoscience phenomena, including large-scale atmosphere change, large-scale ocean change, large-scale land-surface dynamic change, solid-earth dynamic change, etc. For the purpose of establishing a Moon-based Earth observation platform, we plan to study the following five aspects: mechanisms and models of Moon-based observation of macroscopic Earth-science phenomena; optimization of sensor parameters and methods of Moon-based Earth observation; site selection and environment of Moon-based Earth observation; the Moon-based Earth observation platform; and a fundamental scientific framework for Moon-based Earth observation.

  9. Large-Scale Structure and Hyperuniformity of Amorphous Ices

    NASA Astrophysics Data System (ADS)

    Martelli, Fausto; Torquato, Salvatore; Giovambattista, Nicolas; Car, Roberto

    2017-09-01

    We investigate the large-scale structure of amorphous ices and transitions between their different forms by quantifying their large-scale density fluctuations. Specifically, we simulate the isothermal compression of low-density amorphous ice (LDA) and hexagonal ice to produce high-density amorphous ice (HDA). Both HDA and LDA are nearly hyperuniform; i.e., they are characterized by an anomalous suppression of large-scale density fluctuations. By contrast, in correspondence with the nonequilibrium phase transitions to HDA, the presence of structural heterogeneities strongly suppresses the hyperuniformity and the system becomes hyposurficial (devoid of "surface-area fluctuations"). Our investigation challenges the largely accepted "frozen-liquid" picture, which views glasses as structurally arrested liquids. Beyond implications for water, our findings enrich our understanding of pressure-induced structural transformations in glasses.
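    Hyperuniformity, i.e. the anomalous suppression of large-scale density fluctuations, is diagnosed through the small-wavenumber behavior of the structure factor S(k). A one-dimensional toy comparison is sketched below; the point patterns and parameters are synthetic illustrations, not the amorphous-ice configurations of the study.

```python
import numpy as np

def structure_factor(x, L, n_modes=40):
    """1-D structure factor S(k) = |sum_j exp(-i k x_j)|^2 / N, evaluated
    at the allowed wavenumbers k_m = 2*pi*m/L of a periodic box."""
    N = len(x)
    ks = 2 * np.pi * np.arange(1, n_modes + 1) / L
    S = np.array([np.abs(np.exp(-1j * k * x).sum()) ** 2 / N for k in ks])
    return ks, S

rng = np.random.default_rng(3)
N, L = 4000, 4000.0

# Poisson points: S(k) fluctuates around 1 at all k (not hyperuniform).
poisson = rng.uniform(0.0, L, N)

# Randomly perturbed lattice: hyperuniform, S(k) -> 0 as k -> 0.
lattice = (np.arange(N) + rng.uniform(-0.3, 0.3, N)) * (L / N)

for name, pts in [("Poisson", poisson), ("perturbed lattice", lattice)]:
    ks, S = structure_factor(pts, L)
    print(f"{name}: mean S over the smallest wavenumbers = {S.mean():.4f}")
```

    The vanishing of S(k) as k goes to 0 for the perturbed lattice is the same large-scale-fluctuation suppression the abstract reports for LDA and HDA.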

  10. Quality of life of Turkish type 2 diabetics in Germany and Turkey--a comparison between Hamburg and Istanbul.

    PubMed

    Kofahl, Christopher; Doğan, Mustafa; Doğan, Gülsün; Mnich, Eva; von dem Knesebeck, Olaf

    2014-01-01

    The analyses address the following research questions: (1) Do Turkish diabetics in Germany and Turkey differ in terms of quality of life? (2) If yes, can these differences (in part) be explained by social factors (age, gender, education, household size), functional limitations and availability of support? (3) Are social factors, functional limitations and availability of support differently associated with quality of life among Turkish diabetics in Germany and Turkey? For this comparative cross-sectional study, 111 patients with type 2 diabetes were personally interviewed in Istanbul (Turkey) and 294 Turkish patients in Hamburg (Germany). For quality-of-life measurement we used the Turkish version of the WHOQOL-Bref-26. Sociodemographics included age, sex, education and household size. Health-related functional limitations were assessed on the basis of an index of (instrumental) activities of daily living, including the availability of help. Statistical analyses comprised group comparisons with chi-square and t-tests as well as linear regressions. There are no significant differences between Turkish diabetics in Germany and Turkey in the physical and psychological dimensions of the WHOQOL-Bref. However, in the WHOQOL domains 'social QoL' and 'environmental QoL', Turkish diabetics living in Hamburg have a significantly better quality of life than their counterparts in Istanbul. These differences cannot be explained by individual sociodemographic factors, functional limitations or availability of support. Furthermore, we found much stronger positive associations between education and quality of life in Istanbul than in Hamburg. Beyond strong similarities between the two samples in sociodemographics and physical and mental health, social and environmental quality of life was assessed significantly better by the Turkish diabetics living in Hamburg. This is most likely an effect of public investment in social security, infrastructure and health care.

  11. Large Scale Cross Drive Correlation Of Digital Media

    DTIC Science & Technology

    2016-03-01

    NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Thesis: Large Scale Cross-Drive Correlation of Digital Media, by Joseph Van Bruaene, March 2016. …the ability to make large scale cross-drive correlations among a large corpus of digital media becomes increasingly important. We propose a

  12. The large-scale distribution of galaxies

    NASA Technical Reports Server (NTRS)

    Geller, Margaret J.

    1989-01-01

    The spatial distribution of galaxies in the universe is characterized on the basis of the six completed strips of the Harvard-Smithsonian Center for Astrophysics redshift-survey extension. The design of the survey is briefly reviewed, and the results are presented graphically. Vast low-density voids similar to the void in Bootes are found, almost completely surrounded by thin sheets of galaxies. Also discussed are the implications of the results for the survey sampling problem, the two-point correlation function of the galaxy distribution, the possibility of detecting large-scale coherent flows, theoretical models of large-scale structure, and the identification of groups and clusters of galaxies.
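    The two-point correlation function mentioned above is conventionally estimated from pair counts in the data against an unclustered random catalogue. The sketch below uses the simple Peebles-Hauser estimator xi(r) = (DD/RR)(N_R/N_D)^2 - 1 on synthetic points; the point sets, box size, and bins are invented for illustration and are unrelated to the CfA survey data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3D "galaxy" sample: clumps around random centres, plus a larger
# uniform random catalogue (all sizes and scales are illustrative).
n, box = 400, 100.0
centers = rng.uniform(0, box, (40, 3))
data = (centers[rng.integers(0, 40, n)] + rng.normal(0, 2.0, (n, 3))) % box
rand = rng.uniform(0, box, (4 * n, 3))

def pair_counts(a, b, edges):
    # Histogram of pairwise separations (brute force; fine at toy sizes).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return np.histogram(d.ravel(), bins=edges)[0].astype(float)

edges = np.linspace(1.0, 20.0, 11)
dd = pair_counts(data, data, edges)
rr = pair_counts(rand, rand, edges)

# Peebles-Hauser estimator: xi(r) = (DD/RR) * (N_R/N_D)^2 - 1.
xi = dd / rr * (len(rand) / len(data)) ** 2 - 1.0
```

With clustered data, xi(r) is positive at small separations and falls toward zero at large r.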

  13. A Single Mode Study of a Quasi-Geostrophic Convection-Driven Dynamo Model

    NASA Astrophysics Data System (ADS)

    Plumley, M.; Calkins, M. A.; Julien, K. A.; Tobias, S.

    2017-12-01

    Planetary magnetic fields are thought to be the product of hydromagnetic dynamo action. For Earth, this process occurs within the convecting, turbulent and rapidly rotating outer core, where the dynamics are characterized by low Rossby, low magnetic Prandtl and high Rayleigh numbers. Progress in studying dynamos has been limited by current computing capabilities and the difficulties in replicating the extreme values that define this setting. Asymptotic models that embrace these extreme parameter values and enforce the dominant balance of geostrophy provide an option for the study of convective flows with actual relevance to geophysics. The quasi-geostrophic dynamo model (QGDM) is a multiscale, fully-nonlinear Cartesian dynamo model that is valid in the asymptotic limit of low Rossby number. We investigate the QGDM using a simplified class of solutions that consist of a single horizontal wavenumber which enforces a horizontal structure on the solutions. This single mode study is used to explore multiscale time stepping techniques and analyze the influence of the magnetic field on convection.

  14. Large-scale environments of narrow-line Seyfert 1 galaxies

    NASA Astrophysics Data System (ADS)

    Järvelä, E.; Lähteenmäki, A.; Lietzen, H.; Poudel, A.; Heinämäki, P.; Einasto, M.

    2017-09-01

    Studying large-scale environments of narrow-line Seyfert 1 (NLS1) galaxies gives a new perspective on their properties, particularly their radio loudness. The large-scale environment is believed to have an impact on the evolution and intrinsic properties of galaxies; however, NLS1 sources have not been studied in this context before. We have a large and diverse sample of 1341 NLS1 galaxies and three separate environment data sets constructed using the Sloan Digital Sky Survey. We use various statistical methods to investigate how the properties of NLS1 galaxies are connected to the large-scale environment, and compare the large-scale environments of NLS1 galaxies with other active galactic nuclei (AGN) classes, for example, other jetted AGN and broad-line Seyfert 1 (BLS1) galaxies, to study how they are related. NLS1 galaxies reside in less dense environments than any of the comparison samples, thus confirming their young age. The average large-scale environment density and environmental distribution of NLS1 sources are clearly different compared to BLS1 galaxies; thus it is improbable that BLS1 galaxies could be the parent population of NLS1 galaxies, unified by orientation. Within the NLS1 class there is a trend of increasing radio loudness with increasing large-scale environment density, indicating that the large-scale environment affects their intrinsic properties. Our results suggest that the NLS1 class of sources is not homogeneous, and furthermore, that a considerable fraction of them are misclassified. We further support a published proposal to replace the traditional classification into radio-loud, and radio-quiet or radio-silent sources with a division into jetted and non-jetted sources.

  15. Geostrophic tripolar vortices in a two-layer fluid: Linear stability and nonlinear evolution of equilibria

    NASA Astrophysics Data System (ADS)

    Reinaud, J. N.; Sokolovskiy, M. A.; Carton, X.

    2017-03-01

    We investigate equilibrium solutions for tripolar vortices in a two-layer quasi-geostrophic flow. Two of the vortices are like-signed and lie in one layer. An opposite-signed vortex lies in the other layer. The families of equilibria can be spanned by the distance (called separation) between the two like-signed vortices. Two equilibrium configurations are possible when the opposite-signed vortex lies between the two other vortices. In the first configuration (called ordinary roundabouts), the opposite-signed vortex is equidistant from the two other vortices. In the second configuration (eccentric roundabouts), the distances are unequal. We determine the equilibria numerically and describe their characteristics for various internal deformation radii. The two branches of equilibria can co-exist and intersect for small deformation radii. Then, the eccentric roundabouts are stable while unstable ordinary roundabouts can be found. Indeed, ordinary roundabouts exist at smaller separations than eccentric roundabouts do, thus inducing stronger vortex interactions. However, for larger deformation radii, eccentric roundabouts can also be unstable. Then, the two branches of equilibria do not cross. The branch of eccentric roundabouts only exists for large separations. Near the end of the branch of eccentric roundabouts (at the smallest separation), one of the like-signed vortices exhibits a sharp inner corner where instabilities can be triggered. Finally, we investigate the nonlinear evolution of a few selected cases of tripoles.

  16. Double inflation - A possible resolution of the large-scale structure problem

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.; Villumsen, Jens V.; Vittorio, Nicola; Silk, Joseph; Juszkiewicz, Roman

    1987-01-01

    A model is presented for the large-scale structure of the universe in which two successive inflationary phases resulted in large small-scale and small large-scale density fluctuations. This bimodal density fluctuation spectrum in an Omega = 1 universe dominated by hot dark matter leads to large-scale structure of the galaxy distribution that is consistent with recent observational results. In particular, large, nearly empty voids and significant large-scale peculiar velocity fields are produced over scales of about 100 Mpc, while the small-scale structure over less than about 10 Mpc resembles that in a low-density universe, as observed. Detailed analytical calculations and numerical simulations are given of the spatial and velocity correlations.

  17. Quasi-geostrophic free mode models of long-lived Jovian eddies: Forcing mechanisms and crucial observational tests

    NASA Technical Reports Server (NTRS)

    Read, P. L.

    1986-01-01

    Observations of Jupiter and Saturn long-lived eddies, such as Jupiter's Great Red Spot and White Ovals, are presently compared with laboratory experiments and corresponding numerical simulations for free thermal convection in a rotating fluid that is subject to horizontal differential heating and cooling. Difficulties in determining the essential processes maintaining and dissipating stable eddies, on the basis of global energy budget studies, are discussed; such difficulties do not arise in considerations of the flow's potential vorticity budget. On Jupiter, diabatically forced and transient eddy-driven flows primarily differ in the implied role of transient eddies in transporting potential vorticity across closed geostrophic streamlines in the time mean.

  18. Measuring the Large-scale Solar Magnetic Field

    NASA Astrophysics Data System (ADS)

    Hoeksema, J. T.; Scherrer, P. H.; Peterson, E.; Svalgaard, L.

    2017-12-01

    The Sun's large-scale magnetic field is important for determining the global structure of the corona and for quantifying the evolution of the polar field, which is sometimes used for predicting the strength of the next solar cycle. Having confidence in the determination of the large-scale magnetic field of the Sun is difficult because the field is often near the detection limit, various observing methods all measure something a little different, and various systematic effects can be very important. We compare resolved and unresolved observations of the large-scale magnetic field from the Wilcox Solar Observatory (WSO), the Helioseismic and Magnetic Imager (HMI), the Michelson Doppler Imager (MDI), and SOLIS. Cross comparison does not enable us to establish an absolute calibration, but it does allow us to discover and compensate for instrument problems, such as the sensitivity decrease seen in the WSO measurements in late 2016 and early 2017.

  19. Actions at Hamburg International Association of Seismology and Physics of the Earth's Interior

    NASA Astrophysics Data System (ADS)

    The third Workshop on Historical Seismograms, held in Hamburg on August 18-19, 1983, in conjunction with the meeting of the International Union of Geodesy and Geophysics in Hamburg, Federal Republic of Germany, was specifically organized to discuss the status of historical seismic data for Latin America and Europe. Since it is unlikely that an additional workshop will be held on this subject, reports for other regions were included as well. In the first session, H. Meyers described the purpose of the workshop and gave some history of the previous activities of the IASPEI/Unesco Working Group on Historical Seismograms. E.R. Engdahl noted that thus far more than 500,000 seismograms have been filmed as part of the Historical Microfilming Project and emphasized the importance of the activities to be covered during the workshop. M. Hashizume, representing Unesco, described the importance of historical seismic data and the Unesco interests in having these data available for the analysis of seismic risks, particularly in areas where the recurrence rate of significant earthquakes is very low and for regions where much data do not exist. He mentioned that both these conditions occur frequently in developing nations.

  20. Spectral fingerprints of large-scale neuronal interactions.

    PubMed

    Siegel, Markus; Donner, Tobias H; Engel, Andreas K

    2012-01-11

    Cognition results from interactions among functionally specialized but widely distributed brain regions; however, neuroscience has so far largely focused on characterizing the function of individual brain regions and neurons therein. Here we discuss recent studies that have instead investigated the interactions between brain regions during cognitive processes by assessing correlations between neuronal oscillations in different regions of the primate cerebral cortex. These studies have opened a new window onto the large-scale circuit mechanisms underlying sensorimotor decision-making and top-down attention. We propose that frequency-specific neuronal correlations in large-scale cortical networks may be 'fingerprints' of canonical neuronal computations underlying cognitive processes.

  1. A unified large/small-scale dynamo in helical turbulence

    NASA Astrophysics Data System (ADS)

    Bhat, Pallavi; Subramanian, Kandaswamy; Brandenburg, Axel

    2016-09-01

    We use high resolution direct numerical simulations (DNS) to show that helical turbulence can generate significant large-scale fields even in the presence of strong small-scale dynamo action. During the kinematic stage, the unified large/small-scale dynamo grows fields with a shape-invariant eigenfunction, with most power peaked at small scales or large k, as in Subramanian & Brandenburg. Nevertheless, the large-scale field can be clearly detected as an excess power at small k in the negatively polarized component of the energy spectrum for a forcing with positively polarized waves. Its strength B̄, relative to the total rms field B_rms, decreases with increasing magnetic Reynolds number, Re_M. However, as the Lorentz force becomes important, the field generated by the unified dynamo orders itself by saturating on successively larger scales. The magnetic integral scale for the positively polarized waves, characterizing the small-scale field, increases significantly from the kinematic stage to saturation. This implies that the small-scale field becomes as coherent as possible for a given forcing scale, which averts the Re_M-dependent quenching of B̄/B_rms. These results are obtained for 1024^3 DNS with magnetic Prandtl numbers of Pr_M = 0.1 and 10. For Pr_M = 0.1, B̄/B_rms grows from about 0.04 to about 0.4 at saturation, aided in the final stages by helicity dissipation. For Pr_M = 10, B̄/B_rms grows from much less than 0.01 to values of the order of 0.2. Our results confirm that there is a unified large/small-scale dynamo in helical turbulence.

  2. Modulation of Small-scale Turbulence Structure by Large-scale Motions in the Absence of Direct Energy Transfer.

    NASA Astrophysics Data System (ADS)

    Brasseur, James G.; Juneja, Anurag

    1996-11-01

    Previous DNS studies indicate that small-scale structure can be directly altered through "distant" dynamical interactions by energetic forcing of the large scales. To remove the possibility of stimulating energy transfer between the large- and small-scale motions in these long-range interactions, we here perturb the large-scale structure without altering its energy content by suddenly altering only the phases of large-scale Fourier modes. Scale-dependent changes in turbulence structure appear as a nonzero difference field between two simulations from identical initial conditions of isotropic decaying turbulence, one perturbed and one unperturbed. We find that the large-scale phase perturbations leave the evolution of the energy spectrum virtually unchanged relative to the unperturbed turbulence. The difference field, on the other hand, is strongly affected by the perturbation. Most importantly, the time scale τ characterizing the change in turbulence structure at spatial scale r shortly after initiating a change in large-scale structure decreases with decreasing turbulence scale r. Thus, structural information is transferred directly from the large- to the smallest-scale motions in the absence of direct energy transfer, a long-range effect which cannot be explained by a linear mechanism such as rapid distortion theory. (Supported by ARO grant DAAL03-92-G-0117.)

  3. Evolution of Scaling Emergence in Large-Scale Spatial Epidemic Spreading

    PubMed Central

    Wang, Lin; Li, Xiang; Zhang, Yi-Qing; Zhang, Yan; Zhang, Kan

    2011-01-01

    Background Zipf's law and Heaps' law are two representatives of the scaling concepts, which play a significant role in the study of complexity science. The coexistence of Zipf's law and Heaps' law motivates different understandings of the dependence between these two scalings, which has hardly been clarified so far. Methodology/Principal Findings In this article, we observe an evolution process of the scalings: Zipf's law and Heaps' law are naturally shaped to coexist at the initial time, while a crossover comes with the emergence of their inconsistency at larger times before reaching a stable state, where Heaps' law still exists with the disappearance of strict Zipf's law. Such findings are illustrated with a scenario of large-scale spatial epidemic spreading, and the empirical results of pandemic disease support a universal analysis of the relation between the two laws regardless of the biological details of the disease. Employing the United States domestic air transportation and demographic data to construct a metapopulation model for simulating the pandemic spread at the U.S. country level, we uncover that the broad heterogeneity of the infrastructure plays a key role in the evolution of scaling emergence. Conclusions/Significance The analyses of large-scale spatial epidemic spreading help understand the temporal evolution of scalings, indicating that the coexistence of Zipf's law and Heaps' law depends on the collective dynamics of epidemic processes, and the heterogeneity of epidemic spread indicates the significance of performing targeted containment strategies at the early stage of a pandemic disease. PMID:21747932
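    The interplay of the two scalings can be illustrated with a toy generative process. The sketch below uses Simon's classic rich-get-richer model, not the paper's metapopulation simulation; alpha and the sequence length are arbitrary. With probability alpha a new item appears; otherwise an old item recurs in proportion to its past frequency. Rank-frequency (Zipf) and distinct-count growth (Heaps) statistics are then read off the sequence.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simon's rich-get-richer process (illustrative stand-in for the
# paper's epidemic dynamics; alpha and length are arbitrary).
alpha = 0.1
seq = [0]
next_id = 1
for _ in range(20000):
    if rng.random() < alpha:
        seq.append(next_id)        # introduce a brand-new item
        next_id += 1
    else:
        # repeat an earlier occurrence: picking a uniform past position
        # selects items in proportion to their past frequency
        seq.append(seq[rng.integers(len(seq))])
seq = np.array(seq)

# Zipf side: rank-frequency curve of the items.
freq = np.sort(np.bincount(seq))[::-1]

# Heaps side: number of distinct items versus sequence length.
distinct = np.zeros(len(seq), dtype=int)
seen = set()
for i, s in enumerate(seq):
    seen.add(int(s))
    distinct[i] = len(seen)
```

Plotting freq against rank and distinct against time on log-log axes exposes the two scalings and how they co-evolve.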

  4. A Functional Model for Management of Large Scale Assessments.

    ERIC Educational Resources Information Center

    Banta, Trudy W.; And Others

    This functional model for managing large-scale program evaluations was developed and validated in connection with the assessment of Tennessee's Nutrition Education and Training Program. Management of such a large-scale assessment requires the development of a structure for the organization; distribution and recovery of large quantities of…

  5. Current Scientific Issues in Large Scale Atmospheric Dynamics

    NASA Technical Reports Server (NTRS)

    Miller, T. L. (Compiler)

    1986-01-01

    Topics in large scale atmospheric dynamics are discussed. Aspects of atmospheric blocking, the influence of transient baroclinic eddies on planetary-scale waves, cyclogenesis, the effects of orography on planetary scale flow, small scale frontal structure, and simulations of gravity waves in frontal zones are discussed.

  6. Condition Monitoring of Large-Scale Facilities

    NASA Technical Reports Server (NTRS)

    Hall, David L.

    1999-01-01

    This document provides a summary of the research conducted for the NASA Ames Research Center under grant NAG2-1182 (Condition-Based Monitoring of Large-Scale Facilities). The information includes copies of view graphs presented at NASA Ames in the final Workshop (held during December of 1998), as well as a copy of a technical report provided to the COTR (Dr. Anne Patterson-Hine) subsequent to the workshop. The material describes the experimental design, collection of data, and analysis results associated with monitoring the health of large-scale facilities. In addition to this material, a copy of the Pennsylvania State University Applied Research Laboratory data fusion visual programming tool kit was also provided to NASA Ames researchers.

  7. Seismic safety in conducting large-scale blasts

    NASA Astrophysics Data System (ADS)

    Mashukov, I. V.; Chaplygin, V. V.; Domanov, V. P.; Semin, A. A.; Klimkin, M. A.

    2017-09-01

    In mining enterprises, a drilling and blasting method is used to prepare hard rock for excavation. As mining operations approach settlements, the negative effect of large-scale blasts increases. To assess the level of seismic impact of large-scale blasts, the scientific staff of Siberian State Industrial University carried out assessments for coal mines and iron ore enterprises. The magnitude of surface seismic vibrations caused by mass explosions was determined using seismic receivers and an analog-to-digital converter with recording on a laptop. Results of recording surface seismic vibrations during more than 280 large-scale blasts at 17 mining enterprises in 22 settlements are presented. The maximum velocity values of the Earth's surface vibrations are determined. The safety evaluation of the seismic effect was carried out according to the permissible value of vibration velocity. For cases where permissible values were exceeded, recommendations were developed to reduce the level of seismic impact.

  8. Potential for geophysical experiments in large scale tests.

    USGS Publications Warehouse

    Dieterich, J.H.

    1981-01-01

    Potential research applications for large-specimen geophysical experiments include measurements of scale dependence of physical parameters and examination of interactions with heterogeneities, especially flaws such as cracks. In addition, increased specimen size provides opportunities for improved recording resolution and greater control of experimental variables. Large-scale experiments using a special purpose low stress (100 MPa).

  9. Homogenization of Large-Scale Movement Models in Ecology

    USGS Publications Warehouse

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small scale (10-100 m) habitat variability on large scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
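    A hallmark of homogenizing ecological diffusion, u_t = (mu(x) u)_xx, is that the effective large-scale coefficient is the harmonic mean of the motility mu (unlike the Fickian case). The 1D finite-difference sketch below checks this numerically; the grid, the two motility values, and the run time are invented for illustration and are not the paper's mule-deer model.

```python
import numpy as np

# Ecological diffusion u_t = (mu(x) u)_xx with rapidly varying mu,
# compared against the homogenized equation c_t = mu_H c_xx, where
# mu_H is the harmonic mean of mu (all parameter values illustrative).
N, L, T = 200, 1.0, 0.005
dx = L / N
x = (np.arange(N) + 0.5) * dx

# Motility alternating between 0.5 and 2.0 on a fast scale (period 0.05).
mu = np.where((np.arange(N) // 5) % 2 == 0, 0.5, 2.0)
mu_H = 1.0 / np.mean(1.0 / mu)          # harmonic mean = 0.8

def lap(f):
    # Second difference with periodic boundaries.
    return (np.roll(f, -1) - 2 * f + np.roll(f, 1)) / dx**2

def run(mu_field):
    # Explicit time stepping of u_t = (mu u)_xx from a narrow Gaussian.
    u = np.exp(-0.5 * ((x - 0.5) / 0.05) ** 2)
    dt = 2e-6                            # stable: dt < dx^2 / (2 mu_max)
    for _ in range(int(T / dt)):
        u = u + dt * lap(mu_field * u)
    return u

def variance(u):
    # Spread of the population density profile.
    w = u / u.sum()
    m = (w * x).sum()
    return (w * (x - m) ** 2).sum()

u_fine = run(mu)                         # fine-scale motility
u_hom = run(np.full(N, mu_H))            # homogenized constant motility
v_fine, v_hom = variance(u_fine), variance(u_hom)
```

The two variances agree closely, i.e. the harmonic mean (not the arithmetic mean, 1.25) governs the large-scale spread.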

  10. The large-scale organization of metabolic networks

    NASA Astrophysics Data System (ADS)

    Jeong, H.; Tombor, B.; Albert, R.; Oltvai, Z. N.; Barabási, A.-L.

    2000-10-01

    In a cell or microorganism, the processes that generate mass, energy, information transfer and cell-fate specification are seamlessly integrated through a complex network of cellular constituents and reactions. However, despite the key role of these networks in sustaining cellular functions, their large-scale structure is essentially unknown. Here we present a systematic comparative mathematical analysis of the metabolic networks of 43 organisms representing all three domains of life. We show that, despite significant variation in their individual constituents and pathways, these metabolic networks have the same topological scaling properties and show striking similarities to the inherent organization of complex non-biological systems. This may indicate that metabolic organization is not only identical for all living organisms, but also complies with the design principles of robust and error-tolerant scale-free networks, and may represent a common blueprint for the large-scale organization of interactions among all cellular constituents.

  11. Large-scale weakly supervised object localization via latent category learning.

    PubMed

    Chong Wang; Kaiqi Huang; Weiqiang Ren; Junge Zhang; Maybank, Steve

    2015-04-01

    Localizing objects in cluttered backgrounds is challenging under large-scale weakly supervised conditions. Due to the cluttered image condition, objects usually have large ambiguity with backgrounds. Besides, there is also a lack of effective algorithms for large-scale weakly supervised localization in cluttered backgrounds. However, backgrounds contain useful latent information, e.g., the sky in the aeroplane class. If this latent information can be learned, object-background ambiguity can be largely reduced and background can be suppressed effectively. In this paper, we propose latent category learning (LCL) for large-scale cluttered conditions. LCL is an unsupervised learning method which requires only image-level class labels. First, we use latent semantic analysis with semantic object representation to learn the latent categories, which represent objects, object parts or backgrounds. Second, to determine which category contains the target object, we propose a category selection strategy by evaluating each category's discrimination. Finally, we propose online LCL for use in large-scale conditions. Evaluation on the challenging PASCAL Visual Object Classes (VOC) 2007 and the ImageNet Large Scale Visual Recognition Challenge 2013 detection data sets shows that the method can improve the annotation precision by 10% over previous methods. More importantly, we achieve a detection precision which outperforms previous results by a large margin and is competitive with the supervised deformable part model 5.0 baseline on both data sets.
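    Latent semantic analysis, as invoked here, is at its core a truncated SVD of a feature matrix, with each retained singular direction acting as one latent "category". A minimal sketch on random count data (the matrix, its sizes, and the rank k are placeholders, not the paper's image features):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy feature matrix: rows = images, columns = "visual word" counts
# (random placeholders, not the paper's actual features).
X = rng.poisson(1.0, (100, 50)).astype(float)

# Latent semantic analysis = truncated SVD; each retained singular
# direction plays the role of one latent category.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 5
X_k = (U[:, :k] * s[:k]) @ Vt[:k]        # best rank-k approximation
```

By the Eckart-Young theorem, the Frobenius truncation error equals the root-sum-square of the discarded singular values, so the rank-k factors capture as much of the data as any rank-k model can.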

  12. Large-scale structure of randomly jammed spheres

    NASA Astrophysics Data System (ADS)

    Ikeda, Atsushi; Berthier, Ludovic; Parisi, Giorgio

    2017-05-01

    We numerically analyze the density field of three-dimensional randomly jammed packings of monodisperse soft frictionless spherical particles, paying special attention to fluctuations occurring at large length scales. We study in detail the two-point static structure factor at low wave vectors in Fourier space. We also analyze the nature of the density field in real space by studying the large-distance behavior of the two-point pair correlation function, of density fluctuations in subsystems of increasing sizes, and of the direct correlation function. We show that such real space analysis can be greatly improved by introducing a coarse-grained density field to disentangle genuine large-scale correlations from purely local effects. Our results confirm that both Fourier and real space signatures of vanishing density fluctuations at large scale are absent, indicating that randomly jammed packings are not hyperuniform. In addition, we establish that the pair correlation function displays a surprisingly complex structure at large distances, which is however not compatible with the long-range negative correlation of hyperuniform systems but fully compatible with an analytic form for the structure factor. This implies that the direct correlation function is short ranged, as we also demonstrate directly. Our results reveal that density fluctuations in jammed packings do not follow the behavior expected for random hyperuniform materials, but display instead a more complex behavior.
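    The static structure factor central to this analysis is straightforward to compute for any periodic point configuration via S(k) = |rho_k|^2 / N at the box's allowed wavevectors. The sketch below evaluates it for Poisson (ideal-gas) points, for which S(k) is about 1 at all k; a hyperuniform configuration would instead show S(k) tending to 0 as k goes to 0, which is precisely what the abstract reports jammed packings lack. Sizes are illustrative, not a jammed packing.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy configuration: Poisson points in a periodic cube, standing in
# for the particle centres of a packing (sizes illustrative).
N, L = 2000, 10.0
pts = rng.uniform(0, L, (N, 3))

def structure_factor(points, kvecs):
    # S(k) = |sum_j exp(-i k . r_j)|^2 / N at each wavevector.
    phases = points @ kvecs.T            # shape (N, M)
    rho_k = np.exp(-1j * phases).sum(axis=0)
    return (rho_k * rho_k.conj()).real / len(points)

# Allowed wavevectors of a periodic box: k = 2*pi*n/L, n integer != 0.
ns = np.array([(i, j, k) for i in range(1, 4)
               for j in range(0, 4) for k in range(0, 4)])
kvecs = 2 * np.pi * ns / L
S = structure_factor(pts, kvecs)
```

Binning S by |k| and extrapolating to k = 0 is the standard diagnostic for (non-)hyperuniformity used in studies like this one.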

  13. A Novel Architecture of Large-scale Communication in IOT

    NASA Astrophysics Data System (ADS)

    Ma, Wubin; Deng, Su; Huang, Hongbin

    2018-03-01

    In recent years, many scholars have done a great deal of research on the development of the Internet of Things and networked physical systems. However, few have examined the large-scale communication architecture of the IOT in detail. In fact, the non-uniform technology between IPv6 and access points has led to a lack of broad principles for large-scale communication architectures. Therefore, this paper presents the Uni-IPv6 Access and Information Exchange Method (UAIEM), a new architecture and algorithm that addresses large-scale communication in the IOT.

  14. Gravitational lenses and large scale structure

    NASA Technical Reports Server (NTRS)

    Turner, Edwin L.

    1987-01-01

    Four possible statistical tests of the large scale distribution of cosmic material are described. Each is based on gravitational lensing effects. The current observational status of these tests is also summarized.

  15. Large-Scale 1:1 Computing Initiatives: An Open Access Database

    ERIC Educational Resources Information Center

    Richardson, Jayson W.; McLeod, Scott; Flora, Kevin; Sauers, Nick J.; Kannan, Sathiamoorthy; Sincar, Mehmet

    2013-01-01

    This article details the spread and scope of large-scale 1:1 computing initiatives around the world. What follows is a review of the existing literature around 1:1 programs followed by a description of the large-scale 1:1 database. Main findings include: 1) the XO and the Classmate PC dominate large-scale 1:1 initiatives; 2) if professional…

  16. Effects of Lemon Balm on the Oxidative Stability and the Quality Properties of Hamburger Patties during Refrigerated Storage

    PubMed Central

    Lee, Hyun-Joo; Choi, Yang-Il

    2014-01-01

    This study was performed to investigate the effects of lemon balm (Melissa officinalis L.) on the quality properties and antioxidant activity of hamburger patties. Lemon balm extract (LBE) showed the highest amounts of total polyphenols (801.00 mg TAE/g DW) and flavonoids (65.05 mg RA/g DW). The IC50 value for DPPH radical scavenging of LBE was 132 μg/mL. The hamburger patties were prepared with 0% (N), 0.1% (L1), 0.5% (L2), and 1.0% (L3) lemon balm powder. The addition of lemon balm powder increased the chewiness value but did not affect the hardness, cohesiveness, and springiness values. Lemon balm powder had positive effects on the sensory evaluation of the patties. The pH of all patties decreased with longer storage. The 2-thiobarbituric acid value, volatile basic nitrogen content, and total microbial counts of hamburger patties in the L3 group were lower than those of the control (N) group. In conclusion, the L3 group showed significantly delayed lipid peroxidation compared to the other treatment groups. However, the addition of lemon balm powder had no significant influence on the proximate composition, calorie content, water-holding capacity, or cooking loss of the patties. Therefore, lemon balm might be a useful natural antioxidant additive in meat products. PMID:26761292

  17. Extreme weather: Subtropical floods and tropical cyclones

    NASA Astrophysics Data System (ADS)

    Shaevitz, Daniel A.

    Extreme weather events have a large effect on society. As such, it is important to understand these events and to project how they may change in a future, warmer climate. The aim of this thesis is to develop a deeper understanding of two types of extreme weather events: subtropical floods and tropical cyclones (TCs). In the subtropics, the latitude is high enough that quasi-geostrophic dynamics are at least qualitatively relevant, while low enough that moisture may be abundant and convection strong. Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. In the first part of this thesis, I examine the possible triggering of convection by the large-scale dynamics and investigate the coupling between the two. Specifically, two examples of extreme precipitation events in the subtropics are analyzed: the 2010 and 2014 floods of India and Pakistan and the 2015 flood of Texas and Oklahoma. I invert the quasi-geostrophic omega equation to decompose the large-scale vertical motion profile into components due to synoptic forcing and diabatic heating. Additionally, I present model results from within the Column Quasi-Geostrophic framework. A single-column model and a cloud-resolving model are forced with the large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation with input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. It is found that convection was triggered primarily by mechanically forced orographic ascent over the Himalayas during the India/Pakistan floods and by upper-level potential vorticity disturbances during the Texas/Oklahoma flood. Furthermore, a climate attribution analysis was conducted for the Texas/Oklahoma flood and it is found that anthropogenic climate change was responsible for a small amount of rainfall during the event but the
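    For a constant-coefficient caricature of the quasi-geostrophic omega equation, sigma d²w/dx² + f0² d²w/dp² = F, inversion reduces to division in spectral space, which is the essence of the decomposition described above. The single-mode sketch below uses invented coefficients and domain sizes; the thesis's actual inversion operates on full reanalysis fields with spatially varying stability.

```python
import numpy as np

# Simplified, constant-coefficient omega equation:
#   sigma * d2w/dx2 + f0^2 * d2w/dp2 = F(x, p),
# periodic in x, w = 0 at the pressure boundaries, so each Fourier
# sine mode inverts independently (all values illustrative).
sigma, f0 = 2e-6, 1e-4
Lx, Lp = 1e6, 8e4                  # horizontal extent (m), depth (Pa)
nx, npr = 64, 32

x = np.linspace(0, Lx, nx, endpoint=False)
p = np.linspace(0, Lp, npr)

# Forcing with one horizontal wavenumber k and one vertical mode m.
k = 2 * np.pi * 3 / Lx
m = 2 * np.pi / Lp
F = 1e-12 * np.cos(k * x)[:, None] * np.sin(m * p)[None, :]

# For this mode the operator is multiplication by -(sigma k^2 + f0^2 m^2),
# so the inversion is a single division.
w = -F / (sigma * k**2 + f0**2 * m**2)
```

Positive forcing yields negative w, i.e. ascent in pressure coordinates, which is the sign convention behind attributing ascent to synoptic versus diabatic forcing terms.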

  18. Spatiotemporal property and predictability of large-scale human mobility

    NASA Astrophysics Data System (ADS)

    Zhang, Hai-Tao; Zhu, Tao; Fu, Dongfei; Xu, Bowen; Han, Xiao-Pu; Chen, Duxin

    2018-04-01

    Spatiotemporal characteristics of human mobility emerging from complexity on the individual scale have been extensively studied due to their application potential in human behavior prediction and recommendation, and in the control of epidemic spreading. We collect and investigate a comprehensive data set of human activities on large geographical scales, including both website browsing and mobile tower visits. Numerical results show that the degree of activity decays as a power law, indicating that human behaviors are reminiscent of the scale-free random walks known as Lévy flights. More significantly, this study suggests that human activities on large geographical scales have specific non-Markovian characteristics, such as a two-segment power-law distribution of dwelling time and a high potential for prediction. Furthermore, a scale-free mobility model with two essential ingredients, i.e., preferential return and exploration, and a Gaussian distribution assumption on the exploration tendency parameter is proposed, which outperforms existing human mobility models under scenarios of large geographical scales.
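    The two-ingredient mechanism mentioned above (exploration plus preferential return) can be sketched in a few lines. The functional form and parameter values below follow the generic exploration/preferential-return literature and are assumptions, not the authors' exact specification:

    ```python
    import random
    from collections import Counter

    def simulate_epr(steps, rho, gamma=0.21, rng=random.Random(0)):
        """Exploration / preferential-return walk: with probability rho*S^-gamma
        visit a new location, otherwise return to a known one with probability
        proportional to its past visit count (illustrative parameterization)."""
        visits = Counter({0: 1})          # location id -> visit count
        next_id = 1
        for _ in range(steps):
            S = len(visits)               # number of distinct locations so far
            if rng.random() < rho * S ** (-gamma):
                visits[next_id] += 1      # explore a brand-new location
                next_id += 1
            else:                         # preferential return
                locs, counts = zip(*visits.items())
                visits[rng.choices(locs, weights=counts)[0]] += 1
        return visits

    # Per the paper's assumption, the exploration tendency varies across
    # individuals; draw rho from a (clipped) Gaussian.
    rng = random.Random(1)
    rho = min(0.99, max(0.01, rng.gauss(0.6, 0.1)))
    v = simulate_epr(5000, rho)
    print(len(v), "distinct locations in", sum(v.values()) - 1, "moves")
    ```
    
    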

  19. Wedge measures parallax separations...on large-scale 70-mm

    Treesearch

    Steven L. Wert; Richard J. Myhre

    1967-01-01

    A new parallax wedge (range: 1.5 to 2 inches) has been designed for use with large-scale 70-mm aerial photographs. The narrow separation of the wedge allows the user to measure the small parallax separations that are characteristic of large-scale photographs.

  20. Hamburg's Family Literacy project (FLY) in the context of international trends and recent evaluation findings

    NASA Astrophysics Data System (ADS)

    Rabkin, Gabriele; Geffers, Stefanie; Hanemann, Ulrike; Heckt, Meike; Pietsch, Marcus

    2018-05-01

    The authors of this article begin with an introduction to the holistic concept of family literacy and learning and its implementation in various international contexts, paying special attention to the key role played by the notions of lifelong learning and intergenerational learning. The international trends and experiences they outline inspired and underpinned the concept of a prize-winning Family Literacy project called FLY, which was piloted in 2004 in Hamburg, Germany. FLY aims to build bridges between preschools, schools and families by actively involving parents and other family members in children's literacy education. Its three main pillars are: (1) parents' participation in their children's classes; (2) special sessions for parents (without their children); and (3) joint out-of-school activities for teachers, parents and children. These three pillars help families from migrant backgrounds, in particular, to develop a better understanding of German schools and to play a more active role in school life. To illustrate how the FLY concept is integrated into everyday school life, the authors showcase one participating Hamburg school before presenting their own recent study on the impact of FLY in a group of Hamburg primary schools with several years of FLY experience. The results of the evaluation clearly indicate that the project's main objectives have been achieved: (1) parents of children in FLY schools feel more involved in their children's learning and are offered more opportunities to take part in school activities; (2) the quality of teaching in these schools has improved, with instruction developing a more skills-based focus due to markedly better classroom management and a more supportive learning environment; and (3) children in FLY schools are more likely to have opportunities to accumulate experience in out-of-school contexts and to be exposed to environments that stimulate and enhance their literacy skills in a tangible way.

  1. Automated sample-changing robot for solution scattering experiments at the EMBL Hamburg SAXS station X33

    PubMed Central

    Round, A. R.; Franke, D.; Moritz, S.; Huchler, R.; Fritsche, M.; Malthan, D.; Klaering, R.; Svergun, D. I.; Roessle, M.

    2008-01-01

    There is a rapidly increasing interest in the use of synchrotron small-angle X-ray scattering (SAXS) for large-scale studies of biological macromolecules in solution, and this requires an adequate means of automating the experiment. A prototype has been developed of an automated sample changer for solution SAXS, where the solutions are kept in thermostatically controlled well plates allowing for operation with up to 192 samples. The measuring protocol involves controlled loading of protein solutions and matching buffers, followed by cleaning and drying of the cell between measurements. The system was installed and tested at the X33 beamline of the EMBL, at the storage ring DORIS-III (DESY, Hamburg), where it was used by over 50 external groups during 2007. At X33, a throughput of approximately 12 samples per hour, with a failure rate of sample loading of less than 0.5%, was observed. The feedback from users indicates that the ease of use and reliability of the user operation at the beamline were greatly improved compared with the manual filling mode. The changer is controlled by a client–server-based network protocol, locally and remotely. During the testing phase, the changer was operated in an attended mode to assess its reliability and convenience. Full integration with the beamline control software, allowing for automated data collection of all samples loaded into the machine with remote control from the user, is presently being implemented. The approach reported is not limited to synchrotron-based SAXS but can also be used on laboratory and neutron sources. PMID:25484841

  2. Management of large-scale technology

    NASA Technical Reports Server (NTRS)

    Levine, A.

    1985-01-01

    Two major themes are addressed in this assessment of the management of large-scale NASA programs: (1) how a high-technology agency fared through a decade marked by a rapid expansion of funds and manpower in the first half and an almost as rapid contraction in the second; and (2) how NASA combined central planning and control with decentralized project execution.

  3. US National Large-scale City Orthoimage Standard Initiative

    USGS Publications Warehouse

    Zhou, G.; Song, C.; Benjamin, S.; Schickler, W.

    2003-01-01

    The early procedures and algorithms for national digital orthophoto generation in the National Digital Orthophoto Program (NDOP) were based on earlier USGS mapping operations, such as field control, aerotriangulation (derived in the early 1920's), the quarter-quadrangle-centered layout (3.75 minutes of longitude and latitude in geographic extent), 1:40,000 aerial photographs, and 2.5D digital elevation models. However, large-scale city orthophotos produced with these early procedures have revealed many shortcomings, e.g., ghost images, occlusion, and shadow. Thus, providing the technical base (algorithms, procedures) and experience needed for large-scale city digital orthophoto creation is essential for the upcoming national deployment of large-scale digital orthophotos and for the revision of the Standards for National Large-scale City Digital Orthophoto in the National Digital Orthophoto Program (NDOP). This paper will report our initial research results as follows: (1) High-precision 3D city DSM generation through LIDAR data processing, (2) Spatial objects/features extraction through surface material information and high-accuracy 3D DSM data, (3) 3D city model development, (4) Algorithm development for generation of DTM-based orthophoto, and DBM-based orthophoto, (5) True orthophoto generation by merging DBM-based orthophoto and DTM-based orthophoto, and (6) Automatic mosaic by optimizing and combining imagery from many perspectives.

  4. Large-scale structure in superfluid Chaplygin gas cosmology

    NASA Astrophysics Data System (ADS)

    Yang, Rongjia

    2014-03-01

    We investigate the growth of large-scale structure in the superfluid Chaplygin gas (SCG) model. Both linear and nonlinear growth, such as σ8 and the skewness S3, are discussed. We find the growth factor of SCG reduces to the Einstein-de Sitter case at early times, while it differs from the cosmological constant model (ΛCDM) case in the large-a limit. We also find there will be more structure growth on large scales in the SCG scenario than in ΛCDM, and that the variations of σ8 and S3 between SCG and ΛCDM cannot be discriminated.

  5. Geospatial Optimization of Siting Large-Scale Solar Projects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Macknick, Jordan; Quinby, Ted; Caulfield, Emmet

    2014-03-01

    Recent policy and economic conditions have encouraged a renewed interest in developing large-scale solar projects in the U.S. Southwest. However, siting large-scale solar projects is complex. In addition to the quality of the solar resource, solar developers must take into consideration many environmental, social, and economic factors when evaluating a potential site. This report describes a proof-of-concept, Web-based Geographical Information Systems (GIS) tool that evaluates multiple user-defined criteria in an optimization algorithm to inform discussions and decisions regarding the locations of utility-scale solar projects. Existing siting recommendations for large-scale solar projects from governmental and non-governmental organizations are not consistent with each other, are often not transparent in their methods, and do not take into consideration the differing priorities of stakeholders. The siting assistance GIS tool we have developed improves upon the existing siting guidelines by being user-driven, transparent, interactive, capable of incorporating multiple criteria, and flexible. This work provides the foundation for a dynamic siting assistance tool that can greatly facilitate siting decisions among multiple stakeholders.
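    A weighted-sum score over normalized criteria layers is a plausible minimal core for the kind of multi-criteria evaluation the report describes; the layer names, weights, and data below are hypothetical:

    ```python
    import numpy as np

    def site_scores(layers, weights):
        """Weighted-sum suitability score per grid cell.
        layers: dict name -> 2-D array normalized to [0, 1] (1 = most suitable).
        weights: dict name -> user-defined weight."""
        w_sum = sum(weights.values())
        return sum(weights[k] * layers[k] for k in weights) / w_sum

    rng = np.random.default_rng(0)
    layers = {
        "solar_resource": rng.random((4, 4)),
        "grid_proximity": rng.random((4, 4)),
        "env_sensitivity": 1 - rng.random((4, 4)),  # already inverted: 1 = low impact
    }
    weights = {"solar_resource": 0.5, "grid_proximity": 0.3, "env_sensitivity": 0.2}
    score = site_scores(layers, weights)
    best = np.unravel_index(np.argmax(score), score.shape)
    print("best cell:", best)
    ```

    Letting users edit the weights interactively is what makes such a tool "user-driven"; the scoring itself stays trivially cheap.
    
    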

  6. Critical Issues in Large-Scale Assessment: A Resource Guide.

    ERIC Educational Resources Information Center

    Redfield, Doris

    The purpose of this document is to provide practical guidance and support for the design, development, and implementation of large-scale assessment systems that are grounded in research and best practice. Information is included about existing large-scale testing efforts, including national testing programs, state testing programs, and…

  7. State of the Art in Large-Scale Soil Moisture Monitoring

    NASA Technical Reports Server (NTRS)

    Ochsner, Tyson E.; Cosh, Michael Harold; Cuenca, Richard H.; Dorigo, Wouter; Draper, Clara S.; Hagimoto, Yutaka; Kerr, Yan H.; Larson, Kristine M.; Njoku, Eni Gerald; Small, Eric E.; hide

    2013-01-01

    Soil moisture is an essential climate variable influencing land atmosphere interactions, an essential hydrologic variable impacting rainfall runoff processes, an essential ecological variable regulating net ecosystem exchange, and an essential agricultural variable constraining food security. Large-scale soil moisture monitoring has advanced in recent years creating opportunities to transform scientific understanding of soil moisture and related processes. These advances are being driven by researchers from a broad range of disciplines, but this complicates collaboration and communication. For some applications, the science required to utilize large-scale soil moisture data is poorly developed. In this review, we describe the state of the art in large-scale soil moisture monitoring and identify some critical needs for research to optimize the use of increasingly available soil moisture data. We review representative examples of 1) emerging in situ and proximal sensing techniques, 2) dedicated soil moisture remote sensing missions, 3) soil moisture monitoring networks, and 4) applications of large-scale soil moisture measurements. Significant near-term progress seems possible in the use of large-scale soil moisture data for drought monitoring. Assimilation of soil moisture data for meteorological or hydrologic forecasting also shows promise, but significant challenges related to model structures and model errors remain. Little progress has been made yet in the use of large-scale soil moisture observations within the context of ecological or agricultural modeling. Opportunities abound to advance the science and practice of large-scale soil moisture monitoring for the sake of improved Earth system monitoring, modeling, and forecasting.

  8. Nonlinear Generation of shear flows and large scale magnetic fields by small scale turbulence in the ionosphere

    NASA Astrophysics Data System (ADS)

    Aburjania, G.

    2009-04-01

    EGU General Assembly 2009, abstract EGU2009-233.

  9. Risk Assessment of Escherichia coli O157 illness from consumption of hamburgers in the United States made from Australian manufacturing beef.

    PubMed

    Kiermeier, Andreas; Jenson, Ian; Sumner, John

    2015-01-01

    We analyze the risk of contracting illness due to the consumption in the United States of hamburgers contaminated with enterohemorrhagic Escherichia coli (EHEC) of serogroup O157 produced from manufacturing beef imported from Australia. We have used a novel approach for estimating risk by using the prevalence and concentration estimates of E. coli O157 in lots of beef that were withdrawn from the export chain following detection of the pathogen. For the purpose of the present assessment, it was assumed that no product is removed from the supply chain following testing. This, together with a number of additional conservative assumptions, leads to an overestimation of E. coli O157-associated illness attributable to the consumption of ground beef patties manufactured only from Australian beef. We predict 49.6 illnesses (95% CI: 0.0-148.6) from the 2.46 billion hamburgers made from 155,000 t of Australian manufacturing beef exported to the United States in 2012. All of these illnesses were due to undercooking in the home, and less than one illness is predicted from consumption of hamburgers cooked to a temperature of 68 °C in quick-service restaurants. © 2014 Society for Risk Analysis.
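    The general shape of such a calculation can be illustrated with a toy Monte Carlo: sample contamination and cooking reduction per serving, apply a dose-response curve, and average. Every distribution and parameter below is an illustrative placeholder, not a value from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000                                           # simulated servings

    conc = rng.lognormal(mean=-4.0, sigma=1.5, size=N)    # CFU/g in raw patty (assumed)
    serving = 100.0                                       # g per patty (assumed)
    log_reduction = rng.normal(5.0, 1.0, size=N).clip(0)  # cooking kill, log10 (assumed)
    dose = conc * serving * 10.0 ** (-log_reduction)      # surviving CFU ingested

    r = 0.00156                                           # exponential dose-response slope (assumed)
    p_ill = 1.0 - np.exp(-r * dose)                       # P(illness) per serving
    print("illnesses per million servings:", 1e6 * p_ill.mean())
    ```

    The conservative assumptions the authors describe (e.g. no lots removed after testing) would enter here as choices of distribution, shifting the estimate upward.
    
    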

  10. Large-scale Labeled Datasets to Fuel Earth Science Deep Learning Applications

    NASA Astrophysics Data System (ADS)

    Maskey, M.; Ramachandran, R.; Miller, J.

    2017-12-01

    Deep learning has revolutionized computer vision and natural language processing, with various algorithms scaled using high-performance computing. However, generic large-scale labeled datasets such as ImageNet are the fuel that drives the impressive accuracy of deep learning results. Large-scale labeled datasets already exist in domains such as medical science, but creating them in the Earth science domain is a challenge. While there are ways to apply deep learning using limited labeled datasets, there is a need in the Earth sciences for creating large-scale labeled datasets for benchmarking and scaling deep learning applications. At the NASA Marshall Space Flight Center, we are using deep learning for a variety of Earth science applications where we have encountered the need for large-scale labeled datasets. We will discuss our approaches for creating such datasets and why these datasets are just as valuable as deep learning algorithms. We will also describe successful usage of these large-scale labeled datasets with our deep learning based applications.

  11. Large-Scale Coherent Vortex Formation in Two-Dimensional Turbulence

    NASA Astrophysics Data System (ADS)

    Orlov, A. V.; Brazhnikov, M. Yu.; Levchenko, A. A.

    2018-04-01

    The evolution of a vortex flow excited by an electromagnetic technique in a thin layer of a conducting liquid was studied experimentally. Small-scale vortices, excited at the pumping scale, merge with time due to the nonlinear interaction and produce large-scale structures—the inverse energy cascade is formed. The dependence of the energy spectrum in the developed inverse cascade is well described by the Kraichnan law k -5/3. At large scales, the inverse cascade is limited by cell sizes, and a large-scale coherent vortex flow is formed, which occupies almost the entire area of the experimental cell. The radial profile of the azimuthal velocity of the coherent vortex immediately after the pumping was switched off has been established for the first time. Inside the vortex core, the azimuthal velocity grows linearly along a radius and reaches a constant value outside the core, which agrees well with the theoretical prediction.

  12. Robust regression for large-scale neuroimaging studies.

    PubMed

    Fritsch, Virgile; Da Mota, Benoit; Loth, Eva; Varoquaux, Gaël; Banaschewski, Tobias; Barker, Gareth J; Bokde, Arun L W; Brühl, Rüdiger; Butzek, Brigitte; Conrod, Patricia; Flor, Herta; Garavan, Hugh; Lemaitre, Hervé; Mann, Karl; Nees, Frauke; Paus, Tomas; Schad, Daniel J; Schümann, Gunter; Frouin, Vincent; Poline, Jean-Baptiste; Thirion, Bertrand

    2015-05-01

    Multi-subject datasets used in neuroimaging group studies have a complex structure, as they exhibit non-stationary statistical properties across regions and display various artifacts. While studies with small sample sizes can rarely be shown to deviate from standard hypotheses (such as the normality of the residuals) due to the poor sensitivity of normality tests with low degrees of freedom, large-scale studies (e.g. >100 subjects) exhibit more obvious deviations from these hypotheses and call for more refined models for statistical inference. Here, we demonstrate the benefits of robust regression as a tool for analyzing large neuroimaging cohorts. First, we use an analytic test based on robust parameter estimates; based on simulations, this procedure is shown to provide an accurate statistical control without resorting to permutations. Second, we show that robust regression yields more detections than standard algorithms using as an example an imaging genetics study with 392 subjects. Third, we show that robust regression can avoid false positives in a large-scale analysis of brain-behavior relationships with over 1500 subjects. Finally we embed robust regression in the Randomized Parcellation Based Inference (RPBI) method and demonstrate that this combination further improves the sensitivity of tests carried out across the whole brain. Altogether, our results show that robust procedures provide important advantages in large-scale neuroimaging group studies. Copyright © 2015 Elsevier Inc. All rights reserved.
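    A common form of robust regression is M-estimation with Huber weights, fit by iteratively reweighted least squares; the sketch below is a generic implementation of that idea, not the paper's exact estimator or test procedure:

    ```python
    import numpy as np

    def huber_irls(X, y, delta=1.345, n_iter=50):
        """Robust linear regression via iteratively reweighted least squares
        with Huber weights and a MAD-based robust scale estimate."""
        beta = np.linalg.lstsq(X, y, rcond=None)[0]      # OLS start
        for _ in range(n_iter):
            r = y - X @ beta
            s = np.median(np.abs(r)) / 0.6745 + 1e-12    # robust scale (MAD)
            u = np.abs(r / s)
            w = np.where(u <= delta, 1.0, delta / u)     # Huber weights
            beta = np.linalg.lstsq(X * w[:, None] ** 0.5,
                                   y * w ** 0.5, rcond=None)[0]
        return beta

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 200)
    y = 2.0 * x + 0.5 + rng.normal(0, 0.1, 200)
    y[:20] += 8.0                                        # 10% gross outliers
    X = np.column_stack([x, np.ones_like(x)])
    print("robust fit:", huber_irls(X, y))               # down-weights the outliers
    ```

    On contaminated data like this, the Huber fit stays close to the generating coefficients while plain OLS is pulled toward the outliers, which is the qualitative advantage the abstract reports at cohort scale.
    
    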

  13. Intensive agriculture erodes β-diversity at large scales.

    PubMed

    Karp, Daniel S; Rominger, Andrew J; Zook, Jim; Ranganathan, Jai; Ehrlich, Paul R; Daily, Gretchen C

    2012-09-01

    Biodiversity is declining from unprecedented land conversions that replace diverse, low-intensity agriculture with vast expanses under homogeneous, intensive production. Despite documented losses of species richness, consequences for β-diversity, changes in community composition between sites, are largely unknown, especially in the tropics. Using a 10-year data set on Costa Rican birds, we find that low-intensity agriculture sustained β-diversity across large scales on a par with forest. In high-intensity agriculture, low local (α) diversity inflated β-diversity as a statistical artefact. Therefore, at small spatial scales, intensive agriculture appeared to retain β-diversity. Unlike in forest or low-intensity systems, however, high-intensity agriculture also homogenised vegetation structure over large distances, thereby decoupling the fundamental ecological pattern of bird communities changing with geographical distance. This ~40% decline in species turnover indicates a significant decline in β-diversity at large spatial scales. These findings point the way towards multi-functional agricultural systems that maintain agricultural productivity while simultaneously conserving biodiversity. © 2012 Blackwell Publishing Ltd/CNRS.

  14. Tools for Large-Scale Mobile Malware Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bierma, Michael

    Analyzing mobile applications for malicious behavior is an important area of research, and is made difficult, in part, by the increasingly large number of applications available for the major operating systems. There are currently over 1.2 million apps available in both the Google Play and Apple App stores (the respective official marketplaces for the Android and iOS operating systems) [1, 2]. Our research provides two large-scale analysis tools to aid in the detection and analysis of mobile malware. The first tool we present, Andlantis, is a scalable dynamic analysis system capable of processing over 3000 Android applications per hour. Traditionally, Android dynamic analysis techniques have been relatively limited in scale due to the computational resources required to emulate the full Android system to achieve accurate execution. Andlantis is the most scalable Android dynamic analysis framework to date, and is able to collect valuable forensic data, which helps reverse-engineers and malware researchers identify and understand anomalous application behavior. We discuss the results of running 1261 malware samples through the system, and provide examples of malware analysis performed with the resulting data. While techniques exist to perform static analysis on a large number of applications, large-scale analysis of iOS applications has been relatively small scale due to the closed nature of the iOS ecosystem, and the difficulty of acquiring applications for analysis. The second tool we present, iClone, addresses the challenges associated with iOS research in order to detect application clones within a dataset of over 20,000 iOS applications.

  15. Stability of large-scale systems.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1972-01-01

    The purpose of this paper is to present the results obtained in stability study of large-scale systems based upon the comparison principle and vector Liapunov functions. The exposition is essentially self-contained, with emphasis on recent innovations which utilize explicit information about the system structure. This provides a natural foundation for the stability theory of dynamic systems under structural perturbations.

  16. [Transfer and Implementation of Innovative Awareness and Education Measures, e-Mental Health and Care Models in psychenet - Hamburg Network for Mental Health].

    PubMed

    Lambert, Martin; Härter, Martin; Brandes, Andreas; Hillebrandt, Bernd; Schlüter, Catarina; Quante, Susanne

    2015-07-01

    The Hamburg Network for Mental Health belongs to the healthcare regions in Germany, funded by the Federal Ministry of Education and Research from 2011 to 2015. More than 330 partners from research, health care, health industry and government are promoting innovative health care models and products to improve mental health care in Hamburg. The main objectives comprise the sustained implementation of the Network itself and of successful health care models and products. The article describes current and future implementation possibilities and the present state of the implementation process. © Georg Thieme Verlag KG Stuttgart · New York.

  17. Real-time simulation of large-scale floods

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.

    2016-08-01

    Given the complexity of real-time water conditions, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability, and an adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
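    A toy 1-D analogue of such a scheme, with a Rusanov (local Lax-Friedrichs) flux and a simple wet/dry clamp, looks like this. The authors' model is 2-D and unstructured; everything below is an illustrative reduction, not their code:

    ```python
    import numpy as np

    g, H_DRY = 9.81, 1e-6   # gravity; wet/dry threshold (illustrative)

    def step(h, hu, dx, dt):
        """One finite-volume update of the 1-D shallow water equations with a
        Rusanov flux, a wet/dry clamp, and periodic boundaries."""
        u = np.where(h > H_DRY, hu / np.maximum(h, H_DRY), 0.0)
        U = np.stack([h, hu])
        F = np.stack([hu, hu * u + 0.5 * g * h**2])
        c = np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))   # wave speed
        # interface i+1/2 between cell i and i+1 (periodic wrap via roll)
        Ur, Fr, cr = np.roll(U, -1, 1), np.roll(F, -1, 1), np.roll(c, -1)
        a = np.maximum(c, cr)
        Fhat = 0.5 * (F + Fr) - 0.5 * a * (Ur - U)
        Unew = U - dt / dx * (Fhat - np.roll(Fhat, 1, 1))
        hn, hun = Unew
        hun = np.where(hn > H_DRY, hun, 0.0)              # dry cells carry no momentum
        return np.maximum(hn, 0.0), hun

    # Dam-break-like initial state; mass is conserved with periodic boundaries.
    n = 200
    h = np.where(np.arange(n) < n // 2, 2.0, 1.0)
    hu = np.zeros(n)
    mass0 = h.sum()
    for _ in range(100):
        h, hu = step(h, hu, dx=1.0, dt=0.02)
    assert abs(h.sum() - mass0) < 1e-9
    ```

    Conservation of mass under the telescoping flux sum is the robustness property a real-time flood model must preserve even at wet/dry fronts.
    
    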

  18. Pitch and Harmony in György Ligeti's "Hamburg Concerto" and "Syzygy" for String Quartet

    NASA Astrophysics Data System (ADS)

    Corey, Charles

    The analysis component of this dissertation focuses on intricate and complex pitch relationships in György Ligeti's last work, the Hamburg Concerto. This piece uses two distinct tuning systems---twelve tone equal temperament and just intonation---throughout its seven movements. Often, these two systems are used simultaneously, creating complex harmonic relationships. This combination allows Ligeti to exploit the unique features of each system and explore their relationships to each other. Ligeti's just intonation in the Hamburg Concerto comes mainly from the five French horns, who are instructed to keep their hands out of the bell to allow the instrument to sound its exact harmonics. The horns themselves, however, are tuned to varying fundamentals, creating a constantly changing series of just-intoned pitches anchored above an equal-tempered bass. This method of generating just-intoned intervals adds a second layer to the relationship between equal temperament and just intonation. This paper focuses on creating ways to understand this relationship, and describing the ramifications of these tunings as they unfold throughout the piece. Ligeti very carefully crafts this work in a way that creates a balance between the systems. Research done at the Paul Sacher Stiftung has uncovered a significant collection of errors in the published score. Clearing up these discrepancies allows for a much more accurate and more informed analysis. Throughout this dissertation, mistakes are corrected, and several aspects of the score are clarified. The tuning systems are described, and a likely tuning scheme for the horns is posited. (The analytical component of the dissertation delves into the many varying intervals which all fit into one interval class---a feature that is best explored when two distinct tuning systems are juxtaposed.) A language is created herein to better understand these pitch relationships that fit neither into equal temperament nor just intonation.
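    The tension between the horns' natural harmonics and equal temperament can be quantified directly: the n-th harmonic lies 1200·log₂(n) cents above the fundamental, and its deviation from the nearest equal-tempered pitch gives the familiar "out-of-tune" partials (e.g. the flat 7th and 11th harmonics). A short sketch of that arithmetic:

    ```python
    import math

    def harmonic_cents(n):
        """Cents of the n-th harmonic above the fundamental, and its deviation
        from the nearest 12-tone equal-tempered pitch."""
        cents = 1200 * math.log2(n)
        dev = cents - 100 * round(cents / 100)
        return cents, dev

    # A natural horn sounds just intervals; the deviation column is what the
    # juxtaposition of the two tuning systems in the concerto exposes.
    for n in range(2, 14):
        cents, dev = harmonic_cents(n)
        print(f"harmonic {n:2d}: {cents:7.1f} cents, {dev:+6.1f} from 12-TET")
    ```

    Retuning the horns' fundamentals, as the piece does, shifts this whole ladder of just-intoned pitches against the fixed equal-tempered bass.
    
    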

  19. The transforming perception of a regional geohazard between coastal defence and mediated discourse on global warming: Storm surges in Hamburg, Germany

    NASA Astrophysics Data System (ADS)

    Neverla, I.; Lüthje, C.

    2010-03-01

    The term regional geohazard is used for a major geophysical risk which can lead to a natural disaster, with effects confined to a specific region. It is expected, but still not proven, that global warming will intensify weather extremes and thus increase the number of regional geohazards. Regional geohazards are not dangerous per se, but from a human perspective certain weather and nature extremes are considered dangerous because they impose damage on people and their belongings. The media therefore often call them 'natural disasters', and indeed it seems to be a 'must', according to the theory and practice of news selection, that media report on any natural disaster that occurs in their region. Moreover, media even report on geohazards in other regions as soon as these events appear to have any general impact. The major geophysical risk along the coast of the North Sea is storm surges. A long list of historical disasters has deeply engraved the ubiquity of this hazard into the collective memory and habitus of the local population. Not only the coastal region but also the megacity of Hamburg is exposed to this danger. Hamburg is the second-largest city in Germany and the sixth-largest city in the European Union; the Hamburg Metropolitan Region has more than 4.3 million inhabitants. The estuary of the river Elbe extends from Cuxhaven (on the coast) to Hamburg, a distance of about 130 km. Hamburg has often been subject to storm surges with significant damage, but after the storm flood of 1855 no severe storm surge occurred for more than 100 years, until 1962. The Big Flood in the night of February 16-17, 1962 destroyed the homes of about 60,000 people. The death toll amounted to 315 in the city of Hamburg, where the storm surge had a traumatic impact and was followed by political decisions driven by the belief in technological solutions. After 1962 massive investments into the coastal defence were made and dikes

  20. Large scale anomalies in the microwave background: causation and correlation.

    PubMed

    Aslanyan, Grigor; Easther, Richard

    2013-12-27

    Most treatments of large scale anomalies in the microwave sky are a posteriori, with unquantified look-elsewhere effects. We contrast these with physical models of specific inhomogeneities in the early Universe which can generate these apparent anomalies. Physical models predict correlations between candidate anomalies and the corresponding signals in polarization and large scale structure, reducing the impact of cosmic variance. We compute the apparent spatial curvature associated with large-scale inhomogeneities and show that it is typically small, allowing for a self-consistent analysis. As an illustrative example we show that a single large plane wave inhomogeneity can contribute to low-l mode alignment and odd-even asymmetry in the power spectra and the best-fit model accounts for a significant part of the claimed odd-even asymmetry. We argue that this approach can be generalized to provide a more quantitative assessment of potential large scale anomalies in the Universe.

  1. Effects of biasing on the galaxy power spectrum at large scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran Jimenez, Jose; Departamento de Fisica Teorica, Universidad Complutense de Madrid, 28040, Madrid; Durrer, Ruth

    2011-05-15

    In this paper we study the effect of biasing on the power spectrum at large scales. We show that even though nonlinear biasing does introduce a white noise contribution on large scales, the P(k) ∝ k^n behavior of the matter power spectrum on large scales may still be visible above the white noise for about one decade. We show that the Kaiser biasing scheme, which leads to linear bias of the correlation function on large scales, also generates a linear bias of the power spectrum on rather small scales. This is a consequence of the divergence on small scales of the pure Harrison-Zeldovich spectrum. However, biasing becomes k dependent if we damp the underlying power spectrum on small scales. We also discuss the effect of biasing on the baryon acoustic oscillations.
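    The claim that the P(k) ∝ k^n shape can remain visible above a constant white-noise term can be illustrated numerically; the amplitudes below are hypothetical, chosen only to show how the crossing scale follows from the two terms:

    ```python
    import numpy as np

    # Illustrative numbers (not from the paper): a Harrison-Zeldovich-like
    # spectrum P(k) = A k^n with n = 1, a linear bias b^2, plus a constant
    # white-noise term N from nonlinear biasing.
    A, n, b2, N = 2.0e4, 1.0, 1.2, 1.0e3

    k = np.logspace(-4, 0, 400)               # wavenumber grid, h/Mpc
    P_signal = b2 * A * k**n
    visible = k[P_signal > N]                 # where the k^n shape beats the noise
    k_min = visible.min()                     # crossing scale, approx N / (b^2 A)
    print(f"k^n term dominates for k > {k_min:.2e} h/Mpc")
    ```

    Whether that window spans a full decade then depends on where small-scale damping of the underlying spectrum cuts the signal off.
    
    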

  2. Antioxidant-rich spice added to hamburger meat during cooking results in reduced meat, plasma, and urine malondialdehyde concentrations

    PubMed Central

    Li, Zhaoping; Henning, Susanne M; Zhang, Yanjun; Zerlin, Alona; Li, Luyi; Gao, Kun; Lee, Ru-Po; Karp, Hannah; Thames, Gail; Bowerman, Susan

    2010-01-01

    Background: Emerging science has shown the effect of oxidation products and inflammation on atherogenesis and carcinogenesis. Cooking hamburger meat can promote the formation of malondialdehyde that can be absorbed after ingestion. Objective: We studied the effect of an antioxidant spice mixture on malondialdehyde formation while cooking hamburger meat and its effects on plasma and urinary malondialdehyde concentrations. Design: Eleven healthy volunteers consumed 2 kinds of burgers in a randomized order: one burger was seasoned with a spice blend, and one burger was not seasoned with the spice blend. The production of malondialdehyde in burgers and malondialdehyde concentrations in plasma and urine after ingestion were measured by HPLC. Results: Rosmarinic acid from oregano was monitored to assess the effect of cooking on spice antioxidant content. Forty percent (19 mg) of the added rosmarinic acid remained in the spiced burger (SB) after cooking. There was a 71% reduction in the malondialdehyde concentration (mean ± SD: 0.52 ± 0.02 μmol/250 g) in the meat of the SBs compared with the malondialdehyde concentration (1.79 ± 0.17 μmol/250 g) in the meat of the control burgers (CBs). The plasma malondialdehyde concentration increased significantly in the CB group as a change from baseline (P = 0.026). There was a significant time-trend difference (P = 0.013) between the 2 groups. Urinary malondialdehyde concentrations (μmol/g creatinine) decreased by 49% (P = 0.021) in subjects consuming the SBs compared with subjects consuming the CBs. Conclusions: The overall effect of adding the spice mixture to hamburger meat before cooking was a reduction in malondialdehyde concentrations in the meat, plasma, and urine after ingestion. Therefore, cooking hamburgers with a polyphenol-rich spice mixture can significantly decrease the concentration of malondialdehyde, which suggests potential health benefits for atherogenesis and carcinogenesis. This trial was registered at

  3. Statistical Measures of Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Vogeley, Michael; Geller, Margaret; Huchra, John; Park, Changbom; Gott, J. Richard

    1993-12-01

To quantify clustering in the large-scale distribution of galaxies and to test theories for the formation of structure in the universe, we apply statistical measures to the CfA Redshift Survey. This survey is complete to m_B(0) = 15.5 over two contiguous regions which cover one-quarter of the sky and include ~11,000 galaxies. The salient features of these data are voids with diameter 30-50 h^-1 Mpc and coherent dense structures with a scale ~100 h^-1 Mpc. Comparison with N-body simulations rules out the "standard" CDM model (Omega = 1, b = 1.5, sigma_8 = 1) at the 99% confidence level because this model has insufficient power on scales lambda > 30 h^-1 Mpc. An unbiased open universe CDM model (Omega h = 0.2) and a biased CDM model with non-zero cosmological constant (Omega h = 0.24, lambda_0 = 0.6) match the observed power spectrum. The amplitude of the power spectrum depends on the luminosity of galaxies in the sample; bright (L > L*) galaxies are more strongly clustered than faint galaxies. The paucity of bright galaxies in low-density regions may explain this dependence. To measure the topology of large-scale structure, we compute the genus of isodensity surfaces of the smoothed density field. On scales in the "non-linear" regime, <= 10 h^-1 Mpc, the high- and low-density regions are multiply-connected over a broad range of density threshold, as in a filamentary net. On smoothing scales > 10 h^-1 Mpc, the topology is consistent with statistics of a Gaussian random field. Simulations of CDM models fail to produce the observed coherence of structure on non-linear scales (>95% confidence level). The underdensity probability (the frequency of regions with density contrast delta rho/rho = -0.8) depends strongly on the luminosity of galaxies; underdense regions are significantly more common (>2 sigma) in bright (L > L*) galaxy samples than in samples which include fainter galaxies.
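For reference, the Gaussian random field against which the survey topology is compared has a genus curve of the standard analytic form g(ν) ∝ (1 − ν²)exp(−ν²/2), where ν is the density threshold in units of the standard deviation. A minimal sketch; the amplitude, which depends on the field's power spectrum, is left as a free parameter:

```python
import numpy as np

def gaussian_genus(nu, amplitude=1.0):
    """Genus (per unit volume) of isodensity surfaces of a Gaussian
    random field at threshold nu, in units of the field's sigma.
    The amplitude depends on the power spectrum of the field."""
    return amplitude * (1.0 - nu**2) * np.exp(-nu**2 / 2.0)

# Sample the curve over a typical threshold range
nu = np.linspace(-3.0, 3.0, 121)
g = gaussian_genus(nu)
```

Positive genus near the median contour (|ν| < 1) indicates sponge-like, multiply-connected topology; negative genus at high |ν| indicates isolated clusters or voids, which is the qualitative behavior the abstract's smoothed-field measurements are tested against.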

  4. Cosmic strings and the large-scale structure

    NASA Technical Reports Server (NTRS)

    Stebbins, Albert

    1988-01-01

    A possible problem for cosmic string models of galaxy formation is presented. If very large voids are common and if loop fragmentation is not much more efficient than presently believed, then it may be impossible for string scenarios to produce the observed large-scale structure with Omega sub 0 = 1 and without strong environmental biasing.

  5. Coronal hole evolution by sudden large scale changes

    NASA Technical Reports Server (NTRS)

    Nolte, J. T.; Gerassimenko, M.; Krieger, A. S.; Solodyna, C. V.

    1978-01-01

    Sudden shifts in coronal-hole boundaries observed by the S-054 X-ray telescope on Skylab between May and November, 1973, within 1 day of CMP of the holes, at latitudes not exceeding 40 deg, are compared with the long-term evolution of coronal-hole area. It is found that large-scale shifts in boundary locations can account for most if not all of the evolution of coronal holes. The temporal and spatial scales of these large-scale changes imply that they are the results of a physical process occurring in the corona. It is concluded that coronal holes evolve by magnetic-field lines' opening when the holes are growing, and by fields' closing as the holes shrink.

  6. New probes of Cosmic Microwave Background large-scale anomalies

    NASA Astrophysics Data System (ADS)

    Aiola, Simone

    Fifty years of Cosmic Microwave Background (CMB) data played a crucial role in constraining the parameters of the LambdaCDM model, where Dark Energy, Dark Matter, and Inflation are the three most important pillars not yet understood. Inflation prescribes an isotropic universe on large scales, and it generates spatially-correlated density fluctuations over the whole Hubble volume. CMB temperature fluctuations on scales bigger than a degree in the sky, affected by modes on super-horizon scale at the time of recombination, are a clean snapshot of the universe after inflation. In addition, the accelerated expansion of the universe, driven by Dark Energy, leaves a hardly detectable imprint in the large-scale temperature sky at late times. Such fundamental predictions have been tested with current CMB data and found to be in tension with what we expect from our simple LambdaCDM model. Is this tension just a random fluke or a fundamental issue with the present model? In this thesis, we present a new framework to probe the lack of large-scale correlations in the temperature sky using CMB polarization data. Our analysis shows that if a suppression in the CMB polarization correlations is detected, it will provide compelling evidence for new physics on super-horizon scale. To further analyze the statistical properties of the CMB temperature sky, we constrain the degree of statistical anisotropy of the CMB in the context of the observed large-scale dipole power asymmetry. We find evidence for a scale-dependent dipolar modulation at 2.5sigma. To isolate late-time signals from the primordial ones, we test the anomalously high Integrated Sachs-Wolfe effect signal generated by superstructures in the universe. We find that the detected signal is in tension with the expectations from LambdaCDM at the 2.5sigma level, which is somewhat smaller than what has been previously argued. To conclude, we describe the current status of CMB observations on small scales, highlighting the

  7. Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations

    NASA Astrophysics Data System (ADS)

    Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto

    2018-04-01

Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10^4, and the radius ratio η = r_i/r_o is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Re_τ ≈ 500. Four rotation ratios from Rot = -0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of c_s = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
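The static Smagorinsky closure referenced here models the subgrid stresses through an eddy viscosity ν_t = (c_s Δ)² |S|, with |S| = sqrt(2 S_ij S_ij) the resolved strain-rate magnitude. A minimal finite-difference sketch for a 2D velocity slice; the grid layout and gradient stencil are illustrative assumptions, not the solver used in the paper:

```python
import numpy as np

def smagorinsky_nu_t(u, v, dx, cs=0.1):
    """Static Smagorinsky eddy viscosity nu_t = (cs*dx)^2 * |S| for a
    2D velocity field sampled on a uniform grid, indexed as u[y, x].
    Illustrative sketch only; a real LES filters in 3D."""
    dudy, dudx = np.gradient(u, dx)       # derivatives along y, then x
    dvdy, dvdx = np.gradient(v, dx)
    s11, s22 = dudx, dvdy                 # normal strain components
    s12 = 0.5 * (dudy + dvdx)             # shear strain component
    strain = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * dx) ** 2 * strain
```

The "over-damped" LES of the abstract corresponds to simply raising cs well above the benchmarked value 0.1, which increases ν_t quadratically.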

  8. Large-scale microwave anisotropy from gravitating seeds

    NASA Technical Reports Server (NTRS)

    Veeraraghavan, Shoba; Stebbins, Albert

    1992-01-01

    Topological defects could have seeded primordial inhomogeneities in cosmological matter. We examine the horizon-scale matter and geometry perturbations generated by such seeds in an expanding homogeneous and isotropic universe. Evolving particle horizons generally lead to perturbations around motionless seeds, even when there are compensating initial underdensities in the matter. We describe the pattern of the resulting large angular scale microwave anisotropy.

  9. Preventing Large-Scale Controlled Substance Diversion From Within the Pharmacy

    PubMed Central

    Martin, Emory S.; Dzierba, Steven H.; Jones, David M.

    2013-01-01

Large-scale diversion of controlled substances (CS) from within a hospital or health system pharmacy is a rare but growing problem. It is the responsibility of pharmacy leadership to scrutinize control processes to expose weaknesses. This article reviews examples of large-scale diversion incidents and diversion techniques and provides practical strategies to stimulate enhanced CS security within the pharmacy staff. Large-scale diversion from within a pharmacy department can be averted by a pharmacist-in-charge who is informed and proactive in taking effective countermeasures. PMID:24421497

  10. Large-scale flow experiments for managing river systems

    USGS Publications Warehouse

    Konrad, Christopher P.; Olden, Julian D.; Lytle, David A.; Melis, Theodore S.; Schmidt, John C.; Bray, Erin N.; Freeman, Mary C.; Gido, Keith B.; Hemphill, Nina P.; Kennard, Mark J.; McMullen, Laura E.; Mims, Meryl C.; Pyron, Mark; Robinson, Christopher T.; Williams, John G.

    2011-01-01

    Experimental manipulations of streamflow have been used globally in recent decades to mitigate the impacts of dam operations on river systems. Rivers are challenging subjects for experimentation, because they are open systems that cannot be isolated from their social context. We identify principles to address the challenges of conducting effective large-scale flow experiments. Flow experiments have both scientific and social value when they help to resolve specific questions about the ecological action of flow with a clear nexus to water policies and decisions. Water managers must integrate new information into operating policies for large-scale experiments to be effective. Modeling and monitoring can be integrated with experiments to analyze long-term ecological responses. Experimental design should include spatially extensive observations and well-defined, repeated treatments. Large-scale flow manipulations are only a part of dam operations that affect river systems. Scientists can ensure that experimental manipulations continue to be a valuable approach for the scientifically based management of river systems.

  11. Experimental Simulations of Large-Scale Collisions

    NASA Technical Reports Server (NTRS)

    Housen, Kevin R.

    2002-01-01

    This report summarizes research on the effects of target porosity on the mechanics of impact cratering. Impact experiments conducted on a centrifuge provide direct simulations of large-scale cratering on porous asteroids. The experiments show that large craters in porous materials form mostly by compaction, with essentially no deposition of material into the ejecta blanket that is a signature of cratering in less-porous materials. The ratio of ejecta mass to crater mass is shown to decrease with increasing crater size or target porosity. These results are consistent with the observation that large closely-packed craters on asteroid Mathilde appear to have formed without degradation to earlier craters.

  12. Large Scale Underground Detectors in Europe

    NASA Astrophysics Data System (ADS)

    Katsanevas, S. K.

    2006-07-01

    The physics potential and the complementarity of the large scale underground European detectors: Water Cherenkov (MEMPHYS), Liquid Argon TPC (GLACIER) and Liquid Scintillator (LENA) is presented with emphasis on the major physics opportunities, namely proton decay, supernova detection and neutrino parameter determination using accelerator beams.

  13. Large-scale retrieval for medical image analytics: A comprehensive review.

    PubMed

    Li, Zhongyu; Zhang, Xiaofan; Müller, Henning; Zhang, Shaoting

    2018-01-01

Over the past decades, medical image analytics has been greatly facilitated by the explosion of digital imaging techniques, with huge amounts of medical images produced with ever-increasing quality and diversity. However, conventional methods for analyzing medical images have achieved limited success, as they cannot cope with such a huge amount of image data. In this paper, we review state-of-the-art approaches for large-scale medical image analysis, which are mainly based on recent advances in computer vision, machine learning and information retrieval. Specifically, we first present the general pipeline of large-scale retrieval and summarize the challenges and opportunities of medical image analytics at large scale. Then, we provide a comprehensive review of algorithms and techniques relevant to the major processes in the pipeline, including feature representation, feature indexing, searching, etc. On the basis of existing work, we introduce the evaluation protocols and multiple applications of large-scale medical image retrieval, with a variety of exploratory and diagnostic scenarios. Finally, we discuss future directions of large-scale retrieval, which can further improve the performance of medical image analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
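The "searching" stage of the retrieval pipeline can be illustrated by exact nearest-neighbor ranking over image feature vectors; production large-scale systems would substitute approximate indexing (hashing, inverted files) for the brute-force scan below. A hypothetical sketch:

```python
import numpy as np

def retrieve(query, index_feats, k=5):
    """Rank database images by cosine similarity of their feature
    vectors to a query image's features. Brute-force sketch of the
    searching stage; not a scalable index."""
    q = query / np.linalg.norm(query)
    X = index_feats / np.linalg.norm(index_feats, axis=1, keepdims=True)
    sims = X @ q                       # cosine similarity per image
    order = np.argsort(-sims)[:k]      # top-k most similar images
    return order, sims[order]
```

Feature representation (the first pipeline stage) would supply `index_feats`, e.g. descriptors or learned embeddings, one row per database image.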

  14. Large-scale magnetic fields at high Reynolds numbers in magnetohydrodynamic simulations.

    PubMed

    Hotta, H; Rempel, M; Yokoyama, T

    2016-03-25

The 11-year solar magnetic cycle shows a high degree of coherence in spite of the turbulent nature of the solar convection zone. It has been found in recent high-resolution magnetohydrodynamics simulations that the maintenance of a large-scale coherent magnetic field is difficult with small viscosity and magnetic diffusivity (≲10^12 square centimeters per second). We reproduced previous findings that indicate a reduction of the energy in the large-scale magnetic field for lower diffusivities and demonstrate the recovery of the global-scale magnetic field using unprecedentedly high resolution. We found an efficient small-scale dynamo that suppresses small-scale flows, which mimics the properties of large diffusivity. As a result, the global-scale magnetic field is maintained even in the regime of small diffusivities, that is, large Reynolds numbers. Copyright © 2016, American Association for the Advancement of Science.

  15. Prototype Vector Machine for Large Scale Semi-Supervised Learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Kwok, James T.; Parvin, Bahram

    2009-04-29

Practical data mining rarely falls exactly into the supervised learning scenario. Rather, the growing amount of unlabeled data poses a big challenge to large-scale semi-supervised learning (SSL). We note that the computational intensiveness of graph-based SSL arises largely from the manifold or graph regularization, which in turn leads to large models that are difficult to handle. To alleviate this, we propose the prototype vector machine (PVM), a highly scalable, graph-based algorithm for large-scale SSL. Our key innovation is the use of "prototype vectors" for efficient approximation of both the graph-based regularizer and the model representation. The choice of prototypes is grounded upon two important criteria: they not only perform effective low-rank approximation of the kernel matrix, but also span a model suffering the minimum information loss compared with the complete model. We demonstrate encouraging performance and appealing scaling properties of the PVM on a number of machine learning benchmark data sets.
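Prototype-based low-rank approximation of a kernel matrix is in the same spirit as the Nyström method: K ≈ K_np K_pp⁻¹ K_pn, where p indexes the prototypes. The sketch below shows only that low-rank idea, not the PVM's prototype-selection criteria or its SSL objective:

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom_approx(X, prototypes, gamma=0.5, ridge=1e-8):
    """Nystrom-style low-rank approximation of the full kernel matrix
    using a small prototype set: K ~= K_np K_pp^{-1} K_pn.
    A small ridge stabilizes the inversion."""
    Knp = rbf_kernel(X, prototypes, gamma)
    Kpp = rbf_kernel(prototypes, prototypes, gamma)
    Kpp = Kpp + ridge * np.eye(len(prototypes))
    return Knp @ np.linalg.solve(Kpp, Knp.T)
```

With m prototypes and n data points, storage drops from O(n²) to O(nm), which is the source of the scalability claimed for prototype-based methods.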

  16. Inactivation of Mycobacterium avium subsp. paratuberculosis during cooking of hamburger patties.

    PubMed

    Hammer, Philipp; Walte, Hans-Georg C; Matzen, Sönke; Hensel, Jann; Kiesner, Christian

    2013-07-01

The role of Mycobacterium avium subsp. paratuberculosis (MAP) in Crohn's disease in humans has been debated for many years. Milk and milk products have been suggested as possible vectors for transmission since the beginning of this debate, whereas recent publications show that slaughtered cattle and their carcasses, meat, and organs can also serve as reservoirs for MAP transmission. The objective of this study was to generate heat-inactivation data for MAP during the cooking of hamburger patties. Hamburger patties of lean ground beef weighing 70 and 50 g, which had been sterilized by irradiation and spiked with three different MAP strains at levels between 10² and 10⁶ CFU/ml, were cooked for 6, 5, 4, 3, and 2 min. Single-sided cooking with one flip was applied, and the temperatures within the patties were recorded by seven thermocouples. Counting of the surviving bacteria was performed by direct plating onto Herrold's egg yolk medium and by a three-vial most-probable-number method using modified Dubos medium. There was considerable variability in temperature throughout the patties during cooking. In addition, the log reduction in MAP numbers showed strong variations. In patties weighing 70 g, a considerable bacterial reduction of 4 log or more could only be achieved after 6 min of cooking. For all other cooking times, the bacterial reduction was less than 2 log. Patties weighing 50 g showed a 5-log or larger reduction after cooking times of 5 and 6 min. To determine the inactivation kinetics, a log-linear regression model was used, showing a constant decrease of MAP numbers over cooking time.
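The log-linear inactivation model used for the kinetics has the form log10 N(t) = log10 N0 − t/D, where D is the time required for a 1-log (90%) reduction. A sketch of fitting it by least squares; the synthetic counts below are illustrative, not the study's data:

```python
import numpy as np

def d_value(times_min, log10_counts):
    """Fit the log-linear inactivation model
    log10 N(t) = log10 N0 - t/D by least squares and return the
    D-value: the cooking time (min) for a 1-log reduction."""
    slope, intercept = np.polyfit(times_min, log10_counts, 1)
    return -1.0 / slope

# Illustrative survivor counts: 6-log start, ~1.2 min per log drop
t = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
logn = 6.0 - t / 1.2
```

The abstract's observation of strong variability would show up here as scatter about the fitted line; the constant slope is the "constant decrease over cooking time" it reports.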

  17. The impact of domain aspect ratio on the inverse cascade in rotationally constrained convection.

    NASA Astrophysics Data System (ADS)

    Julien, K. A.; Plumley, M.; Knobloch, E.

    2017-12-01

Rotationally constrained convective flows are characterized as buoyantly unstable flows with a primary geostrophic balance (i.e. a pointwise balance between the Coriolis and pressure gradient forces). Such flows are known to occur within planetary and stellar interiors and also within isolated regions of the world's oceans. Rapidly rotating Rayleigh-Bénard convection represents the simplest paradigm for investigations. Recent numerical studies, performed in square domains, have discovered the existence of a strong non-local inverse energy cascade that results in a box-filling dipole vortex upon which geostrophic turbulent convection resides. Utilizing the non-hydrostatic quasi-geostrophic equations, the effect of domain aspect ratio on the inverse energy cascade is explored. As the domain becomes anisotropic, it is demonstrated that the large-scale states evolve from vortical dipoles to jets. Properties of these jets will be presented and discussed.
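The geostrophic balance invoked here, f k × u = −(1/ρ)∇p, can be diagnosed directly from a pressure field: u_g = −(1/ρf) ∂p/∂y and v_g = (1/ρf) ∂p/∂x. A minimal sketch on a uniform grid; the field and parameter values are illustrative:

```python
import numpy as np

def geostrophic_wind(p, dx, dy, f=1e-4, rho=1.0):
    """Diagnose the geostrophic velocity from a pressure field p[y, x]:
    u_g = -(1/(rho*f)) dp/dy,  v_g = (1/(rho*f)) dp/dx.
    This is the pointwise Coriolis/pressure-gradient balance."""
    dpdy, dpdx = np.gradient(p, dy, dx)   # spacing per axis: y then x
    ug = -dpdy / (rho * f)
    vg = dpdx / (rho * f)
    return ug, vg
```

Note the diagnosed flow runs along isobars, not across them, which is why geostrophically balanced convection requires the non-hydrostatic quasi-geostrophic machinery the abstract mentions to couple buoyancy to vertical motion.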

  18. The impact of domain aspect ratio on the inverse cascade in rotationally constrained convection

    NASA Astrophysics Data System (ADS)

    Julien, Keith; Knobloch, Edgar; Plumley, Meredith

    2017-11-01

Rotationally constrained convective flows are characterized as buoyantly unstable flows with a primary geostrophic balance (i.e. a pointwise balance between the Coriolis and pressure gradient forces). Such flows are known to occur within planetary and stellar interiors and also within isolated regions of the world's oceans. Rapidly rotating Rayleigh-Bénard convection represents the simplest paradigm for investigations. Recent numerical studies, performed in square domains, have discovered the existence of a strong non-local inverse energy cascade that results in a box-filling dipole vortex upon which geostrophic turbulent convection resides. Utilizing the non-hydrostatic quasi-geostrophic equations, the effect of domain aspect ratio on the inverse energy cascade is explored. As the domain becomes anisotropic, it is demonstrated that the large-scale states evolve from vortical dipoles to jets. Properties of these jets will be presented and discussed.

  19. Large- to small-scale dynamo in domains of large aspect ratio: kinematic regime

    NASA Astrophysics Data System (ADS)

    Shumaylova, Valeria; Teed, Robert J.; Proctor, Michael R. E.

    2017-04-01

    The Sun's magnetic field exhibits coherence in space and time on much larger scales than the turbulent convection that ultimately powers the dynamo. In this work, we look for numerical evidence of a large-scale magnetic field as the magnetic Reynolds number, Rm, is increased. The investigation is based on the simulations of the induction equation in elongated periodic boxes. The imposed flows considered are the standard ABC flow (named after Arnold, Beltrami & Childress) with wavenumber ku = 1 (small-scale) and a modulated ABC flow with wavenumbers ku = m, 1, 1 ± m, where m is the wavenumber corresponding to the long-wavelength perturbation on the scale of the box. The critical magnetic Reynolds number R_m^{crit} decreases as the permitted scale separation in the system increases, such that R_m^{crit} ∝ [L_x/L_z]^{-1/2}. The results show that the α-effect derived from the mean-field theory ansatz is valid for a small range of Rm after which small scale dynamo instability occurs and the mean-field approximation is no longer valid. The transition from large- to small-scale dynamo is smooth and takes place in two stages: a fast transition into a predominantly small-scale magnetic energy state and a slower transition into even smaller scales. In the range of Rm considered, the most energetic Fourier component corresponding to the structure in the long x-direction has twice the length-scale of the forcing scale. The long-wavelength perturbation imposed on the ABC flow in the modulated case is not preserved in the eigenmodes of the magnetic field.

  20. Zonostrophic turbulence

    NASA Astrophysics Data System (ADS)

    Galperin, Boris; Sukoriansky, Semion; Dikovskaya, Nadejda

    2008-12-01

    Geostrophic turbulence is a flow regime attained by turbulent, rotating, stably stratified fluids in near-geostrophic balance. When a small-scale forcing is present, flows in this regime may develop an inverse energy cascade. Geostrophic turbulence has been used in geophysical fluid dynamics as a relatively simple model of the large-scale planetary and terrestrial circulations. When the meridional variation of the Coriolis parameter (or a β-effect) is taken into account, the horizontal flow symmetry breaks down giving rise to the emergence of jet flows. In a certain parameter range, a new flow regime comes to life. Its main characteristics include strongly anisotropic kinetic energy spectrum and slowly evolving systems of alternating zonal jets. This regime is a subset of geostrophic turbulence and has been coined zonostrophic turbulence; it can develop both on a β-plane and on the surface of a rotating sphere. This regime was first discovered in computer simulations but later revealed in the laboratory experiments, in the deep terrestrial oceans, and on solar giant planets where it is believed to be the primary physical mechanism responsible for the generation and maintenance of the stable systems of alternating zonal jets. The hallmarks of zonostrophic turbulence are the anisotropic inverse energy cascade and complicated interaction between turbulence and Rossby-Haurwitz waves. Addressing the goals of the conference 'Turbulent Mixing and Beyond' that took place in August 2007 in Trieste, Italy, this paper exposes the regime of zonostrophic turbulence to a wide scientific community, provides a survey of this regime, elaborates its main characteristics, offers novel approaches to describe and understand this phenomenon, and discusses its applicability as a model of the large-scale planetary and terrestrial circulations.
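The emergence of alternating jets from the anisotropic inverse cascade on a β-plane is conventionally tied to the Rhines wavenumber, k_R = sqrt(β/2U), where the β-term begins to dominate the eddy turnover. A back-of-envelope sketch; the numerical values are illustrative, roughly Jovian in magnitude, and not taken from the survey:

```python
import numpy as np

def rhines_wavenumber(U, beta):
    """Rhines wavenumber k_R = sqrt(beta / (2*U)): the scale at which
    the inverse cascade becomes anisotropic and zonal jets emerge.
    U is an rms eddy velocity (m/s); beta in 1/(m*s)."""
    return np.sqrt(beta / (2.0 * U))

# Illustrative, roughly Jovian numbers (order of magnitude only)
beta = 5e-12          # planetary vorticity gradient, 1/(m s)
U = 50.0              # rms eddy velocity, m/s
k_R = rhines_wavenumber(U, beta)
L_R = 2.0 * np.pi / k_R   # characteristic jet-spacing scale, m
```

The zonostrophic regime discussed in the abstract arises when a wide spectral gap separates this jet scale from the small-scale forcing, allowing the anisotropic inverse cascade to develop fully.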

  1. Measuring large-scale vertical motion in the atmosphere with dropsondes

    NASA Astrophysics Data System (ADS)

    Bony, Sandrine; Stevens, Bjorn

    2017-04-01

Large-scale vertical velocity modulates important processes in the atmosphere, including the formation of clouds, and constitutes a key component of the large-scale forcing of Single-Column Model simulations and Large-Eddy Simulations. Its measurement has also been a long-standing challenge for observationalists. We will show that it is possible to measure the vertical profile of large-scale wind divergence and vertical velocity from aircraft by using dropsondes. This methodology was tested in August 2016 during the NARVAL2 campaign in the lower Atlantic trades. Results will be shown for several research flights, the robustness and the uncertainty of the measurements will be assessed, and observational estimates will be compared with data from high-resolution numerical forecasts.
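Once the dropsonde array yields a profile of horizontal wind divergence D(z), the large-scale vertical velocity follows from mass continuity: w(z) = −∫₀ᶻ D(z′) dz′. A sketch in the Boussinesq (constant-density) form, with w = 0 assumed at the surface:

```python
import numpy as np

def vertical_velocity(z, div):
    """Integrate mass continuity, w(z) = -integral_0^z D(z') dz',
    to obtain large-scale vertical velocity from a divergence profile.
    Boussinesq sketch with w(z[0]) = 0; trapezoid-rule integration."""
    dw = 0.5 * (div[1:] + div[:-1]) * np.diff(z)   # layer increments
    return -np.concatenate(([0.0], np.cumsum(dw)))
```

In practice the campaign data would be handled in pressure coordinates (omega rather than w), but the column-integration structure is the same.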

  2. Asymptotics and numerics of a family of two-dimensional generalized surface quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Ohkitani, Koji

    2012-09-01

We study the generalised 2D surface quasi-geostrophic (SQG) equation, where the active scalar is given by a fractional power α of the Laplacian applied to the stream function. This includes the 2D SQG and Euler equations as special cases. Using Poincaré's successive approximation to higher α-derivatives of the active scalar, we derive a variational equation for describing perturbations in the generalized SQG equation. In particular, in the limit α → 0, an asymptotic equation is derived on a stretched time variable τ = αt, which unifies equations in the family near α = 0. The successive approximation is also discussed at the other extreme of the 2D Euler limit α = 2-0. Numerical experiments are presented for both limits. We consider whether the solution behaves in a more singular fashion, with more effective nonlinearity, when α is increased. Two competing effects are identified: the regularizing effect of a fractional inverse Laplacian (control by conservation) and cancellation by symmetry (nonlinearity depletion). Near α = 0 (complete depletion), the solution behaves in a more singular fashion as α increases. Near α = 2 (maximal control by conservation), the solution behaves in a more singular fashion as α decreases, suggesting that there may be some α in [0, 2] at which the solution behaves in the most singular manner. We also present some numerical results of the family for α = 0.5, 1, and 1.5. On the original time t, the H1 norm of θ generally grows more rapidly with increasing α. However, on the new time τ, this order is reversed. On the other hand, contour patterns for different α appear to be similar at fixed τ, even though the norms are markedly different in magnitude. Finally, point-vortex systems for the generalized SQG family are discussed to shed light on the above problems of time scale.
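The fractional power of the Laplacian relating the active scalar to the stream function is diagonal in Fourier space: mode k is multiplied by |k|^(2α). A 1D periodic sketch of that operator (the 2D case used in the SQG family is analogous, with |k| the wavevector magnitude):

```python
import numpy as np

def frac_laplacian(u, alpha):
    """Apply (-Laplacian)^alpha to a real periodic 1D field sampled on
    [0, 2*pi), computed spectrally: each Fourier mode k is multiplied
    by |k|^(2*alpha). alpha = 1 recovers the ordinary -Laplacian."""
    n = len(u)
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer wavenumbers
    return np.fft.ifft(np.abs(k) ** (2 * alpha) * np.fft.fft(u)).real
```

For a pure mode sin(mx) this returns m^(2α) sin(mx), which is how the parameter α interpolates between the Euler (α = 2, via vorticity–stream function inversion) and SQG (α = 1/2 in this convention's family) relations.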

  3. Bayesian hierarchical model for large-scale covariance matrix estimation.

    PubMed

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
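The flavor of regularizing a large covariance estimate can be illustrated with simple linear shrinkage toward the diagonal; this is a stand-in for intuition about curbing overfitting, not the paper's Bayesian hierarchical model:

```python
import numpy as np

def shrink_covariance(X, lam=0.2):
    """Regularize a large sample covariance by shrinking it toward its
    own diagonal: Sigma = (1 - lam)*S + lam*diag(S). Rows of X are
    samples, columns are variables. lam in (0, 1] guarantees a
    positive-definite estimate even when S is singular (p > n)."""
    S = np.cov(X, rowvar=False)
    return (1.0 - lam) * S + lam * np.diag(np.diag(S))
```

The hierarchical-Bayes approach in the abstract achieves a similar variance reduction by pooling information across covariance parameters through shared priors, rather than by a fixed shrinkage weight.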

  4. Absence of splash singularities for surface quasi-geostrophic sharp fronts and the Muskat problem.

    PubMed

    Gancedo, Francisco; Strain, Robert M

    2014-01-14

    In this paper, for both the sharp front surface quasi-geostrophic equation and the Muskat problem, we rule out the "splash singularity" blow-up scenario; in other words, we prove that the contours evolving from either of these systems cannot intersect at a single point while the free boundary remains smooth. Splash singularities have been shown to hold for the free boundary incompressible Euler equation in the form of the water waves contour evolution problem. Our result confirms the numerical simulations in earlier work, in which it was shown that the curvature blows up because the contours collapse at a point. Here, we prove that maintaining control of the curvature will remove the possibility of pointwise interphase collapse. Another conclusion that we provide is a better understanding of earlier work in which squirt singularities are ruled out; in this case, a positive volume of fluid between the contours cannot be ejected in finite time.

  5. Absence of splash singularities for surface quasi-geostrophic sharp fronts and the Muskat problem

    PubMed Central

    Gancedo, Francisco; Strain, Robert M.

    2014-01-01

    In this paper, for both the sharp front surface quasi-geostrophic equation and the Muskat problem, we rule out the “splash singularity” blow-up scenario; in other words, we prove that the contours evolving from either of these systems cannot intersect at a single point while the free boundary remains smooth. Splash singularities have been shown to hold for the free boundary incompressible Euler equation in the form of the water waves contour evolution problem. Our result confirms the numerical simulations in earlier work, in which it was shown that the curvature blows up because the contours collapse at a point. Here, we prove that maintaining control of the curvature will remove the possibility of pointwise interphase collapse. Another conclusion that we provide is a better understanding of earlier work in which squirt singularities are ruled out; in this case, a positive volume of fluid between the contours cannot be ejected in finite time. PMID:24347645

  6. Large Scale Deformation of the Western US Cordillera

    NASA Technical Reports Server (NTRS)

    Bennett, Richard A.

    2001-01-01

    Destructive earthquakes occur throughout the western US Cordillera (WUSC), not just within the San Andreas fault zone. But because we do not understand the present-day large-scale deformations of the crust throughout the WUSC, our ability to assess the potential for seismic hazards in this region remains severely limited. To address this problem, we are using a large collection of Global Positioning System (GPS) networks which spans the WUSC to precisely quantify present-day large-scale crustal deformations in a single uniform reference frame. Our work can roughly be divided into an analysis of the GPS observations to infer the deformation field across and within the entire plate boundary zone and an investigation of the implications of this deformation field regarding plate boundary dynamics.

  7. QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments

    NASA Astrophysics Data System (ADS)

    Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.

    2008-09-01

    QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
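The leapfrog time stepping with a Robert filter named in the model description can be sketched on a scalar ODE dx/dt = f(x); the filter coefficient γ and the forward-Euler start-up step below are illustrative choices, not QUAGMIRE's actual settings:

```python
import numpy as np

def leapfrog_robert(f, x0, dt, nsteps, gamma=0.01):
    """Leapfrog time stepping with a Robert(-Asselin) filter for
    dx/dt = f(x). The filter nudges the centre time level toward the
    average of its neighbours, damping the spurious computational
    mode that plain leapfrog supports."""
    x_prev = x0
    x_curr = x0 + dt * f(x0)                 # forward-Euler start-up
    traj = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)   # leapfrog step
        # Robert filter applied to the centre level before shifting
        x_prev = x_curr + gamma * (x_prev - 2.0 * x_curr + x_next)
        x_curr = x_next
        traj.append(x_next)
    return np.array(traj)
```

In the annulus model the same three-time-level update is applied to the gridded streamfunction/potential-vorticity fields rather than a scalar.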

  8. QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments

    NASA Astrophysics Data System (ADS)

    Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.

    2009-02-01

    QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. The model uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.

  9. Large Scale Survey Data in Career Development Research

    ERIC Educational Resources Information Center

    Diemer, Matthew A.

    2008-01-01

    Large scale survey datasets have been underutilized but offer numerous advantages for career development scholars, as they contain numerous career development constructs with large and diverse samples that are followed longitudinally. Constructs such as work salience, vocational expectations, educational expectations, work satisfaction, and…

  10. Composite and case study analyses of the large-scale environments associated with West Pacific Polar and subtropical vertical jet superposition events

    NASA Astrophysics Data System (ADS)

    Handlos, Zachary J.

    Though considerable research attention has been devoted to examination of the Northern Hemispheric polar and subtropical jet streams, relatively little has been directed toward understanding the circumstances that conspire to produce the relatively rare vertical superposition of these usually separate features. This dissertation investigates the structure and evolution of large-scale environments associated with jet superposition events in the northwest Pacific. An objective identification scheme, using NCEP/NCAR Reanalysis 1 data, is employed to identify all jet superpositions in the west Pacific (30-40°N, 135-175°E) for boreal winters (DJF) between 1979/80 and 2009/10. The analysis reveals that environments conducive to west Pacific jet superposition share several large-scale features usually associated with East Asian Winter Monsoon (EAWM) northerly cold surges, including the presence of an enhanced Hadley Cell-like circulation within the jet entrance region. It is further demonstrated that several EAWM indices are statistically significantly correlated with jet superposition frequency in the west Pacific. The life cycle of EAWM cold surges promotes interaction between tropical convection and internal jet dynamics. Low potential vorticity (PV), high-θe tropical boundary layer air, exhausted by anomalous convection in the west Pacific lower latitudes, is advected poleward towards the equatorward side of the jet in upper tropospheric isentropic layers, resulting in anomalous anticyclonic wind shear that accelerates the jet. This, along with geostrophic cold air advection in the left jet entrance region that drives the polar tropopause downward through the jet core, promotes the development of the deep, vertical PV wall characteristic of superposed jets. West Pacific jet superpositions preferentially form within an environment favoring the aforementioned characteristics regardless of EAWM seasonal strength. Post-superposition, it is shown that the west Pacific

  11. Large-Scale Weather Disturbances in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2015-11-01

    Between late autumn and early spring, the middle and high latitudes of Mars' atmosphere support strong mean thermal gradients between the tropics and poles. Observations from both the Mars Global Surveyor (MGS) and Mars Reconnaissance Orbiter (MRO) indicate that this strong baroclinicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). These extratropical weather disturbances are key components of the global circulation. Such wave-like disturbances act as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of large-scale, traveling extratropical synoptic-period disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively lifted and radiatively active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, the atmosphere is dustier during southern spring and summer). Compared to their northern-hemisphere counterparts, southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are examined. Simulations that adopt Mars' full topography, compared to simulations that utilize synthetic topographies emulating key large-scale features of the southern middle latitudes, indicate that Mars' transient barotropic/baroclinic eddies are highly influenced by the great impact basins of this hemisphere (e.g., Argyre and Hellas). The occurrence of a southern storm zone in late winter and early spring appears to be anchored to the western hemisphere via orographic influences from the Tharsis highlands, and the Argyre

  12. Properties of the Eliassen-Palm flux for planetary scale motions

    NASA Technical Reports Server (NTRS)

    Palmer, T. N.

    1982-01-01

    In an investigation of the properties of the quasi-geostrophic Eliassen-Palm (EP) flux for planetary-scale motions, particular attention is given to the relation between the EP flux divergence and the meridional flux of eddy potential vorticity, and the relations between the EP flux, group velocity, and the zonal mean refractive index in the Wentzel-Kramers-Brillouin-Jeffreys limit. This latter diagnostic has appeared in a number of different forms as that quantity whose gradient determines the refraction of group velocity paths or EP flux trajectories. The question is considered which, if any, of these forms holds for planetary scale motions. In this investigation, a planetary-scale motion is formally defined to be one for which Burger's (1958) quasigeostrophic theory is appropriate.
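
    As a reminder of the relations the abstract refers to, the quasi-geostrophic EP flux and its divergence take the standard textbook (Boussinesq, beta-plane) form; this is the generic QG result, not Palmer's planetary-scale generalization:

```latex
% Quasi-geostrophic Eliassen-Palm flux (meridional, vertical components)
\mathbf{F} \;=\; \Bigl( -\overline{u'v'},\;\; \frac{f_0\,\overline{v'\theta'}}{\partial \theta_0/\partial z} \Bigr),
\qquad
% Its divergence equals the meridional eddy flux of QG potential vorticity
\nabla \cdot \mathbf{F} \;=\; \overline{v'q'} .
```

Palmer's question is precisely how far such relations, and the associated WKBJ refractive-index diagnostics, survive when the quasi-geostrophic scaling is replaced by Burger's planetary-scale scaling.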

  13. How Large Scale Flows May Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle and play important roles in shaping the Sun's magnetic field. Differential rotation amplifies the magnetic field through its shearing action and converts poloidal field into toroidal field. Poleward meridional flow near the surface carries magnetic flux that reverses the magnetic poles at about the time of solar maximum. The deeper, equatorward meridional flow can carry magnetic flux back toward the lower latitudes where it erupts through the surface to form tilted active regions that convert toroidal fields into oppositely directed poloidal fields. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain both the differential rotation and the meridional circulation. These convective motions can also influence solar activity directly by shaping the magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  14. Optical interconnect for large-scale systems

    NASA Astrophysics Data System (ADS)

    Dress, William

    2013-02-01

    This paper presents a switchless, optical interconnect module that serves as a node in a network of identical distribution modules for large-scale systems. Thousands to millions of hosts or endpoints may be interconnected by a network of such modules, avoiding the need for multi-level switches. Several common network topologies are reviewed and their scaling properties assessed. The concept of message-flow routing is discussed in conjunction with the unique properties enabled by the optical distribution module where it is shown how top-down software control (global routing tables, spanning-tree algorithms) may be avoided.

  15. Contribution of peculiar shear motions to large-scale structure

    NASA Technical Reports Server (NTRS)

    Mueler, Hans-Reinhard; Treumann, Rudolf A.

    1994-01-01

    Self-gravitating shear flow instability simulations in a cold dark matter-dominated expanding Einstein-de Sitter universe have been performed. When the shear flow speed exceeds a certain threshold, self-gravitating Kelvin-Helmholtz instability occurs, forming density voids and excesses along the shear flow layer which serve as seeds for large-scale structure formation. A possible mechanism for generating shear peculiar motions is velocity fluctuations induced by the density perturbations of the postinflation era. In this scenario, short scales grow earlier than large scales. A model of this kind may contribute to the cellular structure of the luminous mass distribution in the universe.

  16. A first large-scale flood inundation forecasting model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie

    2013-11-04

    At present, continental- to global-scale flood forecasting focuses on predicting discharge at a point, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) compared to an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast

  17. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    NASA Astrophysics Data System (ADS)

    Fonseca, R. A.; Vieira, J.; Fiuza, F.; Davidson, A.; Tsung, F. S.; Mori, W. B.; Silva, L. O.

    2013-12-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-Class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10^6 cores and sustained performance over ~2 PFlops is demonstrated, opening the way for large scale modelling of LWFA scenarios.

  18. [A large-scale accident in Alpine terrain].

    PubMed

    Wildner, M; Paal, P

    2015-02-01

    Due to the geographical conditions, large-scale accidents amounting to mass casualty incidents (MCI) in Alpine terrain regularly present rescue teams with huge challenges. Using an example incident, specific conditions and typical problems associated with such a situation are presented. The first rescue team members to arrive have the elementary tasks of qualified triage and communication to the control room, which is required to dispatch the necessary additional support. Only with a clear "concept", to which all have to adhere, can the subsequent chaos phase be limited. In this respect, a time factor confounded by adverse weather conditions or darkness represents enormous pressure. Additional hazards are frostbite and hypothermia. If priorities can be established in terms of urgency, then treatment and procedure algorithms have proven successful. For evacuation of casualties, helicopter transport should be sought. Due to the low density of hospitals in Alpine regions, it is often necessary to distribute the patients over a wide area. Rescue operations in Alpine terrain have to be performed according to the particular conditions and require rescue teams to have specific knowledge and expertise. The possibility of a large-scale accident should be considered when planning events. With respect to optimization of rescue measures, regular training and exercises are rational, as is the analysis of previous large-scale Alpine accidents.

  19. Large-scale anisotropy in stably stratified rotating flows

    DOE PAGES

    Marino, R.; Mininni, P. D.; Rosenberg, D. L.; ...

    2014-08-28

    We present results from direct numerical simulations of the Boussinesq equations in the presence of rotation and/or stratification, both in the vertical direction. The runs are forced isotropically and randomly at small scales and have spatial resolutions of up to 1024^3 grid points and Reynolds numbers of approximately 1000. We first show that solutions with negative energy flux and inverse cascades develop in rotating turbulence, whether or not stratification is present. However, the purely stratified case is characterized instead by an early-time, highly anisotropic transfer to large scales with almost zero net isotropic energy flux. This is consistent with previous studies that observed the development of vertically sheared horizontal winds, although only at substantially later times. However, and unlike previous works, when sufficient scale separation is allowed between the forcing scale and the domain size, the total energy displays a perpendicular (horizontal) spectrum with power law behavior compatible with k_perp^(-5/3), including in the absence of rotation. In this latter purely stratified case, such a spectrum is the result of a direct cascade of the energy contained in the large-scale horizontal wind, as is evidenced by a strong positive flux of energy in the parallel direction at all scales including the largest resolved scales.

  20. Poverty, Education and Gender: Pedagogic Transformations in the Schools for the Poor ("Armenschulwesen") in Hamburg, 1788-1871

    ERIC Educational Resources Information Center

    Mayer, Christine

    2011-01-01

    In the second half of the eighteenth century, an enlightened reformist spirit spread among Hamburg's bourgeois upper classes. This was exemplified by the activities of the "Gesellschaft zur Beförderung der Künste und nützlichen Gewerbe" ("Society for the Promotion of the Arts and Useful Trades") founded in 1765 as well as by a…

  1. The Determination of the Large-Scale Circulation of the Pacific Ocean from Satellite Altimetry using Model Green's Functions

    NASA Technical Reports Server (NTRS)

    Stammer, Detlef; Wunsch, Carl

    1996-01-01

    A Green's function method for obtaining an estimate of the ocean circulation using both a general circulation model and altimetric data is demonstrated. The fundamental assumption is that the model is so accurate that the differences between the observations and the model-estimated fields obey a linear dynamics. In the present case, the calculations are demonstrated for model/data differences occurring on a very large scale, where the linearization hypothesis appears to be a good one. A semi-automatic linearization of the Bryan/Cox general circulation model is effected by calculating the model response to a series of isolated (in both space and time) geostrophically balanced vortices. These resulting impulse responses or 'Green's functions' then provide the kernels for a linear inverse problem. The method is first demonstrated with a set of 'twin experiments' and then with real data spanning the entire model domain and a year of TOPEX/POSEIDON observations. Our present focus is on the estimate of the time-mean and annual cycle of the model. Residuals of the inversion/assimilation are largest in the western tropical Pacific, and are believed to reflect primarily geoid error. Vertical resolution diminishes with depth with 1 year of data. The model mean is modified such that the subtropical gyre is weakened by about 1 cm/s and the center of the gyre shifted southward by about 10 deg. Corrections to the flow field at the annual cycle suggest that the dynamical response is weak except in the tropics, where the estimated seasonal cycle of the low-latitude current system is of the order of 2 cm/s. The underestimation of observed fluctuations can be related to the inversion on the coarse spatial grid, which does not permit full resolution of the tropical physics. The methodology is easily extended to higher resolution, to use of spatially correlated errors, and to other data types.
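
    The Green's-function inversion described above amounts to a linear least-squares problem: the model responses to the impulse vortices form the columns of a kernel matrix, and the model-minus-data misfit is fit by a combination of those columns. A minimal numerical sketch (synthetic kernels and data, not the actual Bryan/Cox responses):

```python
import numpy as np

rng = np.random.default_rng(0)

# Columns of G: model response ("Green's function") to each impulse vortex
n_obs, n_vortices = 200, 10
G = rng.standard_normal((n_obs, n_vortices))

# Synthetic "truth": data differences are a linear combination of responses
m_true = rng.standard_normal(n_vortices)
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)  # small observation noise

# Linear inverse problem: least-squares estimate of the vortex amplitudes
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

In the paper's setting the columns would be time-dependent model responses and the fit would include prior error covariances, but the algebraic structure is the same.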

  2. Neutrino footprint in large scale structure

    NASA Astrophysics Data System (ADS)

    Garay, Carlos Peña; Verde, Licia; Jimenez, Raul

    2017-03-01

    Recent constraints on the sum of neutrino masses inferred by analyzing cosmological data show that detecting a non-zero neutrino mass is within reach of forthcoming cosmological surveys. Such a measurement will imply a direct determination of the absolute neutrino mass scale. Physically, the measurement relies on constraining the shape of the matter power spectrum below the neutrino free streaming scale: massive neutrinos erase power at these scales. However, detection of a lack of small-scale power from cosmological data could also be due to a host of other effects. It is therefore of paramount importance to validate neutrinos as the source of power suppression at small scales. We show that, independently of the hierarchy, neutrinos always leave a footprint on large, linear scales; the exact location and properties are fully specified by the measured power suppression (an astrophysical measurement) and the atmospheric neutrino mass splitting (a neutrino oscillation experiment measurement). This feature cannot be easily mimicked by systematic uncertainties in the cosmological data analysis or modifications in the cosmological model. Therefore the measurement of such a feature, up to a 1% relative change in the power spectrum for extreme differences in the mass eigenstate ratios, is a smoking gun for confirming the determination of the absolute neutrino mass scale from cosmological observations. It also demonstrates the synergy between astrophysics and particle physics experiments.
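
    The small-scale suppression invoked above is often quoted via the linear-theory rule of thumb ΔP/P ≈ -8 f_ν, with f_ν = Ω_ν/Ω_m and Ω_ν h² = Σm_ν / 93.14 eV. A quick numerical illustration (the rule of thumb and the cosmological parameter values are standard approximations, not results from this paper):

```python
def power_suppression(sum_mnu_eV, omega_m=0.31, h=0.67):
    """Linear-theory rule of thumb for the small-scale matter power
    suppression from massive neutrinos: dP/P ~ -8 * f_nu."""
    omega_nu = sum_mnu_eV / 93.14 / h**2  # Omega_nu h^2 = sum(m_nu) / 93.14 eV
    f_nu = omega_nu / omega_m             # neutrino fraction of total matter
    return -8.0 * f_nu

# Minimal normal-hierarchy mass sum (~0.06 eV) gives a few-percent suppression
suppression = power_suppression(0.06)
```

Even the minimal allowed mass sum produces a percent-level effect, which is why forthcoming surveys can hope to detect it.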

  3. Sensitivity analysis for large-scale problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Whitworth, Sandra L.

    1987-01-01

    The development of efficient techniques for calculating sensitivity derivatives is studied. The objective is to present a computational procedure for calculating sensitivity derivatives as part of performing structural reanalysis for large-scale problems. The scope is limited to framed type structures. Both linear static analysis and free-vibration eigenvalue problems are considered.

  4. Corridors Increase Plant Species Richness at Large Scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Damschen, Ellen I.; Haddad, Nick M.; Orrock,John L.

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  5. Corridors increase plant species richness at large scales.

    PubMed

    Damschen, Ellen I; Haddad, Nick M; Orrock, John L; Tewksbury, Joshua J; Levey, Douglas J

    2006-09-01

    Habitat fragmentation is one of the largest threats to biodiversity. Landscape corridors, which are hypothesized to reduce the negative consequences of fragmentation, have become common features of ecological management plans worldwide. Despite their popularity, there is little evidence documenting the effectiveness of corridors in preserving biodiversity at large scales. Using a large-scale replicated experiment, we showed that habitat patches connected by corridors retain more native plant species than do isolated patches, that this difference increases over time, and that corridors do not promote invasion by exotic species. Our results support the use of corridors in biodiversity conservation.

  6. The Cosmology Large Angular Scale Surveyor

    NASA Technical Reports Server (NTRS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John; Bennett, Charles; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  7. The Cosmology Large Angular Scale Surveyor

    NASA Astrophysics Data System (ADS)

    Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; Dahal, Sumit; Denis, Kevin; Dünner, Rolando; Eimer, Joseph; Essinger-Hileman, Thomas; Fluxa, Pedro; Halpern, Mark; Hilton, Gene; Hinshaw, Gary F.; Hubmayr, Johannes; Iuliano, Jeffrey; Karakla, John; McMahon, Jeff; Miller, Nathan T.; Moseley, Samuel H.; Palma, Gonzalo; Parker, Lucas; Petroff, Matthew; Pradenas, Bastián.; Rostem, Karwan; Sagliocca, Marco; Valle, Deniz; Watts, Duncan; Wollack, Edward; Xu, Zhilei; Zeng, Lingzhen

    2016-07-01

    The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  8. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value of occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography with PASS ≥2 had a median NIHSS score of 17 (interquartile range=6) as opposed to PASS <2 with a median NIHSS score of 6 (interquartile range=5). The PASS scale performed comparably to, while being simpler than, other scales predicting ELVO. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
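
    As described, PASS simply counts abnormal findings on three NIHSS-derived items and flags possible ELVO at a cut point of 2 or more. A minimal scoring sketch (the function and argument names are illustrative, not from the paper):

```python
def pass_score(loc_abnormal, gaze_palsy, arm_weakness):
    """Prehospital Acute Stroke Severity (PASS): one point per abnormal item.
    Items: level of consciousness (month/age), gaze palsy/deviation,
    and arm weakness."""
    return sum(bool(x) for x in (loc_abnormal, gaze_palsy, arm_weakness))

def suggests_elvo(score, cutoff=2):
    """Cut point of >= 2 abnormal items suggests emergent large vessel
    occlusion, per the reported sensitivity/specificity trade-off."""
    return score >= cutoff

score = pass_score(True, True, False)  # two abnormal items
```

The simplicity is the point: three binary observations can be assessed reliably in the field, unlike the full 15-item NIHSS.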

  9. Facilitating large-scale clinical trials: in Asia.

    PubMed

    Choi, Han Yong; Ko, Jae-Wook

    2010-01-01

    The number of clinical trials conducted in Asian countries has started to increase as a result of expansion of the pharmaceutical market in this area. There is a growing opportunity for large-scale clinical trials because of the large number of patients, significant market potential, good quality of data, and the cost effective and qualified medical infrastructure. However, for carrying out large-scale clinical trials in Asia, there are several major challenges that need to be considered, including the quality control of data, budget control, laboratory validation, monitoring capacity, authorship, staff training, and nonstandard treatment. There are also several difficulties in collaborating on international trials in Asia because Asia is an extremely diverse continent. The major challenges are language differences, diversity of patterns of disease and current treatments, a large gap in the experience with performing multinational trials, and regulatory differences among the Asian countries. In addition, there are also differences in the understanding of global clinical trials, medical facilities, indemnity assurance, and culture, including food and religion. Standardizing these differences so that regional and local data can provide evidence of efficacy will require sustained effort. At this time, there are no large clinical trials led by urologists in Asia, but it is anticipated that the role of urologists in clinical trials will continue to increase. Copyright © 2010 Elsevier Inc. All rights reserved.

  10. MEMS: Platform for Large-Scale Integrated Vacuum Electronic Circuits

    DTIC Science & Technology

    2017-03-20

    Final Report: MEMS Platform for Large-Scale Integrated Vacuum Electronic Circuits (LIVEC), Contract No. W911NF-14-C-0093, reporting period 1 Jul 2014 to 30 Jun 2015; distribution unlimited. The objective of the LIVEC advanced study project was to develop a platform for large-scale integrated vacuum electronic circuits. COR: Dr. James Harvey, U.S. ARO, RTP, NC 27709-2211.

  11. Latest COBE results, large-scale data, and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    One of the predictions of the inflationary scenario of cosmology is that the initial spectrum of primordial density fluctuations (PDFs) must have the Harrison-Zeldovich (HZ) form. Here, in order to test the inflationary scenario, predictions of the microwave background radiation (MBR) anisotropies measured by COBE are computed based on large-scale data for the universe and assuming Omega = 1 and the HZ spectrum on large scales. The minimal scale where the spectrum can first enter the HZ regime is found, constraining the power spectrum of the mass distribution to within the bias factor b. This factor is determined and used to predict parameters of the MBR anisotropy field. For the spectrum of PDFs that reaches the HZ regime immediately after the scale accessible to the APM catalog, the numbers on MBR anisotropies are consistent with the COBE detections and thus the standard inflation can indeed be considered a viable theory for the origin of the large-scale structure in the universe.

  12. Managing Risk and Uncertainty in Large-Scale University Research Projects

    ERIC Educational Resources Information Center

    Moore, Sharlissa; Shangraw, R. F., Jr.

    2011-01-01

    Both publicly and privately funded research projects managed by universities are growing in size and scope. Complex, large-scale projects (over $50 million) pose new management challenges and risks for universities. This paper explores the relationship between project success and a variety of factors in large-scale university projects. First, we…

  13. Parallel Clustering Algorithm for Large-Scale Biological Data Sets

    PubMed Central

    Wang, Minchao; Zhang, Wu; Ding, Wang; Dai, Dongbo; Zhang, Huiran; Xie, Hao; Chen, Luonan; Guo, Yike; Xie, Jiang

    2014-01-01

    Background: The recent explosion of biological data poses a great challenge for traditional clustering algorithms. With the increasing scale of data sets, much larger memory and longer runtimes are required for cluster identification. The affinity propagation algorithm outperforms many other classical clustering algorithms and is widely applied in biological research. However, its time and space complexity become a serious bottleneck when handling large-scale data sets. Moreover, the similarity matrix, whose construction takes a long time, is required before running the affinity propagation algorithm, since the algorithm clusters data sets based on the similarities between data pairs. Methods: Two types of parallel architectures are proposed in this paper to accelerate the similarity-matrix construction and the affinity propagation algorithm. A shared-memory architecture is used to construct the similarity matrix, and a distributed system is used for the affinity propagation algorithm because of its large memory size and great computing capacity. An appropriate scheme of data partition and reduction is designed to minimize the global communication cost among processes. Results: A speedup of 100 is gained with 128 cores. The runtime is reduced from several hours to a few seconds, which indicates that the parallel algorithm is capable of handling large-scale data sets effectively. The parallel affinity propagation also achieves good performance when clustering large-scale gene (microarray) data and detecting families in large protein superfamilies. PMID:24705246
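    The affinity propagation algorithm that the paper parallelizes can be illustrated at small scale with scikit-learn's serial implementation. This is a sketch for intuition only; the synthetic data, parameters, and library choice are assumptions here, not the authors' shared-memory/distributed code:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# two well-separated blobs as a stand-in for biological feature vectors
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

# affinity propagation needs no preset cluster count; it exchanges
# "responsibility" and "availability" messages over a similarity matrix,
# which is why constructing that matrix dominates the cost at scale
ap = AffinityPropagation(random_state=0).fit(X)
print(len(ap.cluster_centers_indices_))  # number of exemplars found
```

    The parallel scheme in the paper splits exactly this work: the O(n^2) similarity matrix on shared memory, and the message-passing iterations across a distributed system.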

  14. Efficient On-Demand Operations in Large-Scale Infrastructures

    ERIC Educational Resources Information Center

    Ko, Steven Y.

    2009-01-01

    In large-scale distributed infrastructures such as clouds, Grids, peer-to-peer systems, and wide-area testbeds, users and administrators typically desire to perform "on-demand operations" that deal with the most up-to-date state of the infrastructure. However, the scale and dynamism present in the operating environment make it challenging to…

  15. Scaling relations for large Martian valleys

    NASA Astrophysics Data System (ADS)

    Som, Sanjoy M.; Montgomery, David R.; Greenberg, Harvey M.

    2009-02-01

    The dendritic morphology of Martian valley networks, particularly in the Noachian highlands, has long been argued to imply a warmer, wetter early Martian climate, but the character and extent of this period remains controversial. We analyzed scaling relations for 10 large valley systems incised in terrain of various ages, resolvable using the Mars Orbiter Laser Altimeter (MOLA) and the Thermal Emission Imaging System (THEMIS). Four of the valleys originate in point sources with negligible contributions from tributaries, three are very poorly dissected with a few large tributaries separated by long uninterrupted trunks, and three exhibit the dendritic, branching morphology typical of terrestrial channel networks. We generated width-area and slope-area relationships for each because these relations are identified as either theoretically predicted or robust terrestrial empiricisms for graded precipitation-fed, perennial channels. We also generated distance-area relationships (Hack's law) because they similarly represent robust characteristics of terrestrial channels (whether perennial or ephemeral). We find that the studied Martian valleys, even the dendritic ones, do not satisfy those empiricisms. On Mars, the width-area scaling exponent b ranges from -0.7 to 4.7, in contrast with values of 0.3-0.6 typical of terrestrial channels; the slope-area scaling exponent theta ranges from -25.6 to 5.5, whereas values of 0.3-0.5 are typical on Earth; and the length-area (Hack's) exponent n ranges from 0.47 to 19.2, while values of 0.5-0.6 are found on Earth. None of the valleys analyzed satisfy all three relations typical of terrestrial perennial channels. As such, our analysis supports the hypotheses that ephemeral and/or immature channel morphologies provide the closest terrestrial analogs to the dendritic networks on Mars, and point source discharges provide terrestrial analogs best suited to describe the other large Martian valleys.
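    As a concrete illustration of one of the empiricisms tested above, Hack's law relates mainstream length L to drainage area A as L = c * A**n, and the exponent n can be recovered from (A, L) measurements by a log-log least-squares fit. The sketch below uses synthetic data with a known exponent; all values are illustrative, not MOLA/THEMIS measurements:

```python
import numpy as np

# Hack's law: L = c * A**n, with n ~ 0.5-0.6 for terrestrial channels.
# In log space the law is linear, so n is the slope of a least-squares fit.
def hack_exponent(area, length):
    slope, intercept = np.polyfit(np.log(area), np.log(length), 1)
    return slope

# synthetic basin data generated with a known exponent n = 0.56
A = np.logspace(6, 10, 30)   # drainage areas, m^2 (hypothetical)
L = 2.5 * A**0.56            # lengths following Hack's law exactly
print(round(hack_exponent(A, L), 2))  # recovers 0.56
```

    The Martian values of n quoted above (0.47 to 19.2) would appear as wildly varying slopes in such log-log fits, which is the sense in which the valleys fail the terrestrial empiricism.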

  16. Zonally averaged model of dynamics, chemistry and radiation for the atmosphere

    NASA Technical Reports Server (NTRS)

    Tung, K. K.

    1985-01-01

    A nongeostrophic theory of zonally averaged circulation is formulated using the nonlinear primitive equations on a sphere, taking advantage of the more direct relationship between the mean meridional circulation and diabatic heating rate which is available in isentropic coordinates. Possible differences between results of nongeostrophic theory and the commonly used geostrophic formulation are discussed concerning: (1) the role of eddy forcing of the diabatic circulation, and (2) the nonlinear nearly inviscid limit vs the geostrophic limit. Problems associated with the traditional Rossby number scaling in quasi-geostrophic formulations are pointed out and an alternate, more general scaling based on the smallness of mean meridional to zonal velocities for a rotating planet is suggested. Such a scaling recovers the geostrophic balanced wind relationship for the mean zonal flow but reveals that the mean meridional velocity is in general ageostrophic.
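    The geostrophic balanced-wind relationship recovered by the scaling argument above can be written for the zonal component as u_g = -(1/(f*rho)) * dp/dy. A minimal numerical sketch with illustrative mid-latitude values (all numbers are assumptions, not taken from the paper):

```python
# Geostrophic balance: f k x u_g = -(1/rho) grad(p), so the zonal component
# is u_g = -(1/(f*rho)) * dp/dy. Illustrative mid-latitude magnitudes:
f = 1.0e-4        # Coriolis parameter, s^-1 (~45 deg latitude)
rho = 1.2         # air density, kg m^-3
dp_dy = -1.0e-3   # meridional pressure gradient, Pa m^-1 (1 hPa per 100 km)

u_g = -dp_dy / (f * rho)
print(round(u_g, 2))  # westerly wind, m s^-1
```

    The abstract's point is that while the mean zonal flow obeys this balance, the mean meridional velocity is in general ageostrophic and needs the more general scaling.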

  17. Large-scale particle acceleration by magnetic reconnection during solar flares

    NASA Astrophysics Data System (ADS)

    Li, X.; Guo, F.; Li, H.; Li, G.; Li, S.

    2017-12-01

    Magnetic reconnection that triggers explosive magnetic energy release has been widely invoked to explain large-scale particle acceleration during solar flares. While great effort has been spent studying the acceleration mechanism in small-scale kinetic simulations, few studies have made predictions for acceleration at scales comparable to the flare reconnection region. Here we present a new approach to this problem. We solve the large-scale energetic-particle transport equation in the fluid velocity and magnetic fields from high-Lundquist-number MHD simulations of reconnection layers. This approach is based on examining the dominant acceleration mechanism and pitch-angle scattering in kinetic simulations. Due to the fluid compression in reconnection outflows and merging magnetic islands, particles are accelerated to high energies and develop power-law energy distributions. We find that the acceleration efficiency and power-law index depend critically on the upstream plasma beta and the magnitude of the guide field (the magnetic field component perpendicular to the reconnecting component), as they influence the compressibility of the reconnection layer. We also find that the accelerated high-energy particles are mostly concentrated in large magnetic islands, making the islands a source of energetic particles and high-energy emissions. These findings may provide explanations for the acceleration process in large-scale magnetic reconnection during solar flares and the temporal and spatial emission properties observed in different flare events.

  18. Large-Scale Advanced Prop-Fan (LAP)

    NASA Technical Reports Server (NTRS)

    Degeorge, C. L.

    1988-01-01

    In recent years, considerable attention has been directed toward improving aircraft fuel efficiency. Analytical studies and research with wind tunnel models have demonstrated that the high inherent efficiency of low-speed turboprop propulsion systems may now be extended to the Mach 0.8 flight regime of today's commercial airliners. This can be accomplished with a propeller employing a large number of thin, highly swept blades. The term Prop-Fan has been coined to describe such a propulsion system. In 1983 the NASA-Lewis Research Center contracted with Hamilton Standard to design, build, and test a near-full-scale Prop-Fan, designated the Large Scale Advanced Prop-Fan (LAP). This report provides a detailed description of the LAP program. The assumptions and analytical procedures used in the design of Prop-Fan system components are discussed in detail. The manufacturing techniques used in the fabrication of the Prop-Fan are presented. Each of the tests run during the course of the program is also discussed, and the major conclusions derived from them are stated.

  19. Exact-Differential Large-Scale Traffic Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hanai, Masatoshi; Suzumura, Toyotaro; Theodoropoulos, Georgios

    2015-01-01

    Analyzing large-scale traffic by simulation requires repeated execution with various patterns of scenarios or parameters. Such repetition introduces substantial redundancy, because the change from one scenario to the next is usually minor, for example, blocking only one road or changing the speed limit on several roads. In this paper, we propose a new redundancy-reduction technique, called exact-differential simulation, which simulates only the changed portions of later scenarios while producing exactly the same results as a full simulation. The paper consists of two main efforts: (i) the key idea and algorithm of exact-differential simulation, and (ii) a method to build large-scale traffic simulation on top of it. In experiments on a Tokyo traffic simulation, exact-differential simulation runs 7.26 times faster than whole simulation on average, and 2.26 times faster even in the worst case.
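    The core idea can be sketched as checkpoint reuse: store the per-step states of a baseline run and, for a slightly changed scenario, recompute only from the first affected step, reproducing exactly what a full rerun would produce. This toy reading is an assumption for illustration; the transition rule and data are invented, and the authors' system handles distributed traffic simulation:

```python
# Toy illustration of the exact-differential idea (not the authors' system):
# keep the states of a baseline run as checkpoints, then, for a scenario that
# differs at one step, resume from the last unaffected checkpoint.

def step(state, param):
    return [(s + param) % 97 for s in state]  # stand-in transition rule

def full_run(init, params):
    states = [init]
    for p in params:
        states.append(step(states[-1], p))
    return states

def differential_run(base_states, params, changed_at, new_param):
    params = params[:changed_at] + [new_param] + params[changed_at + 1:]
    states = base_states[:changed_at + 1]   # checkpoints before the change stay valid
    for p in params[changed_at:]:           # recompute only the affected suffix
        states.append(step(states[-1], p))
    return states

init = [1, 2, 3]
params = [5, 7, 11, 13]
base = full_run(init, params)
diff = differential_run(base, params, changed_at=2, new_param=4)
assert diff == full_run(init, [5, 7, 4, 13])  # exactly the full-rerun result
```

    The "exact" in exact-differential is the guarantee in the final assertion: the partial recomputation is bit-identical to a whole simulation of the new scenario.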

  20. Local gravity and large-scale structure

    NASA Technical Reports Server (NTRS)

    Juszkiewicz, Roman; Vittorio, Nicola; Wyse, Rosemary F. G.

    1990-01-01

    The magnitude and direction of the observed dipole anisotropy of the galaxy distribution can in principle constrain the amount of large-scale power present in the spectrum of primordial density fluctuations. This paper confronts the data, provided by a recent redshift survey of galaxies detected by the IRAS satellite, with the predictions of two cosmological models with very different levels of large-scale power: the biased Cold Dark Matter dominated model (CDM) and a baryon-dominated model (BDM) with isocurvature initial conditions. Model predictions are investigated for the Local Group peculiar velocity, v(R), induced by mass inhomogeneities distributed out to a given radius, R, for R less than about 10,000 km/s. Several convergence measures for v(R) are developed, which can become powerful cosmological tests when deep enough samples become available. For the present data sets, the CDM and BDM predictions are indistinguishable at the 2 sigma level and both are consistent with observations. A promising discriminant between cosmological models is the misalignment angle between v(R) and the apex of the dipole anisotropy of the microwave background.

  1. On type B cyclogenesis in a quasi-geostrophic model

    NASA Astrophysics Data System (ADS)

    Grotjahn, Richard

    2005-01-01

    A quasi-geostrophic (QG) model is used to approximate some aspects of 'type B' cyclogenesis as described in an observational paper that appeared several decades earlier in this journal. Though often cited, that earlier work has some ambiguity that has propagated into subsequent analyses. The novel aspects examined here include allowing advective nonlinearity to distort and amplify structures that are quasi-coherent and nearly stable in a linear form of the model; also, separate upper and lower structures are localized in space. Cases are studied separately where the upper trough tracks across different low-level features: an enhanced baroclinic zone (stronger horizontal temperature gradient) or a region of augmented temperature. Growth by superposition of lower and upper features is excluded by experimental design. The dynamics are evaluated with the vertical motion equation, the QG vorticity equation, the QG perturbation energy equation, and 'potential-vorticity thinking'. Results are compared against 'control' cases having no additional low-level features. Nonlinearity is examined relative to a corresponding linear calculation and is generally positive. The results are perhaps richer than the seminal article might imply, because growth is enhanced not only when properties of the lower feature reinforce growth but also when the lower feature opposes decay of the upper feature. For example, growth is enhanced where low-level warm advection introduces rising warm air to oppose the rising cold air ahead of the upper trough. Such growth is magnified when adjacent warm and cold anomalies have a strong baroclinic zone between them. The enhanced growth triggers an upstream tilt in the solution whose properties further accelerate the growth.

  2. Comprehensive School Teachers' Professional Agency in Large-Scale Educational Change

    ERIC Educational Resources Information Center

    Pyhältö, Kirsi; Pietarinen, Janne; Soini, Tiina

    2014-01-01

    This article explores how comprehensive school teachers' sense of professional agency changes in the context of large-scale national educational change in Finland. We analysed the premises on which teachers (n = 100) view themselves and their work in terms of developing their own school, catalysed by the large-scale national change. The study…

  3. The influence of large-scale wind power on global climate.

    PubMed

    Keith, David W; Decarolis, Joseph F; Denkenberger, David C; Lenschow, Donald H; Malyshev, Sergey L; Pacala, Stephen; Rasch, Philip J

    2004-11-16

    Large-scale use of wind power can alter local and global climate by extracting kinetic energy and altering turbulent transport in the atmospheric boundary layer. We report climate-model simulations that address the possible climatic impacts of wind power at regional to global scales by using two general circulation models and several parameterizations of the interaction of wind turbines with the boundary layer. We find that very large amounts of wind power can produce nonnegligible climatic change at continental scales. Although large-scale effects are observed, wind power has a negligible effect on global-mean surface temperature, and it would deliver enormous global benefits by reducing emissions of CO2 and air pollutants. Our results may enable a comparison between the climate impacts due to wind power and the reduction in climatic impacts achieved by the substitution of wind for fossil fuels.

  4. Large-Angular-Scale Clustering as a Clue to the Source of UHECRs

    NASA Astrophysics Data System (ADS)

    Berlind, Andreas A.; Farrar, Glennys R.

    We explore what can be learned about the sources of UHECRs from their large-angular-scale clustering (referred to as their "bias" by the cosmology community). Exploiting the clustering on large scales has the advantage over small-scale correlations of being insensitive to uncertainties in source direction from magnetic smearing or measurement error. In a Cold Dark Matter cosmology, the amplitude of large-scale clustering depends on the mass of the system, with more massive systems such as galaxy clusters clustering more strongly than less massive systems such as ordinary galaxies or AGN. Therefore, studying the large-scale clustering of UHECRs can help determine a mass scale for their sources, given the assumption that their redshift depth is as expected from the GZK cutoff. We investigate the constraining power of a given UHECR sample as a function of its cutoff energy and number of events. We show that current and future samples should be able to distinguish between the cases of their sources being galaxy clusters, ordinary galaxies, or sources that are uncorrelated with the large-scale structure of the universe.

  5. Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances

    PubMed Central

    Parker, V. Thomas

    2015-01-01

    Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances should also play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire dynamics by asking whether the outcomes of a mutualism depend on disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and require fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a depth sufficient to survive a killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the caching burial of seed from a dispersal process into a plant fire-adaptive trait, and provides the context for stimulating subsequent life-history evolution in the plant host. PMID:26151560

  6. Cedar, a large-scale multiprocessor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gajski, D.; Kuck, D.; Lawrie, D.

    1983-01-01

    This paper presents an overview of Cedar, a large scale multiprocessor being designed at the University of Illinois. This machine is designed to accommodate several thousand high performance processors which are capable of working together on a single job, or they can be partitioned into groups of processors where each group of one or more processors can work on separate jobs. Various aspects of the machine are described including the control methodology, communication network, optimizing compiler and plans for construction. 13 references.

  7. Fabrication of the HIAD Large-Scale Demonstration Assembly

    NASA Technical Reports Server (NTRS)

    Swanson, G. T.; Johnson, R. K.; Hughes, S. J.; DiNonno, J. M.; Cheatwood, F. M.

    2017-01-01

    Over a decade of work has been conducted in the development of NASA's Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology. This effort has included multiple ground test campaigns and flight tests, culminating in the HIAD project's second-generation (Gen-2) deployable aeroshell system and associated analytical tools. NASA's HIAD project team has developed, fabricated, and tested inflatable structures (IS) integrated with a flexible thermal protection system (F-TPS), ranging in diameter from 3-6 m, with cone angles of 60 and 70 deg. In 2015, United Launch Alliance (ULA) announced that they will use a HIAD (10-12 m) as part of their Sensible, Modular, Autonomous Return Technology (SMART) for their upcoming Vulcan rocket. ULA expects that SMART reusability, coupled with other advancements for Vulcan, will substantially reduce the cost of access to space. The first booster engine recovery via HIAD is scheduled for 2024. To meet this near-term need, as well as future NASA applications, the HIAD team is investigating taking the technology to the 10-15 m diameter scale. In the last year, many significant development and fabrication efforts have been accomplished, culminating in the construction of a large-scale inflatable structure demonstration assembly. This assembly incorporated the first three tori for a 12 m Mars Human-Scale Pathfinder HIAD conceptual design, constructed with the current state-of-the-art material set. Numerous design trades and torus fabrication demonstrations preceded this effort. In 2016, three large-scale tori (0.61 m cross-section) and six subscale tori (0.25 m cross-section) were manufactured to demonstrate fabrication techniques using the newest candidate material sets. These tori were tested to evaluate durability and load capacity. This work led to the selection of the inflatable structure's third-generation (Gen-3) structural liner. In late 2016, the three tori required for the large-scale demonstration assembly were fabricated, and then

  8. An interactive display system for large-scale 3D models

    NASA Astrophysics Data System (ADS)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing-power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption is significantly decreased via an internal/external memory exchange mechanism, making it possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
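    A view-dependent LOD scheme of this general kind typically maps camera distance to a mesh-resolution level, so distant geometry costs fewer triangles. A minimal sketch of that selection step (the thresholds and level count are hypothetical, not taken from the paper):

```python
# Distance-based LOD selection: pick a coarser mesh level as the camera
# moves away. Level 0 is the finest mesh; higher levels are decimated.
def select_lod(distance, thresholds=(10.0, 50.0, 200.0)):
    """Return 0 (finest) .. len(thresholds) (coarsest) for a camera distance."""
    for level, limit in enumerate(thresholds):
        if distance < limit:
            return level
    return len(thresholds)

print([select_lod(d) for d in (5, 30, 120, 500)])  # finest to coarsest
```

    In an out-of-core renderer, the chosen level also drives which mesh blocks are paged between disk and RAM, which is where the memory savings come from.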

  9. Collaborative study of an enzymatic digestion method for the isolation of light filth from ground beef or hamburger.

    PubMed

    Alioto, P; Andreas, M

    1976-01-01

    Collaborative results are presented for a proposed method for light filth extraction from ground beef or hamburger. The method involves enzymatic digestion, wet sieving, and extraction with light mineral oil from 40% isopropanol. Recoveries are good and filter papers are clean. This method has been adopted as official first action.

  10. Decoupling local mechanics from large-scale structure in modular metamaterials.

    PubMed

    Yang, Nan; Silverberg, Jesse L

    2017-04-04

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such "inverse design" is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module's design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  11. Decoupling local mechanics from large-scale structure in modular metamaterials

    NASA Astrophysics Data System (ADS)

    Yang, Nan; Silverberg, Jesse L.

    2017-04-01

    A defining feature of mechanical metamaterials is that their properties are determined by the organization of internal structure instead of the raw fabrication materials. This shift of attention to engineering internal degrees of freedom has coaxed relatively simple materials into exhibiting a wide range of remarkable mechanical properties. For practical applications to be realized, however, this nascent understanding of metamaterial design must be translated into a capacity for engineering large-scale structures with prescribed mechanical functionality. Thus, the challenge is to systematically map desired functionality of large-scale structures backward into a design scheme while using finite parameter domains. Such “inverse design” is often complicated by the deep coupling between large-scale structure and local mechanical function, which limits the available design space. Here, we introduce a design strategy for constructing 1D, 2D, and 3D mechanical metamaterials inspired by modular origami and kirigami. Our approach is to assemble a number of modules into a voxelized large-scale structure, where the module’s design has a greater number of mechanical design parameters than the number of constraints imposed by bulk assembly. This inequality allows each voxel in the bulk structure to be uniquely assigned mechanical properties independent from its ability to connect and deform with its neighbors. In studying specific examples of large-scale metamaterial structures we show that a decoupling of global structure from local mechanical function allows for a variety of mechanically and topologically complex designs.

  12. Large- and small-scale constraints on power spectra in Omega = 1 universes

    NASA Technical Reports Server (NTRS)

    Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.

    1993-01-01

    The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.

  13. Linking crop yield anomalies to large-scale atmospheric circulation in Europe.

    PubMed

    Ceglar, Andrej; Turco, Marco; Toreti, Andrea; Doblas-Reyes, Francisco J

    2017-06-15

    Understanding the effects of climate variability and extremes on crop growth and development represents a necessary step to assess the resilience of agricultural systems to changing climate conditions. This study investigates the links between the large-scale atmospheric circulation and crop yields in Europe, providing the basis to develop seasonal crop yield forecasting and thus enabling a more effective and dynamic adaptation to climate variability and change. Four dominant modes of large-scale atmospheric variability have been used: North Atlantic Oscillation, Eastern Atlantic, Scandinavian and Eastern Atlantic-Western Russia patterns. Large-scale atmospheric circulation explains on average 43% of inter-annual winter wheat yield variability, ranging between 20% and 70% across countries. As for grain maize, the average explained variability is 38%, ranging between 20% and 58%. Spatially, the skill of the developed statistical models strongly depends on the large-scale atmospheric variability impact on weather at the regional level, especially during the most sensitive growth stages of flowering and grain filling. Our results also suggest that preceding atmospheric conditions might provide an important source of predictability especially for maize yields in south-eastern Europe. Since the seasonal predictability of large-scale atmospheric patterns is generally higher than the one of surface weather variables (e.g. precipitation) in Europe, seasonal crop yield prediction could benefit from the integration of derived statistical models exploiting the dynamical seasonal forecast of large-scale atmospheric circulation.

  14. Effects of large-scale wind driven turbulence on sound propagation

    NASA Technical Reports Server (NTRS)

    Noble, John M.; Bass, Henry E.; Raspet, Richard

    1990-01-01

    Acoustic measurements made in the atmosphere show significant fluctuations in amplitude and phase resulting from interaction with time-varying meteorological conditions. The observed variations have short-term and long-term (1 to 5 minute) components, at least in the phase of the acoustic signal. One possible way to account for the long-term variation is a large-scale wind-driven turbulence model. From a Fourier analysis of the phase variations, the outer scale of the large-scale turbulence is 200 meters or greater, which corresponds to turbulence in the energy-containing subrange. The large-scale turbulence is assumed to consist of elongated longitudinal vortex pairs roughly aligned with the mean wind. Because the vortex pair is large compared to the scale of the present experiment, its effect on the acoustic field can be modeled as a sound speed of the atmosphere that varies with time. The model produces results with the same trends and variations in phase observed experimentally.

  15. The Large-scale Structure of the Universe: Probes of Cosmology and Structure Formation

    NASA Astrophysics Data System (ADS)

    Noh, Yookyung

    The usefulness of large-scale structure as a probe of cosmology and structure formation is increasing as large, deep surveys in multi-wavelength bands become possible. The observational analysis of large-scale structure, guided by large-volume numerical simulations, is beginning to offer complementary information and cross-checks of cosmological parameters estimated from the anisotropies in the Cosmic Microwave Background (CMB) radiation. Understanding structure formation and evolution, and even galaxy formation history, is also aided by observations of different redshift snapshots of the Universe using various tracers of large-scale structure. This dissertation covers aspects of large-scale structure from the baryon acoustic oscillation scale to that of large-scale filaments and galaxy clusters. First, I discuss the use of large-scale structure for high-precision cosmology. I investigate the reconstruction of the Baryon Acoustic Oscillation (BAO) peak within the context of Lagrangian perturbation theory, testing its validity in a large suite of cosmological-volume N-body simulations. I then consider galaxy clusters and the large-scale filaments surrounding them in a high-resolution N-body simulation, investigating the geometrical properties of galaxy cluster neighborhoods with a focus on the filaments connected to clusters. Using mock observations of galaxy clusters, I explore the correlations of scatter in galaxy cluster mass estimates from multi-wavelength observations and different measurement techniques. I also examine the sources of the correlated scatter by considering the intrinsic and environmental properties of clusters.

  16. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS)

    NASA Astrophysics Data System (ADS)

    Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

    2018-01-01

    Determining the fat content of hamburgers is important for minimizing or controlling the negative effects of fat on human health, such as cardiovascular disease and obesity, which are caused by high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared (NIR) spectroscopy and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg⁻¹ were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w⁻¹). NIR spectra were recorded and then preprocessed using different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing using a second-order polynomial and a window size of 19 points, achieving a coefficient of correlation of 0.94, an RMSEP of 1.59 mg kg⁻¹, a REP of 7.69%, and an RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since it avoids waste generation and uses no chemical reagents or solvents, in line with the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis.
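    The winning preprocessing (first-derivative Savitzky-Golay, second-order polynomial, 19-point window) feeding a PLS regression can be sketched as follows. This is a minimal illustration on synthetic spectra, not the authors' iSPA-PLS code; apart from the Savitzky-Golay settings, every number below is invented.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)

# Synthetic "NIR spectra": a Gaussian band whose height tracks fat
# content, sitting on a random per-sample baseline offset.
wav = np.linspace(0.0, 1.0, 200)
fat = rng.uniform(14.0, 32.0, 70)                      # 70 samples
band = np.exp(-0.5 * ((wav - 0.5) / 0.05) ** 2)
X = fat[:, None] * band + rng.normal(0, 5, (70, 1))    # baseline offsets
X += rng.normal(0, 0.05, X.shape)                      # measurement noise

# Savitzky-Golay first derivative (2nd-order polynomial, 19-pt window)
# removes the constant baseline while keeping the band shape.
Xd = savgol_filter(X, window_length=19, polyorder=2, deriv=1, axis=1)

def pls1(X, y, ncomp):
    """Minimal NIPALS PLS1; returns a predictor closure."""
    xm, ym = X.mean(0), y.mean()
    Xr, yr = X - xm, y - ym
    W, P, Q = [], [], []
    for _ in range(ncomp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)
        t = Xr @ w
        p = Xr.T @ t / (t @ t)
        q = (yr @ t) / (t @ t)
        Xr -= np.outer(t, p)
        yr -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)
    return lambda Xn: ym + (Xn - xm) @ B

predict = pls1(Xd[:50], fat[:50], ncomp=2)
rmsep = np.sqrt(np.mean((predict(Xd[50:]) - fat[50:]) ** 2))
print(rmsep)
```

Interval selection (the iSPA step) would additionally restrict the regression to the most informative spectral windows; that part is omitted here.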

  17. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS).

    PubMed

    Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

    2018-01-15

    Determining the fat content of hamburgers is important for minimizing or controlling the negative effects of fat on human health, such as cardiovascular disease and obesity, which are caused by high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared (NIR) spectroscopy and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg⁻¹ were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w⁻¹). NIR spectra were recorded and then preprocessed using different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained with first-derivative Savitzky-Golay smoothing using a second-order polynomial and a window size of 19 points, achieving a coefficient of correlation of 0.94, an RMSEP of 1.59 mg kg⁻¹, a REP of 7.69%, and an RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since it avoids waste generation and uses no chemical reagents or solvents, in line with the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Large eddy simulations and reduced models of the Unsteady Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Momen, M.; Bou-Zeid, E.

    2013-12-01

    Most studies of the dynamics of Atmospheric Boundary Layers (ABLs) have focused on steady geostrophic conditions, such as the classic Ekman boundary layer problem. However, real-world ABLs are driven by a time-dependent geostrophic forcing that changes at sub-diurnal scales. Hence, to advance our understanding of the dynamics of atmospheric flows, and to improve their modeling, the unsteady cases have to be analyzed and understood. This is particularly relevant to new applications related to wind energy (e.g. short-term forecasting of wind power changes) and pollutant dispersion (forecasting of rapid changes in wind velocity and direction after an accidental spill), as well as to classic weather prediction and hydrometeorological applications. The present study aims to investigate ABL behavior under variable forcing and to derive a simple model that predicts the ABL response to these forcing fluctuations. Simplifications of the governing Navier-Stokes equations, with the Coriolis force, are tested using LES and then applied to derive a physical model of the unsteady ABL. LES is then exploited again to validate the analogy and the output of the simpler model. Results from the analytical model, as well as the LES outputs, show that inertial oscillations play an important role in the dynamics. Several simulations with different variable-forcing patterns are then conducted to investigate characteristics of the unsteady ABL such as the resonant frequency, ABL response time, and equilibrium states. The variability of wind velocity profiles and hodographs, turbulent kinetic energy, and vertical profiles of the total stress and potential temperature are also examined. (Figure: wind hodograph of the unsteady ABL at different heights, showing fluctuations in the mean u and v velocity components over time due to the variable geostrophic forcing.)
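    The simplest reduced model behind those inertial oscillations can be sketched directly: the mean-flow momentum equations with rotation and a geostrophic forcing, du/dt = f(v − v_g), dv/dt = −f(u − u_g). The snippet below integrates this system numerically; friction is neglected and all values are illustrative, so this is only an analogy to the paper's model, not its implementation.

```python
import numpy as np

# Reduced mean-flow model (friction neglected; values illustrative):
#   du/dt =  f (v - v_g),   dv/dt = -f (u - u_g)
# In complex form, with w = u + i*v and w_g = u_g + i*v_g:
#   dw/dt = -i f (w - w_g)
# so the ageostrophic wind w - w_g rotates at the inertial frequency f.
f = 1.0e-4                    # Coriolis parameter, 1/s (mid-latitudes)
wg = 10.0 + 0.0j              # geostrophic wind, (u_g, v_g) = (10, 0) m/s
w = 5.0 + 0.0j                # initial mean wind

rhs = lambda w: -1j * f * (w - wg)
dt, nsteps = 60.0, 24 * 60    # one day in 1-minute steps (RK4)
traj = np.empty(nsteps + 1, dtype=complex)
traj[0] = w
for m in range(nsteps):
    k1 = rhs(w)
    k2 = rhs(w + 0.5 * dt * k1)
    k3 = rhs(w + 0.5 * dt * k2)
    k4 = rhs(w + dt * k3)
    w += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    traj[m + 1] = w

# The hodograph is a circle: |w - w_g| is conserved under steady forcing.
print(abs(traj[0] - wg), abs(traj[-1] - wg))
```

A time-varying w_g(t) then drives the resonant response near the inertial frequency that the abstract describes.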

  19. The role of large scale motions on passive scalar transport

    NASA Astrophysics Data System (ADS)

    Dharmarathne, Suranga; Araya, Guillermo; Tutkun, Murat; Leonardi, Stefano; Castillo, Luciano

    2014-11-01

    We study direct numerical simulation (DNS) of turbulent channel flow at Reτ = 394 to investigate the effect of large-scale motions on the fluctuating temperature field, which forms a passive scalar field. A statistical description of the large-scale features of the turbulent channel flow is obtained using two-point correlations of the velocity components. Two-point correlations of the fluctuating temperature field are also examined in order to identify possible similarities between the velocity and temperature fields. The two-point cross-correlations between the velocity and temperature fluctuations are further analyzed to establish connections between these two fields. In addition, we use proper orthogonal decomposition (POD) to extract the most dominant modes of the fields and discuss the coupling of the large-scale features of turbulence and the temperature field.
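    As an illustration of the POD step, here is a minimal snapshot-POD sketch via the SVD on synthetic data (not the authors' DNS pipeline): the leading left singular vectors are the dominant spatial modes, and the squared singular values rank them by captured "energy".

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic snapshot matrix: two coherent large-scale modes plus
# small-amplitude noise (illustrative, not the DNS data).
x = np.linspace(0.0, 2.0 * np.pi, 500)
t = np.linspace(0.0, 10.0, 80)
A = (np.outer(np.sin(x), np.cos(1.3 * t))
     + 0.5 * np.outer(np.sin(2.0 * x), np.sin(2.9 * t))
     + 0.01 * rng.standard_normal((500, 80)))

# POD via the SVD: columns of U are spatial modes; s**2 ranks them by
# the energy they capture.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:3])                       # first two modes dominate

rank2 = (U[:, :2] * s[:2]) @ Vt[:2]     # rank-2 reconstruction
rel_err = np.linalg.norm(A - rank2) / np.linalg.norm(A)
print(rel_err)
```

The same decomposition applied to velocity and temperature snapshots lets one compare their dominant modes directly, which is the coupling question the abstract raises.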

  20. Random access in large-scale DNA data storage.

    PubMed

    Organick, Lee; Ang, Siena Dumas; Chen, Yuan-Jyue; Lopez, Randolph; Yekhanin, Sergey; Makarychev, Konstantin; Racz, Miklos Z; Kamath, Govinda; Gopalan, Parikshit; Nguyen, Bichlien; Takahashi, Christopher N; Newman, Sharon; Parker, Hsing-Yeh; Rashtchian, Cyrus; Stewart, Kendall; Gupta, Gagan; Carlson, Robert; Mulligan, John; Carmean, Douglas; Seelig, Georg; Ceze, Luis; Strauss, Karin

    2018-03-01

    Synthetic DNA is durable and can encode digital data with high density, making it an attractive medium for data storage. However, recovering stored data on a large scale currently requires all the DNA in a pool to be sequenced, even if only a subset of the information needs to be extracted. Here, we encode and store 35 distinct files (over 200 MB of data) in more than 13 million DNA oligonucleotides, and show that we can recover each file individually and with no errors, using a random access approach. We design and validate a large library of primers that enable individual recovery of all files stored within the DNA. We also develop an algorithm that greatly reduces the sequencing read coverage required for error-free decoding by maximizing information from all sequence reads. These advances demonstrate a viable, large-scale system for DNA data storage and retrieval.

  1. Improving Design Efficiency for Large-Scale Heterogeneous Circuits

    NASA Astrophysics Data System (ADS)

    Gregerson, Anthony

    Despite increases in logic density, many Big Data applications must still be partitioned across multiple computing devices in order to meet their strict performance requirements. Among the most demanding of these applications is high-energy physics (HEP), which uses complex computing systems consisting of thousands of FPGAs and ASICs to process the sensor data created by experiments at particle accelerators such as the Large Hadron Collider (LHC). Designing such computing systems is challenging due to the scale of the systems, the exceptionally high-throughput and low-latency performance constraints that necessitate application-specific hardware implementations, the requirement that algorithms are efficiently partitioned across many devices, and the possible need to update the implemented algorithms during the lifetime of the system. In this work, we describe our research to develop flexible architectures for implementing such large-scale circuits on FPGAs. In particular, this work is motivated by (but not limited in scope to) high-energy physics algorithms for the Compact Muon Solenoid (CMS) experiment at the LHC. To make efficient use of logic resources in multi-FPGA systems, we introduce Multi-Personality Partitioning, a novel form of the graph partitioning problem, and present partitioning algorithms that can significantly improve resource utilization on heterogeneous devices while also reducing inter-chip connections. To reduce the high communication costs of Big Data applications, we also introduce Information-Aware Partitioning, a partitioning method that analyzes the data content of application-specific circuits, characterizes their entropy, and selects circuit partitions that enable efficient compression of data between chips. We employ our information-aware partitioning method to improve the performance of the hardware validation platform for evaluating new algorithms for the CMS experiment. Together, these research efforts help to improve the efficiency
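    The idea behind information-aware partitioning can be illustrated generically (this sketch is not the dissertation's algorithm, and the data streams below are invented): estimate the empirical entropy of the data crossing each candidate cut, and prefer low-entropy cuts, whose inter-chip traffic compresses well.

```python
import numpy as np
from collections import Counter

def empirical_entropy(symbols):
    """Shannon entropy in bits per symbol of an observed stream."""
    counts = np.array(list(Counter(symbols).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
# Two hypothetical cut points in a circuit: one carries highly
# repetitive data (e.g. mostly-zero words), the other near-uniform
# 4-bit symbols.
cut_a = rng.choice(16, size=4096, p=[0.85] + [0.01] * 15)
cut_b = rng.choice(16, size=4096)

ha, hb = empirical_entropy(cut_a), empirical_entropy(cut_b)
print(ha, hb)
# A partitioner that prefers low-entropy cuts can compress the
# inter-chip link toward ha bits/symbol instead of a full 4 bits.
```

In the dissertation this entropy characterization is combined with the partitioning objective itself; here it only scores two fixed candidate cuts.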

  2. Cytology of DNA Replication Reveals Dynamic Plasticity of Large-Scale Chromatin Fibers.

    PubMed

    Deng, Xiang; Zhironkina, Oxana A; Cherepanynets, Varvara D; Strelkova, Olga S; Kireev, Igor I; Belmont, Andrew S

    2016-09-26

    In higher eukaryotic interphase nuclei, the 100- to >1,000-fold linear compaction of chromatin is difficult to reconcile with its function as a template for transcription, replication, and repair. It is challenging to imagine how DNA and RNA polymerases with their associated molecular machinery would move along the DNA template without transient decondensation of observed large-scale chromatin "chromonema" fibers [1]. Transcription or "replication factory" models [2], in which polymerases remain fixed while DNA is reeled through, are similarly difficult to conceptualize without transient decondensation of these chromonema fibers. Here, we show how a dynamic plasticity of chromatin folding within large-scale chromatin fibers allows DNA replication to take place without significant changes in the global large-scale chromatin compaction or shape of these large-scale chromatin fibers. Time-lapse imaging of lac-operator-tagged chromosome regions shows no major change in the overall compaction of these chromosome regions during their DNA replication. Improved pulse-chase labeling of endogenous interphase chromosomes yields a model in which the global compaction and shape of large-Mbp chromatin domains remains largely invariant during DNA replication, with DNA within these domains undergoing significant movements and redistribution as they move into and then out of adjacent replication foci. In contrast to hierarchical folding models, this dynamic plasticity of large-scale chromatin organization explains how localized changes in DNA topology allow DNA replication to take place without an accompanying global unfolding of large-scale chromatin fibers while suggesting a possible mechanism for maintaining epigenetic programming of large-scale chromatin domains throughout DNA replication. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Lessons Learned from Large-Scale Randomized Experiments

    ERIC Educational Resources Information Center

    Slavin, Robert E.; Cheung, Alan C. K.

    2017-01-01

    Large-scale randomized studies provide the best means of evaluating practical, replicable approaches to improving educational outcomes. This article discusses the advantages, problems, and pitfalls of these evaluations, focusing on alternative methods of randomization, recruitment, ensuring high-quality implementation, dealing with attrition, and…

  4. Nonlinear modulation of the HI power spectrum on ultra-large scales. I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umeh, Obinna; Maartens, Roy; Santos, Mario, E-mail: umeobinna@gmail.com, E-mail: roy.maartens@gmail.com, E-mail: mgrsantos@uwc.ac.za

    2016-03-01

    Intensity mapping of the neutral hydrogen brightness temperature promises to provide a three-dimensional view of the universe on very large scales. Nonlinear effects are typically thought to alter only the small-scale power, but we show how they may bias the extraction of cosmological information contained in the power spectrum on ultra-large scales. For linear perturbations to remain valid on large scales, we need to renormalize perturbations at higher order. In the case of intensity mapping, the second-order contribution to clustering from weak lensing dominates the nonlinear contribution at high redshift. Renormalization modifies the mean brightness temperature and therefore the evolution bias. It also introduces a term that mimics white noise. These effects may influence forecasting analysis on ultra-large scales.

  5. Combined effects of gamma irradiation and rosemary extract on the shelf-life of a ready-to-eat hamburger steak

    NASA Astrophysics Data System (ADS)

    Lee, Ju-Woon; Park, Kyung-Sook; Kim, Jong-Goon; Oh, Sang-Hee; Lee, You-Seok; Kim, Jang-Ho; Byun, Myung-Woo

    2005-01-01

    To evaluate the effects of the combined treatment of gamma irradiation and rosemary extract powder (rosemary) for improving the quality of a ready-to-eat hamburger steak when changing the storage condition from frozen (-20°C) to chilled (4°C), an accelerated storage test was carried out. The hamburger steak was prepared with 200 or 500 ppm of rosemary, or 200 ppm of butylated hydroxyanisole, following a commercially used recipe, gamma irradiated at absorbed doses of 5.0, 10.0 and 20.0 kGy, and stored at 30°C. From the microbiological aspect, irradiation at a dose of 20 kGy or higher was needed to inactivate the normal microflora. Little if any effect of the antioxidant was observed. Thiobarbituric acid values did not differ much during storage, regardless of the irradiation dose and the addition of the antioxidant. Textural and sensory results were also not significantly different among the samples.

  6. Impact of large-scale tides on cosmological distortions via redshift-space power spectrum

    NASA Astrophysics Data System (ADS)

    Akitsu, Kazuyuki; Takada, Masahiro

    2018-03-01

    Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies, in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to the large-scale tide. We then investigate the impact of the large-scale tide on the estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical error, and show that the degradation in the parameters is restored if we can employ the prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction if effects up to larger wave numbers in the nonlinear regime can be included.
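    The Fisher-matrix step can be illustrated with a generic toy forecast (not the paper's calculation; the model, error bars, and wave numbers below are all invented) for a two-parameter power spectrum P(k; A, n) = A (k/k0)^n with Gaussian band-power errors:

```python
import numpy as np

# Toy band-power model P(k; A, n) = A * (k / k0)**n with independent
# Gaussian errors sigma_i per band (all values invented).
k = np.linspace(0.02, 0.2, 10)            # wave numbers
k0, A, n = 0.1, 1.0, -1.5
P = A * (k / k0) ** n
sigma = 0.05 * P                          # 5% band-power errors

# Fisher matrix F_ij = sum_k (dP/dtheta_i)(dP/dtheta_j) / sigma_k**2
dP = np.stack([P / A, P * np.log(k / k0)])   # derivatives wrt (A, n)
F = np.einsum('ik,jk->ij', dP / sigma, dP / sigma)

cov = np.linalg.inv(F)                    # forecast parameter covariance
print(np.sqrt(np.diag(cov)))              # marginalized 1-sigma errors
```

Treating the tide as a signal, as the paper does, amounts to adding its amplitude as an extra parameter (with an optional prior) to exactly this kind of matrix.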

  7. A topology visualization early warning distribution algorithm for large-scale network security incidents.

    PubMed

    He, Hui; Fan, Guotao; Ye, Jianwei; Zhang, Weizhe

    2013-01-01

    It is of great significance to research early warning systems for large-scale network security incidents. Such a system can improve the network's emergency response capabilities, alleviate the damage of cyber attacks, and strengthen the system's counterattack ability. A comprehensive early warning system is presented in this paper, which combines active measurement and anomaly detection. The key visualization algorithm and technology of the system are mainly discussed. Plane visualization of the large-scale network system is realized based on a divide-and-conquer approach. First, the topology of the large-scale network is divided into several small-scale networks by the MLkP/CR algorithm. Second, the subgraph plane visualization algorithm is applied to each small-scale network. Finally, the small-scale networks' topologies are combined into a complete topology by an automatic distribution algorithm based on force analysis. As the algorithm transforms the large-scale network topology plane visualization problem into a series of small-scale network topology plane visualization and distribution problems, it has higher parallelism and is able to handle the display of ultra-large-scale network topology.
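    A toy version of the force-analysis distribution step (a generic spring-embedder sketch, not the paper's algorithm; all constants are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def force_layout(n, edges, iters=200, k=0.3):
    """Tiny spring-embedder: all-pairs repulsion ~ k^2/d, attraction
    along edges ~ d^2/k, with a cooling step limit."""
    pos = rng.uniform(-1.0, 1.0, (n, 2))
    for it in range(iters):
        delta = pos[:, None, :] - pos[None, :, :]
        dist = np.linalg.norm(delta, axis=-1) + 1e-9
        disp = (delta * (k**2 / dist**2)[..., None]).sum(axis=1)
        for i, j in edges:
            d = pos[i] - pos[j]
            pull = d * (np.linalg.norm(d) / k)
            disp[i] -= pull
            disp[j] += pull
        step = 0.05 * (1.0 - it / iters)       # cooling schedule
        length = np.linalg.norm(disp, axis=1, keepdims=True) + 1e-9
        pos += disp / length * np.minimum(length, step)
        pos -= pos.mean(axis=0)                # keep layout centred
    return pos

# Two 4-cliques joined by a single bridge edge; the embedder should
# place them as two separated clusters.
edges = [(a, b) for a in range(4) for b in range(a + 1, 4)]
edges += [(a, b) for a in range(4, 8) for b in range(a + 1, 8)]
edges += [(0, 4)]
pos = force_layout(8, edges)
print(pos.round(2))
```

In the divide-and-conquer scheme of the paper, each node of such a layout would itself be a pre-laid-out small-scale subnetwork rather than a single host.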

  8. Honeycomb: Visual Analysis of Large Scale Social Networks

    NASA Astrophysics Data System (ADS)

    van Ham, Frank; Schulz, Hans-Jörg; Dimicco, Joan M.

    The rise in the use of social network sites allows us to collect large amounts of user-reported data on social structures, and analysis of this data could provide useful insights for many of the social sciences. This analysis is typically the domain of Social Network Analysis, and visualization of these structures often proves invaluable in understanding them. However, currently available visual analysis tools are not well suited to handling the massive scale of this network data, and often resort to displaying small ego networks or heavily abstracted networks. In this paper, we present Honeycomb, a visualization tool that is able to deal with much larger scale data (with millions of connections), which we illustrate by using a large-scale corporate social networking site as an example. Additionally, we introduce a new probability-based network metric to guide users to potentially interesting or anomalous patterns, and discuss lessons learned during design and implementation.

  9. Advances in Parallelization for Large Scale Oct-Tree Mesh Generation

    NASA Technical Reports Server (NTRS)

    O'Connell, Matthew; Karman, Steve L.

    2015-01-01

    Despite great advancements in the parallelization of numerical simulation codes over the last 20 years, it is still common to perform grid generation in serial. Generating large scale grids in serial often requires using special "grid generation" compute machines that can have more than ten times the memory of average machines. While some parallel mesh generation techniques have been proposed, generating very large meshes for LES or aeroacoustic simulations is still a challenging problem. An automated method for the parallel generation of very large scale off-body hierarchical meshes is presented here. This work enables large scale parallel generation of off-body meshes by using a novel combination of parallel grid generation techniques and a hybrid "top down" and "bottom up" oct-tree method. Meshes are generated using hardware commonly found in parallel compute clusters. The capability to generate very large meshes is demonstrated by the generation of off-body meshes surrounding complex aerospace geometries. Results are shown including a one billion cell mesh generated around a Predator Unmanned Aerial Vehicle geometry, which was generated on 64 processors in under 45 minutes.
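    A serial toy version of the "top down" oct-tree idea (the paper's contribution is the parallel generation, which this sketch omits; the sphere "geometry" and all sizes are invented): cells that intersect the geometry are recursively subdivided until a target spacing is reached.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Cell:
    x: float
    y: float
    z: float
    size: float
    children: List["Cell"] = field(default_factory=list)

def refine(cell, inside, target):
    """Subdivide `cell` while it intersects the geometry (per `inside`)
    and is coarser than the target spacing ("top down" refinement)."""
    if cell.size <= target or not inside(cell):
        return
    h = cell.size / 4                      # offset of child centres
    for dx in (-h, h):
        for dy in (-h, h):
            for dz in (-h, h):
                child = Cell(cell.x + dx, cell.y + dy, cell.z + dz,
                             cell.size / 2)
                refine(child, inside, target)
                cell.children.append(child)

def leaves(cell):
    if not cell.children:
        return [cell]
    out = []
    for c in cell.children:
        out.extend(leaves(c))
    return out

def near_surface(c):
    """Hypothetical geometry: refine cells the unit-sphere surface may
    pass through (conservative half-diagonal test)."""
    r = (c.x**2 + c.y**2 + c.z**2) ** 0.5
    return abs(r - 1.0) <= (3 ** 0.5) * c.size / 2

root = Cell(0.0, 0.0, 0.0, 4.0)
refine(root, near_surface, target=0.5)
print(len(leaves(root)))
```

The resulting leaf cells cluster around the sphere surface, the off-body refinement pattern that, at billion-cell scale, motivates the parallel "top down"/"bottom up" hybrid described in the abstract.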

  10. QUAGMIRE v1.3: a quasi-geostrophic model for investigating rotating fluids experiments

    NASA Astrophysics Data System (ADS)

    Williams, P. D.; Haine, T. W. N.; Read, P. L.; Lewis, S. R.; Yamazaki, Y. H.

    2009-04-01

    The QUAGMIRE model has recently been made freely available for public use. QUAGMIRE is a quasi-geostrophic numerical model for performing fast, high-resolution simulations of multi-layer rotating annulus laboratory experiments on a desktop personal computer. This presentation describes the model's main features. QUAGMIRE uses a hybrid finite-difference/spectral approach to numerically integrate the coupled nonlinear partial differential equations of motion in cylindrical geometry in each layer. Version 1.3 implements the special case of two fluid layers of equal resting depths. The flow is forced either by a differentially rotating lid, or by relaxation to specified streamfunction or potential vorticity fields, or both. Dissipation is achieved through Ekman layer pumping and suction at the horizontal boundaries, including the internal interface. The effects of weak interfacial tension are included, as well as the linear topographic beta-effect and the quadratic centripetal beta-effect. Stochastic forcing may optionally be activated, to represent approximately the effects of random unresolved features. A leapfrog time stepping scheme is used, with a Robert filter. Flows simulated by the model agree well with those observed in the corresponding laboratory experiments.
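    The time scheme named above (leapfrog with a Robert filter) can be sketched on the scalar test equation dy/dt = iωy; this is a generic illustration, not QUAGMIRE code, and the coefficient values are merely typical:

```python
import numpy as np

# Leapfrog with a Robert (Robert-Asselin) filter on dy/dt = i*omega*y.
omega, dt, nsteps = 1.0, 0.01, 5000
alpha = 0.01                       # Robert filter coefficient

rhs = lambda y: 1j * omega * y
y_prev = 1.0 + 0.0j                # filtered value at step n-1
y_curr = np.exp(1j * omega * dt)   # value at step n (exact start-up)
for _ in range(nsteps):
    y_next = y_prev + 2.0 * dt * rhs(y_curr)           # leapfrog step
    # The filter damps the spurious computational mode of leapfrog:
    y_filt = y_curr + alpha * (y_prev - 2.0 * y_curr + y_next)
    y_prev, y_curr = y_filt, y_next

# The physical mode keeps |y| close to the exact amplitude 1.
print(abs(y_curr))
```

The filter's weak damping of the physical mode is the price paid for suppressing the odd-even decoupling that plain leapfrog develops in long integrations.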

  11. Investigating a link between large and small-scale chaos features on Europa

    NASA Astrophysics Data System (ADS)

    Tognetti, L.; Rhoden, A.; Nelson, D. M.

    2017-12-01

    Chaos is one of the most recognizable, and studied, features on Europa's surface. Most models of chaos formation invoke liquid water at shallow depths within the ice shell; the liquid destabilizes the overlying ice layer, breaking it into mobile rafts and destroying pre-existing terrain. This class of model has been applied to both large-scale chaos like Conamara and small-scale features (i.e. microchaos), which are typically <10 km in diameter. Currently unknown, however, is whether both large-scale and small-scale features are produced together, e.g. through a network of smaller sills linked to a larger liquid water pocket. If microchaos features do form as satellites of large-scale chaos features, we would expect a drop-off in the number density of microchaos with increasing distance from the large chaos feature; the trend should not be observed in regions without large-scale chaos features. Here, we test the hypothesis that large chaos features create "satellite" systems of smaller chaos features. Either outcome will help us better understand the relationship between large-scale chaos and microchaos. We focus first on regions surrounding the large chaos features Conamara and Murias (e.g. the Mitten). We map all chaos features within 90,000 sq km of the main chaos feature and assign each one a ranking (High Confidence, Probable, or Low Confidence) based on the observed characteristics of each feature. In particular, we look for a distinct boundary, loss of preexisting terrain, the existence of rafts or blocks, and the overall smoothness of the feature. We also note features that are chaos-like but lack sufficient characteristics to be classified as chaos. We then apply the same criteria to map microchaos features in regions of similar area (~90,000 sq km) that lack large chaos features. By plotting the distribution of microchaos with distance from the center point of the large chaos feature or the mapping region (for the cases without a large feature), we
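    The distance-distribution test described here can be sketched generically (hypothetical coordinates and bin edges, not the authors' mapping data): count features in annuli around the large feature and divide by annulus area to get a number-density profile.

```python
import numpy as np

rng = np.random.default_rng(4)

def radial_density(points, center, edges):
    """Counts per unit area in annuli around `center`."""
    r = np.linalg.norm(points - center, axis=1)
    counts, _ = np.histogram(r, bins=edges)
    areas = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
    return counts / areas

# Hypothetical pattern: 300 "satellite" features clustered around a
# large chaos feature at the origin, plus 300 uniform background
# features.
satellites = rng.normal(0.0, 30.0, (300, 2))            # km
background = rng.uniform(-170.0, 170.0, (300, 2))
points = np.vstack([satellites, background])

edges = np.array([0.0, 30.0, 60.0, 90.0, 120.0, 150.0])
dens = radial_density(points, np.zeros(2), edges)
print(dens)   # falls off with distance when satellites are present
```

A flat profile, by contrast, would match the control regions without a large chaos feature.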

  12. Soft X-ray Emission from Large-Scale Galactic Outflows in Seyfert Galaxies

    NASA Astrophysics Data System (ADS)

    Colbert, E. J. M.; Baum, S.; O'Dea, C.; Veilleux, S.

    1998-01-01

    Kiloparsec-scale soft X-ray nebulae extend along the galaxy minor axes in several Seyfert galaxies, including NGC 2992, NGC 4388 and NGC 5506. In these three galaxies, the extended X-ray emission observed in ROSAT HRI images has 0.2-2.4 keV X-ray luminosities of 0.4-3.5 × 10^40 erg s^-1. The X-ray nebulae are roughly co-spatial with the large-scale radio emission, suggesting that both are produced by large-scale galactic outflows. Assuming pressure balance between the radio and X-ray plasmas, the X-ray filling factor is ≳10^4 times as large as the radio plasma filling factor, suggesting that large-scale outflows in Seyfert galaxies are predominantly winds of thermal X-ray emitting gas. We favor an interpretation in which large-scale outflows originate as AGN-driven jets that entrain and heat gas on kpc scales as they make their way out of the galaxy. AGN- and starburst-driven winds are also possible explanations if the winds are oriented along the rotation axis of the galaxy disk. Since large-scale outflows are present in at least 50 percent of Seyfert galaxies, the soft X-ray emission from the outflowing gas may, in many cases, explain the ``soft excess" X-ray feature observed below 2 keV in X-ray spectra of many Seyfert 2 galaxies.

  13. Some aspects of control of a large-scale dynamic system

    NASA Technical Reports Server (NTRS)

    Aoki, M.

    1975-01-01

    Techniques of predicting and/or controlling the dynamic behavior of large scale systems are discussed in terms of decentralized decision making. Topics discussed include: (1) control of large scale systems by dynamic team with delayed information sharing; (2) dynamic resource allocation problems by a team (hierarchical structure with a coordinator); and (3) some problems related to the construction of a model of reduced dimension.

  14. Large-scale magnetic topologies of early M dwarfs

    NASA Astrophysics Data System (ADS)

    Donati, J.-F.; Morin, J.; Petit, P.; Delfosse, X.; Forveille, T.; Aurière, M.; Cabanac, R.; Dintrans, B.; Fares, R.; Gastine, T.; Jardine, M. M.; Lignières, F.; Paletou, F.; Ramirez Velez, J. C.; Théado, S.

    2008-10-01

    We present here additional results of a spectropolarimetric survey of a small sample of stars ranging from spectral type M0 to M8 aimed at investigating observationally how dynamo processes operate in stars on both sides of the full convection threshold (spectral type M4). The present paper focuses on early M stars (M0-M3), that is above the full convection threshold. Applying tomographic imaging techniques to time series of rotationally modulated circularly polarized profiles collected with the NARVAL spectropolarimeter, we determine the rotation period and reconstruct the large-scale magnetic topologies of six early M dwarfs. We find that early-M stars preferentially host large-scale fields with dominantly toroidal and non-axisymmetric poloidal configurations, along with significant differential rotation (and long-term variability); only the lowest-mass star of our subsample is found to host an almost fully poloidal, mainly axisymmetric large-scale field resembling those found in mid-M dwarfs. This abrupt change in the large-scale magnetic topologies of M dwarfs (occurring at spectral type M3) has no related signature on X-ray luminosities (measuring the total amount of magnetic flux); it thus suggests that underlying dynamo processes become more efficient at producing large-scale fields (despite producing the same flux) at spectral types later than M3. We suspect that this change relates to the rapid decrease in the radiative cores of low-mass stars and to the simultaneous sharp increase of the convective turnover times (with decreasing stellar mass) that models predict to occur at M3; it may also be (at least partly) responsible for the reduced magnetic braking reported for fully convective stars. Based on observations obtained at the Télescope Bernard Lyot (TBL), operated by the Institut National des Science de l'Univers of the Centre National de la Recherche Scientifique of France.

  15. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple
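    A textbook sketch of the Broyden approach described above (not Sandia's large-scale or limited-memory implementation): the Jacobian is approximated once by finite differences of function values and then maintained by rank-one secant updates, so no analytic Jacobian is ever required.

```python
import numpy as np

def fd_jacobian(F, x, eps=1e-7):
    """One-time finite-difference Jacobian (function values only)."""
    f0 = F(x)
    J = np.empty((len(f0), len(x)))
    for j in range(len(x)):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (F(xp) - f0) / eps
    return J

def broyden(F, x0, maxiter=50, tol=1e-10):
    """Broyden's 'good' method: rank-one secant updates of an
    approximate Jacobian; no analytic Jacobian is ever evaluated."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(maxiter):
        if np.linalg.norm(f) < tol:
            break
        dx = np.linalg.solve(B, -f)
        x_new = x + dx
        f_new = F(x_new)
        # Secant condition: the updated B must map dx onto the
        # observed change in F.
        B += np.outer(f_new - f - B @ dx, dx) / (dx @ dx)
        x, f = x_new, f_new
    return x

# Small nonlinear test system with root (1, 1):
def F(v):
    x, y = v
    return np.array([x**2 - y, y**2 - x])

root = broyden(F, [1.2, 1.1])
print(root, np.linalg.norm(F(root)))
```

The limited-memory variant discussed in the report avoids storing B densely by keeping only the sequence of rank-one update vectors, which is what makes the approach viable at large scale.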

  16. Rotation invariant fast features for large-scale recognition

    NASA Astrophysics Data System (ADS)

    Takacs, Gabriel; Chandrasekhar, Vijay; Tsai, Sam; Chen, David; Grzeszczuk, Radek; Girod, Bernd

    2012-10-01

    We present an end-to-end feature description pipeline which uses a novel interest point detector and Rotation-Invariant Fast Feature (RIFF) descriptors. The proposed RIFF algorithm is 15× faster than SURF while producing large-scale retrieval results that are comparable to SIFT. Such high-speed features benefit a range of applications from Mobile Augmented Reality (MAR) to web-scale image retrieval and analysis.

  17. Large-scale bioenergy production: how to resolve sustainability trade-offs?

    NASA Astrophysics Data System (ADS)

    Humpenöder, Florian; Popp, Alexander; Bodirsky, Benjamin Leon; Weindl, Isabelle; Biewald, Anne; Lotze-Campen, Hermann; Dietrich, Jan Philipp; Klein, David; Kreidenweis, Ulrich; Müller, Christoph; Rolinski, Susanne; Stevanovic, Miodrag

    2018-02-01

    Large-scale 2nd generation bioenergy deployment is a key element of 1.5 °C and 2 °C transformation pathways. However, large-scale bioenergy production might have negative sustainability implications and thus may conflict with the Sustainable Development Goal (SDG) agenda. Here, we carry out a multi-criteria sustainability assessment of large-scale bioenergy crop production throughout the 21st century (300 EJ in 2100) using a global land-use model. Our analysis indicates that large-scale bioenergy production without complementary measures results in negative effects on the following sustainability indicators: deforestation, CO2 emissions from land-use change, nitrogen losses, unsustainable water withdrawals and food prices. One of our main findings is that single-sector environmental protection measures next to large-scale bioenergy production are prone to involve trade-offs among these sustainability indicators—at least in the absence of more efficient land or water resource use. For instance, if bioenergy production is accompanied by forest protection, deforestation and associated emissions (SDGs 13 and 15) decline substantially whereas food prices (SDG 2) increase. However, our study also shows that this trade-off strongly depends on the development of future food demand. In contrast to environmental protection measures, we find that agricultural intensification lowers some side-effects of bioenergy production substantially (SDGs 13 and 15) without generating new trade-offs—at least among the sustainability indicators considered here. Moreover, our results indicate that a combination of forest and water protection schemes, improved fertilization efficiency, and agricultural intensification would reduce the side-effects of bioenergy production most comprehensively. However, although our study includes more sustainability indicators than previous studies on bioenergy side-effects, our study represents only a small subset of all indicators relevant for the

  18. Large-scale fabrication of single crystalline tin nanowire arrays

    NASA Astrophysics Data System (ADS)

    Luo, Bin; Yang, Dachi; Liang, Minghui; Zhi, Linjie

    2010-09-01

    Large-scale single crystalline tin nanowire arrays with preferred lattice orientation along the [100] direction were fabricated in porous anodic aluminium oxide (AAO) membranes by the electrodeposition method using copper nanorod as a second electrode. Electronic supplementary information (ESI) available: Experimental details and the information for single crystalline copper nanorods. See DOI: 10.1039/c0nr00206b

  19. Large-scale Activities Associated with the 2005 Sep. 7th Event

    NASA Astrophysics Data System (ADS)

    Zong, Weiguo

    We present a multi-wavelength study of large-scale activities associated with a significant solar event. On 2005 September 7, a flare classified as bigger than X17 was observed. Combining Hα 6562.8 Å, He I 10830 Å and soft X-ray observations, three large-scale activities were found to propagate over a long distance on the solar surface. 1) The first large-scale activity emanated from the flare site, propagated westward around the solar equator and appeared as sequential brightenings. With the MDI longitudinal magnetic field map, the activity was found to propagate along the magnetic network. 2) The second large-scale activity could be well identified both in He I 10830 Å images and soft X-ray images, and appeared as diffuse emission enhancement propagating away. The activity started later than the first one and was not centred on the flare site. Moreover, a rotation was found along with the bright front propagating away. 3) The third activity was ahead of the second one, and was identified as a "winking" filament. The three activities have different origins and were seldom observed together in one event. This study is therefore useful for understanding the mechanism of large-scale activities on the solar surface.

  20. The Zoology Department at Washington University (1944-1954): from undergraduate to graduate studies with Viktor Hamburger.

    PubMed

    Dunnebacke, T H

    2001-04-01

    Beginning from an undergraduate's perspective and continuing through graduate school, this student's years in the Department of Zoology at Washington University in St. Louis, Missouri were a time of many rewarding experiences. Now, on the occasion of his 100th birthday, I wish to express my appreciation to the Chairman, Dr. Viktor Hamburger, for his teachings, his encouragement, and his friendship that has lasted over the past 56 years.

  1. Off-pump repair of a post-infarct ventricular septal defect: the 'Hamburger procedure'

    PubMed Central

    Barker, Thomas A; Ng, Alexander; Morgan, Ian S

    2006-01-01

    We report a novel off-pump technique for the surgical closure of post-infarct ventricular septal defects (VSDs). The case report describes the peri-operative management of a 76-year-old lady who underwent the 'Hamburger procedure' for closure of her apical VSD. Refractory cardiogenic shock meant that traditional patch repairs requiring cardiopulmonary bypass would be poorly tolerated. We show that echocardiography-guided off-pump posterior-anterior septal plication is a safe, effective method for closing post-infarct VSDs in unstable patients. More experience is required to ascertain whether this technique will become an accepted alternative to patch repairs. PMID:16722552

  2. Multiresolution comparison of precipitation datasets for large-scale models

    NASA Astrophysics Data System (ADS)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products alongside ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products, including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin-plate spline smoothing algorithm (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA offers appealing spatial coherence. In addition to the product comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.

  3. Toward Instructional Leadership: Principals' Perceptions of Large-Scale Assessment in Schools

    ERIC Educational Resources Information Center

    Prytula, Michelle; Noonan, Brian; Hellsten, Laurie

    2013-01-01

    This paper describes a study of the perceptions that Saskatchewan school principals have regarding large-scale assessment reform and their perceptions of how assessment reform has affected their roles as principals. The findings revealed that large-scale assessments, especially provincial assessments, have affected the principal in Saskatchewan…

  4. Large scale structure in universes dominated by cold dark matter

    NASA Technical Reports Server (NTRS)

    Bond, J. Richard

    1986-01-01

    The theory of Gaussian random density field peaks is applied to a numerical study of the large-scale structure developing from adiabatic fluctuations in models of biased galaxy formation in universes with Omega = 1, h = 0.5 dominated by cold dark matter (CDM). The angular anisotropy of the cross-correlation function demonstrates that the far-field regions of cluster-scale peaks are asymmetric, as recent observations indicate. These regions will generate pancakes or filaments upon collapse. One-dimensional singularities in the large-scale bulk flow should arise in these CDM models, appearing as pancakes in position space. They are too rare to explain the CfA bubble walls, but pancakes that are just turning around now are sufficiently abundant and would appear to be thin walls normal to the line of sight in redshift space. Large scale streaming velocities are significantly smaller than recent observations indicate. To explain the reported 700 km/s coherent motions, mass must be significantly more clustered than galaxies with a biasing factor of less than 0.4 and a nonlinear redshift at cluster scales greater than one for both massive neutrino and cold models.

  5. Large-scale derived flood frequency analysis based on continuous simulation

    NASA Astrophysics Data System (ADS)

    Dung Nguyen, Viet; Hundecha, Yeshewatesfa; Guse, Björn; Vorogushyn, Sergiy; Merz, Bruno

    2016-04-01

    There is an increasing need for spatially consistent flood risk assessments at the regional scale (several 100,000 km2), in particular in the insurance industry and for national risk reduction strategies. However, most large-scale flood risk assessments are composed of smaller-scale assessments and show spatial inconsistencies. To overcome this deficit, a large-scale flood model composed of a weather generator and catchment models was developed, reflecting the spatially inherent heterogeneity. The weather generator is a multisite and multivariate stochastic model capable of generating synthetic meteorological fields (precipitation, temperature, etc.) at daily resolution for the regional scale. These fields respect the observed autocorrelation, spatial correlation and covariance between the variables. They are used as input to the catchment models. A long-term simulation of this combined system makes it possible to derive very long discharge series at many catchment locations, serving as a basis for spatially consistent flood risk estimates at the regional scale. This combined model was set up and validated for major river catchments in Germany. The weather generator was trained on 53 years of observation data at 528 stations covering not only the whole of Germany but also parts of France, Switzerland, the Czech Republic and Austria, with an aggregated spatial scale of 443,931 km2. 10,000 years of daily meteorological fields for the study area were generated. Likewise, rainfall-runoff simulations with SWIM were performed for the entire Elbe, Rhine, Weser, Donau and Ems catchments. The validation results illustrate a good performance of the combined system, as the simulated flood magnitudes and frequencies agree well with the observed flood data. Based on continuous simulation, this model chain is then used to estimate flood quantiles for the whole of Germany, including upstream headwater catchments in neighbouring countries. This continuous large-scale approach overcomes the several
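
    As a loose illustration of the multisite idea (not the generator used in the study), spatially correlated daily rainfall at a handful of hypothetical stations can be produced by transforming correlated Gaussian noise; the station correlations, wet-day threshold, and gamma parameters below are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical spatial correlation of daily precipitation at 3 stations
C = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
L = np.linalg.cholesky(C)          # C = L @ L.T

n_days = 100_000
z = rng.standard_normal((n_days, 3)) @ L.T   # correlated Gaussian fields
occ = z > 0.5                                 # wet-day occurrence (~31% of days)
amounts = rng.gamma(shape=0.7, scale=8.0, size=(n_days, 3))  # rain depths (mm)
precip = np.where(occ, amounts, 0.0)          # zero on dry days
```

    A real weather generator would additionally respect day-to-day autocorrelation, seasonality, and the cross-covariance with temperature and other variables, as the abstract describes; the sketch only reproduces the spatial-correlation ingredient.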

  6. DESIGN OF LARGE-SCALE AIR MONITORING NETWORKS

    EPA Science Inventory

    The potential effects of air pollution on human health have received much attention in recent years. In the U.S. and other countries, there are extensive large-scale monitoring networks designed to collect data to inform the public of exposure risks to air pollution. A major crit...

  7. The XMM Large Scale Structure Survey

    NASA Astrophysics Data System (ADS)

    Pierre, Marguerite

    2005-10-01

    We propose to complete, by an additional 5 deg2, the XMM-LSS Survey region overlying the Spitzer/SWIRE field. This field already has CFHTLS and Integral coverage, and will encompass about 10 deg2. The resulting multi-wavelength medium-depth survey, which complements XMM and Chandra deep surveys, will provide a unique view of large-scale structure over a wide range of redshift, and will show active galaxies in the full range of environments. The complete coverage by optical and IR surveys provides high-quality photometric redshifts, so that cosmological results can quickly be extracted. In the spirit of a Legacy survey, we will make the raw X-ray data immediately public. Multi-band catalogues and images will also be made available on short time scales.

  8. GAIA: A WINDOW TO LARGE-SCALE MOTIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nusser, Adi; Branchini, Enzo; Davis, Marc, E-mail: adi@physics.technion.ac.il, E-mail: branchin@fis.uniroma3.it, E-mail: mdavis@berkeley.edu

    2012-08-10

    Using redshifts as a proxy for galaxy distances, estimates of the two-dimensional (2D) transverse peculiar velocities of distant galaxies could be obtained from future measurements of proper motions. We provide the mathematical framework for analyzing 2D transverse motions and show that they offer several advantages over traditional probes of large-scale motions. They are completely independent of any intrinsic relations between galaxy properties; hence, they are essentially free of selection biases. They are free from homogeneous and inhomogeneous Malmquist biases that typically plague distance indicator catalogs. They provide additional information to traditional probes that yield line-of-sight peculiar velocities only. Further, because of their 2D nature, fundamental questions regarding vorticity of large-scale flows can be addressed. Gaia, for example, is expected to provide proper motions of at least bright galaxies with high central surface brightness, making proper motions a likely contender for traditional probes based on current and future distance indicator measurements.

  9. Large-scale quantum networks based on graphs

    NASA Astrophysics Data System (ADS)

    Epping, Michael; Kampermann, Hermann; Bruß, Dagmar

    2016-05-01

    Society depends increasingly on information exchange and communication. In the quantum world, security and privacy are built-in features of information processing. The essential ingredient for exploiting these quantum advantages is the resource of entanglement, which can be shared between two or more parties. The distribution of entanglement over large distances constitutes a key challenge for current research and development. Due to losses of the transmitted quantum particles, which typically scale exponentially with the distance, intermediate quantum repeater stations are needed. Here we show how to generalise the quantum repeater concept to the multipartite case, by describing large-scale quantum networks, i.e. network nodes and their long-distance links, consistently in the language of graphs and graph states. This unifying approach comprises both the distribution of multipartite entanglement across the network and the protection against errors via encoding. The correspondence to graph states also provides a tool for optimising the architecture of quantum networks.

  10. Large Scale Density Estimation of Blue and Fin Whales (LSD)

    DTIC Science & Technology

    2014-09-30

    The project aims at a method for estimating blue and fin whale density that is effective over large spatial scales and is designed to cope with spatial variation in animal density. Distribution Statement A: approved for public release; distribution is unlimited. Related reference: McDonald, MA, Hildebrand, JA and Mesnick, S (2009). Worldwide decline in tonal frequencies of blue whale songs. Endangered Species Research 9.

  11. Large-Scale Aerosol Modeling and Analysis

    DTIC Science & Technology

    2007-09-30

    …deserts of the world: Arabian Gulf, Sea of Japan, China Sea, Mediterranean Sea, and the Tropical Atlantic Ocean. NAAPS also accurately predicts the fate of large-scale smoke and pollution plumes, and with its global and continuous coverage identifies the origin of dust plumes impacting naval operations in the Red Sea, Mediterranean, eastern Atlantic, Gulf of Guinea, Sea of Japan, Yellow Sea, and East China Sea.

  12. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories

    NASA Astrophysics Data System (ADS)

    Park, Kiwan; Blackman, Eric G.; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity, with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  13. Large-scale dynamo growth rates from numerical simulations and implications for mean-field theories.

    PubMed

    Park, Kiwan; Blackman, Eric G; Subramanian, Kandaswamy

    2013-05-01

    Understanding large-scale magnetic field growth in turbulent plasmas in the magnetohydrodynamic limit is a goal of magnetic dynamo theory. In particular, assessing how well large-scale helical field growth and saturation in simulations match those predicted by existing theories is important for progress. Using numerical simulations of isotropically forced turbulence without large-scale shear, we focus on several additional aspects of this comparison: (1) Leading mean-field dynamo theories which break the field into large and small scales predict that large-scale helical field growth rates are determined by the difference between kinetic helicity and current helicity, with no dependence on the nonhelical energy in small-scale magnetic fields. Our simulations show that the growth rate of the large-scale field from fully helical forcing is indeed unaffected by the presence or absence of small-scale magnetic fields amplified in a precursor nonhelical dynamo. However, because the precursor nonhelical dynamo in our simulations produced fields that were strongly subequipartition with respect to the kinetic energy, we cannot yet rule out the potential influence of stronger nonhelical small-scale fields. (2) We have identified two features in our simulations which cannot be explained by the most minimalist versions of two-scale mean-field theory: (i) fully helical small-scale forcing produces significant nonhelical large-scale magnetic energy and (ii) the saturation of the large-scale field growth is time delayed with respect to what minimalist theory predicts. We comment on desirable generalizations to the theory in this context and future desired work.

  14. Considerations for Managing Large-Scale Clinical Trials.

    ERIC Educational Resources Information Center

    Tuttle, Waneta C.; And Others

    1989-01-01

    Research management strategies used effectively in a large-scale clinical trial to determine the health effects of exposure to Agent Orange in Vietnam are discussed, including pre-project planning, organization according to strategy, attention to scheduling, a team approach, emphasis on guest relations, cross-training of personnel, and preparing…

  15. Large-Scale Optimization for Bayesian Inference in Complex Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Willcox, Karen; Marzouk, Youssef

    2013-11-12

    The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT–Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas–Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower-dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as "reduce then sample" and "sample then reduce." In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve
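
    The "reduce then sample" idea can be illustrated with a deliberately tiny 1-D toy (everything here is invented, not the project's models): an "expensive" forward map is replaced offline by a cheap polynomial surrogate, and random-walk Metropolis sampling then evaluates only the surrogate:

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_full(theta):
    """Stand-in for an expensive forward simulation (here just a cubic)."""
    return theta**3 + theta

# synthetic observation from a known "true" parameter
theta_true, sigma = 0.8, 0.05
y_obs = forward_full(theta_true) + sigma * rng.standard_normal()

# "reduce": fit a cheap polynomial surrogate to the forward model offline
grid = np.linspace(-2.0, 2.0, 21)
coef = np.polyfit(grid, forward_full(grid), deg=5)
def surrogate(theta):
    return np.polyval(coef, theta)

def log_post(theta):
    # Gaussian likelihood plus a broad zero-mean Gaussian prior,
    # evaluated with the surrogate instead of the full model
    return -0.5 * ((y_obs - surrogate(theta)) / sigma) ** 2 - 0.5 * theta**2 / 4.0

# "then sample": random-walk Metropolis against the cheap surrogate
theta, lp, chain = 0.0, log_post(0.0), []
for _ in range(20000):
    prop = theta + 0.3 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
post_mean = np.mean(chain[5000:])   # discard burn-in
```

    Because every posterior evaluation touches only the surrogate, the cost of sampling is decoupled from the cost of the forward simulation; the abstract's caveat is that the surrogate must stay faithful over the whole parameter range the chain visits.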

  16. Response of deep and shallow tropical maritime cumuli to large-scale processes

    NASA Technical Reports Server (NTRS)

    Yanai, M.; Chu, J.-H.; Stark, T. E.; Nitta, T.

    1976-01-01

    The bulk diagnostic method of Yanai et al. (1973) and a simplified version of the spectral diagnostic method of Nitta (1975) are used for a more quantitative evaluation of the response of various types of cumuliform clouds to large-scale processes, using the same data set in the Marshall Islands area for a 100-day period in 1956. The dependence of the cloud mass flux distribution on radiative cooling, large-scale vertical motion, and evaporation from the sea is examined. It is shown that typical radiative cooling rates in the tropics tend to produce a bimodal distribution of mass spectrum exhibiting deep and shallow clouds. The bimodal distribution is further enhanced when the large-scale vertical motion is upward, and a nearly unimodal distribution of shallow clouds prevails when the relative cooling is compensated by the heating due to the large-scale subsidence. Both deep and shallow clouds are modulated by large-scale disturbances. The primary role of surface evaporation is to maintain the moisture flux at the cloud base.

  17. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  18. Large-scale dynamo action precedes turbulence in shearing box simulations of the magnetorotational instability

    DOE PAGES

    Bhat, Pallavi; Ebrahimi, Fatima; Blackman, Eric G.

    2016-07-06

    Here, we study the dynamo generation (exponential growth) of large-scale (planar averaged) fields in unstratified shearing box simulations of the magnetorotational instability (MRI). In contrast to previous studies restricted to horizontal (x–y) averaging, we also demonstrate the presence of large-scale fields when vertical (y–z) averaging is employed instead. By computing space–time planar averaged fields and power spectra, we find large-scale dynamo action in the early MRI growth phase – a previously unidentified feature. Non-axisymmetric linear MRI modes with low horizontal wavenumbers and vertical wavenumbers near that of expected maximal growth amplify the large-scale fields exponentially before turbulence and high-wavenumber fluctuations arise. Thus the large-scale dynamo requires only linear fluctuations but not non-linear turbulence (as defined by mode–mode coupling). Vertical averaging also allows for monitoring the evolution of the large-scale vertical field, and we find that a feedback from horizontal low-wavenumber MRI modes provides a clue as to why the large-scale vertical field sustains against turbulent diffusion in the non-linear saturation regime. We compute the terms in the mean field equations to identify the individual contributions to large-scale field growth for both types of averaging. The large-scale fields obtained from vertical averaging are found to compare well with global simulations and quasi-linear analytical analysis from a previous study by Ebrahimi & Blackman. We discuss the potential implications of these new results for understanding the large-scale MRI dynamo saturation and turbulence.

  19. Flagellum synchronization inhibits large-scale hydrodynamic instabilities in sperm suspensions

    NASA Astrophysics Data System (ADS)

    Schöller, Simon F.; Keaveny, Eric E.

    2016-11-01

    Sperm in suspension can exhibit large-scale collective motion and form coherent structures. Our picture of such coherent motion is largely based on reduced models that treat the swimmers as self-locomoting rigid bodies that interact via steady dipolar flow fields. Swimming sperm, however, have many more degrees of freedom due to elasticity, have a more exotic shape, and generate spatially-complex, time-dependent flow fields. While these complexities are known to lead to phenomena such as flagellum synchronization and attraction, how these effects impact the overall suspension behaviour and coherent structure formation is largely unknown. Using a computational model that captures both flagellum beating and elasticity, we simulate suspensions on the order of 103 individual swimming sperm cells whose motion is coupled through the surrounding Stokesian fluid. We find that the tendency for flagella to synchronize and sperm to aggregate inhibits the emergence of the large-scale hydrodynamic instabilities often associated with active suspensions. However, when synchronization is repressed by adding noise in the flagellum actuation mechanism, the picture changes and the structures that resemble large-scale vortices appear to re-emerge. Supported by an Imperial College PhD scholarship.

  20. Contractual Duration and Investment Incentives: Evidence from Large Scale Production Units in China

    NASA Astrophysics Data System (ADS)

    Li, Fang; Feng, Shuyi; D'Haese, Marijke; Lu, Hualiang; Qu, Futian

    2017-04-01

    Large Scale Production Units have become important forces in the supply of agricultural commodities and in agricultural modernization in China. Contractual duration in farmland transfers to Large Scale Production Units can be considered to reflect land tenure security. Theoretically, long-term tenancy contracts can encourage Large Scale Production Units to increase long-term investments by ensuring land rights stability or favoring access to credit. Using a unique Large Scale Production Unit- and plot-level field survey dataset from Jiangsu and Jiangxi Provinces, this study examines the effect of contractual duration on Large Scale Production Units' soil conservation behaviours. An instrumental variable (IV) method is applied to take into account the endogeneity of contractual duration and unobserved household heterogeneity. Results indicate that farmland transfer contract duration significantly and positively affects land-improving investments. Policies aimed at improving transaction platforms and intermediary organizations in farmland transfer, so as to facilitate Large Scale Production Units' access to farmland with long-term tenancy contracts, may therefore play an important role in improving soil quality and land productivity.

  1. Small-scale monitoring - can it be integrated with large-scale programs?

    Treesearch

    C. M. Downes; J. Bart; B. T. Collins; B. Craig; B. Dale; E. H. Dunn; C. M. Francis; S. Woodley; P. Zorn

    2005-01-01

    There are dozens of programs and methodologies for monitoring and inventory of bird populations, differing in geographic scope, species focus, field methods and purpose. However, most of the emphasis has been placed on large-scale monitoring programs. People interested in assessing bird numbers and long-term trends in small geographic areas such as a local birding area...

  2. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    NASA Astrophysics Data System (ADS)

    Caha, J.; Kačmařík, M.

    2017-11-01

This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, as usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the Opava River, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
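The core of a Boolean viewshed is a line-of-sight test against the running maximum slope. A minimal 1-D version over an elevation profile (a hypothetical sketch, not the authors' implementation) looks like:

```python
def viewshed_1d(profile, observer_height=1.7):
    """Boolean visibility of each cell of a 1-D elevation profile,
    as seen from an observer standing on the first cell."""
    eye = profile[0] + observer_height
    visible = [True]                      # the observer's own cell
    max_slope = float("-inf")
    for i in range(1, len(profile)):
        slope = (profile[i] - eye) / i    # unit cell spacing assumed
        visible.append(slope >= max_slope)
        max_slope = max(max_slope, slope)
    return visible

# the peak of height 5 hides both cells behind it, including the
# farther cell of height 8 whose sight line passes below the peak
vis = viewshed_1d([0.0, 1.0, 5.0, 2.0, 8.0])
```

A 2-D viewshed applies the same test along every ray from the observer; extended viewsheds additionally record quantities such as the angle above the local horizon instead of a single Boolean.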

  3. On decentralized control of large-scale systems

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1978-01-01

    A scheme is presented for decentralized control of large-scale linear systems which are composed of a number of interconnected subsystems. By ignoring the interconnections, local feedback controls are chosen to optimize each decoupled subsystem. Conditions are provided to establish compatibility of the individual local controllers and achieve stability of the overall system. Besides computational simplifications, the scheme is attractive because of its structural features and the fact that it produces a robust decentralized regulator for large dynamic systems, which can tolerate a wide range of nonlinearities and perturbations among the subsystems.
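The design-then-verify idea can be illustrated with two scalar subsystems (all numbers are illustrative): local gains are chosen for each decoupled subsystem, and stability of the overall interconnected closed loop is then checked on the full system matrix.

```python
import numpy as np

# two decoupled scalar subsystems dx_i/dt = a_i x_i + u_i
A = np.diag([1.0, 0.5])

# local feedback u_i = -k_i x_i, designed while ignoring the coupling
K = np.diag([3.0, 2.0])

# weak interconnections between the subsystems, neglected in the design
E = np.array([[0.0, 0.4],
              [0.3, 0.0]])

# closed-loop matrix of the overall interconnected system
Acl = A - K + E
eigs = np.linalg.eigvals(Acl)
stable = bool(np.all(eigs.real < 0))   # local design tolerates the coupling
```

Stability of `Acl` despite the neglected coupling term `E` is exactly the kind of compatibility condition the scheme establishes for the local controllers.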

  4. Large-scale modeling of rain fields from a rain cell deterministic model

    NASA Astrophysics Data System (ADS)

Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km², the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (~20 × 20 km²), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (~150 × 150 km²), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km²) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
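The Gaussian-to-binary step fixes the threshold from the target rain occupation rate. A sketch of that step (using an uncorrelated stand-in for the correlated Gaussian field; the quantile-based threshold is an assumption consistent with matching a prescribed occupation rate):

```python
import numpy as np

def to_binary(field, occupation_rate):
    """Threshold a field so that the given fraction of the area is
    classified as raining."""
    threshold = np.quantile(field, 1.0 - occupation_rate)
    return field > threshold

rng = np.random.default_rng(0)
gauss = rng.standard_normal((200, 200))  # stand-in for the correlated field
mask = to_binary(gauss, 0.10)            # ~10% of the area is raining
```

The binary mask then designates the large-scale raining subareas into which the midscale cellular model is placed.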

  5. Geostrophic adjustment in a shallow-water numerical model as it relates to thermospheric dynamics

    NASA Technical Reports Server (NTRS)

    Larsen, M. F.; Mikkelsen, I. S.

    1986-01-01

The theory of geostrophic adjustment and its application to the dynamics of the high latitude thermosphere have been discussed in previous papers based on a linearized treatment of the fluid dynamical equations. However, a linearized treatment is only valid for small Rossby numbers, given by Ro = V/(fL), where V is the wind speed, f is the local value of the Coriolis parameter, and L is a characteristic horizontal scale for the flow. For typical values in the auroral zone, the approximation is not reasonable for wind speeds greater than about 25 m/s. A shallow-water (one layer) model was developed that includes the spherical geometry and full nonlinear dynamics in the momentum equations in order to isolate the effects of the nonlinearities on the adjustment process. A belt of accelerated winds between 60 deg and 70 deg latitude was used as the initial condition. The adjustment process was found to proceed as expected from the linear formulation, but an asymmetry between the responses for eastward and westward flow results from the nonlinear curvature (centrifugal) terms. In general, the amplitude of an eastward flowing wind will be less after adjustment than that of a westward wind. For instance, if the initial wind velocity is 300 m/s, the linearized theory predicts a final wind speed of 240 m/s, regardless of the flow direction. However, the nonlinear curvature terms modify the response and produce a final wind speed of only 200 m/s for an initial eastward wind and a final wind speed of almost 300 m/s for an initial westward flow direction. Also, less gravity wave energy is produced by the adjustment of the westward flow than by the adjustment of the eastward flow. The implications are that the response of the thermosphere should be significantly different on the dawn and dusk sides of the auroral oval. Larger flow velocities would be expected on the dusk side, since the plasma will accelerate the flow in a westward direction in that sector.
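The validity criterion quoted above is just the size of the Rossby number; a quick check with standard values (the 65 deg latitude and 1000 km scale are illustrative choices for the auroral zone):

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate, rad/s

def rossby_number(V, lat_deg, L):
    """Ro = V / (f L), with Coriolis parameter f = 2*Omega*sin(latitude)."""
    f = 2.0 * OMEGA * math.sin(math.radians(lat_deg))
    return V / (f * L)

# auroral-zone scales: 65 deg latitude, L = 1000 km
ro_weak = rossby_number(25.0, 65.0, 1.0e6)     # ~0.2: linearization marginal
ro_strong = rossby_number(300.0, 65.0, 1.0e6)  # > 1: fully nonlinear regime
```

At 300 m/s the Rossby number exceeds unity, which is why the nonlinear curvature terms produce the eastward/westward asymmetry that the linear theory misses.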

  6. Facile Large-scale synthesis of stable CuO nanoparticles

    NASA Astrophysics Data System (ADS)

    Nazari, P.; Abdollahi-Nejand, B.; Eskandari, M.; Kohnehpoushi, S.

    2018-04-01

In this work, a novel approach to synthesizing CuO nanoparticles is introduced. Sequential corrosion and detaching is proposed for the growth and dispersion of CuO nanoparticles at the optimum pH value of eight. The produced CuO nanoparticles are 6 nm (±2 nm) in diameter and spherical, with high crystallinity and uniformity in size. With this method, large-scale production of CuO nanoparticles (120 grams in an experimental batch) from Cu micro-particles was achieved, which may meet the market criteria for large-scale production of CuO nanoparticles.

  7. Large scale preparation and crystallization of neuron-specific enolase.

    PubMed

    Ishioka, N; Isobe, T; Kadoya, T; Okuyama, T; Nakajima, T

    1984-03-01

    A simple method has been developed for the large scale purification of neuron-specific enolase [EC 4.2.1.11]. The method consists of ammonium sulfate fractionation of brain extract, and two subsequent column chromatography steps on DEAE Sephadex A-50. The chromatography was performed on a short (25 cm height) and thick (8.5 cm inside diameter) column unit that was specially devised for the large scale preparation. The purified enolase was crystallized in 0.05 M imidazole-HCl buffer containing 1.6 M ammonium sulfate (pH 6.39), with a yield of 0.9 g/kg of bovine brain tissue.

  8. A Large Scale Computer Terminal Output Controller.

    ERIC Educational Resources Information Center

    Tucker, Paul Thomas

    This paper describes the design and implementation of a large scale computer terminal output controller which supervises the transfer of information from a Control Data 6400 Computer to a PLATO IV data network. It discusses the cost considerations leading to the selection of educational television channels rather than telephone lines for…

  9. Introducing Large-Scale Innovation in Schools

    NASA Astrophysics Data System (ADS)

    Sotiriou, Sofoklis; Riviou, Katherina; Cherouvis, Stephanos; Chelioti, Eleni; Bogner, Franz X.

    2016-08-01

Education reform initiatives tend to promise higher effectiveness in classrooms especially when emphasis is given to e-learning and digital resources. Practical changes in classroom realities or school organization, however, are lacking. A major European initiative entitled Open Discovery Space (ODS) examined the challenge of modernizing school education via a large-scale implementation of an open-scale methodology in using technology-supported innovation. The present paper describes this innovation scheme which involved schools and teachers all over Europe, embedded technology-enhanced learning into wider school environments and provided training to teachers. Our implementation scheme consisted of three phases: (1) stimulating interest, (2) incorporating the innovation into school settings and (3) accelerating the implementation of the innovation. The scheme's impact was monitored for a school year using five indicators: leadership and vision building, ICT in the curriculum, development of ICT culture, professional development support, and school resources and infrastructure. Based on about 400 schools, our study produced four results: (1) The growth in digital maturity was substantial, even for previously high scoring schools. This was even more important for indicators such as "vision and leadership" and "professional development." (2) The evolution of networking is presented graphically, showing the gradual growth of connections achieved. (3) These communities became core nodes, involving numerous teachers in sharing educational content and experiences: One out of three registered users (36 %) has shared his/her educational resources in at least one community. (4) Satisfaction scores ranged from 76 % (offer of useful support through teacher academies) to 87 % (good environment to exchange best practices). Initiatives such as ODS add substantial value to schools on a large scale.

  10. Quantitative Missense Variant Effect Prediction Using Large-Scale Mutagenesis Data.

    PubMed

    Gray, Vanessa E; Hause, Ronald J; Luebeck, Jens; Shendure, Jay; Fowler, Douglas M

    2018-01-24

    Large datasets describing the quantitative effects of mutations on protein function are becoming increasingly available. Here, we leverage these datasets to develop Envision, which predicts the magnitude of a missense variant's molecular effect. Envision combines 21,026 variant effect measurements from nine large-scale experimental mutagenesis datasets, a hitherto untapped training resource, with a supervised, stochastic gradient boosting learning algorithm. Envision outperforms other missense variant effect predictors both on large-scale mutagenesis data and on an independent test dataset comprising 2,312 TP53 variants whose effects were measured using a low-throughput approach. This dataset was never used for hyperparameter tuning or model training and thus serves as an independent validation set. Envision prediction accuracy is also more consistent across amino acids than other predictors. Finally, we demonstrate that Envision's performance improves as more large-scale mutagenesis data are incorporated. We precompute Envision predictions for every possible single amino acid variant in human, mouse, frog, zebrafish, fruit fly, worm, and yeast proteomes (https://envision.gs.washington.edu/). Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Methods for Large-Scale Nonlinear Optimization.

    DTIC Science & Technology

    1980-05-01

STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ...A typical iteration can be partitioned so that ... where B is an m × m basis matrix. This partition effectively divides the variables into three classes... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  12. Generation of large-scale vorticity in rotating stratified turbulence with inhomogeneous helicity: mean-field theory

    NASA Astrophysics Data System (ADS)

    Kleeorin, N.

    2018-06-01

    We discuss a mean-field theory of the generation of large-scale vorticity in a rotating density stratified developed turbulence with inhomogeneous kinetic helicity. We show that the large-scale non-uniform flow is produced due to either a combined action of a density stratified rotating turbulence and uniform kinetic helicity or a combined effect of a rotating incompressible turbulence and inhomogeneous kinetic helicity. These effects result in the formation of a large-scale shear, and in turn its interaction with the small-scale turbulence causes an excitation of the large-scale instability (known as a vorticity dynamo) due to a combined effect of the large-scale shear and Reynolds stress-induced generation of the mean vorticity. The latter is due to the effect of large-scale shear on the Reynolds stress. A fast rotation suppresses this large-scale instability.

  13. A Large-Eddy Simulation Study of Atmospheric Boundary Layer Influence on Stratified Flows over Terrain

    DOE PAGES

    Sauer, Jeremy A.; Munoz-Esparza, Domingo; Canfield, Jesse M.; ...

    2016-06-24

In this study, the impact of atmospheric boundary layer (ABL) interactions with large-scale stably stratified flow over an isolated, two-dimensional hill is investigated using turbulence-resolving large-eddy simulations. The onset of internal gravity wave breaking and leeside flow response regimes of trapped lee waves and nonlinear breakdown (or hydraulic-jump-like state), as they depend on the classical inverse Froude number, Fr⁻¹ = Nh/U_g, is explored in detail. Here, N is the Brunt–Väisälä frequency, h is the hill height, and U_g is the geostrophic wind. The results here demonstrate that the presence of a turbulent ABL influences mountain wave (MW) development in critical aspects, such as dissipation of trapped lee waves and amplified stagnation zone turbulence through Kelvin–Helmholtz instability. It is shown that the nature of interactions between the large-scale flow and the ABL is better characterized by a proposed inverse compensated Froude number, Fr_c⁻¹ = N(h - z_i)/U_g, where z_i is the ABL height. In addition, it is found that the onset of the nonlinear-breakdown regime, Fr_c⁻¹ ≈ 1.0, is initiated when the vertical wavelength becomes comparable to the sufficiently energetic scales of turbulence in the stagnation zone and ABL, yielding an abrupt change in leeside flow response. Lastly, energy spectra are presented in the context of MW flows, supporting the existence of a clear transition in leeside flow response, and illustrating two distinct energy distribution states for the trapped-lee-wave and the nonlinear-breakdown regimes.
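Both Froude numbers are simple ratios; a sketch with illustrative values (N, h, z_i, and U_g below are made-up numbers, not the study's parameters) shows how the ABL-height compensation shifts a flow to the breakdown threshold:

```python
def inv_froude(N, h, Ug):
    """Classical inverse Froude number Fr^-1 = N h / Ug."""
    return N * h / Ug

def inv_froude_compensated(N, h, z_i, Ug):
    """Compensated form Fr_c^-1 = N (h - z_i) / Ug, which subtracts
    the ABL height z_i from the hill height."""
    return N * (h - z_i) / Ug

# illustrative values: N = 0.01 1/s, 1500 m hill, 500 m ABL, Ug = 10 m/s
fr = inv_froude(0.01, 1500.0, 10.0)                      # 1.5
frc = inv_froude_compensated(0.01, 1500.0, 500.0, 10.0)  # 1.0
```

With these numbers the compensated value sits at the Fr_c⁻¹ ≈ 1.0 threshold where the study locates the onset of the nonlinear-breakdown regime, while the classical number alone would suggest a deeper breakdown state.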

  14. Large Scale GW Calculations on the Cori System

    NASA Astrophysics Data System (ADS)

    Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven

    The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.

  15. Lagrangian space consistency relation for large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horn, Bart; Hui, Lam; Xiao, Xiao

Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  16. Lagrangian space consistency relation for large scale structure

    DOE PAGES

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-29

Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  17. Electron drift in a large scale solid xenon

    DOE PAGES

    Yoo, J.; Jaskierny, W. F.

    2015-08-21

A study of charge drift in a large scale optically transparent solid xenon is reported. A pulsed high power xenon light source is used to liberate electrons from a photocathode. The drift speeds of the electrons are measured using an 8.7 cm long electrode in both the liquid and solid phases of xenon. In the liquid phase (163 K), the drift speed is 0.193 ± 0.003 cm/μs, while the drift speed in the solid phase (157 K) is 0.397 ± 0.006 cm/μs at 900 V/cm over 8.0 cm of uniform electric field. Furthermore, the electron drift speed in solid-phase xenon is demonstrated to be a factor of two faster than that in the liquid phase at large scale.
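The quoted drift speeds follow directly from drift length over transit time; a quick check of the factor-of-two ratio and the implied transit time, using the reported numbers:

```python
def drift_speed(length_cm, transit_time_us):
    """Electron drift speed in cm/us from drift length and transit time."""
    return length_cm / transit_time_us

# reported values from the study above
v_liquid = 0.193   # cm/us, liquid phase at 163 K
v_solid = 0.397    # cm/us, solid phase at 157 K
ratio = v_solid / v_liquid   # ~2.06: the factor-of-two speed-up

# implied transit time over the 8.0 cm uniform-field region, liquid phase
t_liquid_us = 8.0 / v_liquid   # ~41 us
```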

  18. Porous microwells for geometry-selective, large-scale microparticle arrays

    NASA Astrophysics Data System (ADS)

    Kim, Jae Jung; Bong, Ki Wan; Reátegui, Eduardo; Irimia, Daniel; Doyle, Patrick S.

    2017-01-01

    Large-scale microparticle arrays (LSMAs) are key for material science and bioengineering applications. However, previous approaches suffer from trade-offs between scalability, precision, specificity and versatility. Here, we present a porous microwell-based approach to create large-scale microparticle arrays with complex motifs. Microparticles are guided to and pushed into microwells by fluid flow through small open pores at the bottom of the porous well arrays. A scaling theory allows for the rational design of LSMAs to sort and array particles on the basis of their size, shape, or modulus. Sequential particle assembly allows for proximal and nested particle arrangements, as well as particle recollection and pattern transfer. We demonstrate the capabilities of the approach by means of three applications: high-throughput single-cell arrays; microenvironment fabrication for neutrophil chemotaxis; and complex, covert tags by the transfer of an upconversion nanocrystal-laden LSMA.

  19. Connecting the large- and the small-scale magnetic fields of solar-like stars

    NASA Astrophysics Data System (ADS)

    Lehmann, L. T.; Jardine, M. M.; Mackay, D. H.; Vidotto, A. A.

    2018-05-01

A key question in understanding the observed magnetic field topologies of cool stars is the link between the small- and the large-scale magnetic field, and the influence of the stellar parameters on the magnetic field topology. We examine various simulated stars to connect the small-scale with the observable large-scale field. The highly resolved 3D simulations we used couple a flux transport model with a non-potential coronal model using a magnetofrictional technique. The surface magnetic field of these simulations is decomposed into spherical harmonics, which enables us to analyse the magnetic field topologies on a wide range of length scales and to filter the large-scale magnetic field for a direct comparison with the observations. We show that the large-scale field of the self-consistent simulations fits the observed solar-like stars and is mainly set up by the global dipolar field and the large-scale properties of the flux pattern, e.g. the averaged latitudinal position of the emerging small-scale field and its global polarity pattern. The stellar parameters (flux emergence rate, differential rotation, and meridional flow) affect the large-scale magnetic field topology. An increased flux emergence rate increases the magnetic flux in all field components, and an increased differential rotation increases the toroidal field fraction by decreasing the poloidal field. The meridional flow affects the distribution of the magnetic energy across the spherical harmonic modes.
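For an axisymmetric surface field, the low-degree filtering step reduces to a Legendre projection; a minimal sketch (an illustration of the idea, not the authors' magnetofrictional pipeline, with a made-up dipole-plus-small-scale test field):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

def low_l_coeffs(f, lmax, nquad=32):
    """Project an axisymmetric surface field f(x), with x = cos(colatitude),
    onto Legendre degrees l <= lmax (the observable large-scale field)."""
    x, w = leggauss(nquad)   # Gauss-Legendre nodes and weights on [-1, 1]
    fx = f(x)
    return [(2 * l + 1) / 2.0 * np.sum(w * fx * Legendre.basis(l)(x))
            for l in range(lmax + 1)]

# test field: a global dipole (l=1) plus a weaker small-scale l=7 component
def field(x):
    return Legendre.basis(1)(x) + 0.3 * Legendre.basis(7)(x)

coeffs = low_l_coeffs(field, lmax=3)   # only the dipole survives the filter
```

Truncating at a low maximum degree in this way mimics filtering the simulated field down to the scales accessible to Zeeman-Doppler observations.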

  20. The Large-Scale Structure of Scientific Method

    ERIC Educational Resources Information Center

    Kosso, Peter

    2009-01-01

    The standard textbook description of the nature of science describes the proposal, testing, and acceptance of a theoretical idea almost entirely in isolation from other theories. The resulting model of science is a kind of piecemeal empiricism that misses the important network structure of scientific knowledge. Only the large-scale description of…

  1. Detection of large-scale concentric gravity waves from a Chinese airglow imager network

    NASA Astrophysics Data System (ADS)

    Lai, Chang; Yue, Jia; Xu, Jiyao; Yuan, Wei; Li, Qinzeng; Liu, Xiao

    2018-06-01

Concentric gravity waves (CGWs) contain a broad spectrum of horizontal wavelengths and periods due to their instantaneous localized sources (e.g., deep convection, volcanic eruptions, or earthquakes). However, it is difficult to observe large-scale gravity waves of >100 km wavelength from the ground, owing to the limited field of view of a single camera and local bad weather. Previously, complete large-scale CGW imagery could only be captured by satellite observations. In the present study, we developed a novel method that assembles separate images and applies low-pass filtering to obtain temporal and spatial information about complete large-scale CGWs from a network of all-sky airglow imagers. Coordinated observations from five all-sky airglow imagers in Northern China were assembled and processed to study large-scale CGWs over a wide area (1800 km × 1400 km), focusing on the same two CGW events as Xu et al. (2015). Our algorithms yielded images of large-scale CGWs by filtering out the small-scale CGWs. The wavelengths, wave speeds, and periods of the CGWs were measured from a sequence of consecutive assembled images. Overall, the assembling and low-pass filtering algorithms can expand the airglow imager network to its full capacity regarding the detection of large-scale gravity waves.
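The low-pass step that isolates the large-scale waves can be sketched with a Fourier-space filter on an assembled image (synthetic data and a 100 km cutoff chosen for illustration; not the authors' exact pipeline):

```python
import numpy as np

def lowpass2d(field, dx_km, cutoff_km):
    """Keep only Fourier components with horizontal wavelength > cutoff_km."""
    ny, nx = field.shape
    kx = np.fft.fftfreq(nx, d=dx_km)   # spatial frequency, cycles/km
    ky = np.fft.fftfreq(ny, d=dx_km)
    k = np.hypot(*np.meshgrid(kx, ky))
    mask = k < 1.0 / cutoff_km         # wavelength longer than the cutoff
    return np.real(np.fft.ifft2(np.fft.fft2(field) * mask))

# synthetic airglow image on a 10 km grid: a 320 km wave (large-scale CGW)
# superposed on a 32 km wave (small-scale CGW to be filtered out)
x = np.arange(128) * 10.0
X, _ = np.meshgrid(x, x)
image = np.sin(2 * np.pi * X / 320.0) + np.sin(2 * np.pi * X / 32.0)
large_scale = lowpass2d(image, 10.0, 100.0)  # only the 320 km wave survives
```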

  2. The Cosmology Large Angular Scale Surveyor (CLASS)

    NASA Technical Reports Server (NTRS)

Harrington, Kathleen; Marriage, Tobias; Ali, Aamir; Appel, John W.; Bennett, Charles L.; Boone, Fletcher; Brewer, Michael; Chan, Manwei; Chuss, David T.; Colazo, Felipe; et al.

    2016-01-01

The Cosmology Large Angular Scale Surveyor (CLASS) is a four telescope array designed to characterize relic primordial gravitational waves from inflation and the optical depth to reionization through a measurement of the polarized cosmic microwave background (CMB) on the largest angular scales. The frequencies of the four CLASS telescopes, one at 38 GHz, two at 93 GHz, and one dichroic system at 145/217 GHz, are chosen to avoid spectral regions of high atmospheric emission and span the minimum of the polarized Galactic foregrounds: synchrotron emission at lower frequencies and dust emission at higher frequencies. Low-noise transition edge sensor detectors and a rapid front-end polarization modulator provide a unique combination of high sensitivity, stability, and control of systematics. The CLASS site, at 5200 m in the Chilean Atacama desert, allows for daily mapping of up to 70% of the sky and enables the characterization of CMB polarization at the largest angular scales. Using this combination of a broad frequency range, large sky coverage, control over systematics, and high sensitivity, CLASS will observe the reionization and recombination peaks of the CMB E- and B-mode power spectra. CLASS will make a cosmic variance limited measurement of the optical depth to reionization and will measure or place upper limits on the tensor-to-scalar ratio, r, down to a level of 0.01 (95% C.L.).

  3. Amplification of large scale magnetic fields in a decaying MHD system

    NASA Astrophysics Data System (ADS)

    Park, Kiwan

    2017-10-01

Dynamo theory explains the amplification of magnetic fields in conducting fluids (plasmas) driven by continuous external energy input. It is known that nonhelical continuous kinetic or magnetic energy amplifies the small scale magnetic field, while helical energy, instability, or shear with a rotation effect amplifies the large scale magnetic field. However, recently it was reported that decaying magnetic energy, independent of helicity or instability, can generate the large scale magnetic field. This phenomenon may look somewhat contradictory to conventional dynamo theory, but it gives us some clues to the fundamental mechanism of energy transfer in magnetized conducting fluids. It also implies that an ephemeral astrophysical event emitting magnetic and kinetic energy can be a direct cause of the large scale magnetic field observed in space. As of now the exact physical mechanism is not yet understood in spite of several numerical results. The plasma motion coupled with a nearly conserved vector potential in the magnetohydrodynamic (MHD) system may transfer magnetic energy to the large scale. Also, the intrinsic property of the scaling-invariant MHD equation may decide the direction of energy transfer. In this paper we present simulation results of inversely transferred helical and nonhelical energy in a decaying MHD system. We introduce a field structure model based on the MHD equation to show that the transfer of magnetic energy is essentially bidirectional, depending on the plasma motion and initial energy distribution. We then derive the α coefficient algebraically, in line with the field structure model, to explain how the large scale magnetic field is induced by the helical energy in the system regardless of an external forcing source. For the algebraic analysis of nonhelical magnetic energy, we use the eddy-damped quasi-normal Markovian (EDQNM) approximation to show the inverse transfer of magnetic energy.

  4. Measuring the topology of large-scale structure in the universe

    NASA Technical Reports Server (NTRS)

    Gott, J. Richard, III

    1988-01-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  5. Measuring the topology of large-scale structure in the universe

    NASA Astrophysics Data System (ADS)

    Gott, J. Richard, III

    1988-11-01

    An algorithm for quantitatively measuring the topology of large-scale structure has now been applied to a large number of observational data sets. The present paper summarizes and provides an overview of some of these observational results. On scales significantly larger than the correlation length, larger than about 1200 km/s, the cluster and galaxy data are fully consistent with a sponge-like random phase topology. At a smoothing length of about 600 km/s, however, the observed genus curves show a small shift in the direction of a meatball topology. Cold dark matter (CDM) models show similar shifts at these scales but not generally as large as those seen in the data. Bubble models, with voids completely surrounded on all sides by wall of galaxies, show shifts in the opposite direction. The CDM model is overall the most successful in explaining the data.

  6. Contribution of large scale coherence to wind turbine power: A large eddy simulation study in periodic wind farms

    NASA Astrophysics Data System (ADS)

    Chatterjee, Tanmoy; Peet, Yulia T.

    2018-03-01

Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D–10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation is provided by a scaling analysis of the two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modeled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.

  7. Solving large scale structure in ten easy steps with COLA

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
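    The comoving frame that COLA works in follows first-order LPT trajectories, in which each particle simply drifts along its initial displacement scaled by the linear growth factor. A toy 1D Zel'dovich (first-order LPT) sketch, with a single illustrative sinusoidal displacement mode rather than anything from the paper:

    ```python
    import math

    def zeldovich_positions(n, growth, amp=0.05, k=2 * math.pi):
        """1D Zel'dovich approximation (first-order LPT): each particle moves
        along its initial displacement field s(q), scaled by the growth factor."""
        qs = [i / n for i in range(n)]          # Lagrangian coordinates in [0, 1)
        # single sinusoidal displacement mode s(q) = amp * sin(k q)
        return [q + growth * amp * math.sin(k * q) for q in qs]

    x_early = zeldovich_positions(8, growth=0.1)   # small displacements
    x_late = zeldovich_positions(8, growth=1.0)    # displacements grown tenfold
    ```

    In COLA proper, the N-body solver only has to supply the residual motion about these LPT trajectories, which is why few timesteps suffice at large scales.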

  8. Development of a large-scale transportation optimization course.

    DOT National Transportation Integrated Search

    2011-11-01

    "In this project, a course was developed to introduce transportation and logistics applications of large-scale optimization to graduate students. This report details what : similar courses exist in other universities, and the methodology used to gath...

  9. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.
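    The dissipative eddy-viscosity part of such subgrid-scale models can be illustrated with the classic Smagorinsky closure, ν_t = (C_s Δ)² |S|. The sketch below is a generic illustration with an assumed constant C_s = 0.17 and filter width; it is not the nondissipative nonlinear model term proposed in the paper.

    ```python
    import math

    def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
        """Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S|, where
        |S| = sqrt(2 S_ij S_ij) and S is the symmetric part of grad_u (3x3)."""
        s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
             for i in range(3)]
        s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                    for i in range(3) for j in range(3)))
        return (c_s * delta) ** 2 * s_mag

    # pure shear du/dy = 1 gives |S| = 1, so nu_t = (0.17 * 0.1)^2
    grad = [[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
    nu = smagorinsky_nu_t(grad, delta=0.1)
    ```

    A purely dissipative term of this form cannot represent the rotation-driven transport the abstract describes, which is the motivation for adding a nondissipative nonlinear term.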

  10. The "Hamburger Connection" as Ecologically Unequal Exchange: A Cross-National Investigation of Beef Exports and Deforestation in Less-Developed Countries

    ERIC Educational Resources Information Center

    Austin, Kelly

    2010-01-01

    This study explores Norman Myers's concept of the "hamburger connection" as a form of ecologically unequal exchange, where more-developed nations are able to transfer the environmental costs of beef consumption to less-developed nations. I used ordinary least squares (OLS) regression to test whether deforestation in less-developed…

  11. Line segment extraction for large scale unorganized point clouds

    NASA Astrophysics Data System (ADS)

    Lin, Yangbin; Wang, Cheng; Cheng, Jun; Chen, Bili; Jia, Fukai; Chen, Zhonggui; Li, Jonathan

    2015-04-01

    Line segment detection in images is already a well-investigated topic, although it has received considerably less attention in 3D point clouds. Benefiting from current LiDAR devices, large-scale point clouds are becoming increasingly common. Most human-made objects have flat surfaces. Line segments that occur where pairs of planes intersect give important information regarding the geometric content of point clouds, which is especially useful for automatic building reconstruction and segmentation. This paper proposes a novel method that is capable of accurately extracting plane intersection line segments from large-scale raw scan points. The 3D line-support region, namely, a point set near a straight linear structure, is extracted simultaneously. The 3D line-support region is fitted by our Line-Segment-Half-Planes (LSHP) structure, which provides a geometric constraint for a line segment, making the line segment more reliable and accurate. We demonstrate our method on the point clouds of large-scale, complex, real-world scenes acquired by LiDAR devices. We also demonstrate the application of 3D line-support regions and their LSHP structures on urban scene abstraction.
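    The geometric primitive behind plane-intersection line segments is simple: the direction of the line where two planes meet is the cross product of their normals, and a point on the line can be written as a combination of the two normals. A self-contained sketch of just that primitive follows; the paper's LSHP fitting and line-support-region extraction are not reproduced.

    ```python
    def plane_intersection_line(n1, d1, n2, d2):
        """Intersection line of planes n1.x = d1 and n2.x = d2.
        Returns (point, direction), with direction = n1 x n2."""
        def cross(a, b):
            return (a[1] * b[2] - a[2] * b[1],
                    a[2] * b[0] - a[0] * b[2],
                    a[0] * b[1] - a[1] * b[0])
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        direction = cross(n1, n2)
        if dot(direction, direction) == 0:
            raise ValueError("planes are parallel")
        # point lying on both planes, expressed in the span of n1 and n2
        denom = dot(n1, n1) * dot(n2, n2) - dot(n1, n2) ** 2
        c1 = (d1 * dot(n2, n2) - d2 * dot(n1, n2)) / denom
        c2 = (d2 * dot(n1, n1) - d1 * dot(n1, n2)) / denom
        point = [c1 * n1[i] + c2 * n2[i] for i in range(3)]
        return point, direction

    # the plane x = 0 meets the plane y = 0 along the z axis
    point, direction = plane_intersection_line((1, 0, 0), 0.0, (0, 1, 0), 0.0)
    ```

    In a real pipeline, the plane parameters would first be estimated from the point cloud, e.g. by region growing and least-squares fitting.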

  12. Formulation and evaluation on human skin of a water-in-oil emulsion containing Muscat hamburg black grape seed extract.

    PubMed

    Sharif, A; Akhtar, N; Khan, M S; Menaa, A; Menaa, B; Khan, B A; Menaa, F

    2015-04-01

    Vitis vinifera 'Muscat Hamburg' (Vitaceae) is a blue-black grape variety commonly found in Pakistan. It has been consumed and used in traditional medicine for centuries. Compared to other grapes, M. hamburg records one of the greatest amounts of polyphenols and displays potent antioxidant activities, which makes it a great candidate for exploitation in the development of stable cream emulsions intended to improve the skin appearance. We evaluated the effects of a stable water-in-oil (W/O) emulsion containing 2% M. hamburg grape seed extract ('formulation') on human cheek skin in comparison with a placebo ('base'). An occlusive patch test, containing either the formulation or the base, was topically tested for 8 weeks during a winter period in young adult, healthy Pakistani male volunteers. The subjects were instructed to apply the base and the formulation twice a day to their right and left cheek skin, respectively. Non-invasive measurements on these skin areas were carried out every week to assess any effects produced on melanin, elasticity and sebum. A skin compatibility assay (Burchard test) was used to report any potential skin reactivity. ANOVA, a paired-sample t-test and an LSD test were applied to determine the statistical significance of the data. Significant differences (P ≤ 0.05) were found between the placebo and the formulation in terms of their respective skin effects on melanin, elasticity and sebum content. Nevertheless, the placebo and the formulation exerted similar effects on skin erythema and moisture content. Importantly, no skin hypersensitivity cases were reported during the whole course of the study. The developed grape-based cream could be efficiently and safely applied to improve a number of skin conditions (e.g. hyperpigmentation, premature ageing, acne). © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
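    The paired-sample t-test used in the study compares per-subject differences between the two cheeks. A minimal sketch of the statistic follows; the melanin readings below are invented for illustration and are not data from the study.

    ```python
    import math
    from statistics import mean, stdev

    def paired_t_statistic(before, after):
        """t statistic of a paired-sample t-test: t = d_bar / (s_d / sqrt(n)),
        where d are the per-subject differences after - before."""
        diffs = [a - b for a, b in zip(after, before)]
        n = len(diffs)
        return mean(diffs) / (stdev(diffs) / math.sqrt(n))

    # hypothetical melanin readings (arbitrary units) on base vs formulation cheeks
    base = [140, 152, 138, 160, 149, 155]
    formulation = [128, 141, 130, 147, 137, 145]
    t = paired_t_statistic(base, formulation)   # strongly negative: melanin decreased
    ```

    The resulting t value would then be compared against the t distribution with n − 1 degrees of freedom to obtain a P value.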

  13. Why do large and small scales couple in a turbulent boundary layer?

    NASA Astrophysics Data System (ADS)

    Bandyopadhyay, Promode R.

    2011-11-01

    Correlation measurement, which is not definitive, suggests that large and small scales in a turbulent boundary layer (TBL) couple. A TBL is modeled as a jungle of interacting nonlinear oscillators to explore the origin of the coupling. These oscillators have the inherent properties of self-sustainability, disturbance rejection, and self-referential phase reset, whereby several oscillators can phase align (or maintain a constant phase difference between them) when an "external" impulse is applied. Consequently, these properties of a TBL are accounted for: self-sustainability, return of the wake component after a disturbance is removed, and the formation of the 18° large structures, which are composed of a sequential train of hairpin vortices. The nonlinear ordinary differential equations of the oscillators are solved using an analog circuit for rapid solution. The post-bifurcation limit cycles are determined. A small scale and a large scale are akin to two different oscillators. The state variables from the two disparate interacting oscillators are shown to couple, and the small scales appear at certain regions of the phase of the large scale. The coupling is a consequence of the nonlinear oscillatory behavior. Although state planes exist where the disparate scales appear de-superposed, all scales in a TBL are in fact coupled and they cannot be monochromatically isolated.

  14. Large-Scale Low-Boom Inlet Test Overview

    NASA Technical Reports Server (NTRS)

    Hirt, Stefanie

    2011-01-01

    This presentation provides a high-level overview of the Large-Scale Low-Boom Inlet Test and was presented at the Fundamental Aeronautics 2011 Technical Conference. In October 2010 a low-boom supersonic inlet concept with flow control was tested in the 8'x6' supersonic wind tunnel at NASA Glenn Research Center (GRC). The primary objectives of the test were to evaluate the inlet stability and operability of a large-scale low-boom supersonic inlet concept by acquiring performance and flowfield validation data, as well as to evaluate simple, passive, bleedless inlet boundary layer control options. During this effort two models were tested: a dual stream inlet intended to model potential flight hardware and a single stream design to study a zero-degree external cowl angle and to permit surface flow visualization of the vortex generator flow control on the internal centerbody surface. The tests were conducted by a team of researchers from NASA GRC, Gulfstream Aerospace Corporation, the University of Illinois at Urbana-Champaign, and the University of Virginia.

  15. Large-scale linear programs in planning and prediction.

    DOT National Transportation Integrated Search

    2017-06-01

    Large-scale linear programs are at the core of many traffic-related optimization problems in both planning and prediction. Moreover, many of these involve significant uncertainty, and hence are modeled using either chance constraints, or robust optim...

  16. The use of HIV post-exposure prophylaxis in forensic medicine following incidents of sexual violence in Hamburg, Germany: a retrospective study.

    PubMed

    Ebert, Julia; Sperhake, Jan Peter; Degen, Olaf; Schröder, Ann Sophie

    2018-05-18

    In Hamburg, Germany, the initiation of HIV post-exposure prophylaxis (HIV PEP) in cases of sexual violence is often carried out by forensic medical specialists (FMS) using the city's unique Hamburg Model. FMS-provided three-day HIV PEP starter packs include a combination of raltegravir and emtricitabine/tenofovir. This study aimed to investigate the practice of offering HIV PEP, reasons for discontinuing treatment, patient compliance, and whether or not potential perpetrators were tested for HIV. We conducted a retrospective study of forensic clinical examinations carried out by the Hamburg Department of Legal Medicine following incidents of sexual violence from 2009 to 2016. One thousand two hundred eighteen incidents of sexual violence were reviewed. In 18% of these cases, HIV PEP was initially prescribed by the FMS. HIV PEP indication depended on the examination occurring within 24 h after the incident, no/unknown condom use, the occurrence of ejaculation, the presence of any injury, and the perpetrator being from population at high risk for HIV. Half of the HIV PEP recipients returned for a reevaluation of the HIV PEP indication by an infectious disease specialist, and just 16% completed the full month of treatment. Only 131 potential perpetrators were tested for HIV, with one found to be HIV positive. No HIV seroconversion was registered among the study sample. Provision of HIV PEP by an FMS after sexual assault ensures appropriate and prompt care for victims. However, patient compliance and completion rates are low. HIV testing of perpetrators must be carried out much more rigorously.

  17. Large-scale quantum photonic circuits in silicon

    NASA Astrophysics Data System (ADS)

    Harris, Nicholas C.; Bunandar, Darius; Pant, Mihir; Steinbrecher, Greg R.; Mower, Jacob; Prabhu, Mihika; Baehr-Jones, Tom; Hochberg, Michael; Englund, Dirk

    2016-08-01

    Quantum information science offers inherently more powerful methods for communication, computation, and precision measurement that take advantage of quantum superposition and entanglement. In recent years, theoretical and experimental advances in quantum computing and simulation with photons have spurred great interest in developing large photonic entangled states that challenge today's classical computers. As experiments have increased in complexity, there has been an increasing need to transition bulk optics experiments to integrated photonics platforms to control more spatial modes with higher fidelity and phase stability. The silicon-on-insulator (SOI) nanophotonics platform offers new possibilities for quantum optics, including the integration of bright, nonclassical light sources, based on the large third-order nonlinearity (χ(3)) of silicon, alongside quantum state manipulation circuits with thousands of optical elements, all on a single phase-stable chip. How large do these photonic systems need to be? Recent theoretical work on Boson Sampling suggests that even the problem of sampling from about 30 identical photons, having passed through an interferometer of hundreds of modes, becomes challenging for classical computers. While experiments of this size are still challenging, the SOI platform has the required component density to enable low-loss and programmable interferometers for manipulating hundreds of spatial modes. Here, we discuss the SOI nanophotonics platform for quantum photonic circuits with hundreds-to-thousands of optical elements and the associated challenges. We compare SOI to competing technologies in terms of requirements for quantum optical systems. We review recent results on large-scale quantum state evolution circuits and strategies for realizing high-fidelity heralded gates with imperfect, practical systems. Next, we review recent results on silicon photonics-based photon-pair sources and device architectures, and we discuss a path towards

  18. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    NASA Astrophysics Data System (ADS)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10^10 on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ^6Li in model spaces up to N_max = 22 and to reveal the ^4He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information gain also from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
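    An exponential IR convergence of the form E(L) ≈ E_inf + a·exp(−2kL) has an Aitken-type closed-form limit when energies are available at three equally spaced IR lengths. The toy below demonstrates that closed form on synthetic data; the paper's actual extrapolations fit k and treat UV cutoffs, which this sketch ignores.

    ```python
    import math

    def ir_extrapolate(e1, e2, e3):
        """Aitken-type limit of three energies computed at equally spaced IR
        lengths L, L + dL, L + 2dL, assuming E(L) = E_inf + a * exp(-2 k L)."""
        return (e1 * e3 - e2 * e2) / (e1 + e3 - 2.0 * e2)

    # synthetic convergence data with a known limit E_inf = -32.0
    e_inf, a, k = -32.0, 5.0, 0.4
    energies = [e_inf + a * math.exp(-2 * k * L) for L in (6.0, 7.0, 8.0)]
    e_lim = ir_extrapolate(*energies)   # recovers -32.0 up to rounding
    ```

    The closed form follows because E_i − E_inf is a geometric sequence for equally spaced L, so the numerator and denominator share the common factor a·r·(1 − r)².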

  19. Climate and weather across scales: singularities and stochastic Levy-Clifford algebra

    NASA Astrophysics Data System (ADS)

    Schertzer, Daniel; Tchiguirinskaia, Ioulia

    2016-04-01

    There have been several attempts to understand and simulate the fluctuations of weather and climate across scales. Beyond mono/uni-scaling approaches (e.g. using spectral analysis), this was done with the help of multifractal techniques that aim to track and simulate the scaling singularities of the underlying equations instead of relying on numerical, scale-truncated simulations of these equations (Royer et al., 2008; Lovejoy and Schertzer, 2013). However, these techniques were limited to scalar fields, instead of dealing directly with a system of complex interactions and nontrivial symmetries. The latter is unfortunately indispensable to answer the challenging question of assessing the climatology of (exo-)planets based on first principles (Pierrehumbert, 2013), or to fully address the question of the relevance of quasi-geostrophic turbulence and to define an effective, fractal dimension of the atmospheric motions (Schertzer et al., 2012). In this talk, we present a plausible candidate based on the combination of Lévy stable processes and Clifford algebra. Together they combine stochastic and structural properties that are strongly universal. They therefore define, with the help of a few physically meaningful parameters, a wide class of stochastic symmetries, as well as high-dimensional vector- or manifold-valued fields respecting these symmetries (Schertzer and Tchiguirinskaia, 2015). Lovejoy, S. & Schertzer, D., 2013. The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge, U.K.: Cambridge University Press. Pierrehumbert, R.T., 2013. Strange news from other stars. Nature Geoscience, 6(2), pp. 81-83. Royer, J.F. et al., 2008. Multifractal analysis of the evolution of simulated precipitation over France in a climate scenario. C.R. Geoscience, 340, 431-440. Schertzer, D. et al., 2012. Quasi-geostrophic turbulence and generalized scale invariance, a theoretical reply. Atmos. Chem. Phys., 12, pp. 327-336. Schertzer, D

  20. Large-scale motions in the universe: Using clusters of galaxies as tracers

    NASA Technical Reports Server (NTRS)

    Gramann, Mirt; Bahcall, Neta A.; Cen, Renyue; Gott, J. Richard

    1995-01-01

    Can clusters of galaxies be used to trace the large-scale peculiar velocity field of the universe? We answer this question by using large-scale cosmological simulations to compare the motions of rich clusters of galaxies with the motion of the underlying matter distribution. Three models are investigated: Omega = 1 and Omega = 0.3 cold dark matter (CDM), and Omega = 0.3 primeval baryonic isocurvature (PBI) models, all normalized to the Cosmic Background Explorer (COBE) background fluctuations. We compare the cluster and mass distributions of peculiar velocities, bulk motions, velocity dispersions, and Mach numbers as a function of scale for R greater than or = 50/h Mpc. We also present the large-scale velocity and potential maps of clusters and of the matter. We find that clusters of galaxies trace the large-scale velocity field well and can serve as an efficient tool to constrain cosmological models. The recently reported bulk motion of clusters, 689 +/- 178 km/s on an approximately 150/h Mpc scale (Lauer & Postman 1994), is larger than expected in any of the models studied (less than or = 190 +/- 78 km/s).
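    A bulk motion such as the one quoted above is simply the magnitude of the sample-averaged peculiar velocity vector. A minimal sketch with made-up cluster velocities in km/s (not the Lauer & Postman data):

    ```python
    import math

    def bulk_flow(velocities):
        """Magnitude of the bulk (mean) peculiar velocity of a sample,
        given 3D velocity vectors in km/s."""
        n = len(velocities)
        mean_v = [sum(v[i] for v in velocities) / n for i in range(3)]
        return math.sqrt(sum(c * c for c in mean_v))

    # three hypothetical cluster peculiar velocities (vx, vy, vz) in km/s
    sample = [(300.0, 100.0, -50.0), (250.0, -80.0, 20.0), (280.0, 40.0, 10.0)]
    b = bulk_flow(sample)   # about 277 km/s for these numbers
    ```

    In practice the average is taken over all clusters within a given radius, often with survey-dependent weights, which this sketch omits.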

  1. "Large"- vs small-scale friction control in turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp

    2017-11-01

    We reconsider the "large-scale" control scheme proposed by Hussain and co-workers (Phys. Fluids 10, 1049-1051 (1998) and Phys. Rev. Fluids 2, 062601 (2017)), using new direct numerical simulations (DNS). The DNS are performed in a turbulent channel at friction Reynolds numbers Re_τ of up to 550 in order to eliminate low-Reynolds-number effects. The purpose of the present contribution is to re-assess this control method in the light of more modern developments in the field, in particular related to the discovery of (very) large-scale motions. The goals of the paper are as follows: First, we want to better characterise the physics of the control and assess which external contributions (vortices, forcing, wall motion) are actually needed. Then, we investigate the optimal parameters and, finally, determine which aspects of this control technique actually scale in outer units and can therefore be of use in practical applications. In addition to discussing the mentioned drag-reduction effects, the present contribution will also address the potential effect of the naturally occurring large-scale motions on frictional drag, and give indications on the physical processes for potential drag reduction possible at all Reynolds numbers.

  2. Power suppression at large scales in string inflation

    NASA Astrophysics Data System (ADS)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2013-12-01

    We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics which involves a sharp transition from a fast-roll power law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need of extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large scale power loss. We compute the effects of this pivot for example cases and demonstrate how magnitude and duration of this effect depend on model parameters.

  3. Imprint of thawing scalar fields on the large scale galaxy overdensity

    NASA Astrophysics Data System (ADS)

    Dinda, Bikash R.; Sen, Anjan A.

    2018-04-01

    We investigate the observed galaxy power spectrum for the thawing class of scalar field models taking into account various general relativistic corrections that occur on very large scales. We consider the full general relativistic perturbation equations for the matter as well as the dark energy fluid. We form a single autonomous system of equations containing both the background and the perturbed equations of motion, which we subsequently solve for different scalar field potentials. First we study the percentage deviation from the ΛCDM model for different cosmological parameters as well as in the observed galaxy power spectra on different scales in scalar field models for various choices of scalar field potentials. Interestingly, the difference in background expansion results in an enhancement of power over ΛCDM on small scales, whereas the inclusion of general relativistic (GR) corrections results in a suppression of power relative to ΛCDM on large scales. This can be useful to distinguish scalar field models from ΛCDM with future optical/radio surveys. We also compare the observed galaxy power spectra for tracking and thawing types of scalar field using some particular choices for the scalar field potentials. We show that thawing and tracking models can have large differences in observed galaxy power spectra on large scales and for smaller redshifts due to different GR effects. But on smaller scales and for larger redshifts, the difference is small and is mainly due to the difference in background expansion.

  4. Enabling large-scale viscoelastic calculations via neural network acceleration

    NASA Astrophysics Data System (ADS)

    Robinson DeVries, P.; Thompson, T. B.; Meade, B. J.

    2017-12-01

    One of the most significant challenges involved in efforts to understand the effects of repeated earthquake cycle activity are the computational costs of large-scale viscoelastic earthquake cycle models. Deep artificial neural networks (ANNs) can be used to discover new, compact, and accurate computational representations of viscoelastic physics. Once found, these efficient ANN representations may replace computationally intensive viscoelastic codes and accelerate large-scale viscoelastic calculations by more than 50,000%. This magnitude of acceleration enables the modeling of geometrically complex faults over thousands of earthquake cycles across wider ranges of model parameters and at larger spatial and temporal scales than have been previously possible. Perhaps most interestingly from a scientific perspective, ANN representations of viscoelastic physics may lead to basic advances in the understanding of the underlying model phenomenology. We demonstrate the potential of artificial neural networks to illuminate fundamental physical insights with specific examples.
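    The surrogate idea can be sketched with a tiny one-hidden-layer network trained by plain stochastic gradient descent on a toy relaxation curve y = exp(−t). The architecture, sizes, and data below are illustrative stand-ins, not the deep networks or viscoelastic physics of the paper.

    ```python
    import math
    import random

    def train_surrogate(data, hidden=8, epochs=3000, lr=0.05, seed=0):
        """Fit a 1-hidden-layer tanh network y(t) to (t, y) pairs by SGD,
        as a miniature stand-in for an ANN surrogate of an expensive model."""
        rng = random.Random(seed)
        w1 = [rng.uniform(-1, 1) for _ in range(hidden)]
        b1 = [rng.uniform(-1, 1) for _ in range(hidden)]
        w2 = [rng.uniform(-1, 1) for _ in range(hidden)]
        b2 = 0.0
        for _ in range(epochs):
            for t, y in data:
                h = [math.tanh(w1[j] * t + b1[j]) for j in range(hidden)]
                pred = sum(w2[j] * h[j] for j in range(hidden)) + b2
                err = pred - y
                for j in range(hidden):
                    dh = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
                    w2[j] -= lr * err * h[j]
                    w1[j] -= lr * dh * t
                    b1[j] -= lr * dh
                b2 -= lr * err
        def predict(t):
            return sum(w2[j] * math.tanh(w1[j] * t + b1[j])
                       for j in range(hidden)) + b2
        return predict

    # surrogate for a toy relaxation curve y = exp(-t) on t in [0, 2]
    data = [(0.1 * i, math.exp(-0.1 * i)) for i in range(21)]
    surrogate = train_surrogate(data)
    ```

    Once trained, evaluating the surrogate costs a handful of arithmetic operations per query, which is the source of the speed-up the abstract describes.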

  5. Cosmic Rays and Gamma-Rays in Large-Scale Structure

    NASA Astrophysics Data System (ADS)

    Inoue, Susumu; Nagashima, Masahiro; Suzuki, Takeru K.; Aoki, Wako

    2004-12-01

    During the hierarchical formation of large scale structure in the universe, the progressive collapse and merging of dark matter should inevitably drive shocks into the gas, with nonthermal particle acceleration as a natural consequence. Two topics in this regard are discussed, emphasizing what important things nonthermal phenomena may tell us about the structure formation (SF) process itself. 1. Inverse Compton gamma-rays from large scale SF shocks and non-gravitational effects, and the implications for probing the warm-hot intergalactic medium. We utilize a semi-analytic approach based on Monte Carlo merger trees that treats both merger and accretion shocks self-consistently. 2. Production of 6Li by cosmic rays from SF shocks in the early Galaxy, and the implications for probing Galaxy formation and uncertain physics on sub-Galactic scales. Our new observations of metal-poor halo stars with the Subaru High Dispersion Spectrograph are highlighted.

  6. Transparent and Flexible Large-scale Graphene-based Heater

    NASA Astrophysics Data System (ADS)

    Kang, Junmo; Lee, Changgu; Kim, Young-Jin; Choi, Jae-Boong; Hong, Byung Hee

    2011-03-01

    We report a transparent and flexible heater with high optical transmittance and low sheet resistance based on graphene films, showing outstanding thermal and electrical properties. The large-scale graphene films were grown on Cu foil by chemical vapor deposition and transferred to transparent substrates by multiple stacking. A wet chemical doping process enhanced the electrical properties, yielding a sheet resistance as low as 35 ohm/sq at 88.5% transmittance. The temperature response usually depends on the dimensions and the sheet resistance of the graphene-based heater. We show that a 4x4 cm2 heater can reach 80 °C within 40 seconds and that a large-scale (9x9 cm2) heater shows uniform heating performance, which was measured using a thermocouple and an infrared camera. These heaters would be very useful for defogging systems and smart windows.
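    The heater's electrical behaviour follows from the sheet resistance: a rectangular film has R = R_s·L/W, so a square film presents R = R_s regardless of its size, and the Joule power is P = V²/R. The 12 V drive voltage below is an assumed figure for illustration, not a value from the abstract.

    ```python
    def heater_power(v, sheet_res, length, width):
        """Joule power of a rectangular film heater:
        R = R_s * L / W (ohms), P = V^2 / R (watts)."""
        resistance = sheet_res * length / width
        return v ** 2 / resistance

    # 4 cm x 4 cm film at 35 ohm/sq, driven at an assumed 12 V;
    # a square film gives R = 35 ohm, so P = 144 / 35 ~ 4.1 W
    p = heater_power(12.0, 35.0, 0.04, 0.04)
    ```

    Dividing P by the film area gives the areal power density that, together with losses to the environment, sets the steady-state temperature.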

  7. Lagrangian space consistency relation for large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horn, Bart; Hui, Lam; Xiao, Xiao, E-mail: bh2478@columbia.edu, E-mail: lh399@columbia.edu, E-mail: xx2146@columbia.edu

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  8. Stability of large-scale systems with stable and unstable subsystems.

    NASA Technical Reports Server (NTRS)

    Grujic, Lj. T.; Siljak, D. D.

    1972-01-01

    The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
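    A standard algebraic criterion in this literature checks that the aggregate comparison matrix (subsystem stability margins on the diagonal, negated interconnection bounds off it) is an M-matrix, i.e. that all of its leading principal minors are positive. The following is a generic sketch of that style of test with made-up numbers, not necessarily the exact criterion of the paper.

    ```python
    def det(m):
        """Determinant by Laplace expansion (fine for small aggregate matrices)."""
        if len(m) == 1:
            return m[0][0]
        return sum((-1) ** j * m[0][j]
                   * det([row[:j] + row[j + 1:] for row in m[1:]])
                   for j in range(len(m)))

    def is_m_matrix(w):
        """Kotelyanskii-style test: all leading principal minors positive."""
        return all(det([row[:k] for row in w[:k]]) > 0
                   for k in range(1, len(w) + 1))

    # hypothetical aggregate matrix: diagonal entries are subsystem stability
    # margins, off-diagonal entries are negated interconnection gain bounds
    W = [[1.0, -0.3, -0.2],
         [-0.4, 1.0, -0.3],
         [-0.2, -0.1, 1.0]]
    stable = is_m_matrix(W)   # True: interconnections are weak enough
    ```

    If the off-diagonal couplings grow too strong, some leading minor turns negative and the test fails, mirroring the loss of composite stability.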

  9. The Emergence of Dominant Design(s) in Large Scale Cyber-Infrastructure Systems

    ERIC Educational Resources Information Center

    Diamanti, Eirini Ilana

    2012-01-01

    Cyber-infrastructure systems are integrated large-scale IT systems designed with the goal of transforming scientific practice by enabling multi-disciplinary, cross-institutional collaboration. Their large scale and socio-technical complexity make design decisions for their underlying architecture practically irreversible. Drawing on three…

  10. Large-Scale Coronal Heating from "Cool" Activity in the Solar Magnetic Network

    NASA Technical Reports Server (NTRS)

    Falconer, D. A.; Moore, R. L.; Porter, J. G.; Hathaway, D. H.

    1999-01-01

    In Fe XII images from SOHO/EIT, the quiet solar corona shows structure on scales ranging from sub-supergranular (i.e., bright points and coronal network) to multi-supergranular (large-scale corona). In Falconer et al. 1998 (ApJ, 501, 386) we suppressed the large-scale background and found that the network-scale features are predominantly rooted in the magnetic network lanes at the boundaries of the supergranules. Taken together, the coronal network emission and bright point emission are only about 5% of the entire quiet solar coronal Fe XII emission. Here we investigate the relationship between the large-scale corona and the network as seen in three different EIT filters (He II, Fe IX-X, and Fe XII). Using the median-brightness contour, we divide the large-scale Fe XII corona into dim and bright halves, and find that the bright-half/dim-half brightness ratio is about 1.5. We also find that the bright half, relative to the dim half, has 10 times greater total bright point Fe XII emission, 3 times greater Fe XII network emission, 2 times greater Fe IX-X network emission, 1.3 times greater He II network emission, and 1.5 times more magnetic flux. Also, the cooler network (He II) radiates an order of magnitude more energy than the hotter coronal network (Fe IX-X and Fe XII). From these results we infer that: 1) The heating of the network and the heating of the large-scale corona each increase roughly linearly with the underlying magnetic flux. 2) The production of network coronal bright points and the heating of the coronal network each increase nonlinearly with the magnetic flux. 3) The heating of the large-scale corona is driven by widespread cooler network activity rather than by the exceptional network activity that produces the network coronal bright points and the coronal network. 4) The large-scale corona is heated by a nonthermal process, since the driver of its heating is cooler than it is. This work was funded by the Solar Physics Branch of NASA's office of

  11. Large and small-scale structures in Saturn's rings

    NASA Astrophysics Data System (ADS)

    Albers, N.; Rehnberg, M. E.; Brown, Z. L.; Sremcevic, M.; Esposito, L. W.

    2017-09-01

    Observations made by the Cassini spacecraft have revealed both large- and small-scale structures in Saturn's rings in unprecedented detail. Analysis of high-resolution measurements by the Cassini Ultraviolet Imaging Spectrograph (UVIS) High Speed Photometer (HSP) and the Imaging Science Subsystem (ISS) shows an abundance of intrinsic small-scale structures (or clumping) across the entire ring system. These include self-gravity wakes (50-100 m), sub-km structure at the A and B ring edges, and "straw"/"ropy" structures (1-3 km).

  12. Large-Scale Traffic Microsimulation From An MPO Perspective

    DOT National Transportation Integrated Search

    1997-01-01

    One potential advancement of the four-step travel model process is the forecasting and simulation of individual activities and travel. A common concern with such an approach is that the data and computational requirements for a large-scale, regional ...

  13. Scale up of large ALON® and spinel windows

    NASA Astrophysics Data System (ADS)

    Goldman, Lee M.; Kashalikar, Uday; Ramisetty, Mohan; Jha, Santosh; Sastri, Suri

    2017-05-01

    Aluminum Oxynitride (ALON® Transparent Ceramic) and Magnesia Aluminate Spinel (Spinel) combine broadband transparency with excellent mechanical properties. Their cubic structure means that they are transparent in their polycrystalline form, allowing them to be manufactured by conventional powder processing techniques. Surmet has scaled up its ALON® production capability to produce and deliver windows as large as 4.4 sq ft, and we have also produced our first 6 sq ft window. We are in the process of producing 7 sq ft ALON® window blanks for armor applications, and scale-up to even larger, high optical quality blanks for Recce window applications is underway. Surmet also produces spinel for customers that require superior transmission at the longer wavelengths in the mid-wave infrared (MWIR). Spinel windows have been limited to smaller sizes than have been achieved with ALON®. To date, the largest spinel window produced is 11x18 in, and windows of 14x20 in are currently in process. Surmet is now scaling up its spinel processing capability to produce high quality window blanks as large as 19x27 in for sensor applications.

  14. Cross-indexing of binary SIFT codes for large-scale image search.

    PubMed

    Liu, Zhen; Li, Houqiang; Zhang, Liyan; Zhou, Wengang; Tian, Qi

    2014-05-01

    In recent years, there has been growing interest in mapping visual features into compact binary codes for applications on large-scale image collections. Encoding high-dimensional data as compact binary codes reduces the memory cost of storage. Besides, it benefits computational efficiency, since similarity can be measured efficiently by Hamming distance. In this paper, we propose a novel flexible scale invariant feature transform (SIFT) binarization (FSB) algorithm for large-scale image search. The FSB algorithm explores the magnitude patterns of the SIFT descriptor. It is unsupervised, and the generated binary codes are demonstrated to be distance-preserving. Besides, we propose a new searching strategy to find target features based on cross-indexing in the binary SIFT space and the original SIFT space. We evaluate our approach on two publicly released data sets. The experiments on a large-scale partial-duplicate image retrieval system demonstrate the effectiveness and efficiency of the proposed algorithm.
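    The efficiency argument above rests on comparing packed binary codes with XOR plus popcount. A minimal generic sketch of that comparison (illustrative only; this is not the paper's FSB binarization):

    ```python
    import random

    # Generic sketch: a 128-bit binary code is stored as a Python int;
    # similarity between two codes is their Hamming distance, computed
    # with XOR + popcount. NOT the paper's FSB algorithm.

    def hamming(a: int, b: int) -> int:
        # XOR leaves a 1 bit wherever the two codes differ
        return bin(a ^ b).count("1")

    random.seed(0)
    bits = [random.randint(0, 1) for _ in range(128)]  # 128-bit SIFT-like code
    flipped = bits[:]
    for i in range(10):                                # corrupt 10 bits
        flipped[i] ^= 1

    code_a = int("".join(map(str, bits)), 2)
    code_b = int("".join(map(str, flipped)), 2)
    print(hamming(code_a, code_b))  # -> 10
    ```

    Because the whole comparison is two machine-word operations per 64 bits, scanning millions of codes is far cheaper than computing Euclidean distances between full 128-dimensional float descriptors.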

  15. Survey of decentralized control methods. [for large scale dynamic systems

    NASA Technical Reports Server (NTRS)

    Athans, M.

    1975-01-01

    An overview is presented of the types of problems that are being considered by control theorists in the area of dynamic large scale systems with emphasis on decentralized control strategies. Approaches that deal directly with decentralized decision making for large scale systems are discussed. It is shown that future advances in decentralized system theory are intimately connected with advances in the stochastic control problem with nonclassical information pattern. The basic assumptions and mathematical tools associated with the latter are summarized, and recommendations concerning future research are presented.

  16. Large-scale synthesis of high-quality hexagonal boron nitride nanosheets for large-area graphene electronics.

    PubMed

    Lee, Kang Hyuck; Shin, Hyeon-Jin; Lee, Jinyeong; Lee, In-yeal; Kim, Gil-Ho; Choi, Jae-Young; Kim, Sang-Woo

    2012-02-08

    Hexagonal boron nitride (h-BN) has received a great deal of attention as a substrate material for high-performance graphene electronics because it has an atomically smooth surface, a lattice constant similar to that of graphene, large optical phonon modes, and a large electrical band gap. Herein, we report the large-scale synthesis of high-quality h-BN nanosheets in a chemical vapor deposition (CVD) process by controlling the surface morphologies of the copper (Cu) catalysts. It was found that morphology control of the Cu foil is critical for the formation of pure h-BN nanosheets as well as for the improvement of their crystallinity. For the first time, we demonstrate the performance enhancement of CVD-based graphene devices with large-scale h-BN nanosheets. The mobility of the graphene device on the h-BN nanosheets was increased threefold compared to that of a device without the h-BN nanosheets. The on-off ratio of the drain current is 2 times higher than that of the graphene device without h-BN. This work suggests that high-quality h-BN nanosheets based on CVD are very promising for high-performance large-area graphene electronics. © 2012 American Chemical Society

  17. Scale-Similar Models for Large-Eddy Simulations

    NASA Technical Reports Server (NTRS)

    Sarghini, F.

    1999-01-01

    Scale-similar models employ multiple filtering operations to identify the smallest resolved scales, which have been shown to be the most active in the interaction with the unresolved subgrid scales. They do not assume that the principal axes of the strain-rate tensor are aligned with those of the subgrid-scale stress (SGS) tensor, and allow the explicit calculation of the SGS energy. They can provide backscatter in a numerically stable and physically realistic manner, and predict SGS stresses in regions that are well correlated with the locations where large Reynolds stress occurs. In this paper, eddy viscosity and mixed models, which include an eddy-viscosity part as well as a scale-similar contribution, are applied to the simulation of two flows, a high Reynolds number plane channel flow, and a three-dimensional, nonequilibrium flow. The results show that simulations without models or with the Smagorinsky model are unable to predict nonequilibrium effects. Dynamic models provide an improvement of the results: the adjustment of the coefficient results in more accurate prediction of the perturbation from equilibrium. The Lagrangian-ensemble approach [Meneveau et al., J. Fluid Mech. 319, 353 (1996)] is found to be very beneficial. Models that included a scale-similar term and a dissipative one, as well as the Lagrangian ensemble averaging, gave results in the best agreement with the direct simulation and experimental data.

  18. Research on the Construction Management and Sustainable Development of Large-Scale Scientific Facilities in China

    NASA Astrophysics Data System (ADS)

    Guiquan, Xi; Lin, Cong; Xuehui, Jin

    2018-05-01

    As an important platform for scientific and technological development, large-scale scientific facilities are the cornerstone of technological innovation and a guarantee of economic and social development. Research on the management of large-scale scientific facilities can play a key role in scientific research, sociology, and national strategy. This paper reviews the characteristics of large-scale scientific facilities and summarizes the development status of China's large-scale scientific facilities. Finally, the construction, management, operation, and evaluation of large-scale scientific facilities are analyzed from the perspective of sustainable development.

  19. Development and analysis of prognostic equations for mesoscale kinetic energy and mesoscale (subgrid scale) fluxes for large-scale atmospheric models

    NASA Technical Reports Server (NTRS)

    Avissar, Roni; Chen, Fei

    1993-01-01

    Mesoscale circulation processes generated by landscape discontinuities (e.g., sea breezes) are not represented in large-scale atmospheric models (e.g., general circulation models), whose grid-scale resolution is too coarse to resolve them. With the assumption that atmospheric variables can be separated into large-scale, mesoscale, and turbulent-scale components, a set of prognostic equations applicable in large-scale atmospheric models for momentum, temperature, moisture, and any other gaseous or aerosol material, which includes both mesoscale and turbulent fluxes, is developed. Prognostic equations are also developed for these mesoscale fluxes, which indicate a closure problem and, therefore, require a parameterization. For this purpose, the mean mesoscale kinetic energy (MKE) per unit of mass is used, defined as E-tilde = 0.5 <u'_i u'_i>, where u'_i represents the three Cartesian components of a mesoscale circulation, the angle brackets denote the grid-scale, horizontal averaging operator in the large-scale model, and a tilde indicates a corresponding large-scale mean value. A prognostic equation is developed for E-tilde, and an analysis of the different terms of this equation indicates that the mesoscale vertical heat flux, the mesoscale pressure correlation, and the interaction between turbulence and mesoscale perturbations are the major terms that affect the time tendency of E-tilde. A state-of-the-art mesoscale atmospheric model is used to investigate the relationship between MKE, landscape discontinuities (as characterized by the spatial distribution of heat fluxes at the earth's surface), and mesoscale sensible and latent heat fluxes in the atmosphere. MKE is compared with turbulence kinetic energy to illustrate the importance of mesoscale processes as compared to turbulent processes. This analysis emphasizes the potential use of MKE to bridge between landscape discontinuities and mesoscale fluxes and, therefore, to parameterize mesoscale fluxes.
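    Rendered in conventional notation, the scale decomposition and the MKE definition used in this abstract read as follows (a reconstruction in standard notation, not copied from the paper itself):

    ```latex
    % Each variable splits into large-scale, mesoscale, and turbulent parts
    u_i = \tilde{u}_i + u'_i + u''_i ,
    % and the mean mesoscale kinetic energy per unit mass is
    \tilde{E} = \tfrac{1}{2}\,\langle u'_i\, u'_i \rangle ,
    % where \langle\cdot\rangle is the grid-scale horizontal averaging
    % operator of the large-scale model and the tilde denotes the
    % corresponding large-scale mean value.
    ```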

  20. The Relevancy of Large-Scale, Quantitative Methodologies in Middle Grades Education Research

    ERIC Educational Resources Information Center

    Mertens, Steven B.

    2006-01-01

    This article examines the relevancy of large-scale, quantitative methodologies in middle grades education research. Based on recommendations from national advocacy organizations, the need for more large-scale, quantitative research, combined with the application of more rigorous methodologies, is presented. Subsequent sections describe and discuss…

  1. Generalized Chirp Scaling Combined with Baseband Azimuth Scaling Algorithm for Large Bandwidth Sliding Spotlight SAR Imaging

    PubMed Central

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-01-01

    This paper presents an efficient and precise imaging algorithm for the large bandwidth sliding spotlight synthetic aperture radar (SAR). The existing sub-aperture processing method based on the baseband azimuth scaling (BAS) algorithm cannot cope with the high-order phase coupling along the range and azimuth dimensions. This coupling problem causes defocusing along the range and azimuth dimensions. This paper proposes a generalized chirp scaling (GCS)-BAS processing algorithm, which is based on the GCS algorithm. It successfully mitigates the defocusing along the range dimension of a sub-aperture of the large bandwidth sliding spotlight SAR, as well as the high-order phase coupling along the range and azimuth dimensions. Additionally, azimuth focusing can be achieved by this azimuth scaling method. Simulation results demonstrate the ability of the GCS-BAS algorithm to process large bandwidth sliding spotlight SAR data. It is proven that great improvements in focus depth and imaging accuracy are obtained via the GCS-BAS algorithm. PMID:28555057

  2. Solving large scale structure in ten easy steps with COLA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J., E-mail: stassev@cfa.harvard.edu, E-mail: matiasz@ias.edu, E-mail: deisenstein@cfa.harvard.edu

    2013-06-01

    We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_sun/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_sun/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
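    The frame-change idea behind COLA can be sketched with a one-dimensional toy problem (this is an illustration of the principle, not the actual COLA scheme): integrate only the residual displacement about an analytic reference trajectory that plays the role of LPT. When the reference captures the bulk motion, a handful of coarse timesteps suffices, whereas brute-force integration at the same step count does not.

    ```python
    import math

    # Toy sketch of the COLA idea: solve for the residual about a known
    # reference trajectory instead of the full trajectory. The "force"
    # here is a toy harmonic one; x_ref plays the role of the LPT solution.

    def accel(x):
        return -x                    # toy force (stands in for the gravity solver)

    def x_ref(t):
        return math.cos(t)           # analytic reference trajectory

    def a_ref(t):
        return -math.cos(t)          # its acceleration

    def evolve_residual(n_steps, t_end=2 * math.pi):
        dt = t_end / n_steps
        r = v = t = 0.0              # residual displacement/velocity start at zero
        for _ in range(n_steps):
            v += (accel(x_ref(t) + r) - a_ref(t)) * dt   # residual force only
            r += v * dt
            t += dt
        return x_ref(t_end) + r

    def evolve_brute(n_steps, t_end=2 * math.pi):
        dt = t_end / n_steps
        x, v = 1.0, 0.0              # same initial condition, full integration
        for _ in range(n_steps):
            v += accel(x) * dt
            x += v * dt
        return x

    # With 10 coarse steps the residual scheme is essentially exact
    # (the reference carries the large-scale motion), brute force is not.
    print(evolve_residual(10), evolve_brute(10))
    ```

    The exact answer at t = 2π is 1.0; the residual scheme hits it because the residual force vanishes when the reference is good, mirroring how COLA lets LPT carry the large-scale dynamics while the N-body solver handles only small-scale corrections.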

  3. Large-scale PACS implementation.

    PubMed

    Carrino, J A; Unkel, P J; Miller, I D; Bowser, C L; Freckleton, M W; Johnson, T G

    1998-08-01

    The transition to filmless radiology is a much more formidable task than making the request for proposal to purchase a Picture Archiving and Communications System (PACS). The Department of Defense and the Veterans Administration have been pioneers in the transformation of medical diagnostic imaging to the electronic environment. Many civilian sites are expected to implement large-scale PACS in the next five to ten years. This presentation will relate the empirical insights gleaned at our institution from a large-scale PACS implementation. Our PACS integration was introduced into a fully operational department (not a new hospital) in which work flow had to continue with minimal impact. Impediments to user acceptance will be addressed. The critical components of this enormous task will be discussed. The topics covered during this session will include issues such as phased implementation, DICOM (digital imaging and communications in medicine) standard-based interaction of devices, hospital information system (HIS)/radiology information system (RIS) interfaces, user approval, networking, workstation deployment, and backup procedures. The presentation will make specific suggestions regarding the implementation team, operating instructions, quality control (QC), training, and education. The concept of identifying key functional areas is relevant to transitioning the facility to be entirely on line. Special attention must be paid to specific functional areas, such as the operating rooms and trauma rooms, where the clinical requirements may not match the PACS capabilities. The printing of films may be necessary in certain circumstances. The integration of teleradiology and remote clinics into a PACS is a salient topic with respect to the overall role of the radiologists providing rapid consultation. A Web-based server allows a clinician to review images and reports on a desktop (personal) computer and thus reduce the number of dedicated PACS review workstations.
This session

  4. Scaling laws for mixing and dissipation in unforced rotating stratified turbulence

    NASA Astrophysics Data System (ADS)

    Pouquet, A.; Rosenberg, D.; Marino, R.; Herbert, C.

    2018-06-01

    We present a model for the scaling of mixing in weakly rotating stratified flows characterized by their Rossby, Froude and Reynolds numbers Ro, Fr, Re. It is based on quasi-equipartition between kinetic and potential modes, sub-dominant vertical velocity, and lessening of the energy transfer to small scales as measured by the ratio r_E of kinetic energy dissipation to its dimensional expression. We determine their domains of validity in a numerical study of the unforced Boussinesq equations, mostly on grids of 1024^3 points, with Ro/Fr > 2.5 and 1600 < Re < 1.9×10^4; the Prandtl number is one, and initial conditions are either isotropic and at large scale for the velocity, and zero for the temperature θ, or in geostrophic balance. Three regimes in Fr are observed: dominant waves, eddy-wave interactions, and strong turbulence. A wave-turbulence balance for the transfer time leads to r_E growing linearly with Fr in the intermediate regime, with a saturation at ~0.3 or more, depending on initial conditions, for larger Froude numbers. The Ellison scale is also found to scale linearly with Fr, and the flux Richardson number R_f transitions at roughly the same parameter values. Putting together the three relationships of the model allows for the prediction of the mixing efficiency scaling as Fr^-2 ~ R_B^-1 in the low and intermediate regimes, whereas for higher Fr it scales as R_B^-1/2, as already observed: as turbulence strengthens, r_E ~ 1, the velocity is isotropic, and smaller buoyancy fluxes altogether correspond to a decoupling of velocity and temperature fluctuations, the latter becoming passive.
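    Collecting the model's regimes, and writing R_B = Re Fr^2 for the buoyancy Reynolds number (standard notation, assumed here rather than quoted from the paper), the predicted mixing-efficiency scaling can be summarized as:

    ```latex
    \text{mixing efficiency} \;\propto\;
    \begin{cases}
    Fr^{-2} \sim R_B^{-1}, & \text{wave-dominated and intermediate regimes},\\[2pt]
    R_B^{-1/2}, & \text{strong turbulence}\ (r_E \sim 1),
    \end{cases}
    \qquad R_B \equiv Re\,Fr^{2}.
    ```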

  5. A survey on routing protocols for large-scale wireless sensor networks.

    PubMed

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. "Large-scale" mainly means a large coverage area or a high density of network nodes. Accordingly, routing protocols must scale well as the network scope extends and the node density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are the hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. The hierarchical routing protocols are proved to be more energy-efficient than flat ones, in which all the nodes play the same role, especially in terms of the data aggregation and the flooding of the control packets. With focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover a comparison of each routing protocol is conducted to demonstrate the differences between the protocols in terms of message complexity, memory requirements, localization, data aggregation, clustering manner and
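    One classic hierarchical scheme of the kind this survey covers is LEACH, which balances energy by rotating the energy-hungry cluster-head role. A minimal sketch of its probabilistic election threshold (the parameter values are assumptions for illustration, not taken from the survey):

    ```python
    import random

    # LEACH-style cluster-head election sketch. Each round a node becomes
    # cluster head with probability given by a threshold that rises over a
    # rotation cycle, so that every node serves as head once per 1/P rounds
    # and no node drains its battery relaying for the others indefinitely.

    P = 0.1  # desired fraction of cluster heads per round (assumed value)

    def threshold(round_no: int) -> float:
        # Nodes that already served this cycle are excluded in full LEACH;
        # here we model only the rising threshold for still-eligible nodes.
        return P / (1 - P * (round_no % round(1 / P)))

    def elect_heads(nodes, round_no, rng):
        t = threshold(round_no)
        return [n for n in nodes if rng.random() < t]

    rng = random.Random(42)
    nodes = list(range(100))                    # still-eligible node ids
    heads = elect_heads(nodes, round_no=0, rng=rng)
    print(len(heads))                           # roughly P * 100 heads
    ```

    Non-head (low-level) nodes then join the nearest head, which aggregates their readings before any long-range transmission, matching the survey's description of high-level aggregation nodes and low-level sensing nodes.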

  6. Field-aligned currents and large-scale magnetospheric electric fields

    NASA Technical Reports Server (NTRS)

    Dangelo, N.

    1979-01-01

    The existence of field-aligned currents (FAC) at northern and southern high latitudes was confirmed by a number of observations, most clearly by experiments on the TRIAD and ISIS 2 satellites. The high-latitude FAC system is used to relate what is presently known about the large-scale pattern of high-latitude ionospheric electric fields and their relation to solar wind parameters. Recently a simplified model was presented for polar cap electric fields. The model is of considerable help in visualizing the large-scale features of FAC systems. A summary of the FAC observations is given. The simplified model is used to visualize how the FAC systems are driven by their generators.

  7. The large-scale effect of environment on galactic conformity

    NASA Astrophysics Data System (ADS)

    Sun, Shuangpeng; Guo, Qi; Wang, Lan; Lacey, Cedric G.; Wang, Jie; Gao, Liang; Pan, Jun

    2018-07-01

    We use a volume-limited galaxy sample from the Sloan Digital Sky Survey Data Release 7 to explore the dependence of galactic conformity on the large-scale environment, measured on ~4 Mpc scales. We find that the star formation activity of neighbour galaxies depends more strongly on the environment than on the activity of their primary galaxies. In underdense regions most neighbour galaxies tend to be active, while in overdense regions neighbour galaxies are mostly passive, regardless of the activity of their primary galaxies. At a given stellar mass, passive primary galaxies reside in higher density regions than active primary galaxies, leading to the apparently strong conformity signal. The dependence of the activity of neighbour galaxies on environment can be explained by the corresponding dependence of the fraction of satellite galaxies. Similar results are found for galaxies in a semi-analytical model, suggesting that no new physics is required to explain the observed large-scale conformity.

  8. The three-point function as a probe of models for large-scale structure

    NASA Astrophysics Data System (ADS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1994-04-01

    We analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly nonlinear regime. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ≈ 20 h^-1 Mpc, e.g., low matter-density (nonzero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. We show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r greater than or approximately R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
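    For reference, the hierarchical three-point amplitude discussed above is conventionally defined (standard large-scale-structure notation, not quoted from the paper) as:

    ```latex
    Q_3 \;=\;
    \frac{\zeta_{123}}
         {\xi_{12}\,\xi_{23} \;+\; \xi_{23}\,\xi_{31} \;+\; \xi_{31}\,\xi_{12}} ,
    ```

    where ζ_123 is the connected three-point correlation function of the galaxy density field and ξ_ij the two-point function between points i and j; hierarchical clustering corresponds to Q_3 being roughly constant with scale, which is why a rapid fall-off of Q_J at large r signals scale-dependent bias rather than true large-scale power.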

  9. Forum: The Rise of International Large-Scale Assessments and Rationales for Participation

    ERIC Educational Resources Information Center

    Addey, Camilla; Sellar, Sam; Steiner-Khamsi, Gita; Lingard, Bob; Verger, Antoni

    2017-01-01

    This Forum discusses the significant growth of international large-scale assessments (ILSAs) since the mid-1990s. Addey and Sellar's contribution ("A Framework for Analysing the Multiple Rationales for Participating in International Large-Scale Assessments") outlines a framework of rationales for participating in ILSAs and examines the…

  10. Quantification of soy protein using the isotope method (δ(13)C and δ(15)N) for commercial brands of beef hamburger.

    PubMed

    Ducatti, Rhani; de Almeida Nogueira Pinto, José Paes; Sartori, Maria Márcia Pereira; Ducatti, Carlos

    2016-12-01

    Hamburgers (beef patties) may be adulterated through the overuse of protein extenders. Among vegetable proteins, soy protein is the best substitute for animal protein. These ingredients help to reduce the cost of producing the final product, and they maximize profits for fraudulent industries. Moreover, the ingestion of soy or other non-meat proteins by allergic individuals may present a health risk. In addition, monitoring by supervisory bodies is hampered by a lack of appropriate analytical methodologies. Within this context, the aim of this study was to determine and quantify the levels of added soy protein by determination of the (15)N and (13)C stable isotopes. A total of 100 beef hamburger samples from 10 commercial brands were analyzed. Only three samples of the G brand were within the standards set by the Brazilian legislation. The remaining 97 samples from the 10 commercial brands contained >4% soy protein; therefore, they are adulterated and not in compliance with the current legislation. Copyright © 2016 Elsevier Ltd. All rights reserved.
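    Quantification with two stable-isotope tracers of this kind typically reduces to a two-end-member mixing model: the sample's δ value is a mass-weighted blend of the meat and soy signatures, so the soy fraction follows by linear inversion. A hedged sketch (the end-member δ values below are invented placeholders, not the study's calibration data):

    ```python
    # Two-end-member isotope mixing: invert the measured delta 13C of a
    # sample for the soy fraction. End-member values are PLACEHOLDERS for
    # illustration, NOT the calibration values used in the study.

    DELTA_13C_BEEF = -11.0  # assumed per-mil signature (C4-fed beef)
    DELTA_13C_SOY = -26.0   # assumed per-mil signature (C3 plant, soy)

    def soy_fraction(delta_sample: float) -> float:
        """Soy mass fraction implied by the sample's delta 13C value."""
        f = (delta_sample - DELTA_13C_BEEF) / (DELTA_13C_SOY - DELTA_13C_BEEF)
        return min(max(f, 0.0), 1.0)  # clamp to the physical range [0, 1]

    # A sample reading midway between the end members implies ~50% soy.
    print(round(soy_fraction(-18.5), 2))  # -> 0.5
    ```

    In practice two tracers (δ13C and δ15N) over-determine the single unknown fraction, which is what lets a study of this kind cross-check the estimate and flag samples exceeding a legal threshold such as 4% soy protein.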

  11. Large-Scale Environmental Influences on Aquatic Animal Health

    EPA Science Inventory

    In the latter portion of the 20th century, North America experienced numerous large-scale mortality events affecting a broad diversity of aquatic animals. Short-term forensic investigations of these events have sometimes characterized a causative agent or condition, but have rare...

  12. Some ecological guidelines for large-scale biomass plantations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, W.; Cook, J.H.; Beyea, J.

    1993-12-31

    The National Audubon Society sees biomass as an appropriate and necessary source of energy to help replace fossil fuels in the near future, but is concerned that large-scale biomass plantations could displace significant natural vegetation and wildlife habitat, and reduce national and global biodiversity. We support the development of an industry large enough to provide significant portions of our energy budget, but we see a critical need to ensure that plantations are designed and sited in ways that minimize ecological disruption, or even provide environmental benefits. We have been studying the habitat value of intensively managed short-rotation tree plantations. Our results show that these plantations support large populations of some birds, but not all of the species using the surrounding landscape, and indicate that their value as habitat can be increased greatly by including small areas of mature trees within them. We believe short-rotation plantations can benefit regional biodiversity if they can be deployed as buffers for natural forests, or as corridors connecting forest tracts. To realize these benefits, and to avoid habitat degradation, regional biomass plantation complexes (e.g., the plantations supplying all the fuel for a powerplant) need to be planned, sited, and developed as large-scale units in the context of the regional landscape mosaic.

  13. Foundational perspectives on causality in large-scale brain networks

    NASA Astrophysics Data System (ADS)

    Mannino, Michael; Bressler, Steven L.

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical

  14. Multilevel Item Response Modeling: Applications to Large-Scale Assessment of Academic Achievement

    ERIC Educational Resources Information Center

    Zheng, Xiaohui

    2009-01-01

    The call for standards-based reform and educational accountability has led to increased attention to large-scale assessments. Over the past two decades, large-scale assessments have been providing policymakers and educators with timely information about student learning and achievement to facilitate their decisions regarding schools, teachers and…

  15. Large Scale Landslide Database System Established for the Reservoirs in Southern Taiwan

    NASA Astrophysics Data System (ADS)

    Tsai, Tsai-Tsung; Tsai, Kuang-Jung; Shieh, Chjeng-Lun

    2017-04-01

    Typhoon Morakot's severe attack on southern Taiwan awakened public awareness of large-scale landslide disasters. Large-scale landslide disasters produce large quantities of sediment, which negatively affects the operating functions of reservoirs. In order to reduce the risk of these disasters within the study area, the establishment of a database for hazard mitigation and disaster prevention is necessary. Real-time data and numerous archives of engineering data, environmental information, photos, and video will not only help people make appropriate decisions, but also provide material of great value for further processing. The study defined basic data formats/standards from the various types of data collected about these reservoirs, and then provided a management platform based on these formats/standards. Meanwhile, in order to ensure practicality and convenience, the large-scale landslide disaster database system was built with the ability both to provide and to receive information, so that users can work with it on different types of devices. IT technology progresses extremely quickly, and even the most modern system may be out of date at any time. In order to provide long-term service, the system reserves the possibility of user-defined data formats/standards and a user-defined system structure. The system established by this study is based on the HTML5 standard language and uses responsive web design technology, which makes the database system easy for users to operate and to extend.

  16. Power suppression at large scales in string inflation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar, E-mail: mcicoli@ictp.it, E-mail: sddownes@physics.tamu.edu, E-mail: dutta@physics.tamu.edu

    2013-12-01

We study a possible origin of the anomalous suppression of the power spectrum at large angular scales in the cosmic microwave background within the framework of explicit string inflationary models where inflation is driven by a closed string modulus parameterizing the size of the extra dimensions. In this class of models the apparent power loss at large scales is caused by the background dynamics, which involves a sharp transition from a fast-roll power-law phase to a period of Starobinsky-like slow-roll inflation. An interesting feature of this class of string inflationary models is that the number of e-foldings of inflation is inversely proportional to the string coupling to a positive power. Therefore, once the string coupling is tuned to small values in order to trust string perturbation theory, enough e-foldings of inflation are automatically obtained without the need for extra tuning. Moreover, in the less tuned cases the sharp transition responsible for the power loss takes place just before the last 50-60 e-foldings of inflation. We illustrate these general claims in the case of Fibre Inflation, where we study the strength of this transition in terms of the attractor dynamics, finding that it induces a pivot from a blue to a redshifted power spectrum which can explain the apparent large-scale power loss. We compute the effects of this pivot for example cases and demonstrate how the magnitude and duration of this effect depend on model parameters.

  17. The new large-scale sweet sorghum industry in the USA

    USDA-ARS?s Scientific Manuscript database

Sweet sorghum (Sorghum bicolor) has been widely recognized as a promising sugar feedstock crop for the large-scale manufacture of food-grade and non-food-grade bioproducts in the USA. Heckemeyer Mill, located in Sikeston, Missouri, has built and equipped the largest, commercial-scale sweet sorghum ...

  18. The Modified HZ Conjugate Gradient Algorithm for Large-Scale Nonsmooth Optimization.

    PubMed

    Yuan, Gonglin; Sheng, Zhou; Liu, Wenjie

    2016-01-01

In this paper, the Hager and Zhang (HZ) conjugate gradient (CG) method and the modified HZ (MHZ) CG method are presented for large-scale nonsmooth convex minimization. Under some mild conditions, convergence results for the proposed methods are established. Numerical results show that the presented methods are more efficient for large-scale nonsmooth problems; several problems are tested (with dimensions up to 100,000 variables).
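The HZ direction update at the heart of this method can be sketched on a smooth quadratic, which isolates the beta computation (the paper's actual setting is nonsmooth convex minimization; the sketch below shows only the standard smooth HZ update, and all function and variable names are illustrative, not from the paper):

```python
import numpy as np

def hz_beta(g_new, g_old, d):
    """Hager-Zhang conjugate-gradient beta (smooth setting)."""
    y = g_new - g_old
    dy = d @ y
    return (y - 2.0 * d * (y @ y) / dy) @ g_new / dy

def hz_cg(A, b, x0, tol=1e-8, max_iter=200):
    """Minimize 0.5 x'Ax - b'x with HZ-CG and exact line search."""
    x = x0.copy()
    g = A @ x - b            # gradient of the quadratic
    d = -g                   # initial descent direction
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)   # exact step for a quadratic
        x = x + alpha * d
        g_new = A @ x - b
        d = -g_new + hz_beta(g_new, g, d) * d
        g = g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = hz_cg(A, b, np.zeros(2))
# x approximately solves A x = b
```

With an exact line search on a quadratic, the HZ beta reduces to the classical CG choice, so the iteration converges in a few steps; the nonsmooth case in the paper requires more machinery.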

  19. Are female daycare workers at greater risk of cytomegalovirus infection? A secondary data analysis of CMV seroprevalence between 2010 and 2013 in Hamburg, Germany.

    PubMed

    Stranzinger, Johanna; Kozak, Agnessa; Schilgen, Benjamin; Paris, Diana; Nießen, Thomas; Schmidt, Lutz; Wille, Andreas; Wagner, Norbert L; Nienhaus, Albert

    2016-01-01

    Close contact with asymptomatic children younger than three years is a risk factor for a primary cytomegalovirus (CMV) infection. In pregnant women, such primary infection increases the risk of CMV-induced feto- or embryopathy. Daycare providers have therefore implemented working restrictions for pregnant daycare workers (DCWs) in accordance with legislation and guidelines for maternity protection. However, little is known about the infection risk for DCWs. We therefore compared the prevalence of CMV antibodies of pregnant DCWs to that of female blood donors (BDs). In a secondary data analysis, the prevalence of anti-CMV IgG among pregnant DCWs (N=509) in daycare centers (DCCs) was compared to the prevalence of female first-time BDs (N=14,358) from the greater region of Hamburg, Germany. Data collection took place between 2010 and 2013. The influence of other risk factors such as age, pregnancies and place of residence was evaluated using logistic regression models. The prevalence of CMV antibodies in pregnant DCWs was higher than in female BDs (54.6 vs 41.5%; OR 1.6; 95%CI 1.3-1.9). The subgroup of BDs who had given birth to at least one child and who lived in the city of Hamburg (N=2,591) had a prevalence of CMV antibodies similar to the prevalence in pregnant DCWs (53.9 vs 54.6%; OR 0.9; 95%CI 0.8-1.2). Age, pregnancy history and living in the center of Hamburg were risk factors for CMV infections. The comparison of pregnant DCWs to the best-matching subgroup of female first-time BDs with past pregnancies and living in the city of Hamburg does not indicate an elevated risk of CMV infection among DCWs. However, as two secondary data sets from convenience samples were used, a more detailed investigation of the risk factors other than place of residence, age and maternity was not possible. Therefore, the CMV infection risk in DCWs should be further studied by taking into consideration the potential preventive effect of hygiene measures.
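The reported odds ratios can be approximated from the published prevalences with a standard 2x2-table calculation. This is a minimal sketch: the counts are back-calculated from the stated percentages and sample sizes, and the study's own estimates come from logistic regression models, so only the crude OR is expected to match roughly:

```python
import math

def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI.

    a/b: exposed seropositive/seronegative; c/d: unexposed ditto.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# counts reconstructed from the abstract: 54.6% of 509 DCWs and
# 41.5% of 14,358 blood donors seropositive (illustrative, rounded)
or_, ci = odds_ratio(278, 231, 5959, 8399)
```

The crude value lands near the reported OR of 1.6 (95% CI 1.3-1.9); the small residual difference is expected because the published estimate is model-adjusted.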

  20. Validating Bayesian truth serum in large-scale online human experiments.

    PubMed

    Frank, Morgan R; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but despite the method's mathematical reliance on large sample sizes, the existing literature about BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon's Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must prove effective in large-scale experiments if it is to become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the "honest" distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where "honest" answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers.
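A minimal sketch of Prelec's BTS scoring rule, assuming the standard published form (an information score rewarding "surprisingly common" answers plus an alpha-weighted prediction score); the function name and toy arrays are illustrative, not the authors' implementation:

```python
import numpy as np

def bts_scores(answers, predictions, alpha=1.0):
    """Bayesian truth serum scores in the Prelec (2004) form.

    answers:     (n,) int array, each entry a chosen option index
    predictions: (n, m) array, each row a respondent's predicted
                 distribution of answers over the m options
    """
    n, m = predictions.shape
    eps = 1e-9                                    # guard against log(0)
    # empirical endorsement frequencies x_bar_k
    x_bar = np.bincount(answers, minlength=m) / n + eps
    # geometric mean of predicted frequencies y_bar_k
    y_bar = np.exp(np.log(predictions + eps).mean(axis=0))
    info = np.log(x_bar / y_bar)[answers]         # information score
    pred = alpha * (x_bar * np.log((predictions + eps) / x_bar)).sum(axis=1)
    return info + pred
```

On toy data where three of four respondents pick option 0 but everyone predicts a 50/50 split, option 0 is "surprisingly common" and its choosers score higher, which is the mechanism BTS uses to reward honesty.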

  1. International Large-Scale Assessments: What Uses, What Consequences?

    ERIC Educational Resources Information Center

    Johansson, Stefan

    2016-01-01

    Background: International large-scale assessments (ILSAs) are a much-debated phenomenon in education. Increasingly, their outcomes attract considerable media attention and influence educational policies in many jurisdictions worldwide. The relevance, uses and consequences of these assessments are often the focus of research scrutiny. Whilst some…

  2. Large-scale Eucalyptus energy farms and power cogeneration

    Treesearch

    Robert C. Noroña

    1983-01-01

A thorough evaluation of all factors possibly affecting a large-scale planting of eucalyptus is foremost in determining the cost effectiveness of the planned operation. Seven basic areas of concern must be analyzed: 1. Species Selection 2. Site Preparation 3. Planting 4. Weed Control 5....

  3. Large scale modulation of high frequency acoustic waves in periodic porous media.

    PubMed

    Boutin, Claude; Rallu, Antoine; Hans, Stephane

    2012-12-01

This paper deals with the description of the modulation at large scale of high-frequency acoustic waves in gas-saturated periodic porous media. High frequencies mean local dynamics at the pore scale and therefore an absence of scale separation in the usual sense of homogenization. However, although the pressure is spatially varying in the pores (according to periodic eigenmodes), the mode amplitude can present a large-scale modulation, thereby introducing another type of scale separation to which the asymptotic multi-scale procedure applies. The approach is first presented on a periodic network of inter-connected Helmholtz resonators. The equations governing the modulations carried by periodic eigenmodes, at frequencies close to their eigenfrequency, are derived. The number of cells on which the carrying periodic mode is defined is therefore a parameter of the modeling. In a second part, the asymptotic approach is developed for periodic porous media saturated by a perfect gas. Using the "multicells" periodic condition, one obtains the family of equations governing the amplitude modulation at large scale of high-frequency waves. The significant differences between the modulations of simple and multiple modes are evidenced and discussed. The features of the modulation (anisotropy, width of the frequency band) are also analyzed.

  4. Measuring Cosmic Expansion and Large Scale Structure with Destiny

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Lauer, Tod R.

    2007-01-01

Destiny is a simple, direct, low-cost mission to determine the properties of dark energy by obtaining a cosmologically deep supernova (SN) type Ia Hubble diagram and by measuring the large-scale mass power spectrum over time. Its science instrument is a 1.65m space telescope, featuring a near-infrared survey camera/spectrometer with a large field of view. During its first two years, Destiny will detect, observe, and characterize ~3000 SN Ia events over the redshift interval 0.4 < z < 1.7, and will subsequently survey ~1000 square degrees to measure the large-scale mass power spectrum. The combination of surveys is much more powerful than either technique on its own, and will have over an order of magnitude greater sensitivity than will be provided by ongoing ground-based projects.

  5. Evaluation and application of microwave-assisted extraction and dispersive liquid-liquid microextraction followed by high-performance liquid chromatography for the determination of polar heterocyclic aromatic amines in hamburger patties.

    PubMed

    Aeenehvand, Saeed; Toudehrousta, Zahra; Kamankesh, Marzieh; Mashayekh, Morteza; Tavakoli, Hamid Reza; Mohammadi, Abdorreza

    2016-01-01

This study developed an analytical method based on microwave-assisted extraction and dispersive liquid-liquid microextraction followed by high-performance liquid chromatography for the determination of three polar heterocyclic aromatic amines (HAAs) in hamburger patties. Effective parameters controlling the performance of the microextraction process, such as the type and volume of the extraction and disperser solvents, microwave time, nature of the alkaline aqueous solution, pH and salt amount, were optimized. The calibration graphs were linear in the range of 1-200 ng g(-1), with a coefficient of determination (R(2)) better than 0.9993. The relative standard deviations (RSD) for seven analyses were between 3.2% and 6.5%. The recoveries of these compounds from hamburger patties were from 90% to 105%. Detection limits were between 0.06 and 0.21 ng g(-1). A comparison of the proposed method with the existing literature demonstrates that it is simple, rapid, highly selective and sensitive, and gives good enrichment factors and detection limits for determining HAAs in real hamburger patty samples.
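The figures of merit quoted above (RSD, recovery, detection limit) follow standard method-validation definitions; a minimal sketch with illustrative numbers, not the study's data:

```python
import math
import statistics

def rsd_percent(replicates):
    """Relative standard deviation (%) of replicate measurements."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def recovery_percent(measured, spiked):
    """Spike recovery (%): measured concentration over the spiked amount."""
    return 100.0 * measured / spiked

def lod(sd_blank, slope):
    """Detection limit from the 3-sigma criterion: LOD = 3 * sd / slope."""
    return 3.0 * sd_blank / slope

# illustrative replicate set in ng/g (hypothetical, not data from the study)
reps = [49.1, 50.4, 48.7, 51.2, 50.0, 49.5, 50.8]
```

For example, a measured spike of 95 against 100 spiked gives 95% recovery, and a blank standard deviation of 0.07 with unit calibration slope gives an LOD of 0.21 ng g(-1), the upper end of the range reported above.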

  6. Circulation and Internationalisation of Pedagogical Concepts and Practices in the Discourse of Education: The Hamburg School Reform Experiment (1919-1933)

    ERIC Educational Resources Information Center

    Mayer, Christine

    2014-01-01

    In the context of the international exchange of school reform ideas and concepts, the new schools in Hamburg were recognised as exemplary instances of a revolutionary and forceful reform in the public elementary school systems. Based on studies of transfer and their premise that the transnational transfer of ideas, practices and objects does not…

  7. A Combined Eulerian-Lagrangian Data Representation for Large-Scale Applications.

    PubMed

    Sauer, Franz; Xie, Jinrong; Ma, Kwan-Liu

    2017-10-01

    The Eulerian and Lagrangian reference frames each provide a unique perspective when studying and visualizing results from scientific systems. As a result, many large-scale simulations produce data in both formats, and analysis tasks that simultaneously utilize information from both representations are becoming increasingly popular. However, due to their fundamentally different nature, drawing correlations between these data formats is a computationally difficult task, especially in a large-scale setting. In this work, we present a new data representation which combines both reference frames into a joint Eulerian-Lagrangian format. By reorganizing Lagrangian information according to the Eulerian simulation grid into a "unit cell" based approach, we can provide an efficient out-of-core means of sampling, querying, and operating with both representations simultaneously. We also extend this design to generate multi-resolution subsets of the full data to suit the viewer's needs and provide a fast flow-aware trajectory construction scheme. We demonstrate the effectiveness of our method using three large-scale real world scientific datasets and provide insight into the types of performance gains that can be achieved.
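The "unit cell" reorganization described above amounts to sorting Lagrangian particles by the Eulerian grid cell that contains them, so each cell's particle list can be streamed out-of-core. A minimal sketch of that idea (the function name and simplifications are mine, not the paper's implementation):

```python
import numpy as np

def bin_particles(positions, grid_shape, box_size):
    """Group Lagrangian particle ids by their containing Eulerian cell."""
    cell = np.floor(positions / box_size * np.array(grid_shape)).astype(int)
    cell = np.clip(cell, 0, np.array(grid_shape) - 1)   # guard the box edges
    flat = np.ravel_multi_index(cell.T, grid_shape)     # linear cell index
    order = np.argsort(flat, kind="stable")             # particles sorted by cell
    return flat, order

# two particles in a 2x2 grid over a unit box
positions = np.array([[0.1, 0.1], [0.9, 0.9]])
flat, order = bin_particles(positions, (2, 2), 1.0)
# flat[i] is the unit-cell index of particle i; contiguous runs of `order`
# give each cell's particle list for sampling and querying
```

Sorting once by cell index is what makes simultaneous Eulerian queries over Lagrangian data cheap, since each cell's particles become a contiguous slice.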

  8. On the influences of key modelling constants of large eddy simulations for large-scale compartment fires predictions

    NASA Astrophysics Data System (ADS)

    Yuen, Anthony C. Y.; Yeoh, Guan H.; Timchenko, Victoria; Cheung, Sherman C. P.; Chan, Qing N.; Chen, Timothy

    2017-09-01

An in-house large eddy simulation (LES) based fire field model has been developed for large-scale compartment fire simulations. The model incorporates four major components: fully coupled subgrid-scale turbulence, combustion, soot and radiation models. It is designed to simulate the temporal and fluid-dynamical effects of turbulent reacting flow for non-premixed diffusion flames. Parametric studies were performed based on a large-scale fire experiment carried out in a 39-m-long test hall facility. Turbulent Prandtl and Schmidt numbers ranging from 0.2 to 0.5, and Smagorinsky constants ranging from 0.18 to 0.23, were investigated. It was found that the temperature and flow field predictions were most accurate with the turbulent Prandtl and Schmidt numbers both set to 0.3 and a Smagorinsky constant of 0.2. In addition, by utilising a set of numerically verified key modelling parameters, the smoke filling process was successfully captured by the present LES model.
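A minimal sketch of how the Smagorinsky constant that the authors vary enters a generic LES subgrid closure; this is the textbook eddy-viscosity formula, not the authors' in-house code, and the shear-flow example is illustrative:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, cs=0.2):
    """Subgrid eddy viscosity nu_t = (Cs * delta)^2 * |S| for one cell.

    grad_u: 3x3 velocity-gradient tensor du_i/dx_j
    delta:  filter width (e.g. cube root of the cell volume)
    cs:     Smagorinsky constant (the study tests 0.18-0.23)
    """
    S = 0.5 * (grad_u + grad_u.T)            # strain-rate tensor
    S_mag = np.sqrt(2.0 * np.sum(S * S))     # |S| = sqrt(2 S_ij S_ij)
    return (cs * delta) ** 2 * S_mag

# pure shear du/dy = 10 1/s on a 0.1 m cell
g = np.zeros((3, 3)); g[0, 1] = 10.0
nu_t = smagorinsky_nu_t(g, 0.1)
```

Because Cs enters squared, moving it across the tested 0.18-0.23 range changes the subgrid dissipation by roughly 60%, which is why the predictions are sensitive to it.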

  9. The Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey

    NASA Astrophysics Data System (ADS)

    Squires, Gordon K.; Lubin, L. M.; Gal, R. R.

    2007-05-01

We present the motivation, design, and latest results from the Observations of Redshift Evolution in Large Scale Environments (ORELSE) Survey, a systematic search for structure on scales greater than 10 Mpc around 20 known galaxy clusters at z > 0.6. When complete, the survey will cover nearly 5 square degrees, all targeted at high-density regions, making it complementary and comparable to field surveys such as DEEP2, GOODS, and COSMOS. For the survey, we are using the Large Format Camera on the Palomar 5-m and SuPRIME-Cam on the Subaru 8-m to obtain optical/near-infrared imaging of an approximately 30 arcmin region around previously studied high-redshift clusters. Colors are used to identify likely member galaxies, which are targeted for follow-up spectroscopy with the DEep Imaging Multi-Object Spectrograph on the Keck 10-m. This technique has been used to successfully identify the Cl 1604 supercluster at z = 0.9, a large scale structure containing at least eight clusters (Gal & Lubin 2004; Gal, Lubin & Squires 2005). We present the most recent structures to be photometrically and spectroscopically confirmed through this program, discuss the properties of the member galaxies as a function of environment, and describe our planned multi-wavelength (radio, mid-IR, and X-ray) observations of these systems. The goal of this survey is to identify and examine a statistical sample of large scale structures during an active period in the assembly history of the most massive clusters. With such a sample, we can begin to constrain large scale cluster dynamics and determine the effect of the larger environment on galaxy evolution.

  10. From Large-scale to Protostellar Disk Fragmentation into Close Binary Stars

    NASA Astrophysics Data System (ADS)

    Sigalotti, Leonardo Di G.; Cruz, Fidel; Gabbasov, Ruslan; Klapp, Jaime; Ramírez-Velasquez, José

    2018-04-01

    Recent observations of young stellar systems with the Atacama Large Millimeter/submillimeter Array (ALMA) and the Karl G. Jansky Very Large Array are helping to cement the idea that close companion stars form via fragmentation of a gravitationally unstable disk around a protostar early in the star formation process. As the disk grows in mass, it eventually becomes gravitationally unstable and fragments, forming one or more new protostars in orbit with the first at mean separations of 100 au or even less. Here, we report direct numerical calculations down to scales as small as ∼0.1 au, using a consistent Smoothed Particle Hydrodynamics code, that show the large-scale fragmentation of a cloud core into two protostars accompanied by small-scale fragmentation of their circumstellar disks. Our results demonstrate the two dominant mechanisms of star formation, where the disk forming around a protostar (which in turn results from the large-scale fragmentation of the cloud core) undergoes eccentric (m = 1) fragmentation to produce a close binary. We generate two-dimensional emission maps and simulated ALMA 1.3 mm continuum images of the structure and fragmentation of the disks that can help explain the dynamical processes occurring within collapsing cloud cores.

  11. Large Scale Spectral Line Mapping of Galactic Regions with CCAT-Prime

    NASA Astrophysics Data System (ADS)

    Simon, Robert

    2018-01-01

    CCAT-prime is a 6-m submillimeter telescope that is being built on the top of Cerro Chajnantor (5600 m altitude) overlooking the ALMA plateau in the Atacama Desert. Its novel Crossed-Dragone design enables a large field of view without blockage and is thus particularly well suited for large scale surveys in the continuum and spectral lines targeting important questions ranging from star formation in the Milky Way to cosmology. On this poster, we focus on the large scale mapping opportunities in important spectral cooling lines of the interstellar medium opened up by CCAT-prime and the Cologne heterodyne instrument CHAI.

  12. Polymer Physics of the Large-Scale Structure of Chromatin.

    PubMed

    Bianco, Simona; Chiariello, Andrea Maria; Annunziatella, Carlo; Esposito, Andrea; Nicodemi, Mario

    2016-01-01

    We summarize the picture emerging from recently proposed models of polymer physics describing the general features of chromatin large scale spatial architecture, as revealed by microscopy and Hi-C experiments.

  13. Large-scale circulation departures related to wet episodes in northeast Brazil

    NASA Technical Reports Server (NTRS)

    Sikdar, D. N.; Elsner, J. B.

    1985-01-01

Large scale circulation features are presented as related to wet spells over northeast Brazil (Nordeste) during the rainy season (March and April) of 1979. The rainy season is divided into dry and wet periods; the FGGE and geostationary satellite data were averaged, and mean and departure fields of basic variables and cloudiness were studied. Analysis of seasonal mean circulation features shows: low-level easterlies beneath upper-level westerlies; weak meridional winds; high relative humidity over the Amazon basin and relatively dry conditions over the South Atlantic Ocean. A fluctuation was found in the large scale circulation features on time scales of a few weeks or so over Nordeste and the South Atlantic sector. Even the subtropical high sea-level pressures show large departures during wet episodes, implying a short-period oscillation in the Southern Hemisphere Hadley circulation.

  14. The three-point function as a probe of models for large-scale structure

    NASA Technical Reports Server (NTRS)

    Frieman, Joshua A.; Gaztanaga, Enrique

    1993-01-01

The consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime are analyzed. Several variations of the standard Omega = 1 cold dark matter model with scale-invariant primordial perturbations were recently introduced to obtain more power on large scales, R(sub p) is approximately 20 h(sup -1) Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower et al. It is shown that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q(sub J) at large scales, r is approximately greater than R(sub p). Current observational constraints on the three-point amplitudes Q(sub 3) and S(sub 3) can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.

  15. Dynamics of large scale impacts on Venus and Earth

    NASA Technical Reports Server (NTRS)

    Okeefe, John D.; Ahrens, Thomas J.

    1993-01-01

    Large scale impacts are a key aspect of the accretion and growth of the planets, the evolution of their atmospheres, and the viability of their life forms. We have performed an extensive series of numerical calculations that examined the mechanics of impacts over a broad range of conditions and are now extending these to account for the effects of the planetary atmosphere. We have examined the effects of large scale impacts in which the trapping and compression of an atmosphere during impact is a significant factor in the transfer of energy to the atmosphere. The various energy transfer regimes and where conventional drag and trapping and subsequent compression of atmosphere between the bolide and planetary surface are significant are shown.

  16. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

For solving nonlinear continuous optimization problems, numerous nature-inspired optimization techniques have been proposed in the literature that can be applied to real-life problems where conventional techniques cannot. The Grey Wolf Optimizer is one such technique, which has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large-scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large-scale problems, except on Rosenbrock, which is a unimodal function.
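The GWO update rules being benchmarked can be sketched compactly. This is a simplified implementation of the standard Mirjalili et al. encircling/hunting equations applied to the Sphere function; the population size, iteration count and bounds are illustrative, not the paper's settings:

```python
import numpy as np

def sphere(x):
    """Sphere benchmark: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return np.sum(x ** 2)

def gwo(obj, dim, n_wolves=20, iters=200, lb=-10.0, ub=10.0, seed=0):
    """Minimal Grey Wolf Optimizer: wolves track the three best solutions."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        fitness = np.apply_along_axis(obj, 1, X)
        order = np.argsort(fitness)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 - 2.0 * t / iters                 # a decreases linearly 2 -> 0
        for i, leader in enumerate((alpha, beta, delta)):
            r1 = rng.random((n_wolves, dim))
            r2 = rng.random((n_wolves, dim))
            A = 2 * a * r1 - a                    # exploration/exploitation knob
            C = 2 * r2
            D = np.abs(C * leader - X)
            if i == 0:
                Xnew = leader - A * D
            else:
                Xnew += leader - A * D
        X = np.clip(Xnew / 3.0, lb, ub)           # average of the three pulls
    fitness = np.apply_along_axis(obj, 1, X)
    return X[np.argmin(fitness)], fitness.min()

best_x, best_f = gwo(sphere, dim=10)
```

As |A| shrinks with a, the pack collapses onto the three leaders, which is why GWO converges quickly on unimodal landscapes such as Sphere.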

  17. Large-scale solar magnetic fields and H-alpha patterns

    NASA Technical Reports Server (NTRS)

    Mcintosh, P. S.

    1972-01-01

    Coronal and interplanetary magnetic fields computed from measurements of large-scale photospheric magnetic fields suffer from interruptions in day-to-day observations and the limitation of using only measurements made near the solar central meridian. Procedures were devised for inferring the lines of polarity reversal from H-alpha solar patrol photographs that map the same large-scale features found on Mt. Wilson magnetograms. These features may be monitored without interruption by combining observations from the global network of observatories associated with NOAA's Space Environment Services Center. The patterns of inferred magnetic fields may be followed accurately as far as 60 deg from central meridian. Such patterns will be used to improve predictions of coronal features during the next solar eclipse.

  18. Large-Scale Assessments and Educational Policies in Italy

    ERIC Educational Resources Information Center

    Damiani, Valeria

    2016-01-01

Despite Italy's extensive participation in most large-scale assessments, their actual influence on Italian educational policies is less easy to identify. The present contribution aims at highlighting and explaining reasons for the weak and often inconsistent relationship between international surveys and policy-making processes in Italy…

  19. Divergence of perturbation theory in large scale structures

    NASA Astrophysics Data System (ADS)

    Pajer, Enrico; van der Woude, Drian

    2018-05-01

    We make progress towards an analytical understanding of the regime of validity of perturbation theory for large scale structures and the nature of some non-perturbative corrections. We restrict ourselves to 1D gravitational collapse, for which exact solutions before shell crossing are known. We review the convergence of perturbation theory for the power spectrum, recently proven by McQuinn and White [1], and extend it to non-Gaussian initial conditions and the bispectrum. In contrast, we prove that perturbation theory diverges for the real space two-point correlation function and for the probability density function (PDF) of the density averaged in cells and all the cumulants derived from it. We attribute these divergences to the statistical averaging intrinsic to cosmological observables, which, even on very large and "perturbative" scales, gives non-vanishing weight to all extreme fluctuations. Finally, we discuss some general properties of non-perturbative effects in real space and Fourier space.

  20. Statistical Analysis of Large-Scale Structure of Universe

    NASA Astrophysics Data System (ADS)

    Tugay, A. V.

While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected at a definite level only in the newest works. For example, extragalactic filaments have been described by the velocity field and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations become available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, power spectrum, statistical moments and peak statistics are commonly used to this end. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of hydrodynamical parameters. As a result we obtain a power-law relation for the matter power spectrum.
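A minimal sketch of estimating a power spectrum from Fourier harmonics in the one-dimensional case mentioned above; the toy field and the normalization convention are mine, not the paper's:

```python
import numpy as np

def power_spectrum_1d(delta, box_size):
    """Estimate P(k) of a 1D overdensity field delta(x) on a periodic box."""
    n = delta.size
    delta_k = np.fft.rfft(delta) / n           # discrete Fourier amplitudes
    k = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    pk = box_size * np.abs(delta_k) ** 2       # convention: P(k) = L |delta_k|^2
    return k[1:], pk[1:]                       # drop the k = 0 (mean) mode

# toy field: a single sine-wave fluctuation puts all power at one k
L, n = 100.0, 1024
x = np.linspace(0, L, n, endpoint=False)
delta = np.sin(2 * np.pi * 4 * x / L)          # 4 wavelengths across the box
k, pk = power_spectrum_1d(delta, L)
# the peak of P(k) sits at k = 2*pi*4/L
```

The same pipeline applied to a realistic density field yields the binned P(k) whose slope gives the power-law relation discussed in the abstract.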

  1. Reliability of a science admission test (HAM-Nat) at Hamburg medical school.

    PubMed

    Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang

    2011-01-01

The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel-forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Parallel-forms reliability for four different test versions ranged from r(tt)=.53 to r(tt)=.67. The retest reliabilities of the HN2007 halves were r(tt)=.54 and r(tt)=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which can be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat tap different constructs. Participation in the chemistry course did not improve students' achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test.

  2. Reliability of a science admission test (HAM-Nat) at Hamburg medical school

    PubMed Central

    Hissbach, Johanna; Klusmann, Dietrich; Hampe, Wolfgang

    2011-01-01

Objective: The University Hospital in Hamburg (UKE) started to develop a test of knowledge in natural sciences for admission to medical school in 2005 (Hamburger Auswahlverfahren für Medizinische Studiengänge, Naturwissenschaftsteil, HAM-Nat). This study is a step towards establishing the HAM-Nat. We are investigating parallel-forms reliability, the effect of a crash course in chemistry on test results, and correlations of HAM-Nat test results with a test of scientific reasoning (similar to a subtest of the "Test for Medical Studies", TMS). Methods: 316 first-year students participated in the study in 2007. They completed different versions of the HAM-Nat test which consisted of items that had already been used (HN2006) and new items (HN2007). Four weeks later half of the participants were tested on the HN2007 version of the HAM-Nat again, while the other half completed the test of scientific reasoning. Within this four-week interval students were offered a five-day chemistry course. Results: Parallel-forms reliability for four different test versions ranged from rtt=.53 to rtt=.67. The retest reliabilities of the HN2007 halves were rtt=.54 and rtt=.61. Correlations of the two HAM-Nat versions with the test of scientific reasoning were r=.34 and r=.21. The crash course in chemistry had no effect on HAM-Nat scores. Conclusions: The results suggest that further versions of the test of natural sciences will not easily conform to the standards of internal consistency, parallel-forms reliability and retest reliability. Much care has to be taken in order to assemble items which can be used interchangeably for the construction of new test versions. The test of scientific reasoning and the HAM-Nat are tapping different constructs. Participation in the chemistry course did not improve students' achievement, probably because the content of the course was not coordinated with the test and many students lacked motivation to do well in the second test. PMID:21866246

  3. Assessment of the hygienic performances of hamburger patty production processes.

    PubMed

    Gill, C O; Rahn, K; Sloan, K; McMullen, L M

    1997-05-20

    The hygienic conditions of the hamburger patties collected from three patty manufacturing plants and six retail outlets were examined. At each manufacturing plant a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At three, two or one retail outlet, respectively, 25 samples from frozen, chilled or both frozen and chilled patties were collected at random. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform and Escherichia coli counts per gram were enumerated for each sample. The mean log (x) and standard deviation (s) were calculated for the log10 values for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3 and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6 and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7 and 1.4 to 2.7, respectively. 
The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from manufacturing-grade beef.
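The log A calculation described above follows directly from the log-normal assumption: if the log10 counts have mean x and standard deviation s, the log10 of the arithmetic mean is log A = x + (ln 10 / 2)·s². A minimal sketch (the function name is illustrative, not from the paper):

```python
import math

def log_arithmetic_mean(x_bar, s):
    """log10 of the arithmetic mean of counts assumed log-normally distributed,
    given the mean (x_bar) and standard deviation (s) of the log10 counts.
    Uses log A = x + (ln 10 / 2) * s**2, the standard log-normal mean formula."""
    return x_bar + (math.log(10) / 2.0) * s ** 2

# Example: x = 4.0, s = 0.5 gives log A = 4.0 + 1.1513 * 0.25, about 4.29,
# i.e. the arithmetic mean sits well above the median 10**4.0 when counts vary widely.
```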

  4. Results of Large-Scale Spacecraft Flammability Tests

    NASA Technical Reports Server (NTRS)

Ferkul, Paul; Olson, Sandra; Urban, David L.; Ruff, Gary A.; Easton, John; T'ien, James S.; Liao, Ta-Ting T.; Fernandez-Pello, A. Carlos; Torero, Jose L.; Eigenbrand, Christian

    2017-01-01

For the first time, a large-scale fire was intentionally set inside a spacecraft while in orbit. Testing in low gravity aboard spacecraft had been limited to samples of modest size: for thin fuels the longest samples burned were around 15 cm in length, and thick fuel samples have been even smaller. This is despite the fact that fire is a catastrophic hazard for spaceflight, and the spread and growth of a fire, combined with its interactions with the vehicle, cannot be expected to scale linearly. While every type of occupied structure on Earth has been the subject of full-scale fire testing, this had never been attempted in space owing to the complexity, cost, risk and absence of a safe location. Thus, there is a gap in knowledge of fire behavior in spacecraft. The recent utilization of large, unmanned resupply craft has provided the needed capability: a habitable but unoccupied spacecraft in low Earth orbit. One such vehicle was used to study the flame spread over a 94 x 40.6 cm thin charring solid (fiberglass-cotton fabric). The sample was an order of magnitude larger than anything studied to date in microgravity and was of sufficient scale that it consumed 1.5% of the available oxygen. The experiment, which is called Saffire, consisted of two tests: forward or concurrent flame spread (with the direction of flow) and opposed flame spread (against the direction of flow). The average forced air speed was 20 cm/s. For the concurrent flame spread test, the flame size remained constrained after the ignition transient, which is not the case in 1-g. These results were qualitatively different from those on Earth, where an upward-spreading flame on a sample of this size accelerates and grows. In addition, a curious effect of the chamber size is noted. Compared to previous microgravity work in smaller tunnels, the flame in the larger tunnel spread more slowly, even for a wider sample. This is attributed to the effect of flow acceleration in the smaller tunnels as a result of hot

  5. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Dednam, W.; Botha, A. E.

    2015-01-01

Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular, we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution
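The two routes to the Kirkwood-Buff integrals compared above can be sketched as follows. Both functions are illustrative stand-ins: the names are assumptions, and the fluctuation form shown is the common open-subvolume estimator rather than the authors' exact finite-size-scaling correction.

```python
import numpy as np

def kb_running_integral(r, g):
    """Traditional method: running Kirkwood-Buff integral
    G(R) = 4*pi * int_0^R (g(r) - 1) * r**2 dr over a tabulated RDF g(r)."""
    r = np.asarray(r, dtype=float)
    g = np.asarray(g, dtype=float)
    integrand = (g - 1.0) * r ** 2
    # trapezoidal rule over the tabulated grid
    area = np.sum((integrand[1:] + integrand[:-1]) * np.diff(r)) / 2.0
    return 4.0 * np.pi * area

def kb_fluctuation(n_samples, volume):
    """Fluctuation route: estimate G from particle-number fluctuations in an
    open sub-volume V, using G = V*(<N^2> - <N>^2)/<N>^2 - V/<N>."""
    n = np.asarray(n_samples, dtype=float)
    mean_n = n.mean()
    var_n = n.var()  # population variance (ddof=0)
    return volume * var_n / mean_n ** 2 - volume / mean_n
```

As a sanity check, an ideal gas has g(r) = 1 everywhere, so the running integral vanishes; likewise, Poissonian number fluctuations (variance equal to mean) make the fluctuation estimator zero.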

  6. Towards large-scale, human-based, mesoscopic neurotechnologies.

    PubMed

    Chang, Edward F

    2015-04-08

    Direct human brain recordings have transformed the scope of neuroscience in the past decade. Progress has relied upon currently available neurophysiological approaches in the context of patients undergoing neurosurgical procedures for medical treatment. While this setting has provided precious opportunities for scientific research, it also has presented significant constraints on the development of new neurotechnologies. A major challenge now is how to achieve high-resolution spatiotemporal neural recordings at a large scale. By narrowing the gap between current approaches, new directions tailored to the mesoscopic (intermediate) scale of resolution may overcome the barriers towards safe and reliable human-based neurotechnology development, with major implications for advancing both basic research and clinical translation. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Large-scale recording of neuronal ensembles.

    PubMed

    Buzsáki, György

    2004-05-01

    How does the brain orchestrate perceptions, thoughts and actions from the spiking activity of its neurons? Early single-neuron recording research treated spike pattern variability as noise that needed to be averaged out to reveal the brain's representation of invariant input. Another view is that variability of spikes is centrally coordinated and that this brain-generated ensemble pattern in cortical structures is itself a potential source of cognition. Large-scale recordings from neuronal ensembles now offer the opportunity to test these competing theoretical frameworks. Currently, wire and micro-machined silicon electrode arrays can record from large numbers of neurons and monitor local neural circuits at work. Achieving the full potential of massively parallel neuronal recordings, however, will require further development of the neuron-electrode interface, automated and efficient spike-sorting algorithms for effective isolation and identification of single neurons, and new mathematical insights for the analysis of network properties.

  8. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.
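The longest-common-subsequence similarity underlying the score system can be sketched as below; the normalization by the longer sequence is a hypothetical choice for illustration, not necessarily the paper's exact formulation.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of two sequences,
    e.g. road-segment IDs from a matched GPS trajectory and a candidate path."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # dp[i][j] = LCS of a[:i], b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity_score(trajectory_ids, path_ids):
    """Hypothetical normalization: LCS length divided by the longer sequence,
    giving 1.0 for identical sequences and 0.0 for disjoint ones."""
    if not trajectory_ids or not path_ids:
        return 0.0
    return lcs_length(trajectory_ids, path_ids) / max(len(trajectory_ids), len(path_ids))
```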

  9. Trajectory Segmentation Map-Matching Approach for Large-Scale, High-Resolution GPS Data

    DOE PAGES

    Zhu, Lei; Holden, Jacob R.; Gonder, Jeffrey D.

    2017-01-01

With the development of smartphones and portable GPS devices, large-scale, high-resolution GPS data can be collected. Map matching is a critical step in studying vehicle driving activity and recognizing network traffic conditions from the data. A new trajectory segmentation map-matching algorithm is proposed to deal accurately and efficiently with large-scale, high-resolution GPS trajectory data. The new algorithm separated the GPS trajectory into segments. It found the shortest path for each segment in a scientific manner and ultimately generated a best-matched path for the entire trajectory. The similarity of a trajectory segment and its matched path is described by a similarity score system based on the longest common subsequence. The numerical experiment indicated that the proposed map-matching algorithm was very promising in relation to accuracy and computational efficiency. Large-scale data set applications verified that the proposed method is robust and capable of dealing with real-world, large-scale GPS data in a computationally efficient and accurate manner.

  10. Tropospheric transport differences between models using the same large-scale meteorological fields

    NASA Astrophysics Data System (ADS)

    Orbe, Clara; Waugh, Darryn W.; Yang, Huang; Lamarque, Jean-Francois; Tilmes, Simone; Kinnison, Douglas E.

    2017-01-01

The transport of chemicals is a major uncertainty in the modeling of tropospheric composition. A common approach is to transport gases using the winds from meteorological analyses, either using them directly in a chemical transport model or by constraining the flow in a general circulation model. Here we compare the transport of idealized tracers in several different models that use the same meteorological fields taken from Modern-Era Retrospective analysis for Research and Applications (MERRA). We show that, even though the models use the same meteorological fields, there are substantial differences in their global-scale tropospheric transport related to large differences in parameterized convection between the simulations. Furthermore, we find that the transport differences between simulations constrained with the same large-scale flow are larger than differences between free-running simulations, which have differing large-scale flow but much more similar convective mass fluxes. Our results indicate that more attention needs to be paid to convective parameterizations in order to understand large-scale tropospheric transport in models, particularly in simulations constrained with analyzed winds.

  11. Validating Bayesian truth serum in large-scale online human experiments

    PubMed Central

    Frank, Morgan R.; Cebrian, Manuel; Pickard, Galen; Rahwan, Iyad

    2017-01-01

Bayesian truth serum (BTS) is an exciting new method for improving honesty and information quality in multiple-choice surveys, but, despite the method’s mathematical reliance on large sample sizes, the existing literature on BTS focuses only on small experiments. Given the prevalence of online survey platforms, such as Amazon’s Mechanical Turk, which facilitate surveys with hundreds or thousands of participants, BTS must be shown to be effective in large-scale experiments before it can become a readily accepted tool in real-world applications. We demonstrate that BTS quantifiably improves honesty in large-scale online surveys where the “honest” distribution of answers is known in expectation on aggregate. Furthermore, we explore a marketing application where “honest” answers cannot be known, but find that BTS treatment impacts the resulting distributions of answers. PMID:28494000
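For context, a BTS score combines an information score with a prediction score. The sketch below assumes Prelec's published form (log-ratio of actual to geometric-mean-predicted frequency, plus an α-weighted prediction penalty); it is a generic illustration, not code from this paper.

```python
import math

def bts_scores(answers, predictions, alpha=1.0, eps=1e-9):
    """Sketch of Bayesian truth serum scoring (assumed standard form):
    answers[r] is respondent r's chosen option index; predictions[r][k] is
    r's predicted population frequency of option k. Returns one score per
    respondent; truthful answering maximizes expected score in theory."""
    n = len(answers)
    k_opts = len(predictions[0])
    # empirical endorsement frequencies x_bar (clipped away from zero)
    x_bar = [max(sum(1 for a in answers if a == k) / n, eps) for k in range(k_opts)]
    # geometric mean of the predicted frequencies y_bar
    y_bar = [math.exp(sum(math.log(max(predictions[r][k], eps)) for r in range(n)) / n)
             for k in range(k_opts)]
    scores = []
    for r in range(n):
        # information score: is r's answer "surprisingly common"?
        info = math.log(x_bar[answers[r]] / y_bar[answers[r]])
        # prediction score: how close is r's prediction to the actual distribution?
        pred = alpha * sum(x_bar[k] * math.log(max(predictions[r][k], eps) / x_bar[k])
                           for k in range(k_opts))
        scores.append(info + pred)
    return scores
```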

  12. Mapping spatial patterns of denitrifiers at large scales (Invited)

    NASA Astrophysics Data System (ADS)

    Philippot, L.; Ramette, A.; Saby, N.; Bru, D.; Dequiedt, S.; Ranjard, L.; Jolivet, C.; Arrouays, D.

    2010-12-01

Little information is available regarding the landscape-scale distribution of microbial communities and its environmental determinants. Here we combined molecular approaches and geostatistical modeling to explore spatial patterns of the denitrifying community at large scales. The distribution of the denitrifying community was investigated over 107 sites in Burgundy, a 31 500 km2 region of France, using a 16 × 16 km sampling grid. At each sampling site, the abundances of denitrifiers and 42 soil physico-chemical properties were measured. The relative contributions of land use, spatial distance, climatic conditions, time and soil physico-chemical properties to the denitrifier spatial distribution were analyzed by canonical variation partitioning. Our results indicate that 43% to 85% of the spatial variation in community abundances could be explained by the measured environmental parameters, with soil chemical properties (mostly pH) being the main driver. We found spatial autocorrelation up to 739 km and used geostatistical modelling to generate predictive maps of the distribution of denitrifiers at the landscape scale. Studying the distribution of denitrifiers at large scale can help close the artificial gap between the investigation of microbial processes and microbial community ecology, thereby facilitating our understanding of the relationships between the ecology of denitrifiers and N-fluxes by denitrification.

  13. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Technical Reports Server (NTRS)

Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dunner, R.; Essinger-Hileman, T.; Eimer, J.

    2015-01-01

The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe approx. 70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at approx. 10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  14. Cosmology Large Angular Scale Surveyor (CLASS) Focal Plane Development

    NASA Astrophysics Data System (ADS)

    Chuss, D. T.; Ali, A.; Amiri, M.; Appel, J.; Bennett, C. L.; Colazo, F.; Denis, K. L.; Dünner, R.; Essinger-Hileman, T.; Eimer, J.; Fluxa, P.; Gothe, D.; Halpern, M.; Harrington, K.; Hilton, G.; Hinshaw, G.; Hubmayr, J.; Iuliano, J.; Marriage, T. A.; Miller, N.; Moseley, S. H.; Mumby, G.; Petroff, M.; Reintsema, C.; Rostem, K.; U-Yen, K.; Watts, D.; Wagner, E.; Wollack, E. J.; Xu, Z.; Zeng, L.

    2016-08-01

The Cosmology Large Angular Scale Surveyor (CLASS) will measure the polarization of the Cosmic Microwave Background to search for and characterize the polarized signature of inflation. CLASS will operate from the Atacama Desert and observe ~70% of the sky. A variable-delay polarization modulator provides modulation of the polarization at ~10 Hz to suppress the 1/f noise of the atmosphere and enable the measurement of the large angular scale polarization modes. The measurement of the inflationary signal across angular scales that spans both the recombination and reionization features allows a test of the predicted shape of the polarized angular power spectra in addition to a measurement of the energy scale of inflation. CLASS is an array of telescopes covering frequencies of 38, 93, 148, and 217 GHz. These frequencies straddle the foreground minimum and thus allow the extraction of foregrounds from the primordial signal. Each focal plane contains feedhorn-coupled transition-edge sensors that simultaneously detect two orthogonal linear polarizations. The use of single-crystal silicon as the dielectric for the on-chip transmission lines enables both high efficiency and uniformity in fabrication. Integrated band definition has been implemented that both controls the bandpass of the single-mode transmission on the chip and prevents stray light from coupling to the detectors.

  15. Assuring Quality in Large-Scale Online Course Development

    ERIC Educational Resources Information Center

    Parscal, Tina; Riemer, Deborah

    2010-01-01

Student demand for online education requires colleges and universities to rapidly expand the number of courses and programs offered online while maintaining high quality. This paper outlines two universities’ respective processes to assure quality in large-scale online programs that integrate instructional design, eBook custom publishing, Quality…

  16. Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions

    NASA Technical Reports Server (NTRS)

    Grujic, L. T.; Siljak, D. D.

    1973-01-01

    The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
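A simple algebraic criterion of the kind referred to above can be illustrated with an M-matrix test on an aggregate comparison matrix. The decay-rate/interconnection-gain parameterization below is a hypothetical illustration of the general idea, not the paper's exact construction.

```python
import numpy as np

def is_m_matrix(a):
    """Check whether a square matrix (assumed to have non-positive off-diagonal
    entries) is an M-matrix, via positivity of all leading principal minors."""
    a = np.asarray(a, dtype=float)
    return all(np.linalg.det(a[:k, :k]) > 0 for k in range(1, a.shape[0] + 1))

def composite_stable(sigma, xi):
    """Hypothetical aggregate test: sigma[i] > 0 are subsystem decay rates from
    the vector Lyapunov function estimates, xi[i][j] >= 0 bound the
    interconnection strengths. The aggregate matrix is W = -diag(sigma) + xi;
    the composite system test asks whether -W is an M-matrix."""
    w = -np.diag(sigma) + np.asarray(xi, dtype=float)
    return is_m_matrix(-w)
```

In this sketch, weak coupling relative to the subsystem decay rates passes the test, while strong coupling fails it, mirroring the qualitative content of the criterion.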

  17. [Active and healthy living in old age--results from a representative survey of community-dwelling senior citizens in Hamburg].

    PubMed

    Dapp, Ulrike; Lorentz, Ch; Laub, S; Anders, J; von Renteln-Kruse, W; Minder, Ch; Dirksen-Fischer, M

    2009-06-01

The majority of community-dwelling people 60 years and older are independent and live actively. However, there is little information about elderly persons' views on aging, health and health promotion. Therefore, an anonymous, written questionnaire survey was performed in a representative sample of inhabitants from a section of the city of Hamburg, 60 years and older; 5-year intervals, 14 subsamples according to 7 age groups of females and males. Questionnaires from 950 participants (29% response) could be evaluated: mean age 71.5 years, 58% women, 34% living alone, 5% with professional healthcare needs as indicated by status according to German nursing care insurance. Senior citizens' positive attitudes towards aging and health were predominant: 69% of respondents felt young, although 85% worried about loss of autonomy in old age. The results provide evidence indicating potential for improving health-promoting lifestyles in parts of the older population by evaluating and strengthening older persons' competencies and by considering their concerns seriously. These results provide valuable information for future plans in the public-health sector in the city of Hamburg, where particular health-promoting actions for elderly persons will be considered.

  18. Learning Short Binary Codes for Large-scale Image Retrieval.

    PubMed

    Liu, Li; Yu, Mengyang; Shao, Ling

    2017-03-01

Large-scale visual information retrieval has become an active research area in this big data era. Recently, hashing/binary coding algorithms have proved to be effective for scalable retrieval applications. Most existing hashing methods require relatively long binary codes (i.e., over hundreds of bits, sometimes even thousands of bits) to achieve reasonable retrieval accuracies. However, for some realistic and unique applications, such as on wearable or mobile devices, only short binary codes can be used for efficient image retrieval due to the limitation of computational resources or bandwidth on these devices. In this paper, we propose a novel unsupervised hashing approach called min-cost ranking (MCR) specifically for learning powerful short binary codes (i.e., usually with a code length shorter than 100 bits) for scalable image retrieval tasks. By exploring the discriminative ability of each dimension of data, MCR can generate a one-bit binary code for each dimension and simultaneously rank the discriminative separability of each bit according to the proposed cost function. Only top-ranked bits with minimum cost-values are then selected and grouped together to compose the final salient binary codes. Extensive experimental results on large-scale retrieval demonstrate that MCR can achieve performance comparable to state-of-the-art hashing algorithms but with significantly shorter codes, leading to much faster large-scale retrieval.
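The per-dimension bit generation and cost-based ranking can be sketched as follows; the separability score used here is a hypothetical stand-in for MCR's actual cost function, and the supervised labels are only for illustrating the ranking step.

```python
import numpy as np

def short_binary_codes(x, labels, code_len):
    """Sketch in the spirit of MCR: each data dimension is binarized at its
    median (one bit per dimension), each bit is scored by how well it separates
    two classes (a hypothetical stand-in cost), and the top-ranked bits are
    grouped into the final short code."""
    x = np.asarray(x, dtype=float)
    labels = np.asarray(labels)
    bits = (x > np.median(x, axis=0)).astype(np.uint8)  # one bit per dimension
    # hypothetical separability score: |P(bit=1 | class 0) - P(bit=1 | class 1)|
    a, b = bits[labels == 0], bits[labels == 1]
    score = np.abs(a.mean(axis=0) - b.mean(axis=0))
    top = np.argsort(-score)[:code_len]  # minimum cost == maximum separability here
    return bits[:, top], top
```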

  19. High Fidelity Simulations of Large-Scale Wireless Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onunkwo, Uzoma; Benz, Zachary

The worldwide proliferation of wireless connected devices continues to accelerate. There are 10s of billions of wireless links across the planet with an additional explosion of new wireless usage anticipated as the Internet of Things develops. Wireless technologies do not only provide convenience for mobile applications, but are also extremely cost-effective to deploy. Thus, this trend towards wireless connectivity will only continue and Sandia must develop the necessary simulation technology to proactively analyze the associated emerging vulnerabilities. Wireless networks are marked by mobility and proximity-based connectivity. The de facto standard for exploratory studies of wireless networks is discrete event simulation (DES). However, the simulation of large-scale wireless networks is extremely difficult due to prohibitively large turnaround time. A path forward is to expedite simulations with parallel discrete event simulation (PDES) techniques. The mobility and distance-based connectivity associated with wireless simulations, however, typically doom PDES and fail to scale (e.g., OPNET and ns-3 simulators). We propose a PDES-based tool aimed at reducing the communication overhead between processors. The proposed solution will use light-weight processes to dynamically distribute computation workload while mitigating communication overhead associated with synchronizations. This work is vital to the analytics and validation capabilities of simulation and emulation at Sandia. We have years of experience in Sandia’s simulation and emulation projects (e.g., MINIMEGA and FIREWHEEL). Sandia’s current highly-regarded capabilities in large-scale emulations have focused on wired networks, where two assumptions prevent scalable wireless studies: (a) the connections between objects are mostly static and (b) the nodes have fixed locations.

  20. Just enough inflation: power spectrum modifications at large scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cicoli, Michele; Downes, Sean; Dutta, Bhaskar

    2014-12-01

We show that models of 'just enough' inflation, where the slow-roll evolution lasted only 50-60 e-foldings, feature modifications of the CMB power spectrum at large angular scales. We perform a systematic analytic analysis in the limit of a sudden transition between any possible non-slow-roll background evolution and the final stage of slow-roll inflation. We find a high degree of universality since most common backgrounds like fast-roll evolution, matter or radiation-dominance give rise to a power loss at large angular scales and a peak together with an oscillatory behaviour at scales around the value of the Hubble parameter at the beginning of slow-roll inflation. Depending on the value of the equation of state parameter, different pre-inflationary epochs lead instead to an enhancement of power at low ℓ, and so seem disfavoured by recent observational hints for a lack of CMB power at ℓ ≲ 40. We also comment on the importance of initial conditions and the possibility to have multiple pre-inflationary stages.

  1. An Analysis of Large-Scale Writing Assessments in Canada (Grades 5-8)

    ERIC Educational Resources Information Center

    Peterson, Shelley Stagg; McClay, Jill; Main, Kristin

    2011-01-01

    This paper reports on an analysis of large-scale assessments of Grades 5-8 students' writing across 10 provinces and 2 territories in Canada. Theory, classroom practice, and the contributions and constraints of large-scale writing assessment are brought together with a focus on Grades 5-8 writing in order to provide both a broad view of…

  2. The Effect of Modified Atmosphere Packaging and Addition of Rosemary Extract, Sodium Acetate and Calcium Lactate Mixture on the Quality of Pre-cooked Hamburger Patties during Refrigerated Storage

    PubMed Central

    Muhlisin; Kang, Sun Moon; Choi, Won Hee; Lee, Keun Taik; Cheong, Sung Hee; Lee, Sung Ki

    2013-01-01

The effect of modified atmosphere packaging (MAP; 30% CO2+70% N2 or 100% N2) and an additive mixture (500 ppm rosemary extract, 3,000 ppm sodium acetate and 1,500 ppm calcium lactate) on the quality of pre-cooked hamburger patties during storage at 5°C for 14 d was evaluated. The addition of the additive mixture reduced aerobic and anaerobic bacteria counts in both 30% CO2-MAP (30% CO2+70% N2) and 100% N2-MAP (p<0.05). The 30% CO2-MAP was more effective at suppressing microbial growth than 100% N2-MAP; moreover, the 30% CO2-MAP combined with the additive mixture resulted in the lowest bacterial counts. The hamburger patties with the additive mixture showed lower CIE L* and CIE a*, and higher CIE b*, than those without. The 30% CO2-MAP tended to decrease the TBARS during storage regardless of the addition of additives. The use of 30% CO2-MAP in combination with the additive mixture was effective for maintaining the quality and extending the shelf-life of pre-cooked hamburger patties. PMID:25049716

  3. A novel computational approach towards the certification of large-scale boson sampling

    NASA Astrophysics Data System (ADS)

    Huh, Joonsuk

Recent proposals of boson sampling and the corresponding experiments point to a possible disproof of the extended Church-Turing thesis. Furthermore, the application of boson sampling to molecular computation has been suggested theoretically. Until now, however, only small-scale experiments with a few photons have been successfully performed. Boson sampling experiments with 20-30 photons are expected to reveal the computational superiority of the quantum device. A novel theoretical proposal for large-scale boson sampling using microwave photons is highly promising due to the deterministic photon sources and the scalability. Therefore, a certification protocol for large-scale boson sampling experiments should be presented to complete the exciting story. We propose, in this presentation, a computational protocol towards the certification of large-scale boson sampling. The correlations of paired photon modes and the time-dependent characteristic functional with its Fourier components can show the fingerprint of large-scale boson sampling. This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2015R1A6A3A04059773), the ICT R&D program of MSIP/IITP [2015-019, Fundamental Research Toward Secure Quantum Communication] and a Mueunjae Institute for Chemistry (MIC) postdoctoral fellowship.

  4. Multi-level discriminative dictionary learning with application to large scale image classification.

    PubMed

    Shen, Li; Sun, Gang; Huang, Qingming; Wang, Shuhui; Lin, Zhouchen; Wu, Enhua

    2015-10-01

The sparse coding technique has shown flexibility and capability in image representation and analysis. It is a powerful tool in many visual applications. Some recent work has shown that incorporating the properties of the task (such as discrimination for classification) into dictionary learning is effective for improving the accuracy. However, the traditional supervised dictionary learning methods suffer from high computational complexity when dealing with a large number of categories, making them less satisfactory in large scale applications. In this paper, we propose a novel multi-level discriminative dictionary learning method and apply it to large scale image classification. Our method takes advantage of hierarchical category correlation to encode multi-level discriminative information. Each internal node of the category hierarchy is associated with a discriminative dictionary and a classification model. The dictionaries at different layers are learnt to capture the information of different scales. Moreover, each node at lower layers also inherits the dictionary of its parent, so that the categories at lower layers can be described with multi-scale information. The learning of dictionaries and associated classification models is jointly conducted by minimizing an overall tree loss. The experimental results on challenging data sets demonstrate that our approach achieves excellent accuracy and competitive computation cost compared with other sparse coding methods for large scale image classification.

  5. NASA: Assessments of Selected Large-Scale Projects

    DTIC Science & Technology

    2011-03-01

[SF-298 report documentation page: report date March 2011; dates covered 00-00-2011 to 00-00-2011; title: Assessments of Selected Large-Scale Projects; acronym list fragment: ...Volatile EvolutioN; MEP, Mars Exploration Program; MIB, Mishap Investigation Board; MMRTG, Multi Mission Radioisotope Thermoelectric Generator; MMS, Magnetospheric...] The projects assessed range from probes designed to explore the Martian surface, to satellites equipped with advanced sensors to study the Earth, to telescopes intended to explore the

  6. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations.

    PubMed

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts.

  7. The magnetic shear-current effect: Generation of large-scale magnetic fields by the small-scale dynamo

    DOE PAGES

    Squire, J.; Bhattacharjee, A.

    2016-03-14

    A novel large-scale dynamo mechanism, the magnetic shear-current effect, is discussed and explored. Here, the effect relies on the interaction of magnetic fluctuations with a mean shear flow, meaning the saturated state of the small-scale dynamo can drive a large-scale dynamo – in some sense the inverse of dynamo quenching. The dynamo is non-helical, with the mean-field α coefficient zero, and is caused by the interaction between an off-diagonal component of the turbulent resistivity and the stretching of the large-scale field by the shear flow. Following up on previous numerical and analytic work, this paper presents further details of the numerical evidence for the effect, as well as a heuristic description of how magnetic fluctuations can interact with shear flow to produce the required electromotive force. The pressure response of the fluid is fundamental to this mechanism, which helps explain why the magnetic effect is stronger than its kinematic cousin, and the basic idea is related to the well-known lack of turbulent resistivity quenching by magnetic fluctuations. As well as being interesting for its applications to general high-Reynolds-number astrophysical turbulence, where strong small-scale magnetic fluctuations are expected to be prevalent, the magnetic shear-current effect is a likely candidate for large-scale dynamo in the unstratified regions of ionized accretion disks. Evidence for this is discussed, as well as future research directions and the challenges involved with understanding details of the effect in astrophysically relevant regimes.

  8. Large-Scale Traveling Weather Systems in Mars’ Southern Extratropics

    NASA Astrophysics Data System (ADS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-10-01

    Between late fall and early spring, Mars’ middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring is present in the western hemisphere, owing to orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations, and these are presented.

  9. Large-Scale Traveling Weather Systems in Mars' Southern Extratropics

    NASA Technical Reports Server (NTRS)

    Hollingsworth, Jeffery L.; Kahre, Melinda A.

    2017-01-01

    Between late fall and early spring, Mars' middle- and high-latitude atmosphere supports strong mean equator-to-pole temperature contrasts and an accompanying mean westerly polar vortex. Observations from both the MGS Thermal Emission Spectrometer (TES) and the MRO Mars Climate Sounder (MCS) indicate that a mean baroclinicity-barotropicity supports intense, large-scale eastward traveling weather systems (i.e., transient synoptic-period waves). Such extratropical weather disturbances are critical components of the global circulation as they serve as agents in the transport of heat and momentum, and generalized scalar/tracer quantities (e.g., atmospheric dust, water-vapor and ice clouds). The character of such traveling extratropical synoptic disturbances in Mars' southern hemisphere during late winter through early spring is investigated using a moderately high-resolution Mars global climate model (Mars GCM). This Mars GCM imposes interactively-lifted and radiatively-active dust based on a threshold value of the surface stress. The model exhibits a reasonable "dust cycle" (i.e., globally averaged, a dustier atmosphere during southern spring and summer occurs). Compared to the northern-hemisphere counterparts, the southern synoptic-period weather disturbances and accompanying frontal waves have smaller meridional and zonal scales, and are far less intense. Influences of the zonally asymmetric (i.e., east-west varying) topography on southern large-scale weather are investigated, in addition to large-scale up-slope/down-slope flows and the diurnal cycle. A southern storm zone in late winter and early spring is present in the western hemisphere, owing to orographic influences from the Tharsis highlands and the Argyre and Hellas impact basins. Geographically localized transient-wave activity diagnostics are constructed that illuminate dynamical differences amongst the simulations, and these are presented.

  10. Lessons from a Large-Scale Assessment: Results from Conceptual Inventories

    ERIC Educational Resources Information Center

    Thacker, Beth; Dulli, Hani; Pattillo, Dave; West, Keith

    2014-01-01

    We report conceptual inventory results of a large-scale assessment project at a large university. We studied the introduction of materials and instructional methods informed by physics education research (PER) (physics education research-informed materials) into a department where most instruction has previously been traditional and a significant…

  11. A diagnostic study of the forcing of the Ferrel cell by eddies, with latent heat effects included

    NASA Technical Reports Server (NTRS)

    Salustri, G.; Stone, P. H.

    1983-01-01

    A diagnostic study of the forcing of the Ferrel cell by eddy fluxes in the Northern Hemisphere is carried out. The quasi-geostrophic omega equation, and Oort and Rasmusson's (1971) data set, are used. The effects of condensation associated with the large-scale motions are introduced into the omega equation by using the quasi-geostrophic moisture conservation equation. Thus, the dry static stability is replaced by a moist static stability, and the forcing of the Ferrel cell by eddy latent heat fluxes as well as sensible heat and momentum fluxes is included. Both effects tend to enhance the forcing of the Ferrel cell. The numerical analysis indicates that the effects are small in January, but in July the maximum vertical velocities are enhanced by about 30 percent.
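    For reference, the moist form of the quasi-geostrophic omega equation used in such diagnostics can be written in its standard textbook form, with the dry static stability σ replaced by a moist static stability σ_m as the abstract describes:

```latex
\left(\sigma_m \nabla^2 + f_0^2 \frac{\partial^2}{\partial p^2}\right)\omega
  = f_0 \frac{\partial}{\partial p}\left[\mathbf{V}_g \cdot \nabla\left(\zeta_g + f\right)\right]
  + \nabla^2\left[\mathbf{V}_g \cdot \nabla\left(-\frac{\partial \Phi}{\partial p}\right)\right]
```

where ω is the pressure vertical velocity, V_g the geostrophic wind, ζ_g the geostrophic relative vorticity and Φ the geopotential; the right-hand terms are the differential vorticity-advection and thermal-advection forcings supplied by the eddy fluxes.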

  12. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    NASA Astrophysics Data System (ADS)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defy analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  13. A Survey on Routing Protocols for Large-Scale Wireless Sensor Networks

    PubMed Central

    Li, Changle; Zhang, Hanxiao; Hao, Binbin; Li, Jiandong

    2011-01-01

    With the advances in micro-electronics, wireless sensor devices have been made much smaller and more integrated, and large-scale wireless sensor networks (WSNs) based on the cooperation among a significant number of nodes have become a hot topic. “Large-scale” means mainly large area or high density of a network. Accordingly, the routing protocols must scale well to network-scope extension and node-density increases. A sensor node is normally energy-limited and cannot be recharged, and thus its energy consumption has a quite significant effect on the scalability of the protocol. To the best of our knowledge, currently the mainstream methods to solve the energy problem in large-scale WSNs are hierarchical routing protocols. In a hierarchical routing protocol, all the nodes are divided into several groups with different assignment levels. The nodes within the high level are responsible for data aggregation and management work, and the low-level nodes for sensing their surroundings and collecting information. The hierarchical routing protocols have proved to be more energy-efficient than flat ones in which all the nodes play the same role, especially in terms of the data aggregation and the flooding of the control packets. With a focus on the hierarchical structure, in this paper we provide an insight into routing protocols designed specifically for large-scale WSNs. According to their different objectives, the protocols are generally classified based on different criteria such as control overhead reduction, energy consumption mitigation and energy balance. In order to gain a comprehensive understanding of each protocol, we highlight their innovative ideas, describe the underlying principles in detail and analyze their advantages and disadvantages. Moreover, a comparison of each routing protocol is conducted to demonstrate the differences between the protocols in terms of message complexity, memory requirements, localization, data aggregation, clustering manner
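    A classic concrete instance of the hierarchical idea (a representative protocol, not necessarily one singled out by this survey) is LEACH-style rotating cluster-head election: each node independently volunteers as a head with a probability that rises as the current epoch progresses, so the energy-hungry head role rotates among nodes. A minimal sketch, with illustrative parameters:

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) in round r, given the
    desired cluster-head fraction p. Nodes that already served as head
    in the current epoch would use threshold 0 (omitted here)."""
    return p / (1.0 - p * (r % int(1.0 / p)))

def elect_heads(node_ids, p, r, rng):
    """Each eligible node draws a uniform number and becomes a
    cluster head if it falls below the round's threshold."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

rng = random.Random(42)
heads = elect_heads(range(100), p=0.05, r=0, rng=rng)   # ~5 heads expected
```

The threshold grows through the epoch (reaching 1 in the last round), which guarantees every node serves as head once per epoch and balances energy consumption.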

  14. Large-Scale Networked Virtual Environments: Architecture and Applications

    ERIC Educational Resources Information Center

    Lamotte, Wim; Quax, Peter; Flerackers, Eddy

    2008-01-01

    Purpose: Scalability is an important research topic in the context of networked virtual environments (NVEs). This paper aims to describe the ALVIC (Architecture for Large-scale Virtual Interactive Communities) approach to NVE scalability. Design/methodology/approach: The setup and results from two case studies are shown: a 3-D learning environment…

  15. Foundational perspectives on causality in large-scale brain networks.

    PubMed

    Mannino, Michael; Bressler, Steven L

    2015-12-01

    A profusion of recent work in cognitive neuroscience has been concerned with the endeavor to uncover causal influences in large-scale brain networks. However, despite the fact that many papers give a nod to the important theoretical challenges posed by the concept of causality, this explosion of research has generally not been accompanied by a rigorous conceptual analysis of the nature of causality in the brain. This review provides both a descriptive and prescriptive account of the nature of causality as found within and between large-scale brain networks. In short, it seeks to clarify the concept of causality in large-scale brain networks both philosophically and scientifically. This is accomplished by briefly reviewing the rich philosophical history of work on causality, especially focusing on contributions by David Hume, Immanuel Kant, Bertrand Russell, and Christopher Hitchcock. We go on to discuss the impact that various interpretations of modern physics have had on our understanding of causality. Throughout all this, a central focus is the distinction between theories of deterministic causality (DC), whereby causes uniquely determine their effects, and probabilistic causality (PC), whereby causes change the probability of occurrence of their effects. We argue that, given the topological complexity of its large-scale connectivity, the brain should be considered as a complex system and its causal influences treated as probabilistic in nature. We conclude that PC is well suited for explaining causality in the brain for three reasons: (1) brain causality is often mutual; (2) connectional convergence dictates that only rarely is the activity of one neuronal population uniquely determined by another one; and (3) the causal influences exerted between neuronal populations may not have observable effects. A number of different techniques are currently available to characterize causal influence in the brain. Typically, these techniques quantify the statistical
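    The abstract is truncated where it begins listing techniques. One widely used family of such techniques, consonant with the probabilistic-causality view advocated above, is Granger causality: a source is said to influence a target if the source's past improves prediction of the target beyond what the target's own past provides. A toy sketch on synthetic signals (all names and parameters are illustrative):

```python
import numpy as np

def granger_gain(x, y, lag=1):
    """Variance-reduction form of pairwise Granger causality: how much
    does knowing y's past improve prediction of x beyond x's own past?
    Returns log(var_restricted / var_full); ~0 means no influence."""
    X_past, Y_past, target = x[:-lag], y[:-lag], x[lag:]
    # restricted model: x_t ~ x_{t-1} + const
    A = np.column_stack([X_past, np.ones_like(X_past)])
    res_r = target - A @ np.linalg.lstsq(A, target, rcond=None)[0]
    # full model: x_t ~ x_{t-1} + y_{t-1} + const
    B = np.column_stack([X_past, Y_past, np.ones_like(X_past)])
    res_f = target - B @ np.linalg.lstsq(B, target, rcond=None)[0]
    return np.log(res_r.var() / res_f.var())

rng = np.random.default_rng(1)
n = 2000
y = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):                      # y drives x with a one-step delay
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.standard_normal()

gain_y_to_x = granger_gain(x, y)           # clearly positive
gain_x_to_y = granger_gain(y, x)           # near zero: no reverse influence
```

Note the asymmetry of the two gains is what licenses a directed, probabilistic causal statement; mutual influence (point 1 in the abstract) would show up as both gains being positive.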

  16. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, an algorithm for large-scale nonlinear equations is designed in the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to get the step length [Formula: see text]. The given nonmonotone line search technique can avoid computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
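    A minimal sketch of the two-stage idea: a cheap first stage supplies the starting point (here plain gradient steps on ½‖F‖² stand in for the paper's CG sub-algorithm), and the main stage is a Broyden quasi-Newton iteration whose nonmonotone backtracking accepts any step that improves on the worst recent residual norm, so no Jacobian is formed in the main loop. The test system and all parameters are illustrative, not from the paper:

```python
import numpy as np

def F(x):
    """Example nonlinear system F(x) = 0 with a root at (1, 2)."""
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] + x[1] ** 2 - 5.0])

def solve(x0, tol=1e-8, max_iter=200, memory=5):
    # Stage (i): a few damped gradient steps on 0.5*||F||^2 to get a
    # reasonable starting point (stand-in for the CG sub-algorithm).
    x = np.asarray(x0, float)
    for _ in range(5):
        fx = F(x)
        h = 1e-6                           # finite-difference Jacobian, stage (i) only
        J = np.column_stack([(F(x + h * e) - fx) / h for e in np.eye(x.size)])
        x = x - 0.05 * J.T @ fx
    # Stage (ii): Broyden's method with a nonmonotone backtracking line
    # search -- a step is accepted if it improves on the *worst* of the
    # last `memory` residual norms, avoiding any Jacobian evaluation.
    fx = F(x)
    B = np.eye(x.size)                     # Broyden approximation to the Jacobian
    history = [np.linalg.norm(fx)]
    for _ in range(max_iter):
        if history[-1] < tol:
            break
        d = np.linalg.solve(B, -fx)
        alpha, ref = 1.0, max(history[-memory:])   # nonmonotone reference value
        while np.linalg.norm(F(x + alpha * d)) > (1 - 1e-4 * alpha) * ref:
            alpha *= 0.5
            if alpha < 1e-12:
                break
        s = alpha * d
        f_new = F(x + s)
        B = B + np.outer((f_new - fx) - B @ s, s) / (s @ s)  # rank-1 Broyden update
        x, fx = x + s, f_new
        history.append(np.linalg.norm(fx))
    return x

root = solve([1.0, 1.0])
```

The nonmonotone acceptance rule is the key relaxation: occasional residual increases are tolerated as long as the worst recent value still improves, which is what lets the method skip Jacobian computations safely.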

  17. Applications of large-scale density functional theory in biology

    NASA Astrophysics Data System (ADS)

    Cole, Daniel J.; Hine, Nicholas D. M.

    2016-10-01

    Density functional theory (DFT) has become a routine tool for the computation of electronic structure in the physics, materials and chemistry fields. Yet the application of traditional DFT to problems in the biological sciences is hindered, to a large extent, by the unfavourable scaling of the computational effort with system size. Here, we review some of the major software and functionality advances that enable insightful electronic structure calculations to be performed on systems comprising many thousands of atoms. We describe some of the early applications of large-scale DFT to the computation of the electronic properties and structure of biomolecules, as well as to paradigmatic problems in enzymology, metalloproteins, photosynthesis and computer-aided drug design. With this review, we hope to demonstrate that first-principles modelling of biological structure-function relationships is approaching reality.

  18. An annotated type catalogue of the camel spiders (Arachnida: Solifugae) held in the Zoological Museum Hamburg.

    PubMed

    Harms, Danilo; Dupérré, Nadine

    2018-01-23

    Solifuges are an enigmatic and poorly studied group of arachnids. Commonly referred to as camel spiders or sun spiders, these animals are voracious predators of small animals and are found in arid biomes of the Old World and the Americas. In this paper, we provide a catalogue for the solifuges (Arachnida: Solifugae) that are held at the Center of Natural History in Hamburg. The collections in Hamburg are predominantly historical and were accumulated by Karl Kraepelin between 1889 and 1914 with the help of other famous arachnologists such as Ferdinand Karsch and Eugène Simon. The re-study of these collections indicates that there are 38 type species and 65 type specimens from 10 families. We provide a detailed account of this material, including collection data, taxonomic updates, measurements and high-resolution images for species that are either poorly or not at all illustrated. Most specimens (70%) were collected in Africa as part of colonial expeditions or field surveys but there are also types from Western Asia (11%), and North and South America (19%). We provide an overview of the history of this collection, including a summary of the field surveys during which the specimens were collected and the arachnologists who described the material. Overall, this is the third-largest collection of solifuges in Germany with a distinct biogeographical focus and one of the largest collections of camel spiders in Europe.

  19. Scale dependence of the 200-mb divergence inferred from EOLE data.

    NASA Technical Reports Server (NTRS)

    Morel, P.; Necco, G.

    1973-01-01

    The EOLE experiment, with 480 constant-volume balloons distributed over the Southern Hemisphere at approximately the 200-mb level, has provided a unique, highly accurate set of tracer trajectories in the general westerly circulation. The trajectories of neighboring balloons are analyzed to estimate the horizontal divergence from the Lagrangian derivative of the area of one cluster. The variance of the divergence estimates results from two almost comparable effects: the true divergence of the horizontal flow and eddy diffusion due to small-scale, two-dimensional turbulence. Taking this into account, the rms divergence is found to be of the order of 0.00001 per sec and decreases logarithmically with cluster size. This scale dependence is shown to be consistent with the quasi-geostrophic turbulence model of the general circulation in midlatitudes.
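    The estimator behind this analysis is that, for a material cluster of tracers, the horizontal divergence equals the fractional rate of change of the cluster's area, D = (1/A) dA/dt. A minimal sketch with a synthetic, uniformly divergent flow (the balloon positions and time step are illustrative, not EOLE data):

```python
import numpy as np

def polygon_area(pts):
    """Shoelace area of a polygon, vertices in order, shape (n, 2)."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def divergence_from_cluster(pts0, pts1, dt):
    """Horizontal divergence from the Lagrangian derivative of cluster
    area, D = (1/A) dA/dt, approximated over one time step."""
    a0, a1 = polygon_area(pts0), polygon_area(pts1)
    return (a1 - a0) / (0.5 * (a0 + a1) * dt)

# Synthetic check: a uniformly divergent flow (u, v) = (D/2 x, D/2 y)
# expands a square cluster; the estimator should recover D.
D_true = 1e-5                              # s^-1, the order reported from EOLE
dt = 6 * 3600.0                            # 6 h between position fixes
pts0 = np.array([[0., 0.], [100e3, 0.], [100e3, 100e3], [0., 100e3]])
growth = np.exp(0.5 * D_true * dt)         # each coordinate grows by exp(D/2 dt)
pts1 = pts0 * growth
D_est = divergence_from_cluster(pts0, pts1, dt)
```

In practice, as the abstract notes, small-scale eddy diffusion also changes the cluster area, which is why the rms of many such estimates must be interpreted against that noise floor.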

  20. Intermediate Models of Planetary Circulations in the Atmosphere and Ocean.

    NASA Astrophysics Data System (ADS)

    McWilliams, James C.; Gent, Peter R.

    1980-08-01

    Large-scale extratropical motions (with dimensions comparable to, or somewhat smaller than, the planetary radius) in the atmosphere and ocean exhibit a more restricted range of phenomena than are admissible in the primitive equations for fluid motions, and there have been many previous proposals for simpler, more phenomenologically limited models of these motions. The oldest and most successful of these is the quasi-geostrophic model. An extensive discussion is made of models intermediate between the quasi-geostrophic and primitive ones, some of which have been previously proposed [e.g., the balance equations (BE), where tendencies in the equation for the divergent component of velocity are neglected, or the geostrophic momentum approximation (GM), where ageostrophic accelerations are neglected relative to geostrophic ones] and some of which are derived here. Virtues of these models are assessed by the dual measures of nearly geostrophic momentum balance (i.e., small Rossby number) and approximate frontal structure (i.e., larger along-axis velocities and length scales than their cross-axis counterparts), since one or both of these circumstances is usually characteristic of planetary motions. Consideration is also given to various coordinate transformations, since they can yield simpler expressions for the governing differential equations of the intermediate models. In particular, a new set of coordinates is proposed, isentropic geostrophic coordinates (IGC), which has the advantage of making implicit the advections due to ageostrophic horizontal and vertical velocities under various approximations. A generalization of quasi-geostrophy is made, named hypo-geostrophy (HG), which is an asymptotic approximation of one order higher accuracy in Rossby number. The governing equations are simplest in IGC for both HG and GM; we name the latter in these coordinates isentropic semi-geostrophy (ISG), in analogy to Hoskins' (1975) semi-geostrophy (SG). HG, GM and BE are, in our
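    For reference, the geostrophic momentum (GM) approximation mentioned above can be written in its standard form (as in Hoskins' 1975 semi-geostrophic theory): the advected momentum is replaced by its geostrophic part, while advection is still performed by the full velocity,

```latex
\frac{D u_g}{D t} - f v_a = 0, \qquad
\frac{D v_g}{D t} + f u_a = 0, \qquad
\frac{D}{D t} \equiv \frac{\partial}{\partial t}
  + (u_g + u_a)\frac{\partial}{\partial x}
  + (v_g + v_a)\frac{\partial}{\partial y}
  + w\frac{\partial}{\partial z},
```

where (u_a, v_a) is the ageostrophic velocity; quasi-geostrophy further restricts the advecting velocity to its geostrophic part, which is the restriction the intermediate models relax.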

  1. Large-scale fabrication of single crystalline tin nanowire arrays.

    PubMed

    Luo, Bin; Yang, Dachi; Liang, Minghui; Zhi, Linjie

    2010-09-01

    Large-scale single crystalline tin nanowire arrays with preferred lattice orientation along the [100] direction were fabricated in porous anodic aluminium oxide (AAO) membranes by the electrodeposition method, using a copper nanorod as the second electrode.

  2. Evaluating the Large-Scale Environment of Extreme Events Using Reanalyses

    NASA Astrophysics Data System (ADS)

    Bosilovich, M. G.; Schubert, S. D.; Koster, R. D.; da Silva, A. M., Jr.; Eichmann, A.

    2014-12-01

    Extreme conditions and events have always been a long-standing concern in weather forecasting and national security. While some evidence indicates that extreme weather will increase under global-change scenarios, extremes are often tied to the large-scale atmospheric circulation and, by definition, occur infrequently. Reanalyses assimilate substantial amounts of weather data, and a primary strength of reanalysis data is the representation of the large-scale atmospheric environment. In this effort, we link the occurrences of extreme events or climate indicators to the underlying regional and global weather patterns. Now, with more than 30 years of data, reanalyses can include multiple cases of extreme events, and thereby identify commonality among the weather patterns to better characterize the large-scale to global environment linked to the indicator or extreme event. Since these features are certainly regionally dependent, and also, the indicators of climate are continually being developed, we outline various methods to analyze the reanalysis data and the development of tools to support regional evaluation of the data. Here, we provide some examples of both individual case studies and composite studies of similar events. For example, we will compare the large-scale environment for Northeastern US extreme precipitation with that of the highest mean precipitation seasons. Likewise, southerly winds can be shown to be a major contributor to very warm days in the Northeast winter. While most of our development has involved NASA's MERRA reanalysis, we are also looking forward to MERRA-2, which includes several new features that greatly improve the representation of weather and climate, especially for the regions and sectors involved in the National Climate Assessment.
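    The composite methodology sketched in the abstract reduces to averaging the large-scale fields over the identified event days and subtracting the climatology. A minimal illustration on synthetic data (array shapes and event days are hypothetical, not MERRA output):

```python
import numpy as np

def composite_anomaly(field, event_days):
    """Mean anomaly of `field` (time, lat, lon) over the given event
    days, relative to the all-days climatology."""
    clim = field.mean(axis=0)
    return field[event_days].mean(axis=0) - clim

rng = np.random.default_rng(7)
ndays, nlat, nlon = 365, 10, 12
field = rng.standard_normal((ndays, nlat, nlon))   # stand-in reanalysis field
events = [30, 95, 200, 310]                # days flagged as extreme (hypothetical)
field[events] += 2.0                       # events share a common large-scale signal
comp = composite_anomaly(field, events)    # recovers the shared pattern
```

Averaging over multiple cases suppresses the day-to-day noise while the circulation pattern common to the events survives, which is exactly the benefit of having 30+ years of reanalysis to draw cases from.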

  3. Bridging the gap between small and large scale sediment budgets? - A scaling challenge in the Upper Rhone Basin, Switzerland

    NASA Astrophysics Data System (ADS)

    Schoch, Anna; Blöthe, Jan; Hoffmann, Thomas; Schrott, Lothar

    2016-04-01

    A large number of sediment budgets have been compiled on different temporal and spatial scales in alpine regions. Detailed sediment budgets based on the quantification of a number of sediment storages (e.g. talus cones, moraine deposits) exist only for a few small-scale drainage basins (up to 10² km²). In contrast, large-scale sediment budgets (> 10³ km²) consider only long-term sediment sinks such as valley fills and lakes. Until now, these studies have often neglected small-scale sediment storages in the headwaters, even though the significance of these sediment storages has been reported. A quantitative verification of whether headwaters function as sediment source regions is lacking. Despite substantial transport energy in mountain environments due to steep gradients and high relief, sediment flux in large river systems is frequently disconnected from alpine headwaters. This leads to significant storage of coarse-grained sediment along the flow path from rockwall source regions to large sedimentary sinks in major alpine valleys. To improve knowledge of sediment budgets in large-scale alpine catchments and to bridge the gap between small- and large-scale sediment budgets, we apply a multi-method approach comprising investigations on different spatial scales in the Upper Rhone Basin (URB). The URB is the largest inneralpine basin in the European Alps with a size of > 5400 km². It is a closed system with Lake Geneva acting as an ultimate sediment sink for suspended and clastic sediment. We examine the spatial pattern and volumes of sediment storages as well as the morphometry on the local and catchment-wide scale. We mapped sediment storages and bedrock in five sub-regions of the study area (Goms, Lötschen valley, Val d'Illiez, Vallée de la Liène, Turtmann valley) in the field and from high-resolution remote sensing imagery to investigate the spatial distribution of different sediment storage types (e.g. talus deposits, debris flow cones, alluvial fans). These sub

  4. Large-scale protein/antibody patterning with limiting unspecific adsorption

    NASA Astrophysics Data System (ADS)

    Fedorenko, Viktoriia; Bechelany, Mikhael; Janot, Jean-Marc; Smyntyna, Valentyn; Balme, Sebastien

    2017-10-01

    A simple synthetic route based on nanosphere lithography has been developed in order to design a large-scale nanoarray for specific control of protein anchoring. This technique, based on two-dimensional (2D) colloidal crystals composed of polystyrene (PS) spheres, allows the easy and inexpensive fabrication of large arrays (up to several centimeters). A silicon wafer coated with a thin adhesion layer of chromium (15 nm) and a layer of gold (50 nm) is used as a substrate. PS spheres are deposited on the gold surface using the floating-transferring technique. The PS spheres were then functionalized with PEG-biotin, and the defects were passivated with a self-assembled monolayer (SAM) of PEG to prevent unspecific adsorption. Using epifluorescence microscopy, we show that after immersion of the sample in a target-protein (avidin and anti-avidin) solution, the proteins are specifically located on the polystyrene spheres. Thus, these results are meaningful for the exploration of devices based on a large-scale nanoarray of PS spheres and can be used for detection of target proteins or simply to pattern a surface with specific proteins.

  5. How Large Scale Flows in the Solar Convection Zone may Influence Solar Activity

    NASA Technical Reports Server (NTRS)

    Hathaway, D. H.

    2004-01-01

    Large scale flows within the solar convection zone are the primary drivers of the Sun's magnetic activity cycle. Differential rotation can amplify the magnetic field and convert poloidal fields into toroidal fields. Poleward meridional flow near the surface can carry magnetic flux that reverses the magnetic poles and can convert toroidal fields into poloidal fields. The deeper, equatorward meridional flow can carry magnetic flux toward the equator where it can reconnect with oppositely directed fields in the other hemisphere. These axisymmetric flows are themselves driven by large scale convective motions. The effects of the Sun's rotation on convection produce velocity correlations that can maintain the differential rotation and meridional circulation. These convective motions can influence solar activity themselves by shaping the large-scale magnetic field pattern. While considerable theoretical advances have been made toward understanding these large scale flows, outstanding problems in matching theory to observations still remain.

  6. Transport induced by large scale convective structures in a dipole-confined plasma.

    PubMed

    Grierson, B A; Mauel, M E; Worstell, M W; Klassen, M

    2010-11-12

    Convective structures characterized by E×B motion are observed in a dipole-confined plasma. Particle transport rates are calculated from density dynamics obtained from multipoint measurements and the reconstructed electrostatic potential. The calculated transport rates determined from the large-scale dynamics and local probe measurements agree in magnitude, show intermittency, and indicate that the particle transport is dominated by large-scale convective structures.
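    The transport estimate described here is the time-averaged correlation of density and radial E×B velocity fluctuations, Γ = ⟨ñ ṽ_r⟩, with the fluctuating electric field inferred from potential differences between nearby probes. A schematic sketch on synthetic signals; the probe geometry, sign conventions, and magnitudes are illustrative assumptions, not the experiment's values:

```python
import numpy as np

def exb_particle_flux(n_tilde, phi_a, phi_b, d, B):
    """Time-averaged radial E x B particle flux from a density-fluctuation
    signal and two floating-potential signals separated by distance d:
    the azimuthal field is E ~ -(phi_a - phi_b)/d, and the radial drift
    is v_r = -E/B (schematic sign convention)."""
    e_theta = -(phi_a - phi_b) / d
    v_r = -e_theta / B
    return np.mean(n_tilde * v_r)

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1e-3, 10000)          # 1 ms record
phase = 2 * np.pi * 5e3 * t                # a single 5 kHz fluctuation mode
n_tilde = 1e16 * np.sin(phase)             # density fluctuation (m^-3)
# Potential fluctuations phase-shifted relative to density: the n-phi
# cross-phase is what sets the magnitude and sign of the flux.
phi_a = 5.0 * np.sin(phase + 0.5)          # probe a (V)
phi_b = 5.0 * np.sin(phase + 0.2)          # probe b, shifted in theta (V)
flux = exb_particle_flux(n_tilde, phi_a, phi_b, d=5e-3, B=0.05)
```

With zero cross-phase between density and potential the average flux vanishes; intermittent bursts in the cross-correlation are what produce the convective-structure-dominated transport the abstract reports.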

  7. NASA Goddard Earth Sciences Graduate Student Program. [FIRE CIRRUS-II examination of coupling between an upper tropospheric cloud system and synoptic-scale dynamics

    NASA Technical Reports Server (NTRS)

    Ackerman, Thomas P.

    1994-01-01

    The evolution of synoptic-scale dynamics associated with a middle and upper tropospheric cloud event that occurred on 26 November 1991 is examined. The case under consideration occurred during the FIRE CIRRUS-II Intensive Field Observing Period held in Coffeyville, KS during Nov. and Dec., 1991. Using data from the wind profiler demonstration network and a temporally and spatially augmented radiosonde array, emphasis is given to explaining the evolution of the kinematically-derived ageostrophic vertical circulations and correlating the circulation with the forcing of an extensively sampled cloud field. This is facilitated by decomposing the horizontal divergence into its component parts through a natural coordinate representation of the flow. Ageostrophic vertical circulations are inferred and compared to the circulation forcing arising from geostrophic confluence and shearing deformation derived from the Sawyer-Eliassen Equation. It is found that a thermodynamically indirect vertical circulation existed in association with a jet streak exit region. The circulation was displaced to the cyclonic side of the jet axis due to the orientation of the jet exit between a deepening diffluent trough and building ridge. The cloud line formed in the ascending branch of the vertical circulation with the most concentrated cloud development occurring in conjunction with the maximum large-scale vertical motion. The relationship between the large scale dynamics and the parameterization of middle and upper tropospheric clouds in large-scale models is discussed and an example of ice water contents derived from a parameterization forced by the diagnosed vertical motions and observed water vapor contents is presented.

  8. A process for creating multimetric indices for large-scale aquatic surveys

    EPA Science Inventory

    Differences in sampling and laboratory protocols, differences in techniques used to evaluate metrics, and differing scales of calibration and application prohibit the use of many existing multimetric indices (MMIs) in large-scale bioassessments. We describe an approach to develop...

  9. Large-Scale medical image analytics: Recent methodologies, applications and Future directions.

    PubMed

    Zhang, Shaoting; Metaxas, Dimitris

    2016-10-01

    Despite the ever-increasing amount and complexity of annotated medical image data, the development of large-scale medical image analysis algorithms has not kept pace with the need for methods that bridge the semantic gap between images and diagnoses. The goal of this position paper is to discuss and explore innovative and large-scale data science techniques in medical image analytics, which will benefit clinical decision-making and facilitate efficient medical data management. Particularly, we advocate that the scale of image retrieval systems should be significantly increased, to a point at which interactive systems can be effective for knowledge discovery in potentially large databases of medical images. For clinical relevance, such systems should return results in real-time, incorporate expert feedback, and be able to cope with the size, quality, and variety of the medical images and their associated metadata for a particular domain. The design, development, and testing of such a framework can significantly impact interactive mining in medical image databases that are growing rapidly in size and complexity, and enable novel methods of analysis at much larger scales in an efficient, integrated fashion.

  10. Weak gravitational lensing due to large-scale structure of the universe

    NASA Technical Reports Server (NTRS)

    Jaroszynski, Michal; Park, Changbom; Paczynski, Bohdan; Gott, J. Richard, III

    1990-01-01

The effect of the large-scale structure of the universe on the propagation of light rays is studied. The development of the large-scale density fluctuations in the Omega = 1 universe is calculated within the cold dark matter scenario using a smooth particle approximation. The propagation of about 10^6 random light rays between the redshift z = 5 and the observer was followed. It is found that the effect of shear is negligible, and the amplification of single images is dominated by the matter in the beam. The spread of amplifications is very small. Therefore, the filled-beam approximation is very good for studies of strong lensing by galaxies or clusters of galaxies. In the simulation, the column density was averaged over a comoving area of approximately (1/h Mpc)-squared. No case of a strong gravitational lensing was found, i.e., no 'over-focused' image that would suggest that a few images might be present. Therefore, the large-scale structure of the universe as it is presently known does not produce multiple images with gravitational lensing on a scale larger than clusters of galaxies.

  11. Large-scale galaxy bias

    NASA Astrophysics Data System (ADS)

    Desjacques, Vincent; Jeong, Donghui; Schmidt, Fabian

    2018-02-01

    This review presents a comprehensive overview of galaxy bias, that is, the statistical relation between the distribution of galaxies and matter. We focus on large scales where cosmic density fields are quasi-linear. On these scales, the clustering of galaxies can be described by a perturbative bias expansion, and the complicated physics of galaxy formation is absorbed by a finite set of coefficients of the expansion, called bias parameters. The review begins with a detailed derivation of this very important result, which forms the basis of the rigorous perturbative description of galaxy clustering, under the assumptions of General Relativity and Gaussian, adiabatic initial conditions. Key components of the bias expansion are all leading local gravitational observables, which include the matter density but also tidal fields and their time derivatives. We hence expand the definition of local bias to encompass all these contributions. This derivation is followed by a presentation of the peak-background split in its general form, which elucidates the physical meaning of the bias parameters, and a detailed description of the connection between bias parameters and galaxy statistics. We then review the excursion-set formalism and peak theory which provide predictions for the values of the bias parameters. In the remainder of the review, we consider the generalizations of galaxy bias required in the presence of various types of cosmological physics that go beyond pressureless matter with adiabatic, Gaussian initial conditions: primordial non-Gaussianity, massive neutrinos, baryon-CDM isocurvature perturbations, dark energy, and modified gravity. Finally, we discuss how the description of galaxy bias in the galaxies' rest frame is related to clustering statistics measured from the observed angular positions and redshifts in actual galaxy catalogs.
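Schematically, the perturbative bias expansion this review builds on writes the galaxy density contrast in terms of local gravitational observables; at the lowest orders (conventions vary between references):

```latex
\delta_g(\boldsymbol{x},\tau) \;=\; b_1\,\delta(\boldsymbol{x},\tau)
  \;+\; \frac{b_2}{2}\,\delta^2(\boldsymbol{x},\tau)
  \;+\; b_{K^2}\,\bigl(K_{ij}K^{ij}\bigr)(\boldsymbol{x},\tau)
  \;+\; \dots \;+\; \epsilon(\boldsymbol{x},\tau)
```

where δ is the matter density contrast, K_ij the tidal field, the b_n are the bias parameters that absorb the physics of galaxy formation, and ε is a stochastic contribution; higher orders bring in further observables, including time derivatives of the fields.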

  12. Large-Scale Machine Learning for Classification and Search

    ERIC Educational Resources Information Center

    Liu, Wei

    2012-01-01

    With the rapid development of the Internet, nowadays tremendous amounts of data including images and videos, up to millions or billions, can be collected for training machine learning models. Inspired by this trend, this thesis is dedicated to developing large-scale machine learning techniques for the purpose of making classification and nearest…

  13. Calibration method for a large-scale structured light measurement system.

    PubMed

    Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken

    2017-05-10

    The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
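The planar-mirror idea rests on a simple fact: the mirror shows the camera a virtual, reflected copy of the scene, so a small target can stand in for a large calibration gauge. A sketch of the underlying reflection geometry (illustrative only, not the paper's full calibration pipeline):

```python
def reflect(p, n, d):
    """Mirror a 3-D point p across the plane n . x = d (n must be unit length).

    The mirrored point is p' = p - 2 (n . p - d) n: the point is moved twice
    its signed distance from the plane, along the plane normal.
    """
    s = sum(ni * pi for ni, pi in zip(n, p)) - d  # signed distance to plane
    return [pi - 2.0 * s * ni for pi, ni in zip(p, n)]

# A point one unit in front of the mirror plane z = 0 maps to one unit behind it.
virtual = reflect([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], 0.0)
# Reflecting twice recovers the original point.
original = reflect(virtual, [0.0, 0.0, 1.0], 0.0)
```

In a mirror-based calibration, observations of such virtual points constrain the camera pose just as direct observations would, once the reflection is accounted for.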

  14. Instrumentation Development for Large Scale Hypersonic Inflatable Aerodynamic Decelerator Characterization

    NASA Technical Reports Server (NTRS)

    Swanson, Gregory T.; Cassell, Alan M.

    2011-01-01

Hypersonic Inflatable Aerodynamic Decelerator (HIAD) technology is currently being considered for multiple atmospheric entry applications as the limitations of traditional entry vehicles have been reached. The Inflatable Re-entry Vehicle Experiment (IRVE) has successfully demonstrated this technology as a viable candidate with a 3.0 m diameter vehicle sub-orbital flight. To further this technology, large scale HIADs (6.0-8.5 m) must be developed and tested. To characterize the performance of large scale HIAD technology, new instrumentation concepts must be developed to accommodate the flexible nature of the inflatable aeroshell. Many of the concepts that are under consideration for the HIAD FY12 subsonic wind tunnel test series are discussed below.

  15. On the large scale structure of X-ray background sources

    NASA Technical Reports Server (NTRS)

    Bi, H. G.; Meszaros, A.; Meszaros, P.

    1991-01-01

The large scale clustering of the sources responsible for the X-ray background is discussed, under the assumption of a discrete origin. The formalism necessary for calculating the X-ray spatial fluctuations in the most general case, where the source density contrast in structures varies with redshift, is developed. A comparison of this with observational limits is useful for obtaining information concerning various galaxy formation scenarios. The calculations presented show that a varying density contrast has a small impact on the expected X-ray fluctuations. This strengthens and extends previous conclusions concerning the size and comoving density of large scale structures at redshifts between 0.5 and 4.0.

  16. Inflationary tensor fossils in large-scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimastrogiovanni, Emanuela; Fasiello, Matteo; Jeong, Donghui

Inflation models make specific predictions for a tensor-scalar-scalar three-point correlation, or bispectrum, between one gravitational-wave (tensor) mode and two density-perturbation (scalar) modes. This tensor-scalar-scalar correlation leads to a local power quadrupole, an apparent departure from statistical isotropy in our Universe, as well as characteristic four-point correlations in the current mass distribution in the Universe. So far, the predictions for these observables have been worked out only for single-clock models in which certain consistency conditions between the tensor-scalar-scalar correlation and tensor and scalar power spectra are satisfied. Here we review the requirements on inflation models for these consistency conditions to be satisfied. We then consider several examples of inflation models, such as non-attractor and solid-inflation models, in which these conditions are put to the test. In solid inflation the simplest consistency conditions are already violated, whilst in the non-attractor model we find that, contrary to the standard scenario, the tensor-scalar-scalar correlator probes directly relevant model-dependent information. We work out the predictions for observables in these models. For non-attractor inflation we find an apparent local quadrupolar departure from statistical isotropy in large-scale structure, but that this power quadrupole decreases very rapidly at smaller scales. The consistency of the CMB quadrupole with statistical isotropy then constrains the distance scale that corresponds to the transition from the non-attractor to attractor phase of inflation to be larger than the currently observable horizon. Solid inflation predicts clustering fossils signatures in the current galaxy distribution that may be large enough to be detectable with forthcoming, and possibly even current, galaxy surveys.

  17. Command and Control for Large-Scale Hybrid Warfare Systems

    DTIC Science & Technology

    2014-06-05

...in C2 architectures was proposed using Petri nets (PNs) [10]. Liao [11] reported an architecture for...arises from the challenging and often-conflicting user requirements, scale, scope, and inter-connectivity with different large-scale networked teams and...resources can be easily modelled and reconfigured by the notion of a block matrix. At any time, the various missions of the networked team can be added

  18. Dynamical links between small- and large-scale mantle heterogeneity: Seismological evidence

    NASA Astrophysics Data System (ADS)

    Frost, Daniel A.; Garnero, Edward J.; Rost, Sebastian

    2018-01-01

We identify PKP•PKP scattered waves (also known as P′•P′) from earthquakes recorded at small-aperture seismic arrays at distances less than 65°. P′•P′ energy travels as a PKP wave through the core, up into the mantle, then scatters back down through the core to the receiver as a second PKP. P′•P′ waves are unique in that they allow scattering heterogeneities throughout the mantle to be imaged. We use array-processing methods to amplify low amplitude, coherent scattered energy signals and resolve their incoming direction. We deterministically map scattering heterogeneity locations from the core-mantle boundary to the surface. We use an extensive dataset with sensitivity to a large volume of the mantle and a location method allowing us to resolve and map more heterogeneities than have previously been possible, representing a significant increase in our understanding of small-scale structure within the mantle. Our results demonstrate that the distribution of scattering heterogeneities varies both radially and laterally. Scattering is most abundant in the uppermost and lowermost mantle, and a minimum in the mid-mantle, resembling the radial distribution of tomographically derived whole-mantle velocity heterogeneity. We investigate the spatial correlation of scattering heterogeneities with large-scale tomographic velocities, lateral velocity gradients, the locations of deep-seated hotspots and subducted slabs. In the lowermost 1500 km of the mantle, small-scale heterogeneities correlate with regions of low seismic velocity, high lateral seismic gradient, and proximity to hotspots. In the upper 1000 km of the mantle there is no significant correlation between scattering heterogeneity location and subducted slabs. Between 600 and 900 km depth, scattering heterogeneities are more common in the regions most remote from slabs, and close to hotspots. Scattering heterogeneities show an affinity for regions close to slabs within the upper 200 km of the
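The array-processing amplification described above is, at its core, delay-and-sum stacking: traces are time-shifted to align a coherent arrival and then averaged, so the coherent signal adds while incoherent noise cancels. A minimal sketch with integer sample delays (illustrative, not the authors' pipeline):

```python
def shift(trace, delay):
    """Shift a trace right by `delay` samples (left if negative), zero-padded."""
    if delay >= 0:
        return [0.0] * delay + trace[:len(trace) - delay]
    return trace[-delay:] + [0.0] * (-delay)

def delay_and_sum(traces, delays):
    """Undo each sensor's known delay, then average the aligned traces:
    coherent energy stacks constructively, incoherent noise averages down."""
    aligned = [shift(t, -d) for t, d in zip(traces, delays)]
    return [sum(col) / len(traces) for col in zip(*aligned)]

# Toy example: the same unit pulse arrives 0, 1 and 2 samples late
# at three sensors of a small-aperture array.
traces = [[1.0 if i == 3 + d else 0.0 for i in range(10)] for d in (0, 1, 2)]
stack = delay_and_sum(traces, [0, 1, 2])
```

Scanning the assumed delays over a range of slownesses and back-azimuths, and keeping the combination that maximizes the stack, is how an array resolves the incoming direction of weak scattered energy.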

  19. Large-scale expensive black-box function optimization

    NASA Astrophysics Data System (ADS)

    Rashid, Kashif; Bailey, William; Couët, Benoît

    2012-09-01

This paper presents the application of an adaptive radial basis function method to a computationally expensive black-box reservoir simulation model of many variables. An iterative proxy-based scheme is used to tune the control variables, distributed for finer control over a varying number of intervals covering the total simulation period, to maximize asset net present value (NPV). The method shows that large-scale simulation-based function optimization of several hundred variables is practical and effective.
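The adaptive proxy loop can be sketched as: fit a radial basis function interpolant to the expensive evaluations made so far, maximize the cheap proxy to propose the next control setting, evaluate the true function there, and refit. A minimal one-variable sketch (the Gaussian kernel and the toy objective are illustrative assumptions, not the paper's reservoir model):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (RBF kernel matrices are
    positive definite, so the system is well posed)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def rbf_proxy(xs, ys, eps=1.0):
    """Gaussian RBF interpolant passing through the sampled (x, y) pairs."""
    A = [[math.exp(-(eps * (xi - xj)) ** 2) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wj * math.exp(-(eps * (x - xj)) ** 2)
                         for wj, xj in zip(w, xs))

def surrogate_maximize(f, xs0, rounds=5):
    """Adaptive loop: refit the proxy, then evaluate f where the proxy peaks."""
    xs, ys = list(xs0), [f(x) for x in xs0]
    grid = [i / 100 for i in range(101)]  # candidate controls on [0, 1]
    for _ in range(rounds):
        proxy = rbf_proxy(xs, ys)
        x_new = max(grid, key=proxy)
        if any(abs(x_new - x) < 1e-9 for x in xs):
            break  # proxy maximum already sampled; stop refining
        xs.append(x_new)
        ys.append(f(x_new))
    best = max(range(len(ys)), key=lambda i: ys[i])
    return xs[best], ys[best]

# Toy "expensive" objective standing in for the reservoir simulator's NPV.
expensive_npv = lambda x: -(x - 0.6) ** 2
best_x, best_y = surrogate_maximize(expensive_npv, [0.0, 0.5, 1.0])
```

The point of the proxy is economy: each loop iteration spends exactly one expensive simulation, while all the optimization work happens on the cheap interpolant.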

  20. Presenting an Approach for Conducting Knowledge Architecture within Large-Scale Organizations

    PubMed Central

    Varaee, Touraj; Habibi, Jafar; Mohaghar, Ali

    2015-01-01

    Knowledge architecture (KA) establishes the basic groundwork for the successful implementation of a short-term or long-term knowledge management (KM) program. An example of KA is the design of a prototype before a new vehicle is manufactured. Due to a transformation to large-scale organizations, the traditional architecture of organizations is undergoing fundamental changes. This paper explores the main strengths and weaknesses in the field of KA within large-scale organizations and provides a suitable methodology and supervising framework to overcome specific limitations. This objective was achieved by applying and updating the concepts from the Zachman information architectural framework and the information architectural methodology of enterprise architecture planning (EAP). The proposed solution may be beneficial for architects in knowledge-related areas to successfully accomplish KM within large-scale organizations. The research method is descriptive; its validity is confirmed by performing a case study and polling the opinions of KA experts. PMID:25993414