Sample records for complex distributed constrained

  1. Structural and parametric uncertainty quantification in cloud microphysics parameterization schemes

    NASA Astrophysics Data System (ADS)

    van Lier-Walqui, M.; Morrison, H.; Kumjian, M. R.; Prat, O. P.; Martinkus, C.

    2017-12-01

    Atmospheric model parameterization schemes employ approximations to represent the effects of unresolved processes. These approximations are a source of error in forecasts, caused in part by considerable uncertainty about the optimal value of parameters within each scheme -- parametric uncertainty. Furthermore, there is uncertainty regarding the best choice of the overarching structure of the parameterization scheme -- structural uncertainty. Parameter estimation can constrain the first, but may struggle with the second because structural choices are typically discrete. We address this problem in the context of cloud microphysics parameterization schemes by creating a flexible framework wherein structural and parametric uncertainties can be simultaneously constrained. Our scheme makes no assumptions about drop size distribution shape or the functional form of parametrized process rate terms. Instead, these uncertainties are constrained by observations using a Markov Chain Monte Carlo sampler within a Bayesian inference framework. Our scheme, the Bayesian Observationally-constrained Statistical-physical Scheme (BOSS), has flexibility to predict various sets of prognostic drop size distribution moments as well as varying complexity of process rate formulations. We compare idealized probabilistic forecasts from versions of BOSS with varying levels of structural complexity. This work has applications in ensemble forecasts with model physics uncertainty, data assimilation, and cloud microphysics process studies.
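    The Bayesian machinery described above can be illustrated with a minimal random-walk Metropolis sampler: invented noisy observations of a hypothetical power-law process rate y = a·x^b constrain the parameters a and b. The data, rate law, and sampler tunings are all stand-ins for illustration, not part of BOSS.

```python
import math
import random

random.seed(0)

# Hypothetical process-rate law y = a * x**b with synthetic noisy
# "observations"; a and b play the role of scheme parameters.
a_true, b_true, sigma = 2.0, 1.5, 0.1
xs = [0.5 + 0.1 * i for i in range(20)]
ys = [a_true * x ** b_true + random.gauss(0.0, sigma) for x in xs]

def log_post(a, b):
    """Gaussian log-likelihood with flat priors restricted to a, b > 0."""
    if a <= 0.0 or b <= 0.0:
        return -math.inf
    return -sum((y - a * x ** b) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

# Random-walk Metropolis sampling of the posterior over (a, b).
a, b = 1.0, 1.0
lp = log_post(a, b)
samples = []
for step in range(40000):
    a_new, b_new = a + random.gauss(0, 0.02), b + random.gauss(0, 0.02)
    lp_new = log_post(a_new, b_new)
    if random.random() < math.exp(min(0.0, lp_new - lp)):  # accept/reject
        a, b, lp = a_new, b_new, lp_new
    if step >= 15000:                                      # discard burn-in
        samples.append((a, b))

a_mean = sum(s[0] for s in samples) / len(samples)
b_mean = sum(s[1] for s in samples) / len(samples)
print(round(a_mean, 2), round(b_mean, 2))
```

    The posterior means recover the generating parameters to within the sampler's Monte Carlo error.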

  2. An inexact log-normal distribution-based stochastic chance-constrained model for agricultural water quality management

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2018-05-01

    In this study, an inexact log-normal-based stochastic chance-constrained programming model was developed for solving the non-point source pollution issues caused by agricultural activities. Compared to the general stochastic chance-constrained programming model, the main advantage of the proposed model is that it allows random variables to be expressed as a log-normal distribution, rather than a general normal distribution, which avoids possible deviations in solutions caused by irrational parameter assumptions. The agricultural system management in the Erhai Lake watershed was used as a case study, where critical system factors, including rainfall and runoff amounts, show characteristics of a log-normal distribution. Several interval solutions were obtained under different constraint-satisfaction levels, which were useful in evaluating the trade-off between system economy and reliability. The applied results show that the proposed model could help decision makers to design optimal production patterns under complex uncertainties. The successful application of this model is expected to provide a good example for agricultural management in many other watersheds.
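    For a log-normally distributed right-hand side, a chance constraint has a closed-form deterministic equivalent via the log-normal quantile. The sketch below uses invented numbers (not from the Erhai Lake study) to convert "meet the constraint with probability alpha" into a hard bound.

```python
import math
from statistics import NormalDist

# Deterministic equivalent of a chance constraint with a log-normal
# right-hand side: require P(load <= R) >= alpha, where ln R ~ Normal(mu, sigma).
# mu, sigma, and alpha below are illustrative assumptions only.
mu, sigma = math.log(100.0), 0.4   # hypothetical runoff-capacity statistics
alpha = 0.9                        # constraint-satisfaction level

# P(R >= load) >= alpha  <=>  load <= exp(mu + sigma * z_{1-alpha})
z = NormalDist().inv_cdf(1.0 - alpha)
max_load = math.exp(mu + sigma * z)
print(round(max_load, 1))
```

    Raising alpha tightens the bound, which is exactly the system-economy versus reliability trade-off the interval solutions explore.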

  3. A Metrics-Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems

    DTIC Science & Technology

    2002-04-01

    Based Approach to Intrusion Detection System Evaluation for Distributed Real-Time Systems Authors: G. A. Fink, B. L. Chappell, T. G. Turner, and ... Distributed, Security. 1 Introduction Processing and cost requirements are driving future naval combat platforms to use distributed, real-time systems of ... distributed, real-time systems. As these systems grow more complex, the timing requirements do not diminish; indeed, they may become more constrained

  4. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
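    Element-quality comparisons of the kind described are often based on the radius-ratio metric. The sketch below is a generic implementation of that metric, not code from the paper: q = 1 for an equilateral triangle and q → 0 for a degenerate sliver.

```python
import math

def triangle_quality(p1, p2, p3):
    """Radius-ratio quality q = 2 * r_in / r_circ for a triangle:
    1.0 for an equilateral element, near 0 for a sliver."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    s = 0.5 * (a + b + c)                      # semi-perimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron
    if area == 0.0:
        return 0.0
    r_in = area / s                            # inradius
    r_circ = a * b * c / (4.0 * area)          # circumradius
    return 2.0 * r_in / r_circ

# Equilateral triangle versus a flat sliver.
print(round(triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)), 3))
print(round(triangle_quality((0, 0), (1, 0), (0.5, 0.01)), 3))
```

    Histogramming q over all elements of a mesh gives the kind of quality distribution the paper uses to compare methods.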

  5. Using Remote Sensing Data to Constrain Models of Fault Interactions and Plate Boundary Deformation

    NASA Astrophysics Data System (ADS)

    Glasscoe, M. T.; Donnellan, A.; Lyzenga, G. A.; Parker, J. W.; Milliner, C. W. D.

    2016-12-01

    Determining the distribution of slip and the behavior of fault interactions at plate boundaries is a complex problem. Field and remotely sensed data often lack the coverage necessary to fully resolve fault behavior. However, realistic physical models constrained by observed data, such as GPS, InSAR, and SfM, may be used to more accurately characterize the complex behavior of faults. These results will improve the utility of combined models and data for estimating earthquake potential and characterizing plate boundary behavior. Plate boundary faults exhibit complex behavior, with partitioned slip and distributed deformation. To investigate what fraction of slip becomes distributed deformation off major faults, we examine a model fault embedded within a damage zone of reduced elastic rigidity that narrows with depth, and forward model the slip and resulting surface deformation. The fault segments and slip distributions are modeled using the JPL GeoFEST software. GeoFEST (Geophysical Finite Element Simulation Tool) is a two- and three-dimensional finite element software package for modeling solid stress and strain in geophysical and other continuum domain applications [Lyzenga et al., 2000; Glasscoe et al., 2004; Parker et al., 2008, 2010]. New methods to advance geohazards research using computer simulations and remotely sensed observations for model validation are required to understand fault slip, the complex nature of fault interaction, and plate boundary deformation. These models help enhance our understanding of the underlying processes, such as transient deformation and fault creep, and can aid in developing observation strategies for sUAV, airborne, and upcoming satellite missions seeking to determine how faults behave and interact and to assess their associated hazard. Models will also help to characterize this behavior, which will enable improvements in hazard estimation.
Validating the model results against remotely sensed observations will allow us to better constrain fault zone rheology and physical properties, having implications for the overall understanding of earthquake physics, fault interactions, plate boundary deformation and earthquake hazard, preparedness and risk reduction.

  6. A Near-Optimal Distributed QoS Constrained Routing Algorithm for Multichannel Wireless Sensor Networks

    PubMed Central

    Lin, Frank Yeong-Sung; Hsiao, Chiu-Han; Yen, Hong-Hsu; Hsieh, Yu-Jen

    2013-01-01

    One of the important applications in Wireless Sensor Networks (WSNs) is video surveillance, which includes the tasks of video data processing and transmission. Processing and transmission of image and video data in WSNs has attracted a lot of attention in recent years; this is the domain of Wireless Visual Sensor Networks (WVSNs). WVSNs are distributed intelligent systems for collecting image or video data with unique performance, complexity, and quality of service challenges. WVSNs consist of a large number of battery-powered and resource-constrained camera nodes. End-to-end delay is a very important Quality of Service (QoS) metric for video surveillance applications in WVSNs. How to meet stringent delay QoS in resource-constrained WVSNs is a challenging issue that requires novel distributed and collaborative routing strategies. This paper proposes a Near-Optimal Distributed QoS Constrained (NODQC) routing algorithm to achieve an end-to-end route with lower delay and higher throughput. A Lagrangian Relaxation (LR)-based routing metric that considers both the “system perspective” and the “user perspective” is proposed to determine near-optimal routing paths that satisfy end-to-end delay constraints with high system throughput. The empirical results show that the NODQC routing algorithm outperforms others in terms of higher system throughput with lower average end-to-end delay and delay jitter. This paper shows, for the first time, how to meet the delay QoS while achieving higher system throughput in stringently resource-constrained WVSNs.
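    The Lagrangian Relaxation idea behind such routing metrics can be sketched on a toy graph: the delay constraint is priced into the link weight by a multiplier lambda, updated by subgradient steps, while the best feasible path found is retained. The topology, costs, and step size below are invented; this is not the NODQC algorithm itself.

```python
import heapq

# Toy delay-constrained least-cost routing via Lagrangian relaxation.
# graph: node -> list of (neighbor, cost, delay); values are made up.
graph = {
    'S': [('A', 1, 4), ('B', 3, 1)],
    'A': [('T', 1, 4)],
    'B': [('T', 3, 1)],
    'T': [],
}
DELAY_MAX = 3

def shortest_path(lam):
    """Dijkstra on the relaxed weight cost + lam * delay, from S to T."""
    dist, prev = {'S': 0.0}, {}
    pq = [(0.0, 'S')]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, c, t in graph[u]:
            if d + c + lam * t < dist.get(v, float('inf')):
                dist[v] = d + c + lam * t
                prev[v] = u
                heapq.heappush(pq, (dist[v], v))
    path, node = ['T'], 'T'
    while node != 'S':
        node = prev[node]
        path.append(node)
    return path[::-1]

def metrics(path):
    """Total (cost, delay) along a path."""
    cost = delay = 0
    for u, w in zip(path, path[1:]):
        c, t = next((c, t) for v, c, t in graph[u] if v == w)
        cost, delay = cost + c, delay + t
    return cost, delay

lam, best = 0.0, None
for _ in range(30):                       # subgradient ascent on lam
    path = shortest_path(lam)
    cost, delay = metrics(path)
    if delay <= DELAY_MAX and (best is None or cost < metrics(best)[0]):
        best = path                       # keep best feasible route
    lam = max(0.0, lam + 0.1 * (delay - DELAY_MAX))

print(best, metrics(best))
```

    The cheap route S-A-T violates the delay bound, so the multiplier grows until the feasible route S-B-T becomes attractive.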

  7. Astrophysical Model Selection in Gravitational Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.
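    The hierarchical idea can be miniaturized: each "detection" is a noisy measurement of a source parameter drawn from a population with an unknown mean, and marginalizing the per-source parameters analytically leaves a simple population likelihood. The one-parameter population, noise levels, and grid below are invented for illustration.

```python
import random

random.seed(1)

# Each observation x_i is a noisy measurement of a source parameter that
# is itself drawn from Normal(mu_pop, sig_pop). Marginally,
# x_i ~ Normal(mu_pop, sqrt(sig_pop**2 + sig_noise**2)).
mu_true, sig_pop, sig_noise = 0.6, 0.2, 0.1
xs = [random.gauss(random.gauss(mu_true, sig_pop), sig_noise)
      for _ in range(5000)]

sig_tot2 = sig_pop ** 2 + sig_noise ** 2
S, n = sum(xs), len(xs)

def loglike(mu):
    """Population log-likelihood up to a mu-independent constant."""
    return -(n * mu * mu - 2.0 * mu * S) / (2.0 * sig_tot2)

grid = [i / 1000 for i in range(200, 1001)]   # candidate mu_pop values
mu_hat = max(grid, key=loglike)
print(round(mu_hat, 2))
```

    With a few thousand resolved sources the population mean is pinned down to well under a percent of its value, mirroring the few-percent constraints quoted above.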

  8. Time-Lapse 3D Inversion of Complex Conductivity Data Using an Active Time Constrained (ATC) Approach

    EPA Science Inventory

    Induced polarization (more precisely the magnitude and the phase of the impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and ...

  9. Random versus maximum entropy models of neural population activity

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry

    2017-04-01

    The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
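    A pairwise maximum-entropy model of the kind fitted above can be sketched for three binary neurons: p(s) is proportional to exp(sum_i h_i s_i + sum_{i<j} J_ij s_i s_j), with fields h and couplings J adjusted by gradient ascent until the model's means and pairwise correlations match the data's. The empirical distribution over the eight spike patterns below is invented.

```python
import itertools
import math

STATES = list(itertools.product([0, 1], repeat=3))
PAIRS = [(0, 1), (0, 2), (1, 2)]
# Hypothetical empirical distribution over the 8 spike patterns.
p_data = [0.30, 0.10, 0.10, 0.05, 0.10, 0.05, 0.05, 0.25]

def moments(p):
    """Mean firing rates and pairwise correlations of a distribution p."""
    means = [sum(pk * s[i] for pk, s in zip(p, STATES)) for i in range(3)]
    corrs = [sum(pk * s[i] * s[j] for pk, s in zip(p, STATES))
             for i, j in PAIRS]
    return means + corrs

target = moments(p_data)
h, J = [0.0] * 3, [0.0] * 3

for _ in range(20000):
    logw = [sum(h[i] * s[i] for i in range(3))
            + sum(J[k] * s[i] * s[j] for k, (i, j) in enumerate(PAIRS))
            for s in STATES]
    z = sum(math.exp(w) for w in logw)
    p_model = [math.exp(w) / z for w in logw]
    mom = moments(p_model)
    for k in range(6):                    # gradient step on h, then J
        (h if k < 3 else J)[k % 3] += 0.5 * (target[k] - mom[k])

err = max(abs(a - b) for a, b in zip(moments(p_model), target))
print(round(err, 6))
```

    The fitted distribution reproduces the constrained observables; how well it then predicts the full pattern probabilities is exactly what the study compares against random ensembles.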

  10. Explaining spatial variability in stream habitats using both natural and management-influenced landscape predictors

    Treesearch

    K.J. Anlauf; D.W. Jensen; K.M. Burnett; E.A. Steel; K. Christiansen; J.C. Firman; B.E. Feist; D.P. Larsen

    2011-01-01

    1. The distribution and composition of in-stream habitats are reflections of landscape scale geomorphic and climatic controls. Correspondingly, Pacific salmon (Oncorhynchus spp.) are largely adapted to and constrained by the quality and complexity of those in-stream habitat conditions. The degree to which lands have been fragmented and managed can...

  11. Complete characterization of the constrained geometry bimolecular reaction O(1D)+N2O→NO+NO by three-dimensional velocity map imaging

    NASA Astrophysics Data System (ADS)

    Gödecke, Niels; Maul, Christof; Chichinin, Alexey I.; Kauczok, Sebastian; Gericke, Karl-Heinz

    2009-08-01

    The bimolecular reaction O(1D)+N2O→NO+NO was photoinitiated in the (N2O)2 dimer at a wavelength of 193 nm and was investigated by three-dimensional (3D) velocity map imaging. State-selective 3D momentum vector distributions were monitored and analyzed. For the first time, kinetic energy resolution and stereodynamic information about the reaction under constrained geometry conditions is available. Directly observable NO products exhibit moderate vibrational excitation and are rotationally and translationally cold. Speed and spatial distributions suggest a pronounced backward scattering of the observed products with respect to the direction of motion of the O(1D) atom. Forward-scattered partner products, which are not directly detectable, are also translationally cold, but carry very large internal energy as vibration or rotation. The results confirm and extend previous studies on the complex-initiated reaction system. The restricted geometry of the van der Waals complex seems to favor an abstraction reaction of the terminal nitrogen atom by the O(1D) atom, which is in striking contrast to the behavior observed for the unrestricted gas phase reaction under bulk conditions.

  12. Distributed Soil Moisture Estimation in a Mountainous Semiarid Basin: Constraining Soil Parameter Uncertainty through Field Studies

    NASA Astrophysics Data System (ADS)

    Yatheendradas, S.; Vivoni, E.

    2007-12-01

    A common practice in distributed hydrological modeling is to assign soil hydraulic properties based on coarse textural datasets. For semiarid regions with poor soil information, the performance of a model can be severely constrained due to the high model sensitivity to near-surface soil characteristics. Neglecting the uncertainty in soil hydraulic properties, their spatial variation and their naturally-occurring horizonation can potentially affect the modeled hydrological response. In this study, we investigate such effects using the TIN-based Real-time Integrated Basin Simulator (tRIBS) applied to the mid-sized (100 km2) Sierra Los Locos watershed in northern Sonora, Mexico. The Sierra Los Locos basin is characterized by complex mountainous terrain leading to topographic organization of soil characteristics and ecosystem distributions. We focus on simulations during the 2004 North American Monsoon Experiment (NAME) when intensive soil moisture measurements and aircraft- based soil moisture retrievals are available in the basin. Our experiments focus on soil moisture comparisons at the point, topographic transect and basin scales using a range of different soil characterizations. We compare the distributed soil moisture estimates obtained using (1) a deterministic simulation based on soil texture from coarse soil maps, (2) a set of ensemble simulations that capture soil parameter uncertainty and their spatial distribution, and (3) a set of simulations that conditions the ensemble on recent soil profile measurements. Uncertainties considered in near-surface soil characterization provide insights into their influence on the modeled uncertainty, into the value of soil profile observations, and into effective use of on-going field observations for constraining the soil moisture response uncertainty.

  13. Fixman compensating potential for general branched molecules

    NASA Astrophysics Data System (ADS)

    Jain, Abhinandan; Kandel, Saugat; Wagner, Jeffrey; Larsen, Adrien; Vaidehi, Nagarajan

    2013-12-01

    The technique of constraining high-frequency modes of molecular motion is an effective way to increase simulation time scale and improve conformational sampling in molecular dynamics simulations. However, it has been shown that constraints on higher-frequency modes such as bond lengths and bond angles stiffen the molecular model, thereby introducing systematic biases in the statistical behavior of the simulations. Fixman proposed a compensating potential to remove such biases in the thermodynamic and kinetic properties calculated from dynamics simulations. Previous implementations of the Fixman potential have been limited to short serial chain systems. In this paper, we present a spatial operator algebra based algorithm to calculate the Fixman potential and its gradient within constrained dynamics simulations for branched topology molecules of any size. Our numerical studies on molecules of increasing complexity validate our algorithm by demonstrating recovery of the dihedral angle probability distribution function for systems that range in complexity from serial chains to protein molecules. We observe that the Fixman compensating potential recovers the free energy surface of a serial chain polymer, thus annulling the biases caused by constraining the bond lengths and bond angles. The inclusion of the Fixman potential entails only a modest increase in the computational cost of these simulations. We believe that this work represents the first instance where the Fixman potential has been used for general branched systems, and it establishes the viability of its use in constrained dynamics simulations of proteins and other macromolecules.
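    In the convention usually attributed to Fixman (1974), the compensating potential is a pseudo-potential built from the metric tensor of the constrained coordinates:

```latex
U_{\mathrm{Fix}}(\boldsymbol{\theta}) \;=\; \frac{k_B T}{2}\,\ln \det \mathbf{G}(\boldsymbol{\theta})
```

    where \(\mathbf{G}\) is the mass-metric tensor associated with the hard (constrained) degrees of freedom and \(\boldsymbol{\theta}\) are the remaining soft coordinates; running constrained dynamics on \(V + U_{\mathrm{Fix}}\) then reproduces the statistics of the flexible model. The sign convention and choice of metric tensor vary between formulations, so this is a sketch of the standard form rather than the paper's exact expression.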

  14. Complex organic molecules during low-mass star formation: Pilot survey results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öberg, Karin I.; Graninger, Dawn; Lauck, Trish, E-mail: koberg@cfa.harvard.edu

    Complex organic molecules (COMs) are known to be abundant toward some low-mass young stellar objects (YSOs), but how these detections relate to typical COM abundances is not yet understood. We aim to constrain the frequency distribution of COMs during low-mass star formation, beginning with this pilot survey of COM lines toward six embedded YSOs using the IRAM 30 m Telescope. The sample was selected from the Spitzer c2d ice sample and covers a range of ice abundances. We detect multiple COMs, including CH3CN, toward two of the YSOs, and tentatively toward a third. Abundances with respect to CH3OH vary between 0.7% and 10%. This sample is combined with previous COM observations and upper limits to obtain frequency distributions of CH3CN, HCOOCH3, CH3OCH3, and CH3CHO. We find that for all molecules more than 50% of the sample have detections or upper limits of 1%-10% with respect to CH3OH. Moderate abundances of COMs thus appear common during the early stages of low-mass star formation. A larger sample is required, however, to quantify the COM distributions, as well as to constrain the origins of observed variations across the sample.

  15. A new golden age: testing general relativity with cosmology.

    PubMed

    Bean, Rachel; Ferreira, Pedro G; Taylor, Andy

    2011-12-28

    Gravity drives the evolution of the Universe and is at the heart of its complexity. Einstein's field equations can be used to work out the detailed dynamics of space and time and to calculate the emergence of large-scale structure in the distribution of galaxies and radiation. Over the past few years, it has become clear that cosmological observations can be used not only to constrain different world models within the context of Einstein gravity but also to constrain the theory of gravity itself. In this article, we look at different aspects of this new field in which cosmology is used to test theories of gravity with a wide range of observations.

  16. Habitat-based constraints on food web structure and parasite life cycles.

    PubMed

    Rossiter, Wayne; Sukhdeo, Michael V K

    2014-04-01

    Habitat is frequently implicated as a powerful determinant of community structure and species distributions, but few studies explicitly evaluate the relationship between habitat-based patterns of species' distributions and the presence or absence of trophic interactions. The complex (multi-host) life cycles of parasites are directly affected by these factors, but almost no data exist on the role of habitat in constraining parasite-host interactions at the community level. In this study the relationship(s) between species abundances, distributions and trophic interactions (including parasitism) were evaluated in the context of habitat structure (classic geomorphic designations of pools, riffles and runs) in a riverine community (Raritan River, Hunterdon County, NJ, USA). We report 121 taxa collected over a 2-year period, and compare the observed food web patterns to null model expectations. The results show that top predators are constrained to particular habitat types, and that species' distributions are biased towards pool habitats. However, our null model (which incorporates cascade model assumptions) accurately predicts the observed patterns of trophic interactions. Thus, habitat strongly dictates species distributions, and patterns of trophic interactions arise as a consequence of these distributions. Additionally, we find that hosts utilized in parasite life cycles are more overlapping in their distributions, and this pattern is more pronounced among those involved in trophic transmission. We conclude that habitat structure may be a strong predictor of parasite transmission routes, particularly within communities that occupy heterogeneous habitats.

  17. The presence of opportunistic pathogens, Legionella spp., L. pneumophila and Mycobacterium avium complex, in South Australian reuse water distribution pipelines.

    PubMed

    Whiley, H; Keegan, A; Fallowfield, H; Bentham, R

    2015-06-01

    Water reuse has become increasingly important for sustainable water management. Currently, its application is primarily constrained by the potential health risks. Presently there is limited knowledge regarding the presence and fate of opportunistic pathogens along reuse water distribution pipelines. In this study opportunistic human pathogens Legionella spp., L. pneumophila and Mycobacterium avium complex were detected using real-time polymerase chain reaction along two South Australian reuse water distribution pipelines at maximum concentrations of 10⁵, 10³ and 10⁵ copies/mL, respectively. During the summer period of sampling the concentration of all three organisms significantly increased (P < 0.05) along the pipeline, suggesting multiplication and hence viability. No seasonality in the decrease in chlorine residual along the pipelines was observed. This suggests that the combination of reduced chlorine residual and increased water temperature promoted the presence of these opportunistic pathogens.

  18. Functionality, Complexity, and Approaches to Assessment of Resilience Under Constrained Energy and Information

    DTIC Science & Technology

    2015-03-26

    albeit powerful, method available for exploring CAS. As discussed above, there are many useful mathematical tools appropriate for CAS modeling. Agent-based ... cells, telephone calls, and sexual contacts approach power-law distributions. [48] Networks in general are robust against random failures, but ... targeted failures can have powerful effects – provided the targeter has a good understanding of the network structure. Some argue (convincingly) that all

  19. Constrained Surface Complexation Modeling: Rutile in RbCl, NaCl, and NaCF 3SO 3 Media to 250 °C

    DOE PAGES

    Machesky, Michael L.; Předota, Milan; Ridley, Moira K.; ...

    2015-06-01

    In this paper, a comprehensive set of molecular-level results, primarily from classical molecular dynamics (CMD) simulations, are used to constrain CD-MUSIC surface complexation model (SCM) parameters describing rutile powder titrations conducted in RbCl, NaCl, and NaTr (Tr = triflate, CF3SO3−) electrolyte media from 25 to 250 °C. Rb+ primarily occupies the innermost tetradentate binding site on the rutile (110) surface at all temperatures (25, 150, 250 °C) and negative charge conditions (-0.1 and -0.2 C/m2) probed via CMD simulations, reflecting the small hydration energy of this large, monovalent cation. Consequently, variable SCM parameters (Stern-layer capacitance values and intrinsic Rb+ binding constants) were adjusted relatively easily to satisfactorily match the CMD and titration data. The larger hydration energy of Na+ results in a more complex inner-sphere distribution, which shifts from bidentate to tetradentate binding with increasing negative charge and temperature, and this distribution was not matched well for both negative charge conditions, which may reflect limitations in the CMD and/or SCM approaches. Finally, the CMD axial density profiles for Rb+ and Na+ reveal that peak binding distances shift toward the surface with increasing negative charge, suggesting that the CD-MUSIC framework may be improved by incorporating CD or Stern-layer capacitance values that vary with charge.

  1. Designing management strategies for carbon dioxide storage and utilization under uncertainty using inexact modelling

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong

    2017-06-01

    Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it treats random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
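    The bi-random construction has a simple closed-form consequence that can be checked by simulation: if X ~ Normal(M, sigma^2) with M ~ Normal(mu0, sigma0^2), then marginally X ~ Normal(mu0, sigma^2 + sigma0^2). The numbers below are illustrative, not from the CCUS case study.

```python
import random
import statistics

random.seed(2)

# Bi-random variable: draw the mean M first, then X around it.
# Marginally X ~ Normal(mu0, sqrt(sigma**2 + sigma0**2)); values invented.
mu0, sigma0, sigma = 50.0, 3.0, 4.0
xs = [random.gauss(random.gauss(mu0, sigma0), sigma) for _ in range(200000)]

mean_x = statistics.mean(xs)
sd_x = statistics.stdev(xs)
print(round(mean_x, 1), round(sd_x, 1))
```

    The sample mean stays at mu0 while the spread combines both layers of randomness (here sqrt(16 + 9) = 5), which is what lets the chance constraints be reduced to deterministic equivalents.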

  2. Calibrating binary lumped parameter models

    NASA Astrophysics Data System (ADS)

    Morgenstern, Uwe; Stewart, Mike

    2017-04-01

    Groundwater at its discharge point is a mixture of water from short and long flowlines, and therefore has a distribution of ages rather than a single age. Various transfer functions describe the distribution of ages within the water sample. Lumped parameter models (LPMs), which are mathematical models of water transport based on simplified aquifer geometry and flow configuration, can account for such mixing of groundwater of different ages, usually representing the age distribution with two parameters: the mean residence time and the mixing parameter. Simple lumped parameter models can often closely match the measured time-varying age tracer concentrations, and therefore are a good representation of the groundwater mixing at these sites. Usually a few tracer data (time series and/or multi-tracer) can constrain both parameters. With the building of larger data sets of age tracer data throughout New Zealand, including tritium, SF6, CFCs, and recently Halon-1301, and time series of these tracers, we realised that for a number of wells the groundwater ages inferred using a simple lumped parameter model were inconsistent between the different tracer methods. Contamination or degradation of individual tracers is unlikely because the different tracers show consistent trends over years and decades. This points toward a more complex mixing of groundwaters with different ages in such wells than is represented by the simple lumped parameter models. Binary (or compound) mixing models are able to represent more complex mixing, combining water of two different age distributions. The problem with these models is that they usually have five parameters, which makes them data-hungry and therefore makes it difficult to constrain all parameters. Two or more age tracers with different input functions, with multiple measurements over time, can provide the information required to constrain the parameters of the binary mixing model.
We obtained excellent results using tritium time series encompassing the passage of the bomb tritium through the aquifer, and SF6, whose input function currently has a steep gradient. We will show age tracer data from drinking water wells that enabled identification of young water ingression into wells, which poses the risk of bacteriological contamination from the surface into the drinking water.
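
    The tracer convolution described above (an atmospheric input history convolved with an assumed age distribution) can be sketched in a few lines. This is a minimal illustration using a single lumped parameter model, the exponential mixing model, not the authors' code; the function names, yearly time step, decay handling, and truncation horizon are all assumptions.

```python
import math

def exponential_age_pdf(tau, t):
    """Exponential-mixing age distribution with mean residence time tau (years)."""
    return math.exp(-t / tau) / tau

def convolve_tracer(c_in, tau, half_life=None, dt=1.0, horizon=200):
    """Predicted tracer concentration at the well: the atmospheric input
    history c_in (list, most recent year first, yearly steps) convolved
    with the age distribution, with radioactive decay if half_life is given.
    Illustrative sketch only; a hypothetical helper, not published code."""
    lam = math.log(2) / half_life if half_life else 0.0
    total = 0.0
    n = min(len(c_in), horizon)
    for i in range(n):
        t = i * dt
        total += c_in[i] * exponential_age_pdf(tau, t) * math.exp(-lam * t) * dt
    return total
```

    For a constant input the output approaches the input value, and adding decay (as for tritium) lowers the predicted concentration, which is how mismatched tracers can reveal a mixing model that is too simple.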

  3. Vehicle routing problem with time windows using natural inspired algorithms

    NASA Astrophysics Data System (ADS)

    Pratiwi, A. B.; Pratama, A.; Sa’diyah, I.; Suprajitno, H.

    2018-03-01

    Distribution of goods requires a strategy that minimizes the total cost of operational activities, subject to several constraints: the capacity of the vehicles and the service time windows of the customers. The resulting Vehicle Routing Problem with Time Windows (VRPTW) is a complex constrained problem. This paper proposes nature-inspired algorithms for handling the constraints of VRPTW, namely the Bat Algorithm and Cat Swarm Optimization. The Bat Algorithm is hybridized with Simulated Annealing: the worst solution of the Bat Algorithm is replaced by the solution from Simulated Annealing. Cat Swarm Optimization, an algorithm based on the behavior of cats, is improved using the Crow Search Algorithm for simpler and faster convergence. Computational results show that these algorithms perform well in finding the minimal total distance. A larger population improves computational performance. The improved Cat Swarm Optimization with Crow Search gives better performance than the hybridization of Bat Algorithm and Simulated Annealing on large instances.
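
    The two constraint types named above (vehicle capacity and customer time windows) are what any candidate route must satisfy before its distance is worth comparing. Below is a minimal feasibility-and-cost check for one route, not the paper's metaheuristics; the function name, data layout, and the tiny instance in the usage note are invented.

```python
def route_feasible_cost(route, dist, demand, windows, service, capacity):
    """Check capacity and time-window constraints of a single route
    (list of customer indices; index 0 is the depot) and return
    (feasible, total_distance). Hypothetical helper for illustration."""
    load = sum(demand[c] for c in route)
    if load > capacity:
        return False, float("inf")
    t, total, prev = 0.0, 0.0, 0
    for c in route:
        travel = dist[prev][c]
        total += travel
        t = max(t + travel, windows[c][0])   # wait if arriving early
        if t > windows[c][1]:                # window violated
            return False, float("inf")
        t += service[c]
        prev = c
    total += dist[prev][0]                   # return to depot
    return True, total
```

    For a depot plus two customers with symmetric distances, the route [1, 2] that fits the vehicle capacity returns its total distance; shrinking the capacity makes the same route infeasible.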

  4. Inverse and Predictive Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syracuse, Ellen Marie

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple (one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions) to the complex (multidimensional models that are constrained by several types of data and result in more accurate predictions). While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  5. STABILITY AND STOICHIOMETRY OF BILAYER PHOSPHOLIPID-CHOLESTEROL COMPLEXES: RELATIONSHIP TO CELLULAR STEROL DISTRIBUTION AND HOMEOSTASIS

    PubMed Central

    Lange, Yvonne; Ali Tabei, S. M.; Ye, Jin; Steck, Theodore L.

    2013-01-01

    Does cholesterol distribute among intracellular compartments by passive equilibration down its chemical gradient? If so, its distribution should reflect the relative cholesterol affinity of the constituent membrane phospholipids as well as their ability to form stoichiometric cholesterol complexes. We tested this hypothesis by analyzing the reactivity to cholesterol oxidase of large unilamellar vesicles (LUVs) containing biological phospholipids plus varied cholesterol. The rates of cholesterol oxidation differed among the various phospholipid environments by roughly four orders of magnitude. Furthermore, accessibility to the enzyme increased by orders of magnitude at cholesterol thresholds that suggested stoichiometries of association of 1:1, 2:3 or 1:2 cholesterol:phospholipid (mol:mol). Cholesterol accessibility above the threshold was still constrained by its particular phospholipid environment. One phospholipid, 1-stearoyl-2-oleoyl-sn-glycero-3-phosphatidylserine, exhibited no threshold. The analysis suggested values for the relative stabilities of the cholesterol-phospholipid complexes and for the fractions of bilayer cholesterol not in complexes at the threshold equivalence points; predictably, the saturated phosphorylcholine species had the lowest stoichiometries and the strongest affinities for cholesterol. These results were in general agreement with the equilibrium distribution of cholesterol between the various LUVs and methyl-β-cyclodextrin. In addition, the properties of the cholesterol in intact human red blood cells matched predictions made from LUVs of the corresponding composition. These results support a passive mechanism for the intracellular distribution of cholesterol that can provide a signal for its homeostatic regulation. PMID:24000774

  6. Formation, Detection and the Distribution of Complex Organic Molecules with the Atacama Large Millimeter/submillimeter Array (ALMA)

    NASA Astrophysics Data System (ADS)

    Remijan, Anthony John

    2015-08-01

    The formation and distribution of complex organic material in astronomical environments continues to be a focused research area in astrochemistry. For several decades now, emphasis has been placed on the millimeter/submillimeter regime of the radio spectrum for trying to detect new molecular species and to constrain the chemical formation routes of complex molecules by comparing and contrasting their relative distributions towards varying astronomical environments. This effort has been extremely laborious, as millimeter/submillimeter facilities have only been able to detect and map the distribution of the strongest transition(s) of the simplest organic molecules. Even then, these single-transition "chemical maps" have had very low spatial resolution because early millimeter/submillimeter facilities did not have access to broadband spectral coverage or the imaging capabilities to truly ascertain the morphology of the molecular emission. In the era of ALMA, these limitations have been greatly lifted. Broadband spectral line surveys now hold the key to uncovering the full molecular complexity in astronomical environments. In addition, searches for complex organic material are no longer limited to investigating the strongest lines of the simplest molecules toward the strongest sources of emission in the Galaxy. ALMA is ushering in a new era of exploration: the search for complex molecules now extends to a wider suite of sources in the Galaxy, and our understanding of the formation of this complex material will be greatly increased as a result. This presentation will highlight the current and future ALMA capabilities in the search for complex molecules in astronomical environments, highlight the recent searches that ALMA scientists have conducted since the start of ALMA Early Science, and provide the motivation for the next suite of astronomical searches to investigate our pre-biotic origins in the universe.

  7. The Complex Refractive Index of Volcanic Ash Aerosol Retrieved From Spectral Mass Extinction

    NASA Astrophysics Data System (ADS)

    Reed, Benjamin E.; Peters, Daniel M.; McPheat, Robert; Grainger, R. G.

    2018-01-01

    The complex refractive indices of eight volcanic ash samples, chosen to have a representative range of SiO2 contents, were retrieved from simultaneous measurements of their spectral mass extinction coefficient and size distribution. The mass extinction coefficients, at 0.33-19 μm, were measured using two optical systems: a Fourier transform spectrometer in the infrared and two diffraction grating spectrometers covering visible and ultraviolet wavelengths. The particle size distribution was measured using a scanning mobility particle sizer and an optical particle counter; values for the effective radius of ash particles measured in this study varied from 0.574 to 1.16 μm. Verification retrievals on high-purity silica aerosol demonstrated that the Rayleigh continuous distribution of ellipsoids (CDEs) scattering model significantly outperformed Mie theory in retrieving the complex refractive index, when compared to literature values. Assuming the silica particles provided a good analogue of volcanic ash, the CDE scattering model was applied to retrieve the complex refractive index of the eight ash samples. The Lorentz formulation of the complex refractive index was used within the retrievals as a convenient way to ensure consistency with the Kramers-Kronig relation. The short-wavelength limit of the electric susceptibility was constrained by using independently measured reference values of the complex refractive index of the ash samples at a visible wavelength. The retrieved values of the complex refractive indices of the ash samples showed considerable variation, highlighting the importance of using accurate refractive index data in ash cloud radiative transfer models.
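
    The Lorentz formulation mentioned above can be illustrated directly: the permittivity is written as a sum of damped oscillators, and its square root gives the complex refractive index m = n + ik, consistent with the Kramers-Kronig relation by construction. A sketch under assumed conventions; the function name and oscillator parameters are hypothetical, not the retrieved ash values.

```python
import cmath

def lorentz_refractive_index(wavenumber, eps_inf, oscillators):
    """Complex refractive index m = n + ik from a sum of Lorentz oscillators.
    Each oscillator is (strength S, centre wavenumber w0, damping gamma), in
    consistent wavenumber units. Because the real and imaginary parts come
    from one analytic permittivity, Kramers-Kronig consistency is automatic."""
    w = wavenumber
    eps = complex(eps_inf, 0.0)
    for S, w0, gamma in oscillators:
        eps += S * w0**2 / complex(w0**2 - w**2, -gamma * w)
    return cmath.sqrt(eps)
```

    With no oscillators this reduces to the non-absorbing short-wavelength limit m = sqrt(eps_inf); at an oscillator's resonance the imaginary part (absorption) is positive.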

  8. Respiratory Chain Complexes in Dynamic Mitochondria Display a Patchy Distribution in Live Cells

    PubMed Central

    Muster, Britta; Kohl, Wladislaw; Wittig, Ilka; Strecker, Valentina; Joos, Friederike; Haase, Winfried; Bereiter-Hahn, Jürgen; Busch, Karin

    2010-01-01

    Background Mitochondria, the main suppliers of cellular energy, are dynamic organelles that fuse and divide frequently. Constraining these processes impairs mitochondrial function and is closely linked to certain neurodegenerative diseases. It is proposed that functional mitochondrial dynamics allows the exchange of compounds, thereby providing a rescue mechanism. Methodology/Principal Findings The question discussed in this paper is whether fusion and fission of mitochondria in different cell lines result in re-localization of respiratory chain (RC) complexes and of the ATP synthase. This was addressed by fusing cells containing mitochondria with respiratory complexes labelled with different fluorescent proteins and resolving their time-dependent re-localization in living cells. We found a complete reshuffling of RC complexes throughout the entire chondriome in single HeLa cells within 2–3 h by organelle fusion and fission. Polykaryons of fused cells completely re-mixed their RC complexes in 10–24 h in a progressive way. In contrast to the recently described homogeneous mixing of matrix-targeted proteins or outer membrane proteins, however, the distribution of RC complexes and ATP synthase in fused hybrid mitochondria was not homogeneous but patterned. Thus, complete equilibration of respiratory chain complexes as integral inner mitochondrial membrane complexes is a slow process compared with matrix proteins, probably limited by complete fusion. In co-expressing cells, complex II is more homogeneously distributed than complexes I and V. Indeed, this result argues for higher mobility and less integration in supercomplexes. Conclusion/Significance Our results clearly demonstrate that mitochondrial fusion and fission dynamics favours the re-mixing of all RC complexes within the chondriome. This permanent mixing avoids a static situation with a fixed composition of RC complexes per mitochondrion. PMID:20689601

  9. Delineating Hydrofacies Spatial Distribution by Integrating Ensemble Data Assimilation and Indicator Geostatistics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Song, Xuehang; Chen, Xingyuan; Ye, Ming

    2015-07-01

    This study develops a new framework of facies-based data assimilation for characterizing the spatial distribution of hydrofacies and estimating their associated hydraulic properties. This framework couples ensemble data assimilation with a transition probability-based geostatistical model via a parameterization based on a level set function. The nature of ensemble data assimilation makes the framework efficient and flexible enough to be integrated with various types of observation data. The transition probability-based geostatistical model keeps the updated hydrofacies distributions under geological constraints. The framework is illustrated with a two-dimensional synthetic study that estimates the hydrofacies spatial distribution and the permeability of each hydrofacies from transient head data. Our results show that the proposed framework can characterize hydrofacies distribution and associated permeability with adequate accuracy even with limited direct measurements of hydrofacies. Our study provides a promising starting point for hydrofacies delineation in complex real problems.
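
    The ensemble-update half of such a framework can be sketched for the simplest possible case: a directly observed scalar state. This is a generic deterministic-nudge ensemble step under invented names; the paper's coupling to the level set parameterization and transition-probability geostatistics is not reproduced here.

```python
def enkf_update(ensemble, y_obs, obs_var):
    """One deterministic ensemble update for a directly observed scalar:
    the Kalman gain k = P/(P+R) is computed from the ensemble variance,
    and each member is pulled toward the observation. A sketch of the
    generic ensemble step only, not the published framework."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)   # ensemble variance
    k = p / (p + obs_var)                                  # Kalman gain in (0, 1)
    return [x + k * (y_obs - x) for x in ensemble]
```

    After the update the ensemble mean moves toward the observation and the spread shrinks by a factor (1 - k), which is the mechanism that lets observations constrain the parameter fields.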

  10. Characterization and predictability of basin scale SWE distributions using ASO snow depth and SWE retrievals

    NASA Astrophysics Data System (ADS)

    Bormann, K.; Hedrick, A. R.; Marks, D. G.; Painter, T. H.

    2017-12-01

    The spatial and temporal distribution of snow water equivalent (SWE) in the mountains has been examined extensively through the use of models, in-situ networks and remote sensing techniques. However, until the Airborne Snow Observatory (http://aso.jpl.nasa.gov), our understanding of SWE dynamics has been limited by a lack of well-constrained spatial distributions of SWE in complex terrain, particularly at high elevations and at regional scales (100 km+). ASO produces comprehensive snow depth measurements and well-constrained SWE products, providing the opportunity to re-examine our current understanding of SWE distributions with a robust and rich data source. We collected spatially-distributed snow depth and SWE data from over 150 individual ASO acquisitions spanning seven basins in California during the five-year operational period of 2013-2017. For each of these acquisitions, we characterized the spatial distribution of snow depth and SWE and examined how these distributions changed with time during snowmelt. We compared these distribution patterns between each of the seven basins and, finally, examined the predictability of the SWE distributions using statistical extrapolations through both space and time. We compare and contrast these observationally-based characteristics with those from a physically-based snow model to highlight the strengths and weaknesses of the implementation of our understanding of SWE processes in the model environment. In practice, these results may be used to support or challenge our current understanding of mountain SWE dynamics and provide techniques for enhanced evaluation of high-resolution snow models that go beyond in-situ point comparisons. In application, this work may provide guidance on the potential of ASO to guide backfilling of sparse spaceborne measurements of snow depth and snow water equivalent.

  11. Struggling with Excellence in All We Do: Is the Lure of New Technology Affecting How We Process Our Members’ Information

    DTIC Science & Technology

    2016-02-01

    Approved for public release: distribution unlimited. ii Disclaimer The views expressed in this academic research paper are those of the author...is managed today is far too complex and riddled with risk. Why is a members’ information duplicated across multiple disparate databases ? To better... databases . The purpose of this paper is to provide a viable solution within a given set of constrains that the Air Force can implement. Utilizing the

  12. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices.

    PubMed

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli

    2017-09-12

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
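
    The distribution-based feature space described above can be approximated with a plain normalized histogram of RSSI samples, with technologies separated by distances between histograms. A toy sketch, not the authors' algorithm; the bin edges, dBm range, and function names are assumptions.

```python
def rssi_histogram(samples, lo=-100.0, hi=-20.0, bins=16):
    """Normalized RSSI histogram: a simple probability-distribution
    feature vector over an assumed dBm range. Out-of-range samples
    are clipped into the end bins."""
    width = (hi - lo) / bins
    counts = [0] * bins
    for s in samples:
        i = min(bins - 1, max(0, int((s - lo) / width)))
        counts[i] += 1
    n = float(len(samples))
    return [c / n for c in counts]

def l1_distance(p, q):
    """L1 distance between two histogram feature vectors."""
    return sum(abs(a - b) for a, b in zip(p, q))
```

    A bursty (bimodal) signal such as Wi-Fi produces mass near both the noise floor and the packet power level, while a continuous signal concentrates in one bin, so their histograms sit far apart under the L1 distance.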

  13. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices

    PubMed Central

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; De Poorter, Eli

    2017-01-01

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict condition of sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals’ modulation schemes and medium access mechanisms, and RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or directly using RSSI’s probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for recognition of wideband technologies on constrained devices in the context of dynamic spectrum access. PMID:28895879

  14. Nature and provenance of the Beishan Complex, southernmost Central Asian Orogenic Belt

    NASA Astrophysics Data System (ADS)

    Zheng, Rongguo; Li, Jinyi; Xiao, Wenjiao; Zhang, Jin

    2018-03-01

    The ages and origins of metasedimentary rocks, which were previously mapped as Precambrian, are critical for rebuilding the orogenic process and better understanding Phanerozoic continental growth in the Central Asian Orogenic Belt (CAOB). The Beishan Complex is widely distributed in the southern Beishan Orogenic Collage, southernmost CAOB, and its age and tectonic affinity remain controversial. The Beishan Complex has previously been interpreted as fragments drifted from the Tarim Craton, as a Neoproterozoic block, or as a Phanerozoic accretionary complex. In this study, we employ detrital zircon age spectra to constrain the ages and provenances of metasedimentary sequences of the Beishan Complex in the Chuanshanxun area. The metasedimentary rocks here are dominated by zircons with Paleoproterozoic-Mesoproterozoic ages (1160-2070 Ma), and yield two peak ages at 1454 and 1760 Ma. One sample yielded a middle Permian peak age (269 Ma), which suggests that the metasedimentary sequences were deposited in the late Paleozoic. The granitoid and dioritic dykes intruding the metasedimentary sequences exhibit zircon U-Pb ages of 268 and 261 Ma, respectively, which constrain the minimum depositional age of the metasedimentary sequences. Zircon U-Pb ages of amphibolite (274 and 216 Ma) indicate that they might have been affected by multi-stage metamorphic events. The Beishan Complex is thus not a fragment drifted from the Tarim Block or Dunhuang Block, and none of the cratons or blocks surrounding the Beishan Orogenic Collage was the sole source of material for the Beishan Complex, given their clearly different age spectra. Instead, 1.4 Ga marginal accretionary zones of the Columbia supercontinent might have existed in the southern CAOB and may have provided the main source materials for the sedimentary sequences of the Beishan Complex.

  15. Distributed Coding/Decoding Complexity in Video Sensor Networks

    PubMed Central

    Cordeiro, Paulo J.; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality. PMID:22736972

  16. Distributed coding/decoding complexity in video sensor networks.

    PubMed

    Cordeiro, Paulo J; Assunção, Pedro

    2012-01-01

    Video Sensor Networks (VSNs) are recent communication infrastructures used to capture and transmit dense visual information from an application context. In such large scale environments which include video coding, transmission and display/storage, there are several open problems to overcome in practical implementations. This paper addresses the most relevant challenges posed by VSNs, namely stringent bandwidth usage and processing time/power constraints. In particular, the paper proposes a novel VSN architecture where large sets of visual sensors with embedded processors are used for compression and transmission of coded streams to gateways, which in turn transrate the incoming streams and adapt them to the variable complexity requirements of both the sensor encoders and end-user decoder terminals. Such gateways provide real-time transcoding functionalities for bandwidth adaptation and coding/decoding complexity distribution by transferring the most complex video encoding/decoding tasks to the transcoding gateway at the expense of a limited increase in bit rate. Then, a method to reduce the decoding complexity, suitable for system-on-chip implementation, is proposed to operate at the transcoding gateway whenever decoders with constrained resources are targeted. The results show that the proposed method achieves good performance and its inclusion into the VSN infrastructure provides an additional level of complexity control functionality.

  17. Solution of a Complex Least Squares Problem with Constrained Phase.

    PubMed

    Bydder, Mark

    2010-12-30

    The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
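
    The problem above can be made concrete. Note the substitution: the paper describes a direct (closed-form) method, whereas this sketch uses a simple grid search over the common phase, solving a real least squares problem at each trial phase via the real part of the rotated normal equations; the function name and grid resolution are assumptions, not the paper's method.

```python
import numpy as np

def phase_constrained_lstsq(A, b, n_phases=360):
    """Least squares solution of A x = b with every element of x sharing
    one phase: x = u * exp(i*phi), u real (signs in u cover phi + pi).
    For each trial phase, the optimal real u solves
    Re(A^H A) u = Re(exp(-i*phi) A^H b); the best (phi, u) pair over a
    grid on [0, pi) is returned. Illustrative grid version only."""
    M = np.real(A.conj().T @ A)              # real symmetric normal matrix
    best_x, best_r = None, np.inf
    for phi in np.linspace(0.0, np.pi, n_phases, endpoint=False):
        rhs = np.real(np.exp(-1j * phi) * (A.conj().T @ b))
        u = np.linalg.solve(M, rhs)
        x = u * np.exp(1j * phi)
        r = np.linalg.norm(A @ x - b)
        if r < best_r:
            best_x, best_r = x, r
    return best_x
```

    When b is generated exactly from a common-phase x and the true phase lies on the grid, the solution is recovered exactly; otherwise the answer is only as good as the grid, which is why a direct closed-form solution is preferable in practice.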

  18. The Trend Odds Model for Ordinal Data

    PubMed Central

    Capuano, Ana W.; Dawson, Jeffrey D.

    2013-01-01

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values (Peterson and Harrell, 1990). We consider a trend odds version of this constrained model, where the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc Nlmixed, and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical dataset is used to illustrate the interpretation of the trend odds model, and we apply this model to a Swine Influenza example where the proportional odds assumption appears to be violated. PMID:23225520

  19. The trend odds model for ordinal data.

    PubMed

    Capuano, Ana W; Dawson, Jeffrey D

    2013-06-15

    Ordinal data appear in a wide variety of scientific fields. These data are often analyzed using ordinal logistic regression models that assume proportional odds. When this assumption is not met, it may be possible to capture the lack of proportionality using a constrained structural relationship between the odds and the cut-points of the ordinal values. We consider a trend odds version of this constrained model, wherein the odds parameter increases or decreases in a monotonic manner across the cut-points. We demonstrate algebraically and graphically how this model is related to latent logistic, normal, and exponential distributions. In particular, we find that scale changes in these potential latent distributions are consistent with the trend odds assumption, with the logistic and exponential distributions having odds that increase in a linear or nearly linear fashion. We show how to fit this model using SAS Proc NLMIXED and perform simulations under proportional odds and trend odds processes. We find that the added complexity of the trend odds model gives improved power over the proportional odds model when there are moderate to severe departures from proportionality. A hypothetical data set is used to illustrate the interpretation of the trend odds model, and we apply this model to a swine influenza example wherein the proportional odds assumption appears to be violated. Copyright © 2012 John Wiley & Sons, Ltd.
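
    The model family discussed in the two records above can be sketched numerically, under an assumed sign convention: a cumulative logit logit P(Y <= j) = alpha_j - beta_j * x with beta_j = beta0 + gamma * j changing monotonically across cut-points. Setting gamma = 0 recovers the proportional odds model. Names are invented; this is not the authors' SAS NLMIXED code.

```python
import math

def cumulative_probs(alphas, beta0, gamma, x):
    """Category probabilities under a trend odds cumulative logit model:
    the odds parameter varies linearly across cut-points j,
    beta_j = beta0 + gamma * j. With gamma = 0 this is proportional odds.
    alphas are the K-1 cut-points for K ordered categories."""
    def logistic(z):
        return 1.0 / (1.0 + math.exp(-z))
    cums = [logistic(a - (beta0 + gamma * j) * x)
            for j, a in enumerate(alphas)]
    cums.append(1.0)
    # Differences of cumulative probabilities give category probabilities
    return [cums[0]] + [cums[j] - cums[j - 1] for j in range(1, len(cums))]
```

    The parameters must keep the cumulative probabilities nondecreasing for the category probabilities to be valid, which is one face of the constrained structural relationship the model imposes.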

  20. Effects of Initial Particle Distribution on an Energetic Dispersal of Particles

    NASA Astrophysics Data System (ADS)

    Rollin, Bertrand; Ouellet, Frederick; Koneru, Rahul; Garno, Joshua; Durant, Bradford

    2017-11-01

    Accurate prediction of the late-time solid particle cloud distribution following an explosive dispersal of particles is an extremely challenging problem for compressible multiphase flow simulations. The source of this difficulty is twofold: (i) the complex sequence of events taking place. Indeed, as the blast wave crosses the surrounding layer of particles, compaction occurs shortly before particles disperse radially at high speed. Then, during the dispersion phase, complex multiphase interactions occur between particles and detonation products. (ii) Precise characterization of the explosive and particle distribution is virtually impossible. In this numerical experiment, we focus on the sensitivity of late-time particle cloud distributions to carefully designed initial distributions, assuming the explosive is well described. Using point-particle simulations, we study the case of a bed of glass particles surrounding an explosive. Constraining our simulations to relatively low initial volume fractions to prevent reaching the close-packing limit, we seek to describe qualitatively and quantitatively the late-time dependency of a solid particle cloud on its distribution before the energy release of the explosive. This work was supported by the U.S. DoE, NNSA, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA0002378.

  1. Estimating the Effects of Damping Treatments on the Vibration of Complex Structures

    DTIC Science & Technology

    2012-09-26

    26 4.3 Literature review 26 4.3.1 CLD Theory 26 4.3.2 Temperature Profiling 28 4.4 Constrained Layer Damping Analysis 29 4.5 Results 35...Coordinate systems and length scales are noted. Constraining layer, viscoelastic layer and base layer pertain to the nomenclature used through CLD ...for vibrational damping 4.1 Introduction Constrained layer damping ( CLD ) treatment systems are widely used in complex structures to dissipate

  2. Inferring Complex Aquifer Structure from the Combined Use of Hydraulic and Groundwater Age Data in Groundwater Flow Models

    NASA Astrophysics Data System (ADS)

    Leray, S.; De Dreuzy, J.; Aquilina, L.; Labasque, T.; Bour, O.

    2011-12-01

    While groundwater age data have been classically used to determine aquifer hydraulic properties such as recharge and/or porosity, we show here that they contain more valuable information on aquifer structure in complex hard rock contexts. Our numerical modeling study is based on the developed crystalline aquifer of Ploemeur (Brittany, France) characterized by two transmissive structures: the interface between an intruding granite and overlying micaschists dipping moderately to the North and a steeply dipping fault striking North 20. We explore the definition and evolution of the supplying volume to the pumping well of the Ploemeur medium under steady-state conditions. We first show that, with the help of general observations on the site, hydraulic data, such as piezometric levels or transmissivity derived from pumping tests, can be used to refine recharge spatial distribution and rate and bulk aquifer transmissivity. We then model the effect of aquifer porosity and thickness on environmental tracer concentrations. Porosity gives the range of the mean residence time, shifting the probability density function of residence times along the time axis whereas aquifer thickness affects the shape of the residence times distribution. It also modifies the mean concentration of CFCs taken as the convolution product of the atmospheric tracer concentration with the probability density function of residence times. Because porosity may be estimated by petrologic and gravimetric investigations, the thickness of the aquifer can be advantageously constrained by groundwater ages and then compared to other results from inversion of geophysical data. More generally, we advocate using groundwater age data at the aquifer discharge locations to constrain complex aquifer structures when recharge and porosity can be fixed by other means.

  3. What do we gain from simplicity versus complexity in species distribution models?

    USGS Publications Warehouse

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘underfit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘overfit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting with overfitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.

  4. Use of Traffic Intent Information by Autonomous Aircraft in Constrained Operations

    NASA Technical Reports Server (NTRS)

    Wing, David J.; Barmore, Bryan E.; Krishnamurthy, Karthik

    2002-01-01

    This paper presents findings of a research study designed to provide insight into the issue of intent information exchange in constrained en-route air-traffic operations and its effect on pilot decision-making and flight performance. The piloted simulation was conducted in the Air Traffic Operations Laboratory at the NASA Langley Research Center. Two operational modes for autonomous flight management were compared under conditions of low and high operational complexity (traffic and airspace hazard density). The tactical mode was characterized primarily by the use of traffic state data for conflict detection and resolution and a manual approach to meeting operational constraints. The strategic mode involved the combined use of traffic state and intent information, provided the pilot an additional level of alerting, and allowed an automated approach to meeting operational constraints. Operational constraints applied in the experiment included separation assurance, schedule adherence, airspace hazard avoidance, flight efficiency, and passenger comfort. The strategic operational mode was found to be effective in reducing unnecessary maneuvering in conflict situations where the intruder's intended maneuvers would resolve the conflict. Conditions of high operational complexity and vertical maneuvering resulted in increased proliferation of conflicts, but both operational modes exhibited characteristics of stability based on observed conflict proliferation rates of less than 30 percent. Scenario case studies illustrated the need for maneuver flight restrictions to prevent the creation of new conflicts through maneuvering and the need for an improved user interface design that appropriately focuses the pilot's attention on conflict prevention information. Pilot real-time assessment of maximum workload indicated minimal sensitivity to operational complexity, providing further evidence that pilot workload is not the limiting factor for feasibility of an en-route distributed traffic management system, even under highly constrained conditions.

  5. Constraining the Abundances of Complex Organics in the Inner Regions of Solar-Type Protostars

    NASA Astrophysics Data System (ADS)

    López-Sepulcre, A.; Taquet, V.; Ceccarelli, C.; Neri, R.; Kahane, C.; Charnley, S. B.

    2015-12-01

    We present arcsecond-resolution observations, obtained with the IRAM Plateau de Bure interferometer, of multiple complex organic molecules (COMs) in two hot corino protostars, IRAS 2A and IRAS 4A, in the NGC 1333 star-forming region. The distribution of the line emission is very compact, indicating that the COM emission is mostly concentrated in the inner hot corino regions. A comparison of the COM abundances with astrochemical models favours a gas-phase formation route for CH3OCH3, and grain-surface formation for C2H5OH, C2H5CN, and HCOCH2OH. The high abundance of methyl formate (HCOOCH3) remains underpredicted by an order of magnitude.

  6. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE PAGES

    Fierce, Laura; McGraw, Robert L.

    2017-07-26

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.
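The moment-constrained representation can be illustrated with a minimal sketch. Here the abscissas are fixed in advance, so the weights follow from a plain linear solve; the study's actual scheme instead uses linear programming with an entropy-inspired cost to select an optimal sparse node set. All numbers below are invented for illustration.

```python
import numpy as np

# Hedged sketch of a moment-constrained quadrature representation of an
# aerosol size distribution: four fixed nodes whose weights reproduce
# the first four moments of a sampled lognormal population exactly.

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)   # "true" particle sizes
moments = np.array([np.mean(x**k) for k in range(4)])  # m0..m3, per particle

nodes = np.array([0.3, 0.8, 1.5, 3.0])                 # fixed abscissas (assumed)
A = np.vander(nodes, 4, increasing=True).T             # A[k, i] = nodes[i]**k
w = np.linalg.solve(A, moments)                        # quadrature weights
```

The resulting four-point set reproduces the constrained moments exactly and can stand in for the full population in CCN-activity integrals; with well-placed nodes the weights come out nonnegative, as a valid quadrature requires.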

  7. Multivariate quadrature for representing cloud condensation nuclei activity of aerosol populations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fierce, Laura; McGraw, Robert L.

    Sparse representations of atmospheric aerosols are needed for efficient regional- and global-scale chemical transport models. Here we introduce a new framework for representing aerosol distributions, based on the quadrature method of moments. Given a set of moment constraints, we show how linear programming, combined with an entropy-inspired cost function, can be used to construct optimized quadrature representations of aerosol distributions. The sparse representations derived from this approach accurately reproduce cloud condensation nuclei (CCN) activity for realistically complex distributions simulated by a particle-resolved model. Additionally, the linear programming techniques described in this study can be used to bound key aerosol properties, such as the number concentration of CCN. Unlike the commonly used sparse representations, such as modal and sectional schemes, the maximum-entropy approach described here is not constrained to pre-determined size bins or assumed distribution shapes. This study is a first step toward a particle-based aerosol scheme that will track multivariate aerosol distributions with sufficient computational efficiency for large-scale simulations.

  8. Taking the Pulse of Plants

    NASA Astrophysics Data System (ADS)

    Jensen, Kaare H.; Beecher, Sierra; Holbrook, N. Michele; Knoblauch, Michael

    2014-11-01

    Many biological systems use complex networks of vascular conduits to distribute energy over great distances. Examples include sugar transport in the phloem tissue of vascular plants and cytoplasmic streaming in some slime molds. Detailed knowledge of transport patterns in these systems is important for our fundamental understanding of energy distribution during development and for engineering of more efficient crops. Current techniques for quantifying transport in these microfluidic systems, however, only allow for the determination of either the flow speed or the concentration of material. Here we demonstrate a new method, based on confocal microscopy, which allows us to simultaneously determine velocity and solute concentration by tracking the dispersion of a tracer dye. We attempt to rationalize the observed transport patterns through consideration of constrained optimization problems.

  9. Learning Multirobot Hose Transportation and Deployment by Distributed Round-Robin Q-Learning.

    PubMed

    Fernandez-Gauna, Borja; Etxeberria-Agiriano, Ismael; Graña, Manuel

    2015-01-01

    Multi-Agent Reinforcement Learning (MARL) algorithms face two main difficulties: the curse of dimensionality, and environment non-stationarity due to the independent learning processes carried out by the agents concurrently. In this paper we formalize and prove the convergence of a Distributed Round-Robin Q-Learning (D-RR-QL) algorithm for cooperative systems. The computational complexity of this algorithm increases linearly with the number of agents. Moreover, it eliminates environment non-stationarity by carrying out round-robin scheduling of action selection and execution. This learning scheme allows the implementation of Modular State-Action Vetoes (MSAV) in cooperative multi-agent systems, which speeds up learning convergence in over-constrained systems by vetoing state-action pairs which lead to undesired termination states (UTS) in the relevant state-action subspace. Each agent's local state-action value function learning is an independent process, including the MSAV policies. Coordination of locally optimal policies to obtain the globally optimal joint policy is achieved by a greedy selection procedure using message passing. We show that D-RR-QL improves over state-of-the-art approaches, such as Distributed Q-Learning, Team Q-Learning and Coordinated Reinforcement Learning, in a paradigmatic Linked Multi-Component Robotic System (L-MCRS) control problem: the hose transportation task. L-MCRS are over-constrained systems with many UTS induced by the interaction of the passive linking element and the active mobile robots.

  10. Sparsity-promoting inversion for modeling of irregular volcanic deformation source

    NASA Astrophysics Data System (ADS)

    Zhai, G.; Shirzaei, M.

    2016-12-01

    Kīlauea volcano, Hawai'i Island, has a complex magmatic system. Nonetheless, kinematic models of the summit reservoir have so far been limited to first-order analytical solutions with pre-determined geometry. To investigate the complex geometry and kinematics of the summit reservoir, we apply a multitrack multitemporal wavelet-based InSAR (Interferometric Synthetic Aperture Radar) algorithm and a geometry-free time-dependent modeling scheme considering a superposition of point centers of dilatation (PCDs). Applying Principal Component Analysis (PCA) to the time-dependent source model, six spatially independent deformation zones (i.e., reservoirs) are identified, whose locations are consistent with previous studies. The time-dependence of the model also allows identifying periods of correlated or anti-correlated behavior between reservoirs. Hence, we suggest that the reservoirs are likely connected and form a complex magmatic reservoir [Zhai and Shirzaei, 2016]. To obtain a physically meaningful representation of the complex reservoir, we devise a new sparsity-promoting modeling scheme assuming active magma bodies are well-localized melt accumulations (i.e., outliers in the background crust). The major steps are to invert surface deformation data using a hybrid L1- and L2-norm regularization approach to solve for a sparse volume change distribution, and then to implement a BEM-based method to solve for the opening distribution on a triangular mesh representing the complex reservoir. Using this approach, we are able to constrain the internal excess pressure of a magma body with irregular geometry, satisfying a uniform-pressure boundary condition on the surface of the magma chamber. The inversion method with sparsity constraint is tested using five synthetic source geometries, including a torus, a prolate ellipsoid, and a sphere, as well as horizontal and vertical L-shaped bodies. The results show that source dimension, depth and shape are well recovered. Afterward, we apply this modeling scheme to deformation observed at the Kīlauea summit to constrain the magmatic source geometry and revise the kinematics of Kīlauea's shallow plumbing system. Such a model is valuable for understanding the physical processes in a magmatic reservoir, and the method can readily be applied to other volcanic settings.
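The hybrid L1/L2-norm inversion step might be sketched as an iterative soft-thresholding (ISTA) loop. This is a generic elastic-net-style solver on a random synthetic Green's-function matrix, not the authors' implementation; every name and number is illustrative.

```python
import numpy as np

# Generic sparsity-promoting inversion sketch (not the authors' code):
# solve  min_m  0.5*||G m - d||^2 + lam2*||m||^2 + lam1*||m||_1
# by ISTA: gradient step on the smooth part, then soft thresholding.
# G is a random stand-in for Green's functions mapping PCD volume
# changes to surface displacements.

rng = np.random.default_rng(2)
n_data, n_src = 80, 40
G = rng.standard_normal((n_data, n_src))
m_true = np.zeros(n_src)
m_true[[5, 17]] = [1.5, -2.0]                  # two sparse "magma bodies"
d = G @ m_true + 0.01 * rng.standard_normal(n_data)

lam1, lam2 = 0.5, 0.1
L = np.linalg.norm(G, 2) ** 2 + 2 * lam2       # Lipschitz constant of the smooth part
m = np.zeros(n_src)
for _ in range(500):
    grad = G.T @ (G @ m - d) + 2 * lam2 * m
    z = m - grad / L
    m = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)   # soft threshold

support = np.flatnonzero(np.abs(m) > 0.1)      # recovered source locations
```

The L1 term drives most source elements exactly to zero, so only the well-localized melt accumulations survive, which is the "outliers in the background crust" assumption in code form.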

  11. An inexact chance-constrained programming model for water quality management in Binhai New Area of Tianjin, China.

    PubMed

    Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R

    2011-04-15

    In this study, an inexact chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. This method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and the applicability of the modeling process is thus highly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used for generating decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints on the water environmental capacity for pollutants. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges.
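The core CCP transformation can be illustrated with a toy numeric example: a probabilistic capacity constraint is tightened to a deterministic bound at each admissible risk level. The normal distribution and pollutant-load figures below are assumptions for illustration, not values from the Tianjin case study.

```python
from statistics import NormalDist

# Toy CCP deterministic equivalent (numbers assumed, not from the study):
# Pr{load <= capacity} >= 1 - alpha, with capacity ~ Normal(mu, sigma),
# tightens to the deterministic bound  load <= alpha-quantile of capacity.

mu, sigma = 120.0, 15.0                  # water environmental capacity (t/yr, assumed)
for alpha in (0.01, 0.05, 0.10):         # admissible risks of constraint violation
    bound = NormalDist(mu, sigma).inv_cdf(alpha)
    print(f"alpha={alpha:4.2f}: allowable load <= {bound:6.1f} t/yr")
```

Lower alpha (higher required reliability) yields a tighter allowable load, which is exactly the tradeoff between system benefit and constraint-violation risk described in the abstract.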

  12. Macroscopically constrained Wang-Landau method for systems with multiple order parameters and its application to drawing complex phase diagrams

    NASA Astrophysics Data System (ADS)

    Chan, C. H.; Brown, G.; Rikvold, P. A.

    2017-05-01

    A generalized approach to Wang-Landau simulations, macroscopically constrained Wang-Landau, is proposed to simulate the density of states of a system with multiple macroscopic order parameters. The method breaks a multidimensional random-walk process in phase space into many separate, one-dimensional random-walk processes in well-defined subspaces. Each of these random walks is constrained to a different set of values of the macroscopic order parameters. When the multivariable density of states is obtained for one set of values of fieldlike model parameters, the density of states for any other values of these parameters can be obtained by a simple transformation of the total system energy. All thermodynamic quantities of the system can then be rapidly calculated at any point in the phase diagram. We demonstrate how to use the multivariable density of states to draw the phase diagram, as well as order-parameter probability distributions at specific phase points, for a model spin-crossover material: an antiferromagnetic Ising model with ferromagnetic long-range interactions. The fieldlike parameters in this model are an effective magnetic field and the strength of the long-range interaction.
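A minimal, unconstrained Wang-Landau walk on a tiny Ising ring illustrates the flat-histogram mechanics that the macroscopically constrained method runs separately in each order-parameter subspace. The system size, flatness criterion, and refinement schedule below are illustrative choices, not those of the study.

```python
import math
import random

# Minimal Wang-Landau sketch on an 8-spin Ising ring (single order
# parameter, unconstrained). Exact densities of states for comparison:
# g(-8) = 2, g(-4) = 56, g(0) = 140, g(4) = 56, g(8) = 2.

N = 8
levels = [-8, -4, 0, 4, 8]               # possible ring energies
lng = {E: 0.0 for E in levels}           # running estimate of ln g(E)
hist = {E: 0 for E in levels}
s = [1] * N
E = -N                                   # all-up state has energy -8
f = 1.0                                  # ln-modification factor

random.seed(3)
while f > 1e-5:
    for _ in range(10000):
        i = random.randrange(N)
        dE = 2 * s[i] * (s[i - 1] + s[(i + 1) % N])
        dlng = lng[E] - lng[E + dE]      # accept with prob min(1, g(E)/g(E'))
        if dlng >= 0 or random.random() < math.exp(dlng):
            s[i], E = -s[i], E + dE
        lng[E] += f                      # update at the current energy
        hist[E] += 1
    if min(hist.values()) > 0.8 * sum(hist.values()) / len(hist):
        f /= 2                           # histogram flat enough: refine
        hist = {E: 0 for E in levels}

ratio = math.exp(lng[0] - lng[-8])       # estimate of g(0)/g(-8) = 70
```

In the constrained variant, a walk like this runs in each fixed-order-parameter subspace, and the resulting one-dimensional densities of states are assembled into the multivariable density of states.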

  13. A synoptic view of the Third Uniform California Earthquake Rupture Forecast (UCERF3)

    USGS Publications Warehouse

    Field, Edward; Jordan, Thomas H.; Page, Morgan T.; Milner, Kevin R.; Shaw, Bruce E.; Dawson, Timothy E.; Biasi, Glenn; Parsons, Thomas E.; Hardebeck, Jeanne L.; Michael, Andrew J.; Weldon, Ray; Powers, Peter; Johnson, Kaj M.; Zeng, Yuehua; Bird, Peter; Felzer, Karen; van der Elst, Nicholas; Madden, Christopher; Arrowsmith, Ramon; Werner, Maximilian J.; Thatcher, Wayne R.

    2017-01-01

    Probabilistic forecasting of earthquake‐producing fault ruptures informs all major decisions aimed at reducing seismic risk and improving earthquake resilience. Earthquake forecasting models rely on two scales of hazard evolution: long‐term (decades to centuries) probabilities of fault rupture, constrained by stress renewal statistics, and short‐term (hours to years) probabilities of distributed seismicity, constrained by earthquake‐clustering statistics. Comprehensive datasets on both hazard scales have been integrated into the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3). UCERF3 is the first model to provide self‐consistent rupture probabilities over forecasting intervals from less than an hour to more than a century, and it is the first capable of evaluating the short‐term hazards that result from multievent sequences of complex faulting. This article gives an overview of UCERF3, illustrates the short‐term probabilities with aftershock scenarios, and draws some valuable scientific conclusions from the modeling results. In particular, seismic, geologic, and geodetic data, when combined in the UCERF3 framework, reject two types of fault‐based models: long‐term forecasts constrained to have local Gutenberg–Richter scaling, and short‐term forecasts that lack stress relaxation by elastic rebound.

  14. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    NASA Astrophysics Data System (ADS)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information comprises parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion-matrix operations. Moreover, its result can clearly delineate sources in their entirety, even those with complex and irregular distributions. We therefore use the a priori information extracted from probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is preserved in their probability tomography results. We therefore combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result from which a priori information is extracted, and then incorporate that information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples inverted with and without the a priori information extracted from the probability tomography results show that the former are more concentrated and resolve the source body edges better. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well.
    References: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).

  15. Dynamic rupture simulation of the 2016 Mw 7.8 Kaikoura (New Zealand) earthquake: Is spontaneous multi-fault rupture expected?

    NASA Astrophysics Data System (ADS)

    Ando, R.; Kaneko, Y.

    2017-12-01

    The coseismic rupture of the 2016 Kaikoura earthquake propagated over a distance of 150 km along the NE-SW striking fault system in the northern South Island of New Zealand. The analysis of InSAR, GPS and field observations (Hamling et al., 2017) revealed that most of the rupture occurred along previously mapped active faults, involving more than seven major fault segments. These fault segments, mostly dipping to the northwest, are distributed in a quite complex manner, manifested by fault branching and step-over structures. Back-projection rupture imaging shows that the rupture appears to jump between three sub-parallel fault segments in sequence from south to north (Kaiser et al., 2017). The rupture appears to have terminated on the Needles fault in Cook Strait. One of the main questions is whether this multi-fault rupture can be explained naturally on a physical basis. In order to understand the conditions responsible for the complex rupture process, we conduct fully dynamic rupture simulations that account for 3-D non-planar fault geometry embedded in an elastic half-space. The fault geometry is constrained by previous InSAR observations and geological inferences. The regional stress field is constrained by the result of stress tensor inversion based on focal mechanisms (Balfour et al., 2005). The fault is governed by a relatively simple, slip-weakening friction law. For simplicity, the frictional parameters are uniformly distributed, as there is no direct estimate of them except for a shallow portion of the Kekerengu fault (Kaneko et al., 2017). Our simulations show that the rupture can indeed propagate through the complex fault system once it is nucleated at the southernmost segment. The simulated slip distribution is quite heterogeneous, reflecting the non-planar fault geometry, fault branching and step-over structures. We find that optimally oriented faults exhibit larger slip, which is consistent with the slip model of Hamling et al. (2017). We conclude that the first-order characteristics of this event may be interpreted as an effect of irregularity in the fault geometry.

  16. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range.

    PubMed

    He, Chenlong; Feng, Zuren; Ren, Zhigang

    2018-02-03

    For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham's Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms used to solve problems of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm.
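For intuition, a limited Voronoi cell can be obtained by clipping the sensor's sensing-range box with the perpendicular bisectors of its neighbors. The sketch below does plain half-plane clipping rather than the Boundary Scan ordering, so it illustrates the geometry but not the BS algorithm's lower complexity; all coordinates are invented.

```python
# Geometric sketch: a sensor's limited Voronoi cell as its sensing-range
# box clipped by the perpendicular bisectors of its neighbors (plain
# half-plane clipping, not the Boundary Scan algorithm itself).

def clip(poly, a, b, c):
    """Keep the part of convex polygon poly satisfying a*x + b*y <= c."""
    out = []
    for i, (x1, y1) in enumerate(poly):
        x2, y2 = poly[(i + 1) % len(poly)]
        in1, in2 = a * x1 + b * y1 <= c, a * x2 + b * y2 <= c
        if in1:
            out.append((x1, y1))
        if in1 != in2:                   # edge crosses the bisector
            t = (c - a * x1 - b * y1) / (a * (x2 - x1) + b * (y2 - y1))
            out.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return out

def limited_voronoi_cell(p, neighbors, r):
    px, py = p
    cell = [(px - r, py - r), (px + r, py - r), (px + r, py + r), (px - r, py + r)]
    for qx, qy in neighbors:
        # points nearer p than q: (q - p) . x <= (|q|^2 - |p|^2) / 2
        cell = clip(cell, qx - px, qy - py,
                    (qx * qx - px * px + qy * qy - py * py) / 2.0)
    return cell

cell = limited_voronoi_cell((0.0, 0.0), [(2.0, 0.0), (0.0, 2.0)], r=2.0)
```

Each sensor only needs its own position and those of its in-range neighbors, which is what makes a distributed realization possible.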

  17. Distributed Algorithm for Voronoi Partition of Wireless Sensor Networks with a Limited Sensing Range

    PubMed Central

    Feng, Zuren; Ren, Zhigang

    2018-01-01

    For Wireless Sensor Networks (WSNs), the Voronoi partition of a region is a challenging problem owing to the limited sensing ability of each sensor and the distributed organization of the network. In this paper, an algorithm is proposed for each sensor having a limited sensing range to compute its limited Voronoi cell autonomously, so that the limited Voronoi partition of the entire WSN is generated in a distributed manner. Inspired by Graham’s Scan (GS) algorithm used to compute the convex hull of a point set, the limited Voronoi cell of each sensor is obtained by sequentially scanning two consecutive bisectors between the sensor and its neighbors. The proposed algorithm, called the Boundary Scan (BS) algorithm, has a lower computational complexity than the existing Range-Constrained Voronoi Cell (RCVC) algorithm and reaches the lower bound of the computational complexity of algorithms used to solve problems of this kind. Moreover, it also improves the time efficiency of a key step in the Adjust-Sensing-Radius (ASR) algorithm used to compute the exact Voronoi cell. Extensive numerical simulations are performed to demonstrate the correctness and effectiveness of the BS algorithm. The distributed realization of the BS algorithm combined with a localization algorithm in WSNs is used to justify the WSN nature of the proposed algorithm. PMID:29401649

  18. Distinguishing remobilized ash from erupted volcanic plumes using space-borne multi-angle imaging.

    PubMed

    Flower, Verity J B; Kahn, Ralph A

    2017-10-28

    Volcanic systems comprise a complex combination of ongoing eruptive activity and secondary hazards, such as remobilized ash plumes. Similarities in the visual characteristics of remobilized and erupted plumes, as imaged by satellite-based remote sensing, complicate the accurate classification of these events. The stereo imaging capabilities of the Multi-angle Imaging SpectroRadiometer (MISR) were used to determine the altitude and distribution of suspended particles. Remobilized ash shows distinct dispersion, with particles distributed within ~1.5 km of the surface. Particle transport is consistently constrained by local topography, limiting dispersion pathways downwind. The MISR Research Aerosol (RA) retrieval algorithm was used to assess plume particle microphysical properties. Remobilized ash plumes displayed a dominance of large particles with consistent absorption and angularity properties, distinct from emitted plumes. The combination of vertical distribution, topographic control, and particle microphysical properties makes it possible to distinguish remobilized ash flows from eruptive plumes, globally.

  19. A functional model of sensemaking in a neurocognitive architecture.

    PubMed

    Lebiere, Christian; Pirolli, Peter; Thomson, Robert; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment.

  20. Spot distribution and fast surface evolution on Vega

    NASA Astrophysics Data System (ADS)

    Petit, P.; Hébrard, E. M.; Böhm, T.; Folsom, C. P.; Lignières, F.

    2017-11-01

    Spectral signatures of surface spots were recently discovered from high cadence observations of the A star Vega. We aim at constraining the surface distribution of these photospheric inhomogeneities and investigating a possible short-term evolution of the spot pattern. Using data collected over five consecutive nights, we employ the Doppler imaging method to reconstruct three different maps of the stellar surface, from three consecutive subsets of the whole time series. The surface maps display a complex distribution of dark and bright spots, covering most of the visible fraction of the stellar surface. A number of surface features are consistently recovered in all three maps, but other features seem to evolve over the time span of observations, suggesting that fast changes can affect the surface of Vega within a few days at most. The short-term evolution is observed as emergence or disappearance of individual spots, and may also show up as zonal flows, with low- and high-latitude belts rotating faster than intermediate latitudes. It is tempting to relate the surface brightness activity to the complex magnetic field topology previously reconstructed for Vega, although strictly simultaneous brightness and magnetic maps will be necessary to assess this potential link.

  1. A Functional Model of Sensemaking in a Neurocognitive Architecture

    PubMed Central

    Lebiere, Christian; Paik, Jaehyon; Rutledge-Taylor, Matthew; Staszewski, James; Anderson, John R.

    2013-01-01

    Sensemaking is the active process of constructing a meaningful representation (i.e., making sense) of some complex aspect of the world. In relation to intelligence analysis, sensemaking is the act of finding and interpreting relevant facts amongst the sea of incoming reports, images, and intelligence. We present a cognitive model of core information-foraging and hypothesis-updating sensemaking processes applied to complex spatial probability estimation and decision-making tasks. While the model was developed in a hybrid symbolic-statistical cognitive architecture, its correspondence to neural frameworks in terms of both structure and mechanisms provided a direct bridge between rational and neural levels of description. Compared against data from two participant groups, the model correctly predicted both the presence and degree of four biases: confirmation, anchoring and adjustment, representativeness, and probability matching. It also favorably predicted human performance in generating probability distributions across categories, assigning resources based on these distributions, and selecting relevant features given a prior probability distribution. This model provides a constrained theoretical framework describing cognitive biases as arising from three interacting factors: the structure of the task environment, the mechanisms and limitations of the cognitive architecture, and the use of strategies to adapt to the dual constraints of cognition and the environment. PMID:24302930

  2. Determining Size Distribution at the Phoenix Landing Site

    NASA Astrophysics Data System (ADS)

    Mason, E. L.; Lemmon, M. T.

    2016-12-01

    Dust aerosols play a crucial role in determining atmospheric radiative heating on Mars through absorption and scattering of sunlight. How dust scatters and absorbs light is dependent on size, shape, composition, and quantity. Optical properties of the dust have been well constrained in the visible and near infrared wavelengths using various methods [Wolff et al. 2009, Lemmon et al. 2004]. In addition, the dust is nonspherical, and irregular shapes have been shown to work well in determining effective particle size [Pollack et al. 1977]. Variance of the size distribution is less constrained but constitutes an important parameter in fully describing the dust. The Phoenix Lander's Surface Stereo Imager performed several cross-sky brightness surveys to determine the size distribution and scattering properties of dust in the wavelength range of 400 to 1000 nm. In combination with a single-layer radiative transfer model, these surveys can be used to help constrain variance of the size distribution. We will present a discussion of seasonal size distribution as it pertains to the Phoenix landing site.
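    The effective radius and effective variance that such retrievals target have closed forms for a log-normal number size distribution (these are standard textbook moment relations, not results specific to this study; the numerical values below are illustrative, not Phoenix retrievals):

```python
import math

# For a log-normal number size distribution n(r) with median radius r_g and
# width sigma (standard deviation of ln r), the moments obey
#   <r^k> = r_g^k * exp(k^2 * sigma^2 / 2),
# giving standard closed forms for the effective radius and effective variance
# commonly used to summarize aerosol size distributions.

def effective_radius(r_g, sigma):
    # r_eff = <r^3> / <r^2> = r_g * exp(2.5 * sigma^2)
    return r_g * math.exp(2.5 * sigma ** 2)

def effective_variance(sigma):
    # v_eff = <r^4><r^2> / <r^3>^2 - 1 = exp(sigma^2) - 1
    return math.exp(sigma ** 2) - 1.0

# Hypothetical dust-like parameters (microns); not values from the paper.
print(effective_radius(1.5, 0.4), effective_variance(0.4))
```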

  3. BARI+: A Biometric Based Distributed Key Management Approach for Wireless Body Area Networks

    PubMed Central

    Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo

    2010-01-01

    Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain. PMID:22319333

  4. Marine microorganisms and global nutrient cycles

    NASA Astrophysics Data System (ADS)

    Arrigo, Kevin R.

    2005-09-01

    The way that nutrients cycle through atmospheric, terrestrial, oceanic and associated biotic reservoirs can constrain rates of biological production and help structure ecosystems on land and in the sea. On a global scale, cycling of nutrients also affects the concentration of atmospheric carbon dioxide. Because of their capacity for rapid growth, marine microorganisms are a major component of global nutrient cycles. Understanding what controls their distributions and their diverse suite of nutrient transformations is a major challenge facing contemporary biological oceanographers. What is emerging is an appreciation of the previously unknown degree of complexity within the marine microbial community.

  5. BARI+: a biometric based distributed key management approach for wireless body area networks.

    PubMed

    Muhammad, Khaliq-ur-Rahman Raazi Syed; Lee, Heejo; Lee, Sungyoung; Lee, Young-Koo

    2010-01-01

    Wireless body area networks (WBAN) consist of resource constrained sensing devices just like other wireless sensor networks (WSN). However, they differ from WSN in topology, scale and security requirements. Due to these differences, key management schemes designed for WSN are inefficient and unnecessarily complex when applied to WBAN. Considering the key management issue, WBAN are also different from WPAN because WBAN can use random biometric measurements as keys. We highlight the differences between WSN and WBAN and propose an efficient key management scheme, which makes use of biometrics and is specifically designed for WBAN domain.

  6. Neutrophils establish rapid and robust WAVE complex polarity in an actin-dependent fashion.

    PubMed

    Millius, Arthur; Dandekar, Sheel N; Houk, Andrew R; Weiner, Orion D

    2009-02-10

    Asymmetric intracellular signals enable cells to migrate in response to external cues. The multiprotein WAVE (also known as SCAR or WASF) complex activates the actin-nucleating Arp2/3 complex [1-4] and localizes to propagating "waves," which direct actin assembly during neutrophil migration [5, 6]. Here, we observe similar WAVE complex dynamics in other mammalian cells and analyze WAVE complex dynamics during establishment of neutrophil polarity. Earlier models proposed that spatially biased generation [7] or selection of protrusions [8] enables chemotaxis. These models require existing morphological polarity to control protrusions. We show that spatially biased generation and selection of WAVE complex recruitment also occur in morphologically unpolarized neutrophils during development of their first protrusions. Additionally, several mechanisms limit WAVE complex recruitment during polarization and movement: Intrinsic cues restrict WAVE complex distribution during establishment of polarity, and asymmetric intracellular signals constrain it in morphologically polarized cells. External gradients can overcome both intrinsic biases and control WAVE complex localization. After latrunculin-mediated inhibition of actin polymerization, addition and removal of agonist gradients globally recruits and releases the WAVE complex from the membrane. Under these conditions, the WAVE complex no longer polarizes, despite the presence of strong external gradients. Thus, actin polymer and the WAVE complex reciprocally interact during polarization.

  7. Constrained target controllability of complex networks

    NASA Astrophysics Data System (ADS)

    Guo, Wei-Feng; Zhang, Shao-Wu; Wei, Ze-Gang; Zeng, Tao; Liu, Fei; Zhang, Jingsong; Wu, Fang-Xiang; Chen, Luonan

    2017-06-01

    It is of great theoretical interest and practical significance to study how to control a system by applying perturbations to only a few driver nodes. A central topic of recent network research is how to determine the driver nodes that allow control of an entire network. In practice, however, to control a complex network, especially a biological network, one may know not only the set of nodes which need to be controlled (i.e. target nodes), but also the set of nodes to which control signals may be applied (i.e. constrained control nodes). Extending the general concept of controllability, we introduce the concept of constrained target controllability (CTC) of complex networks, which concerns the ability to drive any state of the target nodes to a desirable state by applying control signals to driver nodes chosen from the set of constrained control nodes. To efficiently investigate the CTC of complex networks, we further design a novel graph-theoretic algorithm called CTCA to estimate the ability of a given network to control targets by choosing driver nodes from the set of constrained control nodes. We extensively evaluate the CTC of numerous real complex networks. The results indicate that biological networks with a higher average degree are easier to control than biological networks with a lower average degree, while electronic networks with a lower average degree are easier to control than web networks with a higher average degree. We also show that our CTCA can more efficiently produce driver nodes for target-controlling the networks than existing state-of-the-art methods. Moreover, we use our CTCA to analyze two expert-curated bio-molecular networks and compare it with other state-of-the-art methods. The results illustrate that our CTCA can efficiently identify proven drug targets as well as new candidates, according to the constrained controllability of those biological networks.
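    The abstract does not spell out the CTCA algorithm, but constrained-target methods build on the classical maximum-matching criterion for structural controllability (Liu, Slotine & Barabási): nodes left unmatched by a maximum matching in the bipartite representation of the directed network must receive control signals. A minimal sketch of that classical criterion (not the authors' CTCA):

```python
# Classical maximum-matching criterion for structural controllability:
# unmatched nodes of a maximum matching are the driver nodes. This is a
# sketch of the underlying idea only, not the authors' CTCA algorithm.

def min_driver_nodes(n, edges):
    """n nodes labeled 0..n-1; edges is a list of directed (u, v) pairs."""
    succ = [[] for _ in range(n)]
    for u, v in edges:
        succ[u].append(v)
    match_to = [-1] * n  # match_to[v] = node matched into v, else -1

    def augment(u, seen):
        # Try to extend the matching from u along an augmenting path.
        for v in succ[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_to[v] == -1 or augment(match_to[v], seen):
                match_to[v] = u
                return True
        return False

    for u in range(n):
        augment(u, set())
    # Nodes with no matched incoming edge must be driven directly.
    return [v for v in range(n) if match_to[v] == -1]

# A directed path 0 -> 1 -> 2 needs a single driver (node 0);
# a star 0 -> {1, 2, 3} needs three.
print(min_driver_nodes(3, [(0, 1), (1, 2)]))       # [0]
print(min_driver_nodes(4, [(0, 1), (0, 2), (0, 3)]))
```

    Constrained target controllability additionally restricts which of these drivers may actually be used, which is the extension the CTCA addresses.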

  8. UV-Vis-IR spectral complex refractive indices and optical properties of brown carbon aerosol from biomass burning

    NASA Astrophysics Data System (ADS)

    Sumlin, Benjamin J.; Heinson, Yuli W.; Shetty, Nishit; Pandey, Apoorva; Pattison, Robert S.; Baker, Stephen; Hao, Wei Min; Chakrabarty, Rajan K.

    2018-02-01

    Constraining the complex refractive indices, optical properties and size of brown carbon (BrC) aerosols is a vital endeavor for improving climate models and satellite retrieval algorithms. Smoldering wildfires are the largest source of primary BrC, and fuel parameters such as moisture content, source depth, geographic origin, and fuel packing density (FPD) could influence the properties of the emitted aerosol. We measured in situ spectral (375-1047 nm) optical properties of BrC aerosols emitted from smoldering combustion of Boreal and Indonesian peatlands across a range of these fuel parameters. Inverse Lorenz-Mie algorithms used these optical measurements along with simultaneously measured particle size distributions to retrieve the aerosol complex refractive indices (m = n + iκ). Our results show that the real part n is constrained between 1.5 and 1.7 with no obvious dependence on wavelength (λ), moisture content, source depth, or geographic origin. With increasing λ from 375 to 532 nm, κ decreased from 0.014 to 0.003, with a corresponding increase in single scattering albedo (SSA) from 0.93 to 0.99. The spectral variability of κ follows the Kramers-Kronig dispersion relation for a damped harmonic oscillator. For λ ≥ 532 nm, both κ and SSA showed no spectral dependency. We discuss differences between this study and previous work. The imaginary part κ was sensitive to changes in FPD, and we hypothesize mechanisms that might help explain this observation.
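    The spectral steepness of κ reported above is often summarized by a power law, κ ∝ λ^(-w) (a common convention, not one the abstract itself states). Using the two quoted values as inputs, the implied exponent can be computed directly:

```python
import math

# Sketch: fit a power-law exponent w in kappa ∝ lambda^(-w) from two
# (wavelength, kappa) pairs. The input values are those quoted in the
# abstract (0.014 at 375 nm, 0.003 at 532 nm); the power-law summary
# itself is a common convention, not the authors' stated analysis.

def kappa_exponent(lam1, k1, lam2, k2):
    # Negated slope of log(kappa) versus log(lambda).
    return -math.log(k2 / k1) / math.log(lam2 / lam1)

w = kappa_exponent(375.0, 0.014, 532.0, 0.003)
print(w)  # roughly 4.4: a steep UV-visible absorption falloff typical of BrC
```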

  9. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    NASA Astrophysics Data System (ADS)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogeneous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
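    The kind of simple K upscaling referred to above can be sketched with the generic textbook estimators for layered media (these are standard averages, not necessarily the exact scheme used in the study; the K values are hypothetical): flow parallel to layering is governed by the arithmetic mean of K, flow perpendicular to layering by the harmonic mean, and the geometric mean lies between the two.

```python
import math

# Generic upscaling estimators for carrying small-scale hydraulic
# conductivity (K) measurements into a coarser model cell. Textbook
# averages only; not necessarily the study's exact scheme.

def arithmetic_mean(ks):
    # Effective K for flow parallel to layering.
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    # Effective K for flow perpendicular to layering.
    return len(ks) / sum(1.0 / k for k in ks)

def geometric_mean(ks):
    # Intermediate estimate, often used for disordered media.
    return math.exp(sum(math.log(k) for k in ks) / len(ks))

k_layers = [1e-6, 5e-5, 2e-4]  # m/s, hypothetical core-scale measurements
print(harmonic_mean(k_layers), geometric_mean(k_layers), arithmetic_mean(k_layers))
```

    The ordering harmonic ≤ geometric ≤ arithmetic always holds, which is why the choice of average can change an upscaled K by orders of magnitude in strongly heterogeneous media.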

  10. Programmable motion of DNA origami mechanisms.

    PubMed

    Marras, Alexander E; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E

    2015-01-20

    DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank-slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼ minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach.

  11. Programmable motion of DNA origami mechanisms

    PubMed Central

    Marras, Alexander E.; Zhou, Lifeng; Su, Hai-Jun; Castro, Carlos E.

    2015-01-01

    DNA origami enables the precise fabrication of nanoscale geometries. We demonstrate an approach to engineer complex and reversible motion of nanoscale DNA origami machine elements. We first design, fabricate, and characterize the mechanical behavior of flexible DNA origami rotational and linear joints that integrate stiff double-stranded DNA components and flexible single-stranded DNA components to constrain motion along a single degree of freedom and demonstrate the ability to tune the flexibility and range of motion. Multiple joints with simple 1D motion were then integrated into higher order mechanisms. One mechanism is a crank–slider that couples rotational and linear motion, and the other is a Bennett linkage that moves between a compacted bundle and an expanded frame configuration with a constrained 3D motion path. Finally, we demonstrate distributed actuation of the linkage using DNA input strands to achieve reversible conformational changes of the entire structure on ∼minute timescales. Our results demonstrate programmable motion of 2D and 3D DNA origami mechanisms constructed following a macroscopic machine design approach. PMID:25561550

  12. Biogeography of time partitioning in mammals.

    PubMed

    Bennie, Jonathan J; Duffy, James P; Inger, Richard; Gaston, Kevin J

    2014-09-23

    Many animals regulate their activity over a 24-h sleep-wake cycle, concentrating their peak periods of activity to coincide with the hours of daylight, darkness, or twilight, or using different periods of light and darkness in more complex ways. These behavioral differences, which are in themselves functional traits, are associated with suites of physiological and morphological adaptations with implications for the ecological roles of species. The biogeography of diel time partitioning is, however, poorly understood. Here, we document basic biogeographic patterns of time partitioning by mammals and ecologically relevant large-scale patterns of natural variation in "illuminated activity time" constrained by temperature, and we determine how well the first of these are predicted by the second. Although the majority of mammals are nocturnal, the distributions of diurnal and crepuscular species richness are strongly associated with the availability of biologically useful daylight and twilight, respectively. Cathemerality is associated with relatively long hours of daylight and twilight in the northern Holarctic region, whereas the proportion of nocturnal species is highest in arid regions and lowest at extreme high altitudes. Although thermal constraints on activity have been identified as key to the distributions of organisms, constraints due to functional adaptation to the light environment are less well studied. Global patterns in diversity are constrained by the availability of the temporal niche; disruption of these constraints by the spread of artificial lighting and anthropogenic climate change, and the potential effects on time partitioning, are likely to be critical influences on species' future distributions.

  13. Age and Mass for 920 Large Magellanic Cloud Clusters Derived from 100 Million Monte Carlo Simulations

    NASA Astrophysics Data System (ADS)

    Popescu, Bogdan; Hanson, M. M.; Elmegreen, Bruce G.

    2012-06-01

    We present new age and mass estimates for 920 stellar clusters in the Large Magellanic Cloud (LMC) based on previously published broadband photometry and the stellar cluster analysis package, MASSCLEANage. Expressed in the generic fitting formula d²N/(dM dt) ∝ M^α t^β, the distribution of observed clusters is described by α = -1.5 to -1.6 and β = -2.1 to -2.2. For 288 of these clusters, ages have recently been determined based on stellar photometric color-magnitude diagrams, allowing us to gauge the confidence of our ages. The results look very promising, opening up the possibility that this sample of 920 clusters, with reliable and consistent age, mass, and photometric measures, might be used to constrain important characteristics about the stellar cluster population in the LMC. We also investigate a traditional age determination method that uses a χ² minimization routine to fit observed cluster colors to standard infinite-mass limit simple stellar population models. This reveals serious defects in the derived cluster age distribution using this method. The traditional χ² minimization method, due to the variation of U, B, V, R colors, will always produce an overdensity of younger and older clusters, with an underdensity of clusters in the log(age/yr) = [7.0, 7.5] range. Finally, we present a unique simulation aimed at illustrating and constraining the fading limit in observed cluster distributions that includes the complex effects of stochastic variations in the observed properties of stellar clusters.
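    Monte Carlo cluster populations of the kind described above can be drawn from a power law in mass by inverse-transform sampling. A minimal sketch using the quoted slope α ≈ -1.5 (the mass range and sample size are illustrative, not taken from the paper):

```python
import random

# Sketch: draw cluster masses from dN/dM ∝ M^alpha by inverse-transform
# sampling, using the slope alpha ≈ -1.5 quoted in the abstract. The mass
# range [1e2, 1e5] (solar masses) is illustrative, not from the paper.

def sample_powerlaw(alpha, m_min, m_max, n, rng):
    a1 = alpha + 1.0  # valid for alpha != -1
    out = []
    for _ in range(n):
        u = rng.random()
        # Invert the CDF F(M) = (M^a1 - m_min^a1) / (m_max^a1 - m_min^a1).
        out.append((m_min ** a1 + u * (m_max ** a1 - m_min ** a1)) ** (1.0 / a1))
    return out

rng = random.Random(42)
masses = sample_powerlaw(-1.5, 1e2, 1e5, 10000, rng)
low = sum(1 for m in masses if m < 1e3)
print(low / len(masses))  # most sampled clusters are low-mass, as expected
```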

  14. A Comparison of Four Item-Selection Methods for Severely Constrained CATs

    ERIC Educational Resources Information Center

    He, Wei; Diao, Qi; Hauser, Carl

    2014-01-01

    This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs refer to those adaptive tests that seek to meet a complex set of constraints that are often not exclusive to each other (i.e., an item may contribute to the satisfaction of several…

  15. Adding Biotic Interactions into Paleodistribution Models: A Host-Cleptoparasite Complex of Neotropical Orchid Bees

    PubMed Central

    Silva, Daniel Paiva; Varela, Sara; Nemésio, André; De Marco, Paulo

    2015-01-01

    Orchid bees compose an exclusive Neotropical pollinator group, with bright body coloration. Several of those species build their own nests, while others are reported as nest cleptoparasites. Here, the objective was to evaluate whether the inclusion of a strong biotic interaction, such as the presence of a host species, improved the ability of species distribution models (SDMs) to predict the geographic range of the cleptoparasite species. The target species were Aglae caerulea and its host species Eulaema nigrita. Additionally, since A. caerulea is more frequently found in the Amazon rather than the Cerrado areas, a secondary objective was to evaluate whether this species is increasing or decreasing its distribution given South American past and current climatic conditions. SDMs methods (Maxent and Bioclim), in addition with current and past South American climatic conditions, as well as the occurrences for A. caerulea and E. nigrita were used to generate the distribution models. The distribution of A. caerulea was generated with and without the inclusion of the distribution of E. nigrita as a predictor variable. The results indicate A. caerulea was barely affected by past climatic conditions and the populations from the Cerrado savanna could be at least 21,000 years old (the last glacial maximum), as well as the Amazonian ones. On the other hand, in this study, the inclusion of the host-cleptoparasite interaction complex did not statistically improve the quality of the produced models, which means that the geographic range of this cleptoparasite species is mainly constrained by climate and not by the presence of the host species. Nonetheless, this could also be caused by unknown complexes of other Euglossini hosts with A. caerulea, which still remain to be described by science. PMID:26069956

  16. Herschel observations of extraordinary sources: Analysis of the HIFI 1.2 THz wide spectral survey toward orion KL. I. method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crockett, Nathan R.; Bergin, Edwin A.; Neill, Justin L.

    2014-06-01

    We present a comprehensive analysis of a broadband spectral line survey of the Orion Kleinmann-Low nebula (Orion KL), one of the most chemically rich regions in the Galaxy, using the HIFI instrument on board the Herschel Space Observatory. This survey spans a frequency range from 480 to 1907 GHz at a resolution of 1.1 MHz. These observations thus encompass the largest spectral coverage ever obtained toward this high-mass star-forming region in the submillimeter with high spectral resolution and include frequencies >1 THz, where the Earth's atmosphere prevents observations from the ground. In all, we detect emission from 39 molecules (79 isotopologues). Combining this data set with ground-based millimeter spectroscopy obtained with the IRAM 30 m telescope, we model the molecular emission from the millimeter to the far-IR using the XCLASS program, which assumes local thermodynamic equilibrium (LTE). Several molecules are also modeled with the MADEX non-LTE code. Because of the wide frequency coverage, our models are constrained by transitions over an unprecedented range in excitation energy. A reduced χ² analysis indicates that models for most species reproduce the observed emission well. In particular, most complex organics are well fit by LTE implying gas densities are high (>10⁶ cm⁻³) and excitation temperatures and column densities are well constrained. Molecular abundances are computed using H₂ column densities also derived from the HIFI survey. The distribution of rotation temperatures, T_rot, for molecules detected toward the hot core is significantly wider than the compact ridge, plateau, and extended ridge T_rot distributions, indicating the hot core has the most complex thermal structure.

  17. An ensemble Kalman filter for statistical estimation of physics constrained nonlinear regression models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harlim, John, E-mail: jharlim@psu.edu; Mahdi, Adam, E-mail: amahdi@ncsu.edu; Majda, Andrew J., E-mail: jonjon@cims.nyu.edu

    2014-01-15

    A central issue in contemporary science is the development of nonlinear data driven statistical–dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad-hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics constrained nonlinear regression models were developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, the model and the observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east–west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skew non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
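    The core of any ensemble Kalman filter is the analysis step that blends a forecast ensemble with a noisy observation. A minimal, pure-Python sketch of one "perturbed observation" analysis step for a scalar state (the paper's filter also estimates model coefficients and noise covariances; this shows only the basic update, with illustrative numbers):

```python
import random
import statistics

# Minimal sketch of one perturbed-observation EnKF analysis step for a
# scalar state with observation operator H = 1. Illustrative only; the
# paper's algorithm additionally estimates model and noise parameters.

def enkf_analysis(ensemble, y_obs, obs_var, rng):
    xb = statistics.mean(ensemble)
    pb = statistics.variance(ensemble)  # forecast (background) variance
    k = pb / (pb + obs_var)             # Kalman gain
    # Each member assimilates its own perturbed copy of the observation,
    # which keeps the analysis ensemble spread statistically consistent.
    return [x + k * (y_obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(2.0, 1.0) for _ in range(500)]  # forecast ensemble
posterior = enkf_analysis(prior, 4.0, 1.0, rng)    # observe y = 4.0
print(statistics.mean(prior), statistics.mean(posterior))
```

    With equal forecast and observation variances the analysis mean lands roughly halfway between the forecast mean and the observation, and the ensemble spread contracts, as expected from the Kalman update.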

  18. Distributed synchronization control of complex networks with communication constraints.

    PubMed

    Xu, Zhenhua; Zhang, Dan; Song, Hongbo

    2016-11-01

    This paper is concerned with the distributed synchronization control of complex networks with communication constraints. In this work, the controllers communicate with each other through a wireless network, acting as a controller network. Due to the constrained transmission power, packet size reduction and transmission rate reduction schemes are proposed to help reduce the communication load of the controller network. The packet dropout problem is also considered in the controller design, since it is often encountered in networked control systems. We show that the closed-loop system can be modeled as a switched system with uncertainties and random variables. By resorting to the switched system approach and stochastic system analysis methods, a new sufficient condition is first proposed such that exponential synchronization is guaranteed in the mean-square sense. The controller gains are determined by using the well-known cone complementarity linearization (CCL) algorithm. Finally, a simulation study is performed, which demonstrates the effectiveness of the proposed design algorithm. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Polarization and long-term variability of Sgr A* X-ray echo

    NASA Astrophysics Data System (ADS)

    Churazov, E.; Khabibullin, I.; Ponti, G.; Sunyaev, R.

    2017-06-01

    We use a model of the molecular gas distribution within ˜100 pc from the centre of the Milky Way (Kruijssen, Dale & Longmore) to simulate time evolution and polarization properties of the reflected X-ray emission, associated with the past outbursts from Sgr A*. While this model is too simple to describe the complexity of the true gas distribution, it illustrates the importance and power of long-term observations of the reflected emission. We show that the variable part of X-ray emission observed by Chandra and XMM-Newton from prominent molecular clouds is well described by a pure reflection model, providing strong support of the reflection scenario. While the identification of Sgr A* as a primary source for this reflected emission is already a very appealing hypothesis, a decisive test of this model can be provided by future X-ray polarimetric observations, which will allow placing constraints on the location of the primary source. In addition, X-ray polarimeters (like, e.g. XIPE) have sufficient sensitivity to constrain the line-of-sight positions of molecular complexes, removing major uncertainty in the model.

  20. The hydrogeology of complex lens conditions in Qatar

    NASA Astrophysics Data System (ADS)

    Lloyd, J. W.; Pike, J. G.; Eccleston, B. L.; Chidley, T. R. E.

    1987-01-01

    The emirate of Qatar lies on a peninsula extending northward from the mainland of Saudi Arabia into the Arabian Gulf. The peninsula is underlain by sedimentary rocks ranging from late Cretaceous to Holocene age but only two Lower Tertiary units are identified as aquifers. The groundwater distribution in these units is seen to be controlled by facies distributions related to tectonically controlled sedimentation and subsequent dissolution. Dissolution has created permeability in the Umm er Rhaduma limestones and in the overlying Rus anhydrites. In the latter case the dissolution has led to extensive surface collapse which has provided a mechanism for recharge from runoff. Despite very low rainfall and high evaporation rates, recharge related to storm runoff has resulted in the establishment of a complex fresh groundwater lens in both aquifer units. The lens is constrained by saline groundwaters which in the lower unit are controlled by heads in eastern Saudi Arabia but in the upper unit by the Arabian Gulf sea level. Groundwater abstraction is shown to be distorting the fresh groundwater lens configuration, and estimates of the resultant flow responses affecting the lens are given.

  1. An embodied biologically constrained model of foraging: from classical and operant conditioning to adaptive real-world behavior in DAC-X.

    PubMed

    Maffei, Giovanni; Santos-Pata, Diogo; Marcos, Encarni; Sánchez-Fibla, Marti; Verschure, Paul F M J

    2015-12-01

    Animals successfully forage within new environments by learning, simulating and adapting to their surroundings. The functions behind such goal-oriented behavior can be decomposed into 5 top-level objectives: 'how', 'why', 'what', 'where', 'when' (H4W). The paradigms of classical and operant conditioning describe some of the behavioral aspects found in foraging. However, it remains unclear how the organization of their underlying neural principles account for these complex behaviors. We address this problem from the perspective of the Distributed Adaptive Control theory of mind and brain (DAC) that interprets these two paradigms as expressing properties of core functional subsystems of a layered architecture. In particular, we propose DAC-X, a novel cognitive architecture that unifies the theoretical principles of DAC with biologically constrained computational models of several areas of the mammalian brain. DAC-X supports complex foraging strategies through the progressive acquisition, retention and expression of task-dependent information and associated shaping of action, from exploration to goal-oriented deliberation. We benchmark DAC-X using a robot-based hoarding task including the main perceptual and cognitive aspects of animal foraging. We show that efficient goal-oriented behavior results from the interaction of parallel learning mechanisms accounting for motor adaptation, spatial encoding and decision-making. Together, our results suggest that the H4W problem can be solved by DAC-X building on the insights from the study of classical and operant conditioning. Finally, we discuss the advantages and limitations of the proposed biologically constrained and embodied approach towards the study of cognition and the relation of DAC-X to other cognitive architectures. Copyright © 2015 Elsevier Ltd. All rights reserved.
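    The classical-conditioning component that architectures like DAC-X model can be caricatured by the Rescorla-Wagner rule, in which associative strength V moves toward the reward magnitude at a fixed learning rate. This is a deliberately simplified stand-in (DAC-X itself uses biologically constrained network models, and the parameter values here are illustrative):

```python
# Rescorla-Wagner caricature of classical conditioning: associative
# strength V is updated by a prediction error (lam - V) scaled by a
# learning rate lr. Illustrative stand-in only; not the DAC-X model.

def rescorla_wagner(trials, lam=1.0, lr=0.3, v0=0.0):
    v = v0
    history = []
    for _ in range(trials):
        v += lr * (lam - v)  # prediction-error driven update
        history.append(v)
    return history

curve = rescorla_wagner(20)
print(curve[0], curve[-1])  # rapid early learning, then an asymptote near lam
```

    The negatively accelerated learning curve this produces is the behavioral signature that more detailed, embodied models such as DAC-X must also reproduce while additionally accounting for spatial encoding and decision-making.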

  2. Analysis of polarization in hydrogen bonded complexes: An asymptotic projection approach

    NASA Astrophysics Data System (ADS)

    Drici, Nedjoua

    2018-03-01

    The asymptotic projection technique is used to investigate the polarization effect that arises from the interaction between the relaxed and frozen monomeric charge densities of a set of neutral and charged hydrogen bonded complexes. The AP technique, based on the resolution of the original Kohn-Sham equations, can give an acceptable qualitative description of the polarization effect in neutral complexes. The significant overlap of the electron densities in charged and π-conjugated complexes imposes the development of a new functional, describing the coupling between constrained and non-constrained electron densities within the AP technique, to provide an accurate representation of the polarization effect.

  3. Subband Image Coding with Jointly Optimized Quantizers

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1995-01-01

    An iterative design algorithm for the joint design of complexity- and entropy-constrained subband quantizers and associated entropy coders is proposed. Unlike conventional subband design algorithms, the proposed algorithm does not require the use of various bit allocation algorithms. Multistage residual quantizers are employed here because they provide greater control of the complexity-performance tradeoffs, and also because they allow efficient and effective high-order statistical modeling. The resulting subband coder exploits statistical dependencies within subbands, across subbands, and across stages, mainly through complexity-constrained high-order entropy coding. Experimental results demonstrate that the complexity-rate-distortion performance of the new subband coder is exceptional.

  4. Metabolic costs imposed by hydrostatic pressure constrain bathymetric range in the lithodid crab Lithodes maja.

    PubMed

    Brown, Alastair; Thatje, Sven; Morris, James P; Oliphant, Andrew; Morgan, Elizabeth A; Hauton, Chris; Jones, Daniel O B; Pond, David W

    2017-11-01

    The changing climate is shifting the distributions of marine species, yet the potential for shifts in depth distributions is virtually unexplored. Hydrostatic pressure is proposed to contribute to a physiological bottleneck constraining depth range extension in shallow-water taxa. However, bathymetric limitation by hydrostatic pressure remains undemonstrated, and the mechanism limiting hyperbaric tolerance remains hypothetical. Here, we assess the effects of hydrostatic pressure in the lithodid crab Lithodes maja (bathymetric range 4-790 m depth, approximately equivalent to 0.1 to 7.9 MPa hydrostatic pressure). Heart rate decreased with increasing hydrostatic pressure, and was significantly lower at ≥10.0 MPa than at 0.1 MPa. Oxygen consumption increased with increasing hydrostatic pressure to 12.5 MPa, before decreasing as hydrostatic pressure increased to 20.0 MPa; oxygen consumption was significantly higher at 7.5-17.5 MPa than at 0.1 MPa. Increases in expression of genes associated with neurotransmission, metabolism and stress were observed between 7.5 and 12.5 MPa. We suggest that hyperbaric tolerance in L. maja may be oxygen-limited by hyperbaric effects on heart rate and metabolic rate, but that L. maja's bathymetric range is limited by metabolic costs imposed by the effects of high hydrostatic pressure. These results advocate including hydrostatic pressure in a complex model of environmental tolerance, where energy limitation constrains biogeographic range, and facilitate the incorporation of hydrostatic pressure into the broader metabolic framework for ecology and evolution. Such an approach is crucial for accurately projecting biogeographic responses to changing climate, and for understanding the ecology and evolution of life at depth. © 2017. Published by The Company of Biologists Ltd.

  5. Exponential Arithmetic Based Self-Healing Group Key Distribution Scheme with Backward Secrecy under the Resource-Constrained Wireless Networks

    PubMed Central

    Guo, Hua; Zheng, Yandong; Zhang, Xiyong; Li, Zhoujun

    2016-01-01

    In resource-constrained wireless networks, resources such as storage space and communication bandwidth are limited. To guarantee secure communication in such networks, group keys must be distributed to users. The self-healing group key distribution (SGKD) scheme is a promising cryptographic tool that can be used to distribute and update the group key for secure group communication over unreliable wireless networks. Among all known SGKD schemes, exponential arithmetic based SGKD (E-SGKD) schemes reduce the storage overhead to a constant, and are thus suitable for resource-constrained wireless networks. In this paper, we provide a new mechanism to achieve E-SGKD schemes with backward secrecy. We first propose a basic E-SGKD scheme based on a known polynomial-based SGKD, which has optimal storage overhead but no backward secrecy. To obtain backward secrecy and reduce the communication overhead, we introduce a novel approach for message broadcasting and self-healing. Compared with other E-SGKD schemes, our new E-SGKD scheme has optimal storage overhead, high communication efficiency and satisfactory security. Simulation results in Zigbee-based networks show that the proposed scheme is suitable for resource-constrained wireless networks. Finally, we show an application of our proposed scheme. PMID:27136550

  6. Understanding scaling through history-dependent processes with collapsing sample space.

    PubMed

    Corominas-Murtra, Bernat; Hanel, Rudolf; Thurner, Stefan

    2015-04-28

    History-dependent processes are ubiquitous in natural and social systems. Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space-reducing (SSR) processes necessarily lead to Zipf's law in the rank distributions of their outcomes. We show that by adding noise to SSR processes the corresponding rank distributions remain exact power laws, p(x) ∼ x^(−λ), where the exponent λ directly corresponds to the mixing ratio of the SSR process and noise. This allows us to give a precise meaning to the scaling exponent in terms of the degree to which a given process reduces its sample space as it unfolds. Noisy SSR processes further allow us to explain a wide range of scaling exponents in frequency distributions, ranging from α = 2 to ∞. We discuss several applications showing how SSR processes can be used to understand Zipf's law in word frequencies, and how they are related to diffusion processes in directed networks and to aging processes such as fragmentation. SSR processes provide a new alternative for understanding the origin of scaling in complex systems without recourse to multiplicative, preferential, or self-organized critical processes.
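A minimal simulation of an SSR process makes the Zipf scaling concrete: a process started at state N jumps uniformly to a lower state until it reaches 1, then restarts, and the visit counts of the states fall off as a power law. The state count, number of restarts and fitting range below are arbitrary choices, not from the paper:

```python
import numpy as np

def ssr_visits(N, restarts, lam=1.0, seed=0):
    """Simulate a sample-space-reducing process on states {1..N}.
    With probability lam the process jumps uniformly below its current
    state (the SSR rule); otherwise it jumps uniformly over all states
    (noise). Returns the visit counts of states 1..N."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(N + 1)
    for _ in range(restarts):
        i = N
        while i > 1:
            counts[i] += 1
            if rng.random() < lam:
                i = rng.integers(1, i)      # uniform on {1, ..., i-1}
            else:
                i = rng.integers(1, N + 1)  # noise: uniform on {1, ..., N}
        counts[1] += 1
    return counts[1:]

counts = ssr_visits(N=1000, restarts=20_000)

# Fit the exponent of p(i) ~ i**(-lam) over intermediate ranks
i = np.arange(1, 1001)
mask = (i >= 10) & (i <= 300)
slope = np.polyfit(np.log(i[mask]), np.log(counts[mask]), 1)[0]
print(slope)  # close to -1, i.e. Zipf's law for the pure SSR process
```

Lowering `lam` mixes in noise and, per the abstract, tilts the fitted exponent toward `-lam`.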

  7. Forward Modeling of Atmospheric Carbon Dioxide in GEOS-5: Uncertainties Related to Surface Fluxes and Sub-Grid Transport

    NASA Technical Reports Server (NTRS)

    Pawson, Steven; Ott, Lesley E.; Zhu, Zhengxin; Bowman, Kevin; Brix, Holger; Collatz, G. James; Dutkiewicz, Stephanie; Fisher, Joshua B.; Gregg, Watson W.; Hill, Chris; hide

    2011-01-01

    Forward GEOS-5 AGCM simulations of CO2, with transport constrained by analyzed meteorology for 2009-2010, are examined. The CO2 distributions are evaluated using AIRS upper-tropospheric CO2 and ACOS-GOSAT total-column CO2 observations. Different combinations of surface CO2 fluxes are used to generate ensembles of runs that span some of the uncertainty in surface emissions and uptake. The fluxes are specified in GEOS-5 from different inventories (fossil and biofuel), different data-constrained estimates of land biological emissions, and different data-constrained ocean-biology estimates. One set of fluxes is based on the established "Transcom" database and others are constructed using contemporary satellite observations to constrain land and ocean process models. Likewise, different approximations to sub-grid transport are employed to construct an ensemble of CO2 distributions related to transport variability. This work is part of NASA's "Carbon Monitoring System Flux Pilot Project."

  8. Mathematic modeling of complex aquifer: Evian Natural Mineral Water case study considering lumped and distributed models.

    NASA Astrophysics Data System (ADS)

    Henriot, Abel; Blavoux, Bernard; Travi, Yves; Lachassagne, Patrick; Beon, Olivier; Dewandel, Benoit; Ladouche, Bernard

    2013-04-01

    The Evian Natural Mineral Water (NMW) aquifer is a highly heterogeneous complex of Quaternary glacial deposits composed of three main units, from bottom to top: - The "Inferior Complex", mainly composed of basal and impermeable till lying on the Alpine rocks; it outcrops only at the higher altitudes but is known at depth through drill holes. - The "Gavot Plateau Complex", an interstratified complex of mainly basal and lateral till up to 400 m thick; it outcrops at altitudes above approximately 850 m a.m.s.l. and up to 1200 m a.m.s.l. over a 30 km² area, and is the main known recharge area of the hydromineral system. - The "Terminal Complex", from which the Evian NMW emerges at 410 m a.m.s.l.; it is composed of sand and gravel kame terraces that allow water to flow from the deep permeable layers of the "Gavot Plateau Complex" to the "Terminal Complex". A thick and impermeable terminal till caps and seals the system, so the aquifer is confined in its downstream area. Because of the heterogeneity and complexity of this hydrosystem, distributed modeling tools are difficult to implement at the whole-system scale: strong hypotheses would have to be made about geometry, hydraulic properties and boundary conditions, for example, and extrapolation would undoubtedly lead to unacceptable errors. Consequently, a modeling strategy is being developed that also helps improve the conceptual model of the hydrosystem. Lumped models, mainly based on tritium time series, allow the whole hydrosystem to be modeled by combining in series an exponential model (superficial aquifers of the "Gavot Plateau Complex"), a dispersive model (the Gavot Plateau interstratified complex) and a piston-flow model (sand and gravel of the kame terraces), with mean transit times of 8, 60 and 2.5 years respectively. These models provide insight into the governing parameters of the whole mineral aquifer. They help improve the current conceptual model and are to be refined with other environmental tracers such as CFCs and SF6.
A deterministic approach (distributed model; flow and transport) is performed at the scale of the "Terminal Complex". The geometry of the system is quite well known from drill holes, and the aquifer properties from data processing of hydraulic heads and interpretation of pumping tests. A multidisciplinary approach (hydrodynamics, hydrochemistry, geology, isotopes) for the recharge area (Gavot Plateau Complex) aims to better constrain the upstream boundary of the distributed model. Moreover, the perfect-tracer modeling approach tightly constrains the fitting of this distributed model. The result is a high-resolution conceptual model leading to a future operational management tool for the aquifer.
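Coupling the lumped models in series, as described above, amounts to convolving the transit-time distributions of the individual reservoirs, so the mean transit times add (8 + 60 + 2.5 ≈ 70.5 years). A numerical sketch, assuming the standard exponential and dispersion (inverse-Gaussian) transit-time models and a guessed dispersion parameter `pd = 0.5` that is not given in the abstract:

```python
import numpy as np

dt = 0.25                         # years
t = np.arange(1, 4801) * dt       # 0.25 ... 1200 years (avoid t = 0)

def exponential_ttd(t, tau):
    """Exponential (well-mixed reservoir) transit-time pdf, mean tau."""
    return np.exp(-t / tau) / tau

def dispersion_ttd(t, tau, pd):
    """Dispersion-model transit-time pdf (inverse-Gaussian form), mean tau."""
    return np.sqrt(tau / (4 * np.pi * pd * t ** 3)) \
        * np.exp(-(t - tau) ** 2 / (4 * pd * tau * t))

g_exp = exponential_ttd(t, 8.0)          # superficial aquifers
g_disp = dispersion_ttd(t, 60.0, 0.5)    # interstratified complex (pd assumed)

# Serial coupling of reservoirs = convolution of their transit-time pdfs
g_total = np.convolve(g_exp, g_disp)[:len(t)] * dt

# Piston flow (2.5 yr) is a pure time shift of the distribution
shift = int(round(2.5 / dt))
g_total = np.roll(g_total, shift)
g_total[:shift] = 0.0
g_total /= g_total.sum() * dt            # correct small discretization losses

mean_tt = np.sum(t * g_total) * dt
print(mean_tt)  # close to 8 + 60 + 2.5 = 70.5 years
```

A tritium input history convolved with `g_total` would then predict the tracer output at the springs, which is how such lumped models are calibrated.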

  9. Retrieving the Vertical Structure of the Effective Aerosol Complex Index of Refraction from a Combination of Aerosol in Situ and Remote Sensing Measurements During TARFOX

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Turco, R. P.; Liou, K. N.; Russell, P. B.; Bergstrom, R. W.; Schmid, B.; Livingston, J. M.; Hobbs, P. V.; Hartley, W. S.; Ismail, S.

    2000-01-01

    The largest uncertainty in estimates of the effects of atmospheric aerosols on climate stems from uncertainties in the determination of their microphysical properties, including the aerosol complex index of refraction, which in turn determines their optical properties. A novel technique is used to estimate the aerosol complex index of refraction in distinct vertical layers from a combination of aerosol in situ size distribution and remote sensing measurements during the Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX). In particular, aerosol backscatter measurements using the NASA Langley LASE (Lidar Atmospheric Sensing Experiment) instrument and in situ aerosol size distribution data are utilized to derive vertical profiles of the 'effective' aerosol complex index of refraction at 815 nm (i.e., the refractive index that would provide the same backscatter signal in a forward calculation on the basis of the measured in situ particle size distributions for homogeneous, spherical aerosols). A sensitivity study shows that this method yields small errors in the retrieved aerosol refractive indices, provided the errors in the lidar derived aerosol backscatter are less than 30% and random in nature. Absolute errors in the estimated aerosol refractive indices are generally less than 0.04 for the real part and can be as much as 0.042 for the imaginary part in the case of a 30% error in the lidar-derived aerosol backscatter. The measurements of aerosol optical depth from the NASA Ames Airborne Tracking Sunphotometer (AATS-6) are successfully incorporated into the new technique and help constrain the retrieved aerosol refractive indices. An application of the technique to two TARFOX case studies yields the occurrence of vertical layers of distinct aerosol refractive indices. Values of the estimated complex aerosol refractive index range from 1.33 to 1.45 for the real part and 0.001 to 0.008 for the imaginary part. 
The methodology devised in this study provides, for the first time, a complete set of vertically resolved aerosol size distribution and refractive index data, yielding the vertical distribution of aerosol optical properties required for the determination of aerosol-induced radiative flux changes.

  10. Retrieving the Vertical Structure of the Effective Aerosol Complex Index of Refraction from a Combination of Aerosol in Situ and Remote Sensing Measurements During TARFOX

    NASA Technical Reports Server (NTRS)

    Redemann, J.; Turco, R. P.; Liou, K. N.; Russell, P. B.; Bergstrom, R. W.; Schmid, B.; Livingston, J. M.; Hobbs, P. V.; Hartley, W. S.; Ismail, S.; hide

    2000-01-01

    The largest uncertainty in estimates of the effects of atmospheric aerosols on climate stems from uncertainties in the determination of their microphysical properties, including the aerosol complex index of refraction, which in turn determines their optical properties. A novel technique is used to estimate the aerosol complex index of refraction in distinct vertical layers from a combination of aerosol in situ size distribution and remote sensing measurements during the Tropospheric Aerosol Radiative Forcing Observational Experiment (TARFOX). In particular, aerosol backscatter measurements using the NASA Langley LASE (Lidar Atmospheric Sensing Experiment) instrument and in situ aerosol size distribution data are utilized to derive vertical profiles of the "effective" aerosol complex index of refraction at 815 nm (i.e., the refractive index that would provide the same backscatter signal in a forward calculation on the basis of the measured in situ particle size distributions for homogeneous, spherical aerosols). A sensitivity study shows that this method yields small errors in the retrieved aerosol refractive indices, provided the errors in the lidar-derived aerosol backscatter are less than 30% and random in nature. Absolute errors in the estimated aerosol refractive indices are generally less than 0.04 for the real part and can be as much as 0.042 for the imaginary part in the case of a 30% error in the lidar-derived aerosol backscatter. The measurements of aerosol optical depth from the NASA Ames Airborne Tracking Sunphotometer (AATS-6) are successfully incorporated into the new technique and help constrain the retrieved aerosol refractive indices. An application of the technique to two TARFOX case studies yields the occurrence of vertical layers of distinct aerosol refractive indices. Values of the estimated complex aerosol refractive index range from 1.33 to 1.45 for the real part and 0.001 to 0.008 for the imaginary part. 
The methodology devised in this study provides, for the first time, a complete set of vertically resolved aerosol size distribution and refractive index data, yielding the vertical distribution of aerosol optical properties required for the determination of aerosol-induced radiative flux changes.

  11. High resolution near on-axis digital holography using constrained optimization approach with faster convergence

    NASA Astrophysics Data System (ADS)

    Pandiyan, Vimal Prabhu; Khare, Kedar; John, Renu

    2017-09-01

    A constrained optimization approach with faster convergence is proposed to recover the complex object field from near on-axis digital holography (DH). We subtract the DC from the hologram after recording the object-beam and reference-beam intensities separately. The DC-subtracted hologram is then used to recover the complex object information using the constrained optimization approach. The recovered complex object field is back-propagated to the image plane using the Fresnel back-propagation method. This approach provides higher-resolution images than the conventional Fourier filtering approach and is 25% faster than the previously reported constrained optimization approach, owing to the subtraction of two DC terms in the cost function. We demonstrate this approach in DH and digital holographic microscopy using the U.S. Air Force resolution target as the object, retrieving a high-resolution image free of DC and twin-image interference. We also demonstrate the high potential of this technique on a transparent microelectrode patterned on indium tin oxide-coated glass, by reconstructing a high-resolution quantitative phase microscope image, and by imaging yeast cells.
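Numerical back-propagation of a recovered complex field can be sketched as follows. This illustration uses the angular spectrum method (which reduces to Fresnel propagation in the paraxial regime, rather than reproducing the authors' exact kernel), with arbitrary toy parameters:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex field a distance z (negative z
    back-propagates) using the angular spectrum transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z)
    H[arg < 0] = 0.0  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Toy "object": a small circular amplitude aperture
n, dx, wl = 256, 2e-6, 633e-9
y, x = np.mgrid[:n, :n]
obj = ((x - n // 2) ** 2 + (y - n // 2) ** 2 < 20 ** 2).astype(complex)

holo_plane = angular_spectrum_propagate(obj, wl, dx, 5e-3)          # forward
recovered = angular_spectrum_propagate(holo_plane, wl, dx, -5e-3)   # back
err = np.max(np.abs(recovered - obj))
print(err)  # near zero: back-propagation inverts the forward step
```

In an actual DH pipeline the field at the hologram plane would come from the constrained optimization step rather than a forward simulation; the back-propagation itself is the same operation.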

  12. Inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    NASA Astrophysics Data System (ADS)

    Ridley, Moira K.; Hiemstra, Tjisse; van Riemsdijk, Willem H.; Machesky, Michael L.

    2009-04-01

    Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (110) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl−, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and to other data found by spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation, as suggested by a recent MD study. Moreover, its MD dominance quantitatively agrees with the CD model prediction. For Rb+, the presence of a tetradentate complex, as found by spectroscopy, agreed well with the fitted CD, and its predicted presence was quantitatively in very good agreement with the amount found by spectroscopy.

  13. Inner-sphere complexation of cations at the rutile-water interface: A concise surface structural interpretation with the CD and MUSIC model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ridley, Moira K.; Hiemstra, T.; Van Riemsdijk, Willem H.

    Acid-base reactivity and ion-interaction between mineral surfaces and aqueous solutions is most frequently investigated at the macroscopic scale as a function of pH. Experimental data are then rationalized by a variety of surface complexation models. These models are thermodynamically based, which in principle does not require a molecular picture. The models are typically calibrated to relatively simple solid-electrolyte solution pairs and may provide poor descriptions of complex multi-component mineral-aqueous solutions, including those found in natural environments. Surface complexation models may be improved by incorporating molecular-scale surface structural information to constrain the modeling efforts. Here, we apply a concise, molecularly-constrained surface complexation model to a diverse suite of surface titration data for rutile and thereby begin to address the complexity of multi-component systems. Primary surface charging curves in NaCl, KCl, and RbCl electrolyte media were fit simultaneously using a charge distribution (CD) and multisite complexation (MUSIC) model [Hiemstra T. and Van Riemsdijk W. H. (1996) A surface structural approach to ion adsorption: the charge distribution (CD) model. J. Colloid Interf. Sci. 179, 488-508], coupled with a Basic Stern layer description of the electric double layer. In addition, data for the specific interaction of Ca2+ and Sr2+ with rutile, in NaCl and RbCl media, were modeled. In recent developments, spectroscopy, quantum calculations, and molecular simulations have shown that electrolyte and divalent cations are principally adsorbed in various inner-sphere configurations on the rutile (110) surface [Zhang Z., Fenter P., Cheng L., Sturchio N. C., Bedzyk M. J., Předota M., Bandura A., Kubicki J., Lvov S. N., Cummings P. T., Chialvo A. A., Ridley M. K., Bénézeth P., Anovitz L., Palmer D. A., Machesky M. L. and Wesolowski D. J. (2004) Ion adsorption at the rutile-water interface: linking molecular and macroscopic properties. Langmuir 20, 4954-4969]. Our CD modeling results are consistent with these adsorbed configurations provided adsorbed cation charge is allowed to be distributed between the surface (0-plane) and Stern plane (1-plane). Additionally, a complete description of our titration data required inclusion of outer-sphere binding, principally for Cl−, which was common to all solutions, but also for Rb+ and K+. These outer-sphere species were treated as point charges positioned at the Stern layer, and hence determined the Stern layer capacitance value. The modeling results demonstrate that a multi-component suite of experimental data can be successfully rationalized within a CD and MUSIC model using a Stern-based description of the EDL. Furthermore, the fitted CD values of the various inner-sphere complexes of the mono- and divalent ions can be linked to the microscopic structure of the surface complexes and to other data found by spectroscopy as well as molecular dynamics (MD). For the Na+ ion, the fitted CD value points to the presence of bidentate inner-sphere complexation, as suggested by a recent MD study. Moreover, its MD dominance quantitatively agrees with the CD model prediction. For Rb+, the presence of a tetradentate complex, as found by spectroscopy, agreed well with the fitted CD, and its predicted presence was quantitatively in very good agreement with the amount found by spectroscopy.

  14. Simple summation rule for optimal fixation selection in visual search.

    PubMed

    Najemnik, Jiri; Geisler, Wilson S

    2009-06-01

    When searching for a known target in a natural texture, practiced humans achieve near-optimal performance compared to a Bayesian ideal searcher constrained with the human map of target detectability across the visual field [Najemnik, J., & Geisler, W. S. (2005). Optimal eye movement strategies in visual search. Nature, 434, 387-391]. To do so, humans must be good at choosing where to fixate during the search [Najemnik, J., & Geisler, W. S. (2008). Eye movement statistics in humans are consistent with an optimal strategy. Journal of Vision, 8(3):4, 1-14]; however, it seems unlikely that a biological nervous system would implement the computations for Bayesian ideal fixation selection because of their complexity. Here we derive and test a simple heuristic for optimal fixation selection that appears to be a much better candidate for implementation within a biological nervous system. Specifically, we show that the near-optimal fixation location is the maximum of the current posterior probability distribution for target location after the distribution is filtered by (convolved with) the square of the retinotopic target detectability map. We term the model that uses this strategy the entropy limit minimization (ELM) searcher. We show that when constrained with a human-like retinotopic map of target detectability and human search error rates, the ELM searcher performs as well as the Bayesian ideal searcher and produces fixation statistics similar to those of humans.
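The ELM rule itself is compact: filter the current posterior with the squared detectability map and fixate the maximum. A toy sketch with an assumed Gaussian detectability map and a two-bump posterior (all parameters are illustrative, not from the paper):

```python
import numpy as np
from scipy.signal import convolve2d

def elm_next_fixation(posterior, detectability):
    """ELM heuristic: fixate the maximum of the posterior convolved with
    the squared (fixation-centered) retinotopic detectability map."""
    filtered = convolve2d(posterior, detectability ** 2, mode='same')
    return np.unravel_index(np.argmax(filtered), filtered.shape)

n = 41
y, x = np.mgrid[:n, :n]
c = n // 2
# Foveated detectability: best at the fixation point, falling off peripherally
detectability = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * 6.0 ** 2))

# Posterior with two equally likely target locations
posterior = np.zeros((n, n))
posterior[10, 10] = 0.5
posterior[14, 14] = 0.5

fy, fx = elm_next_fixation(posterior, detectability)
print(fy, fx)  # lands between the two bumps, covering both with one fixation
```

Choosing the point between two nearby probability bumps, rather than either bump itself, is exactly the kind of center-of-gravity fixation the ideal searcher also produces.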

  15. Modeling and simulating networks of interdependent protein interactions.

    PubMed

    Stöcker, Bianca K; Köster, Johannes; Zamir, Eli; Rahmann, Sven

    2018-05-21

    Protein interactions are fundamental building blocks of biochemical reaction systems underlying cellular functions. The complexity and functionality of these systems emerge not only from the protein interactions themselves but also from the dependencies between these interactions, as generated by allosteric effects or mutual exclusion due to steric hindrance. Therefore, formal models for integrating and utilizing information about interaction dependencies are of high interest. Here, we describe an approach for endowing protein networks with interaction dependencies using propositional logic, thereby obtaining constrained protein interaction networks ("constrained networks"). The construction of these networks is based on public interaction databases as well as text-mined information about interaction dependencies. We present an efficient data structure and algorithm to simulate protein complex formation in constrained networks. The efficiency of the model allows fast simulation and facilitates the analysis of many proteins in large networks. In addition, this approach enables the simulation of perturbation effects, such as knockout of single or multiple proteins and changes of protein concentrations. We illustrate how our model can be used to analyze a constrained human adhesome protein network, which is responsible for the formation of diverse and dynamic cell-matrix adhesion sites. By comparing protein complex formation under known interaction dependencies versus without dependencies, we investigate how these dependencies shape the resulting repertoire of protein complexes. Furthermore, our model enables investigating how the interplay of network topology with interaction dependencies influences the propagation of perturbation effects across a large biochemical system. 
Our simulation software CPINSim (for Constrained Protein Interaction Network Simulator) is available under the MIT license at http://github.com/BiancaStoecker/cpinsim and as a Bioconda package (https://bioconda.github.io).
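The idea of interaction dependencies expressed as propositional constraints can be sketched in a toy simulator. This is an illustrative reconstruction only, not CPINSim's actual data structures or algorithm; the proteins, interactions and constraints below are hypothetical:

```python
import random

# Hypothetical toy network: four possible pairwise interactions
interactions = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]

def allowed(bound, candidate):
    """Propositional constraints over the set of formed interactions:
    - steric hindrance: B and C compete for the same site on A
    - allosteric dependency: B-D can only form once A-B has formed."""
    bound = bound | {candidate}
    if ("A", "B") in bound and ("A", "C") in bound:
        return False
    if ("B", "D") in bound and ("A", "B") not in bound:
        return False
    return True

def simulate(seed):
    """Single greedy pass: attempt interactions in random order and keep
    each one only if all constraints remain satisfied."""
    rng = random.Random(seed)
    order = interactions[:]
    rng.shuffle(order)
    bound = set()
    for edge in order:
        if allowed(bound, edge):
            bound.add(edge)
    return bound

# Enumerate the complexes reachable under the dependencies
complexes = {frozenset(simulate(s)) for s in range(200)}
for c in sorted(complexes, key=len):
    print(sorted(c))
```

Removing the two constraints from `allowed` collapses the outcome to the single fully connected complex, which mirrors the paper's comparison of complex formation with and without interaction dependencies.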

  16. Vibration control of beams using stand-off layer damping: finite element modeling and experiments

    NASA Astrophysics Data System (ADS)

    Chaudry, A.; Baz, A.

    2006-03-01

    Damping treatments with a stand-off layer (SOL) have been widely accepted as an attractive alternative to conventional constrained layer damping (CLD) treatments. Such acceptance stems from the fact that the SOL, which is simply a slotted spacer layer sandwiched between the viscoelastic layer and the base structure, acts as a strain magnifier that considerably amplifies the shear strain, and hence the energy dissipation, of the viscoelastic layer. Accordingly, more effective vibration suppression can be achieved by using SOL than by employing conventional CLD. In this paper, a comprehensive finite element model of the stand-off layer constrained damping treatment is developed. The model accounts for the geometrical and physical parameters of the slotted SOL, the viscoelastic layer, the constraining layer, and the base structure. The predictions of the model are validated against the predictions of a distributed transfer function model and a model built using a commercial finite element code (ANSYS). Furthermore, the theoretical predictions are validated experimentally for passive SOL treatments of different configurations. The obtained results indicate close agreement between theory and experiments, and demonstrate the effectiveness of CLD with SOL in enhancing energy dissipation compared to conventional CLD. Extending the proposed one-dimensional CLD with SOL to more complex structures is a natural continuation of the present study.

  17. A multistage motion vector processing method for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong Q.

    2008-05-01

    In this paper, a novel, low-complexity motion vector processing algorithm at the decoder is proposed for motion-compensated frame interpolation or frame-rate up-conversion. We address the problems of broken edges and deformed structures in an interpolated frame by hierarchically refining motion vectors on different block sizes. Our method explicitly considers the reliability of each received motion vector and is capable of preserving structure information. This is achieved by analyzing the distribution of residual energies and effectively merging blocks that have unreliable motion vectors. The motion vector reliability information is also used as prior knowledge in motion vector refinement using a constrained vector median filter, to avoid selecting an identical unreliable vector. We also propose using chrominance information in our method. Experimental results show that the proposed scheme achieves better visual quality and is robust even in video sequences with complex scenes and fast motion.
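A vector median filter of the kind mentioned picks, among candidate motion vectors in a neighborhood, the one minimizing the summed distance to all candidates; the constrained variant additionally bars unreliable vectors from being selected. A simplified sketch (the paper's exact reliability weighting and hierarchical refinement are not reproduced):

```python
import numpy as np

def vector_median(candidates):
    """Vector median filter: return the candidate minimizing the sum of
    Euclidean distances to all candidates in the neighborhood."""
    v = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2).sum(axis=1)
    return tuple(int(c) for c in v[np.argmin(dists)])

def constrained_vector_median(candidates, reliable):
    """Constrained variant: all neighbors vote through the distance sum,
    but only vectors flagged reliable may be selected."""
    v = np.asarray(candidates, dtype=float)
    dists = np.linalg.norm(v[:, None, :] - v[None, :, :], axis=2).sum(axis=1)
    dists[~np.asarray(reliable)] = np.inf   # forbid unreliable picks
    return tuple(int(c) for c in v[np.argmin(dists)])

# Neighborhood dominated by a cluster of unreliable outlier vectors
mvs = [(15, -9), (15, -9), (15, -9), (2, 1), (2, 2)]
reliable = [False, False, False, True, True]
print(vector_median(mvs))                   # → (15, -9), the outlier cluster
print(constrained_vector_median(mvs, reliable))  # → (2, 1), a reliable vector
```

The example shows why the constraint matters: a repeated unreliable vector can win the plain vector median, while the constrained filter falls back to the best reliable candidate.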

  18. The CMS Tier0 goes cloud and grid for LHC Run 2

    DOE PAGES

    Hufnagel, Dirk

    2015-12-23

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone, due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. Furthermore, this contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  19. The CMS TierO goes Cloud and Grid for LHC Run 2

    NASA Astrophysics Data System (ADS)

    Hufnagel, Dirk

    2015-12-01

    In 2015, CMS will embark on a new era of collecting LHC collisions at unprecedented rates and complexity. This will put a tremendous stress on our computing systems. Prompt Processing of the raw data by the Tier-0 infrastructure will no longer be constrained to CERN alone due to the significantly increased resource requirements. In LHC Run 2, we will need to operate it as a distributed system utilizing both the CERN Cloud-based Agile Infrastructure and a significant fraction of the CMS Tier-1 Grid resources. In another big change for LHC Run 2, we will process all data using the multi-threaded framework to deal with the increased event complexity and to ensure efficient use of the resources. This contribution will cover the evolution of the Tier-0 infrastructure and present scale testing results and experiences from the first data taking in 2015.

  20. Structural control of coalbed methane production in Alabama

    USGS Publications Warehouse

    Pashin, J.C.; Groshong, R.H.

    1998-01-01

    Thin-skinned structures are distributed throughout the Alabama coalbed methane fields, and these structures affect the production of gas and water from coal-bearing strata. Extensional structures in Deerlick Creek and Cedar Cove fields include normal faults and hanging-wall rollovers, and area balancing indicates that these structures are detached in the Pottsville Formation. Compressional folds in Gurnee and Oak Grove fields, by comparison, are interpreted to be detachment folds formed above decollements at different stratigraphic levels. Patterns of gas and water production reflect the structural style of each field and further indicate that folding and faulting have affected the distribution of permeability and the overall success of coalbed methane operations. Area balancing can be an effective way to characterize coalbed methane reservoirs in structurally complex regions because it constrains structural geometry and can be used to determine the distribution of layer-parallel strain. Comparison of calculated requisite strain and borehole expansion data from calliper logs suggests that strain in coalbed methane reservoirs is predictable and can be expressed as fracturing and small-scale faulting. However, refined methodology is needed to analyze heterogeneous strain distributions in discrete bed segments. Understanding temporal variation of production patterns in areas where gas and water production are influenced by map-scale structure will further facilitate effective management of coalbed methane fields.

  1. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e., the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the choice of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data as well as real audio-magnetotelluric (AMT) and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
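    The PSO loop itself is generic; a minimal global-best variant on a toy quadratic misfit is sketched below. This is a textbook implementation, not the authors' code, and the inertia/acceleration values (`w`, `c1`, `c2`) and the bounds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm optimizer.
    bounds is a list of (lo, hi) pairs, one per model parameter."""
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))       # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pcost = np.array([objective(p) for p in x])
    gbest = pbest[pcost.argmin()].copy()              # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                    # hard bound constraint
        cost = np.array([objective(p) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()].copy()
    return gbest, pcost.min()

# Toy "misfit": recover two resistivity-like parameters (10, 100).
true = np.array([10.0, 100.0])
best, cost = pso(lambda m: np.sum((m - true) ** 2),
                 bounds=[(1, 50), (50, 500)])
print(best)  # ≈ [10, 100]
```

    Constraints from external information enter naturally here, either as the per-parameter bounds or as penalty terms added to the objective.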

  2. Application of parallel distributed Lagrange multiplier technique to simulate coupled Fluid-Granular flows in pipes with varying Cross-Sectional area

    DOE PAGES

    Kanarska, Yuliya; Walton, Otis

    2015-11-30

    Fluid-granular flows are common phenomena in nature and industry. Here, an efficient computational technique based on the distributed Lagrange multiplier method is utilized to simulate complex fluid-granular flows. Each particle is explicitly resolved on an Eulerian grid as a separate domain, using solid volume fractions. The fluid equations are solved through the entire computational domain; however, Lagrange multiplier constraints are applied inside the particle domain such that the fluid within any volume associated with a solid particle moves as an incompressible rigid body. The particle-particle interactions are implemented using explicit force-displacement interactions for frictional inelastic particles, similar to the DEM method, with some modifications using the volume of the overlapping region as an input to the contact forces. Here, a parallel implementation of the method is based on the SAMRAI (Structured Adaptive Mesh Refinement Application Infrastructure) library.

  3. A multitracer approach for characterizing interactions between shallow groundwater and the hydrothermal system in the Norris Geyser Basin area, Yellowstone National Park

    USGS Publications Warehouse

    Gardner, W.P.; Susong, D.D.; Solomon, D.K.; Heasler, H.P.

    2011-01-01

    Multiple environmental tracers are used to investigate age distribution, evolution, and mixing in local- to regional-scale groundwater circulation around the Norris Geyser Basin area in Yellowstone National Park. Springs ranging in temperature from 3 °C to 90 °C in the Norris Geyser Basin area were sampled for stable isotopes of hydrogen and oxygen, major and minor element chemistry, dissolved chlorofluorocarbons, and tritium. Groundwater near Norris Geyser Basin comprises two distinct systems: a shallow, cool water system and a deep, high-temperature hydrothermal system. These two end-member systems mix to create springs with intermediate temperature and composition. Using multiple tracers from a large number of springs, it is possible to constrain the distribution of possible flow paths and refine conceptual models of groundwater circulation in and around a large, complex hydrothermal system. Copyright 2011 by the American Geophysical Union.

  4. Experimental measurement-device-independent quantum key distribution with uncharacterized encoding.

    PubMed

    Wang, Chao; Wang, Shuang; Yin, Zhen-Qiang; Chen, Wei; Li, Hong-Wei; Zhang, Chun-Mei; Ding, Yu-Yang; Guo, Guang-Can; Han, Zheng-Fu

    2016-12-01

    Measurement-device-independent quantum key distribution (MDI QKD) is an efficient way to share secrets using untrusted measurement devices. However, the assumption on the characterizations of encoding states is still necessary in this promising protocol, which may lead to unnecessary complexity and potential loopholes in realistic implementations. Here, by using the mismatched-basis statistics, we present the first proof-of-principle experiment of MDI QKD with uncharacterized encoding sources. In this demonstration, the encoded states are only required to be constrained in a two-dimensional Hilbert space, and two distant parties (Alice and Bob) are resistant to state preparation flaws even if they have no idea about the detailed information of their encoding states. The positive final secure key rates of our system exhibit the feasibility of this novel protocol, and demonstrate its value for the application of secure communication with uncharacterized devices.

  5. Ligament Mediated Fragmentation of Viscoelastic Liquids

    NASA Astrophysics Data System (ADS)

    Keshavarz, Bavand; Houze, Eric C.; Moore, John R.; Koerner, Michael R.; McKinley, Gareth H.

    2016-10-01

    The breakup and atomization of complex fluids can be markedly different from the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_min = 4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which cannot be exceeded in ligament-mediated fragmentation phenomena.
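    The broadening the abstract describes can be checked numerically: for a Gamma distribution with shape n and unit mean (droplet sizes normalized by the mean size), the relative width is 1/√n, so it widens as n falls toward the viscoelastic limit n_min = 4. The sketch below is illustrative, not the authors' analysis.

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(x, n):
    """Gamma pdf with shape n and rate n, i.e. mean 1: droplet sizes
    expressed in units of the mean droplet size."""
    return n**n * x**(n - 1) * np.exp(-n * x) / gamma_fn(n)

x = np.linspace(1e-3, 4, 2000)
dx = x[1] - x[0]
for n in (4, 20):  # n=4: broad viscoelastic limit; larger n: narrower, Newtonian-like
    p = gamma_pdf(x, n)
    mean = np.sum(x * p) * dx                       # numerical first moment
    std = np.sqrt(np.sum((x - mean) ** 2 * p) * dx)  # numerical width
    print(f"n={n}: mean≈{mean:.2f}, relative width std/mean≈{std/mean:.2f}")
```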

  6. Ligament Mediated Fragmentation of Viscoelastic Liquids.

    PubMed

    Keshavarz, Bavand; Houze, Eric C; Moore, John R; Koerner, Michael R; McKinley, Gareth H

    2016-10-07

    The breakup and atomization of complex fluids can be markedly different from the analogous processes in a simple Newtonian fluid. Atomization of paint, combustion of fuels containing antimisting agents, as well as physiological processes such as sneezing are common examples in which the atomized liquid contains synthetic or biological macromolecules that result in viscoelastic fluid characteristics. Here, we investigate the ligament-mediated fragmentation dynamics of viscoelastic fluids in three different canonical flows. The size distributions measured in each viscoelastic fragmentation process show a systematic broadening from the Newtonian solvent. In each case, the droplet sizes are well described by Gamma distributions which correspond to a fragmentation-coalescence scenario. We use a prototypical axial step strain experiment together with high-speed video imaging to show that this broadening results from the pronounced change in the corrugated shape of viscoelastic ligaments as they separate from the liquid core. These corrugations saturate in amplitude and the measured distributions for viscoelastic liquids in each process are given by a universal probability density function, corresponding to a Gamma distribution with n_{min}=4. The breadth of this size distribution for viscoelastic filaments is shown to be constrained by a geometrical limit which cannot be exceeded in ligament-mediated fragmentation phenomena.

  7. Redshift-space distortions with the halo occupation distribution - II. Analytic model

    NASA Astrophysics Data System (ADS)

    Tinker, Jeremy L.

    2007-01-01

    We present an analytic model for the galaxy two-point correlation function in redshift space. The cosmological parameters of the model are the matter density Ωm, power spectrum normalization σ8, and velocity bias of galaxies αv, circumventing the linear theory distortion parameter β and eliminating nuisance parameters for non-linearities. The model is constructed within the framework of the halo occupation distribution (HOD), which quantifies galaxy bias on linear and non-linear scales. We model one-halo pairwise velocities by assuming that satellite galaxy velocities follow a Gaussian distribution with dispersion proportional to the virial dispersion of the host halo. Two-halo velocity statistics are a combination of virial motions and host halo motions. The velocity distribution function (DF) of halo pairs is a complex function with skewness and kurtosis that vary substantially with scale. Using a series of collisionless N-body simulations, we demonstrate that the shape of the velocity DF is determined primarily by the distribution of local densities around a halo pair, and at fixed density the velocity DF is close to Gaussian and nearly independent of halo mass. We calibrate a model for the conditional probability function of densities around halo pairs on these simulations. With this model, the full shape of the halo velocity DF can be accurately calculated as a function of halo mass, radial separation, angle and cosmology. The HOD approach to redshift-space distortions utilizes clustering data from linear to non-linear scales to break the standard degeneracies inherent in previous models of redshift-space clustering. The parameters of the occupation function are well constrained by real-space clustering alone, separating constraints on bias and cosmology. 
We demonstrate the ability of the model to separately constrain Ωm, σ8, and αv in models that are constructed to have the same value of β at large scales as well as the same finger-of-god distortions at small scales.

  8. Tree cover in sub-Saharan Africa: rainfall and fire constrain forest and savanna as alternative stable states.

    PubMed

    Staver, A Carla; Archibald, Sally; Levin, Simon

    2011-05-01

    Savannas are known as ecosystems with tree cover below climate-defined equilibrium values. However, a predictive framework for understanding constraints on tree cover is lacking. We present (a) a spatially extensive analysis of tree cover and fire distribution in sub-Saharan Africa, and (b) a model, based on empirical results, demonstrating that savanna and forest may be alternative stable states in parts of Africa, with implications for understanding savanna distributions. Tree cover does not increase continuously with rainfall, but rather is constrained to low (<50%, "savanna") or high tree cover (>75%, "forest"). Intermediate tree cover rarely occurs. Fire, which prevents trees from establishing, differentiates high and low tree cover, especially in areas with rainfall between 1000 mm and 2000 mm. Fire is less important at low rainfall (<1000 mm), where rainfall limits tree cover, and at high rainfall (>2000 mm), where fire is rare. This pattern suggests that complex interactions between climate and disturbance produce emergent alternative states in tree cover. The relationship between tree cover and fire was incorporated into a dynamic model including grass, savanna tree saplings, and savanna trees. Only recruitment from sapling to adult tree varied depending on the amount of grass in the system. Based on our empirical analysis and previous work, fires spread only at tree cover of 40% or less, producing a sigmoidal fire probability distribution as a function of grass cover and therefore a sigmoidal sapling to tree recruitment function. This model demonstrates that, given relatively conservative and empirically supported assumptions about the establishment of trees in savannas, alternative stable states for the same set of environmental conditions (i.e., model parameters) are possible via a fire feedback mechanism. 
Integrating alternative stable state dynamics into models of biome distributions could improve our ability to predict changes in biome distributions and in carbon storage under climate and global change scenarios.
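    The fire-feedback bistability can be illustrated with a minimal grass/sapling/tree model in the spirit of the one described: sapling-to-tree recruitment is sigmoidal in grass cover, collapsing when grass exceeds ~60% (tree cover below ~40%, where fire spreads). The sigmoid threshold follows the text, but every rate constant below is an illustrative stand-in, not a fitted value.

```python
import numpy as np

def omega(G, w0=0.002, w1=0.9, theta=0.6, s=0.05):
    """Sapling-to-tree recruitment rate: high at low grass cover G (no fire),
    collapsing sigmoidally once grass is abundant enough to carry fire."""
    return w0 + (w1 - w0) / (1.0 + np.exp((G - theta) / s))

def run(G, S, T, beta=0.5, mu=0.1, nu=0.02, dt=0.05, t_end=2000.0):
    """Forward-Euler integration of grass (G), sapling (S), tree (T) cover.
    The three fractions sum to 1 and the dynamics conserve that sum."""
    for _ in range(int(t_end / dt)):
        w = omega(G)
        dG = mu * S + nu * T - beta * G * T   # mortality returns area to grass
        dS = beta * G * T - (w + mu) * S      # saplings establish, then recruit or die
        dT = w * S - nu * T                   # trees recruit from saplings
        G, S, T = G + dt * dG, S + dt * dS, T + dt * dT
    return G, S, T

# Identical parameters, different initial tree cover -> different stable states.
print(run(0.45, 0.05, 0.50))   # converges to forest (high tree cover)
print(run(0.94, 0.05, 0.01))   # converges to grassland/savanna (trees die out)
```

    The key design choice mirrors the paper: only the recruitment term depends on grass cover, yet that single sigmoidal feedback is enough to produce two attractors under one parameter set.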

  9. 2dFLenS and KiDS: determining source redshift distributions with cross-correlations

    NASA Astrophysics Data System (ADS)

    Johnson, Andrew; Blake, Chris; Amon, Alexandra; Erben, Thomas; Glazebrook, Karl; Harnois-Deraps, Joachim; Heymans, Catherine; Hildebrandt, Hendrik; Joudaki, Shahab; Klaes, Dominik; Kuijken, Konrad; Lidman, Chris; Marin, Felipe A.; McFarland, John; Morrison, Christopher B.; Parkinson, David; Poole, Gregory B.; Radovich, Mario; Wolf, Christian

    2017-03-01

    We develop a statistical estimator to infer the redshift probability distribution of a photometric sample of galaxies from its angular cross-correlation in redshift bins with an overlapping spectroscopic sample. This estimator is a minimum-variance weighted quadratic function of the data: a quadratic estimator. This extends and modifies the methodology presented by McQuinn & White. The derived source redshift distribution is degenerate with the source galaxy bias, which must be constrained via additional assumptions. We apply this estimator to constrain source galaxy redshift distributions in the Kilo-Degree imaging survey through cross-correlation with the spectroscopic 2-degree Field Lensing Survey, presenting results first as a binned step-wise distribution in the range z < 0.8, and then building a continuous distribution using a Gaussian process model. We demonstrate the robustness of our methodology using mock catalogues constructed from N-body simulations, and comparisons with other techniques for inferring the redshift distribution.

  10. Habitat productivity constrains the distribution of social spiders across continents – case study of the genus Stegodyphus

    PubMed Central

    2013-01-01

    Introduction Sociality has evolved independently multiple times across the spider phylogeny, and despite wide taxonomic and geographical breadth the social species are characterized by a common geographical constraint to tropical and subtropical areas. Here we investigate the environmental factors that drive macro-ecological patterns in social and solitary species in a genus that shows a Mediterranean–Afro-Oriental distribution (Stegodyphus). Both selected drivers (productivity and seasonality) may affect the abundance of potential prey insects, but seasonality may further directly affect survival due to mortality caused by extreme climatic events. Based on a comprehensive dataset including information about the distribution of three independently derived social species and 13 solitary congeners, we tested the hypotheses that the distribution of social Stegodyphus species relative to solitary congeners is: (1) restricted to habitats of high vegetation productivity and (2) constrained to areas with a stable climate (low precipitation seasonality). Results Using spatial logistic regression modelling and information-theoretic model selection, we show that social species occur at higher vegetation productivity than solitary species, while precipitation seasonality received limited support as a predictor of social spider occurrence. An analysis of insect biomass data across the Stegodyphus distribution range confirmed that vegetation productivity is positively correlated with potential insect prey biomass. Conclusions Habitat productivity constrains the distribution of social spiders across continents compared to their solitary congeners, with group-living in spiders being restricted to areas with relatively high vegetation productivity and insect prey biomass. As known for other taxa, permanent sociality likely evolves in response to high predation pressure and imposes within-group competition for resources. 
Our results suggest that group living is contingent upon productive environmental conditions where elevated prey abundance meets the increased demand for food of social groups. PMID:23433065

  11. Minimal complexity control law synthesis

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Haddad, Wassim M.; Nett, Carl N.

    1989-01-01

    A paradigm for control law design for modern engineering systems is proposed: Minimize control law complexity subject to the achievement of a specified accuracy in the face of a specified level of uncertainty. Correspondingly, the overall goal is to make progress towards the development of a control law design methodology which supports this paradigm. Researchers achieve this goal by developing a general theory of optimal constrained-structure dynamic output feedback compensation, where here constrained-structure means that the dynamic-structure (e.g., dynamic order, pole locations, zero locations, etc.) of the output feedback compensation is constrained in some way. By applying this theory in an innovative fashion, where here the indicated iteration occurs over the choice of the compensator dynamic-structure, the paradigm stated above can, in principle, be realized. The optimal constrained-structure dynamic output feedback problem is formulated in general terms. An elegant method for reducing optimal constrained-structure dynamic output feedback problems to optimal static output feedback problems is then developed. This reduction procedure makes use of star products, linear fractional transformations, and linear fractional decompositions, and yields as a byproduct a complete characterization of the class of optimal constrained-structure dynamic output feedback problems which can be reduced to optimal static output feedback problems. Issues such as operational/physical constraints, operating-point variations, and processor throughput/memory limitations are considered, and it is shown how anti-windup/bumpless transfer, gain-scheduling, and digital processor implementation can be facilitated by constraining the controller dynamic-structure in an appropriate fashion.

  12. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
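    The distortion measure the abstract names, the first-order entropy of the residual image, is cheap to compute from the empirical symbol histogram. The sketch below is illustrative (toy arrays, not the paper's test images or quantizer):

```python
import numpy as np

def first_order_entropy(img):
    """First-order entropy in bits per pixel, estimated from the
    empirical histogram of symbol (pixel) values."""
    _, counts = np.unique(np.asarray(img), return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) + 0.0)

rng = np.random.default_rng(1)
flat = np.zeros((64, 64), dtype=int)           # perfectly predicted residual
noisy = rng.integers(-8, 8, size=(64, 64))     # near-uniform residual, 16 symbols
print(first_order_entropy(flat))               # 0.0 bits: one symbol, nothing to code
print(first_order_entropy(noisy))              # close to log2(16) = 4 bits
```

    Using this entropy as the distortion in the quantizer design directly rewards residuals that the subsequent first-order entropy coder can compress.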

  13. Containment and Support: Core and Complexity in Spatial Language Learning.

    PubMed

    Landau, Barbara; Johannes, Kristen; Skordos, Dimitrios; Papafragou, Anna

    2017-04-01

    Containment and support have traditionally been assumed to represent universal conceptual foundations for spatial terms. This assumption can be challenged, however: English in and on are applied across a surprisingly broad range of exemplars, and comparable terms in other languages show significant variation in their application. We propose that the broad domains of both containment and support have internal structure that reflects different subtypes, that this structure is reflected in basic spatial term usage across languages, and that it constrains children's spatial term learning. Using a newly developed battery, we asked how adults and 4-year-old children speaking English or Greek distribute basic spatial terms across subtypes of containment and support. We found that containment showed similar distributions of basic terms across subtypes among all groups while support showed such similarity only among adults, with striking differences between children learning English versus Greek. We conclude that the two domains differ considerably in the learning problems they present, and that learning in and on is remarkably complex. Together, our results point to the need for a more nuanced view of spatial term learning. Copyright © 2016 Cognitive Science Society, Inc.

  14. Spatial heterogeneity in species composition constrains plant community responses to herbivory and fertilization

    USDA-ARS?s Scientific Manuscript database

    Changing environmental conditions result in substantial shifts in the composition of communities. The associated immigration and extinction events are likely constrained by the spatial distribution of species. Still, most studies on environmental change quantify the biotic responses at single spat...

  15. Climate and the complexity of migratory phenology: sexes, migratory distance, and arrival distributions

    NASA Astrophysics Data System (ADS)

    Macmynowski, Dena P.; Root, Terry L.

    2007-05-01

    The intra- and inter-season complexity of bird migration has received limited attention in climatic change research. Our phenological analysis of 22 species collected in Chicago, USA (1979-2002) evaluates the relationship between multi-scalar climate variables and differences (1) in arrival timing between sexes, (2) in arrival distributions among species, and (3) between spring and fall migration. The early migratory period for the earliest arriving species (i.e., short-distance migrants) and earliest arriving individuals of a species (i.e., males) most frequently correlates with climate variables. Compared to long-distance migrant species, four times as many short-distance migrants correlate with spring temperature, while the arrival of 8 of 11 (73%) long-distance migrant species is correlated with the North Atlantic Oscillation (NAO). While migratory phenology has been correlated with NAO in Europe, we believe that this is the first documentation of a significant association in North America. Geographically proximate conditions apparently influence migratory timing for short-distance migrants, while continental-scale climate (e.g., NAO) seemingly influences the phenology of Neotropical migrants. The preponderance of climate correlations is with the early migratory period, not the median of arrival, suggesting that early spring conditions constrain the onset or rate of migration for some species. The seasonal arrival distribution provides considerable information about migratory passage beyond what is apparent from statistical analyses of phenology. A relationship between climate and fall phenology is not detected at this location. Analysis of the within-season complexity of migration, including multiple metrics of arrival, is essential to detect species' responses to changing climate as well as to evaluate the underlying biological mechanisms.

  16. Networks In Real Space: Characteristics and Analysis for Biology and Mechanics

    NASA Astrophysics Data System (ADS)

    Modes, Carl; Magnasco, Marcelo; Katifori, Eleni

    Functional networks embedded in physical space play a crucial role in countless biological and physical systems, from the efficient dissemination of oxygen, blood sugars, and hormonal signals in vascular systems to the complex relaying of informational signals in the brain to the distribution of stress and strain in architecture or static sand piles. Unlike their more-studied abstract cousins, such as the hyperlinked internet, social networks, or economic and financial connections, these networks are both constrained by and intimately connected to the physicality of their real, embedding space. We report on the results of new computational and analytic approaches tailored to these physical networks with particular implications and insights for mammalian organ vasculature.

  17. Constrained Subjective Assessment of Student Learning

    ERIC Educational Resources Information Center

    Saliu, Sokol

    2005-01-01

    Student learning is a complex incremental cognitive process; assessment needs to parallel this, reporting the results in similar terms. Application of fuzzy sets and logic to the criterion-referenced assessment of student learning is considered here. The constrained qualitative assessment (CQA) system was designed, and then applied in assessing a…

  18. Complex crater formation: Insights from combining observations of shock pressure distribution with numerical models at the West Clearwater Lake impact structure

    NASA Astrophysics Data System (ADS)

    Rae, A. S. P.; Collins, G. S.; Grieve, R. A. F.; Osinski, G. R.; Morgan, J. V.

    2017-07-01

    Large impact structures have complex morphologies, with zones of structural uplift that can be expressed topographically as central peaks and/or peak rings internal to the crater rim. The formation of these structures requires transient strength reduction in the target material and one of the proposed mechanisms to explain this behavior is acoustic fluidization. Here, samples of shock-metamorphosed quartz-bearing lithologies at the West Clearwater Lake impact structure, Canada, are used to estimate the maximum recorded shock pressures in three dimensions across the crater. These measurements demonstrate that the currently observed distribution of shock metamorphism is strongly controlled by the formation of the structural uplift. The distribution of peak shock pressures, together with apparent crater morphology and geological observations, is compared with numerical impact simulations to constrain parameters used in the block-model implementation of acoustic fluidization. The numerical simulations produce craters that are consistent with morphological and geological observations. The results show that the regeneration of acoustic energy must be an important feature of acoustic fluidization in crater collapse, and should be included in future implementations. Based on the comparison between observational data and impact simulations, we conclude that the West Clearwater Lake structure had an original rim (final crater) diameter of 35-40 km and has since experienced up to 2 km of differential erosion.

  19. Aftershock distribution as a constraint on the geodetic model of coseismic slip for the 2004 Parkfield earthquake

    USGS Publications Warehouse

    Bennington, Ninfa; Thurber, Clifford; Feigl, Kurt; ,

    2011-01-01

    Several studies of the 2004 Parkfield earthquake have linked the spatial distribution of the event’s aftershocks to the mainshock slip distribution on the fault. Using geodetic data, we find a model of coseismic slip for the 2004 Parkfield earthquake with the constraint that the edges of coseismic slip patches align with aftershocks. The constraint is applied by encouraging the curvature of coseismic slip in each model cell to be equal to the negative of the curvature of seismicity density. The large patch of peak slip about 15 km northwest of the 2004 hypocenter found in the curvature-constrained model is in good agreement in location and amplitude with previous geodetic studies and the majority of strong motion studies. The curvature-constrained solution shows slip primarily between aftershock “streaks” with the continuation of moderate levels of slip to the southeast. These observations are in good agreement with strong motion studies, but inconsistent with the majority of published geodetic slip models. Southeast of the 2004 hypocenter, a patch of peak slip observed in strong motion studies is absent from our curvature-constrained model, but the available GPS data do not resolve slip in this region. We conclude that the geodetic slip model constrained by the aftershock distribution fits the geodetic data quite well and that inconsistencies between models derived from seismic and geodetic data can be attributed largely to resolution issues.
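    The curvature constraint described above can be mimicked in a toy linear inversion: augment the least-squares system with a second-difference (curvature) operator whose target profile stands in for the negative curvature of seismicity density. Everything below (the random design matrix, the Gaussian "slip patch", the weight λ) is an illustrative stand-in, not the actual geodetic problem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy 1-D "slip" inversion: data d = G m + noise.
n, m_dim = 40, 20
G = rng.normal(size=(n, m_dim))                               # design matrix
m_true = np.exp(-0.5 * ((np.arange(m_dim) - 10) / 3.0) ** 2)  # slip patch
d = G @ m_true + 0.01 * rng.normal(size=n)

# Second-difference operator: rows approximate the curvature of m.
L = (np.diag(-2.0 * np.ones(m_dim))
     + np.diag(np.ones(m_dim - 1), 1)
     + np.diag(np.ones(m_dim - 1), -1))
c = L @ m_true          # target curvature profile (known here for the demo)

# Stack data equations and weighted curvature equations, solve jointly.
lam = 10.0
A = np.vstack([G, np.sqrt(lam) * L])
b = np.concatenate([d, np.sqrt(lam) * c])
m_hat = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.abs(m_hat - m_true).max())   # small misfit to the true slip
```

    In the study itself the target curvature comes from the aftershock density rather than from the (unknown) true slip; the stacked least-squares structure is the same.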

  20. Generating Multivariate Ordinal Data via Entropy Principles.

    PubMed

    Lee, Yen; Kaplan, David

    2018-03-01

    When conducting robustness research where the focus of attention is on the impact of non-normality, the marginal skewness and kurtosis are often used to set the degree of non-normality. Monte Carlo methods are commonly applied to conduct this type of research by simulating data from distributions with skewness and kurtosis constrained to pre-specified values. Although several procedures have been proposed to simulate data from distributions with these constraints, no corresponding procedures have been applied for discrete distributions. In this paper, we present two procedures based on the principles of maximum entropy and minimum cross-entropy to estimate the multivariate observed ordinal distributions with constraints on skewness and kurtosis. For these procedures, the correlation matrix of the observed variables is not specified but depends on the relationships between the latent response variables. With the estimated distributions, researchers can study robustness not only focusing on the levels of non-normality but also on the variations in the distribution shapes. A simulation study demonstrates that these procedures yield excellent agreement between specified parameters and those of estimated distributions. A robustness study concerning the effect of distribution shape in the context of confirmatory factor analysis shows that shape can affect the robust [Formula: see text] and robust fit indices, especially when the sample size is small, the data are severely non-normal, and the fitted model is complex.
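    The maximum-entropy construction is easy to demonstrate in the simplest case of a single mean constraint (the paper additionally constrains skewness and kurtosis). The exponential-family form of the solution and the bisection solve below are standard; all names are illustrative.

```python
import numpy as np

def maxent_ordinal(categories, target_mean, tol=1e-10):
    """Maximum-entropy pmf over ordinal categories subject to a fixed mean.
    The solution has exponential form p_i ∝ exp(lam * x_i); lam is found by
    bisection, since the implied mean is monotone increasing in lam."""
    x = np.asarray(categories, dtype=float)

    def mean_for(lam):
        w = np.exp(lam * (x - x.mean()))   # centering avoids overflow
        p = w / w.sum()
        return p @ x, p

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        m, p = mean_for(mid)
        if m < target_mean:
            lo = mid
        else:
            hi = mid
    return p

# Five ordinal categories, mean pushed toward the upper end (skewed pmf).
p = maxent_ordinal([1, 2, 3, 4, 5], target_mean=3.8)
print(p.round(3))
```

    Adding skewness and kurtosis constraints extends the exponent to a cubic/quartic in x with one Lagrange multiplier per moment, which is what makes the full procedure a multivariate optimization rather than a one-dimensional bisection.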

  1. Analysis of the Wtb Vertex from the Measurement of Triple Differential Angular Decay Rates of Single Top Quarks Produced in the t-Channel at √s = 8 TeV with the ATLAS Detector

    NASA Astrophysics Data System (ADS)

    Su, Jun

    The electroweak production and subsequent decay of single top quarks is determined by the properties of the Wtb vertex, which can be described by the complex parameters of an effective Lagrangian. An analysis of angular distributions of the decay products of single top quarks produced in the t-channel constrains these parameters simultaneously. This thesis presents an analysis using 20.2 fb-1 of proton-proton collision data at a centre-of-mass energy of 8 TeV collected with the ATLAS detector at the LHC. The fraction f1 of decays containing transversely polarised W bosons is measured to be f1 = 0.296 +0.048 -0.051 (stat. + syst.). The phase delta- between amplitudes for transversely and longitudinally polarised W bosons recoiling against left-handed b quarks is measured to be delta- = 0.002pi +0.016pi -0.017pi (stat. + syst.), giving no indication of CP violation. The fractions of transverse and longitudinal W bosons accompanied by right-handed b quarks are also constrained at 95% C.L. to f1+ < 0.118 and f0+ < 0.085. Based on these measurements, limits are placed at 95% C.L. on the ratio of the complex coupling parameters gR and VL such that Re[gR/VL] ∈ [-0.122, 0.168] and Im[gR/VL] ∈ [-0.066, 0.059]. Constraints are also placed on the magnitudes of the ratios |VR/VL| and |gL/VL|. Finally, the polarisation of single top quarks in the t-channel is constrained to be P > 0.718 (95% C.L.). None of the above measurements makes assumptions about the value of any of the other parameters or couplings, and all of them are in agreement with the Standard Model.

  2. Determination of Cenozoic sedimentary structures using integrated geophysical surveys: A case study in the Barkol Basin, Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Sun, Kai; Chen, Chao; Du, Jinsong; Wang, Limin; Lei, Binhua

    2018-01-01

    Thickness estimation of a sedimentary basin is a complex geological problem, especially in an orogenic environment. Intense and multiple tectonic movements and climate changes result in inhomogeneity of sedimentary layers and basement configurations, which makes sedimentary structure modelling difficult. In this study, integrated geophysical methods, including gravity, magnetotelluric (MT) sounding and electrical resistivity tomography (ERT), were used to estimate basement relief in order to understand the geological structure and evolution of the eastern Barkol Basin in China. This basin formed with the uplift of the eastern Tianshan during the Cenozoic. The gravity anomaly map revealed the framework of the entire area, and the ERT and MT sections reflected the geoelectric features of the two-layer Cenozoic distribution. Therefore, gravity data, constrained by MT, ERT and boreholes, were utilized to estimate the spatial distribution of the Quaternary layer. The gravity effect of the Quaternary layer relative to the Tertiary layer was then subtracted to obtain the residual anomaly for inversion. For the Tertiary layer, the study area was divided into several parts because of lateral differences in density contrasts. Gravity data were interpreted to determine the density contrast constrained by the MT results. The basement relief can be verified by geological investigation, including the uplift process and regional tectonic setting. The agreement between the geophysical surveys and prior information from geology emphasizes the importance of integrated geophysical surveys as a complementary means of geological studies in this region.
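
    The layer-stripping step described above (subtracting the gravity effect of the Quaternary layer before inverting for the deeper structure) can be illustrated with the standard Bouguer infinite-slab approximation. This is a generic back-of-envelope sketch, not the authors' processing code; the thickness and density-contrast values in the example are invented:

```python
import math

G_NEWTON = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

# Gravity effect of a uniform layer of thickness h (m) and density contrast
# drho (kg/m^3) in the infinite-slab approximation, g = 2*pi*G*drho*h,
# returned in mGal (1 m/s^2 = 1e5 mGal).
def bouguer_slab_mgal(thickness_m, density_contrast):
    g_si = 2.0 * math.pi * G_NEWTON * density_contrast * thickness_m  # m/s^2
    return g_si * 1e5
```

    A 100 m slab with a 400 kg/m^3 contrast contributes roughly 1.7 mGal, which sets the scale of the correction applied before the residual-anomaly inversion.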

  3. Biophysical model of prokaryotic diversity in geothermal hot springs.

    PubMed

    Klales, Anna; Duncan, James; Nett, Elizabeth Janus; Kane, Suzanne Amador

    2012-02-01

    Recent studies of photosynthetic bacteria living in geothermal hot spring environments have revealed surprisingly complex ecosystems with an unexpected level of genetic diversity. One case of particular interest involves the distribution along hot spring thermal gradients of genetically distinct bacterial strains that differ in their preferred temperatures for reproduction and photosynthesis. In such systems, a single variable, temperature, defines the relevant environmental variation. In spite of this, each region along the thermal gradient exhibits multiple strains of photosynthetic bacteria adapted to several distinct thermal optima, rather than a single thermal strain adapted to the local environmental temperature. Here we analyze microbiology data from several ecological studies to show that the thermal distribution data exhibit several universal features independent of location and specific bacterial strain. These include the distribution of optimal temperatures of different thermal strains and the functional dependence of the net population density on temperature. We present a simple population dynamics model of these systems that is highly constrained by biophysical data and by physical features of the environment. This model can explain in detail the observed thermal population distributions, as well as certain features of population dynamics observed in laboratory studies of the same organisms. © 2012 American Physical Society

  4. Rupture Propagation for Stochastic Fault Models

    NASA Astrophysics Data System (ADS)

    Favreau, P.; Lavallee, D.; Archuleta, R.

    2003-12-01

    The inversion of strong motion data from large earthquakes gives the spatial distribution of pre-stress on the ruptured faults, which can be partially reproduced by stochastic models, but a fundamental question remains: how does rupture propagate when constrained by spatial heterogeneity? To address this, we investigate how the underlying random variables that control the pre-stress spatial variability condition the propagation of the rupture. Two stochastic models of pre-stress distributions are considered, based respectively on Cauchy and Gaussian random variables. The parameters of the two stochastic models have values corresponding to the slip distribution of the 1979 Imperial Valley earthquake. We use a finite difference code to simulate the spontaneous propagation of shear rupture on a flat fault in a 3D continuum elastic body, with a slip-dependent friction law. The simulations show that the propagation of the rupture front is more complex, incoherent, or snake-like for a pre-stress distribution based on Cauchy random variables. This may be related to the higher number of asperities in this case. These simulations suggest that directivity is stronger in the Cauchy scenario than in the smoother rupture of the Gauss scenario.
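
    The contrast between the two stochastic pre-stress models can be illustrated by generating random fields whose underlying variables are Gaussian or heavy-tailed Cauchy. This spectral-synthesis sketch is an assumption for illustration (the paper's actual parameters come from the 1979 Imperial Valley slip inversion, and the spectral decay used here is invented):

```python
import numpy as np

# Generate a 2-D random pre-stress field with a power-law spatial spectrum,
# drawing the underlying white-noise variables from either a Gaussian or a
# heavy-tailed Cauchy distribution (grid, decay exponent: assumed values).
def stochastic_prestress(n, law="gauss", decay=1.5, seed=0):
    rng = np.random.default_rng(seed)
    white = (rng.standard_cauchy((n, n)) if law == "cauchy"
             else rng.standard_normal((n, n)))
    kx = np.fft.fftfreq(n)[:, None]
    ky = np.fft.fftfreq(n)[None, :]
    k = np.sqrt(kx ** 2 + ky ** 2)
    k[0, 0] = 1.0                              # avoid dividing by zero at DC
    field = np.real(np.fft.ifft2(np.fft.fft2(white) / k ** decay))
    return field - field.mean()                # zero-mean stress perturbation
```

    The Cauchy field concentrates amplitude in a few extreme patches (asperity-like), while the Gaussian field is more evenly rough.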

  5. Ecological distribution and population physiology defined by proteomics in a natural microbial community

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muller, R; Denef, Vincent; Kalnejais, Linda

    An important challenge in microbial ecology is developing methods that simultaneously examine the physiology of organisms at the molecular level and their ecosystem level interactions in complex natural systems. We integrated extensive proteomic, geochemical, and biological information from 28 microbial communities collected from an acid mine drainage environment and representing a range of biofilm development stages and geochemical conditions to evaluate how the physiologies of the dominant and less abundant organisms change along environmental gradients. The initial colonist dominates across all environments, but its proteome changes between two stable states as communities diversify, implying that interspecies interactions affect this organism's metabolism. Its overall physiology is robust to abiotic environmental factors, but strong correlations exist between these factors and certain subsets of proteins, possibly accounting for its wide environmental distribution. Lower abundance populations are patchier in their distribution, and proteomic data indicate that their environmental niches may be constrained by specific sets of abiotic environmental factors. This research establishes an effective strategy to investigate ecological relationships between microbial physiology and the environment for whole communities in situ.

  6. Parallel and Distributed Methods for Constrained Nonconvex Optimization—Part I: Theory

    NASA Astrophysics Data System (ADS)

    Scutari, Gesualdo; Facchinei, Francisco; Lampariello, Lorenzo

    2017-04-01

    In Part I of this paper, we proposed and analyzed a novel algorithmic framework for the minimization of a nonconvex (smooth) objective function, subject to nonconvex constraints, based on inner convex approximations. This Part II is devoted to the application of the framework to some resource allocation problems in communication networks. In particular, we consider two non-trivial case-study applications, namely (generalizations of): i) rate profile maximization in MIMO interference broadcast networks; and ii) the max-min fair multicast multigroup beamforming problem in a multi-cell environment. We develop a new class of algorithms enjoying the following distinctive features: i) they are distributed across the base stations (with limited signaling) and lead to subproblems whose solutions are computable in closed form; and ii) differently from current relaxation-based schemes (e.g., semidefinite relaxation), they are proved to always converge to d-stationary solutions of the aforementioned class of nonconvex problems. Numerical results show that the proposed (distributed) schemes achieve larger worst-case rates (resp. signal-to-noise-interference ratios) than state-of-the-art centralized ones while having comparable computational complexity.
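
    The inner-convex-approximation idea can be shown on a scalar toy problem. This is a generic sketch of successive convex approximation for a difference-of-convex function, not the authors' algorithm; the toy objective and iteration count are invented:

```python
import math

# Toy inner convex approximation for f(x) = x^4 - 2x^2 = g(x) - h(x), with
# g(x) = x^4 convex and h(x) = 2x^2 convex. At each step we minimize the
# convex surrogate g(x) - h(x_k) - h'(x_k)*(x - x_k); its stationarity
# condition 4x^3 = 4x_k gives the closed-form update x_{k+1} = cbrt(x_k).
def sca_minimize(x0, n_iter=60):
    x = x0
    for _ in range(n_iter):
        x = math.copysign(abs(x) ** (1.0 / 3.0), x)  # argmin of the surrogate
    return x  # converges to the stationary point +/-1 (x0 = 0 stays at 0)
```

    Each surrogate upper-bounds f and is tight at the current iterate, so the sequence decreases f monotonically and converges to a stationary point, mirroring the d-stationarity guarantee discussed above.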

  7. Ecological distribution and population physiology defined by proteomics in a natural microbial community

    USGS Publications Warehouse

    Mueller, Ryan S.; Denef, Vincent J.; Kalnejais, Linda H.; Suttle, K. Blake; Thomas, Brian C.; Wilmes, Paul; Smith, Richard L.; Nordstrom, D. Kirk; McCleskey, R. Blaine; Shah, Menesh B.; VerBekmoes, Nathan C.; Hettich, Robert L.; Banfield, Jillian F.

    2010-01-01

    An important challenge in microbial ecology is developing methods that simultaneously examine the physiology of organisms at the molecular level and their ecosystem level interactions in complex natural systems. We integrated extensive proteomic, geochemical, and biological information from 28 microbial communities collected from an acid mine drainage environment and representing a range of biofilm development stages and geochemical conditions to evaluate how the physiologies of the dominant and less abundant organisms change along environmental gradients. The initial colonist dominates across all environments, but its proteome changes between two stable states as communities diversify, implying that interspecies interactions affect this organism's metabolism. Its overall physiology is robust to abiotic environmental factors, but strong correlations exist between these factors and certain subsets of proteins, possibly accounting for its wide environmental distribution. Lower abundance populations are patchier in their distribution, and proteomic data indicate that their environmental niches may be constrained by specific sets of abiotic environmental factors. This research establishes an effective strategy to investigate ecological relationships between microbial physiology and the environment for whole communities in situ.

  8. Chance-Constrained AC Optimal Power Flow for Distribution Systems With Renewables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Baker, Kyri; Summers, Tyler

    This paper focuses on distribution systems featuring renewable energy sources (RESs) and energy storage systems, and presents an AC optimal power flow (OPF) approach to optimize system-level performance objectives while coping with uncertainty in both RES generation and loads. The proposed method hinges on a chance-constrained AC OPF formulation where probabilistic constraints are utilized to enforce voltage regulation with prescribed probability. A computationally more affordable convex reformulation is developed by resorting to suitable linear approximations of the AC power-flow equations as well as convex approximations of the chance constraints. The approximate chance constraints provide conservative bounds that hold for arbitrary distributions of the forecasting errors. An adaptive strategy is then obtained by embedding the proposed AC OPF task into a model predictive control framework. Finally, a distributed solver is developed to strategically distribute the solution of the optimization problems across utility and customers.
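
    The "conservative bounds that hold for arbitrary distributions" can be illustrated with a standard distribution-free tightening of a single voltage chance constraint. This is a generic Cantelli-inequality sketch, not the paper's specific convex reformulation; the numbers in the example are invented:

```python
import math

# Distribution-free tightening of a voltage chance constraint: by the
# one-sided Chebyshev (Cantelli) inequality, enforcing
#     mean_voltage <= v_max - k * std,   k = sqrt((1 - eps) / eps),
# guarantees P(v <= v_max) >= 1 - eps for ANY forecast-error distribution
# with the given mean and standard deviation.
def tightened_voltage_limit(v_max, std, eps=0.05):
    k = math.sqrt((1.0 - eps) / eps)
    return v_max - k * std
```

    The price of distributional robustness is visible in how far the deterministic limit is pulled below `v_max` as `eps` shrinks.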

  9. A Bayesian Modeling Approach for Estimation of a Shape-Free Groundwater Age Distribution using Multiple Tracers

    DOE PAGES

    Massoudieh, Arash; Visser, Ate; Sharifi, Soroosh; ...

    2013-10-15

    Due to the mixing of groundwaters with different ages in aquifers, groundwater age is more appropriately represented by a distribution than by a scalar number. To infer a groundwater age distribution from environmental tracers, a mathematical form is often assumed for the shape of the distribution, and the parameters of the mathematical distribution are estimated using deterministic or stochastic inverse methods. We found that the prescription of the mathematical form limits the exploration of the age distribution to the shapes that can be described by the selected distribution. In this paper, the use of freeform histograms as groundwater age distributions is evaluated. A Bayesian Markov Chain Monte Carlo approach is used to estimate the fraction of groundwater in each histogram bin. This method was able to capture the shape of a hypothetical gamma distribution from the concentrations of four age tracers. The number of bins that can be considered in this approach is limited by the number of tracers available. The histogram method was also tested on tracer data sets from Holten (The Netherlands; 3H, 3He, 85Kr, 39Ar) and the La Selva Biological Station (Costa Rica; SF6, CFCs, 3H, 4He and 14C), and compared to a number of mathematical forms. According to standard Bayesian measures of model goodness, the best mathematical distribution performs better than the histogram distributions in terms of the ability to capture the observed tracer data relative to their complexity. Among the histogram distributions, the four-bin histogram performs better in most of the cases. The Monte Carlo simulations showed strong correlations in the posterior estimates of bin contributions, indicating that these bins cannot be well constrained using the available age tracers. The fact that mathematical forms overall perform better than the freeform histogram does not undermine the benefit of the freeform approach, especially for cases where a larger amount of observed data is available and when the real groundwater age distribution is more complex than can be represented by simple mathematical forms.
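
    The bin-fraction estimation can be sketched with a toy Metropolis sampler on the simplex, where tracer concentrations are modelled as a linear mixture of bin contributions. This is a schematic stand-in for the paper's method; the linear forward model `A`, the mass-transfer proposal, and all parameter values are assumptions:

```python
import numpy as np

# Toy Metropolis sampler for histogram-bin fractions f (summing to 1), given
# tracer observations modelled as c = A @ f plus Gaussian noise. Proposals
# transfer mass between two random bins (symmetric move; infeasible negative
# proposals are rejected), so the chain stays on the simplex.
def sample_bin_fractions(A, c_obs, sigma, n_iter=3000, step=0.05, seed=1):
    rng = np.random.default_rng(seed)
    n_bins = A.shape[1]
    f = np.full(n_bins, 1.0 / n_bins)

    def log_post(v):
        return -0.5 * np.sum(((A @ v - c_obs) / sigma) ** 2)

    lp = log_post(f)
    samples = []
    for _ in range(n_iter):
        i, j = rng.choice(n_bins, size=2, replace=False)
        prop = f.copy()
        delta = rng.normal(0.0, step)
        prop[i] += delta
        prop[j] -= delta
        if prop[i] < 0.0 or prop[j] < 0.0:
            samples.append(f.copy())   # reject infeasible proposal
            continue
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            f, lp = prop, lp_prop
        samples.append(f.copy())
    return np.array(samples)
```

    The posterior correlations between bins reported in the abstract would show up here as correlated columns in the returned sample array.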

  10. A Bayesian Modeling Approach for Estimation of a Shape-Free Groundwater Age Distribution using Multiple Tracers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Massoudieh, Arash; Visser, Ate; Sharifi, Soroosh

    Due to the mixing of groundwaters with different ages in aquifers, groundwater age is more appropriately represented by a distribution than by a scalar number. To infer a groundwater age distribution from environmental tracers, a mathematical form is often assumed for the shape of the distribution, and the parameters of the mathematical distribution are estimated using deterministic or stochastic inverse methods. We found that the prescription of the mathematical form limits the exploration of the age distribution to the shapes that can be described by the selected distribution. In this paper, the use of freeform histograms as groundwater age distributions is evaluated. A Bayesian Markov Chain Monte Carlo approach is used to estimate the fraction of groundwater in each histogram bin. This method was able to capture the shape of a hypothetical gamma distribution from the concentrations of four age tracers. The number of bins that can be considered in this approach is limited by the number of tracers available. The histogram method was also tested on tracer data sets from Holten (The Netherlands; 3H, 3He, 85Kr, 39Ar) and the La Selva Biological Station (Costa Rica; SF6, CFCs, 3H, 4He and 14C), and compared to a number of mathematical forms. According to standard Bayesian measures of model goodness, the best mathematical distribution performs better than the histogram distributions in terms of the ability to capture the observed tracer data relative to their complexity. Among the histogram distributions, the four-bin histogram performs better in most of the cases. The Monte Carlo simulations showed strong correlations in the posterior estimates of bin contributions, indicating that these bins cannot be well constrained using the available age tracers. The fact that mathematical forms overall perform better than the freeform histogram does not undermine the benefit of the freeform approach, especially for cases where a larger amount of observed data is available and when the real groundwater age distribution is more complex than can be represented by simple mathematical forms.

  11. Constraining Binary Asteroid Mass Distributions Based On Mutual Motion

    NASA Astrophysics Data System (ADS)

    Davis, Alex B.; Scheeres, Daniel J.

    2017-06-01

    The mutual gravitational potential and torques of binary asteroid systems result in a complex coupling of attitude and orbital motion based on the mass distribution of each body. For a doubly-synchronous binary system, observations of the mutual motion can be leveraged to identify and measure the unique mass distributions of each body. By implementing arbitrary shape and order computation of the full two-body problem (F2BP) equilibria, we study the influence of asteroid asymmetries on the separation and orientation of a doubly-synchronous system. Additionally, simulations of binary systems perturbed from doubly-synchronous behavior are studied to understand the effects of mass distribution perturbations on precession and nutation rates, such that unique behaviors can be isolated and used to measure asteroid mass distributions. We apply our investigation to the Trojan binary asteroid system 617 Patroclus and Menoetius (1906 VY), which will be the final flyby target of the recently announced LUCY Discovery mission in March 2033. This binary asteroid system is of particular interest due to the results of a recent stellar occultation study (DPS 46, id.506.09) suggesting that the system is doubly-synchronous and consists of two similarly sized oblate ellipsoids, and that mass asymmetries are present, resulting from an impact crater on the southern limb of Menoetius.

  12. Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Huang, Q.

    2017-12-01

    Conventional magnetotelluric (MT) inversion methods cannot show the distribution of underground resistivity with clear boundaries, even when distinctly different blocks are present. To solve this problem, we develop a Bayesian framework to invert 2D MT data for sharp boundaries, using the boundary locations and interior resistivities as the random variables. First, we use other MT inversion results, such as those from ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which can be used in the finite-difference forward calculation. Finally, we obtain the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov Chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, resistivity, and their uncertainties can be estimated, and the method also supports sensitivity estimation. We applied the method to a synthetic case comprising two large anomalous blocks in a uniform background. When we apply boundary-smoothness and near-true-model weighting constraints, mimicking joint or constrained inversion, the model yields a more precise and focused depth distribution. Without constraints, the boundary can still be resolved, though not as well. Both inversions recover resistivity well, and the constrained result has a lower root mean square misfit than the ModEM inversion result. The data sensitivity obtained via the PPD shows that resistivity is the most sensitive parameter, the center depth comes second, and the two sides are the least sensitive.

  13. Design and Simulation of Material-Integrated Distributed Sensor Processing with a Code-Based Agent Platform and Mobile Multi-Agent Systems

    PubMed Central

    Bosse, Stefan

    2015-01-01

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques. PMID:25690550

  14. Design and simulation of material-integrated distributed sensor processing with a code-based agent platform and mobile multi-agent systems.

    PubMed

    Bosse, Stefan

    2015-02-16

    Multi-agent systems (MAS) can be used for decentralized and self-organizing data processing in a distributed system, like a resource-constrained sensor network, enabling distributed information extraction, for example, based on pattern recognition and self-organization, by decomposing complex tasks into simpler cooperative agents. Reliable MAS-based data processing approaches can aid the material-integration of structural-monitoring applications, with agent processing platforms scaled to the microchip level. The agent behavior, based on a dynamic activity-transition graph (ATG) model, is implemented with program code storing the control and the data state of an agent, which is novel. The program code can be modified by the agent itself using code morphing techniques and is capable of migrating in the network between nodes. The program code is a self-contained unit (a container) and embeds the agent data, the initialization instructions and the ATG behavior implementation. The microchip agent processing platform used for the execution of the agent code is a standalone multi-core stack machine with a zero-operand instruction format, leading to a small-sized agent program code, low system complexity and high system performance. The agent processing is token-queue-based, similar to Petri-nets. The agent platform can be implemented in software, too, offering compatibility at the operational and code level, supporting agent processing in strong heterogeneous networks. In this work, the agent platform embedded in a large-scale distributed sensor network is simulated at the architectural level by using agent-based simulation techniques.
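
    The zero-operand stack-machine idea can be made concrete with a toy interpreter. The instruction set below is invented for illustration and is far simpler than the platform described above (no agent migration, code morphing, or token queues):

```python
# Toy zero-operand stack machine: every instruction takes its operands from
# the stack and pushes its result back, so instructions need no operand
# fields, which keeps program code small (the motivation cited above).
def run_stack_machine(program, stack=None):
    stack = list(stack or [])
    for op in program:
        if isinstance(op, int):
            stack.append(op)            # literals push themselves
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "dup":
            stack.append(stack[-1])     # duplicate top of stack
        elif op == "swap":
            stack[-1], stack[-2] = stack[-2], stack[-1]
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack
```

    For example, the program `[2, 3, "add", 4, "mul"]` computes (2 + 3) * 4 with no operand bytes at all.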

  15. Noise deconvolution based on the L1-metric and decomposition of discrete distributions of postsynaptic responses.

    PubMed

    Astrelin, A V; Sokolov, M V; Behnisch, T; Reymann, K G; Voronin, L L

    1997-04-25

    A statistical approach to the analysis of amplitude fluctuations of postsynaptic responses is described. This includes (1) using an L1-metric in the space of distribution functions for minimisation, with application of linear programming methods to decompose amplitude distributions into a convolution of Gaussian and discrete distributions; and (2) deconvolution of the resulting discrete distribution with determination of the release probabilities and the quantal amplitude for cases with a small number (< 5) of discrete components. The methods were tested against simulated data over a range of sample sizes and signal-to-noise ratios which mimicked those observed in physiological experiments. In computer simulation experiments, comparisons were made with other methods of 'unconstrained' (generalized) and constrained reconstruction of discrete components from convolutions. The simulation results provided additional criteria for improving the solutions to overcome 'over-fitting' phenomena and to constrain the number of components with small probabilities. Application of the programme to recordings from hippocampal neurones demonstrated its usefulness for the analysis of amplitude distributions of postsynaptic responses.
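
    The L1-metric decomposition lends itself naturally to linear programming, since the model CDF is linear in the component weights. The sketch below is an assumed formulation, not the authors' programme: candidate quantal levels and the noise SD are taken as known, and the LP finds the discrete weights minimising the L1 distance between the model and empirical CDFs:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.stats import norm

# Fit weights w over candidate discrete levels so that the mixture CDF
# sum_k w_k * Phi((x - level_k)/noise_sd) is L1-closest to the empirical CDF.
# LP variables are [w_1..w_k, t_1..t_m] with t_j >= |model_j - empirical_j|.
def l1_decompose(amplitudes, levels, noise_sd):
    x = np.sort(np.asarray(amplitudes, dtype=float))
    levels = np.asarray(levels, dtype=float)
    m, k = len(x), len(levels)
    f_emp = np.arange(1, m + 1) / m                      # empirical CDF at x
    B = norm.cdf((x[:, None] - levels[None, :]) / noise_sd)
    c = np.concatenate([np.zeros(k), np.ones(m)])        # minimize sum of t
    A_ub = np.block([[B, -np.eye(m)], [-B, -np.eye(m)]]) # +/-(Bw - F) <= t
    b_ub = np.concatenate([f_emp, -f_emp])
    A_eq = np.concatenate([np.ones(k), np.zeros(m)])[None, :]  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0.0, None)] * (k + m))
    return res.x[:k]
```

    With a fine grid of candidate levels this becomes a (constrained) deconvolution of the discrete component from the Gaussian noise kernel.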

  16. Exploring stellar evolution with gravitational-wave observations

    NASA Astrophysics Data System (ADS)

    Dvorkin, Irina; Uzan, Jean-Philippe; Vangioni, Elisabeth; Silk, Joseph

    2018-05-01

    Recent detections of gravitational waves from merging binary black holes opened new possibilities to study the evolution of massive stars and black hole formation. In particular, stellar evolution models may be constrained on the basis of the differences in the predicted distribution of black hole masses and redshifts. In this work we propose a framework that combines galaxy and stellar evolution models and use it to predict the detection rates of merging binary black holes for various stellar evolution models. We discuss the prospects of constraining the shape of the time delay distribution of merging binaries using just the observed distribution of chirp masses. Finally, we consider a generic model of primordial black hole formation and discuss the possibility of distinguishing it from stellar-origin black holes.

  17. Deep-biosphere methane production stimulated by geofluids in the Nankai accretionary complex

    PubMed Central

    Kubo, Yusuke; Hoshino, Tatsuhiko; Sakai, Sanae; Arnold, Gail L.; Case, David H.; Lever, Mark A.; Morita, Sumito; Nakamura, Ko-ichi

    2018-01-01

    Microbial life inhabiting subseafloor sediments plays an important role in Earth’s carbon cycle. However, the impact of geodynamic processes on the distributions and carbon-cycling activities of subseafloor life remains poorly constrained. We explore a submarine mud volcano of the Nankai accretionary complex by drilling down to 200 m below the summit. Stable isotopic compositions of water and carbon compounds, including clumped methane isotopologues, suggest that ~90% of methane is microbially produced at 16° to 30°C and 300 to 900 m below seafloor, corresponding to the basin bottom, where fluids in the accretionary prism are supplied via megasplay faults. Radiotracer experiments showed that relatively small microbial populations in deep mud volcano sediments (102 to 103 cells cm−3) include highly active hydrogenotrophic methanogens and acetogens. Our findings indicate that subduction-associated fluid migration has stimulated microbial activity in the mud reservoir and that mud volcanoes may contribute more substantially to the methane budget than previously estimated. PMID:29928689

  18. Structure and atomic correlations in molecular systems probed by XAS reverse Monte Carlo refinement

    NASA Astrophysics Data System (ADS)

    Di Cicco, Andrea; Iesari, Fabio; Trapananti, Angela; D'Angelo, Paola; Filipponi, Adriano

    2018-03-01

    The Reverse Monte Carlo (RMC) algorithm for structure refinement has been applied to x-ray absorption spectroscopy (XAS) multiple-edge data sets for six gas phase molecular systems (SnI2, CdI2, BBr3, GaI3, GeBr4, GeI4). Sets of thousands of molecular replicas were involved in the refinement process, driven by the XAS data and constrained by available electron diffraction results. The equilibrated configurations were analysed to determine the average tridimensional structure and obtain reliable bond and bond-angle distributions. Detectable deviations from Gaussian models were found in some cases. This work shows that a RMC refinement of XAS data is able to provide geometrical models for molecular structures compatible with present experimental evidence. The validation of this approach on simple molecular systems is particularly important in view of its possible simple extension to more complex and extended systems including metal-organic complexes, biomolecules, or nanocrystalline systems.
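
    The core RMC loop can be sketched generically: random single-atom moves are accepted when they improve the fit to the data, or occasionally otherwise. This toy version is a sketch of the general algorithm only, not the authors' XAS refinement code (which drives thousands of molecular replicas with multiple-edge data and electron-diffraction constraints); `signal_fn` and all parameters are assumptions:

```python
import numpy as np

# Toy reverse Monte Carlo refinement: perturb one atom at a time and accept
# the move if it lowers the chi^2 misfit between the computed signal and the
# data, or with Metropolis probability exp(-(chi2_new - chi2_old)/2) otherwise.
def rmc_refine(positions, signal_fn, data, sigma, n_steps=2000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    pos = np.array(positions, dtype=float)     # shape (n_atoms, n_dims)
    chi2 = np.sum((signal_fn(pos) - data) ** 2) / sigma ** 2
    for _ in range(n_steps):
        trial = pos.copy()
        i = rng.integers(len(pos))
        trial[i] += rng.normal(0.0, step, size=pos.shape[1])
        chi2_t = np.sum((signal_fn(trial) - data) ** 2) / sigma ** 2
        if chi2_t < chi2 or rng.uniform() < np.exp((chi2 - chi2_t) / 2.0):
            pos, chi2 = trial, chi2_t
    return pos, chi2
```

    Equilibrated configurations from many such chains can then be histogrammed into bond and bond-angle distributions, as done in the study.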

  19. Origin and propagation of galactic cosmic rays

    NASA Technical Reports Server (NTRS)

    Cesarsky, Catherine J.; Ormes, Jonathan F.

    1987-01-01

    The study of systematic trends in elemental abundances is important for unfolding the nuclear and/or atomic effects that should govern the shaping of source abundances and in constraining the parameters of cosmic ray acceleration models. In principle, much can be learned about the large-scale distributions of cosmic rays in the galaxy from all-sky gamma ray surveys such as COS-B and SAS-2. Because of the uncertainties in the matter distribution which come from the inability to measure the abundance of molecular hydrogen, the results are somewhat controversial. The leaky-box model accounts for a surprising amount of the data on heavy nuclei. However, a growing body of data indicates that the simple picture may have to be abandoned in favor of more complex models which contain additional parameters. Future experiments on the Spacelab and space station will hopefully measure the spectra of individual nuclei at high energy. Antiprotons must be studied in the background-free environment above the atmosphere with much higher reliability and precision to obtain spectral information.

  20. Integrated geophysical study to understand the architecture of the deep critical zone in the Luquillo Critical Zone Observatory (Puerto Rico)

    NASA Astrophysics Data System (ADS)

    Comas, X.; Wright, W. J.; Hynek, S. A.; Ntarlagiannis, D.; Terry, N.; Whiting, F.; Job, M. J.; Brantley, S. L.; Fletcher, R. C.

    2016-12-01

    The Luquillo Critical Zone Observatory (CZO) in Puerto Rico is characterized by a complex system of heterogeneous fractures that participate in the formation of corestones, and influence the development of a regolith by the alteration of the bedrock at very rapid weathering rates. The spatial distribution of fractures, and its influence on regolith thickness is, however, currently not well understood. In this study, we used an array of near-surface geophysical methods, including ground penetrating radar, terrain conductivity, electrical resistivity imaging and induced polarization, OhmMapper, and shallow seismic, constrained with direct methods from previous studies. These methods were combined with stress modeling to better understand: 1) changes in regolith thickness; and 2) variation of the spatial distribution and density of fractures with topography and proximity to the knickpoint. Our observations show the potential of geophysical methods for imaging variability in regolith thickness, and agree with the result of a stress model showing increased dilation of fractures with proximity to the knickpoint.

  1. Data Assimilation - Advances and Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Brian J.

    2014-07-30

    This presentation provides an overview of data assimilation (model calibration) for complex computer experiments. Calibration refers to the process of probabilistically constraining uncertain physics/engineering model inputs to be consistent with observed experimental data. An initial probability distribution for these parameters is updated using the experimental information. Utilization of surrogate models and empirical adjustment for model form error in code calibration form the basis for the statistical methodology considered. The role of probabilistic code calibration in supporting code validation is discussed. Incorporation of model form uncertainty in rigorous uncertainty quantification (UQ) analyses is also addressed. Design criteria used within a batch sequential design algorithm are introduced for efficiently achieving predictive maturity and improved code calibration. Predictive maturity refers to obtaining stable predictive inference with calibrated computer codes. These approaches allow for augmentation of initial experiment designs for collecting new physical data. A standard framework for data assimilation is presented and techniques for updating the posterior distribution of the state variables based on particle filtering and the ensemble Kalman filter are introduced.
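    The ensemble Kalman filter update mentioned above can be illustrated in miniature. Below is a hedged sketch of the perturbed-observation EnKF analysis step, assuming a linear observation operator; all names, dimensions, and values are illustrative, not taken from the presentation.

```python
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """Perturbed-observation EnKF analysis step.

    ensemble : (n_members, n_state) prior ensemble
    H        : (n_obs, n_state) linear observation operator
    y        : (n_obs,) observed data
    R        : (n_obs, n_obs) observation-error covariance
    """
    n = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)           # ensemble anomalies
    P = X.T @ X / (n - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    # Perturb the observations so the analysis ensemble keeps correct spread
    y_pert = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

rng = np.random.default_rng(0)
prior = rng.normal(0.0, 1.0, size=(500, 2))        # 500-member prior, 2 states
H = np.array([[1.0, 0.0]])                         # observe the first state only
posterior = enkf_update(prior, H, np.array([2.0]), np.array([[0.25]]), rng)
```

    The analysis ensemble is pulled toward the observation in the observed component while its spread shrinks, which is the behavior the update is designed to produce.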

  2. How the Slip Distribution Complexities Control the Tsunami Scenarios: a Sensitivity Analysis for the Hellenic and Calabrian Subduction Interfaces.

    NASA Astrophysics Data System (ADS)

    Scala, A.; Murphy, S.; Herrero, A.; Maesano, F. E.; Lorito, S.; Romano, F.; Tiberti, M. M.; Tonini, R.; Volpe, M.; Basili, R.

    2017-12-01

    Recent giant tsunamigenic earthquakes (Sumatra 2004, Chile 2010, Tohoku 2011) have confirmed that the complexity of seismic slip distributions may play a fundamental role in the generation and the amplitude of tsunami waves. In particular, large patches of high slip on the shallower part of subduction zones, as well as slow rupture propagation within low-rigidity areas, can increase the tsunamigenic potential, generating devastating coastal inundation. In the Mediterranean Sea, several subduction structures can be identified, such as the Hellenic Arc at the boundary between the African and Aegean plates, and the Calabrian Arc between the European and African plates. We have modelled these areas using discretized high-resolution 3D fault geometries with realistic variability of the strike and dip angles. In particular, these geometries have been constrained from the analysis of a dense network of seismic reflection profiles and the seismicity of the areas. To study the influence of different rigidity conditions, we compare the tsunami scenarios deriving from homogeneous slip to those obtained from depth-dependent slip distributions at different magnitudes. These depth-dependent slip distributions are obtained by imposing a variability with depth of both shear modulus and seismic rate, and the conservation of the dislocation over the whole subduction zone. Furthermore, we generate an ensemble of stochastic slip distributions along the Hellenic and Calabrian subduction interfaces using a composite source model technique. To mimic either single or multiple asperity source models, the sub-events whose sum produces the stochastic slip are distributed according to a PDF defined as the combination of one or more Gaussian functions. Tsunami scenarios are then generated from this ensemble in order to address how the position of the main patch of slip affects the tsunami amplitude along the coast.
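    A composite-source construction of the kind described above can be sketched as follows: sub-event centers are drawn from a Gaussian PDF (a single asperity here) and their summed slip contributions form one stochastic slip distribution. All scales and parameter values are illustrative, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(4)
# 1-D along-strike fault grid; sub-event centers are drawn from a Gaussian
# PDF (one asperity); each sub-event adds a small patch of slip.
x = np.linspace(0.0, 100.0, 201)                 # along-strike distance, km
slip = np.zeros_like(x)
centers = rng.normal(40.0, 10.0, size=300)       # asperity PDF centered at 40 km
for c in centers:
    slip += np.exp(-0.5 * ((x - c) / 3.0) ** 2)  # ~3 km wide sub-event
slip *= 10.0 / slip.max()                        # normalize peak slip to 10 m
```

    A multi-asperity version would draw the centers from a mixture of Gaussians; redrawing with different seeds yields the ensemble from which tsunami scenarios are generated.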

  3. Integration of real-time mapping technology in disaster relief distribution.

    DOT National Transportation Integrated Search

    2013-02-01

    Vehicle routing for disaster relief distribution involves many challenges that distinguish this problem from those in commercial settings, given the time sensitive and resource constrained nature of relief activities. While operations research approa...

  4. Effects of life-history requirements on the distribution of a threatened reptile.

    PubMed

    Thompson, Denise M; Ligon, Day B; Patton, Jason C; Papeş, Monica

    2017-04-01

    Survival and reproduction are the two primary life-history traits essential for species' persistence; however, the environmental conditions that support each of these traits may not be the same. Despite this, reproductive requirements are seldom considered when estimating species' potential distributions. We sought to examine potentially limiting environmental factors influencing the distribution of an oviparous reptile of conservation concern with respect to the species' survival and reproduction and to assess the implications of the species' predicted climatic constraints on current conservation practices. We used ecological niche modeling to predict the probability of environmental suitability for the alligator snapping turtle (Macrochelys temminckii). We built an annual climate model to examine survival and a nesting climate model to examine reproduction. We combined incubation temperature requirements, products of modeled soil temperature data, and our estimated distributions to determine whether embryonic development constrained the northern distribution of the species. Low annual precipitation constrained the western distribution of alligator snapping turtles, whereas the northern distribution was constrained by thermal requirements during embryonic development. Only a portion of the geographic range predicted to have a high probability of suitability for alligator snapping turtle survival was estimated to be capable of supporting successful embryonic development. Historic occurrence records suggest adult alligator snapping turtles can survive in regions with colder climes than those associated with consistent and successful production of offspring. Estimated egg-incubation requirements indicated that current reintroductions at the northern edge of the species' range are within reproductively viable environmental conditions. 
Our results highlight the importance of considering both survival and reproduction when estimating species' ecological niches, the implications for conservation plans, and the benefits of incorporating physiological data when evaluating species' distributions. © 2016 Society for Conservation Biology.

  5. Advanced EMT and Phasor-Domain Hybrid Simulation with Simulation Mode Switching Capability for Transmission and Distribution Systems

    DOE PAGES

    Huang, Qiuhua; Vittal, Vijay

    2018-05-09

    Conventional electromagnetic transient (EMT) and phasor-domain hybrid simulation approaches presently exist for transmission system level studies. Their simulation efficiency is generally constrained by the EMT simulation. With an increasing number of distributed energy resources and non-conventional loads being installed in distribution systems, it is imperative to extend the hybrid simulation application to include distribution systems and integrated transmission and distribution systems. Meanwhile, it is equally important to improve the simulation efficiency as the modeling scope and complexity of the detailed system in the EMT simulation increases. To meet both requirements, this paper introduces an advanced EMT and phasor-domain hybrid simulation approach. This approach has two main features: 1) a comprehensive phasor-domain modeling framework which supports positive-sequence, three-sequence, three-phase and mixed three-sequence/three-phase representations and 2) a robust and flexible simulation mode switching scheme. The developed scheme enables simulation switching from hybrid simulation mode back to pure phasor-domain dynamic simulation mode to achieve significantly improved simulation efficiency. The proposed method has been tested on integrated transmission and distribution systems. In conclusion, the results show that with the developed simulation switching feature, the total computational time is significantly reduced compared to running the hybrid simulation for the whole simulation period, while maintaining good simulation accuracy.

  7. Architectures of Kepler Planet Systems with Approximate Bayesian Computation

    NASA Astrophysics Data System (ADS)

    Morehead, Robert C.; Ford, Eric B.

    2015-12-01

    The distribution of period normalized transit duration ratios among Kepler’s multiple transiting planet systems constrains the distributions of mutual orbital inclinations and orbital eccentricities. However, degeneracies in these parameters tied to the underlying number of planets in these systems complicate their interpretation. To untangle the true architecture of planet systems, the mutual inclination, eccentricity, and underlying planet number distributions must be considered simultaneously. The complexities of target selection, transit probability, detection biases, vetting, and follow-up observations make it impractical to write an explicit likelihood function. Approximate Bayesian computation (ABC) offers an intriguing path forward. In its simplest form, ABC generates a sample of trial population parameters from a prior distribution to produce synthetic datasets via a physically-motivated forward model. Samples are then accepted or rejected based on how close they come to reproducing the actual observed dataset to some tolerance. The accepted samples form a robust and useful approximation of the true posterior distribution of the underlying population parameters. We build on the considerable progress from the field of statistics to develop sequential algorithms for performing ABC in an efficient and flexible manner. We demonstrate the utility of ABC in exoplanet populations and present new constraints on the distributions of mutual orbital inclinations, eccentricities, and the relative number of short-period planets per star. We conclude with a discussion of the implications for other planet occurrence rate calculations, such as eta-Earth.
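    The rejection form of ABC described above is straightforward to sketch. Here the forward model is a toy normal simulator, and the prior, summary statistic, and tolerance are illustrative choices, not those of the actual exoplanet pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(3.0, 1.0, size=200)      # stand-in for the real dataset
obs_summary = observed.mean()                  # summary statistic

def forward_model(theta, rng):
    """Toy physically-motivated simulator: data ~ N(theta, 1)."""
    return rng.normal(theta, 1.0, size=200)

def abc_rejection(n_trials, tolerance, rng):
    accepted = []
    for _ in range(n_trials):
        theta = rng.uniform(-10.0, 10.0)       # draw from the prior
        sim = forward_model(theta, rng)
        if abs(sim.mean() - obs_summary) < tolerance:
            accepted.append(theta)             # close enough to the data: keep
    return np.array(accepted)

posterior = abc_rejection(20000, 0.2, rng)     # approximate posterior sample
```

    The sequential algorithms mentioned in the abstract improve on this by shrinking the tolerance over successive generations and reusing accepted samples as the next proposal distribution.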

  9. Surface speciation of yttrium and neodymium sorbed on rutile: Interpretations using the charge distribution model

    NASA Astrophysics Data System (ADS)

    Ridley, Moira K.; Hiemstra, Tjisse; Machesky, Michael L.; Wesolowski, David J.; van Riemsdijk, Willem H.

    2012-10-01

    The adsorption of Y3+ and Nd3+ onto rutile has been evaluated over a wide range of pH (3-11) and surface loading conditions, as well as at two ionic strengths (0.03 and 0.3 m), and temperatures (25 and 50 °C). The experimental results reveal the same adsorption behavior for the two trivalent ions onto the rutile surface, with Nd3+ first adsorbing at slightly lower pH values. The adsorption of both Y3+ and Nd3+ commences at pH values below the pHznpc of rutile. The experimental results were evaluated using a charge distribution (CD) and multisite complexation (MUSIC) model, and a Basic Stern layer description of the electric double layer (EDL). The coordination geometries of possible surface complexes were constrained by molecular-level information obtained from X-ray standing wave measurements and molecular dynamics (MD) simulation studies. X-ray standing wave measurements showed an inner-sphere tetradentate complex for Y3+ adsorption onto the (1 1 0) rutile surface (Zhang et al., 2004b). The MD simulation studies suggest additional bidentate complexes may form. The CD values for all surface species were calculated based on a bond valence interpretation of the surface complexes identified by X-ray and MD. The calculated CD values were corrected for the effect of dipole orientation of interfacial water. At low pH, the tetradentate complex provided excellent fits to the Y3+ and Nd3+ experimental data. The experimental and surface complexation modeling results show a strong pH dependence, and suggest that the tetradentate surface species hydrolyze with increasing pH. Furthermore, with increased surface loading of Y3+ on rutile the tetradentate binding mode was augmented by a hydrolyzed-bidentate Y3+ surface complex. Collectively, the experimental and surface complexation modeling results demonstrate that solution chemistry and surface loading impact Y3+ surface speciation. 
The approach taken, of incorporating molecular-scale information into surface complexation models (SCMs), should aid in developing a fundamental understanding of ion-adsorption reactions.

  10. Constrained subsystem density functional theory.

    PubMed

    Ramos, Pablo; Pavanello, Michele

    2016-08-03

    Constrained Subsystem Density Functional Theory (CSDFT) makes it possible to compute diabatic states for charge-transfer reactions using the machinery of the constrained DFT method, while embedding those diabatic states in a molecular environment via a subsystem DFT scheme. The CSDFT acronym reflects the fact that, on top of the subsystem DFT approach, a constraining potential is applied to each subsystem. We show that CSDFT can successfully tackle systems as complex as single-stranded DNA complete with its backbone, and generate diabatic states as exotic as a hole localized on a phosphate group as well as on the nucleobases. CSDFT will be useful to investigators needing to evaluate environmental effects on charge-transfer couplings for systems in condensed-phase environments.

  11. Clues on the Milky Way disc formation from population synthesis simulations

    NASA Astrophysics Data System (ADS)

    Robin, A. C.; Reylé, C.; Bienaymé, O.; Fernandez-Trincado, J. G.; Amôres, E. B.

    2016-09-01

    In recent years the stellar populations of the Milky Way have been investigated from large scale surveys in different ways, from pure star count analysis to detailed studies based on spectroscopic surveys. While in the former case the data can constrain the scale height and scale length thanks to completeness, they suffer from high correlation between these two values. On the other hand, spectroscopic surveys suffer from complex selection functions which make it difficult to derive accurate density distributions. The scale length in particular has been difficult to constrain, resulting in discrepant values in the literature. Here, we investigate the thick disc characteristics by comparing model simulations with large scale data sets. The simulations are done with the Besançon population synthesis model. We explore the parameters of the thick disc (shape, local density, age, metallicity) using a Monte Carlo Markov Chain method to constrain the model free parameters (Robin et al. 2014). Correlations between parameters are limited due to the vast spatial coverage of the surveys used (SDSS + 2MASS). We show that the thick disc was created during a long phase of formation, starting about 12 Gyr ago and finishing about 10 Gyr ago, during which gravitational contraction occurred, both vertically and radially. Moreover, in its early phase the thick disc was flaring in the outskirts. We conclude that the thick disc was created prior to the thin disc during a gravitational collapse phase, slowed down by turbulence related to a high star formation rate, as explained for example in Bournaud et al. (2009) or Lehnert et al. (2009). Our result does not favor a formation from an initial thin disc thickened later by merger events or by secular evolution of the thin disc. We then study the in-plane distribution of stars in the thin disc from 2MASS and show that the thin disc scale length varies as a function of age, indicating an inside-out formation. 
Moreover, we investigate the warp and flare and demonstrate that the warp amplitude is changing with time and the node angle is slightly precessing. Finally, we show comparisons between the new model and spectroscopic surveys. The new model correctly simulates the kinematics, the metallicity, and α-abundance distributions in the solar neighbourhood as well as in the bulge region.
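    A Monte Carlo Markov Chain exploration of model free parameters, of the kind used above, can be sketched with a random-walk Metropolis sampler. The Gaussian "model" and data below are placeholders for the real star-count likelihood of the population synthesis model.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.5, 0.5, size=100)          # placeholder "survey" data

def log_likelihood(mu):
    # Gaussian likelihood with known sigma = 0.5, standing in for the
    # star-count likelihood of the population synthesis model
    return -0.5 * np.sum((data - mu) ** 2) / 0.5**2

def metropolis(n_steps, step, rng):
    chain = np.empty(n_steps)
    mu, logp = 0.0, log_likelihood(0.0)
    for i in range(n_steps):
        prop = mu + rng.normal(0.0, step)      # symmetric random-walk proposal
        logp_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            mu, logp = prop, logp_prop         # accept the move
        chain[i] = mu
    return chain

posterior = metropolis(5000, 0.2, rng)[1000:]  # discard burn-in
```

    In the real application the scalar `mu` is replaced by the vector of thick-disc parameters and the likelihood by a comparison of simulated and observed star counts.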

  12. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.
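    For an ellipse with center c and foci c ± d, the classical near-optimal candidates are scaled-and-translated Chebyshev polynomials T_n((z - c)/d); on the boundary parameterized by ρ > 1 their maximum modulus is (ρ^n + ρ^(-n))/2. A quick numerical check of that identity (all values illustrative):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

c, d, n = 2.0, 1.0, 5                  # ellipse center, focal half-distance, degree
rho = 1.5                              # ellipse size parameter (rho > 1)
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0                        # Chebyshev coefficients of T_n
theta = np.linspace(0.0, 2.0 * np.pi, 200)
# Boundary of the ellipse with foci c +/- d via the Joukowski map
z = c + d * 0.5 * (rho * np.exp(1j * theta) + np.exp(-1j * theta) / rho)
vals = C.chebval((z - c) / d, coeffs)  # T_n((z - c)/d) on the boundary
peak = np.abs(vals).max()              # equals (rho**n + rho**-n) / 2
```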

  13. The joint fit of the BHMF and ERDF for the BAT AGN Sample

    NASA Astrophysics Data System (ADS)

    Weigel, Anna K.; Koss, Michael; Ricci, Claudio; Trakhtenbrot, Benny; Oh, Kyuseok; Schawinski, Kevin; Lamperti, Isabella

    2018-01-01

    A natural product of an AGN survey is the AGN luminosity function. This statistical measure describes the distribution of directly measurable AGN luminosities. Intrinsically, the shape of the luminosity function depends on the distribution of black hole masses and Eddington ratios. To constrain these fundamental AGN properties, the luminosity function thus has to be disentangled into the black hole mass and Eddington ratio distribution function. The BASS survey is unique as it allows such a joint fit for a large number of local AGN, is unbiased in terms of obscuration in the X-rays and provides black hole masses for type-1 and type-2 AGN. The black hole mass function at z ~ 0 represents an essential baseline for simulations and black hole growth models. The normalization of the Eddington ratio distribution function directly constrains the AGN fraction. Together, the BASS AGN luminosity, black hole mass and Eddington ratio distribution functions thus provide a complete picture of the local black hole population.
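    The dependence described above can be made concrete: each AGN's luminosity is its black hole mass times its Eddington ratio times the Eddington luminosity per unit mass, so a luminosity function follows from a BHMF and an ERDF. A toy Monte Carlo sketch with log-normal stand-ins for both distributions (not the BASS fits):

```python
import numpy as np

rng = np.random.default_rng(3)
# Log-normal stand-ins for the BHMF and ERDF (illustrative, not the BASS fits)
log_mass = rng.normal(8.0, 0.6, size=100_000)   # log10(M_BH / M_sun)
log_edd = rng.normal(-1.5, 0.5, size=100_000)   # log10(Eddington ratio)
# L_Edd ~ 1.26e38 erg/s per solar mass, so luminosities follow directly
log_lum = np.log10(1.26e38) + log_mass + log_edd
```

    The joint fit runs this logic in reverse: the observed `log_lum` distribution is given, and the parameters of the mass and Eddington ratio distributions are adjusted until their combination reproduces it.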

  14. Ionized Outflows in 3-D Insights from Herbig-Haro Objects and Applications to Nearby AGN

    NASA Technical Reports Server (NTRS)

    Cecil, Gerald

    1999-01-01

    HST shows that the gas distributions of these objects are complex and clumpy at the limit of resolution. HST spectra have lumpy emission-line profiles, indicating unresolved sub-structure. The advantages of 3D over slits on gas so distributed are: robust flux estimates of various dynamical systems projected along lines of sight, sensitivity to fainter spectral lines that are physical diagnostics (reddening, gas density, T, excitation mechanisms, abundances), and improved prospects for recovery of unobserved dimensions of phase-space. These advantages allow more confident modeling for more profound inquiry into underlying dynamics. The main complication is the effort required to link multi-frequency datasets that optimally track the energy flow through various phases of the ISM. This tedium has limited the number of objects that have been thoroughly analyzed to the a priori most spectacular systems. For HHOs, proper motions constrain the ambient B-field, shock velocity, gas abundances, mass-loss rates, source duty-cycle, and tie-ins with molecular flows. If the shock speed, and hence the ionization fraction, is indeed small, then the ionized gas is a significant part of the flow energetics. For AGNs, nuclear beaming is a source of ionization ambiguity. Establishing the energetics of the outflow is critical to determining how the accretion disk loses its energy. CXO will provide new constraints (especially spectral) on AGN outflows, and STIS UV-spectroscopy is also constraining cloud properties (although limited by extinction). HHOs show some of the things that we will find around AGNs. I illustrate these points with results from ground-based and HST programs being pursued with collaborators.

  15. Constraining the Distribution of Vertical Slip on the South Heli Shan Fault (Northeastern Tibet) From High-Resolution Topographic Data

    NASA Astrophysics Data System (ADS)

    Bi, Haiyun; Zheng, Wenjun; Ge, Weipeng; Zhang, Peizhen; Zeng, Jiangyuan; Yu, Jingxing

    2018-03-01

    Reconstruction of the along-fault slip distribution provides an insight into the long-term rupture patterns of a fault, thereby enabling more accurate assessment of its future behavior. The increasing wealth of high-resolution topographic data, such as Light Detection and Ranging and photogrammetric digital elevation models, allows us to better constrain the slip distribution, thus greatly improving our understanding of fault behavior. The South Heli Shan Fault is a major active fault on the northeastern margin of the Tibetan Plateau. In this study, we built a 2 m resolution digital elevation model of the South Heli Shan Fault based on high-resolution GeoEye-1 stereo satellite imagery and then measured 302 vertical displacements along the fault, which increased the measurement density of previous field surveys by a factor of nearly 5. The cumulative displacements show an asymmetric distribution along the fault, comprising three major segments. An increasing trend from west to east indicates that the fault has likely propagated westward over its lifetime. The topographic relief of Heli Shan shows an asymmetry similar to the measured cumulative slip distribution, suggesting that the uplift of Heli Shan may result mainly from the long-term activity of the South Heli Shan Fault. Furthermore, the cumulative displacements divide into discrete clusters along the fault, indicating that the fault has ruptured in several large earthquakes. By constraining the slip-length distribution of each rupture, we found that the events do not support a characteristic recurrence model for the fault.

  16. The hydrological cycle in the high Pamir Mountains: how temperature and seasonal precipitation distribution influence stream flow in the Gunt catchment, Tajikistan

    NASA Astrophysics Data System (ADS)

    Pohl, E.; Knoche, M.; Gloaguen, R.; Andermann, C.; Krause, P.

    2014-12-01

    Complex climatic interactions control hydrological processes in high mountains that in turn regulate the erosive forces shaping the relief. To unravel the hydrological cycle of a glaciated watershed (Gunt River) considered representative of the Pamirs' hydrologic regime, we developed a remote sensing-based approach. At the boundary between two distinct climatic zones dominated by the Westerlies and the Indian summer monsoon, the Pamir is poorly instrumented and only a few in situ meteorological and hydrological data are available. We adapted a suitable conceptual distributed hydrological model (J2000g). Interpolations of the few available in situ data are inadequate due to strong, relief-induced spatial heterogeneities. Instead we use raster data, preferably from remote sensing sources, depending on availability and validation. We evaluate remote sensing-based precipitation and temperature products. MODIS MOD11 surface temperatures show good agreement with in situ data, perform better than other products, and represent a good proxy for air temperatures. For precipitation we tested remote sensing products as well as the HAR10 climate model data and the interpolation-based APHRODITE dataset. All products show substantial differences from in situ data in both intensity and seasonal distribution. Despite low resolutions, the datasets are able to sustain high model efficiencies (NSE ≥ 0.85). In contrast to neighbouring regions in the Himalayas or the Hindukush, discharge is dominantly the product of snow and glacier melt, and thus temperature is the essential controlling factor. 80% of annual precipitation falls as snow in winter and spring, contrasting with peak discharges during summer. Hence, precipitation and discharge are negatively correlated and display complex hysteresis effects that allow us to infer the effect of inter-annual climatic variability on river flow. We infer the existence of two subsurface reservoirs. 
The groundwater reservoir (providing 40% of annual discharge) recharges in spring and summer and releases slowly during fall and winter. A shallow reservoir with very rapid retention times, not yet fully constrained, buffers meltwater during spring and summer. This study highlights the importance of a better understanding of the hydrologic cycle to constrain natural hazards such as floods and landslides, as well as water availability in the downstream areas. The negative glacier mass balance (-0.6 m w.e. yr-1) indicates glacier retreat, which will affect the current 30% contribution of glacier melt to stream flow.
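    The model efficiencies quoted above (NSE ≥ 0.85) refer to the Nash-Sutcliffe efficiency, which compares model errors to the variance of the observations. A minimal sketch; the discharge series is invented for illustration.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means the model
    does no better than predicting the mean of the observations."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    residual = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - residual / variance

# Illustrative discharge series (arbitrary units)
obs = np.array([1.0, 2.0, 4.0, 8.0, 5.0, 3.0])
sim = np.array([1.2, 1.8, 4.5, 7.5, 5.2, 2.9])
score = nse(obs, sim)
```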

  17. A data-calibrated distribution of deglacial chronologies for the North American ice complex from glaciological modeling

    NASA Astrophysics Data System (ADS)

    Tarasov, Lev; Dyke, Arthur S.; Neal, Radford M.; Peltier, W. R.

    2012-01-01

    Past deglacial ice sheet reconstructions have generally relied upon discipline-specific constraints with no attention given to the determination of objective confidence intervals. Reconstructions based on geophysical inversion of relative sea level (RSL) data have the advantage of large sets of proxy data but lack ice-mechanical constraints. Conversely, reconstructions based on dynamical ice sheet models are glaciologically self-consistent, but depend on poorly constrained climate forcings and sub-glacial processes. As an example of a much better constrained methodology that computes explicit error bars, we present a distribution of high-resolution glaciologically-self-consistent deglacial histories for the North American ice complex calibrated against a large set of RSL, marine limit, and geodetic data. The history is derived from ensemble-based analyses using the 3D MUN glacial systems model and a high-resolution ice-margin chronology derived from geological and geomorphological observations. Isostatic response is computed with the VM5a viscosity structure. Bayesian calibration of the model is carried out using Markov Chain Monte Carlo methods in combination with artificial neural networks trained to the model results. The calibration provides a posterior distribution for model parameters (and thereby modeled glacial histories) given the observational data sets that takes data uncertainty into account. Final ensemble results also account for fits between computed and observed strandlines and marine limits. Given the model (including choice of calibration parameters), input and constraint data sets, and VM5a earth rheology, we find the North American contribution to mwp1a was likely between 9.4 and 13.2 m eustatic over a 500 year interval. 
This is more than half of the total 16 to 26 m meltwater pulse over 500 to 700 years (with lower values being more probable) indicated by the Barbados coral record (Fairbanks, 1989; Peltier and Fairbanks, 2006) if one assumes a 5 meter living range for the coral Acropora palmata. 20 ka ice volume for North America was likely 70.1 ± 2.0 m eustatic, or about 60% of the total contribution to eustatic sea level change. We suspect that the potentially most critical unquantified uncertainties in our analyses are those related to model structure (especially climate forcing), deglacial ice margin chronology, and earth rheology.

  18. Constraining ejecta particle size distributions with light scattering

    NASA Astrophysics Data System (ADS)

    Schauer, Martin; Buttler, William; Frayer, Daniel; Grover, Michael; Lalone, Brandon; Monfared, Shabnam; Sorenson, Daniel; Stevens, Gerald; Turley, William

    2017-06-01

    The angular distribution of the intensity of light scattered from a particle is strongly dependent on the particle size and can be calculated using the Mie solution to Maxwell's equations. For a collection of particles with a range of sizes, the angular intensity distribution will be the sum of the contributions from each particle size weighted by the number of particles in that size bin. The set of equations describing this pattern is not uniquely invertible, i.e. a number of different distributions can lead to the same scattering pattern, but with reasonable assumptions about the distribution it is possible to constrain the problem and extract estimates of the particle sizes from a measured scattering pattern. We report here on experiments using particles ejected by shockwaves incident on strips of triangular perturbations machined into the surface of tin targets. These measurements indicate a bimodal distribution of ejected particle sizes with relatively large particles (median radius 2-4 μm) evolved from the edges of the perturbation strip and smaller particles (median radius 200-600 nm) from the perturbations. We will briefly discuss the implications of these results and outline future plans.
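    Because the measured pattern is a weighted sum of single-size patterns, the forward model is linear in the bin populations, and non-negativity of the particle counts is one "reasonable assumption" that regularizes the otherwise non-unique inversion. A hedged sketch using a placeholder kernel (a real analysis would fill the kernel columns with Mie intensities):

```python
import numpy as np
from scipy.optimize import nnls

angles = np.linspace(0.1, 1.2, 40)             # scattering angles, rad
radii = np.array([0.2, 0.5, 1.0, 2.0, 4.0])    # size-bin radii, micron

# Placeholder kernel: column j is the angular pattern from one particle of
# radius r_j. A real analysis would fill the columns with Mie intensities.
K = np.exp(-np.outer(angles**2, radii**2)) * radii**2

true_counts = np.array([50.0, 0.0, 5.0, 0.0, 1.0])   # bimodal-ish population
pattern = K @ true_counts                            # forward model (noiseless)

# Non-negative least squares recovers the per-bin counts from the pattern
est_counts, _ = nnls(K, pattern)
```

    With noisy data the fit would additionally need smoothness or parametric assumptions about the size distribution, as the abstract notes.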

  19. Coseismic stresses indicated by pseudotachylytes in the Outer Hebrides Fault Zone, UK.

    NASA Astrophysics Data System (ADS)

    Campbell, Lucy; Lloyd, Geoffrey; Phillips, Richard; Holdsworth, Robert; Walcott, Rachel

    2015-04-01

    During the few seconds of earthquake slip, dynamic behaviour is predicted for stress, slip velocity, friction and temperature, amongst other properties. Fault-derived pseudotachylyte is a coseismic frictional melt and provides a unique snapshot of the rupture environment. Exhumation of ancient fault zones to seismogenic depths can reveal the structure and distribution of seismic slip as pseudotachylyte bearing fault planes. An example lies in NW Scotland along the Outer Hebrides Fault Zone (OHFZ) - this long-lived fault zone displays a suite of fault rocks developed under evolving kinematic regimes, including widespread pseudotachylyte veining which is distributed both on and away from the major faults. This study adds data derived from the OHFZ pseudotachylytes to published datasets from well-constrained fault zones, in order to explore the use of existing methodologies on more complex faults and to compare the calculated results. Temperature, stress and pressure are calculated from individual fault veins and added to existing datasets. The results pose questions on the physical meaning of the derived trends, the distribution of seismic energy release across scattered cm-scale faults and the range of earthquake magnitudes calculated from faults across any given fault zone.

  20. Electrons on a spherical surface: Physical properties and hollow spherical clusters

    NASA Astrophysics Data System (ADS)

    Cricchio, Dario; Fiordilino, Emilio; Persico, Franco

    2012-07-01

    We discuss the physical properties of a noninteracting electron gas constrained to a spherical surface. In particular we consider its chemical potentials, its ionization potential, and its electric static polarizability. All these properties are discussed analytically as functions of the number N of electrons. The trends obtained with increasing N are compared with those of the corresponding properties experimentally measured or theoretically evaluated for quasispherical hollow atomic and molecular clusters. Most of the properties investigated display similar trends, characterized by a prominence of shell effects. This leads to the definition of a scale-invariant distribution of magic numbers which follows a power law with critical exponent -0.5. We conclude that our completely mechanistic and analytically tractable model can be useful for the analysis of self-assembling complex systems.

  1. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    NASA Technical Reports Server (NTRS)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer-assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, one highly channeled and one distributed modifying, connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in the geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis, since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  2. Temperature dependence of electron magnetic resonance spectra of iron oxide nanoparticles mineralized in Listeria innocua protein cages

    NASA Astrophysics Data System (ADS)

    Usselman, Robert J.; Russek, Stephen E.; Klem, Michael T.; Allen, Mark A.; Douglas, Trevor; Young, Mark; Idzerda, Yves U.; Singel, David J.

    2012-10-01

    Electron magnetic resonance (EMR) spectroscopy was used to determine the magnetic properties of maghemite (γ-Fe2O3) nanoparticles formed within size-constraining Listeria innocua (LDps)-(DNA-binding protein from starved cells) protein cages that have an inner diameter of 5 nm. Variable-temperature X-band EMR spectra exhibited broad asymmetric resonances with a superimposed narrow peak at a gyromagnetic factor of g ≈ 2. The resonance structure, which depends on both superparamagnetic fluctuations and inhomogeneous broadening, changes dramatically as a function of temperature, and the overall linewidth becomes narrower with increasing temperature. Here, we compare two different models to simulate temperature-dependent lineshape trends. The temperature dependence for both models is derived from a Langevin behavior of the linewidth resulting from "anisotropy melting." The first uses either a truncated log-normal distribution of particle sizes or a bi-modal distribution and then a Landau-Lifshitz lineshape to describe the nanoparticle resonances. The essential feature of this model is that small particles have narrow linewidths and account for the g ≈ 2 feature with a constant resonance field, whereas larger particles have broad linewidths and undergo a shift in resonance field. The second model assumes uniform particles with a diameter around 4 nm and a random distribution of uniaxial anisotropy axes. This model uses a more precise calculation of the linewidth due to superparamagnetic fluctuations and a random distribution of anisotropies. Sharp features in the spectrum near g ≈ 2 are qualitatively predicted at high temperatures. Both models can account for many features of the observed spectra, although each has deficiencies. The first model leads to a nonphysical increase in magnetic moment as the temperature is increased if a log-normal distribution of particle sizes is used. 
Introducing a bi-modal distribution of particle sizes resolves the unphysical increase in moment with temperature. The second model predicts low-temperature spectra that differ significantly from the observed spectra. The anisotropy energy density K1, determined by fitting the temperature-dependent linewidths, was ~50 kJ/m3, which is considerably larger than that of bulk maghemite. The work presented here indicates that the magnetic properties of these size-constrained nanoparticles and more generally metal oxide nanoparticles with diameters d < 5 nm are complex and that currently existing models are not sufficient for determining their magnetic resonance signatures.
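
The Langevin "anisotropy melting" behaviour invoked by both models can be sketched numerically. In this toy version the linewidth is taken proportional to the Langevin function of the anisotropy-to-thermal-energy ratio; the particle diameter, anisotropy density, and 100 mT low-temperature scale are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x, with the small-x limit x/3."""
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    xs = np.where(small, 1.0, x)  # dummy value avoids division by zero
    return np.where(small, x / 3.0, 1.0 / np.tanh(xs) - 1.0 / xs)

kB = 1.380649e-23        # Boltzmann constant (J/K)
K1 = 50e3                # anisotropy energy density (J/m^3), order of the fitted value
d = 4e-9                 # particle diameter (m)
V = np.pi * d ** 3 / 6   # particle volume (m^3)

T = np.array([5.0, 50.0, 150.0, 300.0])   # temperatures (K)
xi = K1 * V / (kB * T)                    # anisotropy / thermal energy ratio
linewidth = 100.0 * langevin(xi)          # assumed 100 mT low-temperature scale
```

As T rises, xi falls and the effective anisotropy "melts," so the modelled linewidth narrows monotonically, matching the trend reported above.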

  3. Energy-constrained two-way assisted private and quantum capacities of quantum channels

    NASA Astrophysics Data System (ADS)

    Davis, Noah; Shirokov, Maksim E.; Wilde, Mark M.

    2018-06-01

    With the rapid growth of quantum technologies, knowing the fundamental characteristics of quantum systems and protocols is essential for their effective implementation. A particular communication setting that has received increased focus is related to quantum key distribution and distributed quantum computation. In this setting, a quantum channel connects a sender to a receiver, and their goal is to distill either a secret key or entanglement with the help of arbitrary local operations and classical communication (LOCC). In this work, we establish a general theory of energy-constrained, LOCC-assisted private and quantum capacities of quantum channels, which are the maximum rates at which an LOCC-assisted quantum channel can reliably establish a secret key or entanglement, respectively, subject to an energy constraint on the channel input states. We prove that the energy-constrained squashed entanglement of a channel is an upper bound on these capacities. We also explicitly prove that a thermal state maximizes a relaxation of the squashed entanglement of all phase-insensitive, single-mode input bosonic Gaussian channels, generalizing results from prior work. After doing so, we prove that a variation of the method introduced by Goodenough et al. [New J. Phys. 18, 063005 (2016), 10.1088/1367-2630/18/6/063005] leads to improved upper bounds on the energy-constrained secret-key-agreement capacity of a bosonic thermal channel. We then consider a multipartite setting and prove that two known multipartite generalizations of the squashed entanglement are in fact equal. We finally show that the energy-constrained, multipartite squashed entanglement plays a role in bounding the energy-constrained LOCC-assisted private and quantum capacity regions of quantum broadcast channels.

  4. AirSWOT observations versus hydrodynamic model outputs of water surface elevation and slope in a multichannel river

    NASA Astrophysics Data System (ADS)

    Altenau, Elizabeth H.; Pavelsky, Tamlin M.; Moller, Delwyn; Lion, Christine; Pitcher, Lincoln H.; Allen, George H.; Bates, Paul D.; Calmant, Stéphane; Durand, Michael; Neal, Jeffrey C.; Smith, Laurence C.

    2017-04-01

    Anabranching rivers make up a large proportion of the world's major rivers, but quantifying their flow dynamics is challenging due to their complex morphologies. Traditional in situ measurements of water levels collected at gauge stations cannot capture out-of-bank flows and are limited to defined cross sections, which presents an incomplete picture of water fluctuations in multichannel systems. Similarly, current remotely sensed measurements of water surface elevations (WSEs) and slopes are constrained by resolutions and accuracies that limit the visibility of surface waters at global scales. Here, we present new measurements of river WSE and slope along the Tanana River, AK, acquired from AirSWOT, an airborne analogue to the Surface Water and Ocean Topography (SWOT) mission. Additionally, we compare the AirSWOT observations to hydrodynamic model outputs of WSE and slope simulated across the same study area. Results indicate AirSWOT errors are significantly lower than those of the model outputs. When compared to field measurements, RMSE for AirSWOT measurements of WSEs is 9.0 cm when averaged over 1 km² areas and 1.0 cm/km for slopes along 10 km reaches. Also, AirSWOT can accurately reproduce the spatial variations in slope critical for characterizing reach-scale hydraulics, while model outputs of spatial variations in slope are very poor. Combining AirSWOT and future SWOT measurements with hydrodynamic models can result in major improvements in model simulations at local to global scales. Scientists can use AirSWOT measurements to constrain model parameters over long reach distances, improve understanding of the physical processes controlling the spatial distribution of model parameters, and validate models' abilities to reproduce spatial variations in slope. Additionally, AirSWOT and SWOT measurements can be assimilated into lower-complexity models to approach the accuracies achieved by higher-complexity models.
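
The error statistics quoted above are computed on spatially averaged quantities. The sketch below shows, with entirely synthetic numbers (the noise level, cell counts, and reach geometry are all illustrative assumptions), why averaging point WSE measurements over cells before differencing reduces the error relative to raw point comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

n_cells, n_points = 50, 100                  # averaging cells, points per cell
truth = np.linspace(100.0, 99.0, n_cells)    # true WSE falling 1 m over the reach

# Synthetic point observations: truth plus 20 cm of measurement noise
obs = truth[:, None] + rng.normal(0.0, 0.20, size=(n_cells, n_points))

rmse_point = rmse(obs, truth[:, None])       # raw point-by-point error
rmse_cell = rmse(obs.mean(axis=1), truth)    # average within each cell first
```

Averaging n points per cell shrinks the random error roughly by sqrt(n), which is why cell-averaged RMSE figures are so much smaller than raw point errors.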

  5. HIFU scattering by the ribs: constrained optimisation with a complex surface impedance boundary condition

    NASA Astrophysics Data System (ADS)

    Gélat, P.; ter Haar, G.; Saffari, N.

    2014-04-01

    High intensity focused ultrasound (HIFU) enables highly localised, non-invasive tissue ablation and its efficacy has been demonstrated in the treatment of a range of cancers, including those of the kidney, prostate and breast. HIFU offers the ability to treat deep-seated tumours locally, and potentially bears fewer side effects than more established treatment modalities such as resection, chemotherapy and ionising radiation. There remain, however, a number of significant challenges that currently hinder its widespread clinical application. One of these challenges is the need to transmit sufficient energy through the ribcage to ablate tissue at the required foci whilst minimising the formation of side lobes and sparing healthy tissue. Ribs both absorb and reflect ultrasound strongly. This sometimes results in overheating of bone and overlying tissue during treatment, leading to skin burns. Successful treatment of a patient with tumours in the upper abdomen therefore requires a thorough understanding of the way acoustic and thermal energy is deposited. Previously, a boundary element (BE) approach based on a Generalised Minimal Residual (GMRES) implementation of the Burton-Miller formulation was developed to predict the field of a multi-element HIFU array scattered by human ribs, the topology of which was obtained from CT scan data [1]. Dissipative mechanisms inside the propagating medium have since been implemented, together with a complex surface impedance condition at the surface of the ribs. A reformulation of the boundary element equations as a constrained optimisation problem was carried out to determine the complex surface velocities of a multi-element HIFU array which generated the acoustic pressure field that best fitted a required acoustic pressure distribution in a least-squares sense. This was done whilst ensuring that an acoustic dose rate parameter at the surface of the ribs was kept below a specified threshold. 
The methodology was tested at an excitation frequency of 1 MHz on a spherical multi-element array in the presence of anatomical ribs.
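
A much-reduced analogue of that constrained optimisation can be written down directly: choose element drive amplitudes v minimizing the misfit to a target focal pressure distribution while capping a surrogate "dose" at rib control points. The real formulation involves complex surface velocities and boundary-element operators; here the matrices A and B, the target, and the cap are random real-valued stand-ins, all assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_elements, n_focal, n_rib = 8, 12, 5

A = rng.normal(size=(n_focal, n_elements))  # element drive -> focal pressure (stand-in)
B = rng.normal(size=(n_rib, n_elements))    # element drive -> rib-surface dose proxy
p_target = np.ones(n_focal)                 # desired focal pressure pattern
cap = 0.25                                  # max allowed squared dose at any rib point

def misfit(v):
    r = A @ v - p_target
    return 0.5 * float(r @ r)

# Inequality constraints: cap - (B v)^2 >= 0 at every rib control point
constraints = [{"type": "ineq", "fun": lambda v: cap - (B @ v) ** 2}]
res = minimize(misfit, np.zeros(n_elements), method="SLSQP",
               constraints=constraints)
v_opt = res.x
```

The constrained solution trades some focal fidelity for rib sparing: it can never fit the target better than the unconstrained least-squares drive, but it respects the dose cap.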

  6. Robust Constrained Blackbox Optimization with Surrogates

    DTIC Science & Technology

    2015-05-21

    C. Audet and D. Orban. Optimization of algorithms with OPAL. Mathematical Programming Computation, 6(3):233–254, 2014. M.S. Ouali, H. Aoudjit, and C. Audet. Replacement scheduling of a fleet of... DISTRIBUTION A: Distribution

  7. Constraining the Sea Quark Distributions Through W+/- Cross Section Ratio Measurements at STAR

    NASA Astrophysics Data System (ADS)

    Posik, Matthew; STAR Collaboration

    2017-09-01

    Over the years, extractions of parton distribution functions (PDFs) have become more precise; however, there are still regions where more data are needed to improve constraints. One such distribution is the sea quark distribution near the valence region, in particular the d / u distribution. Currently, measurements in the high-x region still have large uncertainties and suggest different trends for this distribution. The charged W cross section ratio is sensitive to the unpolarized sea quark distributions and could be used to help constrain the d / u distribution. Through pp collisions, the STAR experiment at RHIC is well equipped to measure the e+/- leptonic decays of W+/- bosons in the mid-rapidity range | η | <= 1 at √{ s} = 500/510 GeV. At these kinematics STAR is sensitive to quark distributions near Bjorken-x of 0.16. STAR can also extend the sea quark sensitivity to higher x by measuring the leptonic decays in the forward rapidity range 1.1 < η < 2.0. STAR runs from 2011 through 2013 have collected about 350 pb-1 of data. Presented here are preliminary results for the 2011-2012 W cross section ratios (~100 pb-1), and an update on the 2013 W cross section analysis (~250 pb-1).

  8. Multi-Sensor Constrained Time Varying Emissions Estimation of Black Carbon: Attributing Urban and Fire Sources Globally

    NASA Astrophysics Data System (ADS)

    Cohen, J. B.

    2015-12-01

    The short lifetime and heterogeneous distribution of Black Carbon (BC) in the atmosphere leads to complex impacts on radiative forcing, climate, and health, and complicates analysis of its atmospheric processing and emissions. Two recent papers have estimated the global and regional emissions of BC using advanced statistical and computational methods. One used a Kalman Filter, including data from AERONET, NOAA, and other ground-based sources, to estimate global emissions of 17.8+/-5.6 Tg BC/year (with the increase attributable to East Asia, South Asia, Southeast Asia, and Eastern Europe - all regions which have had rapid urban, industrial, and economic expansion). The second additionally used remotely sensed measurements from MISR and a variance maximizing technique, uniquely quantifying fire and urban sources in Southeast Asia, as well as their large year-to-year variability over the past 12 years, leading to increases from 10% to 150%. These new emissions products, when run through our state-of-the-art modelling system of chemistry, physics, transport, removal, radiation, and climate, match 140 ground stations and satellites better in both an absolute and a temporal sense. New work now further includes trace species measurements from OMI, which are used with the variance maximizing technique to constrain the types of emissions sources. Furthermore, land-use change and fire estimation products from MODIS are also included, which provide other constraints on the temporal and spatial nature of the variations of intermittent sources like fires or new permanent sources like expanded urbanization. This talk will introduce a new, top-down constrained, weekly varying BC emissions dataset, show that it produces a better fit with observations, and draw conclusions about the sources and impacts from urbanization on the one hand, and fires on the other. 
Results specific to Southeast and East Asia will demonstrate inter- and intra-annual variations, such as the function of the wet and dry seasons. Further, the impacts of missing data due to cloud coverage and of long-range transport from highly polluted areas to relatively clean downwind areas will be demonstrated. More general results will also be discussed in relation to the global anthropogenic aerosol distribution.
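
The Kalman-filter machinery referred to above reduces, in the scalar case, to a simple precision-weighted update. The sketch below blends a prior emissions estimate with one observation-derived estimate; the numbers are illustrative assumptions, not the study's values.

```python
def kalman_update(x_prior, var_prior, y, var_obs):
    """Scalar Kalman update: blend prior and observation by their variances."""
    gain = var_prior / (var_prior + var_obs)       # weight on the observation
    x_post = x_prior + gain * (y - x_prior)        # precision-weighted mean
    var_post = (1.0 - gain) * var_prior            # posterior variance shrinks
    return x_post, var_post, gain

# Illustrative numbers: prior emissions 14 Tg BC/yr (variance 4),
# observation-derived estimate 18 Tg BC/yr (variance 2)
x_post, var_post, gain = kalman_update(14.0, 4.0, 18.0, 2.0)
```

The posterior lands between the prior and the observation, pulled toward the more certain of the two, and its variance is smaller than either input's.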

  9. Factors Which Impact the Distribution of Leadership for an ICT Reform: Expertise vis-a-vis Formal Role?

    ERIC Educational Resources Information Center

    Ho, Jeanne Marie; Ng, David

    2012-01-01

    This study examined the process of Information Communication Technology reform in a Singapore school. The focus was on distributed leadership actions, and the factors which enabled and constrained the distribution of leadership. This study adopted a naturalistic inquiry approach, involving the case study of a school. The study found that…

  10. A Maximal Entropy Distribution Derivation of the Sharma-Taneja-Mittal Entropic Form

    NASA Astrophysics Data System (ADS)

    Scarfone, Antonio M.

    In this letter we derive the distribution maximizing the Sharma-Taneja-Mittal entropy under certain constraints by using an information inequality satisfied by the Bregman divergence associated to this entropic form. The resulting maximal entropy distribution coincides with the one derived from the calculus according to the maximal entropy principle à la Jaynes.

  11. Structural control on the Tohoku earthquake rupture process investigated by 3D FEM, tsunami and geodetic data

    PubMed Central

    Romano, F.; Trasatti, E.; Lorito, S.; Piromallo, C.; Piatanesi, A.; Ito, Y.; Zhao, D.; Hirata, K.; Lanucara, P.; Cocco, M.

    2014-01-01

    The 2011 Tohoku earthquake (Mw = 9.1) highlighted previously unobserved features for megathrust events, such as the large slip in a relatively limited area and the shallow rupture propagation. We use a Finite Element Model (FEM), taking into account the 3D geometrical and structural complexities up to the trench zone, and perform a joint inversion of tsunami and geodetic data to retrieve the earthquake slip distribution. We obtain a close spatial correlation between the main deep slip patch and the local seismic velocity anomalies, and large shallow slip extending also to the North coherently with a seismically observed low-frequency radiation. These observations suggest that the friction controlled the rupture, initially confining the deeper rupture and then driving its propagation up to the trench, where it spreads laterally. These findings are relevant to earthquake and tsunami hazard assessment because they may help to detect regions likely prone to rupture along the megathrust, and to constrain the probability of high slip near the trench. Our estimate of ~40 m slip value around the JFAST (Japan Trench Fast Drilling Project) drilling zone contributes to constrain the dynamic shear stress and friction coefficient of the fault obtained by temperature measurements to ~0.68 MPa and ~0.10, respectively. PMID:25005351

  12. Vibration control of beams using constrained layer damping with functionally graded viscoelastic cores: theory and experiments

    NASA Astrophysics Data System (ADS)

    El-Sabbagh, A.; Baz, A.

    2006-03-01

    Conventionally, the viscoelastic cores of Constrained Layer Damping (CLD) treatments are made of materials that have uniform shear modulus. Under such conditions, it is well-recognized that these treatments are only effective near their edges where the shear strains attain their highest values. In order to enhance the damping characteristics of the CLD treatments, we propose to manufacture the cores from Functionally Graded ViscoElastic Materials (FGVEM) that have an optimally selected gradient of the shear modulus over the length of the treatments. With such an optimized distribution of the shear modulus, the shear strain can be enhanced, and the energy dissipation can be maximized. The theory governing the vibration of beams treated with CLD that has functionally graded viscoelastic cores is presented using the finite element method (FEM). The predictions of the FEM are validated experimentally for plain beams, beams treated with conventional CLD, and beams with CLD/FGVEM of different configurations. The obtained results indicate a close agreement between theory and experiments. Furthermore, the obtained results demonstrate the effectiveness of the new class of CLD with functionally graded cores in enhancing the energy dissipation over the conventional CLD over a broad frequency band. Extension of the proposed one-dimensional beam/CLD/FGVEM system to more complex structures is a natural next step for the present study.

  13. Mapping the Spatial Distribution of Metal-Bearing Oxides in VY Canis Majoris

    NASA Astrophysics Data System (ADS)

    Burkhardt, Andrew; Booth, S. Tom; Remijan, Anthony; Carroll, Brandon; Ziurys, Lucy M.

    2015-06-01

    The formation of silicate-based dust grains is not well constrained. Despite this, grain surface chemistry is essential to modern astrochemical formation models. In carbon-poor stellar envelopes, such as the red hypergiant VY Canis Majoris (VY CMa), metal-bearing oxides, the building blocks of silicate grains, dominate the grain formation, and thus are a key location to study dust chemistry. TiO_2, only recently detected at radio wavelengths for the first time (Kaminski et al., 2013a), has been proposed to be a critical molecule for silicate grain formation, rather than oxides containing more abundant metals (e.g. Si, Fe, and Mg) (Gail and Sedlmayr, 1998). In addition, other molecules, such as SO_2, have been found to trace shells produced by numerous outflows pushing through the expanding envelope, resulting in a complex velocity structure (Ziurys et al., 2007). With the advanced capabilities of ALMA, it is now possible to individually resolve the velocity structure of each of these outflows and constrain the underlying chemistry in the region. Here, we present high resolution maps of rotational transitions of several metal-bearing oxides in VY CMa from the ALMA Band 7 and Band 9 Science Verification observations. With these maps, the physical parameters of the region and the formation chemistry of metal-bearing oxides will be studied.

  14. Structural control on the Tohoku earthquake rupture process investigated by 3D FEM, tsunami and geodetic data.

    PubMed

    Romano, F; Trasatti, E; Lorito, S; Piromallo, C; Piatanesi, A; Ito, Y; Zhao, D; Hirata, K; Lanucara, P; Cocco, M

    2014-07-09

    The 2011 Tohoku earthquake (Mw = 9.1) highlighted previously unobserved features for megathrust events, such as the large slip in a relatively limited area and the shallow rupture propagation. We use a Finite Element Model (FEM), taking into account the 3D geometrical and structural complexities up to the trench zone, and perform a joint inversion of tsunami and geodetic data to retrieve the earthquake slip distribution. We obtain a close spatial correlation between the main deep slip patch and the local seismic velocity anomalies, and large shallow slip extending also to the North coherently with a seismically observed low-frequency radiation. These observations suggest that the friction controlled the rupture, initially confining the deeper rupture and then driving its propagation up to the trench, where it spreads laterally. These findings are relevant to earthquake and tsunami hazard assessment because they may help to detect regions likely prone to rupture along the megathrust, and to constrain the probability of high slip near the trench. Our estimate of ~40 m slip value around the JFAST (Japan Trench Fast Drilling Project) drilling zone contributes to constrain the dynamic shear stress and friction coefficient of the fault obtained by temperature measurements to ~0.68 MPa and ~0.10, respectively.

  15. On the use of faults and background seismicity in Seismic Probabilistic Tsunami Hazard Analysis (SPTHA)

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Lorito, Stefano; Basili, Roberto; Tonini, Roberto; Tiberti, Mara Monica; Romano, Fabrizio; Perfetti, Paolo; Volpe, Manuela

    2017-04-01

    Most of the SPTHA studies and applications rely on several working assumptions: i) the - mostly offshore - tsunamigenic faults are sufficiently well known; ii) the subduction zone earthquakes dominate the hazard; iii) their location and geometry are sufficiently well constrained. Hence, a probabilistic model is constructed as regards the magnitude-frequency distribution and sometimes the slip distribution of earthquakes occurring on assumed known faults. Then, tsunami scenarios are usually constructed for all earthquake locations, sizes, and slip distributions included in the probabilistic model, through deterministic numerical modelling of tsunami generation, propagation and impact on realistic bathymetries. Here, we adopt a different approach (Selva et al., GJI, 2016) that relaxes some of the above assumptions, considering that i) non-subduction earthquakes may also contribute significantly to SPTHA, depending on the local tectonic context; ii) not all the offshore faults are known or sufficiently well constrained; iii) the faulting mechanism of future earthquakes cannot be considered strictly predictable. This approach uses as much as possible information from known faults which, depending on the amount of available information and on the local tectonic complexity, among other things, are either modelled as Predominant Seismicity (PS) or as Background Seismicity (BS). PS is used when it is possible to assume sufficiently known geometry and mechanism (e.g. for the main subduction zones). Conversely, within the BS approach information on faults is merged with that on past seismicity, dominant stress regime, and tectonic characterisation, to determine a probability density function for the faulting mechanism. 
To illustrate the methodology and its impact on the hazard estimates, we present an application in the NEAM region (Northeast Atlantic, Mediterranean and connected seas), initially designed during the ASTARTE project and now applied for the regional-scale SPTHA in the TSUMAPS-NEAM project funded by DG-ECHO.

  16. Constraining Light-Quark Yukawa Couplings from Higgs Distributions.

    PubMed

    Bishara, Fady; Haisch, Ulrich; Monni, Pier Francesco; Re, Emanuele

    2017-03-24

    We propose a novel strategy to constrain the bottom and charm Yukawa couplings by exploiting Large Hadron Collider (LHC) measurements of transverse momentum distributions in Higgs production. Our method does not rely on the reconstruction of exclusive final states or heavy-flavor tagging. Compared to other proposals, it leads to an enhanced sensitivity to the Yukawa couplings due to distortions of the differential Higgs spectra from emissions which either probe quark loops or are associated with quark-initiated production. We derive constraints using data from LHC run I, and we explore the prospects of our method at future LHC runs. Finally, we comment on the possibility of bounding the strange Yukawa coupling.

  17. Constrained multiple indicator kriging using sequential quadratic programming

    NASA Astrophysics Data System (ADS)

    Soltani-Mohammadi, Saeed; Erhan Tercan, A.

    2012-11-01

    Multiple indicator kriging (MIK) is a nonparametric method used to estimate conditional cumulative distribution functions (CCDF). Indicator estimates produced by MIK may not satisfy the order relations of a valid CCDF, which is ordered and bounded between 0 and 1. In this paper a new method is presented that guarantees the order relations of the cumulative distribution functions estimated by multiple indicator kriging. The method is based on minimizing the sum of kriging variances for each cutoff under unbiasedness and order-relation constraints, and on solving the constrained indicator kriging system by sequential quadratic programming. A computer code written in the Matlab environment implements the developed algorithm, and the method is applied to thickness data.
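
The order-relation correction can be sketched as a small quadratic program solved, as in the paper, by sequential quadratic programming (here SciPy's SLSQP). The objective is simplified to squared deviations from the raw indicator estimates rather than the summed kriging variances, and the cutoff values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Raw MIK estimates at four increasing cutoffs; they violate the order
# relations (not nondecreasing, and one value exceeds 1).
raw = np.array([0.30, 0.25, 0.62, 1.05])

def objective(p):
    return float(np.sum((p - raw) ** 2))

constraints = [{"type": "ineq", "fun": lambda p: np.diff(p)}]  # nondecreasing
bounds = [(0.0, 1.0)] * len(raw)   # a CCDF is bounded between 0 and 1

res = minimize(objective, np.clip(raw, 0.0, 1.0), method="SLSQP",
               bounds=bounds, constraints=constraints)
ccdf = res.x
```

The corrected values pool the out-of-order pair to their common mean and clip the overshoot back to 1, yielding a valid CCDF closest to the raw estimates in the least-squares sense.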

  18. Constraining Light-Quark Yukawa Couplings from Higgs Distributions

    NASA Astrophysics Data System (ADS)

    Bishara, Fady; Haisch, Ulrich; Monni, Pier Francesco; Re, Emanuele

    2017-03-01

    We propose a novel strategy to constrain the bottom and charm Yukawa couplings by exploiting Large Hadron Collider (LHC) measurements of transverse momentum distributions in Higgs production. Our method does not rely on the reconstruction of exclusive final states or heavy-flavor tagging. Compared to other proposals, it leads to an enhanced sensitivity to the Yukawa couplings due to distortions of the differential Higgs spectra from emissions which either probe quark loops or are associated with quark-initiated production. We derive constraints using data from LHC run I, and we explore the prospects of our method at future LHC runs. Finally, we comment on the possibility of bounding the strange Yukawa coupling.

  19. Consistent description of kinetic equation with triangle anomaly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pu Shi; Gao Jianhua; Wang Qun

    2011-05-01

    We provide a consistent description of the kinetic equation with a triangle anomaly which is compatible with the entropy principle of the second law of thermodynamics and the charge/energy-momentum conservation equations. In general an anomalous source term is necessary to ensure that the equations for the charge and energy-momentum conservation are satisfied and that the correction terms of distribution functions are compatible to these equations. The constraining equations from the entropy principle are derived for the anomaly-induced leading order corrections to the particle distribution functions. The correction terms can be determined for the minimum number of unknown coefficients in one-charge and two-charge cases by solving the constraining equations.

  20. Reversibly constraining spliceosome-substrate complexes by engineering disulfide crosslinks.

    PubMed

    McCarthy, Patrick; Garside, Erin; Meschede-Krasa, Yonatan; MacMillan, Andrew; Pomeranz Krummel, Daniel

    2017-08-01

    The spliceosome is a highly dynamic mega-Dalton enzyme, formed in part by assembly of U snRNPs onto its pre-mRNA substrate transcripts. Early steps in spliceosome assembly are challenging to study biochemically and structurally due to compositional and conformational dynamics. We detail an approach to covalently and reversibly constrain, or trap, non-covalent pre-mRNA/protein spliceosome complexes. This approach involves engineering a single disulfide bond between a thiol-bearing cysteine sidechain and a proximal backbone phosphate of the pre-mRNA, site-specifically modified with an N-thioalkyl moiety. When the distance and angle between the reactants are optimal, the sidechain will react with the single N-thioalkyl group to form a crosslink upon oxidation. We provide protocols detailing how this has been applied successfully to trap an 11-subunit RNA-protein assembly, the human U1 snRNP, in complex with a pre-mRNA. Copyright © 2017 Elsevier Inc. All rights reserved.

  1. Unveiling the nucleon tensor charge at Jefferson Lab: A study of the SoLID case

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Zhihong; Sato, Nobuo; Allada, Kalyan

    2017-01-27

    Here, future experiments at the Jefferson Lab 12 GeV upgrade, in particular at the Solenoidal Large Intensity Device (SoLID), aim at a very precise data set in the region where the partonic structure of the nucleon is dominated by the valence quarks. One of the main goals is to constrain the transversity quark distributions. We apply recent theoretical advances in the global QCD extraction of the transversity distributions to study the impact of future experimental data from SoLID. In particular, we develop a model-independent method based on Hessian matrix analysis that allows us to estimate the uncertainties of the transversity quark distributions, and of their tensor charge contributions, extracted from SoLID pseudo-data. Both the u- and d-quark transversity distributions are shown to be very well constrained in the kinematical region of the future experiments with proton and effective neutron targets.
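
    The Hessian-matrix uncertainty estimation mentioned above can be sketched generically: near a chi-square minimum, the parameter covariance is approximately 2 H^-1, where H is the Hessian of chi-square at the best fit. The function below is a hypothetical illustration of that generic recipe, not the authors' analysis code, using central finite differences.

```python
import numpy as np

def hessian_uncertainties(chi2, p0, step=1e-4):
    """Parameter uncertainties from the chi^2 Hessian at the best fit p0:
    Cov = 2 * H^{-1}, with H estimated by central finite differences."""
    p0 = np.asarray(p0, dtype=float)
    n = len(p0)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = step
            e_j = np.zeros(n); e_j[j] = step
            # Four-point stencil for the mixed second derivative.
            H[i, j] = (chi2(p0 + e_i + e_j) - chi2(p0 + e_i - e_j)
                       - chi2(p0 - e_i + e_j) + chi2(p0 - e_i - e_j)) / (4 * step ** 2)
    cov = 2.0 * np.linalg.inv(H)
    return np.sqrt(np.diag(cov)), cov
```

    On a quadratic chi-square of the form sum((p_i - c_i)^2 / sigma_i^2), the recovered uncertainties are exactly the sigma_i, as expected.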

  2. Adaptive Multi-Agent Systems for Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Macready, William; Bieniawski, Stefan; Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is a new framework for analyzing and controlling distributed systems. Here we demonstrate its use for distributed stochastic optimization. First we review one motivation of PD theory, as the information-theoretic extension of conventional full-rationality game theory to the case of bounded-rational agents. In this extension the equilibrium of the game is the optimizer of a Lagrangian of the probability distribution over the joint state of the agents. When the game in question is a team game with constraints, that equilibrium optimizes the expected value of the team game utility, subject to those constraints. The updating of the Lagrange parameters in the Lagrangian can be viewed as a form of automated annealing that focuses the MAS more and more on the optimal pure strategy. This provides a simple way to map the solution of any constrained optimization problem onto the equilibrium of a Multi-Agent System (MAS). We present computer experiments involving both the Queens problem and K-SAT, validating the predictions of PD theory and its use for off-the-shelf distributed adaptive optimization.
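
    A minimal sketch of the bounded-rational equilibrium described above, for a two-agent team game: each agent plays a Boltzmann (mean-field) distribution over its moves given the expected cost under the other agent's current mixed strategy, and a low temperature T anneals the product distribution toward the optimal pure strategy. This toy omits the constraint handling and Lagrange-parameter updates of the full framework; all names and numbers are illustrative.

```python
import numpy as np

def pd_equilibrium(G, T=0.1, n_iter=200):
    """Mean-field sketch of a Product-Distribution-style equilibrium for a
    2-agent team game with joint cost matrix G[x1, x2]. Each agent's mixed
    strategy is a Boltzmann distribution over its moves, evaluated against
    the other agent's current distribution. Lower T = closer to full
    rationality (the entropy term in the Lagrangian matters less)."""
    n1, n2 = G.shape
    q1 = np.full(n1, 1.0 / n1)
    q2 = np.full(n2, 1.0 / n2)
    for _ in range(n_iter):
        e1 = G @ q2                           # expected cost of each move of agent 1
        q1 = np.exp(-(e1 - e1.min()) / T); q1 /= q1.sum()
        e2 = G.T @ q1                         # expected cost of each move of agent 2
        q2 = np.exp(-(e2 - e2.min()) / T); q2 /= q2.sum()
    return q1, q2
```

    For a cost matrix whose minimum lies at joint move (0, 0), a small T drives both marginals to concentrate on move 0, mimicking the annealing toward the optimal pure strategy.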

  3. Gatekeepers in the healthcare sector: Knowledge and Bourdieu's concept of field.

    PubMed

    Collyer, Fran M; Willis, Karen F; Lewis, Sophie

    2017-08-01

    Choice is an imperative for patients in the Australian healthcare system. The complexity of this healthcare 'maze', however, means that successfully navigating and making choices depends not only on the decisions of patients, but also on other key players in the healthcare sector. Utilising Bourdieu's concepts of capital, habitus and field, we analyse the role of gatekeepers (i.e., those who control access to resources, services and knowledge) in shaping patients' experiences of healthcare, and in producing opportunities that enable or constrain their choices. In-depth interviews were conducted with 41 gatekeepers (GPs, specialists, nurses, hospital administrators and policymakers), exploring how they acquire and use knowledge within the healthcare system. Our findings reveal a hierarchy of knowledges and power within the healthcare field which determines the forms of knowledge that are legitimate and can operate as capital within this complex and dynamic arena. As a consequence, forms of knowledge which can operate as capital are unequally distributed and strategically controlled, ensuring that democratic 'reform' remains difficult and 'choices' remain limited to those beneficial to private medicine. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. The value of qualitative conclusions for the interpretation of Super Soft Source grating spectra

    NASA Astrophysics Data System (ADS)

    Ness, J.

    2017-10-01

    High-resolution (grating) X-ray spectra of Super Soft Sources (SSS) contain a large amount of information. Mainstream interpretation approaches apply radiation transport models that, if uniquely constrained by the data, would provide information about the temperature and mass of the underlying white dwarf and the chemical composition of the ejecta. The complexity of the grating spectra has so far prohibited unique conclusions, because realistic effects such as an inhomogeneous density distribution, asymmetric ejecta, expansion, etc. open up an almost infinite number of dimensions to the problem. Further development of models is no doubt needed, but unbiased inspection of the observed spectra can narrow down where new developments are needed. In this presentation I illustrate how much we can already conclude without any models and remind the reader of the value of qualitative conclusions. I show examples of past and recent observations and how comparisons with other observations help us to reveal common mechanisms. Despite the high degree of complexity, some astonishing similarities between very different systems are found, which can tailor the development of new models.

  5. Dendritic Glutamate Receptor mRNAs Show Contingent Local Hotspot-Dependent Translational Dynamics

    PubMed Central

    Kim, Tae Kyung; Sul, Jai-Yoon; Helmfors, Henrik; Langel, Ulo; Kim, Junhyong; Eberwine, James

    2014-01-01

    Protein synthesis in neuronal dendrites underlies long-term memory formation in the brain. Local translation of reporter mRNAs has demonstrated translation in dendrites at focal points called translational hotspots. Various reports have shown that hundreds to thousands of mRNAs are localized to dendrites, yet the dynamics of translation of multiple dendritic mRNAs has remained elusive. Here, we show that the protein translational activities of two dendritically localized mRNAs are spatiotemporally complex but constrained by the translational hotspots in which they are colocalized. Cotransfection of glutamate receptor 2 (GluR2) and GluR4 mRNAs (engineered to encode different fluorescent proteins) into rat hippocampal neurons demonstrates a heterogeneous distribution of translational hotspots for the two mRNAs along dendrites. Stimulation with S-3,5-dihydroxyphenylglycine modifies the translational dynamics of both of these RNAs in a complex, saturable manner. These results suggest that the translational hotspot is a primary structural regulator of the simultaneous yet differential translation of multiple mRNAs in the neuronal dendrite. PMID:24075992

  6. Hierarchical organization of functional connectivity in the mouse brain: a complex network approach.

    PubMed

    Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano

    2016-08-18

    This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.
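
    The Minimal Spanning Forest idea above, keeping the subnetwork with the strongest internal correlations, can be approximated by a maximum spanning tree over the correlation weights. The sketch below is an illustration of that construction (Kruskal's algorithm with union-find), not the authors' pipeline.

```python
import numpy as np

def max_spanning_tree_edges(corr):
    """Edges of the maximum spanning tree of a correlation matrix:
    Kruskal's algorithm on edges sorted by decreasing |correlation|,
    so the retained tree links nodes through their strongest
    correlations (a stand-in for a Minimal Spanning Forest step)."""
    corr = np.abs(np.asarray(corr, dtype=float))
    n = corr.shape[0]
    edges = sorted(((corr[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    parent = list(range(n))

    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                  # adding (i, j) creates no cycle
            parent[ri] = rj
            tree.append((i, j))
    return sorted(tree)
```

    On a toy 4-node matrix whose strongest pairs are (0,1), (1,2) and (2,3), exactly those three edges survive.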

  7. Hierarchical organization of functional connectivity in the mouse brain: a complex network approach

    NASA Astrophysics Data System (ADS)

    Bardella, Giampiero; Bifone, Angelo; Gabrielli, Andrea; Gozzi, Alessandro; Squartini, Tiziano

    2016-08-01

    This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges.

  8. The Effects of Protostellar Disk Turbulence on CO Emission Lines: A Comparison Study of Disks with Constant CO Abundance versus Chemically Evolving Disks

    NASA Astrophysics Data System (ADS)

    Yu, Mo; Evans, Neal J., II; Dodson-Robinson, Sarah E.; Willacy, Karen; Turner, Neal J.

    2017-12-01

    Turbulence is the leading candidate for angular momentum transport in protoplanetary disks and therefore influences disk lifetimes and planet formation timescales. However, the turbulent properties of protoplanetary disks are poorly constrained observationally. Recent studies have found turbulent speeds smaller than what fully developed MRI would produce (Flaherty et al.). However, existing studies assumed a constant CO/H2 ratio of 10^-4 in locations where CO is not frozen out or photo-dissociated. Our previous studies of evolving disk chemistry indicate that CO is depleted by incorporation into complex organic molecules well inside the freeze-out radius of CO. We consider the effects of this chemical depletion on measurements of turbulence. Simon et al. suggested that the ratio of the peak line flux to the flux at line center of the CO J = 3-2 transition is a reasonable diagnostic of turbulence, so we focus on that metric, while adding some analysis of the more complex effects on spatial distribution. We simulate the emission lines of CO based on the chemical evolution models presented in Yu et al., and find that the peak-to-trough ratio changes as a function of time as CO is destroyed. Specifically, a CO-depleted disk with a high turbulent velocity mimics the peak-to-trough ratios of a non-CO-depleted disk with a lower turbulent velocity. We suggest that disk observers and modelers take into account the possibility of CO depletion when using line profiles or peak-to-trough ratios to constrain the degree of turbulence in disks. Assuming that CO/H2 = 10^-4 at all disk radii can lead to underestimates of turbulent speeds in the disk by at least 0.2 km s^-1.
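
    The peak-to-trough diagnostic above is straightforward to compute from a line profile. The sketch below builds a synthetic double-peaked disk profile whose horn width grows with an assumed turbulent broadening (all numbers are illustrative, not from the paper) and shows that the ratio of peak flux to line-center flux drops as turbulence increases.

```python
import numpy as np

def peak_to_trough(velocity, flux, v_center=0.0):
    """Ratio of the peak line flux to the flux at line center: the
    turbulence diagnostic discussed in the text. Stronger turbulence
    smears the double-peaked profile and lowers this ratio."""
    trough = np.interp(v_center, velocity, flux)
    return flux.max() / trough

def horn_profile(v, v_turb):
    """Toy double-peaked profile: two Gaussian horns at +/-1.5 km/s whose
    width combines a fixed thermal term with a turbulent term v_turb."""
    width = np.hypot(0.3, v_turb)   # thermal + turbulent broadening (km/s)
    return (np.exp(-((v - 1.5) / width) ** 2) +
            np.exp(-((v + 1.5) / width) ** 2))

v = np.linspace(-10.0, 10.0, 2001)  # velocity grid (km/s)
```

    With these toy numbers, a weakly turbulent profile (v_turb = 0.1) has a far larger peak-to-trough ratio than a strongly turbulent one (v_turb = 1.0), which is the degeneracy direction the abstract warns about.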

  9. Understanding Subsurface Geoelectrical and Structural Constraints for Low Frequency Radar Sounding of Jovian Satellites

    NASA Astrophysics Data System (ADS)

    Heggy, Essam; Bruzzone, Lorenzo; Beck, Pierre; Doute, Sylvain; Gim, Youngyu; Herique, Alain; Kofman, Wlodek; Orosei, Roberto; Plaut, Jeffery; Rosen, Paul; Seu, Roberto

    2010-05-01

    Thermally stable ice sheets on Earth are known to be among the most favorable geophysical contexts for deep subsurface sounding radars. Penetrations ranging from a few to several hundreds of meters have been observed at 10 to 60 MHz when sounding homogeneous and pure ice sheets in Antarctica and in Alaskan glaciers. Unlike the terrestrial case, ice sheets on the Jovian satellites are older formations with a more complex matrix of mineral inclusions, with an uneven three-dimensional distribution in the surface and subsurface that is yet to be understood in order to quantify its effect on the dielectric attenuation at the experiment sounding frequencies. Moreover, ridges and tectonic and shock features may result in a complex and heterogeneous subsurface structure that can induce scattering attenuation of different amplitudes depending on the subsurface heterogeneity level. Such attenuation phenomena have to be accounted for in the instrument design and future data analysis in order to optimize the science return, reduce mission risk and define proper operation modes. In order to address those challenges in the current performance studies and instrument design of the proposed radar sounding experiments, we present an attempt to quantify both the dielectric and scattering losses on the icy satellites Ganymede and Europa, based on experimental dielectric characterization of relevant icy-dust mixture samples, field work in analog environments and radar propagation simulations in parametric subsurface geophysical models representing potential geological scenarios of the two Jovian satellites. Our preliminary results suggest that the use of a dual-band radar enables us to overcome several of these constraints and reduces ambiguities associated with subsurface interface mapping. Acknowledgement: This research is carried out by the Jet Propulsion Laboratory/Caltech, under a grant from the National Aeronautics and Space Administration.

  10. Innovative Socio-Technical Environments in Support of Distributed Intelligence and Lifelong Learning

    ERIC Educational Resources Information Center

    Fischer, G; Konomi, S.

    2007-01-01

    Individual, unaided human abilities are constrained. Media have helped us to transcend boundaries in thinking, working, learning and collaborating by supporting "distributed intelligence". Wireless and mobile technologies provide new opportunities for creating novel socio-technical environments and thereby empowering humans, but not without…

  11. Genome Informed Trait-Based Models

    NASA Astrophysics Data System (ADS)

    Karaoz, U.; Cheng, Y.; Bouskill, N.; Tang, J.; Beller, H. R.; Brodie, E.; Riley, W. J.

    2013-12-01

    Trait-based approaches are powerful tools for representing microbial communities across both spatial and temporal scales within ecosystem models. Trait-based models (TBMs) represent the diversity of microbial taxa as stochastic assemblages with a distribution of traits constrained by trade-offs between these traits. Such representation, with its built-in stochasticity, allows the elucidation of the interactions between the microbes and their environment by reducing the complexity of microbial community diversity into a limited number of functional 'guilds' and letting them emerge across spatio-temporal scales. From the biogeochemical/ecosystem modeling perspective, the emergent properties of the microbial community can be directly translated into predictions of biogeochemical reaction rates and microbial biomass. The accuracy of TBMs depends on the identification of key traits of the microbial community members and on the parameterization of these traits. Current approaches to inform TBM parameterization are empirical (i.e., based on literature surveys). Advances in omic technologies (such as genomics, metagenomics, metatranscriptomics, and metaproteomics) pave the way to better initialize models that can be constrained in a generic or site-specific fashion. Here we describe the coupling of metagenomic data to the development of a TBM representing the dynamics of metabolic guilds from an organic-carbon-stimulated groundwater microbial community. Illumina paired-end metagenomic data were collected from the community as it transitioned successively through electron-accepting conditions (nitrate-, sulfate-, and Fe(III)-reducing), and used to inform estimates of growth rates and the distribution of metabolic pathways (i.e., aerobic and anaerobic oxidation, fermentation) across a spatially resolved TBM. We use this model to evaluate the emergence of different metabolisms and predict rates of biogeochemical processes over time.
We compare our results to observational outputs.

  12. Dark Energy Survey Year 1 Results: Cross-Correlation Redshifts in the DES -- Calibration of the Weak Lensing Source Redshift Distributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, C.; et al.

    We present the calibration of the Dark Energy Survey Year 1 (DES Y1) weak lensing source galaxy redshift distributions from clustering measurements. By cross-correlating the positions of source galaxies with luminous red galaxies selected by the redMaGiC algorithm, we measure the redshift distributions of the source galaxies as placed into different tomographic bins. These measurements constrain shifts in the mean of each bin's redshift distribution to an accuracy of ~0.02 and can be computed even when the clustering measurements do not span the full redshift range. The highest-redshift source bin is not constrained by the clustering measurements because of its minimal redshift overlap with the redMaGiC galaxies. We compare our constraints with those obtained from COSMOS 30-band photometry and find that our two very different methods produce consistent constraints.

  13. An approach to constrained aerodynamic design with application to airfoils

    NASA Technical Reports Server (NTRS)

    Campbell, Richard L.

    1992-01-01

    An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
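
    As a toy illustration of the constraint-adjustment step (a hypothetical simplification, not the DISC code): if the target pressure distribution must satisfy a lift constraint, and lift is approximated as the integral of the lower-minus-upper pressure difference over the chord, then a constant shift of the upper-surface target Cp meets the constraint in a single exact step, because lift depends linearly on that shift.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integral of y over x (kept local for portability)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def adjust_target_cp(x, cp_upper, cp_lower, cl_target):
    """Toy constraint step for a target-pressure design loop:
    shift the upper-surface target Cp by a constant so the integrated
    lift  Cl = integral of (Cp_lower - Cp_upper) dx  equals cl_target.
    Cl(d) = Cl(0) + d * (x[-1] - x[0]) is linear in the shift d,
    so one Newton step is exact."""
    cl = trapz(cp_lower - cp_upper, x)
    d = (cl_target - cl) / (x[-1] - x[0])
    return cp_upper - d
```

    In a real design method the adjustment would be distributed over control points and combined with the other flow and geometric constraints; the point here is only the linearity that makes such level adjustments cheap.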

  14. Sparse Poisson noisy image deblurring.

    PubMed

    Carlavan, Mikael; Blanc-Féraud, Laure

    2012-04-01

    Deblurring noisy Poisson images has recently been the subject of an increasing amount of work in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundred nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter that weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators that takes advantage of confocal images. Following these estimators, we then propose to express the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on a dictionary composed of curvelets and an undecimated wavelet transform.
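
    As a point of reference for the constrained Poisson formulation above, the classical unregularized Poisson maximum-likelihood deconvolution is the Richardson-Lucy iteration, sketched here in 1-D with circular boundary conditions. This is the baseline such regularized methods improve on, not the authors' algorithm.

```python
import numpy as np

def cconv(a, k):
    """Circular convolution via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(k)))

def richardson_lucy(observed, psf, n_iter=50):
    """Unregularized Poisson maximum-likelihood deconvolution
    (Richardson-Lucy). 1-D, circular boundaries; psf is centered on
    index 0. The multiplicative update keeps the estimate nonnegative
    and conserves total flux."""
    psf = psf / psf.sum()
    adj = np.roll(psf[::-1], 1)            # kernel of the adjoint operator
    est = np.full(len(observed), observed.mean(), dtype=float)
    for _ in range(n_iter):
        model = cconv(est, psf)
        ratio = observed / np.maximum(model, 1e-12)
        est *= cconv(ratio, adj)           # multiplicative Poisson ML update
    return est
```

    On noiseless data the iteration sharpens blurred point sources while preserving total flux; on noisy data it eventually amplifies noise, which is precisely what the explicit priors discussed in the abstract are meant to control.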

  15. Cooperative Multi-Agent Mobile Sensor Platforms for Jet Engine Inspection: Concept and Implementation

    NASA Technical Reports Server (NTRS)

    Litt, Jonathan S.; Wong, Edmond; Krasowski, Michael J.; Greer, Lawrence C.

    2003-01-01

    Cooperative behavior algorithms utilizing swarm intelligence are being developed for mobile sensor platforms to inspect jet engines on-wing. Experiments are planned in which several relatively simple autonomous platforms will work together in a coordinated fashion to carry out complex maintenance-type tasks within the constrained working environment modeled on the interior of a turbofan engine. The algorithms will emphasize distribution of the tasks among multiple units; they will be scalable and flexible so that units may be added in the future; and will be designed to operate on an individual unit level to produce the desired global effect. This proof of concept demonstration will validate the algorithms and provide justification for further miniaturization and specialization of the hardware toward the true application of on-wing in situ turbine engine maintenance.

  16. Commentary on "Distributed Revisiting: An Analytic for Retention of Coherent Science Learning"

    ERIC Educational Resources Information Center

    Hewitt, Jim

    2015-01-01

    The article, "Distributed Revisiting: An Analytic for Retention of Coherent Science Learning" is an interesting study that operates at the intersection of learning theory and learning analytics. The authors observe that the relationship between learning theory and research in the learning analytics field is constrained by several…

  17. Conditional Entropy-Constrained Residual VQ with Application to Image Coding

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Chung, Wilson C.; Smith, Mark J. T.

    1996-01-01

    This paper introduces an extension of entropy-constrained residual vector quantization (VQ) where intervector dependencies are exploited. The method, which we call conditional entropy-constrained residual VQ, employs a high-order entropy conditioning strategy that captures local information in the neighboring vectors. When applied to coding images, the proposed method is shown to achieve better rate-distortion performance than that of entropy-constrained residual vector quantization with less computational complexity and lower memory requirements. Moreover, it can be designed to support progressive transmission in a natural way. It is also shown to outperform some of the best predictive and finite-state VQ techniques reported in the literature. This is due partly to the joint optimization between the residual vector quantizer and a high-order conditional entropy coder as well as the efficiency of the multistage residual VQ structure and the dynamic nature of the prediction.
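
    The gain exploited by the conditioning strategy above rests on a basic identity: H(X|Y) = H(X,Y) - H(Y) <= H(X), so an entropy coder conditioned on neighboring vectors never needs more bits on average than an unconditional one. A minimal sketch of that computation (illustrative only, not the paper's coder):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def conditional_entropy(joint):
    """H(X|Y) = H(X,Y) - H(Y) from a joint probability table p(x, y).
    Conditioning can only reduce entropy, which is why a high-order
    conditional entropy coder can spend fewer bits per index."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    return entropy(joint) - entropy(joint.sum(axis=0))
```

    For a correlated pair of binary symbols, H(X|Y) drops well below H(X) = 1 bit; for independent symbols the two coincide and conditioning buys nothing.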

  18. Strong-lensing analysis of MACS J0717.5+3745 from Hubble Frontier Fields observations: How well can the mass distribution be constrained?

    NASA Astrophysics Data System (ADS)

    Limousin, M.; Richard, J.; Jullo, E.; Jauzac, M.; Ebeling, H.; Bonamigo, M.; Alavi, A.; Clément, B.; Giocoli, C.; Kneib, J.-P.; Verdugo, T.; Natarajan, P.; Siana, B.; Atek, H.; Rexroth, M.

    2016-04-01

    We present a strong-lensing analysis of MACSJ0717.5+3745 (hereafter MACS J0717), based on the full depth of the Hubble Frontier Field (HFF) observations, which brings the number of multiply imaged systems to 61, ten of which have been spectroscopically confirmed. The total number of images comprised in these systems rises to 165, compared to 48 images in 16 systems before the HFF observations. Our analysis uses a parametric mass reconstruction technique, as implemented in the Lenstool software, and the subset of the 132 most secure multiple images to constrain a mass distribution composed of four large-scale mass components (spatially aligned with the four main light concentrations) and a multitude of galaxy-scale perturbers. We find a superposition of cored isothermal mass components to provide a good fit to the observational constraints, resulting in a very shallow mass distribution for the smooth (large-scale) component. Given the implications of such a flat mass profile, we investigate whether a model composed of "peaky" non-cored mass components can also reproduce the observational constraints. We find that such a non-cored mass model reproduces the observational constraints equally well, in the sense that both models give comparable total rms. Although the total (smooth dark matter component plus galaxy-scale perturbers) mass distributions of both models are consistent, as are the integrated two-dimensional mass profiles, we find that the smooth and the galaxy-scale components are very different. We conclude that, even in the HFF era, the generic degeneracy between smooth and galaxy-scale components is not broken, in particular in such a complex galaxy cluster. Consequently, insights into the mass distribution of MACS J0717 remain limited, emphasizing the need for additional probes beyond strong lensing. Our findings also have implications for estimates of the lensing magnification. 
We show that the amplification difference between the two models is larger than the error associated with either model, and that this additional systematic uncertainty is approximately the difference in magnification obtained by the different groups of modelers using pre-HFF data. This uncertainty decreases the area of the image plane where we can reliably study the high-redshift Universe by 50 to 70%.

  19. Stochastic Computations in Cortical Microcircuit Models

    PubMed Central

    Maass, Wolfgang

    2013-01-01

    Experimental data from neuroscience suggest that a substantial amount of knowledge is stored in the brain in the form of probability distributions over network states and trajectories of network states. We provide a theoretical foundation for this hypothesis by showing that even very detailed models for cortical microcircuits, with data-based diverse nonlinear neurons and synapses, have a stationary distribution of network states and trajectories of network states to which they converge exponentially fast from any initial state. We demonstrate that this convergence holds in spite of the non-reversibility of the stochastic dynamics of cortical microcircuits. We further show that, in the presence of background network oscillations, separate stationary distributions emerge for different phases of the oscillation, in accordance with experimentally reported phase-specific codes. We complement these theoretical results by computer simulations that investigate resulting computation times for typical probabilistic inference tasks on these internally stored distributions, such as marginalization or marginal maximum-a-posteriori estimation. Furthermore, we show that the inherent stochastic dynamics of generic cortical microcircuits enables them to quickly generate approximate solutions to difficult constraint satisfaction problems, where stored knowledge and current inputs jointly constrain possible solutions. This provides a powerful new computing paradigm for networks of spiking neurons, that also throws new light on how networks of neurons in the brain could carry out complex computational tasks such as prediction, imagination, memory recall and problem solving. PMID:24244126

  20. Made-to-measure modelling of observed galaxy dynamics

    NASA Astrophysics Data System (ADS)

    Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.

    2018-01-01

    Among dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is one of the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
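
    The core of the standard M2M method, nudging particle weights so model observables match data, can be sketched in a toy form with fixed particle positions and a binned density as the only observable. This is an illustration of the force-of-change idea only, not the paper's full algorithm (which evolves orbits, adds entropy regularization and samples the weight uncertainties).

```python
import numpy as np

def m2m_fit_weights(samples, bin_edges, target_counts, eps=0.05, n_steps=2000):
    """Toy made-to-measure loop: particle weights w_i are nudged so the
    weighted histogram of fixed particle positions matches a target density.
    Force of change: dw_i ~ -eps * w_i * Delta_j(i), where
    Delta_j = (model_j - target_j) / target_j for the bin j containing i."""
    bins = np.digitize(samples, bin_edges) - 1
    target = target_counts / target_counts.sum()
    w = np.full(len(samples), 1.0 / len(samples))
    for _ in range(n_steps):
        model = np.bincount(bins, weights=w, minlength=len(target))
        delta = (model - target) / target
        w *= 1.0 - eps * delta[bins]       # multiplicative weight update
        w /= w.sum()                       # keep total weight normalized
    return w
```

    Starting from uniform weights, the loop converges geometrically: each bin's total weight relaxes toward its target fraction.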

  1. Constraining metasomatism in the oceanic lithosphere

    NASA Astrophysics Data System (ADS)

    Plümper, Oliver; Beinlich, Andreas; Austrheim, Håkon

    2010-05-01

    Serpentinization is the most prominent fluid-mediated alteration process in the oceanic lithosphere, but the physical and chemical conditions of this process are difficult to constrain. It is crucial to establish a framework of mineralogical markers that constrain (a) whether the reaction proceeded without substantial addition of elements from the fluid (isochemical), (b) whether the reaction is isovolumetric, generating no internal stresses, and (c) whether the overall system was closed with respect to certain elements. We have examined ophiolitic metaperidotites from Norway, combining microtextural and microchemical observations to gain further insight into the complex fluid-mediated phase transformations occurring during the alteration of the oceanic lithosphere. Serpentinization can be isovolumetric, resulting in pseudomorphic mineral replacement reactions (e.g. Viti et al., 2005), or produce an observable volume increase (e.g. Shervais et al., 2005). In the case of olivine, the ideal reaction is commonly written as forsteritic olivine reacting to lizardite and brucite, i.e. 2 Mg2SiO4 + 3 H2O → Mg3[Si2O5](OH)4 + Mg(OH)2, implying a total volume increase of approximately 20%. However, if Mg was lost from the system, the reaction can also be written as 2 Mg2SiO4 + 2 H+ + H2O → Mg3[Si2O5](OH)4 + Mg2+. This suggests that the solid volume is preserved and no internal stresses are generated. Therefore, the presence of brucite could be used to constrain volumetric changes during serpentinization. However, the small size and sparse distribution of brucite make it difficult to find in serpentinized metaperidotites. Here we show that micro-Raman spectroscopy is a reliable tool to identify even nanometer-sized brucite in serpentine. In addition, we also used the electron backscatter diffraction (EBSD) technique to identify volume increase, illustrated by the progressive change of olivine orientation at the tip of a crack induced by serpentinization.
Furthermore, it is important to constrain the degree of system openness and the transport of elements through the fluid phase. Observations from fractures in metapyroxenite layers from the Røragen-Feragen ultramafic complex provide closer insight into the late stage alteration of the oceanic lithosphere. Detailed electron microscopy reveals that these fractures are filled with polyhedral serpentine, indicating late stage open system conditions (Andreani et al., 2007). However, microtextures and reactive transport modeling suggest that Ca from clinopyroxene dissolution in the metapyroxenite layers was instantaneously precipitated as andradite within the fracture, without major Ca transport. Hence, although the overall system can be regarded as open for water, Ca exhibits closed system behavior on the decimeter scale within the metapyroxenite layers. Our observations show that mineralogical and microtextural markers, such as characteristic phases, their spatial relationship and stress generation associated with replacement, provide an insight into the metasomatic conditions of oceanic lithosphere alteration. References: Andreani et al. (2007), Geochem. Geophys. Geosyst., 8 (2). Shervais et al. (2005), Int. Geol. Rev., 47, 1-23. Viti et al. (2005) Min. Mag., 69 (2), 491-507.

  2. Status Report on Speech Research. A Report on the Status and Progress of Studies on the Nature of Speech, Instrumentation for Its Investigation, and Practical Applications.

    DTIC Science & Technology

    1984-08-01

    6, 391-395. Abbs, J. H., & Gracco, V. L. (in press). Control of complex motor gestures and orofacial muscle responses to load perturbations of the...E2, and E, are on the same world line where E. is causally constrained by E2 and E. is causally constrained by E1. You take pains to note that the

  3. Optimal synchronization in space

    NASA Astrophysics Data System (ADS)

    Brede, Markus

    2010-02-01

    In this Rapid Communication we investigate spatially constrained networks that realize optimal synchronization properties. After arguing that spatial constraints can be imposed by limiting the amount of “wire” available to connect nodes distributed in space, we use numerical optimization methods to construct networks that realize different trade-offs between optimal synchronization and spatial constraints. Over a large range of parameters such optimal networks are found to have a link length distribution characterized by power-law tails P(l) ∝ l^(-α), with exponents α increasing as the networks become more constrained in space. It is also shown that the optimal networks, which constitute a particular type of small-world network, are characterized by the presence of nodes of distinctly larger than average degree around which long-distance links are centered.
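    A power-law tail exponent α of the kind reported above can be estimated from sampled link lengths with the continuous maximum-likelihood (Hill) estimator. The sketch below is purely illustrative, using synthetic lengths rather than the paper's optimized networks:

    ```python
    import numpy as np

    def fit_power_law_exponent(lengths, l_min):
        """Maximum-likelihood (Hill) estimate of alpha for P(l) ~ l^(-alpha), l >= l_min."""
        l = np.asarray(lengths, dtype=float)
        l = l[l >= l_min]
        return 1.0 + len(l) / np.sum(np.log(l / l_min))

    # Synthetic check: inverse-transform sampling from P(l) ~ l^(-2.5), l >= 1.
    rng = np.random.default_rng(0)
    u = rng.random(10000)
    samples = (1.0 - u) ** (-1.0 / (2.5 - 1.0))
    alpha_hat = fit_power_law_exponent(samples, 1.0)   # close to the true 2.5
    ```

    In practice a cutoff l_min must also be chosen (e.g., by a goodness-of-fit scan), since the power law only describes the tail of the link-length distribution.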

  4. Maximum entropy modeling of metabolic networks by constraining growth-rate moments predicts coexistence of phenotypes

    NASA Astrophysics Data System (ADS)

    De Martino, Daniele

    2017-12-01

    In this work, maximum entropy distributions in the space of steady states of metabolic networks are considered upon constraining the first and second moments of the growth rate. Coexistence of fast and slow phenotypes, with bimodal flux distributions, emerges upon considering control on the average growth (optimization) and its fluctuations (heterogeneity). This is applied to the carbon catabolic core of Escherichia coli, where it quantifies the metabolic activity of slow-growing phenotypes and provides a quantitative map with metabolic fluxes, opening the possibility to detect coexistence from flux data. A preliminary analysis of data for E. coli cultures in standard conditions shows degeneracy of the inferred parameters, which extends into the coexistence region.
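    A maximum-entropy distribution constrained on first and second moments has the exponential-family form p(x) ∝ exp(λ1 x + λ2 x²), with the multipliers obtainable by gradient ascent on the dual problem. A minimal numerical sketch on a discretized growth-rate axis (illustrative target values, not the E. coli model of the paper):

    ```python
    import numpy as np

    def maxent_two_moments(x, target_mean, target_m2, lr=0.05, iters=20000):
        """Maximum-entropy distribution on grid x with E[x] and E[x^2] constrained.
        p(x) ~ exp(lam1*x + lam2*x^2); multipliers found by dual gradient ascent."""
        lam1 = lam2 = 0.0
        for _ in range(iters):
            w = np.exp(lam1 * x + lam2 * x**2)
            p = w / w.sum()
            lam1 += lr * (target_mean - p @ x)    # push E[x] toward its target
            lam2 += lr * (target_m2 - p @ x**2)   # push E[x^2] toward its target
        return p

    x = np.linspace(-1.0, 3.0, 401)
    p = maxent_two_moments(x, target_mean=1.0, target_m2=1.25)  # mean 1, variance 0.25
    ```

    On an unbounded support this recovers a (discretized) Gaussian; richer constraint sets, as in the paper, produce non-Gaussian and possibly bimodal shapes.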

  5. Identifying Flow Networks in a Karstified Aquifer by Application of the Cellular Automata-Based Deterministic Inversion Method (Lez Aquifer, France)

    NASA Astrophysics Data System (ADS)

    Fischer, P.; Jardani, A.; Wang, X.; Jourde, H.; Lecoq, N.

    2017-12-01

    The distributed modeling of flow paths within karstic and fractured fields remains a complex task because of the high dependence of the hydraulic responses on the relative locations between observational boreholes and the interconnected fractures and karstic conduits that control the main flow of the hydrosystem. The inverse problem in a distributed model is one alternative approach to interpret the hydraulic test data by mapping the karstic networks and fractured areas. In this work, we developed a Bayesian inversion approach, the Cellular Automata-based Deterministic Inversion (CADI) algorithm, to infer the spatial distribution of hydraulic properties in a structurally constrained model. This method distributes hydraulic properties along linear structures (i.e., flow conduits) and iteratively modifies the structural geometry of this conduit network to progressively match the modeled hydraulic data to the observed ones. As a result, this method produces a conductivity model that is composed of a discrete conduit network embedded in the background matrix, capable of producing the same flow behavior as the investigated hydrologic system. The method is applied to invert a set of multiborehole hydraulic tests collected from a hydraulic tomography experiment conducted at the Terrieu field site in the Lez aquifer, Southern France. The resulting model is highly consistent with field observations of hydraulic connections between boreholes. Furthermore, it provides a geologically realistic pattern of flow conduits. This method is therefore of considerable value for enhanced distributed modeling of fractured and karstified aquifers.

  6. Environmental Conditions Constrain the Distribution and Diversity of Archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    USGS Publications Warehouse

    Wang, Y.; Boyd, E.; Crane, S.; Lu-Irving, P.; Krabbenhoft, D.; King, S.; Dighton, J.; Geesey, G.; Barkay, T.

    2011-01-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggest that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient. © 2011 Springer Science+Business Media, LLC.

  7. Environmental conditions constrain the distribution and diversity of archaeal merA in Yellowstone National Park, Wyoming, U.S.A.

    PubMed

    Wang, Yanping; Boyd, Eric; Crane, Sharron; Lu-Irving, Patricia; Krabbenhoft, David; King, Susan; Dighton, John; Geesey, Gill; Barkay, Tamar

    2011-11-01

    The distribution and phylogeny of extant protein-encoding genes recovered from geochemically diverse environments can provide insight into the physical and chemical parameters that led to the origin and which constrained the evolution of a functional process. Mercuric reductase (MerA) plays an integral role in mercury (Hg) biogeochemistry by catalyzing the transformation of Hg(II) to Hg(0). Putative merA sequences were amplified from DNA extracts of microbial communities associated with mats and sulfur precipitates from physicochemically diverse Hg-containing springs in Yellowstone National Park, Wyoming, using four PCR primer sets that were designed to capture the known diversity of merA. The recovery of novel and deeply rooted MerA lineages from these habitats supports previous evidence that indicates merA originated in a thermophilic environment. Generalized linear models indicate that the distribution of putative archaeal merA lineages was constrained by a combination of pH, dissolved organic carbon, dissolved total mercury and sulfide. The models failed to identify statistically well supported trends for the distribution of putative bacterial merA lineages as a function of these or other measured environmental variables, suggesting that these lineages were either influenced by environmental parameters not considered in the present study, or the bacterial primer sets were designed to target too broad of a class of genes which may have responded differently to environmental stimuli. The widespread occurrence of merA in the geothermal environments implies a prominent role for Hg detoxification in these environments. Moreover, the differences in the distribution of the merA genes amplified with the four merA primer sets suggest that the organisms putatively engaged in this activity have evolved to occupy different ecological niches within the geothermal gradient.

  8. Establishing conservation baselines with dynamic distribution models for bat populations facing imminent decline

    USGS Publications Warehouse

    Rodhouse, Thomas J.; Ormsbee, Patricia C.; Irvine, Kathryn M.; Vierling, Lee A.; Szewczak, Joseph M.; Vierling, Kerri T.

    2015-01-01

    Landscape keystone structures associated with roosting habitat emerged as regionally important predictors of bat distributions. The challenges of bat monitoring have constrained previous species distribution modelling efforts to temporally static presence-only approaches. Our approach extends to broader spatial and temporal scales than has been possible in the past for bats, making a substantial increase in capacity for bat conservation.

  9. More than the sum of the parts: forest climate response from joint species distribution models

    Treesearch

    James S. Clark; Alan E. Gelfand; Christopher W. Woodall; Kai Zhu

    2014-01-01

    The perceived threat of climate change is often evaluated from species distribution models that are fitted to many species independently and then added together. This approach ignores the fact that species are jointly distributed and limit one another. Species respond to the same underlying climatic variables, and the abundance of any one species can be constrained by...

  10. A robust approach to chance constrained optimal power flow with renewable generation

    DOE PAGES

    Lubin, Miles; Dvorkin, Yury; Backhaus, Scott N.

    2016-09-01

    Optimal Power Flow (OPF) dispatches controllable generation at minimum cost subject to operational constraints on generation and transmission assets. The uncertainty and variability of intermittent renewable generation is challenging current deterministic OPF approaches. Recent formulations of OPF use chance constraints to limit the risk from renewable generation uncertainty; however, these new approaches typically assume the probability distributions which characterize the uncertainty and variability are known exactly. We formulate a robust chance constrained (RCC) OPF that accounts for uncertainty in the parameters of these probability distributions by allowing them to be within an uncertainty set. The RCC OPF is solved using a cutting-plane algorithm that scales to large power systems. We demonstrate the RCC OPF on a modified model of the Bonneville Power Administration network, which includes 2209 buses and 176 controllable generators. In conclusion, deterministic, chance constrained (CC), and RCC OPF formulations are compared using several metrics including cost of generation, area control error, ramping of controllable generators, and occurrence of transmission line overloads, as well as the respective computational performance.
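    Under a Gaussian assumption on the uncertain renewable injections, a chance constraint P(flow ≤ limit) ≥ 1 − η has a well-known deterministic equivalent: tighten the line limit by z_{1−η} standard deviations of the flow. The paper's robust variant additionally lets the distribution parameters range over an uncertainty set; the stdlib-only sketch below shows only the plain Gaussian reformulation, with hypothetical numbers:

    ```python
    from math import erf, sqrt

    def norm_cdf(x):
        """Standard normal CDF via the error function."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def norm_quantile(p, lo=-10.0, hi=10.0):
        """Invert the standard normal CDF by bisection."""
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if norm_cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def tightened_line_limit(limit, flow_std, eta):
        """Deterministic equivalent of P(flow <= limit) >= 1 - eta:
        the mean flow must satisfy mean <= limit - z_{1-eta} * flow_std."""
        return limit - norm_quantile(1.0 - eta) * flow_std

    # Hypothetical line: 100 MW limit, 10 MW flow std, 5% violation budget.
    margin = tightened_line_limit(100.0, 10.0, 0.05)   # ~83.55 MW usable
    ```

    The robust (RCC) version replaces the fixed (mean, std) pair with the worst case over an uncertainty set, which the paper handles with cutting planes.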

  11. Complicating Canons: A Critical Literacy Challenge to Common Core Assessment

    ERIC Educational Resources Information Center

    Peel, Anne

    2017-01-01

    The widespread adoption of the Common Core State Standards in the US has prioritized rigorous reading of complex texts. The emphasis on text complexity has led to instructional and assessment materials that constrain critical literacy practices by emphasizing quantitative features of text, such as sentence length, and a static list of text…

  12. Western Lake Erie Basin: Soft-data-constrained, NHDPlus resolution watershed modeling and exploration of applicable conservation scenarios

    USDA-ARS?s Scientific Manuscript database

    Complex watershed simulation models are powerful tools that can help scientists and policy-makers address challenging topics, such as land use management and water security. In the Western Lake Erie Basin (WLEB), complex hydrological models have been applied at various scales to help describe relat...

  13. Designing for Discovery Learning of Complexity Principles of Congestion by Driving Together in the TrafficJams Simulation

    ERIC Educational Resources Information Center

    Levy, Sharona T.; Peleg, Ran; Ofeck, Eyal; Tabor, Naamit; Dubovi, Ilana; Bluestein, Shiri; Ben-Zur, Hadar

    2018-01-01

    We propose and evaluate a framework supporting collaborative discovery learning of complex systems. The framework blends five design principles: (1) individual action: amidst (2) social interactions; challenged with (3) multiple tasks; set in (4) a constrained interactive learning environment that draws attention to (5) highlighted target…

  14. Origin and Evolutionary Alteration of the Mitochondrial Import System in Eukaryotic Lineages

    PubMed Central

    Fukasawa, Yoshinori; Oda, Toshiyuki; Tomii, Kentaro

    2017-01-01

    Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages. Various evolutionary hypotheses have been formulated to explain this diversity. To resolve any contradiction, estimating the primitive state and clarifying changes from that state are necessary. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases’ evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes including convergent and migrational gains of a presequence receptor in TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex. PMID:28369657

  15. Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints

    NASA Astrophysics Data System (ADS)

    CHEN, J. J.; YANG, B. D.; MENQ, C. H.

    2000-01-01

    Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among sticks, slips and separations, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among sticks, slips and separations of the friction contact, and subsequently the constrained force which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. It results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.

  16. The evolution of phenotypic correlations and ‘developmental memory’

    PubMed Central

    Watson, Richard A.; Wagner, Günter P.; Pavlicev, Mihaela; Weinreich, Daniel M.; Mills, Rob

    2014-01-01

    Development introduces structured correlations among traits that may constrain or bias the distribution of phenotypes produced. Moreover, when suitable heritable variation exists, natural selection may alter such constraints and correlations, affecting the phenotypic variation available to subsequent selection. However, exactly how the distribution of phenotypes produced by complex developmental systems can be shaped by past selective environments is poorly understood. Here we investigate the evolution of a network of recurrent non-linear ontogenetic interactions, such as a gene regulation network, in various selective scenarios. We find that evolved networks of this type can exhibit several phenomena that are familiar in cognitive learning systems. These include formation of a distributed associative memory that can ‘store’ and ‘recall’ multiple phenotypes that have been selected in the past, recreate complete adult phenotypic patterns accurately from partial or corrupted embryonic phenotypes, and ‘generalise’ (by exploiting evolved developmental modules) to produce new combinations of phenotypic features. We show that these surprising behaviours follow from an equivalence between the action of natural selection on phenotypic correlations and associative learning, well-understood in the context of neural networks. This helps to explain how development facilitates the evolution of high-fitness phenotypes and how this ability changes over evolutionary time. PMID:24351058

  17. Statistical thermodynamics and the size distributions of tropical convective clouds.

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.; Glenn, I. B.; Krueger, S. K.; Ferlay, N.

    2017-12-01

    Parameterizations for sub-grid cloud dynamics are commonly developed by using fine-scale modeling or measurements to explicitly resolve the mechanistic details of clouds to the best extent possible, and then formulating these behaviors in terms of the cloud state for use within a coarser grid. A second approach is to invoke physical intuition and some very general theoretical principles from equilibrium statistical thermodynamics. This second approach is quite widely used elsewhere in the atmospheric sciences: for example, to explain the heat capacity of air, blackbody radiation, or even the density profile of air in the atmosphere. Here we describe how entrainment and detrainment across cloud perimeters is limited by the amount of available air and the range of moist static energy in the atmosphere, which constrains cloud perimeter distributions to a power law with a -1 exponent along isentropes and to a Boltzmann distribution across isentropes. Further, the total cloud perimeter density in a cloud field is directly tied to the buoyancy frequency of the column. These simple results are shown to be reproduced within a complex dynamic simulation of a tropical convective cloud field and in passive satellite observations of cloud 3D structures. The implication is that equilibrium tropical cloud structures can be inferred from the bulk thermodynamic structure of the atmosphere without having to analyze computationally expensive dynamic simulations.

  18. Rate of evolutionary change in cranial morphology of the marsupial genus Monodelphis is constrained by the availability of additive genetic variation

    PubMed Central

    Porto, Arthur; Sebastião, Harley; Pavan, Silvia Eliza; VandeBerg, John L.; Marroig, Gabriel; Cheverud, James M.

    2015-01-01

    We tested the hypothesis that the rate of marsupial cranial evolution is dependent on the distribution of genetic variation in multivariate space. To do so, we carried out a genetic analysis of cranial morphological variation in laboratory strains of Monodelphis domestica and used estimates of genetic covariation to analyze the morphological diversification of the Monodelphis brevicaudata species group. We found that within-species genetic variation is concentrated in only a few axes of the morphospace and that this strong genetic covariation influenced the rate of morphological diversification of the brevicaudata group, with between-species divergence occurring fastest when occurring along the genetic line of least resistance. Accounting for the geometric distribution of genetic variation also increased our ability to detect the selective regimen underlying species diversification, with several instances of selection only being detected when genetic covariances were taken into account. Therefore, this work directly links patterns of genetic covariation among traits to macroevolutionary patterns of morphological divergence. Our findings also suggest that the limited distribution of Monodelphis species in morphospace is the result of a complex interplay between the limited dimensionality of available genetic variation and strong stabilizing selection along two major axes of genetic variation. PMID:25818173

  19. Hydrologic consistency as a basis for assessing complexity of monthly water balance models for the continental United States

    NASA Astrophysics Data System (ADS)

    Martinez, Guillermo F.; Gupta, Hoshin V.

    2011-12-01

    Methods to select parsimonious and hydrologically consistent model structures are useful for evaluating dominance of hydrologic processes and representativeness of data. While information criteria (appropriately constrained to obey underlying statistical assumptions) can provide a basis for evaluating appropriate model complexity, it is not sufficient to rely upon the principle of maximum likelihood (ML) alone. We suggest that one must also call upon a "principle of hydrologic consistency," meaning that selected ML structures and parameter estimates must be constrained (as well as possible) to reproduce desired hydrological characteristics of the processes under investigation. This argument is demonstrated in the context of evaluating the suitability of candidate model structures for lumped water balance modeling across the continental United States, using data from 307 snow-free catchments. The models are constrained to satisfy several tests of hydrologic consistency, a flow space transformation is used to ensure better consistency with underlying statistical assumptions, and information criteria are used to evaluate model complexity relative to the data. The results clearly demonstrate that the principle of consistency provides a sensible basis for guiding selection of model structures and indicate strong spatial persistence of certain model structures across the continental United States. Further work to untangle reasons for model structure predominance can help to relate conceptual model structures to physical characteristics of the catchments, facilitating the task of prediction in ungaged basins.

  20. Enhanced Constrained Predictive Control for Applications to Autonomous Vehicles and Missions

    DTIC Science & Technology

    2016-10-18

    AFRL-RV-PS-TR-2016-0122. Enhanced Constrained Predictive Control for Applications to Autonomous Vehicles and Missions. AFRL/RVSV, 3550 Aberdeen Ave SE, Kirtland AFB, NM 87117-5776.

  1. The added value of remote sensing products in constraining hydrological models

    NASA Astrophysics Data System (ADS)

    Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus

    2017-04-01

    The calibration of a hydrological model still depends on the availability of streamflow data, even though additional sources of information (e.g., remotely sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations in the use of remotely sensed products for constraining conceptual hydrological models and improving predictive capability, especially for data-sparse regions.
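    The weighting step described above, where each parameter set is scored by the coefficient of determination between its simulated series and the corresponding remote-sensing product, can be sketched as follows. The arrays and values are hypothetical; this is not the HYPE/HYMOD/TUW/FLEX interface:

    ```python
    import numpy as np

    def r2_weights(simulated_sets, product):
        """Weight each parameter set by the R^2 of its simulated state/flux series
        against a remote-sensing product; negative-skill sets get zero weight."""
        obs = np.asarray(product, dtype=float)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        w = np.array([max(0.0, 1.0 - np.sum((obs - np.asarray(s)) ** 2) / ss_tot)
                      for s in simulated_sets])
        total = w.sum()
        # Fall back to uniform weights if no parameter set shows any skill.
        return w / total if total > 0 else np.full(len(w), 1.0 / len(w))

    snow_product = np.array([5.0, 12.0, 20.0, 8.0])   # e.g. a MOD10A-like series
    simulations = [snow_product.copy(),               # a perfect parameter set
                   np.zeros(4)]                       # a poor parameter set
    weights = r2_weights(simulations, snow_product)   # -> [1.0, 0.0]
    ```

    Combining such weights multiplicatively across several products gives a joint a posteriori weighting without ever using discharge data.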

  2. Unsupervised background-constrained tank segmentation of infrared images in complex background based on the Otsu method.

    PubMed

    Zhou, Yulong; Gao, Min; Fang, Dan; Zhang, Baoquan

    2016-01-01

    In an effort to implement fast and effective tank segmentation from infrared images in complex background, the threshold of the maximum between-class variance method (i.e., the Otsu method) is analyzed and the working mechanism of the Otsu method is discussed. Subsequently, a fast and effective method for tank segmentation from infrared images in complex background is proposed, based on the Otsu method, via constraining the complex background of the image. Considering the complexity of the background, the original image is first divided into three classes of target region, middle background and lower background via maximizing the sum of their between-class variances. Then, the unsupervised background constraint is implemented based on the within-class variance of the target region, and hence the original image can be simplified. Finally, the Otsu method is applied to the simplified image for threshold selection. Experimental results on a variety of tank infrared images (880 × 480 pixels) in complex background demonstrate that the proposed method achieves better segmentation performance and is even comparable with manual segmentation. In addition, its average running time is only 9.22 ms, indicating good real-time performance.
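    The classical two-class Otsu criterion that this method builds on selects the threshold maximizing the between-class variance w0·w1·(m0 − m1)². A minimal NumPy sketch of that base criterion follows; the paper's three-class, background-constrained extension is not reproduced here:

    ```python
    import numpy as np

    def otsu_threshold(gray, nbins=256):
        """Return the gray level maximizing the between-class variance."""
        hist, edges = np.histogram(gray, bins=nbins)
        centers = 0.5 * (edges[:-1] + edges[1:])
        w0 = np.cumsum(hist).astype(float)            # class-0 pixel counts
        w1 = w0[-1] - w0                              # class-1 pixel counts
        csum = np.cumsum(hist * centers)
        m0 = csum / np.maximum(w0, 1e-12)             # class-0 mean gray level
        m1 = (csum[-1] - csum) / np.maximum(w1, 1e-12)
        sigma_b = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        return centers[np.argmax(sigma_b)]

    # Bimodal toy "image": dark background plus a bright target.
    img = np.concatenate([np.full(800, 30.0), np.full(200, 200.0)])
    t = otsu_threshold(img)
    mask = img > t                                    # segmented target pixels
    ```

    The three-class extension in the paper replaces the single threshold with two, chosen to maximize the sum of the three between-class variance terms, before re-applying Otsu to the simplified image.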

  3. Probability distributions of hydraulic conductivity for the hydrogeologic units of the Death Valley regional ground-water flow system, Nevada and California

    USGS Publications Warehouse

    Belcher, Wayne R.; Sweetkind, Donald S.; Elliott, Peggy E.

    2002-01-01

    The use of geologic information such as lithology and rock properties is important to constrain conceptual and numerical hydrogeologic models. This geologic information is difficult to apply explicitly to numerical modeling and analyses because it tends to be qualitative rather than quantitative. This study uses a compilation of hydraulic-conductivity measurements to derive estimates of the probability distributions for several hydrogeologic units within the Death Valley regional ground-water flow system, a geologically and hydrologically complex region underlain by basin-fill sediments and volcanic, intrusive, sedimentary, and metamorphic rocks. Probability distributions of hydraulic conductivity for general rock types have been studied previously; however, this study provides a more detailed definition of hydrogeologic units based on lithostratigraphy, lithology, alteration, and fracturing, and compares the probability distributions to the aquifer test data. Results suggest that these probability distributions can be used for studies involving, for example, numerical flow modeling, recharge, evapotranspiration, and rainfall runoff, both for the hydrogeologic units in the region and for similar rock types elsewhere. Within the study area, fracturing appears to have the greatest influence on the hydraulic conductivity of carbonate bedrock hydrogeologic units. Similar to earlier studies, we find that alteration and welding in the Tertiary volcanic rocks greatly influence hydraulic conductivity. As alteration increases, hydraulic conductivity tends to decrease. Increasing degrees of welding appear to increase hydraulic conductivity because welding increases the brittleness of the volcanic rocks, thus increasing the amount of fracturing.

  4. Guidance and Control Architecture Design and Demonstration for Low Ballistic Coefficient Atmospheric Entry

    NASA Technical Reports Server (NTRS)

    Swei, Sean

    2014-01-01

    We propose to develop a robust guidance and control system for the ADEPT (Adaptable Deployable Entry and Placement Technology) entry vehicle. A control-centric model of ADEPT will be developed to quantify the performance of candidate guidance and control architectures for both aerocapture and precision landing missions. The evaluation will be based on recent breakthroughs in constrained controllability/reachability analysis of control systems and constraint-based energy-minimum trajectory optimization for guidance development operating in complex environments.

  5. Crustal seismic anisotropy: A localized perspective from surface waves at the Ruby Mountains Core Complex

    NASA Astrophysics Data System (ADS)

    Wilgus, J. T.; Schmandt, B.; Jiang, C.

    2017-12-01

    The relative importance of potential controls on crustal seismic anisotropy, such as deformational fabrics in polycrystalline crustal rocks and the contemporary state of stress, remains poorly constrained. Recent regional western US lithospheric seismic anisotropy studies have concluded that the distribution of strain in the lower crust is diffuse throughout the Basin and Range (BR) and that deformation in the crust and mantle are largely uncoupled. To further contribute to our understanding of crustal anisotropy we are conducting a detailed local study of seismic anisotropy within the BR using surface waves at the Ruby Mountain Core Complex (RMCC), located in northeast Nevada. The RMCC is one of many distinctive uplifts within the North American cordillera called metamorphic core complexes, which consist of rocks exhumed from middle to lower crustal depths adjacent to mylonitic shear zones. The RMCC records exhumation depths of up to 30 km, indicating an anomalously high degree of extension relative to the BR average. This exhumation, the geologic setting of the RMCC, and the availability of dense broadband data from the Transportable Array (TA) and the Ruby Mountain Seismic Experiment (RMSE) coalesce to form an ideal opportunity to characterize seismic anisotropy as a function of depth beneath the RMCC and evaluate the degree to which anisotropy deviates from regional-scale properties of the BR. Preliminary azimuthal anisotropy results using Rayleigh waves reveal clear anisotropic signals at periods between 5 and 40 s, and demonstrate significant rotations of fast orientations relative to prior regional-scale results. Moving forward, we will focus on quantification of depth-dependent radial anisotropy from inversion of Rayleigh and Love waves. These results will be relevant to identification of the deep crustal distribution of strain associated with RMCC formation and may aid interpretation of controls on crustal anisotropy in other regions.

  6. Distributed Constrained Optimization with Semicoordinate Transformations

    NASA Technical Reports Server (NTRS)

    Macready, William; Wolpert, David

    2006-01-01

    Recent work has shown how information theory extends conventional full-rationality game theory to allow for bounded rational agents. The associated mathematical framework can be used to solve constrained optimization problems. This is done by translating the problem into an iterated game, where each agent controls a different variable of the problem, so that the joint probability distribution across the agents' moves gives an expected value of the objective function. The dynamics of the agents is designed to minimize a Lagrangian function of that joint distribution. Here we illustrate how the updating of the Lagrange parameters in the Lagrangian is a form of automated annealing, which focuses the joint distribution more and more tightly about the joint moves that optimize the objective function. We then investigate the use of "semicoordinate" variable transformations. These separate the joint state of the agents from the variables of the optimization problem, with the two connected by an onto mapping. We present experiments illustrating the ability of such transformations to facilitate optimization. We focus on the special kind of transformation in which the statistically independent states of the agents induce a mixture distribution over the optimization variables. Computer experiments illustrate this for k-sat constraint satisfaction problems and for unconstrained minimization of NK functions.
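
    A minimal sketch of the iterated-game idea, under strong simplifying assumptions (four binary agents, a toy objective, exact conditional expectations by enumeration; the actual framework uses sampling and richer Lagrangians): each agent holds an independent distribution over its own move, repeatedly applies a Boltzmann update against the expected objective given the other agents' distributions, and a decaying temperature plays the role of the annealed Lagrange parameter:

```python
import itertools
import numpy as np

def G(x):
    """Toy objective: squared distance of the bit-sum from a target of 3."""
    return (sum(x) - 3) ** 2

n = 4                       # four agents, one binary variable each
q = np.full((n, 2), 0.5)    # independent per-agent distributions

T = 2.0                     # temperature-like Lagrange parameter
for step in range(60):
    for i in range(n):
        # E[G | x_i = v] under the product distribution of the others
        cond = np.zeros(2)
        for v in (0, 1):
            for rest in itertools.product((0, 1), repeat=n - 1):
                x = list(rest[:i]) + [v] + list(rest[i:])
                p = np.prod([q[j, x[j]] for j in range(n) if j != i])
                cond[v] += p * G(x)
        # Boltzmann update concentrates probability on the better move
        w = np.exp(-(cond - cond.min()) / T)
        q[i] = w / w.sum()
    T *= 0.9                # annealing: tighten the joint distribution

best = tuple(int(np.argmax(q[i])) for i in range(n))
print("most probable joint move:", best, "objective:", G(best))
```

    As T shrinks, the product distribution collapses onto a joint move that (locally) optimizes G, mirroring the automated-annealing behaviour described above.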

  7. The Spatial Distribution of Attention within and across Objects

    ERIC Educational Resources Information Center

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.

    2012-01-01

    Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a "grouped array"). Using a modified version of the Egly et…

  8. Tree canopy types constrain plant distributions in ponderosa pine-Gambel oak forests, northern Arizona

    Treesearch

    Scott R. Abella

    2009-01-01

    Trees in many forests affect the soils and plants below their canopies. In current high-density southwestern ponderosa pine (Pinus ponderosa) forests, managers have opportunities to enhance multiple ecosystem values by manipulating tree density, distribution, and canopy cover through tree thinning. I performed a study in northern Arizona ponderosa...

  9. Electrical resistance tomography using steel cased boreholes as electrodes

    DOEpatents

    Daily, W.D.; Ramirez, A.L.

    1999-06-22

    An electrical resistance tomography method is described which uses steel cased boreholes as electrodes. The method enables mapping the electrical resistivity distribution in the subsurface from measurements of electrical potential caused by electrical currents injected into an array of electrodes in the subsurface. By use of current injection and potential measurement electrodes to generate data about the subsurface resistivity distribution, these data are then used in an inverse calculation, and a model of the electrical resistivity distribution can be obtained. The inverse model may be constrained by independent data to better define an inverse solution. The method utilizes pairs of electrically conductive (steel) borehole casings as current injection electrodes and as potential measurement electrodes. The greater the number of steel cased boreholes in an array, the greater the amount of data obtained. The steel cased boreholes may be utilized for either current injection or potential measurement electrodes. The subsurface model produced by this method can be 2 or 3 dimensional in resistivity depending on the detail desired in the calculated resistivity distribution and the amount of data to constrain the models. 2 figs.

  10. Effect of elevation on distribution of female bats in the Black Hills, South Dakota

    USGS Publications Warehouse

    Cryan, P.M.; Bogan, M.A.; Altenbach, J.S.

    2000-01-01

    Presumably, reproductive female bats are more constrained by thermoregulatory and energy needs than are males and nonreproductive females. Constraints imposed on reproductive females may limit their geographic distribution relative to other bats. Such constraints likely increase with latitude and elevation. Males of 11 bat species that inhabit the Black Hills were captured more frequently than females, and reproductive females typically were encountered at low-elevational sites. To investigate the relationship between female distribution and elevation, we fitted a logistic regression model to evaluate the probability of reproductive-female capture as a function of elevation. Mist-net data from 1,197 captures of 7 species revealed that 75% of all captures were males. We found a significant inverse relationship between elevation and relative abundance of reproductive females: relative abundance decreased as elevation increased. Reproductive females may be constrained from roosting and foraging in high-elevational habitats that impose thermoregulatory costs and decrease foraging efficiency. Failure to account for sex differences in distributional patterns along elevational gradients may significantly bias estimates of population size.
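
    A hedged sketch of the kind of model fitted (synthetic data and invented coefficients, not the Black Hills capture data): logistic regression of the probability that a capture is a reproductive female against elevation, fit by gradient ascent on the Bernoulli log-likelihood:

```python
import numpy as np

rng = np.random.default_rng(42)
elev = rng.uniform(1000, 2200, size=500)            # capture-site elevation, m
logit = 6.0 - 4.0 * (elev / 1000.0)                 # assumed inverse relationship
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # 1 = reproductive female

# Design matrix with a centred, km-scaled predictor for stable fitting
X = np.column_stack([np.ones_like(elev), (elev - 1600.0) / 1000.0])

# Fit by gradient ascent on the Bernoulli log-likelihood
b = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ b))
    b += 1.0 * X.T @ (y - p) / len(y)

print(f"fitted slope (per km of elevation): {b[1]:.2f}")
```

    A clearly negative fitted slope is what a "significant inverse relationship" between elevation and reproductive-female captures would look like in this framework.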

  11. Assessment of source probabilities for potential tsunamis affecting the U.S. Atlantic coast

    USGS Publications Warehouse

    Geist, E.L.; Parsons, T.

    2009-01-01

    Estimating the likelihood of tsunamis occurring along the U.S. Atlantic coast critically depends on knowledge of tsunami source probability. We review available information on both earthquake and landslide probabilities from potential sources that could generate local and transoceanic tsunamis. Estimating source probability includes defining both size and recurrence distributions for earthquakes and landslides. For the former distribution, source sizes are often distributed according to a truncated or tapered power-law relationship. For the latter distribution, sources are often assumed to occur in time according to a Poisson process, simplifying the way tsunami probabilities from individual sources can be aggregated. For the U.S. Atlantic coast, earthquake tsunami sources primarily occur at transoceanic distances along plate boundary faults. Probabilities for these sources are constrained from previous statistical studies of global seismicity for similar plate boundary types. In contrast, there is presently little information constraining landslide probabilities that may generate local tsunamis. Though there is significant uncertainty in tsunami source probabilities for the Atlantic, results from this study yield a comparative analysis of tsunami source recurrence rates that can form the basis for future probabilistic analyses.
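
    The aggregation step the abstract describes, treating independent Poisson sources so that their rates add, can be sketched as follows (the rates are invented placeholders, not the study's estimates):

```python
import math

# Illustrative annual rates of tsunamigenic events from independent
# sources affecting one coastal site (not the study's values).
annual_rates = {
    "plate-boundary earthquake A": 0.002,
    "plate-boundary earthquake B": 0.001,
    "submarine landslide": 0.0005,
}

# For independent Poisson sources, rates add; the probability of at
# least one event in an exposure time T then follows directly.
total_rate = sum(annual_rates.values())
T = 50.0  # years of exposure
p_any = 1.0 - math.exp(-total_rate * T)
print(f"P(>=1 tsunami in {T:.0f} yr) = {p_any:.3f}")
```

    This additivity is what makes the Poisson assumption so convenient when aggregating tsunami probabilities over many sources.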

  12. A summary analysis of the 3rd inquiry.

    PubMed

    1977-01-01

    20 ESCAP member countries responded to the "Third Population Inquiry among Governments: Population policies in the context of development in 1976." The questionnaire sent to the member countries covered economic and social development and population growth, mortality, fertility and family formation, population distribution and internal migration, international migration, population data collection and research, training, and institutional arrangements for the formulation of population policies within development. Most of the governments in the ESCAP region that responded indicate that the present rate of population growth constrains their social and economic development. Among these governments, 13 regarded an adjustment of both socioeconomic and demographic factors as the most appropriate response to the constraint. 11 of the governments regarded their present levels of average life expectancy at birth as "acceptable," and 7 identified their levels as "unacceptable." Most of the responding governments consider that, in general, their present level of fertility is too high and constrains family well-being. Internal migration and population distribution are coming to be seen as concerns for government population policy. The most popular approaches to distributing economic and social activities are rural development, urban and regional development, and industrial dispersion. There was much less concern among the governments returning the questionnaire about the effect of international migration than of internal migration on social and economic development.

  13. Improved One-Way Hash Chain and Revocation Polynomial-Based Self-Healing Group Key Distribution Schemes in Resource-Constrained Wireless Networks

    PubMed Central

    Chen, Huifang; Xie, Lei

    2014-01-01

    Self-healing group key distribution (SGKD) aims to deal with the key distribution problem over an unreliable wireless network. In this paper, we investigate the SGKD issue in resource-constrained wireless networks. We propose two improved SGKD schemes using the one-way hash chain (OHC) and the revocation polynomial (RP), the OHC&RP-SGKD schemes. In the proposed OHC&RP-SGKD schemes, by introducing the unique session identifier and binding the joining time with the capability of recovering previous session keys, the problem of the collusion attack between revoked users and new joined users in existing hash chain-based SGKD schemes is resolved. Moreover, novel methods for utilizing the one-way hash chain and constructing the personal secret, the revocation polynomial and the key updating broadcast packet are presented. Hence, the proposed OHC&RP-SGKD schemes eliminate the limitation of the maximum allowed number of revoked users on the maximum allowed number of sessions, increase the maximum allowed number of revoked/colluding users, and reduce the redundancy in the key updating broadcast packet. Performance analysis and simulation results show that the proposed OHC&RP-SGKD schemes are practical for resource-constrained wireless networks in bad environments, where a strong collusion attack resistance is required and many users could be revoked. PMID:25529204
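
    A minimal sketch of the one-way hash chain ingredient (not the full OHC&RP-SGKD construction, which also involves revocation polynomials and key-update broadcast packets): the distributor builds a chain from a secret seed, publishes the anchor, releases keys in reverse chain order, and members verify each released key by hashing forward to the anchor:

```python
import hashlib

def h(x: bytes) -> bytes:
    """One-way function: SHA-256."""
    return hashlib.sha256(x).digest()

n = 5
seed = b"secret-seed"           # known only to the key distributor
chain = [seed]
for _ in range(n):
    chain.append(h(chain[-1]))
anchor = chain[-1]              # h^n(seed), published to the group

def verify(key: bytes, j: int) -> bool:
    """A member checks a key released for session j: h^j(key) == anchor."""
    for _ in range(j):
        key = h(key)
    return key == anchor

k1 = chain[n - 1]               # key released for session 1
print(verify(k1, 1))
print(verify(b"forged-key", 1))
```

    Because h is one-way, holding k1 does not reveal later keys, while anyone with the anchor can authenticate a released key cheaply.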

  14. Application of a sparseness constraint in multivariate curve resolution - Alternating least squares.

    PubMed

    Hugelier, Siewert; Piqueras, Sara; Bedia, Carmen; de Juan, Anna; Ruckebusch, Cyril

    2018-02-13

    The use of sparseness in chemometrics is a concept that has increased in popularity. The advantage is, above all, a better interpretability of the results obtained. In this work, sparseness is implemented as a constraint in multivariate curve resolution - alternating least squares (MCR-ALS), which aims at reproducing raw (mixed) data by a bilinear model of chemically meaningful profiles. In many cases, the mixed raw data analyzed are not sparse by nature, but their decomposition profiles can be, as it is the case in some instrumental responses, such as mass spectra, or in concentration profiles linked to scattered distribution maps of powdered samples in hyperspectral images. To induce sparseness in the constrained profiles, one-dimensional and/or two-dimensional numerical arrays can be fitted using a basis of Gaussian functions with a penalty on the coefficients. In this work, a least squares regression framework with an L0-norm penalty is applied. This L0-norm penalty constrains the number of non-null coefficients in the fit of the constrained array without a priori knowledge of their number or positions. It has been shown that the sparseness constraint induces the suppression of values linked to uninformative channels and noise in MS spectra and improves the location of scattered compounds in distribution maps, resulting in a better interpretability of the constrained profiles. An additional benefit of the sparseness constraint is a lower ambiguity in the bilinear model, since the major presence of null coefficients in the constrained profiles also helps to limit the solutions for the profiles in the counterpart matrix of the MCR bilinear model. Copyright © 2017 Elsevier B.V. All rights reserved.
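
    A rough sketch of the L0 idea in one dimension (a greedy matching-pursuit stand-in, not the MCR-ALS implementation; peak positions, widths, and the coefficient budget are invented): fit a noisy profile with a Gaussian basis while limiting the number of non-null coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 100, 200)
# two "scattered compounds": narrow Gaussian responses plus noise
signal = (np.exp(-0.5 * ((x - 30) / 3) ** 2)
          + 0.6 * np.exp(-0.5 * ((x - 70) / 3) ** 2))
y = signal + rng.normal(0, 0.02, x.size)

# Gaussian basis on a regular grid of candidate positions
centers = np.arange(0.0, 101.0, 2.0)
B = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 3.0) ** 2)

# Greedy L0-style fit: repeatedly add the basis function most correlated
# with the residual until the budget of non-null coefficients is reached.
k, keep, residual = 2, [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(B.T @ residual)))
    keep.append(j)
    coef, *_ = np.linalg.lstsq(B[:, keep], y, rcond=None)
    residual = y - B[:, keep] @ coef

print("non-null coefficients at positions:", np.sort(centers[keep]))
```

    Only two coefficients survive, located at the true peaks; everything tied to noise or uninformative channels is suppressed, which is the interpretability gain the constraint provides.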

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, R.; Howe, S.; O'Leary, J.

    The Piedemonte Llanero petroleum trend of the Cordillera Oriental in Colombia has proven to be one of the most prolific hydrocarbon provinces discovered in recent years. The Piedemonte Llanero is a fold and thrust belt of complex, multi-phase structuration and hydrocarbon generation. Following the discovery of the Cusiana and Cupiagua fields in the southern part of the trend, BP and its partners began exploration further to the northeast. Early seismic data showed the existence of two structural trends: the frontal (or basal) thrust trend, with structures similar to Cusiana; and the overthrust (or duplex) trend, with multiple imbricated structures. Improved quality seismic data defined the gross structures and allowed them to be successfully drilled, but did not give a constrained model for the kinematic evolution of the fold and thrust belt nor the petroleum play. This resulted in no clear predictive models for reservoir quality and hydrocarbon phase distribution in the undrilled parts of the trend. A wide variety of geological and geochemical analytical techniques including biostratigraphy, reservoir petrology, petroleum geochemistry, thermal maturity data, basin modelling and fluid inclusion studies were undertaken. These were iteratively integrated into the seismo-structural model to develop a constrained interpretation for the evolution of the Piedemonte Llanero petroleum system. This paper summarizes the current understanding of the structural evolution of the trend and the development of a major petroleum system. A companion paper details the reservoir petrography and petroleum geochemistry studies.

  16. Present-day kinematics of the Danakil block (southern Red Sea-Afar) constrained by GPS

    NASA Astrophysics Data System (ADS)

    Ladron de Guevara, R.; Jonsson, S.; Ruch, J.; Doubre, C.; Reilinger, R. E.; Ogubazghi, G.; Floyd, M.; Vasyura-Bathke, H.

    2017-12-01

    The rifting of the Arabian plate from the Nubian and Somalian plates is primarily accommodated by seismic and magmatic activity along two rift arms of the Afar triple junction (the Red Sea and Gulf of Aden rifts). The spatial distribution of active deformation in the Afar region has been constrained with geodetic observations. However, the plate boundary configuration in which this deformation occurs is still not fully understood. South of 17°N, the Red Sea rift is composed of two parallel and overlapping rift branches separated by the Danakil block. The distribution of the extension across these two overlapping rifts, their potential connection through a transform fault zone, and the counterclockwise rotation of the Danakil block have not yet been fully resolved. Here we analyze new GPS observations from the Danakil block, the Gulf of Zula area (Eritrea), and Afar (Ethiopia) together with previous geodetic survey data to better constrain the plate kinematics and active deformation of the region. The new data were collected in 2016 and add up to 5 years to the existing geodetic observations (going back to 2000). Our improved GPS velocity field shows differences from previously modeled GPS velocities, suggesting that the rate and rotation of the Danakil block need to be updated. The new velocity field also shows that the plate-boundary strain is accommodated by broad deformation zones rather than across sharp boundaries between tectonic blocks. To better determine the spatial distribution of the strain, we first implement a rigid block model to constrain the overall regional plate kinematics and to isolate the plate-boundary deformation at the western boundary of the Danakil block. We then study whether the recent southern Red Sea rifting events have caused detectable changes in observed GPS velocities and whether the observations can be used to constrain the scale of this offshore rift activity. Finally, we investigate different geometries of transform faults that might connect the two overlapping branches of the southern Red Sea rift in the Gulf of Zula region.

  17. Maximum earthquake magnitudes in the Aegean area constrained by tectonic moment release rates

    NASA Astrophysics Data System (ADS)

    Ch. Koravos, G.; Main, I. G.; Tsapanos, T. M.; Musson, R. M. W.

    2003-01-01

    Seismic moment release is usually dominated by the largest but rarest events, making the estimation of seismic hazard inherently uncertain. This uncertainty can be reduced by combining long-term tectonic deformation rates with short-term recurrence rates. Here we adopt this strategy to estimate recurrence rates and maximum magnitudes for tectonic zones in the Aegean area. We first form a merged catalogue for historical and instrumentally recorded earthquakes in the Aegean, based on a recently published catalogue for Greece and surrounding areas covering the time period 550 BC to 2000 AD, at varying degrees of completeness. The historical data are recalibrated to allow for changes in damping in seismic instruments around 1911. We divide the area up into zones that correspond to recent determinations of deformation rate from satellite data. In all zones we find that the Gutenberg-Richter (GR) law holds at low magnitudes. We use Akaike's information criterion to determine the best-fitting distribution at high magnitudes, and classify the resulting frequency-magnitude distributions of the zones as critical (GR law), subcritical (gamma density distribution) or supercritical ('characteristic' earthquake model) where appropriate. We determine the ratio η of seismic to tectonic moment release rate. Low values of η (<0.5), corresponding to relatively aseismic deformation, are associated with higher b values (>1.0). The seismic and tectonic moment release rates are then combined to constrain recurrence rates and maximum credible magnitudes (in the range 6.7-7.6 Mw where the results are well constrained) based on extrapolating the short-term seismic data. With current earthquake data, many of the tectonic zones show a characteristic distribution that leads to an elevated probability of magnitudes around 7, but a reduced probability of larger magnitudes above this value when compared with the GR trend. A modification of the generalized gamma distribution is suggested to account for this, based on a finite statistical second moment for the seismic moment distribution.
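
    To illustrate the low-magnitude GR behaviour the abstract relies on (a synthetic catalogue, not the Aegean data): above a completeness cut, GR magnitudes are exponentially distributed, and the b value can be recovered with Aki's maximum-likelihood formula b = log10(e) / (mean(M) - Mmin):

```python
import numpy as np

rng = np.random.default_rng(7)
b_true, m_min = 1.0, 4.0

# Under the GR law log10 N(>=M) = a - b*M, magnitudes above the
# completeness cut m_min are exponential with scale log10(e)/b.
mags = m_min + rng.exponential(scale=np.log10(np.e) / b_true, size=5000)

# Aki's maximum-likelihood estimator of the b value
b_hat = np.log10(np.e) / (mags.mean() - m_min)
print(f"estimated b value: {b_hat:.2f}")
```

    Zone-by-zone b values estimated this way are what get compared against the seismic-to-tectonic moment ratio η in the study.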

  18. Mapping the rupture process of moderate earthquakes by inverting accelerograms

    USGS Publications Warehouse

    Hellweg, M.; Boatwright, J.

    1999-01-01

    We present a waveform inversion method that uses recordings of small events as Green's functions to map the rupture growth of moderate earthquakes. The method fits P and S waveforms from many stations simultaneously in an iterative procedure to estimate the subevent rupture time and amplitude relative to the Green's function event. We invert the accelerograms written by two moderate Parkfield earthquakes using smaller events as Green's functions. The first earthquake (M = 4.6) occurred on November 14, 1993, at a depth of 11 km under Middle Mountain, in the assumed preparation zone for the next Parkfield main shock. The second earthquake (M = 4.7) occurred on December 20, 1994, some 6 km to the southeast, at a depth of 9 km on a section of the San Andreas fault with no previous microseismicity and little inferred coseismic slip in the 1966 Parkfield earthquake. The inversion results are strikingly different for the two events. The average stress release in the 1993 event was 50 bars, distributed over a geometrically complex area of 0.9 km2. The average stress release in the 1994 event was only 6 bars, distributed over a roughly elliptical area of 20 km2. The ruptures of both events appear to grow spasmodically into relatively complex shapes: the inversion only constrains the ruptures to grow more slowly than the S wave velocity but does not use smoothness constraints. Copyright 1999 by the American Geophysical Union.

  19. Scaling Laws of Discrete-Fracture-Network Models

    NASA Astrophysics Data System (ADS)

    Philippe, D.; Olivier, B.; Caroline, D.; Jean-Raynald, D.

    2006-12-01

    The statistical description of fracture networks through scale still remains a concern for geologists, considering the complexity of fracture networks. A challenging task of the last 20 years of studies has been to find a solid and verifiable rationale for the trivial observation that fractures exist everywhere and at all sizes. The emergence of fractal models and power-law distributions quantifies this fact, and postulates in some ways that small-scale fractures are genetically linked to their larger-scale relatives. But the validation of these scaling concepts still remains an issue considering the unreachable amount of information that would be necessary with regard to the complexity of natural fracture networks. Beyond the theoretical interest, a scaling law is a basic and necessary ingredient of Discrete-Fracture-Network models (DFN) that are used for many environmental and industrial applications (groundwater resources, mining industry, assessment of the safety of deep waste disposal sites, ...). Indeed, such a function is necessary to assemble scattered data, taken at different scales, into a unified scaling model, and to interpolate fracture densities between observations. In this study, we discuss some important issues related to scaling laws of DFN: - We first describe a complete theoretical and mathematical framework that takes account of both the fracture-size distribution and the fracture clustering through scales (fractal dimension). - We review the scaling laws that have been obtained, and we discuss the ability of fracture datasets to really constrain the parameters of the DFN model. - And finally we discuss the limits of scaling models.
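
    The power-law size-distribution ingredient of a DFN model can be sketched by inverse-transform sampling (the exponent and length cutoff below are illustrative, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
a, l_min = 2.5, 1.0          # power-law exponent and smallest resolved length
u = rng.uniform(size=100000)

# Inverse-transform sampling of fracture lengths l >= l_min with
# density n(l) ~ l^(-a), i.e. a Pareto distribution of index a - 1.
lengths = l_min * (1.0 - u) ** (-1.0 / (a - 1.0))

# Tail check: the fraction longer than L should be (L / l_min)^(1 - a)
L = 10.0
print("empirical P(l > 10):", (lengths > L).mean())
print("theoretical        :", L ** (1.0 - a))
```

    A generated set of lengths like this, combined with a fractal clustering model for positions, is what a stochastic DFN realization is built from.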

  20. Bayes factors for testing inequality constrained hypotheses: Issues with prior specification.

    PubMed

    Mulder, Joris

    2014-02-01

    Several issues are discussed when testing inequality constrained hypotheses using a Bayesian approach. First, the complexity (or size) of the inequality constrained parameter spaces can be ignored. This is the case when using the posterior probability that the inequality constraints of a hypothesis hold, Bayes factors based on non-informative improper priors, and partial Bayes factors based on posterior priors. Second, the Bayes factor may not be invariant for linear one-to-one transformations of the data. This can be observed when using balanced priors which are centred on the boundary of the constrained parameter space with a diagonal covariance structure. Third, the information paradox can be observed. When testing inequality constrained hypotheses, the information paradox occurs when the Bayes factor of an inequality constrained hypothesis against its complement converges to a constant as the evidence for the first hypothesis accumulates while keeping the sample size fixed. This paradox occurs when using Zellner's g prior as a result of too much prior shrinkage. Therefore, two new methods are proposed that avoid these issues. First, partial Bayes factors are proposed based on transformed minimal training samples. These training samples result in posterior priors that are centred on the boundary of the constrained parameter space with the same covariance structure as in the sample. Second, a g prior approach is proposed by letting g go to infinity. This is possible because the Jeffreys-Lindley paradox is not an issue when testing inequality constrained hypotheses. A simulation study indicated that the Bayes factor based on this g prior approach converges fastest to the true inequality constrained hypothesis. © 2013 The British Psychological Society.
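
    A hedged sketch of one standard way to compute such a Bayes factor (the encompassing-prior approach for a simple normal model with known variance; the prior scale and data below are invented): the Bayes factor of H1: mu > 0 against the unconstrained model is the ratio of posterior to prior mass satisfying the constraint:

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(0.5, 1.0, size=30)        # assumed true mu = 0.5

# Unconstrained prior mu ~ N(0, 10^2); normal-normal conjugate posterior
prior_var, sigma2, n = 100.0, 1.0, data.size
post_var = 1.0 / (1.0 / prior_var + n / sigma2)
post_mean = post_var * (data.sum() / sigma2)

# Monte Carlo estimate of the prior and posterior mass where mu > 0
prior_draws = rng.normal(0.0, np.sqrt(prior_var), 100000)
post_draws = rng.normal(post_mean, np.sqrt(post_var), 100000)
bf = (post_draws > 0).mean() / (prior_draws > 0).mean()
print(f"Bayes factor for mu > 0: {bf:.2f}")
```

    Note that the ratio is bounded above by 1 / P(mu > 0 | prior) = 2 no matter how strong the evidence becomes, which is exactly the kind of information-paradox behaviour the abstract describes.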

  1. Configuration of the thermal landscape determines thermoregulatory performance of ectotherms

    PubMed Central

    Sears, Michael W.; Angilletta, Michael J.; Schuler, Matthew S.; Borchert, Jason; Dilliplane, Katherine F.; Stegman, Monica; Rusch, Travis W.; Mitchell, William A.

    2016-01-01

    Although most organisms thermoregulate behaviorally, biologists still cannot easily predict whether mobile animals will thermoregulate in natural environments. Current models fail because they ignore how the spatial distribution of thermal resources constrains thermoregulatory performance over space and time. To overcome this limitation, we modeled the spatially explicit movements of animals constrained by access to thermal resources. Our models predict that ectotherms thermoregulate more accurately when thermal resources are dispersed throughout space than when these resources are clumped. This prediction was supported by thermoregulatory behaviors of lizards in outdoor arenas with known distributions of environmental temperatures. Further, simulations showed how the spatial structure of the landscape qualitatively affects responses of animals to climate. Biologists will need spatially explicit models to predict impacts of climate change on local scales. PMID:27601639

  2. Role of Network Science in the Study of Anesthetic State Transitions.

    PubMed

    Lee, UnCheol; Mashour, George A

    2018-04-23

    The heterogeneity of molecular mechanisms, target neural circuits, and neurophysiologic effects of general anesthetics makes it difficult to develop a reliable and drug-invariant index of general anesthesia. No single brain region or mechanism has been identified as the neural correlate of consciousness, suggesting that consciousness might emerge through complex interactions of spatially and temporally distributed brain functions. The goal of this review article is to introduce the basic concepts of networks and explain why the application of network science to general anesthesia could be a pathway to discover a fundamental mechanism of anesthetic-induced unconsciousness. This article reviews data suggesting that reduced network efficiency, constrained network repertoires, and changes in cortical dynamics create inhospitable conditions for information processing and transfer, which lead to unconsciousness. This review proposes that network science is not just a useful tool but a necessary theoretical framework and method to uncover common principles of anesthetic-induced unconsciousness.

  3. Multiscale modelling of precipitation in concentrated alloys: from atomistic Monte Carlo simulations to cluster dynamics I thermodynamics

    NASA Astrophysics Data System (ADS)

    Lépinoux, J.; Sigli, C.

    2018-01-01

    In a recent paper, the authors showed how cluster free energies are constrained by the coagulation probability, and explained various anomalies observed during precipitation kinetics in concentrated alloys. This coagulation probability proved too complex a function to be accurately predicted from the cluster distribution alone in Cluster Dynamics (CD). Using atomistic Monte Carlo (MC) simulations, it is shown that during a transformation at constant temperature, after a short transient regime, the transformation occurs at quasi-equilibrium. It is proposed to use MC simulations until the system quasi-equilibrates, then to switch to CD, which is mean field but not limited by a box size as MC is. In this paper, we explain how to take into account the information available before the quasi-equilibrium state to establish guidelines for safely predicting the cluster free energies.

  4. Interstellar Travel and Galactic Colonization: Insights from Percolation Theory and the Yule Process.

    PubMed

    Lingam, Manasvi

    2016-06-01

    In this paper, percolation theory is employed to place tentative bounds on the probability p of interstellar travel and the emergence of a civilization (or panspermia) that colonizes the entire Galaxy. The ensuing ramifications with regard to the Fermi paradox are also explored. In particular, it is suggested that the correlation function of inhabited exoplanets can be used to observationally constrain p in the near future. It is shown, by using a mathematical evolution model known as the Yule process, that the probability distribution for civilizations with a given number of colonized worlds is likely to exhibit a power-law tail. Some of the dynamical aspects of this issue, including the question of timescales and generalizing percolation theory, were also studied. The limitations of these models, and other avenues for future inquiry, are also outlined. Complex life-Extraterrestrial life-Panspermia-Life detection-SETI. Astrobiology 16, 418-426.
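
    A toy Yule-process simulation (all parameters illustrative) shows the heavy, power-law-like tail in the number of colonized worlds per civilization: new colonies attach to an existing civilization with probability proportional to its current size (preferential attachment):

```python
import numpy as np

rng = np.random.default_rng(11)
p_new = 0.1        # chance a step spawns a brand-new one-world civilization
worlds = [0]       # world index -> id of the civilization holding it
sizes = [1]        # civilization id -> number of colonized worlds

for _ in range(20000):
    if rng.random() < p_new:
        sizes.append(1)                      # new civilization, one world
        worlds.append(len(sizes) - 1)
    else:
        # preferential attachment: picking a uniform random world selects
        # a civilization with probability proportional to its size
        c = worlds[int(rng.integers(len(worlds)))]
        sizes[c] += 1
        worlds.append(c)

sizes = np.array(sizes)
print("civilizations:", sizes.size)
print("largest size vs median size:", sizes.max(), "vs", np.median(sizes))
```

    The largest civilization dwarfs the median one, the qualitative signature of the power-law tail predicted for the distribution of colonized worlds.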

  5. Magnetic structure of the crust

    NASA Technical Reports Server (NTRS)

    Wasilewski, P.

    1985-01-01

    The nonuniqueness of geophysical interpretation must be constrained by geological insight to limit the range of theoretically possible models. An additional step is an in-depth understanding of the relationship between rock magnetization and geological circumstances on a grand scale. Views about crustal structure and the distribution of lithologies suggest a complex situation, with lateral and vertical variability at all levels in the crust, shaped by volcanic, plutonic, and metamorphic processes that together contribute to each of the observed anomalies. Important questions are addressed: (1) the location of the magnetic bottom; (2) whether the source is a discrete one or whether certain parts of the crust contribute cumulatively to the overall magnetization; (3) if the anomaly is localized to some recognizable surface expression, how to arrive at a geologically realistic model incorporating realistic magnetization contrasts; (4) the ways primary mineralogies are altered by metamorphism and the resulting magnetic contrasts; (5) the effects of temperature and pressure on magnetization.

  6. Case studies for observation planning algorithm of a Japanese spaceborne sensor: Hyperspectral Imager Suite (HISUI)

    NASA Astrophysics Data System (ADS)

    Ogawa, Kenta; Konno, Yukiko; Yamamoto, Satoru; Matsunaga, Tsuneo; Tachikawa, Tetsushi; Komoda, Mako; Kashimura, Osamu; Rokugawa, Shuichi

    2016-10-01

    Hyperspectral Imager Suite (HISUI) [1] is a future Japanese spaceborne hyperspectral instrument being developed by the Ministry of Economy, Trade and Industry (METI); it will be delivered to the ISS in 2018. In the HISUI project, observation strategy is especially important for a hyperspectral sensor, and the relationship between the limitations of sensor operation and the planned observation scenarios has to be studied. We have developed a multiple-algorithms concept: two (or more) algorithm models (a Long Strip Model and a Score Downfall Model) are used to select observing scenes from complex data acquisition requests while satisfying sensor constraints. We have tested the algorithms and found that the performance of the two models depends on the remaining data acquisition requests, i.e., the distribution of scores along the orbits. We conclude that the multiple-algorithms approach will produce better collection plans for HISUI than a single fixed approach.

  7. Constrained Multipoint Aerodynamic Shape Optimization Using an Adjoint Formulation and Parallel Computers

    NASA Technical Reports Server (NTRS)

    Reuther, James; Jameson, Antony; Alonso, Juan Jose; Rimlinger, Mark J.; Saunders, David

    1997-01-01

    An aerodynamic shape optimization method that treats the design of complex aircraft configurations subject to high-fidelity computational fluid dynamics (CFD), geometric constraints, and multiple design points is described. The design process is greatly accelerated through the use of both control theory and distributed-memory computer architectures. Control theory is employed to derive the adjoint differential equations whose solution allows the evaluation of design gradient information at a fraction of the computational cost required by previous design methods. The resulting problem is implemented on parallel distributed-memory architectures using a domain decomposition approach, an optimized communication schedule, and the MPI (Message Passing Interface) standard for portability and efficiency. The final result achieves very rapid aerodynamic design based on a higher-order CFD method. In order to facilitate the integration of these high-fidelity CFD approaches into future multidisciplinary optimization (MDO) applications, new methods must be developed which are capable of simultaneously addressing complex geometries, multiple objective functions, and geometric design constraints. In our earlier studies, we coupled the adjoint-based design formulations with unconstrained optimization algorithms and showed that the approach was effective for the aerodynamic design of airfoils, wings, wing-bodies, and complex aircraft configurations. In many of the results presented in these earlier works, geometric constraints were satisfied either by a projection into feasible space or by posing the design space parameterization such that it automatically satisfied constraints. Furthermore, with the exception of reference 9, where the second author initially explored the use of multipoint design in conjunction with adjoint formulations, our earlier works have focused on single-point design efforts.
Here we demonstrate that the same methodology may be extended to treat complete configuration designs subject to multiple design points and geometric constraints. Examples are presented for both transonic and supersonic configurations ranging from wing alone designs to complex configuration designs involving wing, fuselage, nacelles and pylons.
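    The central economy of the adjoint approach described above can be sketched in a few lines (a deliberately tiny toy with a diagonal state operator and made-up numbers, not the authors' CFD formulation): one extra linear solve yields the design gradient, regardless of how many design variables there are.

```python
def solve_diag(a, rhs):
    """Solve diag(a) @ x = rhs (stand-in for an expensive flow solve)."""
    return [r / ai for ai, r in zip(a, rhs)]

def objective_and_gradient(p, a, f):
    """State equation: diag(a) @ u = p * f.  Objective: J = 0.5 * sum(u^2).
    Adjoint equation: diag(a) @ lam = dJ/du = u, then dJ/dp = lam . f.
    One adjoint solve gives the gradient, independent of the number of
    design variables -- the key economy of adjoint-based design."""
    u = solve_diag(a, [p * fi for fi in f])
    J = 0.5 * sum(ui * ui for ui in u)
    lam = solve_diag(a, u)                      # adjoint solve
    dJdp = sum(li * fi for li, fi in zip(lam, f))
    return J, dJdp

a, f = [2.0, 4.0], [1.0, 2.0]
J, g = objective_and_gradient(1.0, a, f)

# finite-difference check of the adjoint gradient
h = 1e-6
Jp, _ = objective_and_gradient(1.0 + h, a, f)
print(abs((Jp - J) / h - g) < 1e-4)
```

    For this toy, J(p) = 0.25 p^2, so the adjoint gradient at p = 1 is 0.5, matching the finite-difference estimate.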

  8. The Spatial Distribution of Attention within and across Objects

    PubMed Central

    Hollingworth, Andrew; Maxcey-Richard, Ashleigh M.; Vecera, Shaun P.

    2011-01-01

    Attention operates to select both spatial locations and perceptual objects. However, the specific mechanism by which attention is oriented to objects is not well understood. We examined the means by which object structure constrains the distribution of spatial attention (i.e., a “grouped array”). Using a modified version of the Egly et al. object cuing task, we systematically manipulated within-object distance and object boundaries. Four major findings are reported: 1) spatial attention forms a gradient across the attended object; 2) object boundaries limit the distribution of this gradient, with the spread of attention constrained by a boundary; 3) boundaries within an object operate similarly to across-object boundaries: we observed object-based effects across a discontinuity within a single object, without the demand to divide or switch attention between discrete object representations; and 4) the gradient of spatial attention across an object directly modulates perceptual sensitivity, implicating a relatively early locus for the grouped array representation. PMID:21728455

  9. Geologic Mapping of the Lunar South Pole Quadrangle (LQ-30)

    NASA Technical Reports Server (NTRS)

    Mest, S. C.; Berman, D. C.; Petro, N. E.

    2010-01-01

    In this study we use recent image, spectral and topographic data to map the geology of the lunar South Pole quadrangle (LQ-30) at 1:2.5M scale [1-7]. The overall objective of this research is to constrain the geologic evolution of LQ-30 (60°-90°S, 0°-180°) with specific emphasis on evaluation of a) the regional effects of impact basin formation, and b) the spatial distribution of ejecta, in particular resulting from formation of the South Pole-Aitken (SPA) basin and other large basins. Key scientific objectives include: 1) determining the geologic history of LQ-30 and examining the spatial and temporal variability of geologic processes within the map area; 2) constraining the distribution of impact-generated materials, and determining the timing and effects of major basin-forming impacts on crustal structure and stratigraphy in the map area; and 3) assessing the distribution of potential resources (e.g., H, Fe, Th) and their relationships with surface materials.

  10. Optimal Coordinated EV Charging with Reactive Power Support in Constrained Distribution Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Paudyal, Sumit; Ceylan, Oğuzhan; Bhattarai, Bishnu P.

    Electric vehicle (EV) charging/discharging can take place in any P-Q quadrant, which means EVs could supply reactive power to the grid while charging the battery. In controlled charging schemes, the distribution system operator (DSO) coordinates the charging of EV fleets to ensure the grid's operating constraints are not violated. In effect, this means the DSO sets upper bounds on the power limits for EV charging. In this work, we demonstrate that if EVs inject reactive power into the grid while charging, the DSO can issue higher upper bounds on the active power limits for the EVs for the same set of grid constraints. We demonstrate the concept on a 33-node test feeder with 1,500 EVs. Case studies show that in constrained distribution grids with coordinated charging, the average cost of EV charging can be reduced if charging takes place in the fourth P-Q quadrant rather than at unity power factor.
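    The effect claimed above can be illustrated with a linearized two-bus voltage-drop model (an illustrative toy, not the authors' 33-node formulation; the per-unit numbers are made up): reactive injection (Q < 0, fourth quadrant) offsets part of the feeder voltage drop, so a larger active-power bound fits the same voltage limit.

```python
def max_charging_power(R, X, V0, Vmin, Q):
    """Toy two-bus feeder with linearized voltage drop dV ~ (R*P + X*Q)/V0.
    Requiring Vmin <= V0 - (R*P + X*Q)/V0 gives the DSO's upper bound:
        P <= (V0*(V0 - Vmin) - X*Q) / R
    Reactive injection (Q < 0) raises this bound."""
    return (V0 * (V0 - Vmin) - X * Q) / R

# made-up per-unit feeder parameters for illustration
R, X, V0, Vmin = 0.05, 0.10, 1.00, 0.95

p_unity = max_charging_power(R, X, V0, Vmin, Q=0.0)      # unity power factor
p_qsupport = max_charging_power(R, X, V0, Vmin, Q=-0.2)  # fourth-quadrant charging
print(p_unity, p_qsupport)
```

    With these numbers the active-power bound rises from 1.0 to 1.4 per unit once the EV injects reactive power, which is the qualitative effect the case studies exploit.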

  11. The intercrater plains of Mercury and the Moon: Their nature, origin and role in terrestrial planet evolution. Cratering histories of the intercrater plains. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Leake, M. A.

    1982-01-01

    The intercrater plains of Mercury and the Moon are defined, in part, by their high densities of small craters. The crater size frequency statistics presented in this chapter may help constrain the relative ages and origins of these surfaces. To this end, the effects of common geologic processes on crater frequency statistics are compared with the diameter frequency distributions of the intercrater regions of the Moon and Mercury. Such analyses may determine whether secondary craters dominate the distribution at small diameters, and whether volcanic plains or ballistic deposits form the intercrater surface. Determining the mass frequency distribution and flux of the impacting population is a more difficult problem. The necessary information such as scaling relationships between projectile energy and crater diameter, the relative fluxes of solar system objects, and the absolute ages of surface units is model dependent and poorly constrained, especially for Mercury.

  12. Chance-Constrained System of Systems Based Operation of Power Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kargarian, Amin; Fu, Yong; Wu, Hongyu

    In this paper, a chance-constrained system of systems (SoS) based decision-making approach is presented for stochastic scheduling of power systems encompassing active distribution grids. Based on the concept of SoS, the independent system operator (ISO) and distribution companies (DISCOs) are modeled as self-governing systems. These systems collaborate with each other to run the entire power system in a secure and economic manner. Each self-governing system accounts for its local reserve requirements and line flow constraints with respect to the uncertainties of load and renewable energy resources. A set of chance constraints is formulated to model the interactions between the ISO and DISCOs. The proposed model is solved using the analytical target cascading (ATC) method, a distributed optimization algorithm in which only a limited amount of information is exchanged between the collaborating ISO and DISCOs. In this paper, a 6-bus system and a modified IEEE 118-bus system are studied to show the effectiveness of the proposed algorithm.
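    A chance constraint of the kind described, Pr(flow <= limit) >= 1 - eps, has a standard deterministic equivalent under a Gaussian assumption; the sketch below (illustrative numbers, not from the paper) shows that reformulation.

```python
from statistics import NormalDist

def deterministic_equivalent(mu, sigma, eps):
    """Chance constraint Pr(flow <= limit) >= 1 - eps, with the flow
    Gaussian with mean mu and std dev sigma, reduces to the deterministic
    requirement: limit >= mu + z_{1-eps} * sigma."""
    z = NormalDist().inv_cdf(1 - eps)
    return mu + z * sigma

# illustrative tie-line flow: 100 MW expected, 10 MW std dev, 5% violation risk
required_limit = deterministic_equivalent(100.0, 10.0, 0.05)
print(round(required_limit, 1))  # ~116.4 MW
```

    The uncertainty margin z * sigma is what each self-governing system would reserve against load and renewable forecast errors.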

  13. Sequence stratigraphic controls on reservoir characterization and architecture: case study of the Messinian Abu Madi incised-valley fill, Egypt

    NASA Astrophysics Data System (ADS)

    Abdel-Fattah, Mohamed I.; Slatt, Roger M.

    2013-12-01

    Understanding sequence stratigraphic architecture in incised valleys is a crucial step toward understanding the effect of relative sea-level changes on reservoir characterization and architecture. This paper presents a sequence stratigraphic framework of the incised-valley strata within the late Messinian Abu Madi Formation based on seismic and borehole data. Analysis of sand-body distribution reveals that fluvial channel sandstones in the Abu Madi Formation in the Baltim Fields, offshore Nile Delta, Egypt, are not randomly distributed but are predictable in their spatial and stratigraphic position. Elucidation of the distribution of sandstones in the Abu Madi incised-valley fill within a sequence stratigraphic framework allows a better understanding of their characterization and architecture during burial. Strata of the Abu Madi Formation are interpreted to comprise two sequences, which are the most stratigraphically complex; their deposits comprise a complex incised-valley fill. The lower sequence (SQ1) consists of a thick incised-valley fill of a Lowstand Systems Tract (LST1) overlain by a Transgressive Systems Tract (TST1) and a Highstand Systems Tract (HST1). The upper sequence (SQ2) contains channel fill and is interpreted as an LST2 with thin sandstone channel deposits. Above this, channel-fill sandstone and related strata with tidal influence delineate the base of TST2, which is overlain by HST2. Gas reservoirs of the Abu Madi Formation (present-day depth ~3552 m) in the Baltim Fields, Egypt, consist of fluvial lowstand systems tract (LST) sandstones deposited in an incised valley. LST sandstones have a wide range of porosity (15 to 28%) and permeability (1 to 5,080 mD), reflecting both depositional facies and diagenetic controls.
This work demonstrates the value of constraining and evaluating the impact of sequence stratigraphic distribution on reservoir characterization and architecture in incised-valley deposits, and thus has an important impact on reservoir quality evolution in hydrocarbon exploration in such settings.

  14. Analysis of the U.S. forest tolerance patterns depending on current and future temperature and precipitation

    Treesearch

    Jean Lienard; John Harrison; Nikolay Strigul

    2015-01-01

    Forested ecosystems are shaped by climate, soil and biotic interactions, resulting in constrained spatial distribution of species and biomes. Tolerance traits of species determine their fundamental ecological niche, while biotic interactions narrow tree distributions to the realized niche. In particular, shade, drought and waterlogging tolerances have been well-...

  15. Distribution and mixing of old and new nonstructural carbon in two temperate trees

    Treesearch

    Andrew D. Richardson; Mariah S. Carbone; Brett A. Huggett; Morgan E. Furze; Claudia I. Czimczik; Jennifer C. Walker; Xiaomei Xu; Paul G. Schaberg; Paula Murakami

    2015-01-01

    We know surprisingly little about whole-tree nonstructural carbon (NSC; primarily sugars and starch) budgets. Even less well understood is the mixing between recent photosynthetic assimilates (new NSC) and previously stored reserves. And, NSC turnover times are poorly constrained. We characterized the distribution of NSC in the stemwood, branches, and roots of two...

  16. Rational Engineering and Characterization of an mAb that Neutralizes Zika Virus by Targeting a Mutationally Constrained Quaternary Epitope.

    PubMed

    Tharakaraman, Kannan; Watanabe, Satoru; Chan, Kuan Rong; Huan, Jia; Subramanian, Vidya; Chionh, Yok Hian; Raguram, Aditya; Quinlan, Devin; McBee, Megan; Ong, Eugenia Z; Gan, Esther S; Tan, Hwee Cheng; Tyagi, Anu; Bhushan, Shashi; Lescar, Julien; Vasudevan, Subhash G; Ooi, Eng Eong; Sasisekharan, Ram

    2018-05-09

    Following the recent emergence of Zika virus (ZIKV), many murine and human neutralizing anti-ZIKV antibodies have been reported. Given the risk of virus escape mutants, engineering antibodies that target mutationally constrained epitopes with therapeutically relevant potencies can be valuable for combating future outbreaks. Here, we applied computational methods to engineer an antibody, ZAb_FLEP, that targets a highly networked and therefore mutationally constrained surface formed by the envelope protein dimer. ZAb_FLEP neutralized a breadth of ZIKV strains and protected mice in distinct in vivo models, including resolving vertical transmission and fetal mortality in infected pregnant mice. Serial passaging of ZIKV in the presence of ZAb_FLEP failed to generate viral escape mutants, suggesting that its epitope is indeed mutationally constrained. A single-particle cryo-EM reconstruction of the Fab-ZIKV complex validated the structural model and revealed insights into ZAb_FLEP's neutralization mechanism. ZAb_FLEP has potential as a therapeutic in future outbreaks.

  17. A New Family of Solvable Pearson-Dirichlet Random Walks

    NASA Astrophysics Data System (ADS)

    Le Caër, Gérard

    2011-07-01

    An n-step Pearson-Gamma random walk in R^d starts at the origin and consists of n independent steps with gamma-distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in R^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained, in particular, for the distribution of the endpoint of such constrained walks for any d >= d_0 and any n >= 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n >= 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk-space dimension and any number of steps n >= 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression for the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as the product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n, and Bessel numbers independent of d.
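    The construction described (gamma step lengths normalized to total length 1, i.e. Dirichlet-distributed, with uniformly random directions) can be sampled directly; this stdlib-only sketch (the parameter values are chosen for illustration) also checks the basic geometric fact that fixing the total length to 1 confines the endpoint to the unit ball.

```python
import math
import random

def pd_walk_endpoint(n, d, q, rng):
    """One n-step Pearson-Dirichlet walk in R^d: step lengths are
    Dirichlet(q, ..., q) distributed (gammas normalized to sum 1),
    directions drawn uniformly on the unit sphere via normalized Gaussians."""
    g = [rng.gammavariate(q, 1.0) for _ in range(n)]
    s = sum(g)
    lengths = [x / s for x in g]          # Dirichlet via normalized gammas
    end = [0.0] * d
    for length in lengths:
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        for i in range(d):
            end[i] += length * v[i] / norm
    return end

rng = random.Random(0)
# planar case n = 3, q = d = 2 from the abstract
ends = [pd_walk_endpoint(n=3, d=2, q=2.0, rng=rng) for _ in range(2000)]
radii = [math.hypot(x, y) for x, y in ends]
print(round(max(radii), 3))   # always <= 1: total length is fixed to 1
```

    A histogram of `radii` approximates the endpoint-distance density whose closed form the paper derives.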

  18. Language complexity modulates 8- and 10-year-olds' success at using their theory of mind abilities in a communication task.

    PubMed

    Wang, J Jessica; Ali, Muna; Frisson, Steven; Apperly, Ian A

    2016-09-01

    Basic competence in theory of mind is acquired during early childhood. Nonetheless, evidence suggests that the ability to take others' perspectives in communication improves continuously from middle childhood to the late teenage years. This indicates that theory of mind performance undergoes protracted developmental changes after the acquisition of basic competence. Currently, little is known about the factors that constrain children's performance or that contribute to age-related improvement. A sample of 39 8-year-olds and 56 10-year-olds were tested on a communication task in which a speaker's limited perspective needed to be taken into account and the complexity of the speaker's utterance varied. Our findings showed that 10-year-olds were generally less egocentric than 8-year-olds. Children of both ages committed more egocentric errors when a speaker uttered complex sentences compared with simple sentences. Both 8- and 10-year-olds were affected by the demand to integrate complex sentences with the speaker's limited perspective, and to a similar degree. These results suggest that long after children's development of simple visual perspective-taking, their use of this ability to assist communication is substantially constrained by the complexity of the language involved.

  19. What Enables and Constrains the Inclusion of the Social Determinants of Health Inequities in Government Policy Agendas? A Narrative Review.

    PubMed

    Baker, Phillip; Friel, Sharon; Kay, Adrian; Baum, Fran; Strazdins, Lyndall; Mackean, Tamara

    2017-11-11

    Despite decades of evidence gathering and calls for action, few countries have systematically attenuated health inequities (HI) through action on the social determinants of health (SDH). This is at least partly because doing so presents a significant political and policy challenge. This paper explores this challenge through a review of the empirical literature, asking: what factors have enabled and constrained the inclusion of the social determinants of health inequities (SDHI) in government policy agendas? A narrative review method was adopted involving three steps: first, drawing upon political science theories on agenda-setting, an integrated theoretical framework was developed to guide the review; second, a systematic search of scholarly databases for relevant literature; and third, qualitative analysis of the data and thematic synthesis of the results. Studies were included if they were empirical, met specified quality criteria, and identified factors that enabled or constrained the inclusion of the SDHI in government policy agendas. A total of 48 studies were included in the final synthesis, with studies spanning a number of country-contexts and jurisdictional settings, and employing a diversity of theoretical frameworks. Influential factors included the ways in which the SDHI were framed in public, media and political discourse; emerging data and evidence describing health inequalities; limited supporting evidence and misalignment of proposed solutions with existing policy and institutional arrangements; institutionalised norms and ideologies (ie, belief systems) that are antithetical to a SDH approach including neoliberalism, the medicalisation of health and racism; civil society mobilization; leadership; and changes in government. A complex set of interrelated, context-dependent and dynamic factors influence the inclusion or neglect of the SDHI in government policy agendas. 
It is better to think about these factors as increasing (or decreasing) the 'probability' of health equity reaching a government agenda, rather than in terms of 'necessity' or 'sufficiency.' Understanding these factors may help advocates develop strategies for generating political priority for attenuating HI in the future.

  20. Constraining the mass of the Local Group

    NASA Astrophysics Data System (ADS)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging: its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the mass of the LG and of its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed: the Λ cold dark matter model, which is used to set up the simulations, and an LG model, which encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted onto the Cosmicflows-2 database of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity of M31. It is found that (a) different v_tan choices affect the peak mass values by up to a factor of 2 and change the ratio of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions than random ones; (c) LG mass estimates are smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range (0.6-0.8) × 10^12 M⊙; and (e) M_M31 varies between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  1. Head shape evolution in Tropidurinae lizards: does locomotion constrain diet?

    PubMed

    Kohlsdorf, T; Grizante, M B; Navas, C A; Herrel, A

    2008-05-01

    Different components of complex integrated systems may be specialized for different functions, and thus the selective pressures acting on the system as a whole may be conflicting and can ultimately constrain organismal performance and evolution. The vertebrate cranial system is one of the most striking examples of a complex system with several possible functions, associated with activities as different as locomotion, prey capture, display and defensive behaviours. Therefore, selective pressures on the cranial system as a whole are possibly complex and may be conflicting. The present study focuses on the influence of potentially conflicting selective pressures (diet vs. locomotion) on the evolution of head shape in Tropidurinae lizards. For example, the expected adaptations leading to flat heads and bodies in species living on vertical structures may conflict with the need for improved bite performance associated with the inclusion of hard or tough prey in the diet, a common phenomenon in Tropidurinae lizards. Body size and six variables describing head shape were quantified in preserved specimens of 23 species, and information on diet and substrate usage was obtained from the literature. No phylogenetic signal was observed in the morphological data at any branch length tested, suggesting adaptive evolution of head shape in Tropidurinae. This pattern was confirmed by both factor analysis and independent contrast analysis, which suggested adaptive co-variation between head shape and the inclusion of hard prey in the diet. In contrast to our expectations, habitat use did not constrain or drive head shape evolution in the group.

  2. Complex hybrid inflation and baryogenesis.

    PubMed

    Delepine, David; Martínez, Carlos; Ureña-López, L Arturo

    2007-04-20

    We propose a hybrid inflation model with a complex waterfall field which contains an interaction term that breaks the U(1) global symmetry associated with the waterfall field charge. We show that the asymmetric evolution of the real and imaginary parts of the complex field during the phase transition at the end of inflation translates into a charge asymmetry. The latter strongly depends on the vacuum expectation value of the waterfall field, which is well constrained by diverse cosmological observations.

  3. Uncertainty quantification of Antarctic contribution to sea-level rise using the fast Elementary Thermomechanical Ice Sheet (f.ETISh) model

    NASA Astrophysics Data System (ADS)

    Bulthuis, Kevin; Arnst, Maarten; Pattyn, Frank; Favier, Lionel

    2017-04-01

    Uncertainties in sea-level rise projections are mostly due to uncertainties in Antarctic ice-sheet predictions (IPCC AR5 report, 2013), because key parameters related to the current state of the Antarctic ice sheet (e.g. sub-ice-shelf melting) and future climate forcing are poorly constrained. Here, we propose to improve the predictions of Antarctic ice-sheet behaviour using new uncertainty quantification methods. As opposed to ensemble modelling (Bindschadler et al., 2013) which provides a rather limited view on input and output dispersion, new stochastic methods (Le Maître and Knio, 2010) can provide deeper insight into the impact of uncertainties on complex system behaviour. Such stochastic methods usually begin with deducing a probabilistic description of input parameter uncertainties from the available data. Then, the impact of these input parameter uncertainties on output quantities is assessed by estimating the probability distribution of the outputs by means of uncertainty propagation methods such as Monte Carlo methods or stochastic expansion methods. The use of such uncertainty propagation methods in glaciology may be computationally costly because of the high computational complexity of ice-sheet models. This challenge emphasises the importance of developing reliable and computationally efficient ice-sheet models such as the f.ETISh ice-sheet model (Pattyn, 2015), a new fast thermomechanical coupled ice sheet/ice shelf model capable of handling complex and critical processes such as the marine ice-sheet instability mechanism. Here, we apply these methods to investigate the role of uncertainties in sub-ice-shelf melting, calving rates and climate projections in assessing Antarctic contribution to sea-level rise for the next centuries using the f.ETISh model. 
We detail the methods and show results that provide nominal values and uncertainty bounds for future sea-level rise as a reflection of the impact of the input parameter uncertainties under consideration, as well as a ranking of the input parameter uncertainties in the order of the significance of their contribution to uncertainty in future sea-level rise. In addition, we discuss how limitations posed by the available information (poorly constrained data) pose challenges that motivate our current research.
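    The uncertainty-propagation step described above can be sketched with plain Monte Carlo on a stand-in response surface (the model function, priors, and coefficients below are invented for illustration and are not f.ETISh's actual physics): sample the uncertain inputs from their assumed distributions, push each sample through the model, and read off quantiles of the output.

```python
import random

def toy_slr_model(melt_rate, calving):
    """Stand-in response surface mapping inputs to sea-level rise (m).
    Purely illustrative: NOT the f.ETISh model, just a cheap nonlinear map."""
    return 0.3 * melt_rate + 0.1 * calving + 0.05 * melt_rate * calving

rng = random.Random(42)
samples = []
for _ in range(10_000):
    melt = rng.lognormvariate(0.0, 0.5)   # poorly constrained: lognormal prior
    calv = rng.uniform(0.5, 1.5)          # calving-rate scaling factor
    samples.append(toy_slr_model(melt, calv))

samples.sort()
median = samples[len(samples) // 2]
p95 = samples[int(0.95 * len(samples))]
print(round(median, 2), round(p95, 2))    # nominal value and upper bound
```

    Stochastic expansion methods replace the brute-force loop with a surrogate built from far fewer model runs, which matters when each run is an expensive ice-sheet simulation.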

  4. Analysis of the Herschel/HIFI 1.2 THz Wide Spectral Survey of the Orion Kleinmann-Low Nebula

    NASA Astrophysics Data System (ADS)

    Crockett, Nathan R.

    This dissertation presents a comprehensive analysis of a broadband spectral line survey of the Orion Kleinmann-Low nebula (Orion KL), one of the most chemically rich regions in the Galaxy, using the HIFI instrument on board the Herschel Space Observatory. This survey spans a frequency range from 480 to 1907 GHz at a resolution of 1.1 MHz. These observations thus encompass the largest spectral coverage ever obtained toward this massive star forming region in the sub-mm with high spectral resolution, and include frequencies >1 THz where the Earth's atmosphere prevents observations from the ground. In all, we detect emission from 36 molecules (76 isotopologues). Combining this dataset with ground-based mm spectroscopy obtained with the IRAM 30 m telescope, we model the molecular emission assuming local thermodynamic equilibrium (LTE). Because of the wide frequency coverage, our models are constrained over an unprecedented range in excitation energy, from states at or close to the ground state up to energies where emission is no longer detected. A χ² analysis indicates that most of our models reproduce the observed emission well. In particular, complex organics, some with thousands of transitions, are well fit by LTE models, implying that gas densities are high (>10^6 cm^-3) and excitation temperatures and column densities are well constrained. Molecular abundances are computed using H2 column densities also derived from the HIFI survey. The rotation temperature distribution of molecules detected toward the hot core is much wider than toward the compact ridge, plateau, and extended ridge. We find that complex N-bearing species, cyanides in particular, systematically probe hotter gas than complex O-bearing species. This indicates that complex N-bearing molecules may be more difficult to remove from grain surfaces or that hot gas-phase formation routes are important for these species. 
We also present a detailed non-LTE analysis of H2S emission toward the hot core, which suggests this light hydride may probe heavily embedded gas in close proximity to a hidden self-luminous source (or sources), conceivably responsible for Orion KL's high luminosity. The abundances derived here, along with the publicly available data and molecular fits, represent a legacy for comparison to other sources and chemical models.

  5. Secure Fusion Estimation for Bandwidth Constrained Cyber-Physical Systems Under Replay Attacks.

    PubMed

    Chen, Bo; Ho, Daniel W C; Hu, Guoqiang; Yu, Li

    2018-06-01

    State estimation plays an essential role in the monitoring and supervision of cyber-physical systems (CPSs), and its importance has made security and estimation performance a major concern. In this case, multisensor information fusion estimation (MIFE) provides an attractive alternative for studying secure estimation problems because MIFE can potentially improve estimation accuracy and enhance reliability and robustness against attacks. From the perspective of the defender, the secure distributed Kalman fusion estimation problem is investigated in this paper for a class of CPSs under replay attacks, where each local estimate obtained by the sink node is transmitted to a remote fusion center through bandwidth-constrained communication channels. A new mathematical model with a compensation strategy is proposed to characterize the replay attacks and bandwidth constraints, and then a recursive distributed Kalman fusion estimator (DKFE) is designed in the linear minimum variance sense. According to different communication frameworks, two classes of data compression and compensation algorithms are developed such that the DKFEs can achieve the desired performance. Several attack-dependent and bandwidth-dependent conditions are derived under which the DKFEs are secure under replay attacks. An illustrative example is given to demonstrate the effectiveness of the proposed methods.

  6. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

This paper proposes an alternative approach to enhance the localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the number and initial locations of the activations; (3) as the locations of the dipoles are restricted to a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From these case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach is a very promising technique for enhancing the accuracy of focal source localization, which is essential in many clinical and neurological applications of MEG and EEG.

  7. O2 Chemistry of Dicopper Complexes with Alkyltriamine Ligands. Comparing Synergistic Effects on O2 Binding

    PubMed Central

    Company, Anna; Lamata, Diana; Poater, Albert; Solà, Miquel; Que, Lawrence; Fontrodona, Xavier; Parella, Teodor; Llobet, Antoni

    2008-01-01

Two dicopper(I) complexes containing tertiary N-methylated hexaaza ligands which impose different steric constraints on the Cu ions have been synthesized, and their reactivity towards O2 has been compared with that of a related mononuclear system, highlighting the importance of cooperative effects between the metal centers in O2 activation. PMID:16813375

  8. Origin and Evolutionary Alteration of the Mitochondrial Import System in Eukaryotic Lineages.

    PubMed

    Fukasawa, Yoshinori; Oda, Toshiyuki; Tomii, Kentaro; Imai, Kenichiro

    2017-07-01

    Protein transport systems are fundamentally important for maintaining mitochondrial function. Nevertheless, mitochondrial protein translocases such as the kinetoplastid ATOM complex have recently been shown to vary in eukaryotic lineages. Various evolutionary hypotheses have been formulated to explain this diversity. To resolve any contradiction, estimating the primitive state and clarifying changes from that state are necessary. Here, we present more likely primitive models of mitochondrial translocases, specifically the translocase of the outer membrane (TOM) and translocase of the inner membrane (TIM) complexes, using scrutinized phylogenetic profiles. We then analyzed the translocases' evolution in eukaryotic lineages. Based on those results, we propose a novel evolutionary scenario for diversification of the mitochondrial transport system. Our results indicate that presequence transport machinery was mostly established in the last eukaryotic common ancestor, and that primitive translocases already had a pathway for transporting presequence-containing proteins. Moreover, secondary changes including convergent and migrational gains of a presequence receptor in TOM and TIM complexes, respectively, likely resulted from constrained evolution. The nature of a targeting signal can constrain alteration to the protein transport complex. © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  9. Kinetics of the initial steps of G protein-coupled receptor-mediated cellular signaling revealed by single-molecule imaging.

    PubMed

    Lill, Yoriko; Martinez, Karen L; Lill, Markus A; Meyer, Bruno H; Vogel, Horst; Hecht, Bert

    2005-08-12

We report on an in vivo single-molecule study of the signaling kinetics of G protein-coupled receptors (GPCRs) performed using the neurokinin 1 receptor (NK1R) as a representative member. The NK1R signaling cascade is triggered by the specific binding of a fluorescently labeled agonist, substance P (SP). The diffusion of single receptor-ligand complexes in the plasma membrane of living HEK 293 cells is imaged using fast single-molecule wide-field fluorescence microscopy at 100 ms time resolution. Diffusion trajectories are obtained which show intra- and intertrace heterogeneity in the diffusion mode. To investigate universal patterns in the diffusion trajectories we take the ligand-binding event as the common starting point. This synchronization allows us to observe changes in the character of the ligand-receptor-complex diffusion. Specifically, we find that the diffusion of ligand-receptor complexes is slowed down significantly and becomes more constrained as a function of time during the first 1000 ms. The decelerated and more constrained diffusion is attributed to an increasing interaction of the GPCR with cellular structures after the ligand-receptor complex is formed.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    DAI,YANG; BORISOV,ALEXEY B.; LONGWORTH,JAMES W.

The construction of inverse states in a finite field F_{P_α} enables the organization of the mass scale by associating particle states with residue class designations. With the assumption of perfect flatness (Ω_total = 1.0), this approach leads to the derivation of a cosmic seesaw congruence which unifies the concepts of space and mass. The law of quadratic reciprocity profoundly constrains the subgroup structure of the multiplicative group of units F_{P_α}* defined by the field. Four specific outcomes of this organization are (1) a reduction in the computational complexity of the mass state distribution by a factor of ~10^30, (2) the extension of the genetic divisor concept to the classification of subgroup orders, (3) the derivation of a simple numerical test for any prospective mass number based on the order of the integer, and (4) the identification of direct biological analogies to taxonomy and regulatory networks characteristic of cellular metabolism, tumor suppression, immunology, and evolution. It is generally concluded that the organizing principle legislated by the alliance of quadratic reciprocity with the cosmic seesaw creates a universal optimized structure that functions in the regulation of a broad range of complex phenomena.
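The basic ingredient here, multiplicative inverses in a finite field, can be illustrated in a few generic lines (a minimal sketch of ordinary modular inverses; the residue-class machinery of the abstract is far richer than this):

```python
# In the finite field F_p (p prime) every nonzero residue has a multiplicative
# inverse; Fermat's little theorem gives a^(p-2) = a^(-1) (mod p).
def inverse_mod(a, p):
    """Inverse of a in F_p, assuming p is prime and p does not divide a."""
    return pow(a, p - 2, p)

p = 101
for a in (2, 3, 57):
    assert (a * inverse_mod(a, p)) % p == 1  # defining property of an inverse
print(inverse_mod(2, 101))  # 51, since 2 * 51 = 102 = 1 (mod 101)
```

The three-argument `pow` performs fast modular exponentiation, so inverses remain cheap even for cryptographic-size primes.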

  11. Visualization Case Study: Eyjafjallajökull Ash (Invited)

    NASA Astrophysics Data System (ADS)

    Simmon, R.

    2010-12-01

    Although data visualization is a powerful tool in Earth science, the resulting imagery is often complex and difficult to interpret for non-experts. Students, journalists, web site visitors, or museum attendees often have difficulty understanding some of the imagery scientists create, particularly false-color imagery and data-driven maps. Many visualizations are designed for data exploration or peer communication, and often follow discipline conventions or are constrained by software defaults. Different techniques are necessary for communication with a broad audience. Data visualization combines ideas from cognitive science, graphic design, and cartography, and applies them to the challenge of presenting data clearly. Visualizers at NASA's Earth Observatory web site (earthobservatory.nasa.gov) use these techniques to craft remote sensing imagery for interested but non-expert readers. Images range from natural-color satellite images and multivariate maps to illustrations of abstract concepts. I will use imagery of the eruption of Iceland's Eyjafjallajökull volcano as a case study, showing specific applications of general design techniques. By using color carefully (including contextual data), precisely aligning disparate data sets, and highlighting important features, we crafted an image that clearly conveys the complex vertical and horizontal distribution of airborne ash.

  12. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

The structures around bearings are complex, and the working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics that make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are extracted subsequently to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information; the Gaussian mixture model is utilized to calculate the confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.

  13. Sediment unmixing using detrital geochronology

    USGS Publications Warehouse

    Sharman, Glenn R.; Johnstone, Samuel

    2017-01-01

    Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the influence of environmental forcings (e.g., tectonism, climate) on the earth’s surface. Here we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First we summarize ‘top-down’ mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions (‘parents’) that characterize a derived sample or set of samples (‘daughters’). Second we propose the use of ‘bottom-up’ methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.

  14. Sediment unmixing using detrital geochronology

    NASA Astrophysics Data System (ADS)

    Sharman, Glenn R.; Johnstone, Samuel A.

    2017-11-01

Sediment mixing within sediment routing systems can exert a strong influence on the preservation of provenance signals that yield insight into the effect of environmental forcing (e.g., tectonism, climate) on the Earth's surface. Here, we discuss two approaches to unmixing detrital geochronologic data in an effort to characterize complex changes in the sedimentary record. First, we summarize 'top-down' mixing, which has been successfully employed in the past to characterize the different fractions of prescribed source distributions ('parents') that characterize a derived sample or set of samples ('daughters'). Second, we propose the use of 'bottom-up' methods, previously used primarily for grain size distributions, to model parent distributions and the abundances of these parents within a set of daughters. We demonstrate the utility of both top-down and bottom-up approaches to unmixing detrital geochronologic data within a well-constrained sediment routing system in central California. Use of a variety of goodness-of-fit metrics in top-down modeling reveals the importance of considering the range of allowable mixtures over any single best-fit mixture calculation. Bottom-up modeling of 12 daughter samples from beaches and submarine canyons yields modeled parent distributions that are remarkably similar to those expected from the geologic context of the sediment-routing system. In general, mixture modeling has the potential to supplement more widely applied approaches in comparing detrital geochronologic data by casting differences between samples as differing proportions of geologically meaningful end-member provenance categories.
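The 'top-down' calculation reduces to finding the mixture weights that best reproduce a daughter distribution from fixed parent distributions. A minimal sketch with two synthetic Gaussian parents and a sum-of-squares misfit (one simple choice of goodness-of-fit metric; the ages, widths, and weight below are invented for illustration):

```python
import numpy as np

# Binned density estimates of two hypothetical parent age distributions.
bins = np.linspace(0, 100, 101)
parent_a = np.exp(-0.5 * ((bins - 30) / 5) ** 2)   # source A, ~30 Ma peak
parent_b = np.exp(-0.5 * ((bins - 70) / 8) ** 2)   # source B, ~70 Ma peak
parent_a /= parent_a.sum()
parent_b /= parent_b.sum()

# A daughter sample built as a known mixture of the two parents.
true_w = 0.65
daughter = true_w * parent_a + (1 - true_w) * parent_b

# Scan candidate weights; score each trial mixture by summed squared misfit.
weights = np.linspace(0, 1, 1001)
misfit = [np.sum((w * parent_a + (1 - w) * parent_b - daughter) ** 2)
          for w in weights]
best_w = weights[int(np.argmin(misfit))]
print(best_w)  # recovers the true mixing proportion, 0.65
```

In practice one would report the full misfit curve, not just `best_w`: the flat region around the minimum is exactly the "range of allowable mixtures" the abstract emphasizes over any single best-fit value.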

  15. VizieR Online Data Catalog: Abundances of M33 HII regions (Magrini+, 2010)

    NASA Astrophysics Data System (ADS)

    Magrini, L.; Stanghellini, L.; Corbelli, E.; Galli, D.; Villaver, E.

    2009-11-01

We analyze the spatial distribution of metals in M33 using a new sample and literature data of HII regions, constraining a model of galactic chemical evolution with HII region and planetary nebula (PN) abundances. We consider chemical abundances of a new sample of HII regions complemented with previous literature data-sets. Supported by a uniform sample of nebular spectroscopic observations, we conclude that: i) the metallicity distribution in M33 is very complex, showing a central depression in metallicity probably due to observational bias; ii) the metallicity gradient in the disk of M33 has a slope of -0.037+/-0.009dex/kpc in the whole radial range up to ~8kpc, and -0.044+/-0.009dex/kpc excluding the central kpc; iii) there is a small evolution of the slope with time from the epoch of PN progenitor formation to the present time. Description: Emission line fluxes, observed and dereddened, of 33 HII regions are presented. Physical and chemical properties, such as electron temperatures and density, ionic and total chemical abundances of He, O, N, Ne, Ar, S, are derived. (3 data files).

  16. Not Normal: the uncertainties of scientific measurements

    NASA Astrophysics Data System (ADS)

    Bailey, David C.

    2017-01-01

    Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student's t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply.
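The quoted gap between Gaussian and near-Cauchy tails is easy to verify directly: the two-sided 5σ tail probabilities differ by roughly five orders of magnitude (a generic stdlib check, not taken from the paper's data).

```python
import math

# How often does a measurement land more than 5 sigma from the truth?
# Under a Gaussian this is vanishingly rare; under a Cauchy distribution
# (the heavy-tailed limit of Student's t) it is common.
p_normal = math.erfc(5 / math.sqrt(2))           # two-sided Gaussian 5-sigma tail
p_cauchy = 2 * (0.5 - math.atan(5) / math.pi)    # two-sided Cauchy 5-sigma tail
print(p_normal)             # ~5.7e-7
print(p_cauchy)             # ~0.126
print(p_cauchy / p_normal)  # ~2e5: five orders of magnitude more outliers
```

This is why a 5σ criterion is only meaningful when uncertainties are well evaluated: a modest heavy-tail contamination inflates the outlier rate enormously.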

  17. Not Normal: the uncertainties of scientific measurements

    PubMed Central

    2017-01-01

    Judging the significance and reproducibility of quantitative research requires a good understanding of relevant uncertainties, but it is often unclear how well these have been evaluated and what they imply. Reported scientific uncertainties were studied by analysing 41 000 measurements of 3200 quantities from medicine, nuclear and particle physics, and interlaboratory comparisons ranging from chemistry to toxicology. Outliers are common, with 5σ disagreements up to five orders of magnitude more frequent than naively expected. Uncertainty-normalized differences between multiple measurements of the same quantity are consistent with heavy-tailed Student’s t-distributions that are often almost Cauchy, far from a Gaussian Normal bell curve. Medical research uncertainties are generally as well evaluated as those in physics, but physics uncertainty improves more rapidly, making feasible simple significance criteria such as the 5σ discovery convention in particle physics. Contributions to measurement uncertainty from mistakes and unknown problems are not completely unpredictable. Such errors appear to have power-law distributions consistent with how designed complex systems fail, and how unknown systematic errors are constrained by researchers. This better understanding may help improve analysis and meta-analysis of data, and help scientists and the public have more realistic expectations of what scientific results imply. PMID:28280557

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Dykes, Katherine; Scott, George

The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed-integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

19. The effect of stochastic technique on estimates of population viability from transition matrix models

    USGS Publications Warehouse

    Kaye, T.N.; Pyke, David A.

    2003-01-01

Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5–10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust. Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
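The matrix-selection method is the simplest of the seven to sketch: draw a whole observed transition matrix at each time step, project the population, and average the log growth. A minimal illustration with two hypothetical 2-stage matrices (invented for illustration, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two whole observed transition matrices (columns: stage contributions).
observed = [np.array([[0.2, 1.5],
                      [0.4, 0.8]]),   # hypothetical "good year" matrix
            np.array([[0.1, 0.7],
                      [0.3, 0.6]])]   # hypothetical "bad year" matrix

n = np.array([50.0, 50.0])            # initial stage abundances
log_growth = []
for _ in range(10_000):
    A = observed[rng.integers(len(observed))]  # pick a whole matrix at random
    n_next = A @ n
    log_growth.append(np.log(n_next.sum() / n.sum()))
    n = n_next / n_next.sum() * 100.0  # rescale to avoid overflow/underflow

lam_s = np.exp(np.mean(log_growth))    # stochastic growth rate estimate
print(lam_s)
```

Because whole matrices are drawn, within-year correlations between vital rates are preserved automatically, which is exactly what element-by-element random draws from fitted distributions can distort.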

  20. Using Real and Simulated TNOs to Constrain the Outer Solar System

    NASA Astrophysics Data System (ADS)

    Kaib, Nathan

    2018-04-01

    Over the past 2-3 decades our understanding of the outer solar system’s history and current state has evolved dramatically. An explosion in the number of detected trans-Neptunian objects (TNOs) coupled with simultaneous advances in numerical models of orbital dynamics has driven this rapid evolution. However, successfully constraining the orbital architecture and evolution of the outer solar system requires accurately comparing simulation results with observational datasets. This process is challenging because observed datasets are influenced by orbital discovery biases as well as TNO size and albedo distributions. Meanwhile, such influences are generally absent from numerical results. Here I will review recent work I and others have undertaken using numerical simulations in concert with catalogs of observed TNOs to constrain the outer solar system’s current orbital architecture and past evolution.

  1. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

The parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  2. Performance analysis of smart laminated composite plate integrated with distributed AFC material undergoing geometrically nonlinear transient vibrations

    NASA Astrophysics Data System (ADS)

    Shivakumar, J.; Ashok, M. H.; Khadakbhavi, Vishwanath; Pujari, Sanjay; Nandurkar, Santosh

    2018-02-01

The present work focuses on geometrically nonlinear transient analysis of laminated smart composite plates integrated with patches of active fiber composites (AFC) using active constrained layer damping (ACLD) as the distributed actuators. The analysis has been carried out using a generalised energy-based finite element model. The coupled electromechanical finite element model is derived using Von Karman type nonlinear strain-displacement relations and a first-order shear deformation theory (FSDT). Eight-node iso-parametric serendipity elements are used for discretization of the overall plate integrated with the AFC patch material. The viscoelastic constrained layer is modelled using the GHM method. The numerical results show the improvement in the active damping characteristics of the laminated composite plates over passive damping for suppressing the geometrically nonlinear transient vibrations of laminated composite plates with AFC as the patch material.

  3. Pointwise nonparametric maximum likelihood estimator of stochastically ordered survivor functions

    PubMed Central

    Park, Yongseok; Taylor, Jeremy M. G.; Kalbfleisch, John D.

    2012-01-01

    In this paper, we consider estimation of survivor functions from groups of observations with right-censored data when the groups are subject to a stochastic ordering constraint. Many methods and algorithms have been proposed to estimate distribution functions under such restrictions, but none have completely satisfactory properties when the observations are censored. We propose a pointwise constrained nonparametric maximum likelihood estimator, which is defined at each time t by the estimates of the survivor functions subject to constraints applied at time t only. We also propose an efficient method to obtain the estimator. The estimator of each constrained survivor function is shown to be nonincreasing in t, and its consistency and asymptotic distribution are established. A simulation study suggests better small and large sample properties than for alternative estimators. An example using prostate cancer data illustrates the method. PMID:23843661

  4. Predicting structures in the Zone of Avoidance

    NASA Astrophysics Data System (ADS)

    Sorce, Jenny G.; Colless, Matthew; Kraan-Korteweg, Renée C.; Gottlöber, Stefan

    2017-11-01

The Zone of Avoidance (ZOA), whose emptiness is an artefact of our Galaxy's dust, has challenged observers as well as theorists for many years. Multiple attempts have been made on the observational side to map this region in order to better understand the local flows. On the theoretical side, however, this region is often simply statistically populated with structures, but no real attempt has been made to confront theoretical and observed matter distributions. This paper takes a step forward using constrained realizations (CRs) of the local Universe, shown to be perfect substitutes for local Universe-like simulations in smoothed high-density peak studies. Far from generating completely `random' structures in the ZOA, the reconstruction technique arranges matter according to the surrounding environment of this region. More precisely, the mean distributions of structures in a series of constrained and random realizations (RRs) differ: while densities annihilate each other when averaging over 200 RRs, structures persist when summing 200 CRs. The probability distribution function of ZOA grid cells to be highly overdense is a Gaussian with a 15 per cent mean in the random case, while that of the constrained case exhibits large tails. This implies that areas with the largest probabilities most likely host a structure. Comparisons between these predictions and observations, like those of the Puppis 3 cluster, show a remarkable agreement and allow us to assert the presence of the Vela supercluster, recently highlighted by observations, at about 180 h-1 Mpc, right behind the thickest dust layers of our Galaxy.

  5. A Framework to Design the Computational Load Distribution of Wireless Sensor Networks in Power Consumption Constrained Environments

    PubMed Central

    Sánchez-Álvarez, David; Rodríguez-Pérez, Francisco-Javier

    2018-01-01

In this paper, we present a work based on the computational load distribution among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision support framework helping WSN designers to take decisions about computational load distribution for those WSNs where power consumption is a key issue (when we refer to “framework” in this work, we are considering it as a support tool to make decisions where the executive judgment can be included along with the set of mathematical tools of the WSN designer; this work shows the need to include the load distribution as an integral component of the WSN system for making early decisions regarding energy consumption). The framework takes advantage of the idea that balancing the computational load between sensor nodes and the Hub/Sink can lead to improved energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial, and it takes into account related issues such as the required data distribution, and the connectivity and availability of nodes and Hub/Sink due to their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, low costs due to vegetable selling prices and the lack of continuous power supplies may lead to viable or inviable sensing solutions for the farmers. The proposed framework systematizes and facilitates for WSN designers the required complex calculations, taking into account the most relevant variables regarding power consumption and avoiding full/partial/prototype implementations and measurements of the different potential computational load distribution solutions for a specific WSN. PMID:29570645
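The core trade-off such a framework quantifies can be caricatured in a few lines: is it cheaper for a battery-powered node to transmit raw samples, or to aggregate locally and transmit a short summary? All energy constants below are invented for illustration, not taken from the paper.

```python
# Hypothetical per-byte energy costs (J/byte); radio transmission is
# typically far more expensive than local computation on sensor nodes.
E_TX_PER_BYTE = 0.6e-6    # assumed radio transmission cost
E_CPU_PER_BYTE = 0.05e-6  # assumed local processing cost

def node_energy(raw_bytes, summary_bytes, process_locally):
    """Energy spent by one node per reporting period under each strategy."""
    if process_locally:
        # Process all raw samples on the node, transmit only the summary.
        return raw_bytes * E_CPU_PER_BYTE + summary_bytes * E_TX_PER_BYTE
    # Ship everything to the Hub/Sink and let it do the computation.
    return raw_bytes * E_TX_PER_BYTE

raw, summary = 10_000, 200
e_local = node_energy(raw, summary, True)
e_remote = node_energy(raw, summary, False)
print(e_local < e_remote)  # True: local aggregation wins for these constants
```

A real design framework must also fold in duty cycles, link availability, and required data distribution, which is precisely why the authors argue the decision deserves systematic early-stage support rather than prototyping every option.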

  6. High-Contrast Near-Infrared Imaging Polarimetry of the Protoplanetary Disk around RY Tau

    NASA Technical Reports Server (NTRS)

    Takami, Michihiro; Karr, Jennifer L.; Hashimoto, Jun; Kim, Hyosun; Wisenewski, John; Henning, Thomas; Grady, Carol; Kandori, Ryo; Hodapp, Klaus W.; Kudo, Tomoyuki; hide

    2013-01-01

We present near-infrared coronagraphic imaging polarimetry of RY Tau. The scattered light in the circumstellar environment was imaged at H-band at a high resolution (approx. 0.05) for the first time, using Subaru-HiCIAO. The observed polarized intensity (PI) distribution shows a butterfly-like distribution of bright emission with an angular scale similar to the disk observed at millimeter wavelengths. This distribution is offset toward the blueshifted jet, indicating the presence of a geometrically thick disk or a remnant envelope, and therefore the earliest stage of the Class II evolutionary phase. We perform comparisons between the observed PI distribution and disk models with (1) a full radiative transfer code, using the spectral energy distribution (SED) to constrain the disk parameters, and (2) monochromatic simulations of scattered light which explore a wide range of parameter space to constrain the disk and dust parameters. We show that these models cannot consistently explain the observed PI distribution, SED, and the viewing angle inferred by millimeter interferometry. We suggest that the scattered light in the near-infrared is associated with an optically thin and geometrically thick layer above the disk surface, with the surface responsible for the infrared SED. Half of the scattered light and thermal radiation in this layer illuminates the disk surface, and this process may significantly affect the thermal structure of the disk.

  7. The bioclimatic envelope of the wolverine (Gulo gulo): do climatic constraints limit its geographic distribution?

    Treesearch

    J. P. Copeland; K. S. McKelvey; K. B. Aubry; A. Landa; J. Persson; R. M. Inman; J. Krebs; E. Lofroth; H. Golden; J. R. Squires; A. Magoun; M. K. Schwartz; J. Wilmot; C. L. Copeland; R. E. Yates; I. Kojola; R. May

    2010-01-01

    We propose a fundamental geographic distribution for the wolverine (Gulo gulo (L., 1758)) based on the hypothesis that the occurrence of wolverines is constrained by their obligate association with persistent spring snow cover for successful reproductive denning and by an upper limit of thermoneutrality. To investigate this hypothesis, we developed a composite of MODIS...

  8. Phase-field model of domain structures in ferroelectric thin films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Y. L.; Hu, S. Y.; Liu, Z. K.

    A phase-field model for predicting the coherent microstructure evolution in constrained thin films is developed. It employs an analytical elastic solution derived for a constrained film with arbitrary eigenstrain distributions. The domain structure evolution during a cubic-to-tetragonal proper ferroelectric phase transition is studied. It is shown that the model is able to simultaneously predict the effects of substrate constraint and temperature on the volume fractions of domain variants, domain-wall orientations, domain shapes, and their temporal evolution. © 2001 American Institute of Physics.

  9. Including metabolite concentrations into flux balance analysis: thermodynamic realizability as a constraint on flux distributions in metabolic networks

    PubMed Central

    Hoppe, Andreas; Hoffmann, Sabrina; Holzhütter, Hermann-Georg

    2007-01-01

    Background In recent years, constrained optimization – usually referred to as flux balance analysis (FBA) – has become a widely applied method for the computation of stationary fluxes in large-scale metabolic networks. The striking advantage of FBA as compared to kinetic modeling is that it basically requires only knowledge of the stoichiometry of the network. On the other hand, results of FBA are to a large degree hypothetical because the method relies on plausible but hardly provable optimality principles that are thought to govern metabolic flux distributions. Results To augment the reliability of FBA-based flux calculations we propose an additional side constraint which assures thermodynamic realizability, i.e. that the flux directions are consistent with the corresponding changes of Gibbs free energy. The latter depend on metabolite levels for which plausible ranges can be inferred from experimental data. Computationally, our method results in the solution of a mixed integer linear optimization problem with a quadratic scoring function. An optimal flux distribution together with a metabolite profile is determined which assures thermodynamic realizability with minimal deviations of metabolite levels from their expected values. We applied our novel approach to two exemplary metabolic networks of different complexity, the metabolic core network of erythrocytes (30 reactions) and the metabolic network iJR904 of Escherichia coli (931 reactions). Our calculations show that increasing network complexity entails increasing sensitivity of predicted flux distributions to variations of standard Gibbs free energy changes and metabolite concentration ranges. We demonstrate the usefulness of our method for assessing critical concentrations of external metabolites preventing attainment of a metabolic steady state. Conclusion Our method incorporates the thermodynamic link between flux directions and metabolite concentrations into a practical computational algorithm. 
This overcomes a weakness of conventional FBA, namely its reliance on intuitive assumptions about the reversibility of biochemical reactions, and enables the computation of reliable flux distributions even under extreme network conditions (e.g. enzyme inhibition, depletion of substrates, or accumulation of end products) where metabolite concentrations may be drastically altered. PMID:17543097
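
The core FBA computation described above can be sketched as a linear program. Below is a minimal illustration on a hypothetical three-reaction toy network (all names and values invented for illustration); the thermodynamic side constraints the authors add would turn this into a mixed-integer problem and are omitted here:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: A_ext -> A -> B -> B_ext, with internal metabolites A, B.
# Rows of S are metabolites, columns are reactions v1..v3.
S = np.array([
    [1.0, -1.0,  0.0],   # A: produced by v1, consumed by v2
    [0.0,  1.0, -1.0],   # B: produced by v2, consumed by v3
])

# FBA: maximize the export flux v3 subject to the steady-state condition
# S v = 0 and capacity bounds on each reaction.
c = np.array([0.0, 0.0, -1.0])        # linprog minimizes, so negate
bounds = [(0, 10), (0, 5), (0, 10)]   # v2 is capacity-limited to 5
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)   # steady state forces v1 = v2 = v3; the v2 bound caps all at 5
```

The steady-state equality constraints propagate the tightest capacity bound through the whole pathway, which is the basic mechanism FBA exploits.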

  10. Constraining the effects of permeability uncertainty for geologic CO2 sequestration in a basalt reservoir

    NASA Astrophysics Data System (ADS)

    Jayne, R., Jr.; Pollyea, R.

    2016-12-01

    Carbon capture and sequestration (CCS) in geologic reservoirs is one strategy for reducing anthropogenic CO2 emissions from large-scale point-source emitters. Recent developments at the CarbFix CCS pilot in Iceland have shown that basalt reservoirs are highly effective for permanent mineral trapping on the basis of CO2-water-rock interactions, which result in the formation of carbonate minerals. In order to advance our understanding of basalt sequestration in large igneous provinces, this research uses numerical simulation to evaluate the feasibility of industrial-scale CO2 injections in the Columbia River Basalt Group (CRBG). Although bulk reservoir properties are well constrained on the basis of field and laboratory testing from the Wallula Basalt Sequestration Pilot Project, there remains significant uncertainty in the spatial distribution of permeability at the scale of individual basalt flows. Geostatistical analysis of hydrologic data from 540 wells illustrates that CRBG reservoirs are reasonably modeled as layered heterogeneous systems on the basis of basalt flow morphology; however, the regional dataset is insufficient to constrain permeability variability at the scale of an individual basalt flow. As a result, permeability distribution for this modeling study is established by centering the lognormal permeability distribution in the regional dataset over the bulk permeability measured at the Wallula site, which results in a spatially random permeability distribution within the target reservoir. In order to quantify the effects of this permeability uncertainty, CO2 injections are simulated within 50 equally probable synthetic reservoir domains. Each model domain comprises three-dimensional geometry with 530,000 grid blocks, and fracture-matrix interaction is simulated as interacting continua for the two low permeability layers (flow interiors) bounding the injection zone. 
Results from this research illustrate that permeability uncertainty at the scale of individual basalt flows may significantly impact both injection pressure accumulation and CO2 distribution.
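
The sampling step described above, drawing equally probable spatially random lognormal permeability fields centered on a measured bulk value, can be sketched as follows (all numerical values are hypothetical, not the study's calibrated parameters):

```python
import numpy as np

def lognormal_perm_field(shape, k_bulk, sigma_log10, rng):
    """One equally probable realization of a spatially random permeability
    field: log10(k) is normally distributed, centered on the bulk value."""
    log10_k = rng.normal(np.log10(k_bulk), sigma_log10, size=shape)
    return 10.0 ** log10_k

rng = np.random.default_rng(42)
k_bulk = 1e-13    # m^2, bulk permeability (hypothetical value)
sigma = 0.5       # spread of log10(k) (hypothetical, regional-dataset style)

# 50 equally probable reservoir realizations on a coarse 3-D grid
fields = [lognormal_perm_field((20, 20, 10), k_bulk, sigma, rng)
          for _ in range(50)]

# The geometric mean of each realization clusters around the bulk value
geo_means = [10.0 ** np.log10(f).mean() for f in fields]
```

Running the same injection simulation over all 50 realizations then gives an empirical spread of pressure and CO2-plume outcomes attributable purely to permeability uncertainty.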

  11. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception.

    PubMed

    Skipper, Jeremy I; Devlin, Joseph T; Lametti, Daniel R

    2017-01-01

    Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions is ubiquitously active and forms multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor-only and acoustic-only models of speech perception and with classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminate acoustic patterns as listening context requires. Copyright © 2016. Published by Elsevier Inc.

  12. Pumping strategies for management of a shallow water table: The value of the simulation-optimization approach

    USGS Publications Warehouse

    Barlow, P.M.; Wagner, B.J.; Belitz, K.

    1996-01-01

    The simulation-optimization approach is used to identify ground-water pumping strategies for control of the shallow water table in the western San Joaquin Valley, California, where shallow ground water threatens continued agricultural productivity. The approach combines the use of ground-water flow simulation with optimization techniques to build on and refine pumping strategies identified in previous research that used flow simulation alone. Use of the combined simulation-optimization model resulted in a 20 percent reduction in the area subject to a shallow water table over that identified by use of the simulation model alone. The simulation-optimization model identifies increasingly effective pumping strategies for control of the water table as the complexity of the problem increases; that is, as the number of subareas in which pumping is to be managed increases, the simulation-optimization model is better able to discriminate areally among subareas to determine optimal pumping locations. The simulation-optimization approach provides an improved understanding of controls on the ground-water flow system and management alternatives that can be implemented in the valley. In particular, results of the simulation-optimization model indicate that optimal pumping strategies are constrained by the existing distribution of wells between the semiconfined and confined zones of the aquifer, by the distribution of sediment types (and associated hydraulic conductivities) in the western valley, and by the historical distribution of pumping throughout the western valley.
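
The simulation-optimization coupling described above is commonly implemented with a response-matrix formulation: by linear superposition, drawdown at each control point responds linearly to the pumping rates, with the response coefficients obtained from repeated flow-model runs. A minimal sketch with entirely hypothetical numbers (not the San Joaquin Valley model):

```python
import numpy as np
from scipy.optimize import linprog

# Response-matrix formulation: drawdown at control point i is a linear
# combination of pumping rates q_j. R would come from flow-model runs;
# all numbers here are hypothetical.
h0 = np.array([2.0, 1.5, 1.8])   # head above target at 3 control points (m)
R = np.array([[0.8, 0.2, 0.1],   # drawdown per unit pumping
              [0.2, 0.7, 0.2],
              [0.1, 0.3, 0.9]])

# Minimize total pumping while lowering every head to the target or below:
# h0 - R q <= 0, i.e. -R q <= -h0, with per-well capacity limits.
res = linprog(np.ones(3), A_ub=-R, b_ub=-h0, bounds=[(0, 5)] * 3)
print(res.x, res.fun)   # optimal pumping rates and total pumping
```

Adding more managed subareas adds columns to R, which is what lets the optimizer discriminate areally among candidate pumping locations.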

  13. Remote Sensing-Based Detection and Spatial Pattern Analysis for Geo-Ecological Niche Modeling of Tillandsia SPP. In the Atacama, Chile

    NASA Astrophysics Data System (ADS)

    Wolf, N.; Siegmund, A.; del Río, C.; Osses, P.; García, J. L.

    2016-06-01

    In the coastal Atacama Desert in Northern Chile plant growth is constrained to so-called `fog oases' dominated by monospecific stands of the genus Tillandsia. Adapted to the hyperarid environmental conditions, these plants specialize on the foliar uptake of fog as main water and nutrient source. It is this characteristic that leads to distinctive macro- and micro-scale distribution patterns, reflecting complex geo-ecological gradients shaped mainly by the spatiotemporal occurrence of coastal fog, i.e. the South Pacific stratocumulus clouds reaching inland. The current work employs remote sensing, machine learning and spatial pattern/GIS analysis techniques to acquire detailed information on the presence and state of Tillandsia spp. in the Tarapacá region as a basis for better understanding the bioclimatic and topographic constraints determining the distribution patterns of Tillandsia spp. Spatial and spectral predictors extracted from WorldView-3 satellite data are used to map present Tillandsia vegetation in the Tarapacá region. Regression models on Vegetation Cover Fraction (VCF) are generated combining satellite-based as well as topographic variables and using aggregated high spatial resolution information on vegetation cover derived from UAV flight campaigns as a reference. The results are a first step towards mapping and modelling the topographic as well as bioclimatic factors explaining the spatial distribution patterns of Tillandsia fog oases in the Atacama, Chile.
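
The regression step described above, predicting vegetation cover fraction from satellite and topographic predictors against a UAV-derived reference, can be sketched with ordinary least squares on synthetic data (all predictors, coefficients, and values here are invented stand-ins, not the study's actual variables):

```python
import numpy as np

# Synthetic stand-in for the real inputs: per-pixel predictors (e.g. a
# spectral index, elevation, slope) and a UAV-derived vegetation cover
# fraction (VCF) as the reference.
rng = np.random.default_rng(1)
X = rng.random((200, 3))
true_w = np.array([0.5, 0.3, -0.2])
vcf = X @ true_w + 0.05 + rng.normal(0.0, 0.02, 200)

# Ordinary least squares from predictors to VCF, the same structure as a
# satellite-to-cover regression model:
A = np.column_stack([X, np.ones(200)])     # add an intercept column
w, *_ = np.linalg.lstsq(A, vcf, rcond=None)
pred = A @ w
r2 = 1 - ((vcf - pred) ** 2).sum() / ((vcf - vcf.mean()) ** 2).sum()
print(w, r2)   # recovered coefficients and goodness of fit
```

In practice a nonlinear learner is often substituted for the linear model, but the predictor-to-reference structure is the same.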

  14. Using a constrained formulation based on probability summation to fit receiver operating characteristic (ROC) curves

    NASA Astrophysics Data System (ADS)

    Swensson, Richard G.; King, Jill L.; Good, Walter F.; Gur, David

    2000-04-01

    A constrained ROC formulation from probability summation is proposed for measuring observer performance in detecting abnormal findings on medical images. This assumes the observer's detection or rating decision on each image is determined by a latent variable that characterizes the specific finding (type and location) considered most likely to be a target abnormality. For positive cases, this 'maximum-suspicion' variable is assumed to be either the value for the actual target or for the most suspicious non-target finding, whichever is the greater (more suspicious). Unlike the usual ROC formulation, this constrained formulation guarantees a 'well-behaved' ROC curve that always equals or exceeds chance-level decisions and cannot exhibit an upward 'hook.' Its estimated parameters specify the accuracy for separating positive from negative cases, and they also predict accuracy in locating or identifying the actual abnormal findings. The present maximum-likelihood procedure (runs on a PC with Windows 95 or NT) fits this constrained formulation to rating-ROC data using normal distributions with two free parameters. Fits of the conventional and constrained ROC formulations are compared for continuous and discrete-scale ratings of chest films in a variety of detection problems, both for localized lesions (nodules, rib fractures) and for diffuse abnormalities (interstitial disease, infiltrates or pneumothorax). The two fitted ROC curves are nearly identical unless the conventional ROC has an ill-behaved 'hook' below the constrained ROC.
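
The 'maximum-suspicion' construction can be illustrated with a small Monte Carlo sketch (the latent distributions below are assumed for illustration only): taking the maximum of the target suspicion and a non-target suspicion drawn from the same distribution as the negatives guarantees an ROC curve that never dips below the chance line:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Negative cases: suspicion equals the most suspicious non-target finding.
neg = rng.normal(0.0, 1.0, n)

# Positive cases, 'maximum-suspicion' rule: the greater of the target's
# suspicion and the most suspicious non-target finding (assumed normals).
target = rng.normal(1.5, 1.0, n)
nontarget = rng.normal(0.0, 1.0, n)
pos = np.maximum(target, nontarget)

# Empirical ROC: sweep thresholds, estimate false/true positive fractions.
thresholds = np.linspace(-4.0, 6.0, 101)
fpf = np.array([(neg > t).mean() for t in thresholds])
tpf = np.array([(pos > t).mean() for t in thresholds])

# Because each positive is at least as suspicious as a draw from the
# negative distribution, TPF >= FPF holds at every threshold: no 'hook'.
print(float((tpf - fpf).max()))
```

A conventional binormal fit with unequal variances can cross below the chance line at one end; the max() construction rules that out by design.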

  15. Droplet size and velocity distributions for spray modelling

    NASA Astrophysics Data System (ADS)

    Jones, D. P.; Watkins, A. P.

    2012-01-01

    Methods for constructing droplet size distributions and droplet velocity profiles are examined as a basis for the Eulerian spray model proposed in Beck and Watkins (2002, 2003) [5,6]. Within the spray model, both distributions must be calculated at every control volume at every time-step where the spray is present, and valid distributions must be guaranteed. Results show that the Maximum Entropy formalism combined with the Gamma distribution satisfies these conditions for the droplet size distributions. Approximating the droplet velocity profile is shown to be considerably more difficult due to the fact that it does not have compact support. An exponential model with a constrained exponent offers plausible profiles.
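
The Gamma-distribution construction can be sketched by matching a Gamma density to two measured moments (a simple method-of-moments fit, used here as an illustration rather than the paper's Maximum Entropy derivation; the numerical values are hypothetical). Any such fit is guaranteed to be a valid, normalized size distribution:

```python
import numpy as np
from scipy import stats

# Two measured moments of the droplet-size distribution (values assumed
# for illustration):
mean_d = 50e-6   # mean diameter (m)
var_d = 4e-10    # variance of diameter (m^2)

# Gamma distribution matched to these moments (method of moments):
shape = mean_d ** 2 / var_d
scale = var_d / mean_d
dsd = stats.gamma(a=shape, scale=scale)

# The result is a valid (non-negative, normalized) size distribution
# that reproduces the input moments.
d = np.linspace(1e-6, 200e-6, 400)
pdf = dsd.pdf(d)
print(dsd.mean(), dsd.std())
```

Guaranteed validity of this kind is exactly the property the spray model needs at every control volume and time-step.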

  16. Geothermal prospection in the Greater Geneva Basin (Switzerland and France): Structural and reservoir quality assessment

    NASA Astrophysics Data System (ADS)

    Rusillon, Elme; Clerc, Nicolas; Makhloufi, Yasin; Brentini, Maud; Moscariello, Andrea

    2017-04-01

    A reservoir assessment was performed in the Greater Geneva Basin to evaluate the geothermal resources potential of low to medium enthalpy (Moscariello, 2016). For this purpose, a detailed structural analysis of the basin was performed (Clerc et al., 2016) simultaneously with a reservoir appraisal study including petrophysical property assessment in a consistent sedimentological and stratigraphical frame (Brentini et al., 2017). This multi-disciplinary study was organised in 4 steps: (1) investigation of the surrounding outcrops to understand the stratigraphy and lateral facies distribution of the sedimentary sequence from Permo-Carboniferous to Lower Cretaceous units; (2) development of 3D geological models derived from 2D seismic and well data focusing on the structural scheme of the basin to better constrain the tectonic influence on facies distribution and to assess potential hydraulic connectivity through faults between reservoir units; (3) evaluation of the distribution, geometry, sedimentology and petrophysical properties of potential reservoir units from well data; (4) identification and selection of the most promising reservoir units for in-depth rock type characterization and 3D modeling. Petrophysical investigations revealed that the Kimmeridgian-Tithonian Reef Complex and the underlying Calcaires de Tabalcon units are the most promising geothermal reservoir targets (porosity range 10-20%; permeability up to 1 mD). Best reservoir properties are measured in patch reefs and high-energy peri-reefal depositional environments, which are surrounded by synchronous tight lagoonal deposits. Associated highly porous dolomitized intervals reported in the western part of the basin also provide enhanced reservoir quality. 
The distribution and geometry of the best reservoir bodies are complex and constrained by (1) palaeotopography, which can be affected by synsedimentary fault activity during Mesozoic times, (2) sedimentary factors such as hydrodynamics, sea level variations, or sedimentation rates and (3) diagenetic history (Makhloufi et al., 2017). A detailed structural characterization of the basin using 2D seismic data reveals the existence of several wrench fault zones and intra-basinal thrusts across the basin, which could act as hydraulic conduits and play a key role in connecting the most productive reservoir facies. To understand the propagation of these heterogeneous reservoirs, rock types are currently defined and will be integrated into 3D geological models. This integrated study allows us to better understand the distribution and properties of productive reservoir facies as well as hydraulic connectivity zones within the study area. This provides consistent knowledge for future geothermal exploration steps toward the successful development of this sustainable energy resource in the Greater Geneva Basin. Brentini et al. 2017: Geothermal prospection in the Greater Geneva Basin: integration of geological data in the new Information System. Abstract, EGU General Assembly 2017, Vienna, Austria. Clerc et al. 2016: Structural Modeling of the Geneva Basin for Geothermal Ressource Assessment. Abstract, 14th Swiss Geoscience Meeting, Geneva, Switzerland. Makhloufi et al. 2017: Geothermal prospection in the Greater Geneva Basin (Switzerland and France): impact of diagenesis on reservoir properties of the Upper Jurassic carbonate sediments. Abstract, EGU General Assembly 2017, Vienna, Austria. Moscariello, A. 2016: Geothermal exploration in SW Switzerland. Proceedings, European Geothermal Congress 2016, Strasbourg, France.

  17. Direct brain recordings reveal hippocampal rhythm underpinnings of language processing.

    PubMed

    Piai, Vitória; Anderson, Kristopher L; Lin, Jack J; Dewar, Callum; Parvizi, Josef; Dronkers, Nina F; Knight, Robert T

    2016-10-04

    Language is classically thought to be supported by perisylvian cortical regions. Here we provide intracranial evidence linking the hippocampal complex to linguistic processing. We used direct recordings from the hippocampal structures to investigate whether theta oscillations, pivotal in memory function, track the amount of contextual linguistic information provided in sentences. Twelve participants heard sentences that were either constrained ("She locked the door with the") or unconstrained ("She walked in here with the") before presentation of the final word ("key"), shown as a picture that participants had to name. Hippocampal theta power increased for constrained relative to unconstrained contexts during sentence processing, preceding picture presentation. Our study implicates hippocampal theta oscillations in a language task using natural language associations that do not require memorization. These findings reveal that the hippocampal complex contributes to language in an active fashion, relating incoming words to stored semantic knowledge, a necessary process in the generation of sentence meaning.

  18. New Abstraction Networks and a New Visualization Tool in Support of Auditing the SNOMED CT Content

    PubMed Central

    Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan

    2012-01-01

    Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT. PMID:23304293

  19. New abstraction networks and a new visualization tool in support of auditing the SNOMED CT content.

    PubMed

    Geller, James; Ochs, Christopher; Perl, Yehoshua; Xu, Junchuan

    2012-01-01

    Medical terminologies are large and complex. Frequently, errors are hidden in this complexity. Our objective is to find such errors, which can be aided by deriving abstraction networks from a large terminology. Abstraction networks preserve important features but eliminate many minor details, which are often not useful for identifying errors. Providing visualizations for such abstraction networks aids auditors by allowing them to quickly focus on elements of interest within a terminology. Previously we introduced area taxonomies and partial area taxonomies for SNOMED CT. In this paper, two advanced, novel kinds of abstraction networks, the relationship-constrained partial area subtaxonomy and the root-constrained partial area subtaxonomy are defined and their benefits are demonstrated. We also describe BLUSNO, an innovative software tool for quickly generating and visualizing these SNOMED CT abstraction networks. BLUSNO is a dynamic, interactive system that provides quick access to well organized information about SNOMED CT.

  20. Motion Planning and Synthesis of Human-Like Characters in Constrained Environments

    NASA Astrophysics Data System (ADS)

    Zhang, Liangjun; Pan, Jia; Manocha, Dinesh

    We give an overview of our recent work on generating natural-looking human motion in constrained environments with multiple obstacles. This includes a whole-body motion planning algorithm for high DOF human-like characters. The planning problem is decomposed into a sequence of low dimensional sub-problems. We use a constrained coordination scheme to solve the sub-problems in an incremental manner and a local path refinement algorithm to compute collision-free paths in tight spaces and satisfy the static stability constraint on the CoM. We also present a hybrid algorithm to generate plausible motion by combining the motion computed by our planner with mocap data. We demonstrate the performance of our algorithm on a 40 DOF human-like character and generate efficient motion strategies for object placement, bending, walking, and lifting in complex environments.

  1. nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kusina, A.; Kovarik, Karol; Jezo, T.

    2015-09-01

    We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792 and concentrates on the comparison with other groups providing nuclear parton distributions.

  2. nCTEQ15 - Global analysis of nuclear parton distributions with uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kusina, A.; Kovarik, K.; Jezo, T.

    2015-09-04

    We present the first official release of the nCTEQ nuclear parton distribution functions with errors. The main addition to the previous nCTEQ PDFs is the introduction of PDF uncertainties based on the Hessian method. Another important addition is the inclusion of pion production data from RHIC that give us a handle on constraining the gluon PDF. This contribution summarizes our results from arXiv:1509.00792, and concentrates on the comparison with other groups providing nuclear parton distributions.

  3. Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays

    PubMed Central

    Trucco, Andrea; Traverso, Federico; Crocco, Marco

    2015-01-01

    For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
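
The broadside-versus-end-fire directivity comparison can be sketched numerically for an isotropic 3-D noise field, a standard textbook model rather than the paper's constrained-optimization setup. With delay-and-sum weights, broadside steering needs only real coefficients while end-fire needs complex ones, and at sub-half-wavelength spacing end-fire yields higher directivity:

```python
import numpy as np

def steering(n, d_over_lambda, theta_deg):
    """Plane-wave steering vector for an equally spaced line array
    (theta = 90 deg is broadside, theta = 0 deg is end-fire)."""
    m = np.arange(n)
    phase = 2 * np.pi * d_over_lambda * m * np.cos(np.deg2rad(theta_deg))
    return np.exp(1j * phase)

def directivity(w, a, d_over_lambda):
    """Directivity |w^H a|^2 / (w^H B w), where B is the sinc covariance
    of an isotropic 3-D noise field."""
    m = np.arange(len(w))
    B = np.sinc(2 * d_over_lambda * (m[:, None] - m[None, :]))
    return abs(np.vdot(w, a)) ** 2 / np.real(np.conj(w) @ B @ w)

n, d = 8, 0.25                            # 8 sensors, quarter-wave spacing
a_bs, a_ef = steering(n, d, 90), steering(n, d, 0)
D_bs = directivity(a_bs / n, a_bs, d)     # broadside: weights are real
D_ef = directivity(a_ef / n, a_ef, d)     # end-fire: weights are complex
print(D_bs, D_ef)   # end-fire exceeds broadside at this spacing
```

The oversteering technique in the paper recovers most of this end-fire advantage while keeping the weight vector real-valued; the sketch above only establishes the gap that makes oversteering worthwhile.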

  4. Geologic and Geophysical Framework of the Santa Rosa 7.5' Quadrangle, Sonoma County, California

    USGS Publications Warehouse

    McLaughlin, R.J.; Langenheim, V.E.; Sarna-Wojcicki, A. M.; Fleck, R.J.; McPhee, D.K.; Roberts, C.W.; McCabe, C.A.; Wan, Elmira

    2008-01-01

    The geologic and geophysical maps of the Santa Rosa 7.5' quadrangle and accompanying structure sections portray the sedimentary and volcanic stratigraphy and crustal structure of the Santa Rosa 7.5' quadrangle and provide a context for interpreting the evolution of volcanism and active faulting in this region. The quadrangle is located in the California Coast Ranges north of San Francisco Bay and is traversed by the active Rodgers Creek, Healdsburg and Maacama Fault Zones. The geologic and geophysical data presented in this report are substantial improvements over previous geologic and geophysical maps of the Santa Rosa area, allowing us to address important geologic issues. First, the geologic mapping is integrated with gravity and magnetic data, allowing us to depict the thicknesses of Cenozoic deposits, the depth and configuration of the Mesozoic basement surface, and the geometry of fault structures beneath this region to depths of several kilometers. This information has important implications for constraining the geometries of major active faults and for understanding and predicting the distribution and intensity of damage from ground shaking during earthquakes. Secondly, the geologic map and the accompanying description of the area describe in detail the distribution, geometry and complexity of faulting associated with the Rodgers Creek, Healdsburg and Bennett Valley Fault Zones and associated faults in the Santa Rosa quadrangle. The timing of fault movements is constrained by new 40Ar/39Ar ages and tephrochronologic correlations. These new data provide a better understanding of the stratigraphy of the extensive sedimentary and volcanic cover in the area and, in particular, clarify the formational affinities of Pliocene and Pleistocene nonmarine sedimentary units in the map area. 
Thirdly, the geophysics, particularly gravity data, indicate the locations of thick sections of sedimentary and volcanic fill within ground water basins of the Santa Rosa plain and Rincon, Bennett, and northwestern Sonoma Valleys, providing geohydrologists a more realistic framework for groundwater flow models.

  5. The Airborne Snow Observatory: fusion of imaging spectrometer and scanning lidar for studies of mountain snow cover (Invited)

    NASA Astrophysics Data System (ADS)

    Painter, T. H.; Andreadis, K.; Berisford, D. F.; Goodale, C. E.; Hart, A. F.; Heneghan, C.; Deems, J. S.; Gehrke, F.; Marks, D. G.; Mattmann, C. A.; McGurk, B. J.; Ramirez, P.; Seidel, F. C.; Skiles, M.; Trangsrud, A.; Winstral, A. H.; Kirchner, P.; Zimdars, P. A.; Yaghoobi, R.; Boustani, M.; Khudikyan, S.; Richardson, M.; Atwater, R.; Horn, J.; Goods, D.; Verma, R.; Boardman, J. W.

    2013-12-01

    Snow cover and its melt dominate regional climate and water resources in many of the world's mountainous regions. However, we face significant water resource challenges due to the intersection of increasing demand from population growth and changes in runoff total and timing due to climate change. Moreover, increasing temperatures in desert systems will increase dust loading to mountain snow cover, thus reducing the snow cover albedo and accelerating snowmelt runoff. The two most critical properties for understanding snowmelt runoff and timing are the spatial and temporal distributions of snow water equivalent (SWE) and snow albedo. Despite their importance in controlling volume and timing of runoff, snowpack albedo and SWE are still poorly quantified in the US and not at all in most of the globe, leaving runoff models poorly constrained. Recognizing this need, JPL developed the Airborne Snow Observatory (ASO), an imaging spectrometer and imaging LiDAR system, to quantify snow water equivalent and snow albedo, provide unprecedented knowledge of snow properties, and provide complete, robust inputs to snowmelt runoff models, water management models, and systems of the future. Critical in the design of the ASO system is the availability of snow water equivalent and albedo products within 24 hours of acquisition for timely constraint of snowmelt runoff forecast models. In spring 2013, ASO was deployed for its first year of a multi-year Demonstration Mission of weekly acquisitions in the Tuolumne River Basin (Sierra Nevada) and monthly acquisitions in the Uncompahgre River Basin (Colorado). The ASO data were used to constrain spatially distributed models of varying complexities and integrated into the operations of the O'Shaughnessy Dam on the Hetch Hetchy reservoir on the Tuolumne River. 
Here we present the first results from the ASO Demonstration Mission 1 along with modeling results with and without the constraint by the ASO's high spatial resolution and spatially complete acquisitions. ASO ultimately provides a potential foundation for coming spaceborne missions.

  6. Isotope Geochemistry of Possible Terrestrial Analogue for Martian Meteorite ALH84001

    NASA Technical Reports Server (NTRS)

    Mojzsis, Stephen J.

    2000-01-01

    We have studied the microdomain oxygen and carbon isotopic compositions by SIMS of complex carbonate rosettes from spinel lherzolite xenoliths, hosted by nepheline basanite, from the island of Spitsbergen (Norway). The Quaternary volcanic rocks containing the xenoliths erupted into a high Arctic environment and through relatively thick continental crust containing carbonate rocks. We have attempted to constrain the sources of the carbonates in these rocks by combined O-18/O-16 and C-13/C-12 ratio measurements in 25 micron diameter spots of the carbonate and compare them to previous work based primarily on trace-element distributions. The origin of these carbonates can be interpreted in terms of either contamination by carbonate country rock during ascent of the xenoliths in the host basalt, or more probably by hydrothermal processes after emplacement. The isotopic composition of these carbonates from a combined δ18O (SMOW) and δ13C (PDB) standpoint precludes a primary origin of these minerals from the mantle. Here a description is given of the analysis procedure, standardization of the carbonates, major element compositions of the carbonates measured by electron microprobe, and their correlated C and O isotope compositions as measured by ion microprobe. Since these carbonate rosettes may represent a terrestrial analogue to the carbonate "globules" found in the martian meteorite ALH84001, interpretations for the origin of the features found in the Spitsbergen samples may be of interest in constraining the origin of these carbonate minerals on Mars.

  7. The genetic architecture of ecological adaptation: intraspecific variation in host plant use by the lepidopteran crop pest Chloridea virescens.

    PubMed

    Oppenheim, Sara J; Gould, Fred; Hopper, Keith R

    2018-03-01

    Intraspecific variation in ecologically important traits is a cornerstone of Darwin's theory of evolution by natural selection. The evolution and maintenance of this variation depends on genetic architecture, which in turn determines responses to natural selection. Some models suggest that traits with complex architectures are less likely to respond to selection than those with simple architectures, yet rapid divergence has been observed in such traits. The simultaneous evolutionary lability and genetic complexity of host plant use in the Lepidopteran subfamily Heliothinae suggest that architecture may not constrain ecological adaptation in this group. Here we investigate the response of Chloridea virescens, a generalist that feeds on diverse plant species, to selection for performance on a novel host, Physalis angulata (Solanaceae). P. angulata is the preferred host of Chloridea subflexa, a narrow specialist on the genus Physalis. In previous experiments, we found that the performance of C. subflexa on P. angulata depends on many loci of small effect distributed throughout the genome, but whether the same architecture would be involved in the generalist's adoption of P. angulata was unknown. Here we report a rapid response to selection in C. virescens for performance on P. angulata, and establish that the genetic architecture of intraspecific variation is quite similar to that of the interspecific differences in terms of the number, distribution, and effect sizes of the QTL involved. We discuss the impact of genetic architecture on the ability of Heliothine moths to respond to varying ecological selection pressures.

  8. Intermittent Granular Dynamics at a Seismogenic Plate Boundary.

    PubMed

    Meroz, Yasmine; Meade, Brendan J

    2017-09-29

    Earthquakes at seismogenic plate boundaries are a response to the differential motions of tectonic blocks embedded within a geometrically complex network of branching and coalescing faults. Elastic strain is accumulated at a slow strain rate on the order of 10^-15 s^-1, and released intermittently at intervals >100 yr, in the form of rapid (seconds to minutes) coseismic ruptures. The development of macroscopic models of quasistatic planar tectonic dynamics at these plate boundaries has remained challenging due to uncertainty with regard to the spatial and kinematic complexity of fault system behaviors. The characteristic length scale of kinematically distinct tectonic structures is particularly poorly constrained. Here, we analyze fluctuations in Global Positioning System observations of interseismic motion from the southern California plate boundary, identifying heavy-tailed scaling behavior. Namely, we show that, consistent with findings for slowly sheared granular media, the distribution of velocity fluctuations deviates from a Gaussian, exhibiting broad tails, and the correlation function decays as a stretched exponential. This suggests that the plate boundary can be understood as a densely packed granular medium, predicting a characteristic tectonic length scale of 91±20 km, here representing the characteristic size of tectonic blocks in the southern California fault network, and relating the characteristic duration and recurrence interval of earthquakes to the observed shear strain rate and the nanosecond crack-tip evolution time scale. Within a granular description, fault and block systems may rapidly rearrange the distribution of forces within them, driving a mixture of transient and intermittent fault slip behaviors over tectonic time scales.
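    The stretched-exponential decay of the correlation function described above can be fit directly. A minimal sketch with scipy, using synthetic data in place of the GPS-derived fluctuations (the 91 km length scale and 0.6 exponent below are illustrative "true" values, not the paper's fitted parameters):

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def stretched_exp(r, xi, beta):
        """Stretched-exponential correlation decay: C(r) = exp(-(r/xi)**beta)."""
        return np.exp(-(r / xi) ** beta)

    def fit_correlation_length(r, c):
        """Fit (r, C(r)) samples; returns the length scale xi and exponent beta."""
        popt, _ = curve_fit(stretched_exp, r, c, p0=(50.0, 1.0))
        return popt

    # Synthetic stand-in for a GPS-derived correlation function, with an
    # illustrative 'true' length scale of 91 km and stretching exponent 0.6.
    r_km = np.linspace(1.0, 400.0, 200)
    corr = stretched_exp(r_km, 91.0, 0.6)
    xi, beta = fit_correlation_length(r_km, corr)
    ```

    With noisy real data one would also propagate fit covariances, which is where an uncertainty such as ±20 km would come from.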

  9. Anatomical and functional assemblies of brain BOLD oscillations

    PubMed Central

    Baria, Alexis T.; Baliki, Marwan N.; Parrish, Todd; Apkarian, A. Vania

    2011-01-01

    Brain oscillatory activity has long been thought to have spatial properties, the details of which are unresolved. Here we examine spatial organizational rules for human brain oscillatory activity as measured by the blood oxygen level-dependent (BOLD) signal. Resting state BOLD signal was transformed into frequency space (Welch's method), averaged across subjects, and its spatial distribution studied as a function of four frequency bands, spanning the full bandwidth of BOLD. The brain showed anatomically constrained distribution of power for each frequency band. This result was replicated on a repository dataset of 195 subjects. Next, we examined larger-scale organization by parceling the neocortex into regions approximating Brodmann Areas (BAs). This indicated that BAs with simple function/connectivity (unimodal), in contrast to those with complex properties (transmodal), are dominated by low-frequency BOLD oscillations, and within the visual ventral stream we observe a graded shift of power to higher frequency bands for BAs further removed from the primary visual cortex (increased complexity), linking frequency properties of BOLD to hodology. Additionally, BOLD oscillation properties for the default mode network demonstrated that it is composed of distinct frequency-dependent regions. When the same analysis was performed on a visual-motor task, frequency-dependent global and voxel-wise shifts in BOLD oscillations could be detected at brain sites mostly outside those identified with general linear modeling. Thus, analysis of BOLD oscillations in full bandwidth uncovers novel brain organizational rules, linking anatomical structures and functional networks to characteristic BOLD oscillations. The approach also identifies changes in brain intrinsic properties in relation to responses to external inputs. PMID:21613505
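    The core computation (Welch PSD of a BOLD time series, power summed within frequency bands) can be sketched as follows. The band edges and TR below are hypothetical, not the study's exact values:

    ```python
    import numpy as np
    from scipy.signal import welch

    def band_power(ts, fs, bands):
        """Welch PSD of a BOLD time series, then total power per frequency band."""
        f, pxx = welch(ts, fs=fs, nperseg=min(256, len(ts)))
        return {name: float(pxx[(f >= lo) & (f < hi)].sum())
                for name, (lo, hi) in bands.items()}

    # Hypothetical band edges spanning the BOLD bandwidth at TR = 2 s (fs = 0.5 Hz)
    bands = {"BF1": (0.01, 0.05), "BF2": (0.05, 0.10),
             "BF3": (0.10, 0.15), "BF4": (0.15, 0.25)}
    fs = 0.5
    t = np.arange(0.0, 600.0, 1.0 / fs)
    # Toy signal: a slow 0.03 Hz oscillation plus white noise
    ts = np.sin(2 * np.pi * 0.03 * t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    powers = band_power(ts, fs, bands)
    ```

    Mapping `band_power` over every voxel time series would yield the per-band spatial power distributions studied in the paper.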

  10. Intermittent Granular Dynamics at a Seismogenic Plate Boundary

    NASA Astrophysics Data System (ADS)

    Meroz, Yasmine; Meade, Brendan J.

    2017-09-01

    Earthquakes at seismogenic plate boundaries are a response to the differential motions of tectonic blocks embedded within a geometrically complex network of branching and coalescing faults. Elastic strain is accumulated at a slow strain rate on the order of 10^-15 s^-1, and released intermittently at intervals >100 yr, in the form of rapid (seconds to minutes) coseismic ruptures. The development of macroscopic models of quasistatic planar tectonic dynamics at these plate boundaries has remained challenging due to uncertainty with regard to the spatial and kinematic complexity of fault system behaviors. The characteristic length scale of kinematically distinct tectonic structures is particularly poorly constrained. Here, we analyze fluctuations in Global Positioning System observations of interseismic motion from the southern California plate boundary, identifying heavy-tailed scaling behavior. Namely, we show that, consistent with findings for slowly sheared granular media, the distribution of velocity fluctuations deviates from a Gaussian, exhibiting broad tails, and the correlation function decays as a stretched exponential. This suggests that the plate boundary can be understood as a densely packed granular medium, predicting a characteristic tectonic length scale of 91±20 km, here representing the characteristic size of tectonic blocks in the southern California fault network, and relating the characteristic duration and recurrence interval of earthquakes to the observed shear strain rate and the nanosecond crack-tip evolution time scale. Within a granular description, fault and block systems may rapidly rearrange the distribution of forces within them, driving a mixture of transient and intermittent fault slip behaviors over tectonic time scales.

  11. Why convective heat transport in the solar nebula was inefficient

    NASA Technical Reports Server (NTRS)

    Cassen, P.

    1993-01-01

    The radial distributions of the effective temperatures of circumstellar disks associated with pre-main sequence (T Tauri) stars are relatively well-constrained by ground-based and spacecraft infrared photometry and radio continuum observations. If the mechanisms by which energy is transported vertically in the disks are understood, these data can be used to constrain models of the thermal structure and evolution of the solar nebula. Several studies of the evolution of the solar nebula have included the calculation of the vertical transport of heat by convection. Such calculations rely on a mixing length theory of transport and some assumption regarding the vertical distribution of internal dissipation. In all cases, the results of these calculations indicate that transport by radiation dominates that by convection, even when the nebula is convectively unstable. A simple argument is presented that demonstrates the generality (and limits) of this result, regardless of the details of mixing length theory or the precise distribution of internal heating. It is based on the idea that the radiative gradient in an optically thick nebula generally does not greatly exceed the adiabatic gradient.

  12. Longitudinal Double Spin Asymmetries of π0-Jet Correlations in Polarized Proton Collisions at √s = 510 GeV at STAR

    NASA Astrophysics Data System (ADS)

    Wang, Yaping

    One of the primary goals of the spin physics program at STAR is to constrain the polarized gluon distribution function, Δg(x), by measuring the longitudinal double-spin asymmetry (ALL) of various final-state channels. Using a jet in the mid-rapidity region |η| < 0.9 correlated with an azimuthally back-to-back π0 in the forward rapidity region 0.8 < η < 2.0 provides a new possibility to access the Δg(x) distribution at Bjorken-x down to 0.01. Compared to inclusive jet or inclusive π0 measurements, this channel also makes it possible to constrain the initial parton kinematics. In these proceedings, we present the status of the analysis of the π0-jet ALL in longitudinally polarized proton+proton collisions at √s = 510 GeV with 80 pb^-1 of data taken during the 2012 RHIC run. We also compare the projected ALL uncertainties to theoretical predictions of the ALL from next-to-leading order (NLO) calculations with different polarized parton distribution functions.
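    As a rough illustration of how a longitudinal double-spin asymmetry is formed from helicity-sorted yields (a generic estimator, not STAR's full analysis chain; the yields and ~55% beam polarizations below are invented):

    ```python
    def double_spin_asymmetry(n_same, n_opp, p_blue, p_yellow, rel_lumi=1.0):
        """Raw longitudinal double-spin asymmetry from same- and opposite-helicity
        yields, corrected for relative luminosity and both beam polarizations."""
        raw = (n_same - rel_lumi * n_opp) / (n_same + rel_lumi * n_opp)
        return raw / (p_blue * p_yellow)

    # Invented yields; RHIC-like beam polarizations of 55% for each beam
    a_ll = double_spin_asymmetry(10100, 9900, 0.55, 0.55)
    ```

    Dividing by the product of beam polarizations is what turns the small raw counting asymmetry into the physical ALL, which is why polarization uncertainties propagate directly into the measurement.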

  13. Progenitor Masses for Every Nearby Historic Core-Collapse Supernova

    NASA Astrophysics Data System (ADS)

    Williams, Benjamin

    2016-10-01

    Some of the most energetic explosions in the Universe are the core-collapse supernovae (CCSNe) that arise from the death of massive stars. They herald the birth of neutron stars and black holes, are prodigious emitters of neutrinos and gravitational waves, influence galactic hydrodynamics, trigger further star formation, and are a major site for nucleosynthesis, yet even the most basic elements of CCSN theory are poorly constrained by observations. Specifically, there are too few observations to constrain the progenitor mass distribution and fewer observations still to constrain the mapping between progenitor mass and explosion type (e.g., IIP, IIL, IIb, Ib/c). Combining previous measurements with 9 proposed HST pointings covering 13 historic CCSNe, we plan to obtain progenitor mass measurements for all cataloged historic CCSNe within 8 Mpc, optimizing observational mass constraints for CCSN theory.

  14. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

    This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis. Hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments using both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow using the WC approach generally produces smaller mean absolute differences as well as higher correlation between the a priori and the updated states than the SC approach, while producing similar or smaller root mean square error of streamflow analysis and prediction.
Large differences were found in both lumped and distributed modeling cases between the updated and the a priori lower zone tension and primary free water contents for both WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Also presented are the findings from this study and key issues relevant to WC DA approaches using hydrologic models.
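    The EMD comparison of a priori and updated state grids can be sketched with scipy's 1-D Wasserstein distance on flattened fields (a 1-D simplification of the full spatial EMD; the soil-moisture grids below are hypothetical):

    ```python
    import numpy as np
    from scipy.stats import wasserstein_distance

    def spatial_similarity(prior_states, updated_states):
        """Earth Mover's Distance between flattened a priori and updated state
        grids; smaller values mean the update stayed closer to the a priori."""
        return wasserstein_distance(np.ravel(prior_states), np.ravel(updated_states))

    prior = np.full((10, 10), 0.30)        # hypothetical soil-moisture state grid
    homog = prior + 0.01                   # small, spatially homogeneous adjustment
    heterog = prior + 0.10 * np.random.default_rng(1).random((10, 10))
    ```

    A homogeneous adjustment yields a much smaller EMD than a heterogeneous one of comparable magnitude, mirroring the paper's finding that homogeneous error modeling keeps updated states closer to the a priori.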

  15. Application of constrained deconvolution technique for reconstruction of electron bunch profile with strongly non-Gaussian shape

    NASA Astrophysics Data System (ADS)

    Geloni, G.; Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.

    2004-08-01

    An effective and practical technique based on the detection of the coherent synchrotron radiation (CSR) spectrum can be used to characterize the profile function of ultra-short bunches. The CSR spectrum measurement has an important limitation: no spectral phase information is available, and the complete profile function cannot be obtained in general. In this paper we propose to use a constrained deconvolution method for bunch profile reconstruction based on a priori known information about the formation of the electron bunch. Application of the method is illustrated with the practically important example of a bunch formed in a single bunch compressor. Downstream of the bunch compressor the bunch charge distribution is strongly non-Gaussian, with a narrow leading peak and a long tail. The longitudinal bunch distribution is derived by measuring the bunch tail constant with a streak camera and by using a priori available information about the profile function.
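    As a generic illustration of regularized deconvolution (a plain Tikhonov frequency-domain sketch, not the authors' CSR-specific constrained method, and with an invented peak-plus-tail profile):

    ```python
    import numpy as np

    def tikhonov_deconvolve(b, kernel, lam=1e-4):
        """Frequency-domain Tikhonov-regularized deconvolution of b = kernel (*) x:
        X = conj(K) B / (|K|^2 + lam), damping frequencies the kernel suppresses."""
        K = np.fft.fft(kernel, n=len(b))
        B = np.fft.fft(b)
        return np.real(np.fft.ifft(np.conj(K) * B / (np.abs(K) ** 2 + lam)))

    # Invented non-Gaussian profile: narrow leading peak plus a long tail
    n = 256
    t = np.arange(n)
    x = np.exp(-0.5 * ((t - 40) / 3.0) ** 2) \
        + np.where(t >= 40, 0.3 * np.exp(-(t - 40) / 40.0), 0.0)
    kern = np.exp(-0.5 * ((t - n // 2) / 5.0) ** 2)
    kern = np.fft.ifftshift(kern / kern.sum())          # centre kernel at index 0
    b = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(kern)))  # smeared measurement
    x_rec = tikhonov_deconvolve(b, kern)
    ```

    The regularization term plays the role of the a priori constraint: it selects a stable solution among the many profiles consistent with the band-limited measurement.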

  16. GROWTH AND INEQUALITY: MODEL EVALUATION BASED ON AN ESTIMATION-CALIBRATION STRATEGY

    PubMed Central

    Jeong, Hyeok; Townsend, Robert

    2010-01-01

    This paper evaluates two well-known models of growth with inequality that have explicit micro underpinnings related to household choice. With incomplete markets or transactions costs, wealth can constrain investment in business and the choice of occupation and also constrain the timing of entry into the formal financial sector. Using the Thai Socio-Economic Survey (SES), we estimate the distribution of wealth and the key parameters that best fit cross-sectional data on household choices and wealth. We then simulate the model economies for two decades at the estimated initial wealth distribution and analyze whether the model economies at those micro-fit parameter estimates can explain the observed macro and sectoral aspects of income growth and inequality change. Both models capture important features of Thai reality. Anomalies and comparisons across the two distinct models yield specific suggestions for improved research on the micro foundations of growth and inequality. PMID:20448833

  17. Putting a Ring on it: Light Echoes from X-ray Transients as Probes of Interstellar Dust and Galactic Structure

    NASA Astrophysics Data System (ADS)

    Heinz, Sebastian

    2017-09-01

    When an X-ray transient exhibits a bright flare, scattering by interstellar dust clouds can give rise to a light echo in the form of concentric rings. To date, three such echoes have been detected, each leading to significant discoveries and press attention. We propose a Target-of-Opportunity campaign to observe future echoes with the aim of following the temporal evolution of the echo in order to (a) map the 3D distribution of interstellar dust along the line of sight to parsec accuracy, (b) constrain the composition and grain size distribution of ISM dust in each of the clouds towards the source, (c) measure the distance to the X-ray source, (d) constrain the velocity dispersion of molecular clouds and (e) search for evidence of streaming velocities by combining X-ray and CO data on the clouds.
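    The measurements above rest on the standard small-angle echo geometry, in which the ring's angular radius grows with time delay as θ² = 2ct(Ds − Dd)/(Dd·Ds) for a source at distance Ds and a thin dust screen at Dd. A sketch with invented distances:

    ```python
    import numpy as np

    PC_M = 3.0857e16              # metres per parsec
    C_PC_S = 2.998e8 / PC_M       # speed of light in pc/s

    def ring_radius_arcsec(t_s, d_src_pc, d_dust_pc):
        """Angular radius of a dust-scattering echo ring a time t_s after the
        flare (small-angle approximation): theta^2 = 2 c t (Ds - Dd) / (Dd Ds)."""
        theta2 = 2.0 * C_PC_S * t_s * (d_src_pc - d_dust_pc) / (d_dust_pc * d_src_pc)
        return float(np.degrees(np.sqrt(theta2)) * 3600.0)

    # Invented geometry: source at 5 kpc, dust screen at 2 kpc, 100 days after flare
    theta = ring_radius_arcsec(100 * 86400.0, 5000.0, 2000.0)
    ```

    Inverting this relation per ring is what lets the expansion rate constrain the dust distances, and, with an independent dust distance, the source distance.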

  18. Biomedical innovation in the era of health care spending constraints.

    PubMed

    Robinson, James C

    2015-02-01

    Insurers, hospitals, physicians, and consumers are increasingly weighing price against performance in their decisions to purchase and use new drugs, devices, and other medical technologies. This approach will tend to affect biomedical innovation adversely by reducing the revenues available for research and development. However, a more constrained funding environment may also have positive impacts. The passing era of largely cost-unconscious demand fostered the development of incremental innovations priced at premium levels. The new constrained-funding era will require medical technology firms to design their products with the features most valued by payers and patients, price them at levels justified by clinical performance, and manage distribution through organizations rather than to individual physicians. The emerging era has the potential to increase the social value of innovation by focusing industry on design, pricing, and distribution principles that are more closely aligned with the preferences-and pocketbooks-of its customers. Project HOPE—The People-to-People Health Foundation, Inc.

  19. GPS source solution of the 2004 Parkfield earthquake.

    PubMed

    Houlié, N; Dreger, D; Kim, A

    2014-01-17

    We compute a series of finite-source parameter inversions of the fault rupture of the 2004 Parkfield earthquake based on 1 Hz GPS records only. We confirm that some of the co-seismic slip at shallow depth (<5 km) constrained by InSAR data processing results from early post-seismic deformation. We also show 1) that if located very close to the rupture, a GPS receiver can saturate while it remains possible to estimate the ground velocity (~1.2 m/s) near the fault, 2) that GPS waveform inversions constrain the slip distribution at depth even when GPS monuments are not located directly above the ruptured areas, and 3) that the slip distribution at depth from our best models agrees with that recovered from strong motion data. The 95th percentile of the slip amplitudes for rupture velocities ranging from 2 to 5 km/s is ~55 ± 6 cm.

  20. GPS source solution of the 2004 Parkfield earthquake

    PubMed Central

    Houlié, N.; Dreger, D.; Kim, A.

    2014-01-01

    We compute a series of finite-source parameter inversions of the fault rupture of the 2004 Parkfield earthquake based on 1 Hz GPS records only. We confirm that some of the co-seismic slip at shallow depth (<5 km) constrained by InSAR data processing results from early post-seismic deformation. We also show 1) that if located very close to the rupture, a GPS receiver can saturate while it remains possible to estimate the ground velocity (~1.2 m/s) near the fault, 2) that GPS waveform inversions constrain the slip distribution at depth even when GPS monuments are not located directly above the ruptured areas, and 3) that the slip distribution at depth from our best models agrees with that recovered from strong motion data. The 95th percentile of the slip amplitudes for rupture velocities ranging from 2 to 5 km/s is ~55 ± 6 cm. PMID:24434939
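    Estimating near-fault ground velocity from a 1 Hz GPS displacement series reduces, at its simplest, to first differences. A toy sketch with a synthetic displacement pulse (the ~1 m offset and time constants below are invented, not the Parkfield record):

    ```python
    import numpy as np

    def peak_ground_velocity(disp_m, dt=1.0):
        """Peak ground velocity (m/s) from a 1 Hz GPS displacement series,
        using first differences."""
        return float(np.max(np.abs(np.diff(disp_m) / dt)))

    # Invented near-fault record: ~1 m static offset accrued over a few seconds
    t = np.arange(0.0, 30.0, 1.0)
    disp = 0.5 * np.tanh((t - 15.0) / 1.5)
    pgv = peak_ground_velocity(disp)
    ```

    This also shows why a velocity estimate can survive receiver saturation: the difference of successive positions is meaningful even when absolute positions are degraded.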

  1. Reinforcement Learning for Constrained Energy Trading Games With Incomplete Information.

    PubMed

    Wang, Huiwei; Huang, Tingwen; Liao, Xiaofeng; Abu-Rub, Haitham; Chen, Guo

    2017-10-01

    This paper considers the problem of designing adaptive learning algorithms to seek the Nash equilibrium (NE) of the constrained energy trading game among individually strategic players with incomplete information. In this game, each player uses a learning automaton scheme to generate the action probability distribution based on his/her private information for maximizing his/her own averaged utility. It is shown that if one of the admissible mixed strategies converges to the NE with probability one, then the averaged utility and trading quantity almost surely converge to their expected values, respectively. For the given discontinuous pricing function, the utility function has already been proved to be upper semicontinuous and payoff secure, which guarantees the existence of the mixed-strategy NE. By the strict diagonal concavity of the regularized Lagrange function, the uniqueness of the NE is also guaranteed. Finally, an adaptive learning algorithm is provided to generate the strategy probability distribution for seeking the mixed-strategy NE.
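    A learning automaton of the kind described can be sketched with a standard linear reward-inaction update (which may differ from the paper's exact scheme; the three strategies and success rates below are invented):

    ```python
    import numpy as np

    def lri_update(p, action, rewarded, lr=0.05):
        """Linear reward-inaction automaton: on reward, move probability mass
        toward the chosen action; otherwise leave the distribution unchanged."""
        if rewarded:
            p = p * (1.0 - lr)
            p[action] += lr
        return p / p.sum()

    rng = np.random.default_rng(0)
    p = np.full(3, 1.0 / 3.0)               # three hypothetical trading strategies
    success = np.array([0.2, 0.8, 0.4])     # unknown per-strategy success rates
    for _ in range(2000):
        a = rng.choice(3, p=p)
        p = lri_update(p, a, rng.random() < success[a])
    ```

    Each player maintains such a distribution over its own actions, and the joint play is what converges (under the paper's conditions) toward the mixed-strategy NE.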

  2. Global-constrained hidden Markov model applied on wireless capsule endoscopy video segmentation

    NASA Astrophysics Data System (ADS)

    Wan, Yiwen; Duraisamy, Prakash; Alam, Mohammad S.; Buckles, Bill

    2012-06-01

    Accurate analysis of wireless capsule endoscopy (WCE) videos is vital but tedious. Automatic image analysis can expedite this task. Video segmentation of WCE into the four parts of the gastrointestinal tract is one way to assist a physician. The segmentation approach described in this paper integrates pattern recognition with statistical analysis. Initially, a support vector machine is applied to classify video frames into four classes using a combination of multiple color and texture features as the feature vector. A Poisson cumulative distribution, whose parameter depends on the length of segments, models the prior knowledge. This prior knowledge, together with inter-frame differences, serves as the global constraint driven by the underlying observation of each WCE video, which is fitted by a Gaussian distribution to constrain the transition probabilities of the hidden Markov model. Experimental results demonstrated the effectiveness of the approach.

  3. The dynamics of folding instability in a constrained Cosserat medium

    NASA Astrophysics Data System (ADS)

    Gourgiotis, Panos A.; Bigoni, Davide

    2017-04-01

    Different from Cauchy elastic materials, generalized continua, and in particular constrained Cosserat materials, can be designed to possess extreme (near a failure of ellipticity) orthotropy properties and in this way to model folding in a three-dimensional solid. Following this approach, folding, which is a narrow zone of highly localized bending, spontaneously emerges as a deformation pattern occurring in a strongly anisotropic solid. How this peculiar pattern interacts with wave propagation in the time-harmonic domain is revealed through the derivation of an antiplane, infinite-body Green's function, which opens the way to integral techniques for anisotropic constrained Cosserat continua. Viewed as a perturbing agent, the Green's function shows that folding, emerging near a steadily pulsating source in the limit of failure of ellipticity, is transformed into a disturbance with wavefronts parallel to the folding itself. The results of the presented study introduce the possibility of exploiting constrained Cosserat solids for propagating waves in materials displaying origami patterns of deformation. This article is part of the themed issue 'Patterning through instabilities in complex media: theory and applications.'

  4. Optimal vibration control of a rotating plate with self-sensing active constrained layer damping

    NASA Astrophysics Data System (ADS)

    Xie, Zhengchao; Wong, Pak Kin; Lo, Kin Heng

    2012-04-01

    This paper proposes a finite element model for an optimally controlled constrained layer damped (CLD) rotating plate with a self-sensing technique and frequency-dependent material properties in both the time and frequency domain. Constrained layer damping with viscoelastic material can effectively reduce vibration in rotating structures. However, most existing research models use the complex modulus approach to model viscoelastic material, and an additional iterative approach, which is only available in the frequency domain, has to be used to include the material's frequency dependency. It is meaningful to model the viscoelastic damping layer in the rotating part using anelastic displacement fields (ADF) in order to include the frequency dependency in both the time and frequency domain. Also, unlike previous ones, this finite element model treats all three layers as having both shear and extension strains, so all types of damping are taken into account. Thus, in this work, a single layer finite element is adopted to model a three-layer active constrained layer damped rotating plate in which the constraining layer is made of piezoelectric material to work as both the self-sensing sensor and actuator under a linear quadratic regulator (LQR) controller. After being compared with verified data, this newly proposed finite element model is validated and could be used for future research.
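    An LQR gain of the kind mentioned can be computed from the continuous algebraic Riccati equation. A minimal sketch for a single vibration mode standing in for one modal coordinate of the full finite element model (the modal frequency, damping, and weights below are invented):

    ```python
    import numpy as np
    from scipy.linalg import solve_continuous_are

    # One modal coordinate of a vibrating plate as a mass-spring-damper
    wn, zeta = 2.0 * np.pi * 10.0, 0.01          # hypothetical modal freq/damping
    A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
    B = np.array([[0.0], [1.0]])
    Q = np.diag([wn**2, 1.0])      # penalize displacement and velocity
    R = np.array([[1e-3]])         # control-effort weight

    P = solve_continuous_are(A, B, Q, R)
    K = np.linalg.solve(R, B.T @ P)   # optimal state-feedback gain K = R^-1 B^T P
    closed = A - B @ K                # closed-loop dynamics
    ```

    The LQR guarantee is that `closed` is stable for any positive-definite R and stabilizable (A, B), which is why the weights Q and R, rather than pole locations, are the tuning knobs.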

  5. How well can future CMB missions constrain cosmic inflation?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, Jérôme; Vennin, Vincent; Ringeval, Christophe, E-mail: jmartin@iap.fr, E-mail: christophe.ringeval@uclouvain.be, E-mail: vennin@iap.fr

    2014-10-01

    We study how the next generation of Cosmic Microwave Background (CMB) measurement missions (such as EPIC, LiteBIRD, PRISM and COrE) will be able to constrain the inflationary landscape in the hardest to disambiguate situation in which inflation is simply described by single-field slow-roll scenarios. Considering the proposed PRISM and LiteBIRD satellite designs, we simulate mock data corresponding to five different fiducial models having values of the tensor-to-scalar ratio ranging from 10^-1 down to 10^-7. We then compute the Bayesian evidences and complexities of all Encyclopædia Inflationaris models in order to assess the constraining power of PRISM alone and LiteBIRD complemented with the Planck 2013 data. Within slow-roll inflation, both designs have comparable constraining power and can rule out about three quarters of the inflationary scenarios, compared to one third for Planck 2013 data alone. However, we also show that PRISM can constrain the scalar running and has the capability to detect a violation of slow roll at second order. Finally, our results suggest that describing an inflationary model by its potential shape only, without specifying a reheating temperature, will no longer be possible given the accuracy level reached by the future CMB missions.
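    "Ruling out" scenarios in a Bayesian evidence comparison typically means grading log-Bayes factors on a Jeffreys-type scale. A sketch (thresholds of 1, 2.5 and 5 in ln B, as commonly used in the inflationary model-comparison literature):

    ```python
    def jeffreys_category(ln_bayes_factor):
        """Grade |ln B| between a model and the reference model on a
        Jeffreys-type scale (inconclusive / weak / moderate / strong)."""
        b = abs(ln_bayes_factor)
        if b < 1.0:
            return "inconclusive"
        if b < 2.5:
            return "weak"
        if b < 5.0:
            return "moderate"
        return "strong"
    ```

    Models falling into the "moderate" or "strong" disfavored categories against the best model are the ones counted as excluded by a given mission design.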

  6. Synergism and Antagonism of Proximate Mechanisms Enable and Constrain the Response to Simultaneous Selection on Body Size and Development Time: An Empirical Test Using Experimental Evolution.

    PubMed

    Davidowitz, Goggy; Roff, Derek; Nijhout, H Frederik

    2016-11-01

    Natural selection acts on multiple traits simultaneously. How mechanisms underlying such traits enable or constrain their response to simultaneous selection is poorly understood. We show how antagonism and synergism among three traits at the developmental level enable or constrain evolutionary change in response to simultaneous selection on two focal traits at the phenotypic level. After 10 generations of 25% simultaneous directional selection on all four combinations of body size and development time in Manduca sexta (Sphingidae), the changes in the three developmental traits predict 93% of the response of development time and 100% of the response of body size. When the two focal traits were under synergistic selection, the response to simultaneous selection was enabled by juvenile hormone and ecdysteroids and constrained by growth rate. When the two focal traits were under antagonistic selection, the response to selection was due primarily to change in growth rate and constrained by the two hormonal traits. The approach used here reduces the complexity of the developmental and endocrine mechanisms to three proxy traits. This generates explicit predictions for the evolutionary response to selection that are based on biologically informed mechanisms. This approach has broad applicability to a diverse range of taxa, including algae, plants, amphibians, mammals, and insects.

  7. Preparing a New Generation of Clinicians for the Era of Big Data

    PubMed Central

    Moskowitz, Ari; McSparron, Jakob; Stone, David J.; Celi, Leo Anthony

    2015-01-01

    Synopsis As medicine becomes increasingly complex and financially constrained, it will be the responsibility of every clinician to understand and participate in the enterprise of extracting lessons learned from digitally captured patient care. PMID:25688383

  8. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.
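    The mixed-integer character of the problem can be illustrated with a toy random search over discrete turbine types and continuous positions (the power model, penalty, and turbine data below are invented and are not TTP_OPT's):

    ```python
    import numpy as np

    TYPES = {0: 2.0, 1: 3.3}      # hypothetical rated power (MW) per turbine type
    MIN_SPACING = 400.0           # minimum allowed spacing (m) along the row

    def farm_power(turbine_types, xs):
        """Toy objective: total rated power minus a flat 0.5 MW wake/constraint
        penalty per pair of adjacent turbines closer than MIN_SPACING."""
        power = sum(TYPES[t] for t in turbine_types)
        violations = int(np.sum(np.diff(np.sort(xs)) < MIN_SPACING))
        return power - 0.5 * violations

    def random_search(n=4, iters=2000, seed=0):
        """Random search over the mixed variables: integer types, continuous x."""
        rng = np.random.default_rng(seed)
        best, best_val = None, -np.inf
        for _ in range(iters):
            types = rng.integers(0, 2, size=n)      # discrete decision
            xs = rng.uniform(0.0, 2000.0, size=n)   # continuous decision
            val = farm_power(types, xs)
            if val > best_val:
                best, best_val = (types, xs), val
        return best, best_val

    (best_types, best_xs), best_val = random_search()
    ```

    Real solvers replace the random sampling with gradient-based continuous steps nested inside a discrete search, but the coupling of integer and continuous variables shown here is exactly what makes the class hard.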

  9. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  10. A Composite Algorithm for Mixed Integer Constrained Nonlinear Optimization.

    DTIC Science & Technology

    1980-01-01

    de Silva [14], and Weisman and Wood [76]. A particular direct search algorithm, the simplex method, has been cited for having the potential for...spaced discrete points on a line which makes the direction suitable for an efficient integer search technique based on Fibonacci numbers. Two...defined by a subset of variables. The complex algorithm is particularly well suited for this subspace search for two reasons. First, the complex method

  11. Complex-Difference Constrained Compressed Sensing Reconstruction for Accelerated PRF Thermometry with Application to MRI Induced RF Heating

    PubMed Central

    Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.

    2014-01-01

    Purpose: To introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for MRI induced radiofrequency (RF) heating evaluation. Methods: A compressed sensing approach that exploits sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and retrospectively undersampled data acquired in ex-vivo and in-vivo studies by comparing performance with previously proposed techniques. Results: The proposed complex difference constrained compressed sensing reconstruction method improved the reconstruction of smooth and local PRF temperature change images compared to various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion: Complex difference based compressed sensing with utilization of a fully-sampled baseline image improves the reconstruction accuracy for accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099

  12. Mesozoic Crustal Thickening of the Longmenshan Belt (NE Tibet, China) by Imbrication of Basement Slices: Insights From Structural Analysis, Petrofabric and Magnetic Fabric Studies, and Gravity Modeling

    NASA Astrophysics Data System (ADS)

    Xue, Zhenhua; Martelet, Guillaume; Lin, Wei; Faure, Michel; Chen, Yan; Wei, Wei; Li, Shuangjian; Wang, Qingchen

    2017-12-01

    This work first presents field structural analysis, anisotropy of magnetic susceptibility (AMS) measurements, and kinematic and microstructural studies on the Neoproterozoic Pengguan complex located in the middle segment of the Longmenshan thrust belt (LMTB), NE Tibet. These investigations indicate that the Pengguan complex is a heterogeneous unit with a ductilely deformed NW domain and an undeformed SE domain, rather than a single homogeneous body as previously thought. The NW part of the Pengguan complex is constrained by top-to-the-NW shearing along its NW boundary and top-to-the-SE shearing along its SE boundary, where it imbricates and overrides the SE domain. Two orogen-perpendicular gravity models not only support the imbricated shape of the Pengguan complex but also reveal an imbrication of high-density material hidden below the Paleozoic rocks on the west of the LMTB. Regionally, this suggests a basement-slice-imbricated structure that developed along the margin of the Yangtze Block, as shown by the regional gravity anomaly map, together with the published nearby seismic profile and the distribution of orogen-parallel Neoproterozoic complexes. Integrating the previously published ages of the NW normal faulting and of the SE directed thrusting, the locally fast exhumation rate, and the lithological characteristics of the sediments in the LMTB front, we interpret the basement-slice-imbricated structure as the result of southeastward thrusting of the basement slices during the Late Jurassic-Early Cretaceous. This architecture makes a significant contribution to the crustal thickening of the LMTB during the Mesozoic, and therefore, the Cenozoic thickening of the Longmenshan belt might be less important than often suggested.

  13. Climate sensitivity across marine domains of life: limits to evolutionary adaptation shape species interactions.

    PubMed

    Storch, Daniela; Menzel, Lena; Frickenhaus, Stephan; Pörtner, Hans-O

    2014-10-01

    Organisms in all domains, Archaea, Bacteria, and Eukarya will respond to climate change with differential vulnerabilities resulting in shifts in species distribution, coexistence, and interactions. The identification of unifying principles of organism functioning across all domains would facilitate a cause and effect understanding of such changes and their implications for ecosystem shifts. For example, the functional specialization of all organisms in limited temperature ranges leads us to ask for unifying functional reasons. Organisms also specialize in either anoxic or various oxygen ranges, with animals and plants depending on high oxygen levels. Here, we identify thermal ranges, heat limits of growth, and critically low (hypoxic) oxygen concentrations as proxies of tolerance in a meta-analysis of data available for marine organisms, with special reference to domain-specific limits. For an explanation of the patterns and differences observed, we define and quantify a proxy for organismic complexity across species from all domains. Rising complexity causes heat (and hypoxia) tolerances to decrease from Archaea to Bacteria to uni- and then multicellular Eukarya. Within and across domains, taxon-specific tolerance limits likely reflect ultimate evolutionary limits of their species to acclimatization and adaptation. We hypothesize that rising taxon-specific complexities in structure and function constrain organisms to narrower environmental ranges. Low complexity, as in Archaea and some Bacteria, provides life options in extreme environments. In the warmest oceans, temperature maxima reach and will surpass the permanent limits to the existence of multicellular animals, plants and unicellular phytoplankters. Smaller, less complex unicellular Eukarya, Bacteria, and Archaea will thus benefit and predominate even more in a future, warmer, and hypoxic ocean. © 2014 John Wiley & Sons Ltd.

  14. Identification of different geologic units using fuzzy constrained resistivity tomography

    NASA Astrophysics Data System (ADS)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure to improve the 2D resistivity image and geologic separation within the iterative minimization through inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning the highest probability value of each model cell to the cluster using the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared geologic units identified by fuzzy constrained resistivity tomography with geologic units interpreted from the borehole information.
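    The clustering step described above can be sketched generically. Below is the textbook fuzzy c-means algorithm in 1-D (not the authors' Matlab code; applying it to synthetic "resistivity" values and the quantile-based initialization are illustrative assumptions):

```python
import numpy as np

# Generic fuzzy c-means (FCM): each model cell gets a membership degree in
# every cluster; centers are fuzzy-weighted means of the data.
def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100):
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    # deterministic init: spread initial centers over the data quantiles
    centers = np.quantile(x, np.linspace(0.1, 0.9, n_clusters)).reshape(-1, 1)
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-12                 # (N, C) distances
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        w = u ** m
        centers = (w.T @ x) / w.sum(axis=0)[:, None]      # fuzzy-weighted means
    return centers.ravel(), u

# toy "resistivity" model cells drawn from two geologic units
rng = np.random.default_rng(0)
resistivity = np.concatenate([rng.normal(1.0, 0.1, 200), rng.normal(3.0, 0.1, 200)])
centers, u = fuzzy_c_means(resistivity, n_clusters=2)
# as in the abstract, each cell is assigned its highest-membership cluster
labels = u.argmax(axis=1)
```

In the paper's workflow this assignment would then feed back into the iterative inversion as a constraint; here it only illustrates the membership/center updates.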

  15. Free energy from molecular dynamics with multiple constraints

    NASA Astrophysics Data System (ADS)

    den Otter, W. K.; Briels, W. J.

    In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
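    The central relation (free energy derivative equals mean constraint force) can be illustrated numerically. The sketch below omits the metric-tensor correction the abstract discusses and uses a toy harmonic "mean force" in place of simulation output; all numbers are illustrative:

```python
import numpy as np

# Toy potential-of-mean-constraint-force idea: dA/dxi is the average force
# holding the reaction coordinate at xi; integrating the mean force over a
# grid of constrained values recovers the free energy profile A(xi).
k = 2.0                                    # toy "spring" stiffness
xi = np.linspace(-1.0, 1.0, 41)            # constrained reaction-coordinate values
mean_force = k * xi                        # mean constraint force at each point

# thermodynamic integration via the trapezoid rule
A = np.concatenate([[0.0],
                    np.cumsum(0.5 * (mean_force[1:] + mean_force[:-1]) * np.diff(xi))])
A -= A.min()                               # set the minimum of the profile to zero

# the recovered profile matches the analytic A(xi) = 0.5 * k * xi^2
assert np.allclose(A, 0.5 * k * xi**2, atol=1e-3)
```

In a real application `mean_force` would come from averaging the Lagrange-multiplier forces of constrained MD runs, with the metric-tensor correction applied before integrating.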

  16. THE RISE AND FALL OF THE STAR FORMATION HISTORIES OF BLUE GALAXIES AT REDSHIFTS 0.2 < z < 1.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pacifici, Camilla; Kassin, Susan A.; Gardner, Jonathan P.

    2013-01-01

    Popular cosmological scenarios predict that galaxies form hierarchically from the merger of many progenitors, each with their own unique star formation history (SFH). We use a sophisticated approach to constrain the SFHs of 4517 blue (presumably star-forming) galaxies with spectroscopic redshifts in the range 0.2 < z < 1.4 from the All-Wavelength Extended Groth Strip International Survey. This consists of the Bayesian analysis of the observed galaxy spectral energy distributions with a comprehensive library of synthetic spectra assembled using realistic, hierarchical star formation, and chemical enrichment histories from cosmological simulations. We constrain the SFH of each galaxy in our sample by comparing the observed fluxes in the B, R, I, and Ks bands and rest-frame optical emission-line luminosities with those of one million model spectral energy distributions. We explore the dependence of the resulting SFHs on galaxy stellar mass and redshift. We find that the average SFHs of high-mass galaxies rise and fall in a roughly symmetric bell-shaped manner, while those of low-mass galaxies rise progressively in time, consistent with the typically stronger activity of star formation in low-mass compared to high-mass galaxies. For galaxies of all masses, the star formation activity rises more rapidly at high than at low redshift. These findings imply that the standard approximation of exponentially declining SFHs widely used to interpret observed galaxy spectral energy distributions may not be appropriate to constrain the physical parameters of star-forming galaxies at intermediate redshifts.
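    The library-based Bayesian comparison described above can be sketched in a few lines. This is a minimal generic illustration, not the authors' pipeline: the library, fluxes, and the 'age' property attached to each model are all toy assumptions:

```python
import numpy as np

# Library-based Bayesian fitting sketch: weight each model SED by its
# likelihood exp(-chi2/2) against the observed fluxes; the weighted
# distribution over any model property approximates its posterior.
rng = np.random.default_rng(0)
n_models, n_bands = 100_000, 4                 # library size, photometric bands

lib_age = rng.uniform(0.1, 10.0, n_models)     # toy property attached to each model
lib_flux = np.outer(lib_age, np.ones(n_bands)) # toy library: flux scales with 'age'

obs_flux = 5.0 * np.ones(n_bands)              # "observed" fluxes (true age = 5)
obs_err = 0.5 * np.ones(n_bands)

chi2 = (((lib_flux - obs_flux) / obs_err) ** 2).sum(axis=1)
w = np.exp(-0.5 * (chi2 - chi2.min()))         # likelihood weights (shifted for stability)
w /= w.sum()

age_posterior_mean = np.sum(w * lib_age)       # posterior-weighted estimate
```

The same weights, applied to any other quantity stored with each library model (stellar mass, SFH shape), give its posterior summary in the same way.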

  17. Geologic Mapping of the Lunar South Pole, Quadrangle LQ-30: Volcanic History and Stratigraphy of Schroedinger Basin

    NASA Technical Reports Server (NTRS)

    Mest, S. C.; Berman, D. C.; Petro, N. E.

    2009-01-01

    In this study we use recent images and topographic data to map the geology and geomorphology of the lunar South Pole quadrangle (LQ-30) at 1:2.5M scale [1-4] in accordance with the Lunar Geologic Mapping Program. Mapping of LQ-30 began during Mest's postdoctoral appointment and has continued under the PG&G Program, from which funding became available in February 2009. Preliminary mapping and analyses have been done using base materials compiled by Mest, but properly mosaicked and spatially registered base materials are being compiled by the USGS and should be received by the end of June 2009. The overall objective of this research is to constrain the geologic evolution of the lunar South Pole (LQ-30: 60°-90°S, 0°-±180°) with specific emphasis on evaluation of a) the regional effects of basin formation on the structure and composition of the crust and b) the spatial distribution of ejecta, in particular resulting from formation of the South Pole-Aitken (SPA) basin and other large basins. Key scientific objectives include: 1) constraining the geologic history of the lunar South Pole and examining the spatial and temporal variability of geologic processes within the map area; 2) constraining the vertical and lateral structure of the lunar regolith and crust, assessing the distribution of impact-generated materials, and determining the timing and effects of major basin-forming impacts on crustal structure and stratigraphy in the map area; and 3) assessing the distribution of resources (e.g., H, Fe, Th) and their relationships with surface materials.

  18. Aerosol Size Distributions During ACE-Asia: Retrievals From Optical Thickness and Comparisons With In-situ Measurements

    NASA Astrophysics Data System (ADS)

    Kuzmanoski, M.; Box, M.; Box, G. P.; Schmidt, B.; Russell, P. B.; Redemann, J.; Livingston, J. M.; Wang, J.; Flagan, R. C.; Seinfeld, J. H.

    2002-12-01

    As part of the ACE-Asia experiment, conducted off the coast of China, Korea and Japan in spring 2001, measurements of aerosol physical, chemical and radiative characteristics were performed aboard the Twin Otter aircraft. Of particular importance for this paper were spectral measurements of aerosol optical thickness obtained at 13 discrete wavelengths, within the 354-1558 nm wavelength range, using the AATS-14 sunphotometer. Spectral aerosol optical thickness can be used to obtain information about the particle size distribution. In this paper, we use sunphotometer measurements to retrieve the size distribution of aerosols during ACE-Asia. We focus on four cases in which layers influenced by different air masses were identified. Aerosol optical thickness of each layer was inverted using two different techniques - constrained linear inversion and multimodal. In the constrained linear inversion algorithm no assumption about the mathematical form of the distribution to be retrieved is made. Conversely, the multimodal technique assumes that the aerosol size distribution is represented as a linear combination of a few lognormal modes with predefined values of mode radii and geometric standard deviations. Amplitudes of the modes are varied to obtain the best fit of the sum of optical thicknesses due to individual modes to the sunphotometer measurements. In this paper we compare the results of these two retrieval methods. In addition, we present comparisons of retrieved size distributions with in situ measurements taken using an aerodynamic particle sizer and differential mobility analyzer system aboard the Twin Otter aircraft.
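    Because the optical thickness is linear in the unknown mode amplitudes once the mode radii and widths are fixed, the multimodal retrieval reduces to a linear fit. The sketch below uses a toy stand-in for the per-mode extinction spectra (the real kernels would come from Mie theory); grid, radii and amplitudes are all illustrative:

```python
import numpy as np

# Multimodal retrieval sketch: spectral optical thickness as a linear
# combination of fixed lognormal modes, solved for the mode amplitudes.
wavelengths = np.linspace(0.354, 1.558, 13)        # AATS-14-like grid, microns
mode_radii = np.array([0.05, 0.2, 1.0])            # assumed lognormal mode radii

# toy per-mode spectra: coarser modes vary more weakly with wavelength
K = np.column_stack([wavelengths ** (-1.5 / (1.0 + 10 * r)) for r in mode_radii])

a_true = np.array([0.08, 0.15, 0.05])              # true mode amplitudes
tau_obs = K @ a_true                               # synthetic measured optical thickness

# linear least squares for the amplitudes (in practice a nonnegative
# solver would be used; this toy problem is well posed without it)
a_fit, *_ = np.linalg.lstsq(K, tau_obs, rcond=None)
assert np.allclose(a_fit, a_true, atol=1e-6)
```

With noisy measurements the same system would be solved with nonnegativity constraints on the amplitudes, since negative mode loadings are unphysical.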

  19. Constrained dipole oscillator strength distributions, sum rules, and dispersion coefficients for Br2 and BrCN

    NASA Astrophysics Data System (ADS)

    Kumar, Ashok; Thakkar, Ajit J.

    2017-03-01

    Dipole oscillator strength distributions for Br2 and BrCN are constructed from photoabsorption cross-sections combined with constraints provided by the Kuhn-Reiche-Thomas sum rule, the high-energy behavior of the dipole-oscillator-strength density and molar refractivity data when available. The distributions are used to predict dipole sum rules S(k), mean excitation energies I(k), and van der Waals C6 coefficients. Coupled-cluster calculations of the static dipole polarizabilities of Br2 and BrCN are reported for comparison with the values of S(-2) extracted from the distributions.
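    The quantities named here have simple definitions once a discrete oscillator strength distribution {(E_i, f_i)} is in hand. The toy distribution below is invented for illustration (a real DOSD would come from photoabsorption data); the S(k), I(k) and London-type C6 formulas are the standard ones, in atomic units:

```python
import numpy as np

# Toy dipole oscillator strength distribution (hypothetical values).
E = np.array([0.5, 1.0, 2.0])       # excitation energies (a.u.)
f = np.array([1.0, 2.0, 1.0])       # oscillator strengths; S(0) = 4 "electrons"

def S(k):
    # dipole sum rules S(k) = sum_i f_i * E_i^k; S(0) is fixed by the
    # Kuhn-Reiche-Thomas rule, S(-2) is the static dipole polarizability
    return np.sum(f * E**k)

def I(k):
    # mean excitation energies: ln I(k) = L(k)/S(k), L(k) = sum f E^k ln E
    return np.exp(np.sum(f * E**k * np.log(E)) / S(k))

# London-type C6 for two identical systems sharing this distribution:
# C6 = (3/2) * sum_ij f_i f_j / (E_i E_j (E_i + E_j))
C6 = 1.5 * sum(f[i] * f[j] / (E[i] * E[j] * (E[i] + E[j]))
               for i in range(len(E)) for j in range(len(E)))

alpha = S(-2)                       # static polarizability from the distribution
```

For this toy distribution S(0) = 4, alpha = S(-2) = 6.25 a.u., and the C6 double sum evaluates to about 19.3 a.u.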

  20. Constraining the double gluon distribution by the single gluon distribution

    DOE PAGES

    Golec-Biernat, Krzysztof; Lewandowska, Emilia; Serino, Mirko; ...

    2015-10-03

    We show how to consistently construct initial conditions for the QCD evolution equations for double parton distribution functions in the pure gluon case. We use the momentum sum rule for this purpose and a specific form of the known single gluon distribution function in the MSTW parameterization. The resulting double gluon distribution satisfies exactly the momentum sum rule and is parameter free. Furthermore, we study numerically its evolution with a hard scale and show the approximate factorization into a product of two single gluon distributions at small values of x, whereas at large values of x the factorization is always violated, in agreement with the sum rule.
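    For orientation, a commonly quoted form of the momentum sum rule for the pure-gluon double distribution is (normalization conventions vary between papers, so this is a sketch of the constraint rather than the authors' exact expression):

```latex
\int_0^{1-x_2} \mathrm{d}x_1 \, x_1 \, D(x_1, x_2; \mu) = (1 - x_2)\, g(x_2; \mu)
```

Here D is the double gluon distribution, g the single gluon distribution, and the factor (1 - x_2) accounts for the momentum already carried by the second gluon; exact factorization D = g(x_1) g(x_2) is incompatible with this constraint at large x.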

  1. Environmental niche divergence among three dune shrub sister species with parapatric distributions

    PubMed Central

    Chefaoui, Rosa M.; Correia, Otília; Bonal, Raúl; Hortal, Joaquín

    2017-01-01

    Background and Aims: The geographical distributions of species are constrained by their ecological requirements. The aim of this work was to analyse the effects of environmental conditions, historical events and biogeographical constraints on the diversification of the three species of the western Mediterranean shrub genus Stauracanthus, which have a parapatric distribution in the Iberian Peninsula. Methods: Ecological niche factor analysis and generalized linear models were used to measure the response of all Stauracanthus species to the environmental gradients and map their potential distributions in the Iberian Peninsula. The bioclimatic niche overlap between the three species was determined by using Schoener's index. The genetic differentiation of the Iberian and northern African populations of Stauracanthus species was characterized with GenAlEx. The effects on genetic distances of the most important environmental drivers were assessed through Mantel tests and non-metric multidimensional scaling. Key Results: The three Stauracanthus species show remarkably similar responses to climatic conditions. This supports the idea that all members of this recently diversified clade retain common adaptations to climate and consequently high levels of climatic niche overlap. This contrasts with the diverse edaphic requirements of Stauracanthus species. The populations of the S. genistoides–spectabilis clade grow on Miocene and Pliocene fine-textured sedimentary soils, whereas S. boivinii, the more genetically distant species, occurs on older and more coarse-textured sedimentary substrates. These patterns of diversification are largely consistent with a stochastic process of geographical range expansion and fragmentation coupled with niche evolution in the context of spatially complex environmental fluctuations. Conclusions: The combined analysis of the distribution, realized environmental niche and phylogeographical relationships of parapatric species proposed in this work allows integration of the biogeographical, ecological and evolutionary processes driving the evolution of species adaptations and how they determine their current geographical ranges. PMID:28334085

  2. Constrained coding for the deep-space optical channel

    NASA Technical Reports Server (NTRS)

    Moision, B.; Hamkins, J.

    2002-01-01

    In this paper, we demonstrate a class of low-complexity modulation codes satisfying the (d,k) constraint that offer throughput gains over M-PPM on the order of 10-15%, which translate into SNR gains of 0.4-0.6 dB.
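    The ultimate throughput of any code satisfying a (d,k) runlength constraint is bounded by the constraint's capacity, computable from the constraint graph. This is the standard Shannon construction, not the paper's code design; the (1,3) example is just a classic illustration:

```python
import numpy as np

# Capacity of a (d,k) runlength constraint (at least d and at most k zeros
# between ones): log2 of the largest eigenvalue of the constraint graph's
# adjacency matrix, in information bits per channel bit.
def dk_capacity(d, k):
    n = k + 1                       # states: number of zeros since the last one
    A = np.zeros((n, n))
    for s in range(n):
        if s < k:
            A[s, s + 1] = 1.0       # emit a 0 (run may grow)
        if s >= d:
            A[s, 0] = 1.0           # emit a 1 (run of length s is legal)
    return float(np.log2(np.abs(np.linalg.eigvals(A)).max()))

cap = dk_capacity(1, 3)             # classic (1,3) RLL constraint, about 0.55
```

Any practical (d,k) modulation code, including the low-complexity class described in the abstract, trades some of this capacity for encoder/decoder simplicity.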

  3. Emerging technological and cultural shifts advancing drylands research and management

    USDA-ARS?s Scientific Manuscript database

    Sustainable provisioning of ecosystem services in dryland landscapes is complicated by extreme conditions that constrain biological responses to perturbation, vast spatial and temporal complexity, and uncertainty regarding the resilience of these ecosystems to management practices and climate change...

  4. Public Input on Stream Monitoring in the Willamette Valley, Oregon

    EPA Science Inventory

    The goal of environmental monitoring is to track resource condition, and thereby support environmental knowledge and management. Judgments are inevitable during monitoring design regarding what resource features will be assessed. Constraining what to measure given a complex envir...

  5. Public input on stream monitoring in the Willamette Valley, Oregon - ACES

    EPA Science Inventory

    The goal of environmental monitoring is to track resource condition, and thereby support environmental knowledge and management. Judgments are inevitable during monitoring design regarding what resource features will be assessed. Constraining what to measure given a complex envir...

  6. What Do They Want to Know? Public Input on Stream Monitoring

    EPA Science Inventory

    The goal of environmental monitoring is to track resource condition, and thereby support environmental knowledge and management. Judgments are inevitable during monitoring design regarding what resource features will be assessed. Constraining what to measure in a complex environm...

  7. Understanding Local Structure Globally in Earth Science Remote Sensing Data Sets

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Fetzer, Eric

    2007-01-01

    Empirical probability distributions derived from the data are the signatures of physical processes generating the data. Distributions defined on different space-time windows can be compared, and differences or changes can be attributed to physical processes. This presentation discusses ways to reduce remote sensing data while preserving information, focusing on rate-distortion theory and the entropy-constrained vector quantization algorithm.
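    Entropy-constrained vector quantization can be sketched as a Lloyd-style iteration with a Lagrangian cost: each sample pays its distortion plus lambda times the codelength of its codeword. This is a generic 1-D sketch of the idea, not the presenter's implementation; the data and parameter values are illustrative:

```python
import numpy as np

# ECVQ sketch: assign each sample to the codeword minimizing
# distortion + lam * (-log2 p_j), where p_j is the codeword's usage
# probability. Larger lam buys a lower-entropy (more compressible) code
# at the price of higher distortion.
def ecvq(x, n_codes=4, lam=0.0, n_iter=50):
    x = np.asarray(x, float).reshape(-1, 1)
    codes = np.quantile(x, np.linspace(0.05, 0.95, n_codes)).reshape(-1, 1)
    p = np.full(n_codes, 1.0 / n_codes)
    counts = np.full(n_codes, len(x) // n_codes)
    for _ in range(n_iter):
        cost = (x - codes.T) ** 2 - lam * np.log2(p)   # (N, C) Lagrangian cost
        assign = cost.argmin(axis=1)
        for j in range(n_codes):
            if np.any(assign == j):
                codes[j] = x[assign == j].mean()       # centroid update
        counts = np.bincount(assign, minlength=n_codes)
        p = np.maximum(counts / len(x), 1e-12)
    q = counts / len(x)
    rate = -np.sum(q * np.log2(np.maximum(q, 1e-12)))  # entropy of code usage
    dist = float(np.mean((x - codes[assign]) ** 2))
    return rate, dist

rng = np.random.default_rng(0)
data = rng.normal(0.0, 1.0, 2000)
r0, d0 = ecvq(data, lam=0.0)        # plain Lloyd quantizer
r_hi, d_hi = ecvq(data, lam=50.0)   # heavily entropy-penalized
```

Sweeping lam traces out an operational rate-distortion curve for the data, which is the connection to rate-distortion theory made in the abstract.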

  8. Phenological plasticity will not help all species adapt to climate change.

    PubMed

    Duputié, Anne; Rutschmann, Alexis; Ronce, Ophélie; Chuine, Isabelle

    2015-08-01

    Concerns are rising about the capacity of species to adapt quickly enough to climate change. In long-lived organisms such as trees, genetic adaptation is slow, and how much phenotypic plasticity can help them cope with climate change remains largely unknown. Here, we assess whether, where and when phenological plasticity is and will be adaptive in three major European tree species. We use a process-based species distribution model, parameterized with extensive ecological data, and manipulate plasticity to suppress phenological variations due to interannual, geographical and trend climate variability, under current and projected climatic conditions. We show that phenological plasticity is not always adaptive and mostly affects fitness at the margins of the species' distribution and climatic niche. Under current climatic conditions, phenological plasticity constrains the northern range limit of oak and beech and the southern range limit of pine. Under future climatic conditions, phenological plasticity becomes strongly adaptive towards the trailing edges of beech and oak, but severely constrains the range and niche of pine. Our results call for caution when interpreting geographical variation in trait means as adaptive, and strongly point towards species distribution models explicitly taking phenotypic plasticity into account when forecasting species distribution under climate change scenarios. © 2015 John Wiley & Sons Ltd.

  9. Cosmological implications of primordial black holes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luis Bernal, José; Bellomo, Nicola; Raccanelli, Alvise

    The possibility that a relevant fraction of the dark matter might be comprised of Primordial Black Holes (PBHs) has been seriously reconsidered after LIGO's detection of a ~30 M⊙ binary black hole merger. Despite the strong interest in the model, there is a lack of studies on possible cosmological implications and effects on cosmological parameter inference. We investigate correlations with the other standard cosmological parameters using cosmic microwave background observations, finding significant degeneracies, especially with the tilt of the primordial power spectrum and the sound horizon at radiation drag. However, these degeneracies can be greatly reduced with the inclusion of small scale polarization data. We also explore whether PBHs as dark matter in simple extensions of the standard ΛCDM cosmological model induce extra degeneracies, especially between the additional parameters and the PBH ones. Finally, we present cosmic microwave background constraints on the fraction of dark matter in PBHs, not only for monochromatic PBH mass distributions but also for popular extended mass distributions. Our results show that extended mass distribution constraints are tighter, but also that a considerable amount of constraining power comes from the high-ℓ polarization data. Moreover, we constrain the shape of such mass distributions in terms of the corresponding constraints on the PBH mass fraction.

  10. Stress-Constrained Structural Topology Optimization with Design-Dependent Loads

    NASA Astrophysics Data System (ADS)

    Lee, Edmund

    Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure, with predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization, for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.

  11. Self-constrained inversion of potential fields

    NASA Astrophysics Data System (ADS)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential-field-based constraints (for example, the structural index and source boundaries) are usually enough to obtain substantial improvement in the density and magnetization models.

  12. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of a multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.
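    The projected-Hessian sampling idea can be sketched in a small linear-algebra example. This is an assumed form of the construction, not the paper's exact algorithm; the Hessian and constraint Jacobian below are toy values:

```python
import numpy as np

# Constrained Gaussian sampling sketch: project the Hessian onto the
# tangent space of the (linearized) constraints, then draw a multivariate
# Gaussian only along the unconstrained directions.
rng = np.random.default_rng(0)

H = np.diag([1.0, 2.0, 4.0])             # toy positive-definite Hessian
J = np.array([[1.0, 1.0, 1.0]])          # constraint Jacobian: one linear constraint

P = np.eye(3) - J.T @ np.linalg.solve(J @ J.T, J)   # projector onto tangent space
Hp = P @ H @ P                                       # projected Hessian

evals, evecs = np.linalg.eigh(Hp)
keep = evals > 1e-10                                 # discard the constrained direction
# covariance ~ inverse of the projected Hessian within the tangent space
L = evecs[:, keep] / np.sqrt(evals[keep])
samples = (L @ rng.standard_normal((keep.sum(), 5000))).T
```

Because every column of L lies in the constraint tangent space, each sampled displacement satisfies the linearized constraint J @ x = 0 to numerical precision.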

  13. Evaluating Predictive Uncertainty of Hyporheic Exchange Modelling

    NASA Astrophysics Data System (ADS)

    Chow, R.; Bennett, J.; Dugge, J.; Wöhling, T.; Nowak, W.

    2017-12-01

    Hyporheic exchange is the interaction of water between rivers and groundwater, and is difficult to predict. One of the largest contributions to predictive uncertainty for hyporheic fluxes has been attributed to the representation of heterogeneous subsurface properties. This research aims to evaluate which aspect of the subsurface representation - the spatial distribution of hydrofacies or the model for local-scale (within-facies) heterogeneity - most influences the predictive uncertainty. Also, we seek to identify data types that help reduce this uncertainty best. For this investigation, we conduct a modelling study of the Steinlach River meander, in Southwest Germany. The Steinlach River meander is an experimental site established in 2010 to monitor hyporheic exchange at the meander scale. We use HydroGeoSphere, a fully integrated surface water-groundwater model, to model hyporheic exchange and to assess the predictive uncertainty of hyporheic exchange transit times (HETT). A highly parameterized complex model is built and treated as 'virtual reality', which is in turn modelled with simpler subsurface parameterization schemes (Figure). Then, we conduct Monte-Carlo simulations with these models to estimate the predictive uncertainty. Results indicate that: Uncertainty in HETT is relatively small for early times and increases with transit times. Uncertainty from local-scale heterogeneity is negligible compared to uncertainty in the hydrofacies distribution. Introducing more data to a poor model structure may reduce predictive variance, but does not reduce predictive bias. Hydraulic head observations alone cannot constrain the uncertainty of HETT; however, an estimate of hyporheic exchange flux proves to be more effective at reducing this uncertainty. Figure: Approach for evaluating predictive model uncertainty. A conceptual model is first developed from the field investigations. A complex model ('virtual reality') is then developed based on that conceptual model. This complex model then serves as the basis to compare simpler model structures. Through this approach, predictive uncertainty can be quantified relative to a known reference solution.
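    The point that more data can shrink predictive variance without curing predictive bias is easy to illustrate with a toy ensemble. The numbers below are invented for illustration and have no connection to the Steinlach study:

```python
import numpy as np

# Toy bias-vs-variance illustration: conditioning a structurally wrong
# model on more data narrows the predictive ensemble (variance drops)
# without moving its center onto the true value (bias persists).
rng = np.random.default_rng(42)
truth = 10.0                                    # "virtual reality" transit time

# ensemble from a biased model structure, centered off the truth
prior_ensemble = rng.normal(12.0, 2.0, 10_000)

# conditioning on more data narrows the ensemble around the biased center
posterior_ensemble = rng.normal(12.0, 0.5, 10_000)

bias_prior = prior_ensemble.mean() - truth
bias_post = posterior_ensemble.mean() - truth
```

Here the posterior ensemble is far tighter than the prior one, yet both are biased by about the same +2 units, which is the behavior the abstract reports for poor model structures.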

  14. Parameterization of a complex landscape for a sediment routing model of the Le Sueur River, southern Minnesota

    NASA Astrophysics Data System (ADS)

    Belmont, P.; Viparelli, E.; Parker, G.; Lauer, W.; Jennings, C.; Gran, K.; Wilcock, P.; Melesse, A.

    2008-12-01

    Modeling sediment fluxes and pathways in complex landscapes is limited by our inability to accurately measure and integrate heterogeneous, spatially distributed sources into a single coherent, predictive geomorphic transport law. In this study, we partition the complex landscape of the Le Sueur River watershed into five distributed primary source types, bluffs (including strath terrace caps), ravines, streambanks, tributaries, and flat, agriculture-dominated uplands. The sediment contribution of each source is quantified independently and parameterized for use in a sand and mud routing model. Rigorous modeling of the evolution of this landscape and sediment flux from each source type requires consideration of substrate characteristics, heterogeneity, and spatial connectivity. The subsurface architecture of the Le Sueur drainage basin is defined by a layer cake sequence of fine-grained tills, interbedded with fluvioglacial sands. Nearly instantaneous baselevel fall of 65 m occurred at 11.5 ka, as a result of the catastrophic draining of glacial Lake Agassiz through the Minnesota River, to which the Le Sueur is a tributary. The major knickpoint that was generated from that event has propagated 40 km into the Le Sueur network, initiating an incised river valley with tall, retreating bluffs and actively incising ravines. Loading estimates constrained by river gaging records that bound the knick zone indicate that bluffs connected to the river are retreating at an average rate of less than 2 cm per year and ravines are incising at an average rate of less than 0.8 mm per year, consistent with the Holocene average incision rate on the main stem of the river of less than 0.6 mm per year. Ongoing work with cosmogenic nuclide sediment tracers, ground-based LiDAR, historic aerial photos, and field mapping will be combined to represent the diversity of erosional environments and processes in a single coherent routing model.

  15. Patterns of megafloral change across the Cretaceous-Tertiary boundary in the Northern Great Plains and Rocky Mountains

    NASA Technical Reports Server (NTRS)

    Johnson, Kirk R.; Hickey, Leo J.

    1988-01-01

The spatial and temporal distribution of vegetation in the terminal Cretaceous of Western Interior North America was a complex mosaic resulting from the interaction of factors including a shifting coastline, tectonic activity, a mild, possibly deteriorating climate, dinosaur herbivory, local facies effects, and a hypothesized bolide impact. In order to achieve sufficient resolution to analyze this vegetational pattern, over 100 megafloral collecting sites were established, yielding approximately 15,000 specimens, in Upper Cretaceous and lower Paleocene strata in the Williston, Powder River, and Bighorn basins in North Dakota, Montana, and Wyoming. These localities were integrated into a lithostratigraphic framework that is based on detailed local reference sections and constrained by vertebrate and palynomorph biostratigraphy, magnetostratigraphy, and sedimentary facies analysis. The goal of this research is a regional biostratigraphy, based on well-located and well-identified plant megafossils, that can be used to address patterns of floral evolution, ecology, and extinction. Results of the analyses are discussed.

  16. Assessing the role of mini-applications in predicting key performance characteristics of scientific and engineering applications

    DOE PAGES

    Barrett, R. F.; Crozier, P. S.; Doerfler, D. W.; ...

    2014-09-28

Computational science and engineering application programs are typically large, complex, and dynamic, and are often constrained by distribution limitations. As a means of making tractable rapid explorations of scientific and engineering application programs in the context of new, emerging, and future computing architectures, a suite of miniapps has been created to serve as proxies for full-scale applications. Each miniapp is designed to represent a key performance characteristic that does, or is expected to, significantly impact the runtime performance of an application program. In this paper we introduce a methodology for assessing the ability of these miniapps to effectively represent these performance issues. We applied this methodology to four miniapps, examining the linkage between them and an application they are intended to represent. Herein we evaluate the fidelity of that linkage. This work represents the initial steps required to begin to answer the question, "Under what conditions does a miniapp represent a key performance characteristic in a full app?"

  17. Bridging the gap between supernovae and their remnants through multi-dimensional hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Orlando, S.; Miceli, M.; Petruk, O.

    2017-02-01

    Supernova remnants (SNRs) are diffuse extended sources characterized by a complex morphology and a non-uniform distribution of ejecta. Such a morphology reflects pristine structures and features of the progenitor supernova (SN) and the early interaction of the SN blast wave with the inhomogeneous circumstellar medium (CSM). Deciphering the observations of SNRs might open the possibility to investigate the physical properties of both the interacting ejecta and the shocked CSM. This requires accurate numerical models which describe the evolution from the SN explosion to the remnant development and which connect the emission properties of the remnants to the progenitor SNe. Here we show how multi-dimensional SN-SNR hydrodynamic models have been very effective in deciphering observations of SNR Cassiopeia A and SN 1987A, thus unveiling the structure of ejecta in the immediate aftermath of the SN explosion and constraining the 3D pre-supernova structure and geometry of the environment surrounding the progenitor SN.

  18. Continued investigation of LDEF's structural frame and thermal blankets by the Meteoroid and Debris Special Investigation Group

    NASA Technical Reports Server (NTRS)

    See, Thomas H.; Mack, Kimberly S.; Warren, Jack L.; Zolensky, Michael E.; Zook, Herbert A.

    1993-01-01

    This report focuses on the data acquired by detailed examination of LDEF intercostals, 68 of which are now in possession of the Meteoroid and Debris Special Investigation Group (M&D SIG) at JSC. In addition, limited data will be presented for several small sections from the A0178 thermal control blankets that were examined/counted prior to being shipped to Principal Investigators (PI's) for scientific study. The data presented here are limited to measurements of crater and penetration-hole diameters and their frequency of occurrence which permits, yet also constrains, more model-dependent, interpretative efforts. Such efforts will focus on the conversion of crater and penetration-hole sizes to projectile diameters (and masses), on absolute particle fluxes, and on the distribution of particle-encounter velocities. These are all complex issues that presently cannot be pursued without making various assumptions which relate, in part, to crater-scaling relationships, and to assumed trajectories of natural and man-made particle populations in LEO that control the initial impact conditions.

  19. Species Radiation of Carabid Beetles (Broscini: Mecodema) in New Zealand

    PubMed Central

    Goldberg, Julia; Knapp, Michael; Emberson, Rowan M.; Townsend, J. Ian; Trewick, Steven A.

    2014-01-01

    New Zealand biodiversity has often been viewed as Gondwanan in origin and age, but it is increasingly apparent from molecular studies that diversification, and in many cases origination of lineages, postdate the break-up of Gondwanaland. Relatively few studies of New Zealand animal species radiations have as yet been reported, and here we consider the species-rich genus of carabid beetles, Mecodema. Constrained stratigraphic information (emergence of the Chatham Islands) and a substitution rate for Coleoptera were separately used to calibrate Bayesian relaxed molecular clock date estimates for diversification of Mecodema. The inferred timings indicate radiation of these beetles no earlier than the mid-Miocene with most divergences being younger, dating to the Plio-Pleistocene. A shallow age for the radiation along with a complex spatial distribution of these taxa involving many instances of sympatry implicates recent ecological speciation rather than a simplistic allopatric model. This emphasises the youthful and dynamic nature of New Zealand evolution that will be further elucidated with detailed ecological and population genetic analyses. PMID:24465949

  20. Modeling the Anomalous Microwave Emission with Spinning Nanoparticles: No PAHs Required

    NASA Astrophysics Data System (ADS)

    Hensley, Brandon S.; Draine, B. T.

    2017-02-01

    In light of recent observational results indicating an apparent lack of correlation between the anomalous microwave emission (AME) and mid-infrared emission from polycyclic aromatic hydrocarbons, we assess whether rotational emission from spinning silicate and/or iron nanoparticles could account for the observed AME without violating observational constraints on interstellar abundances, ultraviolet extinction, and infrared emission. By modifying the SpDust code to compute the rotational emission from these grains, we find that nanosilicate grains could account for the entirety of the observed AME, whereas iron grains could be responsible for only a fraction, even for extreme assumptions on the amount of interstellar iron concentrated in ultrasmall iron nanoparticles. Given the added complexity of contributions from multiple grain populations to the total spinning dust emission, as well as existing uncertainties due to the poorly constrained grain size, charge, and dipole moment distributions, we discuss generic, carrier-independent predictions of spinning dust theory and observational tests that could help identify the AME carrier(s).

  1. Integrated urban water cycle management: the UrbanCycle model.

    PubMed

    Hardy, M J; Kuczera, G; Coombes, P J

    2005-01-01

Integrated urban water cycle management presents a new framework in which solutions to the provision of urban water services can be sought. It enables new and innovative solutions, currently constrained by the existing urban water paradigm, to be implemented. This paper introduces the UrbanCycle model. The model is being developed in response to the growing and changing needs of the water management sector and in light of the need for tools to evaluate integrated water cycle management approaches. The key concepts underpinning the UrbanCycle model are the adoption of continuous simulation, hierarchical network modelling, and the careful management of computational complexity. The paper reports on the integration of modelling capabilities across the allotment and subdivision scales, enabling the interactions between these scales to be explored. A case study illustrates the impacts of various mitigation measures possible under an integrated water management framework. The temporal distribution of runoff into ephemeral streams from a residential allotment in Western Sydney is evaluated and linked to the geomorphic and ecological regimes in receiving waters.

  2. Gaussian process regression for sensor networks under localization uncertainty

    USGS Publications Warehouse

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

In this paper, we formulate Gaussian process regression for observations under localization uncertainty, as arises in resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. We propose approximating the analytically intractable posterior predictive statistics with two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly account for localization uncertainty. Finally, we have applied the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and from a temperature field of an outdoor swimming pool, to provide proof-of-concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
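The Monte Carlo route mentioned above can be illustrated with a minimal sketch (the kernel, noise level, location-error scale, and synthetic field below are illustrative assumptions, not values from the paper): sample plausible true sensor positions from the localization-error distribution, run a standard GP prediction for each draw, and average the predictions.

```python
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between 1-D location arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_obs, y_obs, x_star, noise=0.1):
    """Standard GP posterior predictive mean at x_star."""
    K = rbf(x_obs, x_obs) + noise**2 * np.eye(len(x_obs))
    k_star = rbf(x_star, x_obs)
    return k_star @ np.linalg.solve(K, y_obs)

def gp_predict_uncertain_loc(x_nominal, y_obs, x_star,
                             loc_sigma=0.2, n_mc=200, rng=None):
    """Monte Carlo approximation: average GP predictions over sampled
    candidate true sensor locations drawn from the localization-error
    distribution around the reported positions."""
    rng = np.random.default_rng(rng)
    preds = [gp_predict(x_nominal + loc_sigma * rng.standard_normal(len(x_nominal)),
                        y_obs, x_star)
             for _ in range(n_mc)]
    return np.mean(preds, axis=0)

x = np.linspace(0.0, 5.0, 10)   # reported (uncertain) sensor positions
y = np.sin(x)                   # synthetic scalar field samples
x_star = np.array([2.5])
mu = gp_predict_uncertain_loc(x, y, x_star, rng=0)
```

Averaging over location draws blends nearby observations more heavily than a fixed-location GP would, which is the qualitative effect the exact posterior predictive statistics capture.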

  3. Continuum-based DFN-consistent numerical framework for the simulation of oxygen infiltration into fractured crystalline rocks

    NASA Astrophysics Data System (ADS)

    Trinchero, Paolo; Puigdomenech, Ignasi; Molinero, Jorge; Ebrahimi, Hedieh; Gylling, Björn; Svensson, Urban; Bosbach, Dirk; Deissmann, Guido

    2017-05-01

We present an enhanced continuum-based approach for the modelling of groundwater flow coupled with reactive transport in crystalline fractured rocks. In the proposed formulation, flow, transport and geochemical properties are mapped onto a numerical grid using Discrete Fracture Network (DFN) derived parameters. The geochemical reactions are further constrained by field observations of mineral distribution. To illustrate how the approach can be used to include physical and geochemical complexities into reactive transport calculations, we have analysed the potential ingress of oxygenated glacial meltwater into a heterogeneous fractured rock, using the Forsmark site (Sweden) as an example. The results of high-performance reactive transport calculations show that, after a quick oxygen penetration, steady-state conditions are attained in which abiotic reactions (i.e. the dissolution of chlorite and the homogeneous oxidation of aqueous iron(II) ions) counterbalance advective oxygen fluxes. The results show that most of the chlorite becomes depleted in the highly conductive deformation zones, where higher mineral surface areas are available for reactions.

  4. Multidisciplinary Optimization Approach for Design and Operation of Constrained and Complex-shaped Space Systems

    NASA Astrophysics Data System (ADS)

    Lee, Dae Young

The design of a small satellite is challenging since it is constrained by mass, volume, and power. To mitigate these constraint effects, designers adopt deployable configurations on the spacecraft that result in an interesting and difficult optimization problem. The resulting optimization problem is challenging due to the computational complexity caused by the large number of design variables and the model complexity created by the deployables. Adding to these complexities, there is a lack of integration of the design optimization systems into operational optimization and the utility maximization of spacecraft in orbit. The developed methodology enables satellite Multidisciplinary Design Optimization (MDO) that is extendable to on-orbit operation. Optimization of on-orbit operations is possible with MDO since the model predictive controller developed in this dissertation guarantees the achievement of the on-ground design behavior in orbit. To enable the design optimization of highly constrained and complex-shaped space systems, the spherical coordinate analysis technique, called the "Attitude Sphere", is extended and merged with additional engineering tools such as OpenGL. OpenGL's graphics acceleration facilitates the accurate estimation of the shadow-degraded photovoltaic cell area. This technique is applied to the design optimization of the satellite Electric Power System (EPS), and the design result shows that the amount of photovoltaic power generation can be increased by more than 9%. Based on this initial methodology, the goal of this effort is extended from Single Discipline Optimization to Multidisciplinary Optimization, which includes the design and operation of the EPS, Attitude Determination and Control System (ADCS), and communication system. The geometry optimization satisfies the conditions of the ground development phase; however, the operation optimization may not be as successful as expected in orbit due to disturbances. To address this issue, for the ADCS operations, controllers based on Model Predictive Control that are effective for constraint handling were developed and implemented. All the suggested design and operation methodologies are applied to the mission CADRE, a space weather mission scheduled for operation in 2016. This application demonstrates the usefulness and capability of the methodology to enhance CADRE's capabilities, and its ability to be applied to a variety of missions.
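The constraint-handling idea behind Model Predictive Control can be shown with a deliberately naive receding-horizon sketch (the double-integrator dynamics, horizon length, cost weights, and torque bound below are hypothetical, not CADRE's controller): at each step, score only admissible inputs over a short simulated horizon and apply the best first input.

```python
import numpy as np

# Receding-horizon regulation of a double integrator (angle, rate)
# with a hard actuator bound -- constraints are respected by
# construction because only admissible inputs are ever evaluated.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # discrete dynamics, dt = 0.1 s
B = np.array([0.005, 0.1])               # input matrix
u_max = 0.5                              # torque constraint
candidates = np.linspace(-u_max, u_max, 21)

def horizon_cost(x0, u, n=10):
    """Quadratic cost of holding input u for n predicted steps."""
    x, cost = x0.copy(), 0.0
    for _ in range(n):
        x = A @ x + B * u
        cost += x @ x + 0.01 * u * u
    return cost

def mpc_step(x):
    """Pick the admissible input minimizing the predicted cost."""
    return min(candidates, key=lambda u: horizon_cost(x, u))

x = np.array([1.0, 0.0])                 # initial attitude error
for _ in range(200):
    u = mpc_step(x)
    x = A @ x + B * u                    # apply only the first input
```

A practical controller would solve a quadratic program over a full input sequence rather than grid-searching a constant input, but the receding-horizon structure and the built-in constraint satisfaction are the same.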

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.; Abbott, B.; Abdallah, J.

The electroweak production and subsequent decay of single top quarks is determined by the properties of the Wtb vertex. This vertex can be described by the complex parameters of an effective Lagrangian. An analysis of angular distributions of the decay products of single top quarks produced in the t-channel constrains these parameters simultaneously. The analysis described in this paper uses 4.6 fb⁻¹ of proton-proton collision data at √s = 7 TeV collected with the ATLAS detector at the LHC. Two parameters are measured simultaneously in this analysis. The fraction f1 of decays containing transversely polarised W bosons is measured to be 0.37 ± 0.07 (stat. ⊕ syst.). The phase δ- between amplitudes for transversely and longitudinally polarised W bosons recoiling against left-handed b-quarks is measured to be -0.014π ± 0.036π (stat. ⊕ syst.). The correlation in the measurement of these parameters is 0.15. These values result in two-dimensional limits at the 95% confidence level on the ratio of the complex coupling parameters gR and VL, yielding Re[gR/VL] ∈ [-0.36, 0.10] and Im[gR/VL] ∈ [-0.17, 0.23] with a correlation of 0.11. We find the results are in good agreement with the predictions of the Standard Model.

  6. Emergence of a snake-like structure in mobile distributed agents: an exploratory agent-based modeling approach.

    PubMed

    Niazi, Muaz A

    2014-01-01

The body structure of snakes is composed of numerous natural components, thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single component or a relatively small number of components. As a result, not only are these artificial structures constrained by the dimensions of their constituent components, but they often also require relatively computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimum of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as the resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications, such as complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems.
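A minimal sketch of the self-organizing idea (the spacing rule and all parameters below are illustrative assumptions, not the paper's algorithm): each follower agent knows only its immediate predecessor's position and closes up whenever its gap grows, so a snake-like chain emerges and trails the head wherever it wanders.

```python
import numpy as np

# Each follower sees only its immediate predecessor (minimal
# inter-agent communication) and moves just enough to restore a
# fixed gap, so a coherent chain emerges without global control.
N, GAP, STEP = 20, 1.0, 0.5
pos = np.zeros((N, 2))
pos[:, 0] = -np.arange(N) * GAP          # start in a straight line

for t in range(400):
    head_angle = 0.02 * t                # head wanders along a curve
    pos[0] += STEP * 0.1 * np.array([np.cos(head_angle),
                                     np.sin(head_angle)])
    for i in range(1, N):
        d = pos[i - 1] - pos[i]
        dist = np.linalg.norm(d)
        if dist > GAP:                   # close up only when too far back
            pos[i] += (dist - GAP) * d / dist

# after settling, consecutive agents sit roughly GAP apart
gaps = np.linalg.norm(np.diff(pos, axis=0), axis=1)
```

Because each agent's rule is purely local, removing or adding agents changes only the chain's length, which is one way the resilience described above can arise.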

  7. Emergence of a Snake-Like Structure in Mobile Distributed Agents: An Exploratory Agent-Based Modeling Approach

    PubMed Central

    Niazi, Muaz A.

    2014-01-01

The body structure of snakes is composed of numerous natural components, thereby making it resilient, flexible, adaptive, and dynamic. In contrast, current computer animations as well as physical implementations of snake-like autonomous structures are typically designed to use either a single component or a relatively small number of components. As a result, not only are these artificial structures constrained by the dimensions of their constituent components, but they often also require relatively computationally intensive algorithms to model and animate. Still, these animations often lack life-like resilience and adaptation. This paper presents a solution to the problem of modeling snake-like structures by proposing an agent-based, self-organizing algorithm resulting in an emergent and surprisingly resilient dynamic structure involving a minimum of interagent communication. Extensive simulation experiments demonstrate the effectiveness as well as the resilience of the proposed approach. The ideas originating from the proposed algorithm can not only be used for developing self-organizing animations but can also have practical applications, such as complex, autonomous, evolvable robots with self-organizing, mobile components with minimal individual computational capabilities. The work also demonstrates the utility of exploratory agent-based modeling (EABM) in the engineering of artificial life-like complex adaptive systems. PMID:24701135

  8. Low-complex energy-aware image communication in visual sensor networks

    NASA Astrophysics Data System (ADS)

    Phamila, Yesudhas Asnath Victy; Amutha, Ramachandran

    2013-10-01

A low-complexity, low-bit-rate, energy-efficient image compression algorithm is presented, explicitly designed for resource-constrained visual sensor networks used in surveillance, battlefield, and habitat-monitoring applications, where voluminous image data must be communicated over a bandwidth-limited wireless medium. The proposed method overcomes the energy limitation of individual nodes and is investigated in terms of image quality, entropy, processing time, overall energy consumption, and system lifetime. This algorithm is highly energy efficient and extremely fast since it applies an energy-aware zonal binary discrete cosine transform (DCT) that computes only the few required significant coefficients and codes them using an enhanced complementary Golomb-Rice code without using any floating-point operations. Experiments are performed using the Atmel ATmega128 and MSP430 processors to measure the resultant energy savings. Simulation results show that the proposed energy-aware fast zonal transform consumes only 0.3% of the energy needed by the conventional DCT. The algorithm consumes only 6% of the energy needed by the Independent JPEG Group (fast) version, and it is well suited to embedded systems requiring low power consumption. The proposed scheme is unique since it significantly enhances the lifetime of the camera sensor node and the network without any need for the distributed processing traditionally required in existing algorithms.
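The two ingredients named above, a zonal DCT that evaluates only a few low-frequency coefficients and Golomb-Rice entropy coding, can be sketched as follows (the 8x8 block size, 3x3 zone, and Rice parameter are illustrative assumptions; the paper's binary fixed-point variant is not reproduced here):

```python
import numpy as np

def zonal_dct(block, k=3):
    """Compute only the k x k low-frequency DCT-II coefficients of an
    8x8 block -- the 'zonal' shortcut that skips the other 55 terms."""
    n = 8
    x = np.arange(n)
    out = np.zeros((k, k))
    for u in range(k):
        for v in range(k):
            cu = np.sqrt(1 / n) if u == 0 else np.sqrt(2 / n)
            cv = np.sqrt(1 / n) if v == 0 else np.sqrt(2 / n)
            basis = np.outer(np.cos((2 * x + 1) * u * np.pi / (2 * n)),
                             np.cos((2 * x + 1) * v * np.pi / (2 * n)))
            out[u, v] = cu * cv * np.sum(block * basis)
    return out

def rice_encode(value, m=2):
    """Golomb-Rice code of a non-negative integer with divisor 2**m:
    unary quotient, a '0' terminator, then an m-bit remainder."""
    q, r = value >> m, value & ((1 << m) - 1)
    return "1" * q + "0" + format(r, f"0{m}b")

block = np.full((8, 8), 10.0)    # flat block: all energy in the DC term
coeffs = zonal_dct(block)
dc = int(round(coeffs[0, 0]))    # DCT-II DC term = 8 * mean = 80
bits = rice_encode(dc, m=4)
```

Computing 9 coefficients instead of 64 is where the bulk of the energy saving comes from; the Rice code then spends few bits on the small-magnitude values that dominate transform output.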

  9. Implementation of remote sensing data for flood forecasting

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Li, Y.; Pauwels, V. R. N.; Walker, J. P.; Wright, A. J.

    2016-12-01

Flooding is one of the most frequent and destructive natural disasters. A timely, accurate and reliable flood forecast can provide vital information for flood preparedness, warning delivery, and emergency response. An operational flood forecasting system typically consists of a hydrologic model, which simulates runoff generation and concentration, and a hydraulic model, which models riverine flood wave routing and floodplain inundation. However, these two types of models suffer from various sources of uncertainty, e.g., forcing data, initial conditions, model structure and parameters. To reduce those uncertainties, current forecasting systems are typically calibrated and/or updated using streamflow measurements, and such applications are limited to well-gauged areas. The recent increasing availability of spatially distributed Remote Sensing (RS) data offers new opportunities for flood event investigation and forecasting. Based on an Australian case study, this presentation will discuss the use of 1) RS soil moisture data to constrain a hydrologic model, and 2) RS-derived flood extent and levels to constrain a hydraulic model. The hydrological model is based on a semi-distributed system coupling the two-soil-layer rainfall-runoff model GRKAL with a linear Muskingum routing model. Model calibration was performed using either 1) streamflow data only or 2) both streamflow and RS soil moisture data. The model was then further constrained through the integration of real-time soil moisture data. The hydraulic model is based on LISFLOOD-FP, which solves the 2D inertial approximation of the Shallow Water Equations. Streamflow data and RS-derived flood extent and levels were used to apply a multi-objective calibration protocol. The effectiveness with which each data source, or combination of data sources, constrained the parameter space was quantified and discussed.

  10. Variable Cultural Acquisition Costs Constrain Cumulative Cultural Evolution

    PubMed Central

    Mesoudi, Alex

    2011-01-01

One of the hallmarks of the human species is our capacity for cumulative culture, in which beneficial knowledge and technology are accumulated over successive generations. Yet previous analyses of cumulative cultural change have failed to consider the possibility that as cultural complexity accumulates, it becomes increasingly costly for each new generation to acquire it from the previous generation. In principle this may impose an upper limit on the cultural complexity that can be accumulated, at which point accumulated knowledge is so costly and time-consuming to acquire that further innovation is not possible. In this paper I first review existing empirical analyses of the history of science and technology that support the possibility that cultural acquisition costs may constrain cumulative cultural evolution. I then present macroscopic and individual-based models of cumulative cultural evolution that explore the consequences of this assumption of variable cultural acquisition costs, showing that making acquisition costs vary with cultural complexity causes the latter to reach an upper limit above which no further innovation can occur. These models further explore the consequences of different cultural transmission rules (directly biased, indirectly biased and unbiased transmission), population size, and cultural innovations that themselves reduce innovation or acquisition costs. PMID:21479170
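The macroscopic argument can be captured in a few lines (the budget, cost, and innovation-rate constants are hypothetical, not the paper's parameterization): each generation spends part of a fixed learning budget acquiring the existing culture and innovates only with whatever time is left, so complexity converges to the level at which acquisition consumes the entire budget.

```python
# Macroscopic sketch: acquisition time grows with cultural complexity,
# so the leftover time available for innovation shrinks to zero and
# complexity plateaus at BUDGET / COST.
BUDGET = 100.0   # lifetime learning time per generation (arbitrary units)
COST = 0.9       # acquisition time per unit of cultural complexity
RATE = 0.5       # innovations produced per unit of leftover time

z = 1.0          # initial cultural complexity
history = [z]
for _ in range(500):
    leftover = max(0.0, BUDGET - COST * z)
    z += RATE * leftover
    history.append(z)

# fixed point: BUDGET - COST * z = 0  ->  z* = BUDGET / COST
```

With constant acquisition costs (COST = 0) the same recursion grows without bound, which is the contrast the paper draws.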

  11. Membrane microdomains and the cytoskeleton constrain AtHIR1 dynamics and facilitate the formation of an AtHIR1-associated immune complex.

    PubMed

    Lv, Xueqin; Jing, Yanping; Xiao, Jianwei; Zhang, Yongdeng; Zhu, Yingfang; Julian, Russell; Lin, Jinxing

    2017-04-01

    Arabidopsis hypersensitive-induced reaction (AtHIR) proteins function in plant innate immunity. However, the underlying mechanisms by which AtHIRs participate in plant immunity remain elusive. Here, using VA-TIRFM and FLIM-FRET, we revealed that AtHIR1 is present in membrane microdomains and co-localizes with the membrane microdomain marker REM1.3. Single-particle tracking analysis revealed that membrane microdomains and the cytoskeleton, especially microtubules, restrict the lateral mobility of AtHIR1 at the plasma membrane and facilitate its oligomerization. Furthermore, protein proximity index measurements, fluorescence cross-correlation spectroscopy, and biochemical experiments demonstrated that the formation of the AtHIR1 complex upon pathogen perception requires intact microdomains and cytoskeleton. Taken together, these findings suggest that microdomains and the cytoskeleton constrain AtHIR1 dynamics, promote AtHIR1 oligomerization, and increase the efficiency of the interactions of AtHIR1 with components of the AtHIR1 complex in response to pathogens, thus providing valuable insight into the mechanisms of defense-related responses in plants. © 2017 The Authors The Plant Journal © 2017 John Wiley & Sons Ltd.

  12. Bayesian multiple-source localization in an uncertain ocean environment.

    PubMed

    Dosso, Stan E; Wilmut, Michael J

    2011-06-01

    This paper considers simultaneous localization of multiple acoustic sources when properties of the ocean environment (water column and seabed) are poorly known. A Bayesian formulation is developed in which the environmental parameters, noise statistics, and locations and complex strengths (amplitudes and phases) of multiple sources are considered to be unknown random variables constrained by acoustic data and prior information. Two approaches are considered for estimating source parameters. Focalization maximizes the posterior probability density (PPD) over all parameters using adaptive hybrid optimization. Marginalization integrates the PPD using efficient Markov-chain Monte Carlo methods to produce joint marginal probability distributions for source ranges and depths, from which source locations are obtained. This approach also provides quantitative uncertainty analysis for all parameters, which can aid in understanding of the inverse problem and may be of practical interest (e.g., source-strength probability distributions). In both approaches, closed-form maximum-likelihood expressions for source strengths and noise variance at each frequency allow these parameters to be sampled implicitly, substantially reducing the dimensionality and difficulty of the inversion. Examples are presented of both approaches applied to single- and multi-frequency localization of multiple sources in an uncertain shallow-water environment, and a Monte Carlo performance evaluation study is carried out. © 2011 Acoustical Society of America
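The marginalization approach can be illustrated with a toy Metropolis sampler over one source's range and depth (the linear `forward` model below is a hypothetical stand-in for replica acoustic fields, and all priors, noise levels, and step sizes are illustrative assumptions): the chain's marginal statistics approximate the joint range-depth posterior described above.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rd = np.array([5.0, 0.5])          # true (range, depth)

def forward(rd):
    """Hypothetical stand-in for predicted fields at 8 receivers."""
    k = np.arange(1, 9)
    return 0.1 * k * rd[0] + ((-1.0) ** k) * rd[1]

data = forward(true_rd) + 0.05 * rng.standard_normal(8)

def log_post(rd):
    """Gaussian log-likelihood with uniform prior bounds."""
    if not (0.0 < rd[0] < 10.0 and 0.0 < rd[1] < 2.0):
        return -np.inf
    return -0.5 * np.sum((data - forward(rd)) ** 2) / 0.05 ** 2

rd = np.array([4.0, 0.4])               # starting guess
lp = log_post(rd)
chain = []
for _ in range(20000):
    prop = rd + rng.normal(0.0, [0.05, 0.02])
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
        rd, lp = prop, lp_prop
    chain.append(rd.copy())
post_mean = np.mean(chain[5000:], axis=0)     # discard burn-in
```

Histogramming `chain` gives the marginal range and depth distributions; the paper's implicit sampling of source strengths and noise variance via closed-form maximum-likelihood expressions is omitted here for brevity.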

  13. Visual control of foot placement when walking over complex terrain.

    PubMed

    Matthis, Jonathan S; Fajen, Brett R

    2014-02-01

    The aim of this study was to investigate the role of visual information in the control of walking over complex terrain with irregularly spaced obstacles. We developed an experimental paradigm to measure how far along the future path people need to see in order to maintain forward progress and avoid stepping on obstacles. Participants walked over an array of randomly distributed virtual obstacles that were projected onto the floor by an LCD projector while their movements were tracked by a full-body motion capture system. Walking behavior in a full-vision control condition was compared with behavior in a number of other visibility conditions in which obstacles did not appear until they fell within a window of visibility centered on the moving observer. Collisions with obstacles were more frequent and, for some participants, walking speed was slower when the visibility window constrained vision to less than two step lengths ahead. When window sizes were greater than two step lengths, the frequency of collisions and walking speed were weakly affected or unaffected. We conclude that visual information from at least two step lengths ahead is needed to guide foot placement when walking over complex terrain. When placed in the context of recent research on the biomechanics of walking, the findings suggest that two step lengths of visual information may be needed because it allows walkers to exploit the passive mechanical forces inherent to bipedal locomotion, thereby avoiding obstacles while maximizing energetic efficiency. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  14. Is Statistical Learning Constrained by Lower Level Perceptual Organization?

    PubMed Central

    Emberson, Lauren L.; Liu, Ran; Zevin, Jason D.

    2013-01-01

    In order for statistical information to aid in complex developmental processes such as language acquisition, learning from higher-order statistics (e.g. across successive syllables in a speech stream to support segmentation) must be possible while perceptual abilities (e.g. speech categorization) are still developing. The current study examines how perceptual organization interacts with statistical learning. Adult participants were presented with multiple exemplars from novel, complex sound categories designed to reflect some of the spectral complexity and variability of speech. These categories were organized into sequential pairs and presented such that higher-order statistics, defined based on sound categories, could support stream segmentation. Perceptual similarity judgments and multi-dimensional scaling revealed that participants only perceived three perceptual clusters of sounds and thus did not distinguish the four experimenter-defined categories, creating a tension between lower level perceptual organization and higher-order statistical information. We examined whether the resulting pattern of learning is more consistent with statistical learning being “bottom-up,” constrained by the lower levels of organization, or “top-down,” such that higher-order statistical information of the stimulus stream takes priority over the perceptual organization, and perhaps influences perceptual organization. We consistently find evidence that learning is constrained by perceptual organization. Moreover, participants generalize their learning to novel sounds that occupy a similar perceptual space, suggesting that statistical learning occurs based on regions of or clusters in perceptual space. Overall, these results reveal a constraint on learning of sound sequences, such that statistical information is determined based on lower level organization. These findings have important implications for the role of statistical learning in language acquisition. PMID:23618755

  15. Constraining particle size-dependent plume sedimentation from the 17 June 1996 eruption of Ruapehu Volcano, New Zealand, using geophysical inversions

    NASA Astrophysics Data System (ADS)

    Klawonn, M.; Frazer, L. N.; Wolfe, C. J.; Houghton, B. F.; Rosenberg, M. D.

    2014-03-01

    Weak subplinian-plinian plumes pose frequent hazards to populations and aviation, yet many key parameters of these particle-laden plumes are, to date, poorly constrained. This study recovers the particle size-dependent mass distribution along the trajectory of a well-constrained weak plume by inverting the dispersion process of tephra fallout. We use the example of the 17 June 1996 Ruapehu eruption in New Zealand and base our computations on mass per unit area tephra measurements and grain size distributions at 118 sample locations. Comparisons of particle fall times and times of sample collection, as well as observations during the eruption, reveal that particles smaller than 250 μm likely settled as aggregates. For simplicity we assume that all of these fine particles fell as aggregates of constant size and density, whereas we assume that large particles fell as individual particles at their terminal velocity. Mass fallout along the plume trajectory follows distinct trends between larger particles (d≥250 μm) and the fine population (d<250 μm) that are likely due to the two different settling behaviors (aggregate settling versus single-particle settling). In addition, we computed the resulting particle size distribution within the weak plume along its axis and find that the particle mode shifts from an initial 1φ mode to a 2.5φ mode 10 km from the vent and is dominated by a 2.5 to 3φ mode 10-180 km from the vent, where the plume reaches the coastline and we do not have further field constraints. The computed particle distributions inside the plume provide new constraints on the mass transport processes within weak plumes and improve previous models. The distinct decay trends between single-particle settling and aggregate settling may serve as a new tool to identify particle sizes that fell as aggregates for other eruptions.

  16. Constraining the processes modifying the surfaces of the classical Uranian satellites

    NASA Astrophysics Data System (ADS)

    Cartwright, Richard J.; Emery, Joshua P.

    2016-10-01

    Near-infrared (NIR) observations of the classical Uranian moons have detected relatively weak H2O ice bands, mixed with a spectrally red, low albedo constituent on the surfaces of their southern hemispheres (sub-observer lat. ~10 - 75°S). The H2O bands and the degree of spectral reddening are greatest on the leading hemispheres of these moons. CO2 ice bands have been detected in spectra collected over their trailing hemispheres, with stronger CO2 bands on the moons closest to Uranus. Our preferred hypotheses to explain the distribution of CO2, H2O, and dark material are: bombardment by magnetospherically-embedded charged particles, primarily on the trailing hemispheres of these moons, and bombardment by micrometeorites, primarily on their leading hemispheres. To test these complementary hypotheses, we are constraining the distribution and spectral characteristics of surface constituents on the currently observable northern hemispheres (sub-observer lat. ~20 - 35°N) to compare with existing southern hemisphere data. Analysis of northern hemisphere data shows that CO2 is present on their trailing hemispheres, and that H2O bands and the degree of spectral reddening are strongest on their leading hemispheres, in agreement with the southern hemisphere data. This longitudinal distribution of constituents supports our preferred hypotheses. However, tantalizing mysteries regarding the distribution of constituents remain. There has been no detection of CO2 on Miranda, and H2O bands are stronger on its trailing hemisphere. NIR slope measurements indicate that the northern hemisphere of Titania is redder than that of Oberon, unlike the spectral colors of their southern hemispheres. There are latitudinal variations in H2O band strengths on these moons, with stronger H2O bands at northern latitudes compared to southern latitudes on Umbriel and Titania. Several Miranda and Ariel spectra potentially include weak and unconfirmed NH3-hydrate bands, which could be tracers of cryovolcanic emplacement. We will present work related to our goals of constraining the processes modifying the surfaces of the classical Uranian moons.

  17. Process consistency in models: The importance of system signatures, expert knowledge, and process complexity

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J.; Savenije, H. H. G.; Gascuel-Odoux, C.

    2014-09-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus, ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study, the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by four calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce a suite of hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by "prior constraints," inferred from expert knowledge to ensure a model which behaves well with respect to the modeler's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model setup exhibited increased performance in the independent test period and skill to better reproduce all tested signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if counter-balanced by prior constraints, can significantly increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge-driven strategy of constraining models.

  18. CH3OCH3 in Orion-KL: a striking similarity with HCOOCH3

    NASA Astrophysics Data System (ADS)

    Brouillet, N.; Despois, D.; Baudry, A.; Peng, T.-C.; Favre, C.; Wootten, A.; Remijan, A. J.; Wilson, T. L.; Combes, F.; Wlodarczak, G.

    2013-02-01

    Context. Orion-KL is a remarkable, nearby star-forming region where a recent explosive event has generated shocks that could have released complex molecules from the grain mantles. Aims: A comparison of the distribution of the different complex molecules will help in understanding their formation and constraining the chemical models. Methods: We used several data sets from the Plateau de Bure Interferometer to map the dimethyl ether emission with different arcsec spatial resolutions and different energy levels (from Eup = 18 to 330 K) to compare with our previous methyl formate maps. Results: Our data show a remarkable similarity between the dimethyl ether (CH3OCH3) and the methyl formate (HCOOCH3) distributions even on a small scale (1.8″ × 0.8″ or ~500 AU). This long suspected similarity, seen from both observational and theoretical arguments, is demonstrated with unprecedented confidence, with a correlation coefficient between maps of ~0.8. Conclusions: A common precursor is the simplest explanation of our correlation. Comparisons with previous laboratory work and chemical models suggest the major role of grain surface chemistry and a recent release, probably with little processing, of mantle molecules by shocks. In this case the CH3O radical produced from methanol ice would be the common precursor (whereas ethanol, C2H5OH, is produced from the radical CH2OH). The alternative gas phase scheme, where protonated methanol CH3OH2+ is the common precursor to produce methyl formate and dimethyl ether through reactions with HCOOH and CH3OH, is also compatible with our data. Our observations do not yet allow a definitive choice between the different chemical processes, but the tight correlation between the distributions of HCOOCH3 and CH3OCH3 strongly contrasts with the different behavior we observe for the distributions of ethanol and formic acid. This provides a very significant constraint on models.
Based on observations carried out with the IRAM Plateau de Bure Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).

  19. Aiding the search: Examining individual differences in multiply-constrained problem solving.

    PubMed

    Ellis, Derek M; Brewer, Gene A

    2018-07-01

    Understanding and resolving complex problems is of vital importance in daily life. Problems can be defined by the limitations they place on the problem solver. Multiply-constrained problems are traditionally examined with the compound remote associates task (CRAT). Performance on the CRAT is partially dependent on an individual's working memory capacity (WMC). These findings suggest that executive processes are critical for problem solving and that there are reliable individual differences in multiply-constrained problem solving abilities. The goals of the current study are to replicate and further elucidate the relation between WMC and CRAT performance. To achieve these goals, we manipulated preexposure to CRAT solutions and measured WMC with complex-span tasks. In Experiment 1, we report evidence that preexposure to CRAT solutions improved problem solving accuracy, WMC was correlated with problem solving accuracy, and that WMC did not moderate the effect of preexposure on problem solving accuracy. In Experiment 2, we preexposed participants to correct and incorrect solutions. We replicated Experiment 1 and found that WMC moderates the effect of exposure to CRAT solutions such that high WMC participants benefit more from preexposure to correct solutions than low WMC (although low WMC participants have preexposure benefits as well). Broadly, these results are consistent with theories of working memory and problem solving that suggest a mediating role of attention control processes. Published by Elsevier Inc.

  20. Constrained surface controllers for three-dimensional image data reformatting.

    PubMed

    Graves, Martin J; Black, Richard T; Lomas, David J

    2009-07-01

    This study did not require ethical approval in the United Kingdom. The aim of this work was to create two controllers for navigating a two-dimensional image plane through a volumetric data set, providing two important features of the ultrasonographic paradigm: orientation matching of the navigation device and the desired image plane in the three-dimensional (3D) data and a constraining surface to provide a nonvisual reference for the image plane location in the 3D data. The first constrained surface controller (CSC) uses a planar constraining surface, while the second CSC uses a hemispheric constraining surface. Ten radiologists were asked to obtain specific image reformations by using both controllers and a commercially available medical imaging workstation. The time taken to perform each reformatting task was recorded. The users were also asked structured questions comparing the utility of both methods. There was a significant reduction in the time taken to perform the specified reformatting tasks by using the simpler planar controller as compared with a standard workstation, whereas there was no significant difference for the more complex hemispheric controller. The majority of users reported that both controllers allowed them to concentrate entirely on the reformatting task and the related image rather than being distracted by the need for interaction with the workstation interface. In conclusion, the CSCs provide an intuitive paradigm for interactive reformatting of volumetric data. (c) RSNA, 2009.

  1. Constraining the interior density profile of a Jovian planet from precision gravity field data

    NASA Astrophysics Data System (ADS)

    Movshovitz, Naor; Fortney, Jonathan J.; Helled, Ravit; Hubbard, William B.; Thorngren, Daniel; Mankovich, Chris; Wahl, Sean; Militzer, Burkhard; Durante, Daniele

    2017-10-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Planetary gravity fields are usually described by the coefficients in an expansion of the gravitational potential. Recently, high precision measurements of these coefficients for Jupiter and Saturn have been made by the radio science instruments on the Juno and Cassini spacecraft, respectively. The resulting coefficients come with an associated uncertainty, and while the task of matching a given density profile with a given set of gravity coefficients is relatively straightforward, the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on imperfect knowledge of the H/He equation of state and on the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet, constrained only by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We demonstrate this approach with a sample of Jupiter interior models based on recent Juno data and discuss prospects for Saturn.
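    The sampling-and-screening idea described above can be illustrated with a deliberately simplified 1-D sketch. Everything below is a hypothetical stand-in, not the authors' method or real Juno numbers: density curves are drawn from a one-parameter prior family, a single radial moment plays the role of a gravity coefficient, and curves consistent with the "measured" value within its uncertainty form a random sample from which empirical distributions of any functional of ρ can be read off.

```python
import numpy as np

rng = np.random.default_rng(0)

a = np.linspace(0.0, 1.0, 201)      # normalized level-surface radius (toy grid)
obs, sigma = 0.40, 0.02             # hypothetical measured moment and 1-sigma

def observable(rho):
    # Stand-in for a gravity coefficient: a normalized radial moment of rho.
    return (rho * a**4).sum() / (rho * a**2).sum()

accepted = []
for _ in range(5000):
    p = rng.uniform(0.0, 4.0)       # prior on profile steepness (illustrative)
    rho = (1.0 - a) ** p            # density decreasing outward
    if abs(observable(rho) - obs) < 2.0 * sigma:   # crude acceptance window
        accepted.append(rho)

accepted = np.array(accepted)
# The accepted curves are a random sample consistent with the "measurement".
# Any functional of rho now has an empirical distribution, e.g. the ratio of
# central density to mean density:
concentration = accepted[:, 0] / accepted.mean(axis=1)
print(len(accepted), concentration.mean())
```

    A real application would replace the toy moment with the theory-of-figures mapping from ρ(a) to the measured J2n coefficients, but the screening logic is the same.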

  2. Constraining Saturn's interior density profile from precision gravity field measurement obtained during Grand Finale

    NASA Astrophysics Data System (ADS)

    Movshovitz, N.; Fortney, J. J.; Helled, R.; Hubbard, W. B.; Mankovich, C.; Thorngren, D.; Wahl, S. M.; Militzer, B.; Durante, D.

    2017-12-01

    The external gravity field of a planetary body is determined by the distribution of mass in its interior. Therefore, a measurement of the external field, properly interpreted, tells us about the interior density profile, ρ(r), which in turn can be used to constrain the composition in the interior and thereby learn about the formation mechanism of the planet. Recently, very high precision measurements of the gravity coefficients for Saturn have been made by the radio science instrument on the Cassini spacecraft during its Grand Finale orbits. The resulting coefficients come with an associated uncertainty. The task of matching a given density profile to a given set of gravity coefficients is relatively straightforward, but the question of how best to account for the uncertainty is not. In essentially all prior work on matching models to gravity field data, inferences about planetary structure have rested on assumptions regarding the imperfectly known H/He equation of state and the assumption of an adiabatic interior. Here we wish to vastly expand the phase space of such calculations. We present a framework for describing all the possible interior density structures of a Jovian planet constrained by a given set of gravity coefficients and their associated uncertainties. Our approach is statistical. We produce a random sample of ρ(a) curves drawn from the underlying (and unknown) probability distribution of all curves, where ρ is the density on an interior level surface with equatorial radius a. Since the resulting set of density curves is a random sample, that is, curves appear with frequency proportional to the likelihood of their being consistent with the measured gravity, we can compute probability distributions for any quantity that is a function of ρ, such as central pressure, oblateness, core mass and radius, etc. Our approach is also Bayesian, in that it can utilize any prior assumptions about the planet's interior, as necessary, without being overly constrained by them. We apply this approach to produce a sample of Saturn interior models based on gravity data from the Grand Finale orbits and discuss their implications.

  3. Order-Constrained Reference Priors with Implications for Bayesian Isotonic Regression, Analysis of Covariance and Spatial Models

    NASA Astrophysics Data System (ADS)

    Gong, Maozhen

    Selecting an appropriate prior distribution is a fundamental issue in Bayesian Statistics. In this dissertation, under the framework provided by Berger and Bernardo, I derive the reference priors for several models which include: Analysis of Variance (ANOVA)/Analysis of Covariance (ANCOVA) models with a categorical variable under common ordering constraints, the conditionally autoregressive (CAR) models and the simultaneous autoregressive (SAR) models with a spatial autoregression parameter rho considered. The performances of reference priors for ANOVA/ANCOVA models are evaluated by simulation studies with comparisons to Jeffreys' prior and Least Squares Estimation (LSE). The priors are then illustrated in a Bayesian model of the "Risk of Type 2 Diabetes in New Mexico" data, where the relationship between the type 2 diabetes risk (through Hemoglobin A1c) and different smoking levels is investigated. In both simulation studies and real data set modeling, the reference priors that incorporate internal order information show good performances and can be used as default priors. The reference priors for the CAR and SAR models are also illustrated in the "1999 SAT State Average Verbal Scores" data with a comparison to a Uniform prior distribution. Due to the complexity of the reference priors for both CAR and SAR models, only a portion (12 states in the Midwest) of the original data set is considered. The reference priors can give a different marginal posterior distribution compared to a Uniform prior, which provides an alternative for prior specifications for areal data in Spatial statistics.

  4. Modes of failure of Osteonics constrained tripolar implants: a retrospective analysis of forty-three failed implants.

    PubMed

    Guyen, Olivier; Lewallen, David G; Cabanela, Miguel E

    2008-07-01

    The Osteonics constrained tripolar implant has been one of the most commonly used options to manage recurrent instability after total hip arthroplasty. Mechanical failures were expected and have been reported. The purpose of this retrospective review was to identify the observed modes of failure of this device. Forty-three failed Osteonics constrained tripolar implants were revised at our institution between September 1997 and April 2005. All revisions related to the constrained acetabular component only were considered as failures. All of the devices had been inserted for recurrent or intraoperative instability during revision procedures. Seven different methods of implantation were used. Operative reports and radiographs were reviewed to identify the modes of failure. The average time to failure of the forty-three implants was 28.4 months. A total of five modes of failure were observed: failure at the bone-implant interface (type I), which occurred in eleven hips; failure at the mechanisms holding the constrained liner to the metal shell (type II), in six hips; failure of the retaining mechanism of the bipolar component (type III), in ten hips; dislocation of the prosthetic head at the inner bearing of the bipolar component (type IV), in three hips; and infection (type V), in twelve hips. The mode of failure remained unknown in one hip that had been revised at another institution. The Osteonics constrained tripolar total hip arthroplasty implant is a complex device involving many parts. We showed that failure of this device can occur at most of its interfaces. It would therefore appear logical to limit its application to salvage situations.

  5. Constrained approximation of effective generators for multiscale stochastic reaction networks and application to conditioned path sampling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cotter, Simon L., E-mail: simon.cotter@manchester.ac.uk

    2016-10-15

    Efficient analysis and simulation of multiscale stochastic systems of chemical kinetics is an ongoing area for research, and is the source of many theoretical and computational challenges. In this paper, we present a significant improvement to the constrained approach, which is a method for computing effective dynamics of slowly changing quantities in these systems, but which does not rely on the quasi-steady-state assumption (QSSA). The QSSA can cause errors in the estimation of effective dynamics for systems where the difference in timescales between the “fast” and “slow” variables is not so pronounced. This new application of the constrained approach allows us to compute the effective generator of the slow variables, without the need for expensive stochastic simulations. This is achieved by finding the null space of the generator of the constrained system. For complex systems where this is not possible, or where the constrained subsystem is itself multiscale, the constrained approach can then be applied iteratively. This results in breaking the problem down into finding the solutions to many small eigenvalue problems, which can be efficiently solved using standard methods. Since this methodology does not rely on the quasi-steady-state assumption, the effective dynamics that are approximated are highly accurate, and in the case of systems with only monomolecular reactions, are exact. We will demonstrate this with some numerics, and also use the effective generators to sample paths of the slow variables which are conditioned on their endpoints, a task which would be computationally intractable for the generator of the full system.
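    The linear-algebra step the method builds on, finding the null space of a generator, can be sketched on a toy continuous-time Markov chain. The 3-state generator below is purely illustrative, not a real reaction network or the paper's constrained system: the stationary distribution is the suitably normalized null vector of the transpose of the generator, obtained here from a small eigenvalue problem.

```python
import numpy as np

# Toy generator Q of a 3-state continuous-time Markov chain: off-diagonal
# entries are transition rates, rows sum to zero (illustrative numbers).
Q = np.array([[-2.0,  2.0,  0.0],
              [ 1.0, -3.0,  2.0],
              [ 0.0,  4.0, -4.0]])

# Null space of Q^T: the eigenvector whose eigenvalue is (numerically) zero.
w, v = np.linalg.eig(Q.T)
pi = np.real(v[:, np.argmin(np.abs(w))])
pi /= pi.sum()                        # normalize to a probability vector

assert np.allclose(pi @ Q, 0.0, atol=1e-10)   # stationarity: pi Q = 0
print(pi)                             # -> approximately [0.25 0.5 0.25]
```

    In the constrained approach proper, many such small eigenvalue problems (one per fixed value of the slow variables) would be solved and assembled into the effective generator of the slow dynamics.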

  6. A Fast Variational Approach for Learning Markov Random Field Language Models

    DTIC Science & Technology

    2015-01-01

    the same distribution as n-gram models, but utilize a non-linear neural network parameterization. NLMs have been shown to produce competitive... to either resort to local optimization methods, such as those used in neural language models, or work with heavily constrained distributions. In... embeddings learned through neural language models. Central to the language modelling problem is the challenge... Proceedings of the 32nd International

  7. Emergence of Fundamental Limits in Spatially Distributed Dynamical Networks and Their Tradeoffs

    DTIC Science & Technology

    2017-05-01

    It is shown that the resulting non-convex optimization problem can be equivalently reformulated into a rank-constrained problem. We then... robustness in distributed control and dynamical systems. Our research results are highly relevant for analysis and synthesis of engineered and natural

  8. Irregularities and Forecast Studies of Equatorial Spread

    DTIC Science & Technology

    2016-07-13

    less certain and requires investigation. It should be possible to observe the Faraday rotation of the signals received at Jicamarca. This is another... indication of the line-integrated electron number density. Like the phase delay, the Faraday angle is a modulo-two-pi quantity that is best used to constrain the time evolution of the ionosphere. Both the Faraday angle and the phase delay are

  9. Signatures of Pacific-type orogeny in Lleyn and Anglesey areas, northwest Wales

    NASA Astrophysics Data System (ADS)

    Asanuma, H.; Okada, Y.; Sawaki, Y.; Yamamoto, S.; Hirata, T.; Maruyama, S.

    2014-12-01

    Orogeny is a fundamental process of plate tectonics, and its record is useful for understanding ancient plate motion. The geotectonic history of the British Isles has been explained by collision-type orogeny accompanying the closure of the Iapetus Ocean. High-pressure metamorphic rocks such as blueschist and eclogite, which characterize Pacific-type orogeny, occur in some places, but have not attracted much interest because of their small extent. Subduction-related (Pacific-type) orogeny is characterized by the contemporaneous formation of a batholith belt, a regional metamorphic belt (high P/T type) and an accretionary complex. Late Proterozoic-Cambrian (677-498 Ma) calc-alkaline volcano-plutonic complexes crop out in the Lleyn and Anglesey areas, northwest Wales. The metamorphic age of the high-P/T metamorphic belt in eastern Anglesey is constrained by an Ar-Ar isochron age of 560-550 Ma. However, the depositional age of the rocks composing the accretionary complex was not fully constrained, owing to limited zircon U-Pb age data and vague microfossil records. The Monian Supergroup in the Lleyn and Anglesey areas includes three groups: the South Stack Group (Gp), the New Harbour Gp and the Gwna Gp. The Gwna Gp is located at the structural top and includes typical rocks of an ocean plate stratigraphy (OPS), the fundamental unit of an accretionary complex. We produced a detailed geological map and reconstructed the OPSs at several localities with careful attention to layer-parallel thrusts. In order to constrain the sedimentary age of each OPS, we collected sandstones from individual OPSs and determined U-Pb ages of their detrital zircons with LA-ICP-MS at Kyoto University, adopting the youngest detrital zircon age as a constraint on sedimentary age. The results indicate that sediments in the Gwna Gp were deposited from 623 ± 17 Ma to 535 ± 14 Ma, contemporaneous with the ages of both the batholith belt and the regional metamorphic belt. In addition, it became evident that the structurally upper levels are older than the lower levels. This structurally downward-younging polarity is one of the characteristics of an accretionary complex. We therefore conclude that the accretionary complex in northwestern Wales formed between 623 ± 17 Ma and 535 ± 14 Ma, and that subduction-related Pacific-type orogeny formed part of the British Isles.

  10. Language Program Evaluation

    ERIC Educational Resources Information Center

    Norris, John M.

    2016-01-01

    Language program evaluation is a pragmatic mode of inquiry that illuminates the complex nature of language-related interventions of various kinds, the factors that foster or constrain them, and the consequences that ensue. Program evaluation enables a variety of evidence-based decisions and actions, from designing programs and implementing…

  11. A Thermal-based Two-Source Energy Balance Model for Estimating Evapotranspiration over Complex Canopies

    USDA-ARS?s Scientific Manuscript database

    Land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation describes a robust but relatively simple LS...

  12. THE RESPONSE OF ANIMAL RANGE TO CLIMATIC AND LAND-USE CHANGES

    EPA Science Inventory

    The geographic ranges of animal taxa seem much more complex than those of plants, since mobile animals may be constrained by many factors other than readily measurable climatic conditions. These additional factors may include microclimate and availability of particular plant type...

  13. Segmentation in cohesive systems constrained by elastic environments

    NASA Astrophysics Data System (ADS)

    Novak, I.; Truskinovsky, L.

    2017-04-01

    The complexity of fracture-induced segmentation in elastically constrained cohesive (fragile) systems originates from the presence of competing interactions. The role of discreteness in such phenomena is of interest in a variety of fields, from hierarchical self-assembly to developmental morphogenesis. In this paper, we study the analytically solvable example of segmentation in a breakable mass-spring chain elastically linked to a deformable lattice structure. We explicitly construct the complete set of local minima of the energy in this prototypical problem and identify among them the states corresponding to the global energy minima. We show that, even in the continuum limit, the dependence of the segmentation topology on the stretching/pre-stress parameter in this problem takes the form of a devil's type staircase. The peculiar nature of this staircase, characterized by locking in rational microstructures, is of particular importance for biological applications, where its structure may serve as an explanation of the robustness of stress-driven segmentation. This article is part of the themed issue 'Patterning through instabilities in complex media: theory and applications.'
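    The search over segmentation topologies described above can be illustrated with a discrete toy analogue. The segment-energy function and break cost below are hypothetical, not the paper's model: each segment of length l stores an elastic energy e(l) that grows superlinearly (a long segment constrained by the foundation stores more energy), every break costs a fixed fracture energy, and dynamic programming over break positions finds the global minimum among all 2^(N-1) candidate segmentations of an N-bond chain.

```python
def optimal_segmentation(n, seg_energy, break_cost):
    # best[i] = minimal energy of the first i bonds; cut[i] remembers where
    # the last segment starts, so the optimal partition can be recovered.
    best = [0.0] + [float("inf")] * n
    cut = [0] * (n + 1)
    for i in range(1, n + 1):
        for j in range(i):
            e = best[j] + seg_energy(i - j) + (break_cost if j > 0 else 0.0)
            if e < best[i]:
                best[i], cut[i] = e, j
    # Walk the cut positions backwards to read off the segment lengths.
    parts, i = [], n
    while i > 0:
        parts.append(i - cut[i])
        i = cut[i]
    return best[n], parts[::-1]

# Toy energies: quadratic segment energy and a fixed fracture cost, so that
# breaking becomes favourable beyond a critical segment length.
energy, parts = optimal_segmentation(12, lambda l: 0.5 * l**2, break_cost=4.0)
print(energy, parts)    # -> 30.0 [3, 3, 3, 3]
```

    The competition between the quadratic segment energy and the fixed break cost selects a preferred segment length, a discrete echo of the staircase-like dependence of segmentation topology on the loading parameter.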

  14. Compton Reflection in AGN with Simbol-X

    NASA Astrophysics Data System (ADS)

    Beckmann, V.; Courvoisier, T. J.-L.; Gehrels, N.; Lubiński, P.; Malzac, J.; Petrucci, P. O.; Shrader, C. R.; Soldi, S.

    2009-05-01

    AGN exhibit complex hard X-ray spectra. Our current understanding is that the emission is dominated by inverse Compton processes which take place in the corona above the accretion disk, and that absorption and reflection in a distant absorber play a major role. These processes can be directly observed through the shape of the continuum, the Compton reflection hump around 30 keV, and the iron fluorescence line at 6.4 keV. We demonstrate the capabilities of Simbol-X to constrain complex models for cases like MCG-05-23-016, NGC 4151, NGC 2110, and NGC 4051 in short (10 ksec) observations. We compare the simulations with recent observations of these sources by INTEGRAL, Swift and Suzaku. Constraining reflection models for AGN with Simbol-X will help us to get a clear view of the processes and geometry near the central engine in AGN, and will give insight into which sources are responsible for the Cosmic X-ray background at energies >20 keV.

  15. Characterization of new functionalized calcium carbonate-polycaprolactone composite material for application in geometry-constrained drug release formulation development.

    PubMed

    Wagner-Hattler, Leonie; Schoelkopf, Joachim; Huwyler, Jörg; Puchkov, Maxim

    2017-10-01

    The performance of a new mineral-polymer composite (FCC-PCL) was assessed for producing complex geometries to aid the development of controlled-release tablet formulations. The mechanical characteristics of the developed material, such as compactibility, compressibility and elastoplastic deformation, were measured. The results, and a comparative analysis against other common excipients, suggest efficient formation of complex, stable and impermeable geometries for constrained drug-release modification under compression. The performance of the proposed composite material was tested by compacting it into a geometrically altered tablet (Tablet-In-Cup, TIC), and the drug release was compared to a commercially available product. The TIC device exhibited a uniform surface, showed high physical stability, and showed an absence of friability. The FCC-PCL composite had good binding properties and good compactibility. It was possible to reveal an enhanced plasticity characteristic of the new material which was not present in the individual components. The presented FCC-PCL composite mixture has the potential to become a successful tool for formulating controlled-release solid dosage forms.

  16. Fabrication of (PPC/NCC)/PVA composites with inner-outer double constrained structure and improved glass transition temperature.

    PubMed

    Cui, Shaoying; Li, Li; Wang, Qi

    2018-07-01

    Improving the glass transition temperature (Tg) and mechanical properties of the environmentally friendly poly(propylene carbonate) (PPC) via intermacromolecular complexation through hydrogen bonding is attractive and of great importance. This work reports a novel and effective strategy to prepare (polypropylene carbonate/nanocrystalline cellulose)/polyvinyl alcohol ((PPC/NCC)/PVA) composites with an inner-outer double constrained structure. Outside the PPC phase, PVA acts as a strong skeleton at the microscale, constraining the movement of PPC molecular chains by forming hydrogen bonds with PPC at the interface of the PPC and PVA phases; inside the PPC phase, rod-like NCC restrains the flexible PPC molecular chains at the nanoscale by forming multiple hydrogen bonds with PPC. Under the synergistic effect of this novel inner-outer double constrained structure, the Tg, mechanical properties and thermal stability of the (PPC/NCC)/PVA composite were significantly increased; e.g. the Tg of the composite reached a maximum of 49.6 °C, which is 15.6 °C, 5.7 °C and 4.2 °C higher than that of PPC, the PPC/NCC composite and the PPC/PVA composite, respectively. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Measurement of 240Pu Angular Momentum Dependent Fission Probabilities Using the (α ,α') Reaction

    NASA Astrophysics Data System (ADS)

    Koglin, Johnathon; Burke, Jason; Fisher, Scott; Jovanovic, Igor

    2017-09-01

    The surrogate reaction method often lacks the theoretical framework and the experimental data needed to constrain models, especially when rectifying differences in angular momentum states between the desired and surrogate reactions. In this work, dual arrays of silicon-telescope particle-identification detectors and photovoltaic (solar) cell fission-fragment detectors were used to measure the fission probability of the 240Pu(α,α'f) reaction, a surrogate for 239Pu(n,f), together with fission fragment angular distributions. Fission probability measurements were performed at a beam energy of 35.9(2) MeV at eleven scattering angles from 40° to 140° in 10° intervals and at nuclear excitation energies up to 16 MeV. Fission fragment angular distributions were measured in six bins from 4.5 MeV to 8.0 MeV and fit to the distributions expected from the vibrational and rotational excitations at the saddle point. In this way, the contributions to the total fission probability from specific states of K, the angular momentum projection on the symmetry axis, are extracted. A sizable data collection is presented for use in constraining microscopic cross section calculations.

  18. Generating constrained randomized sequences: item frequency matters.

    PubMed

    French, Robert M; Perruchet, Pierre

    2009-11-01

    All experimental psychologists understand the importance of randomizing lists of items. However, randomization is generally constrained, and these constraints (in particular, not allowing immediately repeated items), while designed to eliminate particular biases, frequently engender others. We describe a simple Monte Carlo randomization technique that solves a number of these problems. However, in many experimental settings we are concerned not only with the number and distribution of items but also with the number and distribution of transitions between items, over which the algorithm above provides no control. We therefore introduce a simple technique that uses transition tables to generate correctly randomized sequences. We present an analytic method of producing item-pair frequency tables and item-pair transitional probability tables when immediate repetitions are not allowed. We illustrate these difficulties, and how to overcome them, with reference to a classic article on word segmentation in infants. Finally, we provide an Excel file that allows users to generate transition tables with up to 10 different item types, as well as appropriately distributed randomized sequences of any length without immediately repeated elements. This file is freely available from http://leadserv.u-bourgogne.fr/IMG/xls/TransitionMatrix.xls.
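    The core idea can be sketched in a few lines of Python (a hypothetical illustration of the general technique, not the authors' Excel implementation): generate a randomized sequence with no immediately repeated items and tabulate the item-pair transitions that result.

```python
import random
from collections import Counter

def constrained_sequence(items, length, seed=0):
    """Randomized sequence with no immediately repeated items."""
    rng = random.Random(seed)
    seq = [rng.choice(items)]
    while len(seq) < length:
        # resample uniformly among all items except the previous one
        seq.append(rng.choice([x for x in items if x != seq[-1]]))
    return seq

def transition_table(seq):
    """Observed counts of each item-pair transition."""
    return Counter(zip(seq, seq[1:]))
```

    Inspecting the table for such sequences makes the bias visible: because self-transitions are forbidden, the remaining transitions are not equiprobable in the way an unconstrained randomization would make them.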

  19. Metallic artifact mitigation and organ-constrained tissue assignment for Monte Carlo calculations of permanent implant lung brachytherapy.

    PubMed

    Sutherland, J G H; Miksys, N; Furutani, K M; Thomson, R M

    2014-01-01

    To investigate methods of generating accurate patient-specific computational phantoms for Monte Carlo calculation of lung brachytherapy patient dose distributions, four metallic artifact mitigation methods are applied to six lung brachytherapy patient computed tomography (CT) images: simple threshold replacement (STR) identifies high CT values in the vicinity of the seeds and replaces them with estimated true values; fan beam virtual sinogram replaces artifact-affected values in a virtual sinogram and performs a filtered back-projection to generate a corrected image; 3D median filter replaces voxel values that differ from the median value in a region of interest surrounding the voxel and then applies a second filter to reduce noise; and a combination of fan beam virtual sinogram and STR. Computational phantoms are generated from artifact-corrected and uncorrected images using several tissue assignment schemes: both lung-contour-constrained and unconstrained global schemes are considered. Voxel mass densities are assigned based on voxel CT number or using nominal tissue mass densities. Dose distributions are calculated using the EGSnrc user-code BrachyDose for (125)I, (103)Pd, and (131)Cs seeds and are compared directly as well as through dose volume histograms and dose metrics for target volumes surrounding surgical sutures. Metallic artifact mitigation techniques vary in their ability to reduce artifacts while preserving tissue detail. Notably, images corrected with the fan beam virtual sinogram have reduced artifacts, but residual artifacts near sources remain, requiring the additional use of STR; the 3D median filter removes artifacts but simultaneously removes detail in lung and bone. Doses vary considerably between computational phantoms, with the largest differences arising from artifact-affected voxels assigned to bone in the vicinity of the seeds. Consequently, when metallic artifact reduction and constrained tissue assignment within lung contours are employed in generated phantoms, this erroneous assignment is reduced, generally resulting in higher doses. Lung-constrained tissue assignment also increases doses in regions of interest by reducing the erroneous assignment of adipose to voxels within lung contours. Differences in dose metrics calculated for different computational phantoms are sensitive to radionuclide photon spectra, with the largest differences for (103)Pd seeds and smaller, but still considerable, differences for (131)Cs seeds. Despite producing different CT images, the STR, fan beam + STR, and 3D median filter techniques produce similar dose metrics. Results suggest that the accuracy of dose distributions for permanent implant lung brachytherapy is improved by applying lung-constrained tissue assignment schemes to metallic-artifact-corrected images.
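    The STR step can be sketched as follows (a minimal, hypothetical illustration; the HU threshold and soft-tissue replacement value are placeholder assumptions, not values from the study):

```python
import numpy as np

def simple_threshold_replacement(ct, seed_mask, hu_max=2000.0,
                                 replacement_hu=40.0):
    """Sketch of the STR idea: CT numbers above hu_max within the
    neighbourhood of the seeds (seed_mask) are treated as
    artifact-affected and replaced with an estimated soft-tissue value.

    hu_max and replacement_hu are assumed placeholder values.
    """
    corrected = np.array(ct, dtype=float, copy=True)
    artifact = np.asarray(seed_mask, dtype=bool) & (corrected > hu_max)
    corrected[artifact] = replacement_hu
    return corrected
```

    Restricting the correction to the seed neighbourhood is what distinguishes STR from a global threshold, which would also overwrite legitimately dense tissue such as bone.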

  20. The Rise and Fall of Star Formation Histories of Blue Galaxies at Redshifts 0.2 < z < 1.4

    NASA Technical Reports Server (NTRS)

    Pacifici, Camilla; Kassin, Susan A.; Weiner, Benjamin; Charlot, Stephane; Gardner, Jonathan P.

    2012-01-01

    Popular cosmological scenarios predict that galaxies form hierarchically from the merger of many progenitors, each with its own unique star formation history (SFH). We use the approach recently developed by Pacifici et al. to constrain the SFHs of 4517 blue (presumably star-forming) galaxies with spectroscopic redshifts in the range 0.2 < z < 1.4 from the All-Wavelength Extended Groth Strip International Survey (AEGIS). This consists of a Bayesian analysis of the observed galaxy spectral energy distributions with a comprehensive library of synthetic spectra assembled using state-of-the-art models of star formation and chemical enrichment histories, stellar population synthesis, nebular emission and attenuation by dust. We constrain the SFH of each galaxy in our sample by comparing the observed fluxes in the B, R, I and K(sub s) bands and rest-frame optical emission-line luminosities with those of one million model spectral energy distributions. We explore the dependence of the resulting SFH on galaxy stellar mass and redshift. We find that the average SFHs of high-mass galaxies rise and fall in a roughly symmetric bell-shaped manner, while those of low-mass galaxies rise progressively in time, consistent with the typically stronger star formation activity of low-mass compared to high-mass galaxies. For galaxies of all masses, the star formation activity rises more rapidly at high than at low redshift. These findings imply that the standard approximation of exponentially declining SFHs widely used to interpret observed galaxy spectral energy distributions is not appropriate to constrain the physical parameters of star-forming galaxies at intermediate redshifts.

  1. Evolution, distribution, and characteristics of rifting in southern Ethiopia

    NASA Astrophysics Data System (ADS)

    Philippon, Melody; Corti, Giacomo; Sani, Federico; Bonini, Marco; Balestrieri, Maria-Laura; Molin, Paola; Willingshofer, Ernst; Sokoutis, Dimitrios; Cloetingh, Sierd

    2014-04-01

    Southern Ethiopia is a key region for understanding the evolution of the East African rift system, since it is the area of interaction between the main Ethiopian rift (MER) and the Kenyan rift. However, geological data constraining rift evolution in this remote area are still relatively sparse. In this study the timing, distribution, and style of rifting in southern Ethiopia are constrained by new structural, geochronological, and geomorphological data. The border faults in the area are roughly parallel to preexisting basement fabrics and become progressively more oblique with respect to the regional Nubia-Somalia motion proceeding southward. Kinematic indicators along these faults are mainly dip slip, pointing to a progressive rotation of the computed direction of extension toward the south. Radiocarbon data indicate faulting after 30 ka at both the western and eastern margins of the MER, with limited axial deformation. Similarly, geomorphological data suggest recent fault activity along the western margins of the basins composing the Gofa Province and in the Chew Bahir basin. This supports the interpretation that the interaction between the MER and the Kenyan rift in southern Ethiopia occurs in a 200 km wide zone of ongoing deformation. Fault-related exhumation at ~10-12 Ma in the Gofa Province, as constrained by new apatite fission track data, occurred later than the ~20 Ma basement exhumation of the Chew Bahir basin, thus pointing to a northward propagation of the Kenyan rift-related extension in the area.

  2. Constraining the redshift distribution of ultrahigh-energy-cosmic-ray sources by isotropic gamma-ray background

    NASA Astrophysics Data System (ADS)

    Liu, Ruo-Yu; Taylor, Andrew; Wang, Xiang-Yu; Aharonian, Felix

    2017-01-01

    By interacting with cosmic background photons during their propagation through intergalactic space, ultrahigh energy cosmic rays (UHECRs) produce energetic electron/positron pairs and photons which initiate electromagnetic cascades, contributing to the isotropic gamma-ray background (IGRB). The level of the generated gamma-ray flux depends strongly on the redshift evolution of the UHECR sources. Recently, the Fermi-LAT collaboration reported that (86 +16/-14)% of the total extragalactic gamma-ray flux comes from extragalactic point sources, including unresolved ones. This leaves limited room for the diffuse gamma rays generated via UHECR propagation, and consequently constrains the UHECR source distribution in the Universe. Normalizing the total cosmic ray energy budget with the observed UHECR flux in the energy band (1-4)×10^18 eV, we calculate the diffuse gamma-ray flux generated through UHECR propagation. We find that, in order not to overshoot the new IGRB limit, these sub-ankle UHECRs should be produced mainly by nearby sources, with a possibly non-negligible contribution from our Galaxy. The distance to the majority of UHECR sources can be further constrained if a given fraction of the observed IGRB at 820 GeV originates from UHECRs. We note that our result should be conservative, since there may be various other contributions to the IGRB that are not included here.

  3. Force distribution in a semiflexible loop.

    PubMed

    Waters, James T; Kim, Harold D

    2016-04-01

    Loops undergoing thermal fluctuations are prevalent in nature. Ringlike or cross-linked polymers, cyclic macromolecules, and protein-mediated DNA loops all belong to this category. The stability of these molecules is generally described in terms of free energy, an average quantity, but it may also be impacted by local fluctuating forces acting within these systems. The full distribution of these forces can thus give us insights into mechanochemistry beyond the predictive capability of thermodynamics. In this paper, we study the force exerted by an inextensible semiflexible polymer constrained in a looped state. By using a simulation method termed "phase-space sampling," we generate the equilibrium distribution of chain conformations in both position and momentum space. We compute the constraint forces between the two ends of the loop in this chain ensemble using Lagrangian mechanics, and show that the mean of these forces is equal to the thermodynamic force. By analyzing kinetic and potential contributions to the forces, we find that the mean force acts in the direction of increasing extension not because of bending stress, but in spite of it. Furthermore, we obtain a distribution of constraint forces as a function of chain length, extension, and stiffness. Notably, increasing contour length decreases the average force, but the additional freedom allows fluctuations in the constraint force to increase. The force distribution is asymmetric and falls off less sharply than a Gaussian distribution. Our work exemplifies a system where large-amplitude fluctuations occur in a way unforeseen by a purely thermodynamic framework, and offers computational tools useful for efficient, unbiased simulation of a constrained system.

  4. Force distribution in a semiflexible loop

    PubMed Central

    Waters, James T.; Kim, Harold D.

    2017-01-01

    Loops undergoing thermal fluctuations are prevalent in nature. Ringlike or cross-linked polymers, cyclic macromolecules, and protein-mediated DNA loops all belong to this category. The stability of these molecules is generally described in terms of free energy, an average quantity, but it may also be impacted by local fluctuating forces acting within these systems. The full distribution of these forces can thus give us insights into mechanochemistry beyond the predictive capability of thermodynamics. In this paper, we study the force exerted by an inextensible semiflexible polymer constrained in a looped state. By using a simulation method termed "phase-space sampling," we generate the equilibrium distribution of chain conformations in both position and momentum space. We compute the constraint forces between the two ends of the loop in this chain ensemble using Lagrangian mechanics, and show that the mean of these forces is equal to the thermodynamic force. By analyzing kinetic and potential contributions to the forces, we find that the mean force acts in the direction of increasing extension not because of bending stress, but in spite of it. Furthermore, we obtain a distribution of constraint forces as a function of chain length, extension, and stiffness. Notably, increasing contour length decreases the average force, but the additional freedom allows fluctuations in the constraint force to increase. The force distribution is asymmetric and falls off less sharply than a Gaussian distribution. Our work exemplifies a system where large-amplitude fluctuations occur in a way unforeseen by a purely thermodynamic framework, and offers computational tools useful for efficient, unbiased simulation of a constrained system. PMID:27176436

  5. Harvesting Entropy for Random Number Generation for Internet of Things Constrained Devices Using On-Board Sensors

    PubMed Central

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-01-01

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357

  6. Harvesting entropy for random number generation for internet of things constrained devices using on-board sensors.

    PubMed

    Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej

    2015-10-22

    Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things.
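    The least-significant-bits concatenation method lends itself to a compact sketch (a hypothetical Python illustration of the general idea, not the authors' embedded implementation; `bits_per_sample` is an assumed parameter):

```python
import math
from collections import Counter

def harvest_lsb_bits(samples, bits_per_sample=2):
    """Concatenate the least-significant bits of raw sensor readings
    into a byte string (the LSB-concatenation harvesting idea)."""
    mask = (1 << bits_per_sample) - 1
    out = bytearray()
    acc, nbits = 0, 0
    for s in samples:
        acc = (acc << bits_per_sample) | (s & mask)
        nbits += bits_per_sample
        while nbits >= 8:
            nbits -= 8
            out.append((acc >> nbits) & 0xFF)
            acc &= (1 << nbits) - 1   # keep only the unemitted bits
    return bytes(out)

def shannon_entropy(data):
    """Shannon entropy of a byte string, in bits per byte."""
    total = len(data)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(data).values())
```

    On real hardware the samples would come from the on-board sensors; the entropy estimate is how the quality of the harvested bytes is assessed.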

  7. Constraining geostatistical models with hydrological data to improve prediction realism

    NASA Astrophysics Data System (ADS)

    Demyanov, V.; Rojas, T.; Christie, M.; Arnold, D.

    2012-04-01

    Geostatistical models reproduce spatial correlation based on the available on-site data and more general concepts about the modelled patterns, e.g. training images. One problem in modelling natural systems with geostatistics is maintaining realistic spatial features that agree with the physical processes in nature. Tuning the model parameters to the data may lead to geostatistical realisations with unrealistic spatial patterns which nevertheless honour the data. Such models result in poor predictions, even though they fit the available data well. Conditioning the model to a wider range of relevant data provides a remedy that avoids producing unrealistic features in spatial models. For instance, there are vast amounts of information about the geometries of river channels that can be used to describe fluvial environments. Relations between the geometrical channel characteristics (width, depth, wavelength, amplitude, etc.) are complex and non-parametric and exhibit a great deal of uncertainty, which it is important to propagate rigorously into the predictive model. These relations can be described within a Bayesian approach as multi-dimensional prior probability distributions. We propose a way to constrain multiple-point statistics models with intelligent priors obtained from analysing a vast collection of contemporary river patterns based on previously published works. We applied machine learning techniques, namely neural networks and support vector machines, to extract multivariate non-parametric relations between the geometrical characteristics of fluvial channels from the available data. An example demonstrates how ensuring geological realism helps to deliver more reliable predictions for a subsurface oil reservoir in a fluvial depositional environment.
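    To illustrate what extracting a non-parametric relation between channel characteristics might look like, here is a minimal sketch using Nadaraya-Watson kernel regression as a stand-in for the neural-network and support-vector-machine fits used in the study (the variables and data are hypothetical):

```python
import numpy as np

def kernel_regression(x_train, y_train, x_query, bandwidth=1.0):
    """Nadaraya-Watson kernel estimate of E[y | x] at a query point:
    a simple non-parametric regressor, here standing in for the
    relation between, say, channel width (x) and depth (y)."""
    x_train = np.asarray(x_train, dtype=float)
    y_train = np.asarray(y_train, dtype=float)
    w = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
    return float(np.dot(w, y_train) / w.sum())
```

    Unlike a parametric fit, nothing here assumes a functional form; the estimate at each query point is a locally weighted average of the observed channel data.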

  8. Automated 3D closed surface segmentation: application to vertebral body segmentation in CT images.

    PubMed

    Liu, Shuang; Xie, Yiting; Reeves, Anthony P

    2016-05-01

    A fully automated segmentation algorithm, progressive surface resolution (PSR), is presented in this paper to determine the closed surface of approximately convex blob-like structures that are common in biomedical imaging. The PSR algorithm was applied to the cortical surface segmentation of 460 vertebral bodies on 46 low-dose chest CT images, which can potentially be used for automated bone mineral density measurement and compression fracture detection. The target surface is realized by a closed triangular mesh, which guarantees the enclosure. The surface vertices of the triangular mesh representation are constrained along radial trajectories that are uniformly distributed in 3D angle space. The segmentation is accomplished by determining, for each radial trajectory, the location of its intersection with the target surface. The surface is first initialized from an input high-confidence boundary image and then resolved progressively, based on a dynamic attraction map, in order of decreasing degree of evidence regarding the target surface location. In visual evaluation, the algorithm achieved acceptable segmentation for 99.35% of vertebral bodies. Quantitative evaluation was performed on 46 vertebral bodies and achieved an overall mean Dice coefficient of 0.939 (max = 0.957, min = 0.906, standard deviation = 0.011) using manual annotations as the ground truth. Both visual and quantitative evaluations demonstrate encouraging performance of the PSR algorithm. This novel surface resolution strategy provides uniform angular resolution for the segmented surface, with computational complexity and runtime that are linearly constrained by the total number of vertices of the triangular mesh representation.
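    The Dice coefficient used for the quantitative evaluation is straightforward to compute from two binary masks; a minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # two empty masks agree perfectly by convention
    return float(2.0 * np.logical_and(a, b).sum() / denom)
```

    A value of 1.0 indicates perfect overlap between the automated segmentation and the manual ground truth, and 0.0 indicates no overlap.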

  9. The importance of diverse data types to calibrate a watershed model of the Trout Lake Basin, Northern Wisconsin, USA

    USGS Publications Warehouse

    Hunt, R.J.; Feinstein, D.T.; Pint, C.D.; Anderson, M.P.

    2006-01-01

    As part of the USGS Water, Energy, and Biogeochemical Budgets project and the NSF Long-Term Ecological Research work, a parameter estimation code was used to calibrate a deterministic groundwater flow model of the Trout Lake Basin in northern Wisconsin. Observations included traditional calibration targets (head, lake stage, and baseflow observations) as well as unconventional targets such as groundwater flows to and from lakes, depth of a lake water plume, and time of travel. The unconventional data types were important for parameter estimation convergence and allowed the development of a more detailed parameterization capable of resolving model objectives with well-constrained parameter values. Independent estimates of groundwater inflow to lakes were most important for constraining lakebed leakance, and the depth of the lake water plume was important for determining hydraulic conductivity and conceptual aquifer layering. The most important target overall, however, was a conventional regional baseflow target that led to the correct distribution of flow between sub-basins and the regional system during model calibration. The use of an automated parameter estimation code (1) facilitated the calibration process by providing a quantitative assessment of the model's ability to match disparate observed data types, and (2) allowed assessment of the influence of observed targets on the calibration process. The model calibration required the use of a 'universal' parameter estimation code in order to include all types of observations in the objective function. The methods described in this paper help address issues of watershed complexity and non-uniqueness common to deterministic watershed models. © 2005 Elsevier B.V. All rights reserved.

  10. Reconstructing spatial and temporal patterns of paleoglaciation along the Tian Shan

    NASA Astrophysics Data System (ADS)

    Harbor, J.; Stroeven, A. P.; Beel, C.; Blomdin, R.; Caffee, M. W.; Chen, Y.; Codilean, A.; Gribenski, N.; Hattestrand, C.; Heyman, J.; Ivanov, M.; Kassab, C.; Li, Y.; Lifton, N. A.; Liu, G.; Petrakov, D.; Rogozhina, I.; Usubaliev, R.

    2012-12-01

    Testing and calibrating global climate models require well-constrained information on past climates of key regions around the world. Particularly important are transitional regions that provide a sensitive record of past climate change. Central Asia is an extreme continental location with glaciers and rivers that respond sensitively to temporal variations in the dominance of several major climate systems. As an international team initiative, we are reconstructing the glacial history of the Kyrgyz and Chinese Tian Shan, based on mapping and dating of key localities along the range. Remote-sensing-based geomorphological mapping, building on previous maps produced by Kyrgyz, Russian, Chinese and German scholars, is being augmented with field observations of glacial geomorphology and the maximum distribution of erratics. We are using cosmogenic nuclide (CN) 10Be dating of moraines and other landforms that constrain the former maximum extents of glaciers. Study sites include the Ala-Archa, Ak-Shyrak and Inylchek/Sary-Dzaz areas in Kyrgyzstan and the Urumqi valley (as well as its upland and southern slopes), and the Tumur and Bogeda peak areas in China. Comparing consistently dated glacial histories along and across the range will allow us to examine potential shifts in the dominance patterns of climate systems over time in Central Asia. We are also comparing ages based on CN with optically stimulated luminescence (OSL) and electron spin resonance (ESR) dates. The final stage of this project will use intermediate complexity glacier flow models to examine paleoclimatic implications of the observed spatial and temporal patterns of glacier changes across Central Asia and eastern Tibet, focused in particular on the last glacial cycle.

  11. The Interstellar Medium Properties of Heavily Reddened Quasars & Companions at z ˜ 2.5 with ALMA & JVLA

    NASA Astrophysics Data System (ADS)

    Banerji, Manda; Jones, Gareth C.; Wagg, Jeff; Carilli, Chris L.; Bisbas, Thomas G.; Hewett, Paul C.

    2018-06-01

    We study the interstellar medium (ISM) properties of three heavily reddened quasars at z ˜ 2.5 as well as three millimetre-bright companion galaxies near these quasars. New JVLA and ALMA observations constrain the CO(1-0), CO(7-6) and [CI]3P2 - 3P1 line emission as well as the far infrared to radio continuum. The gas excitation and physical properties of the ISM are constrained by comparing our observations to photo-dissociation region (PDR) models. The ISM in our high-redshift quasars is composed of very high-density, high-temperature gas which is already highly enriched in elements like carbon. One of our quasar hosts is shown to be a close-separation (<2″) major merger with different line emission properties in the millimeter-bright galaxy and quasar components. Low angular resolution observations of high-redshift quasars used to assess quasar excitation properties should therefore be interpreted with caution as they could potentially be averaging over multiple components with different ISM conditions. Our quasars and their companion galaxies show a range of CO excitation properties spanning the full extent from starburst-like to quasar-like spectral line energy distributions. We compare gas masses based on CO, CI and dust emission, and find that these can disagree when standard assumptions are made regarding the values of αCO, the gas-to-dust ratio and the atomic carbon abundances. We conclude that the ISM properties of our quasars and their companion galaxies are diverse and likely vary spatially across the full extent of these complex, merging systems.

  12. Using Crater Counts to Constrain Erosion Rates on Mars: Implications for the Global Dust Cycle, Sedimentary Rock Erosion and Organic Matter Preservation

    NASA Astrophysics Data System (ADS)

    Mayer, D. P.; Kite, E. S.

    2016-12-01

    Sandblasting, aeolian infilling, and wind deflation all obliterate impact craters on Mars, complicating the use of crater counts for chronology, particularly on sedimentary rock surfaces. However, crater counts on sedimentary rocks can be exploited to constrain wind erosion rates. Relatively small, shallow craters are preferentially obliterated as a landscape undergoes erosion, so the size-frequency distribution of impact craters in a landscape undergoing steady exhumation will develop a shallower power-law slope than a simple production function. Estimating erosion rates is important for several reasons: (1) wind erosion is a source of mass for the global dust cycle, so the global dust reservoir will disproportionately sample fast-eroding regions; (2) the pace and pattern of recent wind erosion is a sorely needed constraint on models of the sculpting of Mars' sedimentary-rock mounds; (3) near-surface complex organic matter on Mars is destroyed by radiation in <10^8 years, so high rates of surface exhumation are required for the preservation of near-surface organic matter. We use crater counts from 18 HiRISE images over sedimentary rock deposits as the basis for estimating erosion rates. Each image was counted by ≥3 analysts, and only features agreed on by ≥2 analysts were included in the erosion rate estimation. Erosion rates range from 0.1-0.2 μm/yr across all images. These rates represent an upper limit on surface erosion by landscape lowering. At the conference we will discuss the within- and between-image variability of erosion rates and their implications for recent geological processes on Mars.
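    The preferential loss of small craters under steady erosion can be illustrated with a toy Monte Carlo model (a hypothetical sketch; the production-function slope, depth-to-diameter ratio, and time span are assumed values, not those used in the study):

```python
import random

def surviving_craters(n, e_rate, d_min=10.0, alpha=3.0, t_span=1e7,
                      depth_frac=0.2, seed=1):
    """Monte Carlo sketch of crater survival under steady erosion.

    Craters form at random times over t_span (yr) with power-law
    diameters N(>D) ~ D**-alpha; a crater of diameter d (m) is
    obliterated once cumulative surface lowering e_rate * age (m)
    exceeds its depth, taken as depth_frac * d. Returns the list of
    surviving diameters.
    """
    rng = random.Random(seed)
    survivors = []
    for _ in range(n):
        d = d_min * (1.0 - rng.random()) ** (-1.0 / alpha)  # Pareto draw
        age = rng.random() * t_span
        if e_rate * age < depth_frac * d:
            survivors.append(d)
    return survivors
```

    With `e_rate = 0` every crater survives; with a finite erosion rate, small craters are preferentially removed, flattening the observed size-frequency slope relative to the production function, which is the effect the erosion-rate estimate exploits.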

  13. A multiple-point geostatistical approach to quantifying uncertainty for flow and transport simulation in geologically complex environments

    NASA Astrophysics Data System (ADS)

    Cronkite-Ratcliff, C.; Phelps, G. A.; Boucher, A.

    2011-12-01

    In many geologic settings, the pathways of groundwater flow are controlled by geologic heterogeneities which have complex geometries. Models of these geologic heterogeneities, and consequently, their effects on the simulated pathways of groundwater flow, are characterized by uncertainty. Multiple-point geostatistics, which uses a training image to represent complex geometric descriptions of geologic heterogeneity, provides a stochastic approach to the analysis of geologic uncertainty. Incorporating multiple-point geostatistics into numerical models provides a way to extend this analysis to the effects of geologic uncertainty on the results of flow simulations. We present two case studies to demonstrate the application of multiple-point geostatistics to numerical flow simulation in complex geologic settings with both static and dynamic conditioning data. Both cases involve the development of a training image from a complex geometric description of the geologic environment. Geologic heterogeneity is modeled stochastically by generating multiple equally-probable realizations, all consistent with the training image. Numerical flow simulation for each stochastic realization provides the basis for analyzing the effects of geologic uncertainty on simulated hydraulic response. The first case study is a hypothetical geologic scenario developed using data from the alluvial deposits in Yucca Flat, Nevada. The SNESIM algorithm is used to stochastically model geologic heterogeneity conditioned to the mapped surface geology as well as vertical drill-hole data. Numerical simulation of groundwater flow and contaminant transport through geologic models produces a distribution of hydraulic responses and contaminant concentration results. From this distribution of results, the probability of exceeding a given contaminant concentration threshold can be used as an indicator of uncertainty about the location of the contaminant plume boundary. 
The second case study considers a characteristic lava-flow aquifer system in Pahute Mesa, Nevada. A 3D training image is developed by using object-based simulation of parametric shapes to represent the key morphologic features of rhyolite lava flows embedded within ash-flow tuffs. In addition to vertical drill-hole data, transient pressure head data from aquifer tests can be used to constrain the stochastic model outcomes. The use of both static and dynamic conditioning data allows the identification of potential geologic structures that control hydraulic response. These case studies demonstrate the flexibility of the multiple-point geostatistics approach for considering multiple types of data and for developing sophisticated models of geologic heterogeneities that can be incorporated into numerical flow simulations.
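
The exceedance-probability indicator used in the first case study above reduces to a count over equally probable realizations. A minimal sketch, with a toy random ensemble standing in as a hypothetical substitute for the transport-simulation output (all names and numbers are illustrative):

```python
import random

def exceedance_probability(realizations, threshold):
    """Fraction of realizations in which each cell exceeds `threshold`.

    `realizations` is a list of equally probable concentration fields,
    each a flat list of cell values (a stand-in for the contaminant
    transport results described above).
    """
    n = len(realizations)
    n_cells = len(realizations[0])
    return [sum(r[c] > threshold for r in realizations) / n
            for c in range(n_cells)]

# Toy ensemble: 200 realizations of a 5-cell field with a decaying mean,
# mimicking concentrations falling off away from a source.
random.seed(0)
ens = [[max(0.0, random.gauss(1.0 / (c + 1), 0.2)) for c in range(5)]
       for _ in range(200)]
probs = exceedance_probability(ens, threshold=0.5)
print(probs)  # cells near the "source" exceed the threshold far more often
```

High per-cell probabilities mark regions confidently inside the plume boundary; intermediate values flag where the boundary location is uncertain.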

  14. Calculation of primordial abundances of light nuclei including a heavy sterile neutrino

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosquera, M.E.; Civitarese, O., E-mail: mmosquera@fcaglp.unlp.edu.ar, E-mail: osvaldo.civitarese@fisica.unlp.edu.ar

    2015-08-01

    We include the coupling of a heavy sterile neutrino with active neutrinos in the calculation of primordial abundances of light nuclei. We calculate neutrino distribution functions and primordial abundances as functions of a renormalization of the sterile neutrino distribution function (a), the sterile neutrino mass (m_s) and the mixing angle (φ). Using the observational data, we set constraints on these parameters, obtaining 0 < a < 0.4, sin^2 φ ≈ 0.12−0.39 and 0 < m_s < 7 keV at the 1σ level, for a fixed value of the baryon-to-photon ratio. When the baryon-to-photon ratio is allowed to vary, its extracted value is in agreement with the values constrained by Planck observations and by the Wilkinson Microwave Anisotropy Probe (WMAP). It is found that the anomaly in the abundance of 7Li persists, in spite of the inclusion of a heavy sterile neutrino.

  15. On-shell constrained M_2 variables with applications to mass measurements and topology disambiguation

    NASA Astrophysics Data System (ADS)

    Cho, Won Sang; Gainer, James S.; Kim, Doojin; Matchev, Konstantin T.; Moortgat, Filip; Pape, Luc; Park, Myeonghun

    2014-08-01

    We consider a class of on-shell constrained mass variables that are 3+1 dimensional generalizations of the Cambridge M_T2 variable and that automatically incorporate various assumptions about the underlying event topology. The presence of additional on-shell constraints causes their kinematic distributions to exhibit sharper endpoints than the usual M_T2 distribution. We study the mathematical properties of these new variables, e.g., the uniqueness of the solution selected by the minimization over the invisible particle 4-momenta. We then use this solution to reconstruct the masses of various particles along the decay chain. We propose several tests for validating the assumed event topology in missing energy events from new physics. The tests are able to determine: 1) whether the decays in the event are two-body or three-body, 2) if the decay is two-body, whether the intermediate resonances in the two decay chains are the same, and 3) the exact sequence in which the visible particles are emitted from each decay chain.

  16. RX J1856-3754: Evidence for a Stiff Equation of State

    NASA Astrophysics Data System (ADS)

    Braje, Timothy M.; Romani, Roger W.

    2002-12-01

    We have examined the soft X-ray plus optical/UV spectrum of the nearby isolated neutron star RX J1856-3754, comparing it with detailed models of a thermally emitting surface. Like previous investigators, we find that the spectrum is best fitted by a two-temperature blackbody model. In addition, our simulations constrain the allowed viewing geometry from the observed pulse fraction upper limits. These simulations show that RX J1856-3754 is very likely to be a normal young pulsar, with the nonthermal radio beam missing Earth's line of sight. The spectral energy distribution limits on the model parameter space put a strong constraint on the star's M/R. At the measured parallax distance, the allowed range for M_NS = 1.5 M_solar is R_NS = 13.7 ± 0.6 km. Under this interpretation, the equation of state (EOS) is relatively stiff near nuclear density, and the quark star EOS posited in some previous studies is strongly excluded. The data also constrain the surface temperature distribution over the polar cap.

  17. A Study of Interstellar Medium Components of the Ohio State University Bright Spiral Galaxy Survey

    NASA Astrophysics Data System (ADS)

    Butner, Melissa; Deustua, S. E.; Conti, A.; Smith, J.

    2011-01-01

    Multi-wavelength data can be used to provide information on the interstellar medium of galaxies, as well as on their stellar populations. We use the Ohio State University Bright Spiral Galaxy Survey (OSBSGS) to investigate the distribution and properties of the interstellar medium in a set of nearby galaxies. The OSBSGS consists of B, V, R, J, H and K band images for over 200 nearby spiral galaxies. These data allow us to probe the dust temperatures and distribution using color maps. When combined with a pixel-based analysis, it may be possible to better constrain the heating mechanism for the ISM, as well as dust models. In this paper we discuss our progress in understanding, in particular, the properties of dust in nearby galaxies. Melissa Butner was a participant in the STScI Summer Student Program supported by the STScI Director's Discretionary Research Fund. MB also acknowledges support and computer cluster access via NSF grant 07-22890.

  18. Determination of the top-quark pole mass and strong coupling constant from the t t-bar production cross section in pp collisions at $$\\sqrt{s}$$ = 7 TeV

    DOE PAGES

    Chatrchyan, Serguei

    2014-08-21

    The inclusive cross section for top-quark pair production measured by the CMS experiment in proton-proton collisions at a center-of-mass energy of 7 TeV is compared to the QCD prediction at next-to-next-to-leading order with various parton distribution functions to determine the top-quark pole mass, $$m_t^{pole}$$, or the strong coupling constant, $$\\alpha_S$$. With the parton distribution function set NNPDF2.3, a pole mass of 176.7$$^{+3.0}_{-2.8}$$ GeV is obtained when constraining $$\\alpha_S$$ at the scale of the Z boson mass, $$m_Z$$, to the current world average. Alternatively, by constraining $$m_t^{pole}$$ to the latest average from direct mass measurements, a value of $$\\alpha_S(m_Z)$$ = 0.1151$$^{+0.0028}_{-0.0027}$$ is extracted. This is the first determination of $$\\alpha_S$$ using events from top-quark production.
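
Stripped of the NNLO machinery, the extraction logic is a one-parameter chi-square scan of the predicted cross section against the measurement. A sketch with a hypothetical linearized stand-in for the theory prediction (all numbers below are invented for illustration, not the CMS values):

```python
def extract_alpha_s(sigma_meas, sigma_err, prediction, grid):
    """Scan alpha_s values and return the one minimizing chi^2 between
    the predicted and measured cross sections, mirroring the extraction
    logic described above (the real analysis profiles PDF and scale
    uncertainties as well)."""
    return min(grid,
               key=lambda a: ((prediction(a) - sigma_meas) / sigma_err) ** 2)

# Illustrative toy prediction: linearized sensitivity around alpha_s = 0.118.
def sigma_theory(alpha_s):
    return 172.0 + 2500.0 * (alpha_s - 0.118)   # pb, invented slope

grid = [0.100 + 0.0001 * i for i in range(400)]
alpha_hat = extract_alpha_s(sigma_meas=165.0, sigma_err=8.0,
                            prediction=sigma_theory, grid=grid)
print(round(alpha_hat, 4))
```

The measured uncertainty enters only through the chi-square weighting here; a confidence interval would follow from scanning where chi^2 rises by one unit.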

  19. ALMA observations of Titan : Vertical and spatial distribution of nitriles

    NASA Astrophysics Data System (ADS)

    Moreno, R.; Lellouch, E.; Vinatier, S.; Gurwell, M.; Moullet, A.; Lara, L. M.; Hidayat, T.

    2015-10-01

    We report submm observations of Titan performed with the ALMA interferometer centered at the rotational frequencies of HCN(4-3) and HNC(4-3), i.e. 354 and 362 GHz. These measurements yielded disk-resolved emission spectra of Titan with an angular resolution of ~0.47''. Titan's angular surface diameter was 0.77''. Data were acquired in summer 2012 near the greatest eastern and western elongations of Titan at a spectral resolution of 122 kHz (λ/dλ ≈ 3×10^6). We have obtained maps of several nitriles present in Titan's stratosphere: HCN, HC3N, CH3CN, HNC, C2H5CN and other weak lines (isotopes, vibrationally excited lines). We will present a radiative transfer analysis of the acquired spectra. By combining all these detected rotational lines, we will constrain the atmospheric temperature, the spatial and vertical distributions of these species, as well as isotopic ratios. Moreover, Doppler line-shift measurements will enable us to constrain the zonal wind flow in the upper atmosphere.

  20. Formal hardware verification of digital circuits

    NASA Technical Reports Server (NTRS)

    Joyce, J.; Seger, C.-J.

    1991-01-01

    The use of formal methods to verify the correctness of digital circuits is less constrained by the growing complexity of digital circuits than conventional methods based on exhaustive simulation. This paper briefly outlines three main approaches to formal hardware verification: symbolic simulation, state machine analysis, and theorem-proving.

  1. Explaining postseismic and aseismic transient deformation in subduction zones with rate and state friction modeling constrained by lab and geodetic observations

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Dedontney, N. L.; Rice, J. R.

    2007-12-01

    Rate and state friction, as applied to modeling subduction earthquake sequences, routinely predicts postseismic slip. It also predicts spontaneous aseismic slip transients, at least when pore pressure p is highly elevated near and downdip from the stability transition [Liu and Rice, 2007]. Here we address how to make such postseismic and transient predictions more fully compatible with geophysical observations. For example, lab observations can determine the a, b parameters and state-evolution slip distance L of rate and state friction as functions of lithology and temperature and, with the aid of a structural and thermal model of the subduction zone, as functions of downdip distance. Geodetic observations constrain interseismic, postseismic and aseismic transient deformations, which are controlled in the modeling by the distributions of aσ̄ and bσ̄ (parameters which also partly control the seismic rupture phase), where σ̄ = σ − p is the effective normal stress. Elevated p, controlled by tectonic compression and dehydration, may be constrained by petrologic and seismic observations. The amount of deformation and downdip extent of the slipping zone associated with the spontaneous quasi-periodic transients, as thus far modeled [Liu and Rice, 2007], is generally smaller than that observed during episodes of slow slip in the northern Cascadia and SW Japan subduction zones. However, the modeling was based on lab data for granite gouge under hydrothermal conditions because the data are most complete for that case. We here report modeling based on lab data for dry granite gouge [Stesky, 1975; Lockner et al., 1986], involving no or lessened chemical interaction with water and hence possibly a closer analog to dehydrated oceanic crust, and on limited data for gabbro gouge [He et al., 2007], an expected lithology.
Both data sets show a much less rapid increase of a − b with temperature above the stability transition (~350 °C) than does wet granite gouge; a − b increases to ~0.08 for wet granite at 600 °C, but to only ~0.01 in the dry granite and gabbro cases. We find that the lessened high-temperature a − b does, for the same σ̄, modestly extend the transient slip episodes further downdip, although a majority of slip is still contributed near and in the updip rate-weakening region. However, postseismic slip, for the same σ̄, propagates much further downdip into the rate-strengthening region. To better constrain the downdip distribution of (a − b)σ̄, and possibly aσ̄ and L, we focus on the geodetically constrained [Hutton et al., 2001] space-time distribution of postseismic slip for the 1995 Mw = 8.0 Colima-Jalisco earthquake. This is a similarly shallow-dipping subduction zone with a thermal profile [Currie et al., 2001] comparable to those that have thus far been shown to exhibit aseismic transients and non-volcanic tremor [Peacock et al., 2002]. We extrapolate the modeled 2-D postseismic slip, following a thrust earthquake with coseismic slip similar to the 1995 event, to a spatial-temporal 3-D distribution. Surface deformation due to such slip on the thrust fault in an elastic half-space is calculated and compared to that observed at western Mexico GPS stations, to constrain the above depth-variable model parameters.

  2. A developmental approach to complex PTSD: childhood and adult cumulative trauma as predictors of symptom complexity.

    PubMed

    Cloitre, Marylene; Stolbach, Bradley C; Herman, Judith L; van der Kolk, Bessel; Pynoos, Robert; Wang, Jing; Petkova, Eva

    2009-10-01

    Exposure to multiple traumas, particularly in childhood, has been proposed to result in a complex of symptoms that includes posttraumatic stress disorder (PTSD) as well as a constrained, but variable group of symptoms that highlight self-regulatory disturbances. The relationship between accumulated exposure to different types of traumatic events and total number of different types of symptoms (symptom complexity) was assessed in an adult clinical sample (N = 582) and a child clinical sample (N = 152). Childhood cumulative trauma but not adulthood trauma predicted increasing symptom complexity in adults. Cumulative trauma predicted increasing symptom complexity in the child sample. Results suggest that Complex PTSD symptoms occur in both adult and child samples in a principled, rule-governed way and that childhood experiences significantly influenced adult symptoms. Copyright © 2009 International Society for Traumatic Stress Studies.

  3. An Optimization Framework for Dynamic, Distributed Real-Time Systems

    NASA Technical Reports Server (NTRS)

    Eckert, Klaus; Juedes, David; Welch, Lonnie; Chelberg, David; Bruggerman, Carl; Drews, Frank; Fleeman, David; Parrott, David; Pfarr, Barbara

    2003-01-01

    This paper presents a model that is useful for developing resource allocation algorithms for distributed real-time systems that operate in dynamic environments. Interesting aspects of the model include dynamic environments, utility, and service levels, which provide a means for graceful degradation in resource-constrained situations and support optimization of the allocation of resources. The paper also provides an allocation algorithm that illustrates how to use the model to produce feasible, optimal resource allocations.
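
The utility/service-level idea can be illustrated with a brute-force allocator: each task offers several service levels of increasing resource cost and utility, and the allocator picks the combination maximizing total utility within the budget. A toy sketch (task names, costs, and utilities are invented; the paper's algorithm is more scalable than exhaustive enumeration):

```python
from itertools import product

def allocate(tasks, budget):
    """Pick one service level per task to maximize total utility under a
    resource budget. `tasks` maps a task name to a list of
    (resource_cost, utility) service levels, cheapest first, so every
    task retains at least minimal service (graceful degradation)."""
    names = list(tasks)
    best = None
    for choice in product(*(range(len(tasks[n])) for n in names)):
        cost = sum(tasks[n][lvl][0] for n, lvl in zip(names, choice))
        util = sum(tasks[n][lvl][1] for n, lvl in zip(names, choice))
        if cost <= budget and (best is None or util > best[0]):
            best = (util, dict(zip(names, choice)))
    return best

tasks = {
    "tracking":  [(1, 2), (3, 5), (5, 6)],   # (cost, utility) per level
    "telemetry": [(1, 1), (2, 3)],
    "imaging":   [(2, 2), (4, 7)],
}
util, levels = allocate(tasks, budget=8)
print(util, levels)
```

Shrinking the budget forces the allocator to drop tasks to cheaper levels rather than reject them outright, which is exactly the graceful-degradation behaviour described above.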

  4. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    PubMed Central

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, conventional approaches may not provide the resolution and level of detail one might desire within sub-regions of a multi-length-scale sample. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach can dramatically improve the spatial resolution and reveal far finer detail within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach reveals significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to three-dimensional compositional characterization of other materials. PMID:24763649

  5. Gamma-Ray Burst Afterglows as Probes of Environment and Blastwave Physics II: The Distribution of p and Structure of the Circumburst Medium

    NASA Technical Reports Server (NTRS)

    Starling, R. L. C.; vanderHorst, A. J.; Rol, E.; Wijers, R. A. M. J.; Kouveliotou, C.; Wiersema, K.; Curran, P. A.; Weltevrede, P.

    2007-01-01

    We constrain blastwave parameters and the circumburst media of a subsample of BeppoSAX Gamma-Ray Bursts. For this sample we derive the values of the injected electron energy distribution index, p, and the density structure index of the circumburst medium, k, from simultaneous spectral fits to their X-ray, optical and nIR afterglow data. The spectral fits have been done in count space, include the effects of metallicity, and are compared with the previously reported optical and X-ray temporal behaviour. Assuming the fireball model, we find a mean value of p for the sample as a whole of 2.035. A statistical analysis of the distribution demonstrates that the p values in this sample are inconsistent with a single universal value for p at the 3σ level or greater. This approach provides us with a measured distribution of circumburst density structures rather than considering only the cases of k = 0 (homogeneous) and k = 2 (wind-like). We find five GRBs for which k can be well constrained, and in four of these cases the circumburst medium is clearly wind-like. The fifth source has a value of 0 ≤ k ≤ 1, consistent with a homogeneous circumburst medium.

  6. Counting Patterns in Degenerated Sequences

    NASA Astrophysics Data System (ADS)

    Nuel, Grégory

    Biological sequences like DNA or proteins are always obtained through a sequencing process which may introduce some uncertainty. As a result, such sequences are usually written in a degenerate alphabet where some symbols may correspond to several possible letters (e.g., the IUPAC DNA alphabet). When counting patterns in such degenerate sequences, the question that naturally arises is: how should one deal with degenerate positions? Since most (usually 99%) of the positions are not degenerate, it is considered harmless to discard the degenerate positions in order to get an observation, but the exact consequences of this practice are unclear. In this paper, we introduce a rigorous method to take into account the uncertainty of sequencing for biological sequences (DNA, proteins). We first introduce a Forward-Backward approach to compute the marginal distribution of the constrained sequence and use it both to perform Expectation-Maximization estimation of the parameters and to derive a heterogeneous Markov distribution for the constrained sequence. This distribution is then used along with known DFA-based pattern approaches to obtain the exact distribution of the pattern count under the constraints. As an illustration, we consider an EST dataset from the EMBL database. Despite the fact that only 1% of the positions in this dataset are degenerate, we show that not taking these positions into account can lead to erroneous observations, further demonstrating the value of our approach.
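
The Forward-Backward computation on a constrained (degenerate) sequence can be sketched for a first-order Markov model. The IUPAC mapping below is standard; the toy transition matrix is invented for illustration:

```python
def degenerate_marginals(seq, init, trans, iupac):
    """Forward-Backward marginals over the letters compatible with each
    (possibly degenerate) position, under a first-order Markov model.
    `seq` is a string over A/C/G/T plus IUPAC codes; `init` and `trans`
    are the Markov initial and transition probabilities."""
    states = "ACGT"
    allowed = [iupac[s] for s in seq]
    # Forward pass: alpha[i][x] ∝ P(constraints up to i, X_i = x).
    alpha = [{x: (init[x] if x in allowed[0] else 0.0) for x in states}]
    for i in range(1, len(seq)):
        alpha.append({x: (sum(alpha[-1][y] * trans[y][x] for y in states)
                          if x in allowed[i] else 0.0) for x in states})
    # Backward pass: beta[i][x] ∝ P(later constraints | X_i = x).
    beta = [{x: 1.0 for x in states}]
    for i in range(len(seq) - 2, -1, -1):
        beta.insert(0, {x: sum(trans[x][y] * beta[0][y]
                               for y in states if y in allowed[i + 1])
                        for x in states})
    marg = []
    for a, b in zip(alpha, beta):
        w = {x: a[x] * b[x] for x in states}
        z = sum(w.values())
        marg.append({x: w[x] / z for x in states})
    return marg

iupac = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "N": "ACGT"}
init = {x: 0.25 for x in "ACGT"}
# Toy chain that strongly favours repeating the previous letter.
trans = {x: {y: (0.7 if x == y else 0.1) for y in "ACGT"} for x in "ACGT"}
m = degenerate_marginals("ARA", init, trans, iupac)
print(m[1])  # at the degenerate R position, A is far likelier than G
```

This is the marginal computation only; the paper additionally feeds such marginals into EM parameter estimation and a DFA-based pattern-count distribution.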

  7. Integrated detection of fractures and caves in carbonate fractured-vuggy reservoirs based on seismic data and well data

    NASA Astrophysics Data System (ADS)

    Cao, Zhanning; Li, Xiangyang; Sun, Shaohan; Liu, Qun; Deng, Guangxiao

    2018-04-01

    To predict carbonate fractured-vuggy reservoirs, we propose an integrated approach based on seismic and well data. We divide a carbonate fracture-cave system into four scales for study: micro-scale fractures, meso-scale fractures, macro-scale fractures and caves. Firstly, we analyze anisotropic attributes of prestack azimuth gathers based on multi-scale rock physics forward modeling. We select the frequency attenuation gradient attribute to calculate azimuthal anisotropy intensity, and we constrain the result with Formation MicroScanner image data and trial production data to predict the distribution of both micro-scale and meso-scale fracture sets. Then, poststack seismic attributes, variance, curvature and ant-tracking algorithms are used to predict the distribution of macro-scale fractures. We also constrain these results with trial production data for accuracy. Next, the distribution of caves is predicted from the amplitude corresponding to the instantaneous peak frequency of the seismic imaging data. Finally, the meso-scale fracture sets, macro-scale fractures and caves are combined to obtain an integrated result. This integrated approach is applied to a real field in the Tarim Basin in western China for the prediction of fracture-cave reservoirs. The results indicate that this approach explains the spatial distribution of carbonate reservoirs well. It can reduce the problem of non-uniqueness and improve fracture prediction accuracy.

  8. Role of upwelling hydrothermal fluids in the development of alteration patterns at fast spreading ridges: Evidence from the sheeted dike complex at Pito Deep

    NASA Astrophysics Data System (ADS)

    Heft, Kerri L.; Gillis, Kathryn M.; Pollock, Megan A.; Karson, Jeffery A.; Klein, Emily M.

    2008-05-01

    Alteration of sheeted dikes exposed along submarine escarpments at the Pito Deep Rift (NE edge of the Easter microplate) provides constraints on the crustal component of axial hydrothermal systems at fast spreading mid-ocean ridges. Samples from vertical transects through the upper crust constrain the temporal and spatial scales of hydrothermal fluid flow and fluid-rock reaction. The dikes are relatively fresh (average extent of alteration is 27%), with the extent of alteration ranging from 0 to >80%. Alteration is heterogeneous on scales of tens to hundreds of meters and displays few systematic spatial trends. Background alteration is amphibole-dominated, with chlorite-rich dikes sporadically distributed throughout the dike complex, indicating that peak temperatures ranged from <300°C to >450°C and did not vary systematically with depth. Dikes locally show substantial metal mobility, with Zn and Cu depletion and Mn enrichment. Amphibole and chlorite fill fractures throughout the dike complex, whereas quartz-filled fractures and faults are only locally present. Regional variability in alteration characteristics is found on a scale of <1-2 km, illustrating the diversity of fluid-rock interaction that can be expected in fast spreading crust. We propose that much of the alteration in sheeted dike complexes develops within broad, hot upwelling zones, as the inferred conditions of alteration cannot be achieved in downwelling zones, particularly in the shallow dikes. Migration of circulating cells along ridge axes and local evolution of fluid compositions produce sections of the upper crust with a distinctive character of alteration, on a scale of <1-2 km and <5-20 ka.

  9. Use of complex hydraulic variables to predict the distribution and density of unionids in a side channel of the Upper Mississippi River

    USGS Publications Warehouse

    Steuer, J.J.; Newton, T.J.; Zigler, S.J.

    2008-01-01

    Previous attempts to predict the importance of abiotic and biotic factors to unionids in large rivers have been largely unsuccessful. Many simple physical habitat descriptors (e.g., current velocity, substrate particle size, and water depth) have limited ability to predict unionid density. However, more recent studies have found that complex hydraulic variables (e.g., shear velocity, boundary shear stress, and Reynolds number) may be more useful predictors of unionid density. We performed a retrospective analysis with unionid density, current velocity, and substrate particle size data from 1987 to 1988 in a 6-km reach of the Upper Mississippi River near Prairie du Chien, Wisconsin. We used these data to model simple and complex hydraulic variables under low and high flow conditions. We then used classification and regression tree analysis to examine the relationships between hydraulic variables and unionid density. We found that boundary Reynolds number, Froude number, boundary shear stress, and grain size were the best predictors of density. Models with complex hydraulic variables were a substantial improvement over previously published discriminant models and correctly classified 65-88% of the observations for the total mussel fauna and six species. These data suggest that unionid beds may be constrained by threshold limits at both ends of the flow regime. Under low flow, mussels may require minimum values of hydraulic variables (Re*, Fr) to transport nutrients, oxygen, and waste products. Under high flow, areas with relatively low boundary shear stress may provide a hydraulic refuge for mussels. Data on hydraulic preferences and identification of other conditions that constitute unionid habitat are needed to help restore and enhance habitats for unionids in rivers. © 2008 Springer Science+Business Media B.V.
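
The complex hydraulic variables named above follow from standard open-channel formulas. A sketch using textbook definitions (the paper's exact formulations and field inputs may differ; the cross-section values below are invented):

```python
import math

G = 9.81        # gravitational acceleration, m/s^2
NU = 1.0e-6     # kinematic viscosity of water, m^2/s

def hydraulic_variables(velocity, depth, slope, d84):
    """Shear velocity, boundary shear stress, boundary Reynolds number,
    and Froude number from simple field measurements (textbook sketch).
    `d84` is the 84th-percentile grain size in metres."""
    shear_velocity = math.sqrt(G * depth * slope)   # u* = sqrt(g h S)
    boundary_shear = 1000.0 * shear_velocity ** 2   # tau = rho u*^2
    re_star = shear_velocity * d84 / NU             # boundary Reynolds number
    froude = velocity / math.sqrt(G * depth)        # Froude number
    return {"u*": shear_velocity, "tau": boundary_shear,
            "Re*": re_star, "Fr": froude}

# Low-flow vs high-flow conditions at a hypothetical cross-section.
low = hydraulic_variables(velocity=0.2, depth=0.5, slope=1e-4, d84=0.01)
high = hydraulic_variables(velocity=0.9, depth=2.0, slope=1e-4, d84=0.01)
print(low["Fr"], high["tau"])
```

Computed over a grid of cells at both flow conditions, these values are the predictors that the classification and regression tree analysis would consume.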

  10. Understanding the complexity of the Lévy-walk nature of human mobility with a multi-scale cost∕benefit model.

    PubMed

    Scafetta, Nicola

    2011-12-01

    Probability distributions of human displacements have been fit with exponentially truncated Lévy flights or fat-tailed Pareto inverse power-law probability distributions. Thus, people usually stay within a given location (for example, the city of residence), but with a non-vanishing frequency they visit nearby or far locations too. Herein, we show that an important empirical distribution of human displacements (range: 1 to 1000 km) can be well fit by three consecutive Pareto distributions with simple integer exponents equal to 1, 2, and >3. These three exponents correspond to three displacement range zones of about 1 km ≲ Δr ≲ 10 km, 10 km ≲ Δr ≲ 300 km, and 300 km ≲ Δr ≲ 1000 km, respectively. These three zones can be geographically and physically well determined as displacements within a city, visits to nearby cities that may occur within one-day trips, and visits to far locations that may require multi-day trips. The incremental integer values of the three exponents can be explained with a three-scale mobility cost/benefit model for human displacements based on simple geometrical constraints. Essentially, people divide space into three major regions (close, medium, and far distances) and assume that travel benefits are randomly/uniformly distributed mostly within specific urban-like areas. The three displacement distribution zones appear to be characterized by an integer (1, 2, or >3) inverse power exponent because of the specific number (1, 2, or >3) of cost mechanisms (each of which is proportional to the displacement length). The distributions in the first two zones would be associated with Pareto distributions with exponent β = 1 and β = 2 because of simple geometrical statistical considerations, due to the a priori assumption that most benefits are sought in the urban area of the city of residence or in the urban area of specific nearby cities.
We also show, using independent records of human mobility, that the proposed model predicts the statistical properties of human mobility below 1 km ranges, where people just walk. In the latter case, the threshold between zone 1 and zone 2 may be around 100-200 m and may perhaps have been evolutionarily determined by the natural human high-resolution visual range, which characterizes an area of interest where the benefits are assumed to be randomly and uniformly distributed. This rich and suggestive interpretation of human mobility may characterize other complex random-walk phenomena that may also be described by an N-piece Pareto fit with increasing integer exponents. This study also suggests that distribution functions used to fit experimental probability distributions must be carefully chosen so as not to obscure the physics underlying a phenomenon.
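
The three-zone fit amounts to a piecewise Pareto density with continuity enforced at the zone boundaries. A sketch using the zones and integer exponents quoted above (the prefactors and normalization here are derived for illustration, not the paper's fitted coefficients):

```python
import math

def piecewise_pareto_pdf(breaks, exponents):
    """Build a continuous, normalized piecewise power-law density
    p(r) ∝ r^(-beta_i) on [breaks[i], breaks[i+1])."""
    # Prefactors chosen so adjacent pieces agree at each breakpoint.
    coeffs = [1.0]
    for i in range(1, len(exponents)):
        b = breaks[i]
        coeffs.append(coeffs[-1] * b ** (exponents[i] - exponents[i - 1]))
    # Normalize by the total integral over all pieces.
    total = 0.0
    for i, beta in enumerate(exponents):
        a, b = breaks[i], breaks[i + 1]
        if beta == 1.0:
            total += coeffs[i] * math.log(b / a)
        else:
            total += coeffs[i] * (a ** (1 - beta) - b ** (1 - beta)) / (beta - 1)
    def pdf(r):
        for i, beta in enumerate(exponents):
            if breaks[i] <= r < breaks[i + 1]:
                return coeffs[i] * r ** (-beta) / total
        return 0.0
    return pdf

# Zones (km) and integer exponents from the three-regime description above.
pdf = piecewise_pareto_pdf(breaks=[1, 10, 300, 1000], exponents=[1.0, 2.0, 3.0])
print(pdf(5), pdf(50), pdf(500))
```

Plotting `pdf` on log-log axes gives three straight segments of increasing slope, which is the visual signature of the three-zone fit.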

  11. Distributed energy-balance modeling of snow-cover evolution and melt in rugged terrain: Tobacco Root Mountains, Montana, USA

    USGS Publications Warehouse

    Letsinger, S.L.; Olyphant, G.A.

    2007-01-01

    A distributed energy-balance model was developed for simulating snowpack evolution and melt in rugged terrain. The model, which was applied to a 43 km² watershed in the Tobacco Root Mountains, Montana, USA, used measured ambient data from nearby weather stations to drive energy-balance calculations and to constrain the model of Liston and Sturm [Liston, G.E., Sturm, M., 1998. A snow-transport model for complex terrain. Journal of Glaciology 44 (148), 498-516] for calculating the initial snowpack thickness. Simulated initial snow-water equivalent ranged between 1 cm and 385 cm w.e. (water equivalent), with high values concentrated on east-facing slopes below tall summits. An interpreted satellite image of the snow-cover distribution on May 6, 1998, closely matched the simulated distribution, with the greatest discrepancy occurring in the floor of the main trunk valley. Model simulations indicated that snowmelt commenced early in the melt season, but rapid meltout of snow cover did not occur until after the average energy balance of the entire watershed became positive, about 45 days into the melt season. Meltout was fastest in the lower part of the watershed, where warmer temperatures and tree cover enhanced the energy income of the underlying snow. An interpreted satellite image of the snow-cover distribution on July 9, 1998 compared favorably with the simulated distribution, and melt curves for modeled canopy-covered cells mimicked the trends measured at nearby snow pillow stations. By the end of the simulation period (August 3), 28% of the watershed remained snow covered, most of it concentrated in the highest parts of the watershed where initially thick accumulations had been shaded by surrounding summits. The results of this study provide further demonstration of the critical role that topography plays in the timing and magnitude of snowmelt from high mountain watersheds. © 2006 Elsevier B.V. All rights reserved.
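
The per-cell melt accounting at the heart of such a model can be sketched with a minimal energy-to-melt conversion (no snow transport, refreezing, or albedo feedback; the two cells and their net energy balances below are invented):

```python
def simulate_melt(swe, net_energy, days):
    """Advance per-cell snow-water equivalent (mm) given each cell's mean
    net energy balance (W/m^2). Melt rate follows the standard conversion
    M = Q * 86400 / L_f, where a kg/m^2 of meltwater equals 1 mm w.e."""
    LF = 334000.0   # latent heat of fusion, J/kg
    out = list(swe)
    for _ in range(days):
        for i, q in enumerate(net_energy):
            if q > 0 and out[i] > 0:
                melt_mm = q * 86400.0 / LF   # mm w.e. per day
                out[i] = max(0.0, out[i] - melt_mm)
    return out

# Shaded high-elevation cell vs a warm, tree-covered valley cell.
swe0 = [3850.0, 150.0]    # mm w.e.; cf. the 385 cm w.e. maximum above
energy = [-5.0, 40.0]     # W/m^2 mean net balance
swe45 = simulate_melt(swe0, energy, days=45)
print(swe45)
```

The shaded cell retains its snowpack while the energy-rich valley cell melts out, reproducing in miniature the topographic contrast the study emphasizes.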

  12. Optimal control of first order distributed systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Johnson, T. L.

    1972-01-01

    The problem of characterizing optimal controls for a class of distributed-parameter systems is considered. The system dynamics are characterized mathematically by a finite number of coupled partial differential equations involving first-order time and space derivatives of the state variables, which are constrained at the boundary by a finite number of algebraic relations. Multiple control inputs, extending over the entire spatial region occupied by the system ("distributed controls"), are to be designed so that the response of the system is optimal. A major example involving boundary control of an unstable low-density plasma is developed from physical laws.

  13. Early Precambrian Carbonate and Evaporite Sediments: Constraints on Environmental and Biological Evolution

    NASA Technical Reports Server (NTRS)

    Grotzinger, John P.

    2002-01-01

    The work accomplished under NASA Grant NAG5-6722 was very successful. Our lab was able to document the occurrence and distribution of evaporite-to-carbonate transitions in several basins during Precambrian time, to help constrain the long-term chemical evolution of seawater.

  14. Starting and Stopping Spontaneous Family Conflicts.

    ERIC Educational Resources Information Center

    Vuchinich, Samuel

    1987-01-01

    Examined how 52 nondistressed families managed spontaneous verbal conflicts during family dinners. Found conflict initiation to be evenly distributed across family roles. Extension of conflict was constrained by constant probability of a next conflict move occurring. Most conflicts ended with no resolution. Mothers were most active in closing…

  15. SELECTING INDICATORS OF BIODIVERSITY FOR CONSERVATION PLANNING: IDENTIFYING THE MECHANISMS BEHIND INDICATOR GROUP PERFORMANCE

    EPA Science Inventory

    Most conservation planning is constrained by time and funding. In particular, the selection of areas to protect biodiversity must often be completed with limited data on species distributions. Consequently, different groups of species have been proposed as indicators or surroga...

  16. Efficient Calibration of Distributed Catchment Models Using Perceptual Understanding and Hydrologic Signatures

    NASA Astrophysics Data System (ADS)

    Hutton, C.; Wagener, T.; Freer, J. E.; Duffy, C.; Han, D.

    2015-12-01

    Distributed models offer the potential to resolve catchment systems in more detail, and therefore simulate the hydrological impacts of spatial changes in catchment forcing (e.g. landscape change). Such models may contain a large number of model parameters which are computationally expensive to calibrate. Even when calibration is possible, insufficient data can result in model parameter and structural equifinality. In order to help reduce the space of feasible models and supplement traditional outlet discharge calibration data, semi-quantitative information (e.g. knowledge of relative groundwater levels) may also be used to identify behavioural models by constraining spatially distributed predictions of states and fluxes. The challenge is to combine these different sources of information to identify a behavioural region of state-space, and efficiently search a large, complex parameter space to identify behavioural parameter sets that produce predictions falling within this behavioural region. Here we present a methodology to incorporate different sources of data to efficiently calibrate distributed catchment models. Metrics of model performance may be derived from multiple sources of data (e.g. perceptual understanding and measured or regionalised hydrologic signatures). For each metric, an interval or inequality is used to define the behaviour of the catchment system, accounting for data uncertainties. These intervals are then combined to produce a hyper-volume in state space. The calibration problem is then recast as a multi-objective optimisation problem, and the Borg MOEA is applied to first find, and then populate, the hyper-volume, thereby identifying acceptable model parameter sets. We apply the methodology to calibrate the PIHM model at Plynlimon, UK, by incorporating perceptual and hydrologic data into the calibration problem. Furthermore, we explore how to improve calibration efficiency through search initialisation from shorter model runs.
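    As a minimal illustration of the interval-based identification step described above, the sketch below rejection-samples a hypothetical one-dimensional parameter and keeps only the sets whose simulated signatures fall inside every behavioural interval; the study itself populates the hyper-volume with the Borg MOEA rather than random sampling, and all names and numbers here are illustrative.

```python
import random

def behavioural_sets(simulate, intervals, n_samples=1000, seed=0):
    """Keep parameter sets whose simulated signatures all fall inside
    their behavioural intervals (simple rejection sampling; the study
    itself searches the hyper-volume with the Borg MOEA)."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_samples):
        theta = rng.uniform(0.0, 1.0)       # hypothetical 1-D parameter
        signatures = simulate(theta)
        if all(lo <= s <= hi for s, (lo, hi) in zip(signatures, intervals)):
            kept.append(theta)
    return kept

# toy model: two hydrologic "signatures" derived from one parameter
simulate = lambda t: (2 * t, t ** 2)
kept = behavioural_sets(simulate, intervals=[(0.8, 1.2), (0.0, 0.5)])
# every retained theta satisfies both behavioural intervals
```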

  17. Distributed and Localized Deformation Along the Lebanese Restraining Bend from Geomorphic Observations and Modeling

    NASA Astrophysics Data System (ADS)

    Goren, L.; Castelltort, S.; Klinger, Y.

    2014-12-01

    The Dead Sea Fault System changes its orientation across Lebanon and forms a restraining bend. The oblique deformation along the Lebanese restraining bend is characterized by a complex suite of tectonic structures, among which the Yammouneh Fault (YF) is believed to be the main strand that relays deformation from the southern section to the northern section of the Dead Sea Fault System. However, uncertainties regarding slip rates and strain partitioning in Lebanon still prevail. Here, we use morphometric analysis together with analytical and numerical models to constrain rates and modes of distributed and localized deformation along the Lebanese restraining bend. The rivers that drain the western flank of Mount Lebanon show a consistent counterclockwise rotation with respect to an expected orogen-perpendicular orientation. Moreover, a pattern of divide disequilibrium between these rivers emerges from an application of the χ mapping technique, which aims at estimating the degree of geometrical and topological disequilibrium in river networks. These geometrical patterns are compatible with simulation results using a landscape evolution model, which imposes a distributed velocity field along a domain that represents the western flank of Mount Lebanon. We further develop an analytical model that relates the river orientation to a set of kinematic parameters representing a combined pure and simple shear strain field, and we find the parameters that best explain the present orientation of the western Lebanon rivers. Our results indicate that distributed deformation to the west of the YF takes as much as 30% of the relative Arabia-Sinai plate velocity since the late Miocene, and that the average slip rate along the YF during the same time interval has been 3.8-4.4 mm/yr. The theoretical model can further explain the rotation inferred from paleomagnetic measurements.

  18. Finite-time convergent recurrent neural network with a hard-limiting activation function for constrained optimization with piecewise-linear objective functions.

    PubMed

    Liu, Qingshan; Wang, Jun

    2011-04-01

    This paper presents a one-layer recurrent neural network for solving a class of constrained nonsmooth optimization problems with piecewise-linear objective functions. The proposed neural network is guaranteed to be globally convergent in finite time to the optimal solutions under a mild condition on a derived lower bound of a single gain parameter in the model. The number of neurons in the neural network is the same as the number of decision variables of the optimization problem. Compared with existing neural networks for optimization, the proposed neural network has a couple of salient features such as finite-time convergence and a low model complexity. Specific models for two important special cases, namely, linear programming and nonsmooth optimization, are also presented. In addition, applications to the shortest path problem and constrained least absolute deviation problem are discussed with simulation results to demonstrate the effectiveness and characteristics of the proposed neural network.
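    The paper's network targets problems such as constrained least absolute deviation (LAD). As a hedged stand-in (not the authors' finite-time recurrent network), the sketch below solves a tiny box-constrained LAD problem by projected subgradient descent:

```python
def lad_projected_subgradient(A, b, lo, hi, steps=2000, lr=0.01):
    """Minimize sum_i |A_i . x - b_i| subject to lo <= x_j <= hi using
    projected subgradient descent (a simple stand-in for the paper's
    finite-time recurrent network; not the authors' method)."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        g = [0.0] * n                        # subgradient of the L1 misfit
        for row, bi in zip(A, b):
            r = sum(a * xi for a, xi in zip(row, x)) - bi
            s = (r > 0) - (r < 0)            # sign of the residual
            for j in range(n):
                g[j] += s * row[j]
        # gradient step followed by projection onto the box constraints
        x = [min(hi, max(lo, xj - lr * gj)) for xj, gj in zip(x, g)]
    return x

# least absolute deviation fit of a constant, robust to the outlier 10.0
A = [[1.0]] * 5
b = [1.0, 1.1, 0.9, 1.0, 10.0]
x = lad_projected_subgradient(A, b, lo=-5.0, hi=5.0)
# x[0] settles near the median 1.0 rather than the mean 2.8
```

The L1 objective is exactly why the solution tracks the median: the subgradient counts residual signs rather than magnitudes, so a single outlier cannot drag the fit.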

  19. Magnetic exchange couplings from constrained density functional theory: an efficient approach utilizing analytic derivatives.

    PubMed

    Phillips, Jordan J; Peralta, Juan E

    2011-11-14

    We introduce a method for evaluating magnetic exchange couplings based on the constrained density functional theory (C-DFT) approach of Rudra, Wu, and Van Voorhis [J. Chem. Phys. 124, 024103 (2006)]. Our method shares the same physical principles as C-DFT but makes use of the fact that the electronic energy changes quadratically and bilinearly with respect to the constraints in the range of interest. This allows us to use coupled perturbed Kohn-Sham spin density functional theory to determine approximately the corrections to the energy of the different spin configurations and construct a priori the relevant energy-landscapes obtained by constrained spin density functional theory. We assess this methodology in a set of binuclear transition-metal complexes and show that it reproduces very closely the results of C-DFT. This demonstrates a proof-of-concept for this method as a potential tool for studying a number of other molecular phenomena. Additionally, routes to improving upon the limitations of this method are discussed. © 2011 American Institute of Physics

  20. The ALMA-PILS survey: 3D modeling of the envelope, disks and dust filament of IRAS 16293-2422

    NASA Astrophysics Data System (ADS)

    Jacobsen, S. K.; Jørgensen, J. K.; van der Wiel, M. H. D.; Calcutt, H.; Bourke, T. L.; Brinch, C.; Coutens, A.; Drozdovskaya, M. N.; Kristensen, L. E.; Müller, H. S. P.; Wampfler, S. F.

    2018-04-01

    Context. The Class 0 protostellar binary IRAS 16293-2422 is an interesting target for (sub)millimeter observations due to both the rich chemistry toward the two main components of the binary and its complex morphology. Its proximity to Earth allows the study of its physical and chemical structure on solar system scales using high angular resolution observations. Such data reveal a complex morphology that cannot be accounted for in traditional, spherical 1D models of the envelope. Aims: The purpose of this paper is to study the environment of the two components of the binary through 3D radiative transfer modeling and to compare with data from the Atacama Large Millimeter/submillimeter Array. Such comparisons can be used to constrain the protoplanetary disk structures, the luminosities of the two components of the binary and the chemistry of simple species. Methods: We present 13CO, C17O and C18O J = 3-2 observations from the ALMA Protostellar Interferometric Line Survey (PILS), together with a qualitative study of the dust and gas density distribution of IRAS 16293-2422. A 3D dust and gas model including disks and a dust filament between the two protostars is constructed which qualitatively reproduces the dust continuum and gas line emission. Results: Radiative transfer modeling in our sampled parameter space suggests that, while the disk around source A could not be constrained, the disk around source B has to be vertically extended. This puffed-up structure can be obtained with both a protoplanetary disk model with an unexpectedly high scale-height and with the density solution from an infalling, rotating collapse. Combined constraints on our 3D model, from observed dust continuum and CO isotopologue emission between the sources, corroborate that source A should be at least six times more luminous than source B.
We also demonstrate that the volume of high-temperature regions where complex organic molecules arise is sensitive to whether the total luminosity is in a single radiation source or distributed into two sources, affecting the interpretation of earlier chemical modeling efforts of the IRAS 16293-2422 hot corino which used a single-source approximation. Conclusions: Radiative transfer modeling of sources A and B, with the density solution of an infalling, rotating collapse or a protoplanetary disk model, can match the constraints for the disk-like emission around sources A and B from the observed dust continuum and CO isotopologue gas emission. If a protoplanetary disk model is used around source B, it has to have an unusually high scale-height in order to reach the dust continuum peak emission value, while fulfilling the other observational constraints. Our 3D model requires source A to be much more luminous than source B; LA ≈ 18 L⊙ and LB ≈ 3 L⊙.

  1. The clumped-isotope geochemistry of exhumed marbles from Naxos, Greece

    NASA Astrophysics Data System (ADS)

    Ryb, U.; Lloyd, M. K.; Stolper, D. A.; Eiler, J. M.

    2017-07-01

    Exhumation and accompanying retrograde metamorphism alter the compositions and textures of metamorphic rocks through deformation, mineral-mineral reactions, water-rock reactions, and diffusion-controlled intra- and inter-mineral atomic mobility. Here, we demonstrate that these processes are recorded in the clumped- and single-isotope (δ13C and δ18O) compositions of marbles, which can be used to constrain retrograde metamorphic histories. We collected 27 calcite and dolomite marbles along a transect from the rim to the center of the metamorphic core-complex of Naxos (Greece), and analyzed their carbonate single- and clumped-isotope compositions. The majority of Δ47 values of whole-rock samples are consistent with exhumation-controlled cooling of the metamorphic complex. However, the data also reveal that water-rock interaction, deformation-driven recrystallization, and thermal shock associated with hydrothermal alteration may considerably impact the overall distribution of Δ47 values. We analyzed specific carbonate fabrics influenced by deformation and fluid-rock reaction to study how these processes register in the carbonate clumped-isotope system. Δ47 values of domains drilled from a calcite marble show a bimodal distribution. Low Δ47 values correspond to an apparent temperature of 260 °C and are common in static fabrics; high Δ47 values correspond to an apparent temperature of 200 °C and are common in dynamically recrystallized fabrics. We suggest that the low Δ47 values reflect diffusion-controlled isotopic reordering during cooling, whereas high Δ47 values reflect isotopic reordering driven by dynamic recrystallization. We further studied the mechanism by which dynamic recrystallization may alter Δ47 values by controlled heating experiments.
Results show no significant difference between laboratory reaction rates in the static and dynamic fabrics, consistent with a mineral-extrinsic mechanism, in which slip along crystal planes was associated with atomic-scale isotopic reordering in the calcite lattice. An intrinsic mechanism (enhanced isotopic reordering rate in deformed minerals) is contraindicated by these experiments. We suggest that Δ47 values of dynamically recrystallized fabrics that form below the diffusion-controlled blocking-temperature for calcite constrain the temperature of deformation. We find that Δ47-based temperatures of static fabrics from Naxos marbles are ∼60-80 °C higher than commonly observed in slowly cooled metamorphic rocks, and would suggest cooling rates of ∼10⁵ °C Myr⁻¹. A similar thermal history is inferred for dolomite marbles from the core vicinity, which preserve apparent temperatures up to 200 °C higher than a typical blocking temperature (∼300 °C). This finding could be explained by a hydrothermal event driving a brief thermal pulse and locally resetting Δ47 values. Rapid cooling of the core-complex region is consistent with a compilation of published cooling ages and a new apatite U-Th/He age, associating the thermal event with the emplacement of a granodiorite pluton at ∼12 Ma.

  2. Visual data mining for quantized spatial data

    NASA Technical Reports Server (NTRS)

    Braverman, Amy; Kahn, Brian

    2004-01-01

    In previous papers we've shown how a well-known data compression algorithm called Entropy-Constrained Vector Quantization (ECVQ) can be modified to reduce the size and complexity of very large satellite data sets. In this paper, we discuss how to visualize and understand the content of such reduced data sets.
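    A minimal sketch of the underlying quantization idea (plain 1-D Lloyd/k-means, not the entropy-constrained variant, and with made-up data): the data set is reduced to a few representative levels plus occupancy counts.

```python
def lloyd_quantize(data, k=2, iters=20):
    """Minimal 1-D Lloyd (k-means) quantizer: reduces `data` to k
    representative levels plus occupancy counts. ECVQ additionally
    penalizes codeword entropy; this sketch shows only the basic
    quantization step."""
    codebook = sorted(data)[::max(1, len(data) // k)][:k]
    cells = [[] for _ in codebook]
    for _ in range(iters):
        cells = [[] for _ in codebook]
        for v in data:
            # nearest codeword by squared distance
            i = min(range(len(codebook)), key=lambda j: (v - codebook[j]) ** 2)
            cells[i].append(v)
        # move each codeword to its cell centroid
        codebook = [sum(c) / len(c) if c else codebook[i]
                    for i, c in enumerate(cells)]
    counts = [len(c) for c in cells]
    return codebook, counts

levels, counts = lloyd_quantize([0.1, 0.2, 0.15, 5.0, 5.1, 4.9], k=2)
# levels -> two representative values near 0.15 and 5.0; counts -> [3, 3]
```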

  3. Economic Analysis of Biological Invasions in Forests

    Treesearch

    Tomas P. Holmes; Julian Aukema; Jeffrey Englin; Robert G. Haight; Kent Kovacs; Brian Leung

    2014-01-01

    Biological invasions of native forests by nonnative pests result from complex stochastic processes that are difficult to predict. Although economic optimization models describe efficient controls across the stages of an invasion, the ability to calibrate such models is constrained by lack of information on pest population dynamics and consequent economic damages. Here...

  4. Modeling the Components of an Economy as a Complex Adaptive System

    DTIC Science & Technology

    ...principles of constrained optimization and fails to see economic variables as part of an interconnected network. While tools for forecasting economic...data sets such as the stock market. This research portrays the stock market as one component of a networked system of economic variables, with the...

  5. Development of an Environmental Virtual Field Laboratory

    ERIC Educational Resources Information Center

    Ramasundaram, V.; Grunwald, S.; Mangeot, A.; Comerford, N. B.; Bliss, C. M.

    2005-01-01

    Laboratory exercises, field observations and field trips are a fundamental part of many earth science and environmental science courses. Field observations and field trips can be constrained because of distance, time, expense, scale, safety, or complexity of real-world environments. Our objectives were to develop an environmental virtual field…

  6. Children's and Adolescents' Thoughts on Pollution: Cognitive Abilities Required to Understand Environmental Systems

    ERIC Educational Resources Information Center

    Rodríguez, Manuel; Kohen, Raquel; Delval, Juan

    2015-01-01

    Pollution phenomena are complex systems in which different parts are integrated by means of causal and temporal relationships. To understand pollution, children must develop some cognitive abilities related to system thinking and temporal and causal inferential reasoning. These cognitive abilities constrain and guide how children understand…

  7. Comprehensive Benefit Platforms to Simplify Complex HR Processes

    ERIC Educational Resources Information Center

    Ehrsam, Hank

    2012-01-01

    Paying for employee turnover costs, data storage, and multiple layers of benefits can be difficult for fiscally constrained institutions, especially as budget cuts and finance-limiting legislation abound in school districts across the country. Many traditional paper-based systems have been replaced with automated, software-based services, helping…

  8. Distribution-Agnostic Stochastic Optimal Power Flow for Distribution Grids: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, Kyri; Dall'Anese, Emiliano; Summers, Tyler

    2016-09-01

    This paper outlines a data-driven, distributionally robust approach to solve chance-constrained AC optimal power flow problems in distribution networks. Uncertain forecasts for loads and power generated by photovoltaic (PV) systems are considered, with the goal of minimizing PV curtailment while meeting power flow and voltage regulation constraints. A data-driven approach is utilized to develop a distributionally robust conservative convex approximation of the chance-constraints; particularly, the mean and covariance matrix of the forecast errors are updated online, and leveraged to enforce voltage regulation with predetermined probability via Chebyshev-based bounds. By combining an accurate linear approximation of the AC power flow equations with the distributionally robust chance constraint reformulation, the resulting optimization problem becomes convex and computationally tractable.
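    The Chebyshev-based tightening can be sketched as follows: for a forecast error with standard deviation σ, the two-sided Chebyshev inequality P(|X − μ| ≥ kσ) ≤ 1/k² caps the violation probability at δ when k = 1/√δ, i.e. the constraint is tightened by σ/√δ regardless of the error distribution (a distribution-agnostic sketch; the variable names are illustrative).

```python
import math

def chebyshev_margin(sigma, delta):
    """Two-sided Chebyshev bound: P(|X - mu| >= k*sigma) <= 1/k^2.
    Choosing k = 1/sqrt(delta) caps the violation probability at delta,
    so the constraint is tightened by sigma/sqrt(delta)."""
    return sigma / math.sqrt(delta)

# illustrative numbers: voltage forecast-error std 0.01 p.u., 5% violation target
margin = chebyshev_margin(0.01, 0.05)
# a nominal limit v_max would then be enforced as v <= v_max - margin
```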

  9. OCO-2 Column Carbon Dioxide and Biometric Data Jointly Constrain Parameterization and Projection of a Global Land Model

    NASA Astrophysics Data System (ADS)

    Shi, Z.; Crowell, S.; Luo, Y.; Rayner, P. J.; Moore, B., III

    2015-12-01

    Uncertainty in predicted carbon-climate feedback largely stems from poor parameterization of global land models. However, calibration of global land models with observations has been extremely challenging for at least two reasons. First, we lack global data products from systematic measurements of land surface processes. Second, computational demand is insurmountable for estimation of model parameters due to the complexity of global land models. In this project, we will use OCO-2 retrievals of dry air mole fraction XCO2 and solar-induced fluorescence (SIF) to independently constrain estimation of net ecosystem exchange (NEE) and gross primary production (GPP). The constrained NEE and GPP will be combined with data products of global standing biomass, soil organic carbon and soil respiration to improve the community land model version 4.5 (CLM4.5). Specifically, we will first develop a high-fidelity emulator of CLM4.5 according to the matrix representation of the terrestrial carbon cycle. It has been shown that the emulator fully represents the original model and can be effectively used for data assimilation to constrain parameter estimation. We will focus on calibrating the key model parameters (e.g., maximum carboxylation rate, turnover time and transfer coefficients of soil carbon pools, and temperature sensitivity of respiration) for the carbon cycle. The Bayesian Markov chain Monte Carlo (MCMC) method will be used to assimilate the global databases into the high-fidelity emulator to constrain the model parameters, which will then be incorporated back into the original CLM4.5. The calibrated CLM4.5 will be used to make scenario-based projections. In addition, we will conduct observing system simulation experiments (OSSEs) to evaluate how the sampling frequency and length could affect model constraint and prediction.
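    The MCMC assimilation step can be illustrated with a minimal random-walk Metropolis sampler calibrating a toy one-parameter model against synthetic observations (a generic sketch, not the emulator or CLM4.5 itself; all names are illustrative).

```python
import math, random

def log_likelihood(theta, obs, model, sigma=1.0):
    # Gaussian misfit between model output and observations
    return -sum((model(theta, x) - y) ** 2 for x, y in obs) / (2 * sigma ** 2)

def metropolis(obs, model, theta0, n_iter=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for a single parameter."""
    rng = random.Random(seed)
    theta = theta0
    ll = log_likelihood(theta, obs, model)
    chain = []
    for _ in range(n_iter):
        proposal = theta + rng.gauss(0.0, step)
        ll_prop = log_likelihood(proposal, obs, model)
        if math.log(rng.random()) < ll_prop - ll:    # accept/reject
            theta, ll = proposal, ll_prop
        chain.append(theta)
    return chain

# toy "land model": y = theta * x with true theta = 2
obs = [(x, 2.0 * x) for x in range(1, 6)]
chain = metropolis(obs, lambda t, x: t * x, theta0=0.0)
posterior_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The same accept/reject logic scales to many parameters; the emulator mentioned above matters because each likelihood evaluation requires a model run.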

  10. Glassy behaviour in simple kinetically constrained models: topological networks, lattice analogues and annihilation-diffusion

    NASA Astrophysics Data System (ADS)

    Sherrington, David; Davison, Lexie; Buhot, Arnaud; Garrahan, Juan P.

    2002-02-01

    We report a study of a series of simple model systems with only non-interacting Hamiltonians, and hence simple equilibrium thermodynamics, but with constrained dynamics of a type initially suggested by foams and idealized covalent glasses. We demonstrate that macroscopic dynamical features characteristic of real and more complex model glasses, such as two-time decays in energy and auto-correlation functions, arise from the dynamics and we explain them qualitatively and quantitatively in terms of annihilation-diffusion concepts and theory. The comparison is with strong glasses. We also consider fluctuation-dissipation relations and demonstrate subtleties of interpretation. We find no FDT breakdown when the correct normalization is chosen.

  11. Complexity, fractal dynamics and determinism in treadmill ambulation: Implications for clinical biomechanists.

    PubMed

    Hollman, John H; Watkins, Molly K; Imhoff, Angela C; Braun, Carly E; Akervik, Kristen A; Ness, Debra K

    2016-08-01

    Reduced inter-stride complexity during ambulation may represent a pathologic state. Evidence is emerging that treadmill training for rehabilitative purposes may constrain the locomotor system and alter gait dynamics in a way that mimics pathological states. The purpose of this study was to examine the dynamical system components of gait complexity, fractal dynamics and determinism during treadmill ambulation. Twenty healthy participants aged 23.8 (1.2) years walked at preferred walking speeds for 6 min on a motorized treadmill and overground while wearing APDM 6 Opal inertial monitors. Stride times, stride lengths and peak sagittal plane trunk velocities were measured. Mean values and estimates of complexity, fractal dynamics and determinism were calculated for each parameter. Data were compared between overground and treadmill walking conditions. Mean values for each gait parameter were statistically equivalent between overground and treadmill ambulation (P>0.05). Through nonlinear analyses, however, we found that complexity in stride time signals (P<0.001), and long-range correlations in stride time and stride length signals (P=0.005 and P=0.024, respectively), were reduced on the treadmill. Treadmill ambulation induces more predictable inter-stride time dynamics and constrains fluctuations in stride times and stride lengths, which may alter feedback from destabilizing perturbations normally experienced by the locomotor control system during overground ambulation. Treadmill ambulation, therefore, may provide less opportunity for experiencing the adaptability necessary to successfully ambulate overground. Investigators and clinicians should be aware that treadmill ambulation will alter dynamic gait characteristics. Copyright © 2016 Elsevier Ltd. All rights reserved.
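    Long-range correlations in stride-time series of this kind are commonly quantified with detrended fluctuation analysis (DFA). The following is a minimal self-contained DFA-1 sketch on synthetic data, not the authors' exact pipeline; the scale choices and series here are illustrative.

```python
import math, random

def dfa_alpha(x, scales=(4, 8, 16, 32)):
    """Minimal detrended fluctuation analysis (DFA-1). Returns the
    scaling exponent alpha: ~0.5 for an uncorrelated series, >0.5
    when long-range correlations are present."""
    mean = sum(x) / len(x)
    prof, run = [], 0.0
    for v in x:
        run += v - mean
        prof.append(run)                     # integrated profile
    log_n, log_f = [], []
    for n in scales:
        m = len(prof) // n
        t = list(range(n))
        tb = sum(t) / n
        tvar = sum(ti * ti for ti in t) - n * tb * tb
        f2 = 0.0
        for i in range(m):
            seg = prof[i * n:(i + 1) * n]
            sb = sum(seg) / n                # least-squares linear detrend
            beta = (sum(ti * si for ti, si in zip(t, seg)) - n * tb * sb) / tvar
            a0 = sb - beta * tb
            f2 += sum((si - a0 - beta * ti) ** 2 for ti, si in zip(t, seg)) / n
        log_n.append(math.log(n))
        log_f.append(0.5 * math.log(f2 / m))
    xb, yb = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    # slope of log F(n) vs log n
    return (sum((a - xb) * (b - yb) for a, b in zip(log_n, log_f))
            / sum((a - xb) ** 2 for a in log_n))

rng = random.Random(1)
noise = [rng.gauss(0, 1) for _ in range(1024)]   # uncorrelated "stride times"
walk, run = [], 0.0
for v in noise:
    run += v
    walk.append(run)                             # strongly persistent signal
a_noise = dfa_alpha(noise)
a_walk = dfa_alpha(walk)
```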

  12. A fuzzy chance-constrained programming model with type 1 and type 2 fuzzy sets for solid waste management under uncertainty

    NASA Astrophysics Data System (ADS)

    Ma, Xiaolin; Ma, Chi; Wan, Zhifang; Wang, Kewei

    2017-06-01

    Effective management of municipal solid waste (MSW) is critical for urban planning and development. This study aims to develop an integrated type 1 and type 2 fuzzy sets chance-constrained programming (ITFCCP) model for tackling regional MSW management problem under a fuzzy environment, where waste generation amounts are supposed to be type 2 fuzzy variables and treated capacities of facilities are assumed to be type 1 fuzzy variables. The evaluation and expression of uncertainty overcome the drawbacks in describing fuzzy possibility distributions as oversimplified forms. The fuzzy constraints are converted to their crisp equivalents through chance-constrained programming under the same or different confidence levels. Regional waste management of the City of Dalian, China, was used as a case study for demonstration. The solutions under various confidence levels reflect the trade-off between system economy and reliability. It is concluded that the ITFCCP model is capable of helping decision makers to generate reasonable waste-allocation alternatives under uncertainties.
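    The conversion of a chance constraint into its crisp equivalent can be illustrated in the simplest probabilistic setting, assuming a normal (rather than fuzzy) waste-generation amount; the type-2 fuzzy machinery of the paper is beyond this sketch, and the numbers are invented.

```python
from statistics import NormalDist

def crisp_capacity(mu, sigma, alpha):
    """Crisp equivalent of the chance constraint
    P(waste generated <= capacity) >= alpha, here assuming a normal
    waste-generation amount for illustration."""
    return mu + NormalDist().inv_cdf(alpha) * sigma

# illustrative numbers: mean 100 t/day, std 10 t/day
c90 = crisp_capacity(mu=100.0, sigma=10.0, alpha=0.90)
c99 = crisp_capacity(mu=100.0, sigma=10.0, alpha=0.99)
# higher confidence demands more capacity: the reliability/economy trade-off
```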

  14. Determinants of pulmonary blood flow distribution.

    PubMed

    Glenny, Robb W; Robertson, H Thomas

    2011-01-01

    The primary function of the pulmonary circulation is to deliver blood to the alveolar capillaries to exchange gases. Distributing blood over a vast surface area facilitates gas exchange, yet the pulmonary vascular tree must be constrained to fit within the thoracic cavity. In addition, pressures must remain low within the circulatory system to protect the thin alveolar capillary membranes that allow efficient gas exchange. The pulmonary circulation is engineered for these unique requirements and in turn these special attributes affect the spatial distribution of blood flow. As the largest organ in the body, the physical characteristics of the lung vary regionally, influencing the spatial distribution on large-, moderate-, and small-scale levels. © 2011 American Physiological Society.

  15. Genes under weaker stabilizing selection increase network evolvability and rapid regulatory adaptation to an environmental shift.

    PubMed

    Laarits, T; Bordalo, P; Lemos, B

    2016-08-01

    Regulatory networks play a central role in the modulation of gene expression, the control of cellular differentiation, and the emergence of complex phenotypes. Regulatory networks could constrain or facilitate evolutionary adaptation in gene expression levels. Here, we model the adaptation of regulatory networks and gene expression levels to a shift in the environment that alters the optimal expression level of a single gene. Our analyses show signatures of natural selection on regulatory networks that both constrain and facilitate rapid evolution of gene expression level towards new optima. The analyses are interpreted from the standpoint of neutral expectations and illustrate the challenge to making inferences about network adaptation. Furthermore, we examine the consequence of variable stabilizing selection across genes on the strength and direction of interactions in regulatory networks and in their subsequent adaptation. We observe that directional selection on a highly constrained gene previously under strong stabilizing selection was more efficient when the gene was embedded within a network of partners under relaxed stabilizing selection pressure. The observation leads to the expectation that evolutionarily resilient regulatory networks will contain optimal ratios of genes whose expression is under weak and strong stabilizing selection. Altogether, our results suggest that the variable strengths of stabilizing selection across genes within regulatory networks might itself contribute to the long-term adaptation of complex phenotypes. © 2016 European Society For Evolutionary Biology. Journal of Evolutionary Biology © 2016 European Society For Evolutionary Biology.

  16. [Pb(H2O)]2+ and [Pb(OH)]+: four-component density functional theory calculations, correlated scalar relativistic constrained-space orbital variation energy decompositions, and topological analysis.

    PubMed

    Gourlaouen, Christophe; Piquemal, Jean-Philip; Parisel, Olivier

    2006-05-07

    Within the scope of studying the molecular implications of the Pb2+ cation in environmental and polluting processes, this paper reports Hartree-Fock and density functional theory (B3LYP) four-component relativistic calculations using an all-electron basis set applied to [Pb(H2O)]2+ and [Pb(OH)]+, two complexes expected to be found in the terrestrial atmosphere. It is shown that full-relativistic calculations validate the use of scalar relativistic approaches within the framework of density functional theory. [Pb(H2O)]2+ is found C2v at any level of calculations whereas [Pb(OH)]+ can be found bent or linear depending on the computational methodology used. When Cs is found, the barrier to inversion through the C∞v structure is very low, and can be overcome at high enough temperature, making the molecule floppy. In order to get a better understanding of the bonding occurring between the Pb2+ cation and the H2O and OH- ligands, natural bond orbital and atoms-in-molecule calculations have been performed. These approaches are supplemented by a topological analysis of the electron localization function. Finally, the description of these complexes is refined using constrained-space orbital variation complexation energy decompositions.

  17. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    NASA Astrophysics Data System (ADS)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    Portfolio rebalancing problem deals with resetting the proportion of different assets in a portfolio with respect to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, WEN strategy with logical procedures is employed to find the initial proportion of investment in portfolio of assets and also rebalance them after certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The result obtained using WEN is compared with the only existing counterpart named Hopfield evolutionary network (HEN) strategy and also verifies that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over HEN strategy.

  18. Bringing the Budget Back into Academic Work Allocation Models: A Management Perspective

    ERIC Educational Resources Information Center

    Robertson, Michael; Germov, John

    2015-01-01

    Issues surrounding increasingly constrained resources and reducing levels of sector-based funding require consideration of a different Academic Work Allocation Model (AWAM) approach. Evidence from the literature indicates that an effective work allocation model is founded on the principles of equity and transparency in the distribution and…

  19. Constraining the dark energy equation of state using Bayes theorem and the Kullback–Leibler divergence

    DOE PAGES

    Hee, S.; Vázquez, J. A.; Handley, W. J.; ...

    2016-12-01

    Data-driven model-independent reconstructions of the dark energy equation of state w(z) are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-α data. These reconstructions identify the w(z) behaviour supported by the data and show a bifurcation of the equation of state posterior in the range 1.5 < z < 3. Although the concordance ΛCDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as ‘phantom dark energy’) is identified within the 1.5σ confidence intervals of the posterior distribution. In order to identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback–Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide much more constraining power in comparison to the Lyman-α datasets. Furthermore, SNIa and BAO constrain most strongly around the redshift range 0.1-0.5, whilst the Lyman-α data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular dataset, and note that the ΛCDM model was favoured at more than 2 log-units in Bayes factors over all the models tested despite the weakly preferred w(z) structure in the data.
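The information gain quantified by the Kullback–Leibler divergence can be illustrated with the closed-form expression for one-dimensional Gaussians: a dataset that narrows the posterior more, relative to the same prior, yields a larger divergence. The numbers below are hypothetical, not the paper's:

```python
import numpy as np

def gaussian_kl(mu_post, sig_post, mu_prior, sig_prior):
    """D_KL(posterior || prior) in nats for 1-D Gaussians (closed form)."""
    return (np.log(sig_prior / sig_post)
            + (sig_post**2 + (mu_post - mu_prior)**2) / (2.0 * sig_prior**2)
            - 0.5)

# Hypothetical numbers: a broad prior on one w(z) node, and posteriors after
# adding two different datasets. The more constraining dataset yields the
# larger information gain.
prior = (-1.0, 2.0)
kl_strong = gaussian_kl(-1.0, 0.2, *prior)   # e.g. a SNIa+BAO-like posterior
kl_weak = gaussian_kl(-1.0, 1.5, *prior)     # e.g. a Lyman-alpha-like posterior
```

In the paper the divergence is computed over the full multi-dimensional posterior; the 1-D Gaussian case only conveys the intuition.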

  1. Constraining the Mechanism of D" Anisotropy: Diversity of Observation Types Required

    NASA Astrophysics Data System (ADS)

    Creasy, N.; Pisconti, A.; Long, M. D.; Thomas, C.

    2017-12-01

    A variety of different mechanisms have been proposed as explanations for seismic anisotropy at the base of the mantle, including crystallographic preferred orientation of various minerals (bridgmanite, post-perovskite, and ferropericlase) and shape preferred orientation of elastically distinct materials such as partial melt. Investigations of the mechanism for D" anisotropy are usually ambiguous, as seismic observations rarely (if ever) uniquely constrain a mechanism. Observations of shear wave splitting and polarities of SdS and PdP reflections off the D" discontinuity are among our best tools for probing D" anisotropy; however, typical data sets cannot constrain a unique scenario suggested by the mineral physics literature. In this work, we determine what types of body wave observations are required to uniquely constrain a mechanism for D" anisotropy. We test multiple possible models based on both single-crystal and poly-phase elastic tensors provided by mineral physics studies. We predict shear wave splitting parameters for SKS, SKKS, and ScS phases and reflection polarities off the D" interface for a range of possible propagation directions. We run a series of tests that create synthetic data sets by random selection over multiple iterations, controlling the total number of measurements, the azimuthal distribution, and the type of phases. We treat each randomly drawn synthetic dataset with the same methodology as in Ford et al. (2015) to determine the possible mechanism(s), carrying out a grid search over all possible elastic tensors and orientations to determine which are consistent with the synthetic data. We find it is difficult to uniquely constrain the starting model with a realistic number of seismic anisotropy measurements with only one measurement technique or phase type. 
However, having a mix of SKS, SKKS, and ScS measurements, or a mix of shear wave splitting and reflection polarity measurements, dramatically increases the probability of uniquely constraining the starting model. We also explore what types of datasets are needed to uniquely constrain the orientation(s) of anisotropic symmetry if the mechanism is assumed.

  2. Interpretation of Flow Logs from Nevada Test Site Boreholes to Estimate Hydraulic Conductivity Using Numerical Simulations Constrained by Single-Well Aquifer Tests

    USGS Publications Warehouse

    Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.

    2010-01-01

    Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. 
The higher water-transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivities: 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.

  3. "Choice Set" for health behavior in choice-constrained settings to frame research and inform policy: examples of food consumption, obesity and food security.

    PubMed

    Dover, Robert V H; Lambert, Estelle V

    2016-03-16

    Using the nexus between food consumption, food security and obesity, this paper addresses the complexity of health behavior decision-making moments that reflect relational social dynamics in context-specific dialogues, often in choice-constrained conditions. A pragmatic review of literature regarding social determinants of health in relation to food consumption, food security and obesity was used to advance this theoretical model. We suggest that health choice, such as food consumption, is based on more than the capacity and volition of individuals to make "healthy" choices, but is dialogic and adaptive. In terms of food consumption, there will always be choice-constrained conditions, along a continuum representing factors over which the individual has little or no control, to those for which they have greater agency. These range from food store geographies and inventories and food availability, logistical considerations such as transportation, food distribution, the structure of equity in food systems, state and non-government food and nutrition programs, to factors where the individual exercises a greater degree of autonomy, such as sociocultural foodways, family and neighborhood shopping strategies, and personal and family food preferences. At any given food decision-making moment, many factors of the continuum are present consciously or unconsciously when the individual makes a decision. These health behavior decision-making moments are mutable, whether from an individual perspective, or within a broader social or policy context. We review the construct of "choice set", the confluence of factors that are temporally weighted by the differentiated and relationally-contextualized importance of certain factors over others in that moment. The choice transition represents an essential shift of the choice set based on the conscious and unconscious weighting of accumulated evidence, such that people can project certain outcomes. 
Policies and interventions should avoid dichotomies of "good and bad" food choices or health behaviors, but focus on those issues that contribute to the weightedness of factors influencing food choice behavior at a given decision-making moment and within a given choice set.

  4. Statistical mechanics of budget-constrained auctions

    NASA Astrophysics Data System (ADS)

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-07-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being in the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). On the basis of the cavity method of statistical mechanics, we introduce a message-passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution, and we derive from its properties the phase diagram of the problem. As the control parameter (average value of the budgets) is varied, we find two phase transitions delimiting a region in which long-range correlations arise.
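For orientation, the sketch below implements a simple greedy baseline for a toy offline AdWords instance: each query is assigned to the highest-bidding advertiser whose remaining budget can cover the bid. The abstract's cavity-method message-passing algorithm is considerably more sophisticated; the instance and rule here are purely illustrative:

```python
def greedy_adwords(bids, budgets):
    """Greedy assignment for a budget-constrained auction.

    bids[q][a] is the bid of advertiser a on query q; budgets lists each
    advertiser's total budget. Returns (total revenue, assignment list).
    """
    remaining = list(budgets)
    revenue = 0.0
    assignment = []
    for q_bids in bids:
        # Advertisers that can still pay their (positive) bid for this query.
        feasible = [(b, a) for a, b in enumerate(q_bids)
                    if 0 < b <= remaining[a]]
        if not feasible:
            assignment.append(None)       # query goes unassigned
            continue
        b, a = max(feasible)              # take the highest feasible bid
        remaining[a] -= b
        revenue += b
        assignment.append(a)
    return revenue, assignment

# Toy instance: two advertisers, three queries.
bids = [[2.0, 1.0], [2.0, 1.0], [0.0, 1.0]]
revenue, assignment = greedy_adwords(bids, budgets=[2.0, 1.0])
```

Greedy rules like this can be far from optimal on adversarial instances, which is precisely why global methods such as message passing are of interest.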

  5. Topographic Response to the Yakutat Block Collision

    NASA Technical Reports Server (NTRS)

    Stock, Joann M.

    2000-01-01

    The principal objective of this grant and this research was to investigate the topographic development of an active glaciated orogenic belt in southern Alaska as that development relates to patterns of erosion and crustal deformation. A specific objective of the research was to investigate feedbacks between mountain building, orographic effects on climate, and patterns of exhumation and rock uplift. To that end, an orogen-scale analysis of topography was conducted with the aid of digital elevation models, magnitudes and patterns of crustal deformation were compiled from existing literature, present and past climate patterns were constrained using the modern and past distribution of glaciers, and styles, magnitudes, and extent of erosion were constrained with observations from the 1998 field season.

  6. Pulling helices inside bacteria: imperfect helices and rings

    NASA Astrophysics Data System (ADS)

    Rutenberg, Andrew; Allard, Jun

    2009-03-01

    We study steady-state configurations of intrinsically-straight elastic filaments constrained within rod-shaped bacteria that have applied forces distributed along their length. Perfect steady-state helices result from axial or azimuthal forces applied at filament ends, however azimuthal forces are required for the small pitches observed for MreB filaments within bacteria. Helix-like configurations can result from distributed forces, including co-existence between rings and imperfect helices. Levels of expression and/or bundling of the polymeric protein could mediate this co-existence.

  7. Pulling Helices inside Bacteria: Imperfect Helices and Rings

    NASA Astrophysics Data System (ADS)

    Allard, Jun F.; Rutenberg, Andrew D.

    2009-04-01

    We study steady-state configurations of intrinsically-straight elastic filaments constrained within rod-shaped bacteria that have applied forces distributed along their length. Perfect steady-state helices result from axial or azimuthal forces applied at filament ends, however azimuthal forces are required for the small pitches observed for MreB filaments within bacteria. Helix-like configurations can result from distributed forces, including coexistence between rings and imperfect helices. Levels of expression and/or bundling of the polymeric protein could mediate this coexistence.

  8. Proton Straggling in Thick Silicon Detectors

    NASA Technical Reports Server (NTRS)

    Selesnick, R. S.; Baker, D. N.; Kanekal, S. G.

    2017-01-01

    Straggling functions for protons in thick silicon radiation detectors are computed by Monte Carlo simulation. Mean energy loss is constrained by the silicon stopping power, providing higher straggling at low energy and probabilities for stopping within the detector volume. By matching the first four moments of simulated energy-loss distributions, straggling functions are approximated by a log-normal distribution that is accurate for Vavilov κ ≥ 0.3. They are verified by comparison to experimental proton data from a charged particle telescope.
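A two-moment version of the moment-matching idea can be sketched as follows. The abstract matches four moments; the closed-form mean/variance match below is a simplified stand-in, with gamma-distributed samples playing the role of simulated energy losses:

```python
import numpy as np

def lognormal_from_moments(mean, var):
    """Log-normal (mu, sigma) whose mean and variance match the given moments.
    (A four-moment match, as in the abstract, would also fit skewness and
    kurtosis; this two-moment version is only a sketch.)"""
    sigma2 = np.log(1.0 + var / mean**2)
    mu = np.log(mean) - 0.5 * sigma2
    return mu, np.sqrt(sigma2)

# Hypothetical "simulated" energy losses (arbitrary units).
rng = np.random.default_rng(1)
dE = rng.gamma(shape=9.0, scale=0.5, size=100_000)
mu, sigma = lognormal_from_moments(dE.mean(), dE.var())
fit = rng.lognormal(mu, sigma, size=100_000)   # moment-matched approximation
```

By construction the fitted log-normal reproduces the sample mean and variance exactly; how well it tracks the tails depends on how log-normal-like the true straggling distribution is.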

  9. Chempy: A flexible chemical evolution model for abundance fitting. Do the Sun's abundances alone constrain chemical evolution models?

    NASA Astrophysics Data System (ADS)

    Rybizki, Jan; Just, Andreas; Rix, Hans-Walter

    2017-09-01

    Elemental abundances of stars are the result of the complex enrichment history of their galaxy. Interpretation of observed abundances requires flexible modeling tools to explore and quantify the information about Galactic chemical evolution (GCE) stored in such data. Here we present Chempy, a newly developed code for GCE modeling, representing a parametrized open one-zone model within a Bayesian framework. A Chempy model is specified by a set of five to ten parameters that describe the effective galaxy evolution along with the stellar and star-formation physics: for example, the star-formation history (SFH), the feedback efficiency, the stellar initial mass function (IMF), and the incidence of supernova of type Ia (SN Ia). Unlike established approaches, Chempy can sample the posterior probability distribution in the full model parameter space and test data-model matches for different nucleosynthetic yield sets. It is essentially a chemical evolution fitting tool. We straightforwardly extend Chempy to a multi-zone scheme. As an illustrative application, we show that interesting parameter constraints result from only the ages and elemental abundances of the Sun, Arcturus, and the present-day interstellar medium (ISM). For the first time, we use such information to infer the IMF parameter via GCE modeling, where we properly marginalize over nuisance parameters and account for different yield sets. We find that 11.6 (+2.1/−1.6)% of the IMF explodes as core-collapse supernova (CC-SN), compatible with Salpeter (1955, ApJ, 121, 161). We also constrain the incidence of SN Ia per 10³ M⊙ to 0.5-1.4. At the same time, this Chempy application shows persistent discrepancies between predicted and observed abundances for some elements, irrespective of the chosen yield set. These cannot be remedied by any variations of Chempy's parameters and could be an indication of missing nucleosynthetic channels. 
Chempy could be a powerful tool to confront predictions from stellar nucleosynthesis with far more complex abundance data sets and to refine the physical processes governing the chemical evolution of stellar systems.
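The Bayesian sampling approach can be illustrated with a minimal Metropolis-Hastings sampler fitting a single parameter to toy "abundance" data. The model, data, prior, and parameter below are hypothetical stand-ins, not Chempy's actual physics:

```python
import numpy as np

def log_posterior(theta, obs, obs_err, model):
    """Flat prior on (0, 1) plus a Gaussian data likelihood."""
    if not (0.0 < theta < 1.0):
        return -np.inf
    resid = (obs - model(theta)) / obs_err
    return -0.5 * np.sum(resid**2)

def metropolis(logp, theta0, steps, prop_scale, rng):
    """Random-walk Metropolis sampler for a scalar parameter."""
    chain = [theta0]
    lp = logp(theta0)
    for _ in range(steps):
        prop = chain[-1] + rng.normal(0.0, prop_scale)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject
            chain.append(prop)
            lp = lp_prop
        else:
            chain.append(chain[-1])
    return np.array(chain)

# Toy "abundances" generated from a linear stand-in model with true theta = 0.3.
rng = np.random.default_rng(2)
model = lambda theta: theta * np.array([1.0, 2.0, 3.0])
obs_err = np.array([0.05, 0.05, 0.05])
obs = model(0.3) + rng.normal(0.0, obs_err)
chain = metropolis(lambda t: log_posterior(t, obs, obs_err, model),
                   0.5, 5000, 0.05, rng)
estimate = chain[1000:].mean()   # posterior mean after discarding burn-in
```

A real GCE fit samples five to ten parameters jointly and marginalizes over nuisance parameters, but the accept/reject mechanics are the same.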

  10. Episodic fluid flow in the Nankai accretionary complex: Timescale, geochemistry, flow rates, and fluid budget

    USGS Publications Warehouse

    Saffer, D.M.; Bekins, B.A.

    1998-01-01

    Down-hole geochemical anomalies encountered in active accretionary systems can be used to constrain the timing, rates, and localization of fluid flow. Here we combine a coupled flow and solute transport model with a kinetic model for smectite dehydration to better understand and quantify fluid flow in the Nankai accretionary complex offshore of Japan. Compaction of sediments and clay dehydration provide fluid sources which drive the model flow system. We explicitly include the consolidation rate of underthrust sediments in our calculations to evaluate the impact that variations in this unknown quantity have on pressure and chloride distribution. Sensitivity analysis of steady state pressure solutions constrains bulk and flow conduit permeabilities. Steady state simulations with 30% smectite in the incoming sedimentary sequence result in minimum chloride concentrations at site 808 of 550 mM, but measured chlorinity is as low as 447 mM. We simulate the transient effects of hydrofracture or a strain event by assuming an instantaneous permeability increase of 3-4 orders of magnitude along a flow conduit (in this case the décollement), using steady state results as initial conditions. Transient results with an increase in décollement permeability from 10⁻¹⁶ m² to 10⁻¹³ m² and 20% smectite reproduce the observed chloride profile at site 808 after 80-160 kyr. Modeled chloride concentrations are highly sensitive to the consolidation rate of underthrust sediments, such that rapid compaction of underthrust material leads to increased freshening. Pressures within the décollement during transient simulations rise rapidly to a significant fraction of lithostatic and remain high for at least 160 kyr, providing a mechanism for maintaining high permeability. Flow rates at the deformation front for transient simulations are in good agreement with direct measurements, but steady state flow rates are 2-3 orders of magnitude smaller than observed. 
Fluid budget calculations indicate that nearly 71% of the incoming water in the sediments leaves the accretionary wedge via diffuse flow out the seafloor, 0-5% escapes by focused flow along the décollement, and roughly 1% is subducted. Copyright 1998 by the American Geophysical Union.

  11. Decoding sediment transport dynamics on alluvial fans from spatial changes in grain size, Death Valley, California

    NASA Astrophysics Data System (ADS)

    Brooke, Sam; Whittaker, Alexander; Watkins, Stephen; Armitage, John

    2017-04-01

    How fluvial sediment transport processes are transmitted to the sedimentary record remains a complex problem for the interpretation of fluvial stratigraphy. Alluvial fans represent the condensed sedimentary archive of upstream fluvial processes, controlled by the interplay between tectonics and climate over time, infused with the complex signal of internal autogenic processes. With high sedimentation rates and near complete preservation, alluvial fans present a unique opportunity to tackle the problem of landscape sensitivity to external boundary conditions such as climate. For three coupled catchment-fan systems in the tectonically well-constrained northern Death Valley, we measure grain size trends across well-preserved Holocene and Late-Pleistocene deposits, which we have mapped in detail. Our results show that fan surfaces from the Late-Pleistocene are, on average, 50% coarser than counterpart active or Holocene fan surfaces, with clear variations in input grain sizes observed between surfaces of differing age. Furthermore, the ratio between mean grain size and standard deviation is stable downstream for all surfaces, satisfying the statistical definition of self-similarity. Applying a self-similarity model of selective deposition, we derive a relative mobility function directly from our grain size distributions, and we evaluate for each fan surface the grain size for which the ratio of the probability of transport to deposition is 1. We show that the "equally mobile" grain size lies in the range of 20 to 35 mm, varies over time, and is clearly lower in the Holocene than in the Pleistocene. Our results indicate that coarser grain sizes on alluvial fans are much less mobile than in river systems where such an analysis has been previously applied. 
These results support recent findings that alluvial fan sediment characteristics can be used as an archive of past environmental change and that landscapes are sensitive to environmental change over a glacial-interglacial cycle. Significantly, the self-similarity methodology offers a means to constrain relative mobility of grain sizes from field measurements where hydrological information is lost or irretrievable.

  12. The size distribution of Jupiter's main ring from Galileo imaging and spectroscopy

    NASA Astrophysics Data System (ADS)

    Brooks, Shawn M.; Esposito, Larry W.; Showalter, Mark R.; Throop, Henry B.

    2004-07-01

    Galileo's Solid State Imaging experiment (SSI) obtained 36 visible wavelength images of Jupiter's ring system during the nominal mission (Ockert-Bell et al., 1999, Icarus 138, 188-213) and another 21 during the extended mission. The Near Infrared Mapping Spectrometer (NIMS) recorded an observation of Jupiter's main ring during orbit C3 at wavelengths from 0.7 to 5.2 μm; a second observation was attempted during orbit E4. We analyze the high phase angle NIMS and SSI observations to constrain the size distribution of the main ring's micron-sized dust population. This portion of the population is best constrained at high phase angles, as the light scattering behavior of small dust grains dominates at these geometries and contributions from larger ring particles are negligible. High phase angle images of the main ring obtained by the Voyager spacecraft covered phase angles between 173.8° and 176.9° (Showalter et al., 1987, Icarus 69, 458-498). Galileo images extend this range up to 178.6°. We model the Galileo phase curve and the ring spectra from the C3 NIMS ring observation as the combination of two power law distributions. Our analysis of the main ring phase curve and the NIMS spectra suggests the size distribution of the smallest ring particles is a power law with an index of 2.0±0.3 below a size of ˜15 μm that transitions to a power law with an index of 5.0±1.5 at larger sizes. This combined power law distribution, or "broken power law" distribution, yields a better fit to the NIMS data than do the power law distributions that have previously been fit to the Voyager imaging data (Showalter et al., 1987, Icarus 69, 458-498). The broken power law distribution reconciles the results of Showalter et al. (1987, Icarus 69, 458-498) and McMuldroch et al. (2000, Icarus 146, 1-11), who also analyzed the NIMS data, and can be considered as an obvious extension of a simple power law. 
This more complex size distribution could indicate that ring particle production rates and/or lifetimes vary with size and may relate to the physical processes that control their evolution. The significant near arm/far arm asymmetry reported elsewhere (see Showalter et al., 1987, Icarus 69, 458-498; Ockert-Bell et al., 1999, Icarus 138, 188-213) persists in the data even after the main ring is isolated in the SSI images. However, the sense of the asymmetry seen in Galileo images differs from that seen in Voyager images. We interpret this asymmetry as a broad-scale, azimuthal brightness variation. No consistent association with the magnetic field of Jupiter has been observed. It is possible that these longitudinal variations may be similar to the random brightness fluctuations observed in Saturn's F ring by Voyager (Smith et al., 1982, Science 215, 504-537) and during the 1995 ring plane crossings (Nicholson et al., 1996, Science 272, 509-515; Bosh and Rivkin, 1996, Science 272, 518-521; Poulet et al., 2000, Icarus 144, 135-148). Stochastic events may thus play a significant role in the evolution of the jovian main ring.
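Using the indices and break size quoted in the abstract (power-law index 2.0 below ∼15 μm, 5.0 above), a continuous broken power law can be sketched as follows; the normalization is arbitrary and the function is illustrative only:

```python
import numpy as np

def broken_power_law(a, a_break=15.0, q_small=2.0, q_large=5.0):
    """Differential size distribution n(a) ∝ a^-q with a break at a_break
    (sizes in microns). The prefactor on the steep branch enforces
    continuity at the break; the overall normalization is arbitrary."""
    a = np.asarray(a, dtype=float)
    norm_large = a_break**(q_large - q_small)
    return np.where(a <= a_break,
                    a**(-q_small),
                    norm_large * a**(-q_large))

# Evaluate across the break: micron-sized, break-sized, and larger grains.
sizes = np.array([1.0, 15.0, 30.0])
n = broken_power_law(sizes)
```

The steep large-size branch is what suppresses the contribution of big grains to high-phase-angle scattering, consistent with the fits described above.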

  13. Using Spin to Understand the Formation of LIGO and Virgo’s Black Holes

    NASA Astrophysics Data System (ADS)

    Farr, Ben; Holz, Daniel E.; Farr, Will M.

    2018-02-01

    With the growing number of binary black hole (BBH) mergers detected by the Advanced LIGO and Virgo detectors, it is becoming possible to constrain the properties of the underlying population and better understand the formation of these systems. Black hole (BH) spin orientations are one of the cleanest discriminators of formation history, with BHs in dynamically formed binaries in dense stellar environments expected to have spins distributed isotropically, in contrast to isolated populations where stellar evolution is expected to induce spins preferentially aligned with the orbital angular momentum. In this work, we propose a simple, model-agnostic approach to characterizing the spin properties of LIGO/Virgo’s BBH population. Using measurements of the effective spin of the binaries, we introduce a simple parameter to quantify the fraction of the population that is isotropically distributed, regardless of the spin magnitude distribution of the population. Once the orientation characteristics of the population have been determined, we show how measurements of effective spin can be used to directly constrain the BH spin magnitude distribution. We find that most effective spin measurements are too small to be informative, with the first four events showing a slight preference for a population with alignment, with an odds ratio of 1.2. We argue that it will be possible to distinguish symmetric and anti-symmetric populations at high confidence with tens of additional detections, although mixed populations may take significantly longer to disentangle. We also derive BH spin magnitude distributions from LIGO’s first four BBHs under the assumption of aligned or isotropic populations.
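A toy version of the aligned-versus-isotropic comparison can be sketched by Monte Carlo marginalization of Gaussian measurement likelihoods over two hypothetical χ_eff populations. The population models, measurements, and errors below are illustrative assumptions, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(3)

def chi_eff_samples(n, aligned, rng):
    """Toy chi_eff population: spin magnitudes uniform in [0, 1]; tilts
    either perfectly aligned (cos t = 1) or isotropic (cos t uniform)."""
    mag = rng.uniform(0.0, 1.0, n)
    cos_t = np.ones(n) if aligned else rng.uniform(-1.0, 1.0, n)
    return mag * cos_t

def log_odds(obs, obs_err, rng, n=200_000):
    """log [P(data | aligned) / P(data | isotropic)] via Monte Carlo
    marginalization of the Gaussian measurement likelihood (the common
    1/(s*sqrt(2*pi)) factor cancels in the ratio)."""
    pop_a = chi_eff_samples(n, True, rng)
    pop_i = chi_eff_samples(n, False, rng)
    total = 0.0
    for x, s in zip(obs, obs_err):
        like_a = np.mean(np.exp(-0.5 * ((x - pop_a) / s) ** 2))
        like_i = np.mean(np.exp(-0.5 * ((x - pop_i) / s) ** 2))
        total += np.log(like_a / like_i)
    return total

# Hypothetical measurements: small chi_eff with broad errors, loosely
# mimicking early detections, should only mildly prefer one population.
obs = np.array([0.05, -0.05, 0.1, 0.05])
errs = np.array([0.2, 0.2, 0.2, 0.2])
result = log_odds(obs, errs, rng)
```

With informative (large, well-measured) χ_eff values the odds grow quickly, which is why the method sharpens with tens of additional detections.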

  14. VERY LARGE INTERSTELLAR GRAINS AS EVIDENCED BY THE MID-INFRARED EXTINCTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Shu; Jiang, B. W.; Li, Aigen, E-mail: shuwang@mail.bnu.edu.cn, E-mail: bjiang@bnu.edu.cn, E-mail: wanshu@missouri.edu, E-mail: lia@missouri.edu

    The sizes of interstellar grains are widely distributed, ranging from a few angstroms to a few micrometers. The ultraviolet (UV) and optical extinction constrains the dust in the size range of a couple hundredths of micrometers to several submicrometers. The near and mid infrared (IR) emission constrains the nanometer-sized grains and angstrom-sized very large molecules. However, the quantity and size distribution of micrometer-sized grains remain unknown because they are gray in the UV/optical extinction and they are too cold and emit too little in the IR to be detected by IRAS, Spitzer, or Herschel. In this work, we employ the ∼3–8 μm mid-IR extinction, which is flat in both diffuse and dense regions, to constrain the quantity, size, and composition of the μm-sized grain component. We find that, together with nano- and submicron-sized silicate and graphite (as well as polycyclic aromatic hydrocarbons), μm-sized graphite grains with C/H ≈ 137 ppm and a mean size of ∼1.2 μm closely fit the observed interstellar extinction of the Galactic diffuse interstellar medium from the far-UV to the mid-IR, as well as the near-IR to millimeter thermal emission obtained by COBE/DIRBE, COBE/FIRAS, and Planck up to λ ≲ 1000 μm. The μm-sized graphite component accounts for ∼14.6% of the total dust mass and ∼2.5% of the total IR emission.

  15. Why Bother to Calibrate? Model Consistency and the Value of Prior Information

    NASA Astrophysics Data System (ADS)

    Hrachowitz, Markus; Fovet, Ophelie; Ruiz, Laurent; Euser, Tanja; Gharari, Shervan; Nijzink, Remko; Savenije, Hubert; Gascuel-Odoux, Chantal

    2015-04-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  16. Why Bother and Calibrate? Model Consistency and the Value of Prior Information.

    NASA Astrophysics Data System (ADS)

    Hrachowitz, M.; Fovet, O.; Ruiz, L.; Euser, T.; Gharari, S.; Nijzink, R.; Freer, J. E.; Savenije, H.; Gascuel-Odoux, C.

    2014-12-01

    Hydrological models frequently suffer from limited predictive power despite adequate calibration performances. This can indicate insufficient representations of the underlying processes. Thus ways are sought to increase model consistency while satisfying the contrasting priorities of increased model complexity and limited equifinality. In this study the value of a systematic use of hydrological signatures and expert knowledge for increasing model consistency was tested. It was found that a simple conceptual model, constrained by 4 calibration objective functions, was able to adequately reproduce the hydrograph in the calibration period. The model, however, could not reproduce 20 hydrological signatures, indicating a lack of model consistency. Subsequently, testing 11 models, model complexity was increased in a stepwise way and counter-balanced by using prior information about the system to impose "prior constraints", inferred from expert knowledge and to ensure a model which behaves well with respect to the modeller's perception of the system. We showed that, in spite of unchanged calibration performance, the most complex model set-up exhibited increased performance in the independent test period and skill to reproduce all 20 signatures, indicating a better system representation. The results suggest that a model may be inadequate despite good performance with respect to multiple calibration objectives and that increasing model complexity, if efficiently counter-balanced by available prior constraints, can increase predictive performance of a model and its skill to reproduce hydrological signatures. The results strongly illustrate the need to balance automated model calibration with a more expert-knowledge driven strategy of constraining models.

  17. An Anatomically Constrained Model for Path Integration in the Bee Brain.

    PubMed

    Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley

    2017-10-23

    Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task-a celestial-cue-based visual compass and an optic-flow-based visual odometer-but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings-suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Refining Southern California Geotherms Using Seismologic, Geologic, and Petrologic Constraints

    NASA Astrophysics Data System (ADS)

    Thatcher, W. R.; Chapman, D. S.; Allam, A. A.; Williams, C. F.

    2017-12-01

    Lithospheric deformation in tectonically active regions depends on the 3D distribution of rheology, which is in turn critically controlled by temperature. Under the auspices of the Southern California Earthquake Center (SCEC) we are developing a 3D Community Thermal Model (CTM) to constrain rheology and so better understand deformation processes within this complex but densely monitored and relatively well-understood region. The San Andreas transform system has sliced southern California into distinct blocks, each with characteristic lithologies, seismic velocities and thermal structures. Guided by the geometry of these blocks we use more than 250 surface heat-flow measurements to define 13 geographically distinct heat flow regions (HFRs). Model geotherms within each HFR are constrained by averages and variances of surface heat flow q0 and the 1D depth distribution of thermal conductivity (k) and radiogenic heat production (A), which are strongly dependent on rock type. Crustal lithologies are not always well known and we turn to seismic imaging for help. We interrogate the SCEC Community Velocity Model (CVM) to determine averages and variances of Vp, Vs and Vp/Vs versus depth within each HFR. We bound (A, k) versus depth by relying on empirical relations between seismic wave speed and rock type and laboratory and modeling methods relating (A, k) to rock type. Many 1D conductive geotherms for each HFR are allowed by the variances in surface heat flow and subsurface (A, k). An additional constraint on the lithosphere temperature field is provided by comparing lithosphere-asthenosphere boundary (LAB) depths identified seismologically with those defined thermally as the depth of onset of partial melting. Receiver function studies in Southern California indicate LAB depths that range from 40 km to 90 km. Shallow LAB depths are correlated with high surface heat flow and deep LAB with low heat flow. 
The much-restricted families of geotherms that intersect peridotite solidi at the seismological LAB depth in each region require that LAB temperatures lie between 1050 and 1250 °C, a range that is consistent with a hydrous rather than anhydrous mantle below Southern California.
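The 1D conductive geotherms described above follow from the steady-state heat equation. As a minimal sketch (with illustrative values of q0, k and A, not calibrated values for any HFR), a uniform-property geotherm can be computed as:

```python
# Steady-state 1-D conductive geotherm with uniform thermal conductivity k
# and radiogenic heat production A:  T(z) = T0 + (q0/k) z - (A / (2 k)) z^2
# All parameter values below are illustrative assumptions.

def geotherm(z_m, T0_C=10.0, q0_Wm2=0.080, k_WmK=3.0, A_Wm3=2.0e-6):
    """Temperature (deg C) at depth z_m (metres), given surface heat flow q0."""
    return T0_C + (q0_Wm2 / k_WmK) * z_m - (A_Wm3 / (2.0 * k_WmK)) * z_m ** 2

def heat_flow(z_m, q0_Wm2=0.080, A_Wm3=2.0e-6):
    """Conductive heat flow (W/m^2) at depth, reduced by heat produced above."""
    return q0_Wm2 - A_Wm3 * z_m
```

In practice (A, k) vary with depth and rock type, so a layered version of this calculation, sampled over the quoted variances, generates the families of allowed geotherms for each HFR.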

  19. Fracture zones constrained by neutral surfaces in a fault-related fold: Insights from the Kelasu tectonic zone, Kuqa Depression

    NASA Astrophysics Data System (ADS)

    Sun, Shuai; Hou, Guiting; Zheng, Chunfang

    2017-11-01

Stress variation associated with folding is one of the controlling factors in the development of tectonic fractures; however, little attention has been paid to the influence of neutral surfaces during folding on fracture distribution in a fault-related fold. In this study, we take the Cretaceous Bashijiqike Formation in the Kuqa Depression as an example and analyze the distribution of tectonic fractures in fault-related folds by core observation and logging data analysis. Three fracture zones are identified in a fault-related fold: a tensile zone, a transition zone and a compressive zone, which may be constrained by the two neutral surfaces of the fold. Well correlation reveals that the tensile zone and the transition zone reach their maximum thickness at the fold hinge and thin toward the fold limbs. A 2D viscoelastic stress field model of a fault-related fold was constructed to further investigate the mechanism of fracturing. Statistical and numerical analyses reveal that the tensile zone and the transition zone become thicker with decreasing interlimb angle. Stress variation associated with folding exerts the first-order control over the general pattern of fracture distribution, while faulting is a secondary control over the development of local fractures in a fault-related fold.

  20. Computing the Distribution of Pareto Sums Using Laplace Transformation and Stehfest Inversion

    NASA Astrophysics Data System (ADS)

    Harris, C. K.; Bourne, S. J.

    2017-05-01

In statistical seismology, the properties of distributions of total seismic moment are important for constraining seismological models, such as the strain partitioning model (Bourne et al. J Geophys Res Solid Earth 119(12): 8991-9015, 2014). This work was motivated by the need to develop appropriate seismological models for the Groningen gas field in the northeastern Netherlands, in order to address the issue of production-induced seismicity. The total seismic moment is the sum of the moments of individual seismic events, which in common with many other natural processes, are governed by Pareto or "power law" distributions. The maximum possible moment for an induced seismic event can be constrained by geomechanical considerations, but rather poorly, and for Groningen it cannot be reliably inferred from the frequency distribution of moment magnitude pertaining to the catalogue of observed events. In such cases it is usual to work with the simplest form of the Pareto distribution without an upper bound, and we follow the same approach here. In the case of seismicity, the exponent β appearing in the power-law relation is small enough for the variance of the unbounded Pareto distribution to be infinite, which renders standard statistical methods concerning sums of statistical variables, based on the central limit theorem, inapplicable. Determinations of the properties of sums of moderate to large numbers of Pareto-distributed variables with infinite variance have traditionally been addressed using intensive Monte Carlo simulations. This paper presents a novel method for determining the properties of such sums that is accurate, fast and easily implemented, and is applicable to Pareto-distributed variables for which the power-law exponent β lies within the interval [0, 1]. 
It is based on shifting the original variables so that a non-zero density is obtained exclusively for non-negative values of the parameter and is identically zero elsewhere, a property that is shared by the sum of an arbitrary number of such variables. The technique involves applying the Laplace transform to the normalized sum (which is simply the product of the Laplace transforms of the densities of the individual variables, with a suitable scaling of the Laplace variable), and then inverting it numerically using the Gaver-Stehfest algorithm. After validating the method using a number of test cases, it was applied to address the distribution of total seismic moment, and the quantiles computed for various numbers of seismic events were compared with those obtained in the literature using Monte Carlo simulation. Excellent agreement was obtained. As an application, the method was applied to the evolution of total seismic moment released by tremors due to gas production in the Groningen gas field in the northeastern Netherlands. The speed, accuracy and ease of implementation of the method allows the development of accurate correlations for constraining statistical seismological models using, for example, the maximum-likelihood method. It should also be of value in other natural processes governed by Pareto distributions with exponent less than unity.
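The Gaver-Stehfest inversion step at the core of the method can be sketched compactly. The version below is validated on a transform pair with a known inverse (F(s) = 1/(1+s), f(t) = e^{-t}) rather than on the shifted Pareto transforms of the paper:

```python
from math import factorial, log

def stehfest_weights(N=14):
    """Stehfest coefficients V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * factorial(2 * j)
                  / (factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                     * factorial(k - j) * factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def stehfest_invert(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s) on the real axis."""
    ln2 = log(2.0)
    V = stehfest_weights(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))
```

N must be even; in double precision, N in the range 12-16 is usually near optimal, since larger N amplifies rounding error in the alternating-sign weights.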

  1. A molecular investigation of soil organic carbon composition across a subalpine catchment

    USGS Publications Warehouse

    Hsu, Hsiao-Tieh; Lawrence, Corey R.; Winnick, Matthew J.; Bargar, John R.; Maher, Katharine

    2018-01-01

    The dynamics of soil organic carbon (SOC) storage and turnover are a critical component of the global carbon cycle. Mechanistic models seeking to represent these complex dynamics require detailed SOC compositions, which are currently difficult to characterize quantitatively. Here, we address this challenge by using a novel approach that combines Fourier transform infrared spectroscopy (FT-IR) and bulk carbon X-ray absorption spectroscopy (XAS) to determine the abundance of SOC functional groups, using elemental analysis (EA) to constrain the total amount of SOC. We used this SOC functional group abundance (SOC-fga) method to compare variability in SOC compositions as a function of depth across a subalpine watershed (East River, Colorado, USA) and found a large degree of variability in SOC functional group abundances between sites at different elevations. Soils at a lower elevation are predominantly composed of polysaccharides, while soils at a higher elevation have more substantial portions of carbonyl, phenolic, or aromatic carbon. We discuss the potential drivers of differences in SOC composition between these sites, including vegetation inputs, internal processing and losses, and elevation-driven environmental factors. Although numerical models would facilitate the understanding and evaluation of the observed SOC distributions, quantitative and meaningful measurements of SOC molecular compositions are required to guide such models. Comparison among commonly used characterization techniques on shared reference materials is a critical next step for advancing our understanding of the complex processes controlling SOC compositions.

  2. The Coupled Physical Structure of Gas and Dust in the IM Lup Protoplanetary Disk

    NASA Astrophysics Data System (ADS)

    Cleeves, L. Ilsedore; Öberg, Karin I.; Wilner, David J.; Huang, Jane; Loomis, Ryan A.; Andrews, Sean M.; Czekala, Ian

    2016-12-01

    The spatial distribution of gas and solids in protoplanetary disks determines the composition and formation efficiency of planetary systems. A number of disks show starkly different distributions for the gas and small grains compared to millimeter-centimeter-sized dust. We present new Atacama Large Millimeter/Submillimeter Array observations of the dust continuum, CO, 13CO, and C18O in the IM Lup protoplanetary disk, one of the first systems where this dust-gas dichotomy was clearly seen. The 12CO is detected out to a radius of 970 au, while the millimeter continuum emission is truncated at just 313 au. Based upon these data, we have built a comprehensive physical and chemical model for the disk structure, which takes into account the complex, coupled nature of the gas and dust and the interplay between the local and external environment. We constrain the distributions of gas and dust, the gas temperatures, the CO abundances, the CO optical depths, and the incident external radiation field. We find that the reduction/removal of dust from the outer disk exposes this region to higher stellar and external radiation and decreases the rate of freeze-out, allowing CO to remain in the gas out to large radial distances. We estimate a gas-phase CO abundance of 5% of the interstellar medium value and a low external radiation field (G 0 ≲ 4). The latter is consistent with that expected from the local stellar population. We additionally find tentative evidence for ring-like continuum substructure, suggestions of isotope-selective photodissociation, and a diffuse gas halo.

  3. Thinning factor distributions viewed through numerical models of continental extension

    NASA Astrophysics Data System (ADS)

    Svartman Dias, Anna Eliza; Hayman, Nicholas W.; Lavier, Luc L.

    2016-12-01

    A long-standing question surrounding rifted margins concerns how the observed fault-restored extension in the upper crust is usually less than that calculated from subsidence models or from crustal thickness estimates, the so-called "extension discrepancy." Here we revisit this issue drawing on recently completed numerical results. We extract thinning profiles from four end-member geodynamic model rifts with varying width and asymmetry and propose tectonic models that best explain those results. We then relate the spatial and temporal evolution of upper to lower crustal thinning, or crustal depth-dependent thinning (DDT), and crustal thinning to mantle thinning, or lithospheric DDT, which are difficult to achieve in natural systems due to the lack of observations that constrain thinning at different stages between prerift extension and lithospheric breakup. Our results support the hypothesis that crustal DDT cannot be the main cause of the extension discrepancy, which may be overestimated because of the difficulty in recognizing distributed deformation, and polyphase and detachment faulting in seismic data. More importantly, the results support that lithospheric DDT is likely to dominate at specific stages of rift evolution because crustal and mantle thinning distributions are not always spatially coincident and at times are not even balanced by an equal magnitude of thinning in two dimensions. Moreover, either pure or simple shear models can apply at various points of time and space depending on the type of rift. Both DDT and pure/simple shear variations across space and time can result in observed complex fault geometries, uplift/subsidence, and thermal histories.

  4. Aerosol Properties of the Atmospheres of Extrasolar Giant Planets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lavvas, P.; Koskinen, T., E-mail: panayotis.lavvas@univ-reims.fr

    2017-09-20

We use a model of aerosol microphysics to investigate the impact of high-altitude photochemical aerosols on the transmission spectra and atmospheric properties of close-in exoplanets, such as HD 209458 b and HD 189733 b. The results depend strongly on the temperature profiles in the middle and upper atmospheres, which are poorly understood. Nevertheless, our model of HD 189733 b, based on the most recently inferred temperature profiles, produces an aerosol distribution that matches the observed transmission spectrum. We argue that the hotter temperature of HD 209458 b inhibits the production of high-altitude aerosols and leads to the appearance of a clearer atmosphere than on HD 189733 b. The aerosol distribution also depends on the particle composition, photochemical production, and atmospheric mixing. Due to degeneracies among these inputs, current data cannot constrain the aerosol properties in detail. Instead, our work highlights the role of different factors in controlling the aerosol distribution that will prove useful in understanding different observations, including those from future missions. For the atmospheric mixing efficiency suggested by general circulation models, we find that the aerosol particles are small (∼nm) and probably spherical. We further conclude that a composition based on complex hydrocarbons (soots) is the most likely candidate to survive the high temperatures in hot-Jupiter atmospheres. Such particles would have a significant impact on the energy balance of HD 189733 b’s atmosphere and should be incorporated in future studies of atmospheric structure. We also evaluate the contribution of external sources to photochemical aerosol formation and find that their spectral signature is not consistent with observations.

  5. LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices.

    PubMed

    He, Ziyang; Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan

    2018-04-17

By running applications and services closer to the user, edge processing provides many advantages, such as short response times and reduced network traffic. Deep-learning-based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example of how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias and in its feasibility for deployment on mobile devices.
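The heterogeneous-filter idea can be illustrated with a minimal 1-D multi-scale convolution over an ECG-like signal. The kernel sizes, random weights and synthetic signal below are illustrative assumptions, not LiteNet's actual architecture:

```python
import numpy as np

def multi_scale_features(signal, kernel_sizes=(3, 5, 7), seed=0):
    """Convolve one 1-D signal with kernels of several sizes ('same'
    padding) and stack the resulting feature maps, Inception-style."""
    rng = np.random.default_rng(seed)
    maps = [np.convolve(signal, rng.standard_normal(k), mode="same")
            for k in kernel_sizes]
    return np.stack(maps)              # shape: (num_kernels, len(signal))

ecg = np.sin(np.linspace(0.0, 20.0, 256))   # stand-in for an ECG trace
feats = multi_scale_features(ecg)           # -> shape (3, 256)
```

Because each kernel width responds to morphology at a different temporal scale, concatenating the maps gives later layers a richer feature set than any single filter size.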

  6. Developing new scenarios for water allocation negotiations: a case study of the Euphrates River Basin

    NASA Astrophysics Data System (ADS)

    Jarkeh, Mohammad Reza; Mianabadi, Ameneh; Mianabadi, Hojjat

    2016-10-01

Mismanagement and uneven distribution of water may lead to or intensify conflict among countries. Allocation of water among transboundary river neighbours is a key issue in the utilization of shared water resources. Bankruptcy theory provides cooperative game-theoretic methods used when the total demand of riparian states exceeds the available water. In this study, we survey the application of seven Classical Bankruptcy Rules (CBRs), namely Proportional (CBR-PRO), Adjusted Proportional (CBR-AP), Constrained Equal Awards (CBR-CEA), Constrained Equal Losses (CBR-CEL), Piniles (CBR-Piniles), Minimal Overlap (CBR-MO) and Talmud (CBR-Talmud), and four Sequential Sharing Rules (SSRs), namely Proportional (SSR-PRO), Constrained Equal Awards (SSR-CEA), Constrained Equal Losses (SSR-CEL) and Talmud (SSR-Talmud), to the allocation of the Euphrates River among three riparian countries: Turkey, Syria and Iraq. However, no established method exists for identifying the most equitable allocation rule. Therefore, in this paper, a new method is proposed for choosing the allocation rule that best satisfies the stakeholders. The results reveal that, based on the newly proposed model, CBR-AP appears to be the most equitable rule for allocating the Euphrates River water among Turkey, Syria and Iraq.
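Two of the classical rules surveyed, Proportional (CBR-PRO) and Constrained Equal Awards (CBR-CEA), can be sketched as follows; the claims and estate in the usage note are illustrative numbers, not the riparian demands of the study:

```python
def proportional(claims, estate):
    """CBR-PRO: each claimant receives estate * claim / total claims."""
    total = sum(claims)
    return [estate * c / total for c in claims]

def cea(claims, estate, iters=200):
    """CBR-CEA: each claimant receives min(claim, lam), with the common
    award lam found by bisection so the awards exhaust the estate."""
    lo, hi = 0.0, max(claims)
    for _ in range(iters):
        lam = (lo + hi) / 2.0
        if sum(min(c, lam) for c in claims) < estate:
            lo = lam
        else:
            hi = lam
    return [min(c, lam) for c in claims]
```

For claims (100, 200, 300) and an estate of 360, proportional allocation gives (60, 120, 180), while CEA gives (100, 130, 130): the smallest claim is met in full and the remainder is split equally among the larger claimants.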

  7. LiteNet: Lightweight Neural Network for Detecting Arrhythmias at Resource-Constrained Mobile Devices

    PubMed Central

    Zhang, Xiaoqing; Cao, Yangjie; Liu, Zhi; Zhang, Bo; Wang, Xiaoyan

    2018-01-01

By running applications and services closer to the user, edge processing provides many advantages, such as short response times and reduced network traffic. Deep-learning-based algorithms provide significantly better performance than traditional algorithms in many fields but demand more resources, such as higher computational power and more memory. Hence, designing deep learning algorithms that are more suitable for resource-constrained mobile devices is vital. In this paper, we build a lightweight neural network, termed LiteNet, which uses a deep learning algorithm design to diagnose arrhythmias, as an example of how we design deep learning schemes for resource-constrained mobile devices. Compared to other deep learning models with equivalent accuracy, LiteNet has several advantages. It requires less memory, incurs lower computational cost, and is more feasible for deployment on resource-constrained mobile devices. It can be trained faster than other neural network algorithms and requires less communication across different processing units during distributed training. It uses filters of heterogeneous size in a convolutional layer, which contributes to the generation of various feature maps. The algorithm was tested using the MIT-BIH electrocardiogram (ECG) arrhythmia database; the results showed that LiteNet outperforms comparable schemes in diagnosing arrhythmias and in its feasibility for deployment on mobile devices. PMID:29673171

  8. CONSTRAINING SOLAR FLARE DIFFERENTIAL EMISSION MEASURES WITH EVE AND RHESSI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Caspi, Amir; McTiernan, James M.; Warren, Harry P.

    2014-06-20

Deriving a well-constrained differential emission measure (DEM) distribution for solar flares has historically been difficult, primarily because no single instrument is sensitive to the full range of coronal temperatures observed in flares, from ≲2 to ≳50 MK. We present a new technique, combining extreme ultraviolet (EUV) spectra from the EUV Variability Experiment (EVE) onboard the Solar Dynamics Observatory with X-ray spectra from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI), to derive, for the first time, a self-consistent, well-constrained DEM for jointly observed solar flares. EVE is sensitive to ∼2-25 MK thermal plasma emission, and RHESSI to ≳10 MK; together, the two instruments cover the full range of flare coronal plasma temperatures. We have validated the new technique on artificial test data, and apply it to two X-class flares from solar cycle 24 to determine the flare DEM and its temporal evolution; the constraints on the thermal emission derived from the EVE data also constrain the low energy cutoff of the non-thermal electrons, a crucial parameter for flare energetics. The DEM analysis can also be used to predict the soft X-ray flux in the poorly observed ∼0.4-5 nm range, with important applications for geospace science.

  9. Comparing particle-size distributions in modern and ancient sand-bed rivers

    NASA Astrophysics Data System (ADS)

    Hajek, E. A.; Lynds, R. M.; Huzurbazar, S. V.

    2011-12-01

    Particle-size distributions yield valuable insight into processes controlling sediment supply, transport, and deposition in sedimentary systems. This is especially true in ancient deposits, where effects of changing boundary conditions and autogenic processes may be detected from deposited sediment. In order to improve interpretations in ancient deposits and constrain uncertainty associated with new methods for paleomorphodynamic reconstructions in ancient fluvial systems, we compare particle-size distributions in three active sand-bed rivers in central Nebraska (USA) to grain-size distributions from ancient sandy fluvial deposits. Within the modern rivers studied, particle-size distributions of active-layer, suspended-load, and slackwater deposits show consistent relationships despite some morphological and sediment-supply differences between the rivers. In particular, there is substantial and consistent overlap between bed-material and suspended-load distributions, and the coarsest material found in slackwater deposits is comparable to the coarse fraction of suspended-sediment samples. Proxy bed-load and slackwater-deposit samples from the Kayenta Formation (Lower Jurassic, Utah/Colorado, USA) show overlap similar to that seen in the modern rivers, suggesting that these deposits may be sampled for paleomorphodynamic reconstructions, including paleoslope estimation. We also compare grain-size distributions of channel, floodplain, and proximal-overbank deposits in the Willwood (Paleocene/Eocene, Bighorn Basin, Wyoming, USA), Wasatch (Paleocene/Eocene, Piceance Creek Basin, Colorado, USA), and Ferris (Cretaceous/Paleocene, Hanna Basin, Wyoming, USA) formations. Grain-size characteristics in these deposits reflect how suspended- and bed-load sediment is distributed across the floodplain during channel avulsion events. 
In order to constrain uncertainty inherent in such estimates, we evaluate uncertainty associated with sample collection, preparation, analytical particle-size analysis, and statistical characterization in both modern and ancient settings. We consider potential error contributions and evaluate the degree to which this uncertainty might be significant in modern sediment-transport studies and ancient paleomorphodynamic reconstructions.

  10. Multilingual Education: The Role of Language Ideologies and Attitudes

    ERIC Educational Resources Information Center

    Liddicoat, Anthony J.; Taylor-Leech, Kerry

    2015-01-01

    This paper overviews issues relating to the role of ideologies and attitudes in multilingual education (MLE). It argues that ideologies and attitudes are constituent parts of the language planning process and shape the possibilities for multilingualism in educational programmes in complex ways, but most frequently work to constrain the ways that…

  11. Teachers' Epistemic Cognition in the Context of Dialogic Practice: A Question of Calibration?

    ERIC Educational Resources Information Center

    Bråten, Ivar; Muis, Krista R.; Reznitskaya, Alina

    2017-01-01

    In this article, we argue that teachers' epistemic cognition, in particular their thinking about epistemic aims and reliable processes for achieving those aims, may impact students' understanding of complex, controversial issues. This is because teachers' epistemic cognition may facilitate or constrain their implementation of instruction aiming to…

  12. Postfire logging in riparian areas.

    Treesearch

    Gordon H. Reeves; Peter A. Bisson; Bruce E. Rieman; Lee E. Benda

    2006-01-01

    We reviewed the behavior of wildfire in riparian zones, primarily in the western United States, and the potential ecological consequences of postfire logging. Fire behavior in riparian zones is complex, but many aquatic and riparian organisms exhibit a suite of adaptations that allow relatively rapid recovery after fire. Unless constrained by other factors, fish tend...

  13. Energy service contracts in regional engineering center for small and medium businesses

    NASA Astrophysics Data System (ADS)

    Gil'manshin, I. R.; Kashapov, N. F.

    2014-12-01

An analysis of the development of energy service contracts in Russia is given in the article. The role of the Complex learning centres in the field of energy efficiency in promoting energy service contracts is described. The factors constraining the development of energy service contracts are also described.

  14. Complex VLSI Feature Comparison for Commercial Microelectronics Verification

    DTIC Science & Technology

    2014-03-27

...used for high-performance consumer microelectronics. Volume is a significant factor in constraining the technology limit for defense circuits, but it...surveyed in a 2010 Department of Commerce report found counterfeit chips difficult to identify due to improved fabrication quality in overseas counterfeit

  15. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    ERIC Educational Resources Information Center

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  16. Suffix Ordering and Morphological Processing

    ERIC Educational Resources Information Center

    Plag, Ingo; Baayen, Harald

    2009-01-01

    There is a long-standing debate about the principles constraining the combinatorial properties of suffixes. Hay 2002 and Hay & Plag 2004 proposed a model in which suffixes can be ordered along a hierarchy of processing complexity. We show that this model generalizes to a larger set of suffixes, and we provide independent evidence supporting the…

  17. QTL analysis of Fusarium root rot resistance in an Andean x Middle American common bean RIL population

    USDA-ARS?s Scientific Manuscript database

    Aims Fusarium root rot (FRR) is a soil-borne disease that constrains common bean (Phaseolus vulgaris L.) production. FRR causal pathogens include clade 2 members of the Fusarium solani species complex. Here we characterize common bean reaction to four Fusarium species and identify genomic regions as...

  18. A conservation ontology and knowledge base to support delivery of technical assistance to agricultural producers in the united states

    USDA-ARS?s Scientific Manuscript database

    Information systems supporting the delivery of conservation technical assistance by the United States Department of Agriculture (USDA) to agricultural producers on working lands have become increasingly complex over the past 25 years. They are constrained by inconsistent coordination of domain knowl...

  19. Constraint elimination in dynamical systems

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Likins, P. W.

    1989-01-01

    Large space structures (LSSs) and other dynamical systems of current interest are often extremely complex assemblies of rigid and flexible bodies subjected to kinematical constraints. A formulation is presented for the governing equations of constrained multibody systems via the application of singular value decomposition (SVD). The resulting equations of motion are shown to be of minimum dimension.
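For a linear velocity constraint A q̇ = 0, the elimination step can be sketched with NumPy: the right singular vectors of A beyond its rank span the constraint null space and serve as minimum-dimension coordinates. The two-mass system below is an illustrative example, not one from the paper:

```python
import numpy as np

# Dynamics  M qdd = f  subject to the constraint  A qdot = 0.
M = np.eye(2)                  # two unit masses
f = np.array([1.0, 0.0])       # external force on mass 1 only
A = np.array([[1.0, -1.0]])    # constraint: both masses move together

# Null-space basis of A from the SVD (rows of Vt beyond rank(A)).
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
N = Vt[rank:].T                # columns span the feasible motions

# Reduced, minimum-dimension equations:  (N^T M N) zdd = N^T f
zdd = np.linalg.solve(N.T @ M @ N, N.T @ f)
qdd = N @ zdd                  # accelerations satisfying the constraint
```

Here the single admissible motion is the joint translation of both masses, so the unit force is shared and both accelerations come out equal, with the constraint satisfied exactly.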

  20. New Bernstein type inequalities for polynomials on ellipses

    NASA Technical Reports Server (NTRS)

    Freund, Roland; Fischer, Bernd

    1990-01-01

    New and sharp estimates are derived for the growth in the complex plane of polynomials known to have a curved majorant on a given ellipse. These so-called Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Also presented are some new results for approximation problems of this type.
