Sample records for computing large minimum-evolution

  1. Effect of local minima on adiabatic quantum optimization.

    PubMed

    Amin, M H S

    2008-04-04

    We present a perturbative method to estimate the spectral gap for adiabatic quantum optimization, based on the structure of the energy levels in the problem Hamiltonian. We show that, for problems that have an exponentially large number of local minima close to the global minimum, the gap becomes exponentially small making the computation time exponentially long. The quantum advantage of adiabatic quantum computation may then be accessed only via the local adiabatic evolution, which requires phase coherence throughout the evolution and knowledge of the spectrum. Such problems, therefore, are not suitable for adiabatic quantum computation.

  2. Minimum information about a single amplified genome (MISAG) and a metagenome-assembled genome (MIMAG) of bacteria and archaea

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bowers, Robert M.; Kyrpides, Nikos C.; Stepanauskas, Ramunas

    The number of genomes from uncultivated microbes will soon surpass the number of isolate genomes in public databases (Hugenholtz, Skarshewski, & Parks, 2016). Technological advancements in high-throughput sequencing and assembly, including single-cell genomics and the computational extraction of genomes from metagenomes (GFMs), are largely responsible. Here we propose community standards for reporting the Minimum Information about a Single-Cell Genome (MIxS-SCG) and Minimum Information about Genomes extracted From Metagenomes (MIxS-GFM) specific for Bacteria and Archaea. The standards have been developed in the context of the International Genomics Standards Consortium (GSC) community (Field et al., 2014) and can be viewed as a supplement to other GSC checklists including the Minimum Information about a Genome Sequence (MIGS), Minimum information about a Metagenomic Sequence(s) (MIMS) (Field et al., 2008) and Minimum Information about a Marker Gene Sequence (MIMARKS) (P. Yilmaz et al., 2011). Community-wide acceptance of MIxS-SCG and MIxS-GFM for Bacteria and Archaea will enable broad comparative analyses of genomes from the majority of taxa that remain uncultivated, improving our understanding of microbial function, ecology, and evolution.

  3. Constraints on the power spectrum of the primordial density field from large-scale data - Microwave background and predictions of inflation

    NASA Technical Reports Server (NTRS)

    Kashlinsky, A.

    1992-01-01

    It is shown here that, by using galaxy catalog correlation data as input, measurements of microwave background radiation (MBR) anisotropies should soon be able to test two of the inflationary scenario's most basic predictions: (1) that the primordial density fluctuations produced were scale-invariant and (2) that the universe is flat. They should also be able to detect anisotropies of large-scale structure formed by gravitational evolution of density fluctuations present at the last scattering epoch. Computations of MBR anisotropies corresponding to the minimum of the large-scale variance of the MBR anisotropy are presented which favor an open universe with P(k) significantly different from the Harrison-Zeldovich spectrum predicted by most inflationary models.

  4. Turbulence Generation Using Localized Sources of Energy: Direct Numerical Simulations and the Effects of Thermal Non-Equilibrium

    NASA Astrophysics Data System (ADS)

    Maqui, Agustin Francisco

    Turbulence in high-speed flows is an important problem in aerospace applications, yet extremely difficult from a theoretical, computational and experimental perspective. A main reason for the lack of complete understanding is the difficulty of generating turbulence in the lab at a range of speeds which can also include hypersonic effects such as thermal non-equilibrium. This work studies the feasibility of a new approach to generate turbulence based on laser-induced photo-excitation/dissociation of seeded molecules. A large database of incompressible and compressible direct numerical simulations (DNS) has been generated to systematically study the development and evolution of the flow towards realistic turbulence. Governing parameters and the conditions necessary for the establishment of turbulence, as well as the length and time scales associated with this process, are identified. For both the compressible and incompressible experiments a minimum Reynolds number is found to be needed for the flow to evolve towards fully developed turbulence. Additionally, for incompressible cases a minimum time scale is required, while for compressible cases a minimum distance from the grid and limit on the maximum temperature introduced are required. Through an extensive analysis of single- and two-point statistics, as well as spectral dynamics, the primary mechanisms leading to turbulence are shown. As commonly done in compressible turbulence, dilatational and solenoidal components are separated to understand the effect of acoustics on the development of turbulence. Finally, a large database of forced isotropic turbulence has been generated to study the effect of internal degrees of freedom on the evolution of turbulence.

  5. The maximum rate of mammal evolution

    NASA Astrophysics Data System (ADS)

    Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.

    2012-03-01

    How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half the length (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.

  6. The formation and evolution of domain walls

    NASA Technical Reports Server (NTRS)

    Press, William H.; Ryden, Barbara S.; Spergel, David N.

    1991-01-01

    Domain walls are sheet-like defects produced when the low energy vacuum has isolated degenerate minima. The researchers' computer code follows the evolution of a scalar field, whose dynamics are determined by its Lagrangian density. The topology of the scalar field determines the evolution of the domain walls. This approach treats both wall dynamics and reconnection. The researchers investigated not only potentials that produce single domain walls, but also potentials that produce a network of walls and strings. These networks arise in axion models where the U(1) Peccei-Quinn symmetry is broken into Z_N discrete symmetries. If N equals 1, the walls are bounded by strings and the network quickly disappears. For N greater than 1, the network of walls and strings behaved qualitatively just as the wall network shown in the figures given here. This both confirms the researchers' pessimistic view that domain walls cannot play an important role in the formation of large scale structure and implies that axion models with multiple minima can be cosmologically disastrous.

  7. Sound production due to large-scale coherent structures

    NASA Technical Reports Server (NTRS)

    Gatski, T. B.

    1979-01-01

    The acoustic pressure fluctuations due to large-scale finite amplitude disturbances in a free turbulent shear flow are calculated. The flow is decomposed into three component scales; the mean motion, the large-scale wave-like disturbance, and the small-scale random turbulence. The effect of the large-scale structure on the flow is isolated by applying both a spatial and phase average on the governing differential equations and by initially taking the small-scale turbulence to be in energetic equilibrium with the mean flow. The subsequent temporal evolution of the flow is computed from global energetic rate equations for the different component scales. Lighthill's theory is then applied to the region with the flowfield as the source and an observer located outside the flowfield in a region of uniform velocity. Since the time history of all flow variables is known, a minimum of simplifying assumptions for the Lighthill stress tensor is required, including no far-field approximations. A phase average is used to isolate the pressure fluctuations due to the large-scale structure, and also to isolate the dynamic process responsible. Variation of mean square pressure with distance from the source is computed to determine the acoustic far-field location and decay rate, and, in addition, spectra at various acoustic field locations are computed and analyzed. Also included are the effects of varying the growth and decay of the large-scale disturbance on the sound produced.

  8. Computational Role of Tunneling in a Programmable Quantum Annealer

    NASA Technical Reports Server (NTRS)

    Boixo, Sergio; Smelyanskiy, Vadim; Shabani, Alireza; Isakov, Sergei V.; Dykman, Mark; Amin, Mohammad; Mohseni, Masoud; Denchev, Vasil S.; Neven, Hartmut

    2016-01-01

    Quantum tunneling is a phenomenon in which a quantum state tunnels through energy barriers above the energy of the state itself. Tunneling has been hypothesized as an advantageous physical resource for optimization. Here we present the first experimental evidence of a computational role of multiqubit quantum tunneling in the evolution of a programmable quantum annealer. We developed a theoretical model based on a NIBA Quantum Master Equation to describe the multi-qubit dissipative cotunneling effects under the complex noise characteristics of such quantum devices. We start by considering a computational primitive, the simplest non-convex optimization problem consisting of just one global and one local minimum. The quantum evolutions enable tunneling to the global minimum while the corresponding classical paths are trapped in a false minimum. In our study the non-convex potentials are realized by frustrated networks of qubit clusters with strong intra-cluster coupling. We show that the collective effect of the quantum environment is suppressed in the critical phase during the evolution where quantum tunneling decides the right path to solution. In a later stage dissipation facilitates the multiqubit cotunneling leading to the solution state. The predictions of the model accurately describe the experimental data from the D-Wave II quantum annealer at NASA Ames. In our computational primitive the temperature dependence of the probability of success in the quantum model is opposite to that of the classical paths with thermal hopping. Specifically, we provide an analysis of an optimization problem with sixteen qubits, demonstrating eight-qubit cotunneling that increases success probabilities. Furthermore, we report results for larger problems with up to 200 qubits that contain the primitive as subproblems.

  9. On the minimum orbital intersection distance computation: a new effective method

    NASA Astrophysics Data System (ADS)

    Hedo, José M.; Ruíz, Manuel; Peláez, Jesús

    2018-06-01

    The computation of the Minimum Orbital Intersection Distance (MOID) is an old, but increasingly relevant problem. Fast and precise methods for MOID computation are needed to select potentially hazardous asteroids from a large catalogue. The same applies to debris with respect to spacecraft. An iterative method that strictly meets these two premises is presented.
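
    The record above does not spell out its iterative scheme, so as a rough illustration of the quantity being computed, here is a brute-force sketch in Python: both Keplerian ellipses are sampled over true anomaly and the smallest pairwise distance is taken as an approximate MOID. The helper name orbit_position and the orbital elements are illustrative assumptions, not code or data from the paper; a real method such as the one described would replace the grid scan with a fast, convergent iteration.

```python
import numpy as np

def orbit_position(a, e, i, Omega, omega, nu):
    """Heliocentric position (in au) on a Keplerian ellipse at true anomaly nu."""
    r = a * (1 - e**2) / (1 + e * np.cos(nu))
    x_p, y_p = r * np.cos(nu), r * np.sin(nu)          # perifocal coordinates
    cO, sO = np.cos(Omega), np.sin(Omega)
    ci, si = np.cos(i), np.sin(i)
    cw, sw = np.cos(omega), np.sin(omega)
    x = (cO*cw - sO*sw*ci) * x_p + (-cO*sw - sO*cw*ci) * y_p
    y = (sO*cw + cO*sw*ci) * x_p + (-sO*sw + cO*cw*ci) * y_p
    z = (sw*si) * x_p + (cw*si) * y_p
    return np.array([x, y, z])

def moid_brute_force(el1, el2, n=360):
    """Approximate MOID by scanning both true anomalies on an n x n grid."""
    nus = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    p1 = np.array([orbit_position(*el1, nu) for nu in nus])
    p2 = np.array([orbit_position(*el2, nu) for nu in nus])
    d = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=-1)
    return d.min()

# Elements are (a, e, i, Omega, omega); the values below are purely illustrative.
earth = (1.000, 0.017, np.radians(0.0), 0.0, np.radians(103.0))
asteroid = (1.458, 0.223, np.radians(10.8), np.radians(304.0), np.radians(178.8))
print(moid_brute_force(earth, asteroid))
```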

  10. Differential-Evolution Control Parameter Optimization for Unmanned Aerial Vehicle Path Planning

    PubMed Central

    Kok, Kai Yit; Rajendran, Parvathy

    2016-01-01

    The differential evolution algorithm has been widely applied to unmanned aerial vehicle (UAV) path planning. At present, four tuning parameters exist for the differential evolution algorithm, namely, population size, differential weight, crossover rate, and generation number. These tuning parameters must be set, together with a user-defined weighting between path cost and computational cost. However, the optimum settings of these tuning parameters vary according to the application. Instead of trial and error, this paper presents an optimization method for tuning the differential evolution parameters for UAV path planning. The parameters that this research focuses on are population size, differential weight, crossover rate, and generation number. The developed algorithm enables the user to simply define the desired weighting between path and computational cost, and it converges within the minimum number of generations required for the user's requirements. In conclusion, the proposed optimization of the differential evolution tuning parameters for UAV path planning expedites and improves the final output path and computational cost. PMID:26943630
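
    As context for the four tuning parameters named above (population size, differential weight F, crossover rate CR, and generation number), here is a minimal DE/rand/1/bin sketch applied to a toy 2-D waypoint path cost. It is not the authors' implementation; the path_cost function, the obstacle penalty, and every constant are illustrative assumptions.

```python
import numpy as np

def differential_evolution(cost, bounds, pop_size=30, F=0.8, CR=0.9, generations=200, seed=0):
    """Minimal DE/rand/1/bin optimizer; cost maps a parameter vector to a scalar."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = len(lo)
    pop = lo + rng.random((pop_size, dim)) * (hi - lo)
    fit = np.array([cost(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            idx = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(idx, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)          # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                    # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = cost(trial)
            if f_trial <= fit[i]:                              # greedy selection
                pop[i], fit[i] = trial, f_trial
    return pop[fit.argmin()], fit.min()

def path_cost(flat, start=(0, 0), goal=(10, 10)):
    """Toy UAV path cost: total length plus a penalty for passing near an obstacle at (5, 5)."""
    pts = np.vstack([start, flat.reshape(-1, 2), goal])
    length = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    penalty = np.exp(-np.linalg.norm(pts - np.array([5.0, 5.0]), axis=1)).sum()
    return length + 5.0 * penalty

best, best_cost = differential_evolution(path_cost, bounds=[(0, 10)] * 6)
print(best.reshape(-1, 2), best_cost)
```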

  11. Weakened Magnetization and Onset of Large-scale Turbulence in the Young Solar Wind—Comparisons of Remote Sensing Observations with Simulation

    NASA Astrophysics Data System (ADS)

    Chhiber, Rohit; Usmanov, Arcadi V.; DeForest, Craig E.; Matthaeus, William H.; Parashar, Tulasi N.; Goldstein, Melvyn L.

    2018-04-01

    Recent analysis of Solar-Terrestrial Relations Observatory (STEREO) imaging observations has described the early stages of the development of turbulence in the young solar wind in solar minimum conditions. Here we extend this analysis to a global magnetohydrodynamic (MHD) simulation of the corona and solar wind based on inner boundary conditions, either dipole or magnetogram type, that emulate solar minimum. The simulations have been calibrated using Ulysses and 1 au observations, and allow, within a well-understood context, a precise determination of the location of the Alfvén critical surfaces and the first plasma beta equals unity surfaces. The compatibility of the STEREO observations and the simulations is revealed by direct comparisons. Computation of the radial evolution of second-order magnetic field structure functions in the simulations indicates a shift toward more isotropic conditions at scales of a few Gm, as seen in the STEREO observations in the range 40–60 R_⊙. We affirm that the isotropization occurs in the vicinity of the first beta unity surface. The interpretation based on early stages of in situ solar wind turbulence evolution is further elaborated, emphasizing the relationship of the observed length scales to the much smaller scales that eventually become the familiar turbulence inertial range cascade. We argue that the observed dynamics is the very early manifestation of large-scale in situ nonlinear couplings that drive turbulence and heating in the solar wind.

  12. Ascent velocity and dynamics of the Fiumicino mud eruption, Rome, Italy

    NASA Astrophysics Data System (ADS)

    Vona, A.; Giordano, G.; De Benedetti, A. A.; D'Ambrosio, R.; Romano, C.; Manga, M.

    2015-08-01

    In August 2013 drilling triggered the eruption of mud near the international airport of Fiumicino (Rome, Italy). We monitored the evolution of the eruption and collected samples for laboratory characterization of physicochemical and rheological properties. Over time, muds show a progressive dilution with water; the rheology is typical of pseudoplastic fluids, with a small yield stress that decreases as mud density decreases. The eruption, while not naturally triggered, shares several similarities with natural mud volcanoes, including mud componentry, grain-size distribution, gas discharge, and mud rheology. We use the size of large ballistic fragments ejected from the vent along with mud rheology to compute a minimum ascent velocity of the mud. Computed values are consistent with in situ measurements of gas phase velocities, confirming that the stratigraphic record of mud eruptions can be quantitatively used to infer eruption history and ascent rates and hence to assess (or reassess) mud eruption hazards.

  13. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, V. N.; Toussaint, U. V.; Timucin, D. A.

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  14. Dynamics of Quantum Adiabatic Evolution Algorithm for Number Partitioning

    NASA Technical Reports Server (NTRS)

    Smelyanskiy, Vadius; vonToussaint, Udo V.; Timucin, Dogan A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    We have developed a general technique to study the dynamics of the quantum adiabatic evolution algorithm applied to random combinatorial optimization problems in the asymptotic limit of large problem size n. We use as an example the NP-complete Number Partitioning problem and map the algorithm dynamics to that of an auxiliary quantum spin glass system with the slowly varying Hamiltonian. We use a Green function method to obtain the adiabatic eigenstates and the minimum excitation gap, g_min = O(n 2^(-n/2)), corresponding to the exponential complexity of the algorithm for Number Partitioning. The key element of the analysis is the conditional energy distribution computed for the set of all spin configurations generated from a given (ancestor) configuration by simultaneous flipping of a fixed number of spins. For the problem in question this distribution is shown to depend on the ancestor spin configuration only via a certain parameter related to the energy of the configuration. As a result, the algorithm dynamics can be described in terms of one-dimensional quantum diffusion in the energy space. This effect provides a general limitation of a quantum adiabatic computation in random optimization problems. Analytical results are in agreement with the numerical simulation of the algorithm.

  15. RNAmutants: a web server to explore the mutational landscape of RNA secondary structures

    PubMed Central

    Waldispühl, Jerome; Devadas, Srinivas; Berger, Bonnie; Clote, Peter

    2009-01-01

    The history and mechanism of molecular evolution in DNA have been greatly elucidated by contributions from genetics, probability theory and bioinformatics—indeed, mathematical developments such as Kimura's neutral theory, Kingman's coalescent theory and efficient software such as BLAST, ClustalW, Phylip, etc., provide the foundation for modern population genetics. In contrast to DNA, the function of most noncoding RNA depends on tertiary structure, experimentally known to be largely determined by secondary structure, for which dynamic programming can efficiently compute the minimum free energy secondary structure. For this reason, understanding the effect of pointwise mutations in RNA secondary structure could reveal fundamental properties of structural RNA molecules and improve our understanding of molecular evolution of RNA. The web server RNAmutants provides several efficient tools to compute the ensemble of low-energy secondary structures for all k-mutants of a given RNA sequence, where k is bounded by a user-specified upper bound. As we have previously shown, these tools can be used to predict putative deleterious mutations and to analyze regulatory sequences from the hepatitis C and human immunodeficiency virus genomes. The web server is available at http://bioinformatics.bc.edu/clotelab/RNAmutants/, and downloadable binaries at http://rnamutants.csail.mit.edu/. PMID:19531740
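
    The abstract leans on the fact that secondary structure can be computed by dynamic programming. As a much-simplified stand-in for minimum-free-energy folding (RNAmutants uses a full nearest-neighbor energy model, not this), the Nussinov-style recursion below merely maximizes the number of base pairs; it is meant only to show the shape of the DP.

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum base-pair count by Nussinov dynamic programming; a simplified
    stand-in for minimum-free-energy folding (no stacking or loop energies)."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                       # case 1: j is unpaired
            for k in range(i, j - min_loop):          # case 2: j pairs with some k
                if (seq[k], seq[j]) in pairs:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov_pairs("GGGAAAUCC"))   # small hairpin-like toy sequence
```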

  16. Feature Selection in Classification of Eye Movements Using Electrooculography for Activity Recognition

    PubMed Central

    Mala, S.; Latha, K.

    2014-01-01

    Activity recognition is needed in different applications, for example, reconnaissance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. In selecting a subset of features, the evolutionary algorithm Differential Evolution (DE), a very efficient optimizer, is used for finding informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution-based features. This work concentrates more on the feature selection algorithm based on DE in order to improve the classification for faultless activity recognition. PMID:25574185

  17. Feature selection in classification of eye movements using electrooculography for activity recognition.

    PubMed

    Mala, S; Latha, K

    2014-01-01

    Activity recognition is needed in different applications, for example, reconnaissance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. In selecting a subset of features, the evolutionary algorithm Differential Evolution (DE), a very efficient optimizer, is used for finding informative features from eye movements recorded using electrooculography (EOG). Many researchers use EOG signals in human-computer interactions with various computational intelligence methods to analyze eye movements. The proposed system involves analysis of EOG signals using clearness-based features, minimum redundancy maximum relevance features, and Differential Evolution-based features. This work concentrates more on the feature selection algorithm based on DE in order to improve the classification for faultless activity recognition.

  18. The relationship between solar activity and coronal hole evolution

    NASA Technical Reports Server (NTRS)

    Nolte, J. T.; Davis, J. M.; Gerassimenko, M.; Krieger, A. S.; Solodyna, C. V.; Golub, L.

    1978-01-01

    The relationship between coronal hole evolution and solar active regions during the Skylab period is examined. A tendency is found for holes to grow or remain stable when the activity nearby, seen as calcium plages and bright regions in X-rays, is predominantly large, long-lived regions. It is also found that there is a significantly higher number of small, short-lived active regions, as indicated by X-ray bright points, in the vicinity of decaying holes than there is near other holes. This is interpreted to mean that holes disappear at least in part because they become filled with many small scale, magnetically closed, X-ray emitting features. This interpretation, together with the observation that the number of X-ray bright points was much larger near solar minimum than it was during the Skylab period, provides a possible explanation for the disappearance of the large, near-equatorial coronal holes at the time of solar minimum.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, X.; Florinski, V.

    We present a new model that couples galactic cosmic-ray (GCR) propagation with magnetic turbulence transport and the MHD background evolution in the heliosphere. The model is applied to the problem of the formation of corotating interaction regions (CIRs) during the last solar minimum, in the period between 2007 and 2009. The numerical model simultaneously calculates the large-scale supersonic solar wind properties and its small-scale turbulent content from 0.3 au to the termination shock. Cosmic rays are then transported through the background thus computed, with diffusion coefficients derived from the solar wind turbulent properties, using a stochastic Parker approach. Our results demonstrate that GCR variations depend on the ratio of diffusion coefficients in the fast and slow solar winds. Stream interfaces inside the CIRs always lead to depressions of the GCR intensity. On the other hand, heliospheric current sheet (HCS) crossings do not appreciably affect GCR intensities in the model, which is consistent with the two observations under quiet solar wind conditions. Therefore, variations in diffusion coefficients associated with CIR stream interfaces are more important for GCR propagation than the drift effects of the HCS during a negative solar minimum.

  20. Structure and evolution of the large scale solar and heliospheric magnetic fields. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Hoeksema, J. T.

    1984-01-01

    Structure and evolution of large scale photospheric and coronal magnetic fields in the interval 1976-1983 were studied using observations from the Stanford Solar Observatory and a potential field model. The solar wind in the heliosphere is organized into large regions in which the magnetic field has a component either toward or away from the sun. The model predicts the location of the current sheet separating these regions. Near solar minimum, in 1976, the current sheet lay within a few degrees of the solar equator having two extensions north and south of the equator. Soon after minimum the latitudinal extent began to increase. The sheet reached to at least 50 deg from 1978 through 1983. The complex structure near maximum occasionally included multiple current sheets. Large scale structures persist for up to two years during the entire interval. To minimize errors in determining the structure of the heliospheric field particular attention was paid to decreasing the distorting effects of rapid field evolution, finding the optimum source surface radius, determining the correction to the sun's polar field, and handling missing data. The predicted structure agrees with direct interplanetary field measurements taken near the ecliptic and with coronameter and interplanetary scintillation measurements which infer the three dimensional interplanetary magnetic structure. During most of the solar cycle the heliospheric field cannot be adequately described as a dipole.

  1. Evaluation of pressure in a plasma produced by laser ablation of steel

    NASA Astrophysics Data System (ADS)

    Hermann, Jörg; Axente, Emanuel; Craciun, Valentin; Taleb, Aya; Pelascini, Frédéric

    2018-05-01

    We investigated the time evolution of pressure in the plume generated by laser ablation with ultraviolet nanosecond laser pulses in a near-atmospheric argon atmosphere. These conditions were previously identified to produce a plasma with properties that facilitate accurate spectroscopic diagnostics. Using steel as sample material, the present investigations benefit from the large number of reliable spectroscopic data available for iron. Recording time-resolved emission spectra with an echelle spectrometer, we were able to perform accurate measurements of electron density and temperature over a time interval from 200 ns to 12 μs. Assuming local thermodynamic equilibrium, we computed the plasma composition within the ablated vapor material and the corresponding kinetic pressure. The time evolution of plume pressure is shown to reach a minimum value below the pressure of the background gas. This indicates that the process of vapor-gas interdiffusion has a negligible influence on the plume expansion dynamics in the considered timescale. Moreover, the results promote the plasma pressure as a control parameter in calibration-free laser-induced breakdown spectroscopy.

  2. Dynamic evolution of the spectrum of long-period fiber Bragg gratings fabricated from hydrogen-loaded optical fiber by ultraviolet laser irradiation.

    PubMed

    Fujita, Keio; Masuda, Yuji; Nakayama, Keisuke; Ando, Maki; Sakamoto, Kenji; Mohri, Jun-pei; Yamauchi, Makoto; Kimura, Masanori; Mizutani, Yasuo; Kimura, Susumu; Yokouchi, Takashi; Suzaki, Yoshifumi; Ejima, Seiki

    2005-11-20

    Long-period fiber Bragg gratings fabricated by exposure of hydrogen-loaded fiber to UV laser light exhibit large-scale dynamic evolution for approximately two weeks at room temperature. During this time two distinct features show up in their spectrum: a large upswing in wavelength and a substantial deepening of the transmission minimum. The dynamic evolution of the transmission spectrum is explained quantitatively by use of Malo's theory of UV-induced quenching [Electron. Lett. 30, 442 (1994)] followed by refilling of hydrogen in the fiber core and the theory of hydrogen diffusion in the fiber material. The amount of hydrogen quenched by the UV irradiation is 6% of the loaded hydrogen.

  3. Functional characteristics of the calcium modulated proteins seen from an evolutionary perspective

    NASA Technical Reports Server (NTRS)

    Kretsinger, R. H.; Nakayama, S.; Moncrief, N. D.

    1991-01-01

    We have constructed dendrograms relating 173 EF-hand proteins of known amino acid sequence. We aligned all of these proteins by their EF-hand domains, omitting interdomain regions. Initial dendrograms were computed by minimum mutation distance methods. Using these as starting points, we determined the best dendrogram by the method of maximum parsimony, scored by minimum mutation distance. We identified 14 distinct subfamilies as well as 6 unique proteins that are perhaps the sole representatives of other subfamilies. This information is given in tabular form. Within subfamilies one can easily align interdomain regions. The resulting dendrograms are very similar to those computed using domains only. Dendrograms constructed using pairs of domains show general congruence. However, there are enough exceptions to caution against an overly simple scheme in which one pair of gene duplications leads from one domain precursor to a four-domain prototype from which all other forms evolved. The ability to bind calcium was lost and acquired several times during evolution. The distribution of introns does not conform to the dendrogram based on amino acid sequences. The rates of evolution appear to be much slower within subfamilies, especially within calmodulin, than those prior to the definition of subfamily.
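
    The dendrograms above are scored by minimum mutation distance on candidate trees. As a hedged sketch of that scoring step for a single aligned position on a fixed binary tree, here is Fitch's small-parsimony algorithm; the tree topology and residues in the example are hypothetical, not data from the study.

```python
def fitch_score(tree, leaf_states):
    """Minimum number of mutations needed to explain the observed leaf states on a
    fixed rooted binary tree (Fitch's small-parsimony algorithm, one character)."""
    def post_order(node):
        if isinstance(node, str):                    # leaf: look up its observed state
            return {leaf_states[node]}, 0
        left, right = node
        s_l, c_l = post_order(left)
        s_r, c_r = post_order(right)
        common = s_l & s_r
        if common:
            return common, c_l + c_r                 # no mutation forced at this node
        return s_l | s_r, c_l + c_r + 1              # one additional mutation required
    return post_order(tree)[1]

# Hypothetical four-taxon tree ((A,B),(C,D)) with one aligned residue per taxon
tree = (("A", "B"), ("C", "D"))
print(fitch_score(tree, {"A": "E", "B": "E", "C": "D", "D": "K"}))   # -> 2
```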

  4. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory with the use of an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments and to other, more complex, turbulent systems.
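
    For readers unfamiliar with minimum action methods, the sketch below minimizes a discretized Freidlin-Wentzell (Onsager-Machlup-type) action for a one-dimensional overdamped double-well system, a toy stand-in for the quasi-geostrophic setting of the paper. The potential, time horizon, and discretization are assumptions made for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def drift(x):
    """Drift f(x) = -V'(x) for the double-well potential V(x) = (x^2 - 1)^2 / 4."""
    return -x * (x**2 - 1)

def action(path_interior, x_start=-1.0, x_end=1.0, T=10.0, n=100):
    """Discretized action S = 1/4 * sum (dx/dt - f(x))^2 dt with pinned endpoints."""
    dt = T / n
    x = np.concatenate([[x_start], path_interior, [x_end]])
    xdot = np.diff(x) / dt
    x_mid = 0.5 * (x[:-1] + x[1:])
    return 0.25 * np.sum((xdot - drift(x_mid)) ** 2) * dt

x0 = np.linspace(-1.0, 1.0, 101)[1:-1]        # straight-line initial guess between the two minima
res = minimize(action, x0, method="L-BFGS-B")
print("minimum action:", res.fun)             # res.x approximates the most likely transition path
```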

  5. Spatio-Temporal Mining of PolSAR Satellite Image Time Series

    NASA Astrophysics Data System (ADS)

    Julea, A.; Meger, N.; Trouve, E.; Bolon, Ph.; Rigotti, C.; Fallourd, R.; Nicolas, J.-M.; Vasile, G.; Gay, M.; Harant, O.; Ferro-Famil, L.

    2010-12-01

    This paper presents an original data mining approach for describing Satellite Image Time Series (SITS) spatially and temporally. It relies on pixel-based evolution and sub-evolution extraction. These evolutions, namely the frequent grouped sequential patterns, are required to cover a minimum surface and to affect pixels that are sufficiently connected. These spatial constraints are actively used to face large data volumes and to select evolutions making sense for end-users. In this paper, a specific application to fully polarimetric SAR image time series is presented. Preliminary experiments performed on a RADARSAT-2 SITS covering the Chamonix Mont-Blanc test-site are used to illustrate the proposed approach.

  6. Cyclic Evolution of Coronal Fields from a Coupled Dynamo Potential-Field Source-Surface Model.

    PubMed

    Dikpati, Mausumi; Suresh, Akshaya; Burkepile, Joan

    The structure of the Sun's corona varies with the solar-cycle phase, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. It is widely accepted that the large-scale coronal structure is governed by magnetic fields that are most likely generated by dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential-field source-surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation; these dynamo-generated fields are extended from the photosphere to the corona using a potential-field source-surface model. Assuming axisymmetry, we take linear combinations of associated Legendre polynomials that match the more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986 - 1991), we compute the coefficients of the associated Legendre polynomials up to degree eight and compare with observations. We show that at minimum the dipole term dominates, but it fades as the cycle progresses; higher-order multipolar terms begin to dominate. The amplitudes of these terms are not exactly the same for the two limbs, indicating that there is a longitude dependence. While both the 1986 and the 1996 minimum coronas were dipolar, the minimum in 2008 was unusual, since there was a substantial departure from a dipole. We investigate the physical cause of this departure by including a North-South asymmetry in the surface source of the magnetic fields in our flux-transport dynamo model, and find that this asymmetry could be one of the reasons for departure from the dipole in the 2008 minimum.

  7. Evolution of the net surface shortwave radiation over the Indian Ocean during summer MONEX (1979) - A satellite description

    NASA Technical Reports Server (NTRS)

    Gautier, C.

    1986-01-01

    The evolution of the net shortwave (NSW) radiation fields during the monsoon of 1979 was analyzed, using geostationary satellite data, collected before, during, and after the monsoon onset. It is seen, from the time sequence of NSW fields, that during the preonset phase the characteristics of the NSW field are dominated by a strong maximum in the entire Arabian Sea and by a strong minimum in the central and eastern equatorial Indian Ocean, the minimum being associated with the intense convective activity occurring in that region. As the season evolves, the minima of NSW associated with the large scale convective activity propagate westward in the equatorial ocean. During the monsoon onset, there occurs an explosive onset of the convection activity in the Arabian Sea: the maximum has retreated towards the Somalia coast, and most of the sea then experiences a strong minimum of NSW associated with the intense precipitation occurring along the southwestern coast of the Indian subcontinent.

  8. Plasma dynamics on current-carrying magnetic flux tubes

    NASA Technical Reports Server (NTRS)

    Swift, Daniel W.

    1992-01-01

    A 1D numerical simulation is used to investigate the evolution of a plasma in a current-carrying magnetic flux tube of variable cross section. A large potential difference, parallel to the magnetic field, is applied across the domain. The result is that the density minimum tends to deepen, primarily at the cathode end, and the entire potential drop becomes concentrated across the region of the density minimum. The evolution of the simulation shows some sensitivity to particle boundary conditions, but the simulations inevitably evolve into a final state with a nearly stationary double layer near the cathode end. The simulation results are at sufficient variance with observations that it appears unlikely that auroral electrons can be explained by a simple process of acceleration through a field-aligned potential drop.

  9. Solar cycle evolution of solar wind speed structure between 1973 and 1985 observed with the interplanetary scintillation method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kojima, M.; Kakinuma, T.

    1987-07-01

    The solar cycle evolution of solar wind speed structure was studied for the years from 1973 to 1985 on the basis of interplanetary scintillation observations using a new method for mapping solar wind speed to the source surface. The major minimum-speed regions are distributed along a neutral line through the whole period of a solar cycle: when solar activity is low, they are distributed on the wavy neutral line along the solar equator; in the active phase they also tend to be distributed along the neutral line, which has a large latitudinal amplitude. The minimum-speed regions tend to be distributed not only along the neutral line but also at low magnetic intensity regions and/or coronal bright regions which do not correspond to the neutral line. As the polar high-speed regions extend equatorward around the minimum phase, the latitudinal gradient of speed increases at the boundaries of the low-speed region, and the width of the low-speed region decreases. One or two years before the minimum of solar activity, two localized minimum-speed regions appear on the neutral line, and their locations are longitudinally separated by 180°.

  10. Efficiency and large deviations in time-asymmetric stochastic heat engines

    DOE PAGES

    Gingrich, Todd R.; Rotskoff, Grant M.; Vaikuntanathan, Suriyanarayanan; ...

    2014-10-24

    In a stochastic heat engine driven by a cyclic non-equilibrium protocol, fluctuations in work and heat give rise to a fluctuating efficiency. Using computer simulations and tools from large deviation theory, we have examined these fluctuations in detail for a model two-state engine. We find in general that the form of efficiency probability distributions is similar to those described by Verley et al (2014 Nat. Commun. 5 4721), in particular featuring a local minimum in the long-time limit. In contrast to the time-symmetric engine protocols studied previously, however, this minimum need not occur at the value characteristic of a reversible Carnot engine. Furthermore, while the local minimum may reside at the global minimum of a large deviation rate function, it does not generally correspond to the least likely efficiency measured over finite time. Lastly, we introduce a general approximation for the finite-time efficiency distribution, P(η), based on large deviation statistics of work and heat, that remains very accurate even when P(η) deviates significantly from its large deviation form.

  11. Fast computation of an optimal controller for large-scale adaptive optics.

    PubMed

    Massioni, Paolo; Kulcsár, Caroline; Raynaud, Henri-François; Conan, Jean-Marc

    2011-11-01

    The linear quadratic Gaussian regulator provides the minimum-variance control solution for a linear time-invariant system. For adaptive optics (AO) applications, under the hypothesis of a deformable mirror with instantaneous response, such a controller boils down to a minimum-variance phase estimator (a Kalman filter) and a projection onto the mirror space. The Kalman filter gain can be computed by solving an algebraic Riccati matrix equation, whose computational complexity grows very quickly with the size of the telescope aperture. This "curse of dimensionality" makes the standard solvers for Riccati equations very slow in the case of extremely large telescopes. In this article, we propose a way of computing the Kalman gain for AO systems by means of an approximation that considers the turbulence phase screen as the cropped version of an infinite-size screen. We demonstrate the advantages of the methods for both off- and on-line computational time, and we evaluate its performance for classical AO as well as for wide-field tomographic AO with multiple natural guide stars. Simulation results are reported.
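
    The bottleneck described here is the algebraic Riccati equation behind the steady-state Kalman gain. As a small illustration of that object (not the cropped-infinite-screen approximation the authors propose), the sketch below finds the gain by plain fixed-point iteration of the discrete Riccati recursion on a toy three-mode phase model; all matrices are invented for the example.

```python
import numpy as np

def steady_kalman_gain(A, C, Q, R, iters=500):
    """Steady-state Kalman gain via brute-force iteration of the discrete
    algebraic Riccati recursion (a slow stand-in for a dedicated solver)."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        S = C @ P @ C.T + R                          # innovation covariance
        K = P @ C.T @ np.linalg.inv(S)               # filter gain
        P = A @ (P - K @ C @ P) @ A.T + Q            # propagate the error covariance
    return P @ C.T @ np.linalg.inv(C @ P @ C.T + R)

# Toy stand-in for a turbulent-phase model: three modes with AR(1) temporal dynamics
A = 0.95 * np.eye(3)                  # temporal evolution of the phase modes
C = np.array([[1.0, 0.5, 0.2]])       # wavefront-sensor measurement operator
Q = 0.1 * np.eye(3)                   # turbulence driving noise
R = np.array([[0.05]])                # measurement noise
print(steady_kalman_gain(A, C, Q, R))
```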

  12. Postinflationary Higgs relaxation and the origin of matter-antimatter asymmetry.

    PubMed

    Kusenko, Alexander; Pearce, Lauren; Yang, Louis

    2015-02-13

    The recent measurement of the Higgs boson mass implies a relatively slow rise of the standard model Higgs potential at large scales, and a possible second minimum at even larger scales. Consequently, the Higgs field may develop a large vacuum expectation value during inflation. The relaxation of the Higgs field from its large postinflationary value to the minimum of the effective potential represents an important stage in the evolution of the Universe. During this epoch, the time-dependent Higgs condensate can create an effective chemical potential for the lepton number, leading to a generation of the lepton asymmetry in the presence of some large right-handed Majorana neutrino masses. The electroweak sphalerons redistribute this asymmetry between leptons and baryons. This Higgs relaxation leptogenesis can explain the observed matter-antimatter asymmetry of the Universe even if the standard model is valid up to the scale of inflation, and any new physics is suppressed by that high scale.

  13. HR 7578 - A K dwarf double-lined spectroscopic binary with peculiar abundances

    NASA Technical Reports Server (NTRS)

    Fekel, F. C., Jr.; Beavers, W. I.

    1983-01-01

    The number of double-lined K and M dwarf binaries which is currently known is quite small, only a dozen or less of each type. The HR 7578 system was classified as dK5 on the Mount Wilson system and as K2 V on the MK system. A summary of radial-velocity measurements including the observatory and weight of each observation is given in a table. The star with the stronger lines has been called component A. The final orbital element solution with all observations appropriately weighted was computed with a differential corrections computer program described by Barker et al. (1967). The program had been modified for the double-lined case. Of particular interest are the very large eccentricity of the system and the large minimum masses for each component. These large minimum masses suggest that eclipses may be detectable despite the relatively long period and small radii of the stars.

  14. A new design approach to achieve a minimum impulse limit cycle in the presence of significant measurement uncertainties

    NASA Technical Reports Server (NTRS)

    Martin, M. W.; Kubiak, E. T.

    1982-01-01

    A new design was developed for the Space Shuttle Transition Phase Digital Autopilot to reduce the impact of large measurement uncertainties in the rate signal during attitude control. The signal source, which was dictated by early computer constraints, is characterized by large quantization, noise, bias, and transport lag which produce a measurement uncertainty larger than the minimum impulse rate change. To ensure convergence to a minimum impulse limit cycle, the design employed bias and transport lag compensation and a switching logic with hysteresis, rate deadzone, and 'walking' switching line. The design background, the rate measurement uncertainties, and the design solution are documented.

  15. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE PAGES

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza; ...

    2017-05-18

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  16. Optimizing Variational Quantum Algorithms Using Pontryagin’s Minimum Principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhi -Cheng; Rahmani, Armin; Shabani, Alireza

    We use Pontryagin’s minimum principle to optimize variational quantum algorithms. We show that for a fixed computation time, the optimal evolution has a bang-bang (square pulse) form, both for closed and open quantum systems with Markovian decoherence. Our findings support the choice of evolution ansatz in the recently proposed quantum approximate optimization algorithm. Focusing on the Sherrington-Kirkpatrick spin glass as an example, we find a system-size independent distribution of the duration of pulses, with characteristic time scale set by the inverse of the coupling constants in the Hamiltonian. The optimality of the bang-bang protocols and the characteristic time scale of the pulses provide an efficient parametrization of the protocol and inform the search for effective hybrid (classical and quantum) schemes for tackling combinatorial optimization problems. Moreover, we find that the success rates of our optimal bang-bang protocols remain high even in the presence of weak external noise and coupling to a thermal bath.

  17. Simulating Cyclic Evolution of Coronal Magnetic Fields using a Potential Field Source Surface Model Coupled with a Dynamo Model

    NASA Astrophysics Data System (ADS)

    Suresh, A.; Dikpati, M.; Burkepile, J.; de Toma, G.

    2013-12-01

    The structure of the Sun's corona varies with solar cycle, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. Why does this pattern occur? It is widely accepted that large-scale coronal structure is governed by magnetic fields, which are most likely generated by the dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential field source surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation and above the photosphere these dynamo-generated fields are extended from the photosphere to the corona by using a potential field source surface model. Under the assumption of axisymmetry, the large-scale poloidal fields can be written in terms of the curl of a vector potential. Since from the photosphere and above the magnetic diffusivity is essentially infinite, the evolution of the vector potential is given by Laplace's Equation, the solution of which is obtained in the form of a first order Associated Legendre Polynomial. By taking linear combinations of these polynomial terms, we find solutions that match more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986-1991), we compute the coefficients of the Associated Legendre Polynomials up to degree eight and compare with observation. We reproduce some previous results that at minimum the dipole term dominates, but that this term fades with the progress of the cycle and higher order multipole terms begin to dominate. We find that the amplitudes of these terms are not exactly the same in the two limbs, indicating that there is some phi dependence. Furthermore, by comparing the solar minimum corona during the past three minima (1986, 1996, and 2008), we find that, while both the 1986 and 1996 minima were dipolar, the minimum in 2008 was unusual, as there was a departure from a dipole. In order to investigate the physical cause of this departure from a dipole, we implement north-south asymmetry in the surface source of the magnetic fields in our model, and find that such north-south asymmetry in the solar cycle could be one of the reasons for this departure. This work is partially supported by NASA's LWS grant with award number NNX08AQ34G. NCAR is sponsored by the NSF.

  18. An evaluation of superminicomputers for thermal analysis

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.; Vidal, J. B.; Jones, G. K.

    1962-01-01

    The feasibility and cost effectiveness of solving thermal analysis problems on superminicomputers is demonstrated. Conventional thermal analysis and the changing computer environment, computer hardware and software used, six thermal analysis test problems, performance of superminicomputers (CPU time, accuracy, turnaround, and cost) and comparison with large computers are considered. Although the CPU times for superminicomputers were 15 to 30 times greater than the fastest mainframe computer, the minimum cost to obtain the solutions on superminicomputers was from 11 percent to 59 percent of the cost of mainframe solutions. The turnaround (elapsed) time is highly dependent on the computer load, but for large problems, superminicomputers produced results in less elapsed time than a typically loaded mainframe computer.

  19. A New Look at Some Solar Wind Turbulence Puzzles

    NASA Technical Reports Server (NTRS)

    Roberts, Aaron

    2006-01-01

    Some aspects of solar wind turbulence have defied explanation. While it seems likely that the evolution of Alfvenicity and power spectra are largely explained by the shearing of an initial population of solar-generated Alfvenic fluctuations, the evolution of the anisotropies of the turbulence does not fit into the model so far. A two-component model, consisting of slab waves and quasi-two-dimensional fluctuations, offers some ideas, but does not account for the turning of both wave-vector-space power anisotropies and minimum variance directions in the fluctuating vectors as the Parker spiral turns. We will show observations that indicate that the minimum variance evolution is likely not due to traditional turbulence mechanisms, and offer arguments that the idea of two-component turbulence is at best a local approximation that is of little help in explaining the evolution of the fluctuations. Finally, time-permitting, we will discuss some observations that suggest that the low Alfvenicity of many regions of the solar wind in the inner heliosphere is not due to turbulent evolution, but rather to the existence of convected structures, including mini-clouds and other twisted flux tubes, that were formed with low Alfvenicity. There is still a role for turbulence in the above picture, but it is highly modified from the traditional views.

  20. Inclusion of the fitness sharing technique in an evolutionary algorithm to analyze the fitness landscape of the genetic code adaptability.

    PubMed

    Santos, José; Monteagudo, Ángel

    2017-03-27

    The canonical code, although prevailing in complex genomes, is not universal. The canonical genetic code has been shown to be more robust than random codes, but it is not clearly determined how it evolved towards its current form. The error minimization theory considers the minimization of the adverse effects of point mutations as the main selection factor in the evolution of the code. We have used simulated evolution in a computer to search for optimized codes, which helps to obtain information about the level of optimization reached by the canonical code during its evolution. A genetic algorithm searches for efficient codes in a fitness landscape that corresponds to the adaptability of possible hypothetical genetic codes. The lower the effects of errors or mutations in the codon bases of a hypothetical code, the more efficient or optimal that code is. The inclusion of the fitness sharing technique in the evolutionary algorithm allows the extent to which the canonical genetic code lies in an area corresponding to a deep local minimum to be easily determined, even in the high-dimensional spaces considered. The analyses show that the canonical code is not in a deep local minimum and that the fitness landscape is not a multimodal fitness landscape with deep and separated peaks. Moreover, the canonical code is clearly far away from the areas of higher fitness in the landscape. Given the absence of deep local minima in the landscape, although the code could evolve and different forces could shape its structure, the nature of the fitness landscape considered in the error minimization theory does not explain why the canonical code ended its evolution in a location that is not a localized deep minimum of the huge fitness landscape.
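
    To make the fitness-sharing ingredient concrete, here is a minimal sketch of a standard sharing scheme for a maximization GA: each individual's raw fitness is divided by a niche count built from a triangular kernel of radius sigma_share, so crowded regions of the landscape are penalized. The parameter values and the toy population are assumptions, not the authors' settings.

```python
import numpy as np

def shared_fitness(population, raw_fitness, sigma_share=1.0, alpha=1.0):
    """Fitness sharing: divide raw fitness by the niche count so the population
    spreads over several peaks instead of collapsing onto a single one."""
    pop = np.asarray(population, dtype=float)
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)   # pairwise distances
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)                                     # includes self, since sh(0) = 1
    return np.asarray(raw_fitness) / niche_count

# Three individuals crowd one region; the isolated one keeps more of its raw fitness.
pop = [[0.0, 0.0], [0.1, 0.0], [0.05, 0.05], [5.0, 5.0]]
print(shared_fitness(pop, [1.0, 1.0, 1.0, 1.0]))
```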

  1. Fast adaptive diamond search algorithm for block-matching motion estimation using spatial correlation

    NASA Astrophysics Data System (ADS)

    Park, Sang-Gon; Jeong, Dong-Seok

    2000-12-01

    In this paper, we propose a fast adaptive diamond search algorithm (FADS) for block matching motion estimation. Many fast motion estimation algorithms reduce the computational complexity by the UESA (Unimodal Error Surface Assumption) where the matching error monotonically increases as the search moves away from the global minimum point. Recently, many fast BMAs (Block Matching Algorithms) make use of the fact that global minimum points in real-world video sequences are centered at the position of zero motion. But these BMAs, especially for large motion, are easily trapped in local minima, resulting in poor matching accuracy. So, we propose a new motion estimation algorithm using the spatial correlation among the neighboring blocks. We move the search origin according to the motion vectors of the spatially neighboring blocks and their MAEs (Mean Absolute Errors). The computer simulation shows that the proposed algorithm has almost the same computational complexity as DS (Diamond Search), but enhances PSNR. Moreover, the proposed algorithm gives almost the same PSNR as FS (Full Search), even for large motion, with half the computational load.
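
    For reference, a bare-bones version of the classic two-stage diamond search that FADS builds on (large-diamond steps until the centre wins, then one small-diamond refinement) might look like the sketch below. The SAD block cost, block size, and synthetic frames are illustrative, and the adaptive relocation of the search origin from neighboring motion vectors described in the abstract is deliberately left out.

```python
import numpy as np

def sad(cur, ref, bx, by, dx, dy, bs=16):
    """Sum of absolute differences between the current block and a shifted reference block."""
    h, w = ref.shape
    x, y = bx + dx, by + dy
    if x < 0 or y < 0 or x + bs > w or y + bs > h:
        return np.inf                                    # candidate falls outside the frame
    cur_blk = cur[by:by + bs, bx:bx + bs].astype(int)
    ref_blk = ref[y:y + bs, x:x + bs].astype(int)
    return np.abs(cur_blk - ref_blk).sum()

def diamond_search(cur, ref, bx, by, bs=16, max_iter=32):
    """Two-stage diamond search: large diamond until the best point stays at the
    centre, then a single small-diamond refinement."""
    large = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    mvx = mvy = 0
    for _ in range(max_iter):
        costs = [sad(cur, ref, bx, by, mvx + dx, mvy + dy, bs) for dx, dy in large]
        best = int(np.argmin(costs))
        if best == 0:                                    # minimum already at the centre
            break
        mvx += large[best][0]
        mvy += large[best][1]
    costs = [sad(cur, ref, bx, by, mvx + dx, mvy + dy, bs) for dx, dy in small]
    best = int(np.argmin(costs))
    return mvx + small[best][0], mvy + small[best][1]

# Synthetic test: the reference frame is the current frame moved 3 px right and 2 px up
rng = np.random.default_rng(1)
cur = rng.integers(0, 256, (64, 64), dtype=np.uint8)
ref = np.roll(cur, shift=(-2, 3), axis=(0, 1))
print(diamond_search(cur, ref, 24, 24))                  # expect a motion vector near (3, -2)
```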

  2. Effects of Combined Stellar Feedback on Star Formation in Stellar Clusters

    NASA Astrophysics Data System (ADS)

    Wall, Joshua Edward; McMillan, Stephen; Pellegrino, Andrew; Mac Low, Mordecai; Klessen, Ralf; Portegies Zwart, Simon

    2018-01-01

    We present results of hybrid MHD+N-body simulations of star cluster formation and evolution, including self-consistent feedback from the stars in the form of radiation, winds, and supernovae from all stars more massive than 7 solar masses. The MHD is modeled with the adaptive mesh refinement code FLASH, while the N-body computations are done with a direct algorithm. Radiation is modeled using ray tracing along long characteristics in directions distributed using the HEALPIX algorithm, and causes ionization and momentum deposition, while winds and supernovae conserve momentum and energy during injection. Stellar evolution is followed using power-law fits to evolution models in SeBa. We use a gravity bridge within the AMUSE framework to couple the N-body dynamics of the stars to the gas dynamics in FLASH. Feedback from the massive stars alters the structure of young clusters as gas ejection occurs. We diagnose this behavior by distinguishing between a fractal distribution and central clustering using a Q parameter computed from the minimum spanning tree of each model cluster. Global effects of feedback in our simulations will also be discussed.
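
    As a rough illustration of the Q diagnostic mentioned above, the sketch below computes a Q-like statistic for a 2-D stellar distribution: the normalized mean edge length of the minimum spanning tree divided by the normalized mean pairwise separation, with low values indicating substructure and values above roughly 0.8 indicating central concentration. The normalization choices (cluster radius from the farthest member, area taken as a circle) follow one common convention and are assumptions here rather than the exact definition used by the authors.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from scipy.sparse.csgraph import minimum_spanning_tree

      def q_parameter(xy):
          # xy: (N, 2) array of projected stellar positions.
          n = len(xy)
          r_cluster = np.linalg.norm(xy - xy.mean(axis=0), axis=1).max()
          mst = minimum_spanning_tree(squareform(pdist(xy))).toarray()
          mean_edge = mst[mst > 0].mean()                       # mean MST edge length
          m_bar = mean_edge / (np.sqrt(np.pi * r_cluster**2 * n) / (n - 1))
          s_bar = pdist(xy).mean() / r_cluster                  # mean separation / cluster radius
          return m_bar / s_bar

      # e.g. a centrally concentrated (Gaussian) cluster typically gives Q above ~0.8
      print(q_parameter(np.random.default_rng(0).normal(size=(300, 2))))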

  3. PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Gorti, Sarma B; Peter, William H

    A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS. Since the powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms were developed for computing the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires the use of a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, which is consistent with the return-mapping algorithm, also was developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The numerical simulation results showed that for the disk samples, the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a variation smaller than that of the von Mises stress. It was found that for the disk and cylinder samples the minimum hydrostatic stresses were approximately 23 and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.

  4. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    PubMed

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.
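
    As a concrete example of one of the rule-based heuristics compared above, the sketch below implements Min-min over an expected-execution-time matrix: at each step it schedules the task whose earliest achievable completion time is smallest. The matrix and the VM count are invented for illustration; the other heuristics (FCFS, MCT, MET, Max-min, Sufferage) differ mainly in how the next task and machine are selected.

      import numpy as np

      def min_min(etc):
          # etc[i, j] = expected execution time of task i on VM j.
          n_tasks, n_vms = etc.shape
          ready = np.zeros(n_vms)              # time at which each VM becomes free
          unscheduled = set(range(n_tasks))
          assignment = {}
          while unscheduled:
              best = None                      # (task, vm, completion time)
              for t in unscheduled:
                  completion = ready + etc[t]
                  j = int(np.argmin(completion))
                  if best is None or completion[j] < best[2]:
                      best = (t, j, completion[j])
              t, j, ct = best
              assignment[t] = j
              ready[j] = ct
              unscheduled.remove(t)
          return assignment, ready.max()       # task-to-VM mapping and makespan

      etc = np.array([[4., 6.], [3., 8.], [9., 2.], [5., 5.]])   # 4 tasks, 2 VMs
      print(min_min(etc))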

  5. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    PubMed Central

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505

  6. Networked Microcomputers--The Next Generation in College Computing.

    ERIC Educational Resources Information Center

    Harris, Albert L.

    The evolution of computer hardware for college computing has mirrored the industry's growth. When computers were introduced into the educational environment, they had limited capacity and served one user at a time. Then came large mainframes with many terminals sharing the resource. Next, the use of computers in office automation emerged. As…

  7. Design evolution of large wind turbine generators

    NASA Technical Reports Server (NTRS)

    Spera, D. A.

    1979-01-01

    During the past five years, the goals of economy and reliability have led to a significant evolution in the basic design--both external and internal--of large wind turbine systems. To show the scope and nature of recent changes in wind turbine designs, developments of three types are described: (1) system configuration developments; (2) computer code developments; and (3) blade technology developments.

  8. Time evolution of giant molecular cloud mass functions with cloud-cloud collisions and gas resurrection in various environments

    NASA Astrophysics Data System (ADS)

    Kobayashi, M. I. N.; Inutsuka, S.; Kobayashi, H.; Hasegawa, K.

    We formulate the evolution equation for the giant molecular cloud (GMC) mass functions including self-growth of GMCs through the thermal instability, self-dispersal due to massive stars born in GMCs, cloud-cloud collisions (CCCs), and gas resurrection that replenishes the minimum-mass GMC population. The computed time evolutions obtained from this formulation suggest that the slope of the GMC mass function in the mass range < 10^5.5 M⊙ is governed by the ratio of GMC formation timescale to its dispersal timescale, and that the CCC process modifies only the massive end of the mass function. Our results also suggest that most of the dispersed gas contributes to the mass growth of pre-existing GMCs in arm regions whereas less than 60 per cent contributes in inter-arm regions.

  9. The Solar System Large Planets influence on a new Maunder Minimum

    NASA Astrophysics Data System (ADS)

    Yndestad, Harald; Solheim, Jan-Erik

    2016-04-01

    In the 1890s, G. Spörer and E. W. Maunder (1890) reported that solar activity stopped for a period of 70 years, from 1645 to 1715. Later, a reconstruction of solar activity confirmed the grand minima Maunder (1640-1720), Spörer (1390-1550), and Wolf (1270-1340), and the minima Oort (1010-1070) and Dalton (1785-1810) since the year 1000 A.D. (Usoskin et al. 2007). These minimum periods have been associated with less irradiation from the Sun and cold climate periods on Earth. The identification of three grand Maunder-type periods and two Dalton-type periods over a thousand years indicates that sooner or later there will be a colder climate on Earth from a new Maunder- or Dalton-type period. The cause of these minimum periods is not well understood. An expected new Maunder-type period is based on the properties of solar variability. If the solar variability has a deterministic element, we can better estimate a new Maunder grand minimum; a random solar variability can only explain the past. This investigation is based on the simple idea that if the solar variability has a deterministic property, it must have a deterministic source as a first cause. If this deterministic source is known, we can compute better estimates of the next expected Maunder grand minimum period. The study is based on a TSI ACRIM data series from 1700, a TSI ACRIM data series from 1000 A.D., a sunspot data series from 1611 and a solar barycenter orbit data series from 1000. The analysis method is based on wavelet spectrum analysis, to identify stationary periods, coincidence periods and their phase relations. The results show that the TSI variability and the sunspot variability have deterministic oscillations, controlled by the large planets Jupiter, Uranus and Neptune as the first cause. A deterministic model of TSI variability and sunspot variability confirms the known minimum and grand minimum periods since 1000. From this deterministic model we may expect a new Maunder-type sunspot minimum period from about 2018 to 2055. The deterministic model of the TSI ACRIM data series from 1700 computes a new Maunder-type grand minimum period from 2015 to 2071. A model of the longer TSI ACRIM data series from 1000 computes a new Dalton- to Maunder-type minimum irradiation period from 2047 to 2068.

  10. Bridging Social and Semantic Computing - Design and Evaluation of User Interfaces for Hybrid Systems

    ERIC Educational Resources Information Center

    Bostandjiev, Svetlin Alex I.

    2012-01-01

    The evolution of the Web brought new interesting problems to computer scientists that we loosely classify in the fields of social and semantic computing. Social computing is related to two major paradigms: computations carried out by a large amount of people in a collective intelligence fashion (i.e. wikis), and performing computations on social…

  11. Effect of a localized minimum in equatorial field strength on resistive tearing instability in the geomagnetotail

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hau, L.N.; Wolf, R.A.

    A two-dimensional, resistive-MHD computer code is used to investigate the spontaneous reconnection of magnetotail-like configurations. The initial conditions adopted in the simulations are of two types: (1) in which the equatorial normal magnetic field component B_ze declines monotonically down the tail, and (2) in which B_ze exhibits a deep minimum in the near-earth plasma sheet. Configurations of the second type have been suggested by Erickson (1984, 1985) to be the inevitable result of adiabatic, earthward convection of the plasma sheet. To represent the case where the earthward convection stops before the X line forms, i.e., the case where the interplanetary magnetic field turns northward after a period of southward orientation, the authors impose zero-flow boundary conditions at the edges of the computational box. The initial configurations are in equilibrium and stable within ideal MHD. The dynamic evolution of the system starts after the resistivity is turned on. The main results of the simulations basically support the neutral-line model of substorms and confirm Birn's (1980) computer studies. Specifically, they find spontaneous formation of an X-type neutral point and a single O-type plasmoid with strong tailward flow on the tailward side of the X point. In addition, the results show that the formation of the X point for the configurations of type 2 is clearly associated with the assumed initial B_z minimum. Furthermore, the time interval from turning on the resistivity to the formation of a plasmoid is much shorter in the case where there is an initial deep minimum.

  12. Stochastic evolutionary dynamics in minimum-effort coordination games

    NASA Astrophysics Data System (ADS)

    Li, Kun; Cong, Rui; Wang, Long

    2016-08-01

    The minimum-effort coordination game has recently drawn more attention because human behavior in this social dilemma is often inconsistent with the predictions of classical game theory. Here, we combine evolutionary game theory and coalescence theory to investigate this game in finite populations. Both analytic results and individual-based simulations show that effort costs play a key role in the evolution of contribution levels, in good agreement with experimental observations. Besides well-mixed populations, set-structured populations have also been taken into consideration. There we find that a large number of sets and a moderate migration rate greatly promote effort levels, especially for high effort costs.
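
    To make the game concrete, the sketch below uses a stylized minimum-effort payoff (every player earns the group minimum minus the cost of their own effort) and a pairwise-comparison (Fermi) imitation rule, a standard way of modelling stochastic evolutionary dynamics in finite populations. The payoff coefficients, the selection intensity beta and the well-mixed update scheme are illustrative assumptions, not the exact model analysed in the record above.

      import numpy as np

      def payoff(efforts, i, benefit=1.0, cost=0.5):
          # Player i earns the group minimum effort minus the cost of their own effort.
          return benefit * efforts.min() - cost * efforts[i]

      def fermi_update(efforts, beta=1.0, rng=None):
          # A random player i copies a random player j's effort with a probability
          # that increases with the payoff difference (pairwise-comparison rule).
          rng = np.random.default_rng() if rng is None else rng
          i, j = rng.choice(len(efforts), size=2, replace=False)
          p_copy = 1.0 / (1.0 + np.exp(-beta * (payoff(efforts, j) - payoff(efforts, i))))
          if rng.random() < p_copy:
              efforts[i] = efforts[j]
          return efforts

      efforts = np.random.default_rng(1).integers(1, 8, size=50).astype(float)
      for _ in range(20000):
          efforts = fermi_update(efforts, beta=2.0)
      print(efforts.min(), efforts.mean())   # with a high effort cost, low efforts tend to take over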

  13. 20 CFR 404.261 - Computing your special minimum primary insurance amount.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing your special minimum primary..., SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Special Minimum Primary Insurance Amounts § 404.261 Computing your special minimum primary insurance amount. (a) Years of coverage...

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pisin; Hsin, Po-Shen; Niu, Yuezhen, E-mail: pisinchen@phys.ntu.edu.tw, E-mail: r01222031@ntu.edu.tw, E-mail: yuezhenniu@gmail.com

    We investigate the entropy evolution in the early universe by computing the change of the entanglement entropy in Friedmann-Robertson-Walker quantum cosmology in the presence of a particle horizon. The matter is modeled by a Chaplygin gas so as to provide a smooth interpolation between the inflationary and radiation epochs, rendering the evolution of entropy from early time to late time trackable. We found that soon after the onset of the inflation, the total entanglement entropy rapidly decreases to a minimum. It then rises monotonically in the remainder of the inflation epoch as well as the radiation epoch. Our result is in qualitative agreement with the area law of Ryu and Takayanagi including the logarithmic correction. We comment on the possible implication of our finding for the cosmological entropy problem.

  15. Efficient Online Optimized Quantum Control for Adiabatic Quantum Computation

    NASA Astrophysics Data System (ADS)

    Quiroz, Gregory

    Adiabatic quantum computation (AQC) relies on controlled adiabatic evolution to implement a quantum algorithm. While control evolution can take many forms, properly designed time-optimal control has been shown to be particularly advantageous for AQC. Grover's search algorithm is one such example where analytically-derived time-optimal control leads to improved scaling of the minimum energy gap between the ground state and first excited state and thus, the well-known quadratic quantum speedup. Analytical extensions beyond Grover's search algorithm present a daunting task that requires potentially intractable calculations of energy gaps and a significant degree of model certainty. Here, an in situ quantum control protocol is developed for AQC. The approach is shown to yield controls that approach the analytically-derived time-optimal controls for Grover's search algorithm. In addition, the protocol's convergence rate as a function of iteration number is shown to be essentially independent of system size. Thus, the approach is potentially scalable to many-qubit systems.

  16. Inference of Evolutionary Jumps in Large Phylogenies using Lévy Processes

    PubMed Central

    Duchen, Pablo; Leuenberger, Christoph; Szilágyi, Sándor M.; Harmon, Luke; Eastman, Jonathan; Schweizer, Manuel

    2017-01-01

    Although it is now widely accepted that the rate of phenotypic evolution may not necessarily be constant across large phylogenies, the frequency and phylogenetic position of periods of rapid evolution remain unclear. In his highly influential view of evolution, G. G. Simpson supposed that such evolutionary jumps occur when organisms transition into so-called new adaptive zones, for instance after dispersal into a new geographic area, after rapid climatic changes, or following the appearance of an evolutionary novelty. Only recently, large, accurate and well calibrated phylogenies have become available that allow testing this hypothesis directly, yet inferring evolutionary jumps remains computationally very challenging. Here, we develop a computationally highly efficient algorithm to accurately infer the rate and strength of evolutionary jumps as well as their phylogenetic location. Following previous work we model evolutionary jumps as a compound process, but introduce a novel approach to sample jump configurations that does not require matrix inversions and thus naturally scales to large trees. We then make use of this development to infer evolutionary jumps in Anolis lizards and Loriinii parrots where we find strong signal for such jumps at the basis of clades that transitioned into new adaptive zones, just as postulated by Simpson’s hypothesis. [evolutionary jump; Lévy process; phenotypic evolution; punctuated equilibrium; quantitative traits.] PMID:28204787

  17. Design synthesis and optimization of permanent magnet synchronous machines based on computationally-efficient finite element analysis

    NASA Astrophysics Data System (ADS)

    Sizov, Gennadi Y.

    In this dissertation, a model-based multi-objective optimal design of permanent magnet ac machines, supplied by sine-wave current regulated drives, is developed and implemented. The design procedure uses an efficient electromagnetic finite element-based solver to accurately model nonlinear material properties and complex geometric shapes associated with magnetic circuit design. Application of an electromagnetic finite element-based solver allows for accurate computation of intricate performance parameters and characteristics. The first contribution of this dissertation is the development of a rapid computational method that allows accurate and efficient exploration of large multi-dimensional design spaces in search of optimum design(s). The computationally efficient finite element-based approach developed in this work provides a framework of tools that allow rapid analysis of synchronous electric machines operating under steady-state conditions. In the developed modeling approach, major steady-state performance parameters such as, winding flux linkages and voltages, average, cogging and ripple torques, stator core flux densities, core losses, efficiencies and saturated machine winding inductances, are calculated with minimum computational effort. In addition, the method includes means for rapid estimation of distributed stator forces and three-dimensional effects of stator and/or rotor skew on the performance of the machine. The second contribution of this dissertation is the development of the design synthesis and optimization method based on a differential evolution algorithm. The approach relies on the developed finite element-based modeling method for electromagnetic analysis and is able to tackle large-scale multi-objective design problems using modest computational resources. Overall, computational time savings of up to two orders of magnitude are achievable, when compared to current and prevalent state-of-the-art methods. These computational savings allow one to expand the optimization problem to achieve more complex and comprehensive design objectives. The method is used in the design process of several interior permanent magnet industrial motors. The presented case studies demonstrate that the developed finite element-based approach practically eliminates the need for using less accurate analytical and lumped parameter equivalent circuit models for electric machine design optimization. The design process and experimental validation of the case-study machines are detailed in the dissertation.

  18. Optimizing velocities and transports for complex coastal regions and archipelagos

    NASA Astrophysics Data System (ADS)

    Haley, Patrick J.; Agarwal, Arpit; Lermusiaux, Pierre F. J.

    2015-05-01

    We derive and apply a methodology for the initialization of velocity and transport fields in complex multiply-connected regions with multiscale dynamics. The result is initial fields that are consistent with observations, complex geometry and dynamics, and that can simulate the evolution of ocean processes without large spurious initial transients. A class of constrained weighted least squares optimizations is defined to best fit first-guess velocities while satisfying the complex bathymetry, coastline and divergence strong constraints. A weak constraint towards the minimum inter-island transports that are in accord with the first-guess velocities provides important velocity corrections in complex archipelagos. In the optimization weights, the minimum distance and vertical area between pairs of coasts are computed using a Fast Marching Method. Additional information on velocity and transports are included as strong or weak constraints. We apply our methodology around the Hawaiian islands of Kauai/Niihau, in the Taiwan/Kuroshio region and in the Philippines Archipelago. Comparisons with other common initialization strategies, among hindcasts from these initial conditions (ICs), and with independent in situ observations show that our optimization corrects transports, satisfies boundary conditions and redirects currents. Differences between the hindcasts from these different ICs are found to grow for at least 2-3 weeks. When compared to independent in situ observations, simulations from our optimized ICs are shown to have the smallest errors.

  19. What's in a Name?

    ERIC Educational Resources Information Center

    Petersen, Rodney

    2004-01-01

    The evolution of terms, such as computer security, network security, information security, and information assurance, appears to reflect a changing landscape, largely influenced by rapid developments in technology and the maturity of a relatively young profession and an emerging academic discipline. What lies behind the evolution of these terms?…

  20. A method for calculating minimum biodiversity offset multipliers accounting for time discounting, additionality and permanence

    PubMed Central

    Laitila, Jussi; Moilanen, Atte; Pouzols, Federico M

    2014-01-01

    Biodiversity offsetting, which means compensation for ecological and environmental damage caused by development activity, has recently been gaining strong political support around the world. One common criticism levelled at offsets is that they exchange certain and almost immediate losses for uncertain future gains. In the case of restoration offsets, gains may be realized after a time delay of decades, and with considerable uncertainty. Here we focus on offset multipliers, which are ratios between damaged and compensated amounts (areas) of biodiversity. Multipliers have the attraction of being an easily understandable way of deciding the amount of offsetting needed. On the other hand, exact values of multipliers are very difficult to compute in practice if at all possible. We introduce a mathematical method for deriving minimum levels for offset multipliers under the assumption that offsetting gains must compensate for the losses (no net loss offsetting). We calculate absolute minimum multipliers that arise from time discounting and delayed emergence of offsetting gains for a one-dimensional measure of biodiversity. Despite the highly simplified model, we show that even the absolute minimum multipliers may easily be quite large, in the order of dozens, and theoretically arbitrarily large, contradicting the relatively low multipliers found in literature and in practice. While our results inform policy makers about realistic minimal offsetting requirements, they also challenge many current policies and show the importance of rigorous models for computing (minimum) offset multipliers. The strength of the presented method is that it requires minimal underlying information. We include a supplementary spreadsheet tool for calculating multipliers to facilitate application. PMID:25821578
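
    As a pointer to how such minimum multipliers arise, the toy calculation below applies exponential time discounting and a success probability to the per-area offsetting gain and asks how many units of offset area are needed to cover one unit of immediate loss (no net loss). The discount rate, delay and success probability in the example are arbitrary illustrative values, and the one-line formula is a simplification of the article's model rather than a reimplementation of it.

      def minimum_offset_multiplier(loss_per_area, gain_per_area, delay_years,
                                    discount_rate, prob_success=1.0):
          # The discounted, risk-weighted gain per offset area must cover the immediate loss per damaged area.
          discounted_gain = gain_per_area * prob_success * (1.0 + discount_rate) ** (-delay_years)
          return loss_per_area / discounted_gain

      # Equal per-area gain and loss, 30-year delay, 3% discounting, 80% chance of success:
      print(minimum_offset_multiplier(1.0, 1.0, 30, 0.03, 0.8))   # about 3 offset units per unit lost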

  1. Computational modelling of large deformations in layered-silicate/PET nanocomposites near the glass transition

    NASA Astrophysics Data System (ADS)

    Figiel, Łukasz; Dunne, Fionn P. E.; Buckley, C. Paul

    2010-01-01

    Layered-silicate nanoparticles offer a cost-effective reinforcement for thermoplastics. Computational modelling has been employed to study large deformations in layered-silicate/poly(ethylene terephthalate) (PET) nanocomposites near the glass transition, as would be experienced during industrial forming processes such as thermoforming or injection stretch blow moulding. Non-linear numerical modelling was applied, to predict the macroscopic large deformation behaviour, with morphology evolution and deformation occurring at the microscopic level, using the representative volume element (RVE) approach. A physically based elasto-viscoplastic constitutive model, describing the behaviour of the PET matrix within the RVE, was numerically implemented into a finite element solver (ABAQUS) using an UMAT subroutine. The implementation was designed to be robust, for accommodating large rotations and stretches of the matrix local to, and between, the nanoparticles. The nanocomposite morphology was reconstructed at the RVE level using a Monte-Carlo-based algorithm that placed straight, high-aspect ratio particles according to the specified orientation and volume fraction, with the assumption of periodicity. Computational experiments using this methodology enabled prediction of the strain-stiffening behaviour of the nanocomposite, observed experimentally, as functions of strain, strain rate, temperature and particle volume fraction. These results revealed the probable origins of the enhanced strain stiffening observed: (a) evolution of the morphology (through particle re-orientation) and (b) early onset of stress-induced pre-crystallization (and hence lock-up of viscous flow), triggered by the presence of particles. The computational model enabled prediction of the effects of process parameters (strain rate, temperature) on evolution of the morphology, and hence on the end-use properties.

  2. CoRoT-2b: a Tidally Inflated, Young Exoplanet?

    NASA Astrophysics Data System (ADS)

    Guillot, Tristan; Havel, M.

    2009-09-01

    CoRoT-2b is among the most anomalously large transiting exoplanets known. Due to its large mass (3.3 Mjup), its large radius (~1.5 Rjup) cannot be explained by standard evolution models. Recipes that work for other anomalously large exoplanets (e.g. HD209458b), such as invoking kinetic energy transport in the planetary interior or increased opacities, clearly fail for CoRoT-2b. Interestingly, the planet's parent star is an active star with a large fraction (7 to 20%) of spots and a rapid rotation (4.5 days). We first model the star's evolution to accurately constrain the planetary parameters. We find that the stellar activity has little influence on the star's evolution and inferred parameters. However, stellar evolution models point towards two kinds of solutions for the star-planet system: (i) a very young system (20-40 Ma) with a star still undergoing pre-main sequence contraction, and a planet which could have a radius as low as 1.4 Rjup, or (ii) a young main-sequence star (40 to 500 Ma) with a planet that is slightly more inflated (~1.5 Rjup). In either case, planetary evolution models require a significant added internal energy to explain the inferred planet size: from a minimum of 3x10^28 erg/s in case (i), to up to 1.5x10^29 erg/s in case (ii). We find that evolution models consistently including planet/star tides are able to reproduce the inferred radius but only for a short period of time (~10 Ma). This points towards a young age for the star/planet system and dissipation by tides due to either circularization or synchronization of the planet. Additional observations of the star (infrared excess due to a disk?) and of the planet (precise Rossiter effect, IR secondary eclipse) would be highly valuable to understand the early evolution of star-exoplanet systems.

  3. Modeling the evolution of channel shape: Balancing computational efficiency with hydraulic fidelity

    USGS Publications Warehouse

    Wobus, C.W.; Kean, J.W.; Tucker, G.E.; Anderson, R. Scott

    2008-01-01

    The cross-sectional shape of a natural river channel controls the capacity of the system to carry water off a landscape, to convey sediment derived from hillslopes, and to erode its bed and banks. Numerical models that describe the response of a landscape to changes in climate or tectonics therefore require formulations that can accommodate evolution of channel cross-sectional geometry. However, fully two-dimensional (2-D) flow models are too computationally expensive to implement in large-scale landscape evolution models, while available simple empirical relationships between width and discharge do not adequately capture the dynamics of channel adjustment. We have developed a simplified 2-D numerical model of channel evolution in a cohesive, detachment-limited substrate subject to steady, unidirectional flow. Erosion is assumed to be proportional to boundary shear stress, which is calculated using an approximation of the flow field in which log-velocity profiles are assumed to apply along vectors that are perpendicular to the local channel bed. Model predictions of the velocity structure, peak boundary shear stress, and equilibrium channel shape compare well with predictions of a more sophisticated but more computationally demanding ray-isovel model. For example, the mean velocities computed by the two models are consistent to within ~3%, and the predicted peak shear stress is consistent to within ~7%. Furthermore, the shear stress distributions predicted by our model compare favorably with available laboratory measurements for prescribed channel shapes. A modification to our simplified code in which the flow includes a high-velocity core allows the model to be extended to estimate shear stress distributions in channels with large width-to-depth ratios. Our model is efficient enough to incorporate into large-scale landscape evolution codes and can be used to examine how channels adjust both cross-sectional shape and slope in response to tectonic and climatic forcing. Copyright 2008 by the American Geophysical Union.

  4. A computer-aided design system geared toward conceptual design in a research environment. [for hypersonic vehicles

    NASA Technical Reports Server (NTRS)

    STACK S. H.

    1981-01-01

    A computer-aided design system has recently been developed specifically for the small research group environment. The system is implemented on a Prime 400 minicomputer linked with a CDC 6600 computer. The goal was to assign the minicomputer specific tasks, such as data input and graphics, thereby reserving the large mainframe computer for time-consuming analysis codes. The basic structure of the design system consists of GEMPAK, a computer code that generates detailed configuration geometry from a minimum of input; interface programs that reformat GEMPAK geometry for input to the analysis codes; and utility programs that simplify computer access and data interpretation. The working system has had a large positive impact on the quantity and quality of research performed by the originating group. This paper describes the system, the major factors that contributed to its particular form, and presents examples of its application.

  5. Inference of Evolutionary Jumps in Large Phylogenies using Lévy Processes.

    PubMed

    Duchen, Pablo; Leuenberger, Christoph; Szilágyi, Sándor M; Harmon, Luke; Eastman, Jonathan; Schweizer, Manuel; Wegmann, Daniel

    2017-11-01

    Although it is now widely accepted that the rate of phenotypic evolution may not necessarily be constant across large phylogenies, the frequency and phylogenetic position of periods of rapid evolution remain unclear. In his highly influential view of evolution, G. G. Simpson supposed that such evolutionary jumps occur when organisms transition into so-called new adaptive zones, for instance after dispersal into a new geographic area, after rapid climatic changes, or following the appearance of an evolutionary novelty. Only recently, large, accurate and well calibrated phylogenies have become available that allow testing this hypothesis directly, yet inferring evolutionary jumps remains computationally very challenging. Here, we develop a computationally highly efficient algorithm to accurately infer the rate and strength of evolutionary jumps as well as their phylogenetic location. Following previous work we model evolutionary jumps as a compound process, but introduce a novel approach to sample jump configurations that does not require matrix inversions and thus naturally scales to large trees. We then make use of this development to infer evolutionary jumps in Anolis lizards and Loriinii parrots where we find strong signal for such jumps at the basis of clades that transitioned into new adaptive zones, just as postulated by Simpson's hypothesis. [evolutionary jump; Lévy process; phenotypic evolution; punctuated equilibrium; quantitative traits. The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.

  6. Transverse momentum dependent parton distributions at small- x

    DOE PAGES

    Xiao, Bo-Wen; Yuan, Feng; Zhou, Jian

    2017-05-23

    We study the transverse momentum dependent (TMD) parton distributions at small-x in a consistent framework that takes into account the TMD evolution and small-x evolution simultaneously. The small-x evolution effects are included by computing the TMDs at appropriate scales in terms of the dipole scattering amplitudes, which obey the relevant Balitsky–Kovchegov equation. Meanwhile, the TMD evolution is obtained by resumming the Collins–Soper type large logarithms emerged from the calculations in small-x formalism into Sudakov factors.

  7. Transverse momentum dependent parton distributions at small-x

    NASA Astrophysics Data System (ADS)

    Xiao, Bo-Wen; Yuan, Feng; Zhou, Jian

    2017-08-01

    We study the transverse momentum dependent (TMD) parton distributions at small-x in a consistent framework that takes into account the TMD evolution and small-x evolution simultaneously. The small-x evolution effects are included by computing the TMDs at appropriate scales in terms of the dipole scattering amplitudes, which obey the relevant Balitsky-Kovchegov equation. Meanwhile, the TMD evolution is obtained by resumming the Collins-Soper type large logarithms emerged from the calculations in small-x formalism into Sudakov factors.

  8. Transverse momentum dependent parton distributions at small- x

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiao, Bo-Wen; Yuan, Feng; Zhou, Jian

    We study the transverse momentum dependent (TMD) parton distributions at small-x in a consistent framework that takes into account the TMD evolution and small-x evolution simultaneously. The small-x evolution effects are included by computing the TMDs at appropriate scales in terms of the dipole scattering amplitudes, which obey the relevant Balitsky–Kovchegov equation. Meanwhile, the TMD evolution is obtained by resumming the Collins–Soper type large logarithms emerged from the calculations in small-x formalism into Sudakov factors.

  9. Large space structure damping design

    NASA Technical Reports Server (NTRS)

    Pilkey, W. D.; Haviland, J. K.

    1983-01-01

    Several FORTRAN subroutines and programs were developed which compute complex eigenvalues of a damped system using different approaches, and which rescale mode shapes to unit generalized mass and make rigid bodies orthogonal to each other. An analytical proof of a Minimum Constrained Frequency Criterion (MCFC) for a single damper is presented. A method to minimize the effect of control spill-over for large space structures is proposed. The characteristic equation of an undamped system with a generalized control law is derived using reanalysis theory. This equation can be implemented in computer programs for efficient eigenvalue analysis or control quasi synthesis. Methods to control vibrations in large space structures are reviewed and analyzed. The resulting prototype, using an electromagnetic actuator, is described.
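
    For reference, the minimal NumPy/SciPy sketch below performs the two operations named above: it computes the complex eigenvalues of a damped system M x'' + C x' + K x = 0 through the first-order companion (state-space) form, and it rescales mode shapes to unit generalized mass. The matrices are assumed small and dense; this is an illustrative stand-in for the FORTRAN routines, not a port of them.

      import numpy as np
      from scipy.linalg import eig

      def damped_eigs(M, C, K):
          # Eigenvalues of the companion matrix [[0, I], [-M^-1 K, -M^-1 C]] and the
          # displacement partition of the corresponding eigenvectors.
          n = M.shape[0]
          A = np.block([[np.zeros((n, n)), np.eye(n)],
                        [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
          lam, vec = eig(A)
          return lam, vec[:n, :]

      def mass_normalize(phi, M):
          # Rescale each mode shape so that phi^T M phi = 1 (unit generalized mass).
          gm = np.einsum('ij,jk,ki->i', phi.T, M, phi)
          return phi / np.sqrt(gm)

      M = np.diag([1.0, 2.0]); K = np.array([[20.0, -10.0], [-10.0, 10.0]]); C = 0.05 * K
      lam, phi = damped_eigs(M, C, K)
      print(np.sort_complex(lam))   # lightly damped poles appear in complex-conjugate pairs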

  10. Evolving Better Cars: Teaching Evolution by Natural Selection with a Digital Inquiry Activity

    ERIC Educational Resources Information Center

    Royer, Anne M.; Schultheis, Elizabeth H.

    2014-01-01

    Evolutionary experiments are usually difficult to perform in the classroom because of the large sizes and long timescales of experiments testing evolutionary hypotheses. Computer applications give students a window to observe evolution in action, allowing them to gain comfort with the process of natural selection and facilitating inquiry…

  11. AGIS: Evolution of Distributed Computing information system for ATLAS

    NASA Astrophysics Data System (ADS)

    Anisenkov, A.; Di Girolamo, A.; Alandes, M.; Karavakis, E.

    2015-12-01

    ATLAS, a particle physics experiment at the Large Hadron Collider at CERN, produces petabytes of data annually through simulation production and tens of petabytes of data per year from the detector itself. The ATLAS computing model embraces the Grid paradigm and a high degree of decentralization of computing resources in order to meet the ATLAS requirements of petabyte-scale data operations. It has evolved after the first period of LHC data taking (Run-1) in order to cope with the new challenges of the upcoming Run-2. In this paper we describe the evolution and recent developments of the ATLAS Grid Information System (AGIS), developed in order to integrate configuration and status information about resources, services and topology of the computing infrastructure used by the ATLAS Distributed Computing applications and services.

  12. Wideband Timing of Millisecond Pulsars

    NASA Astrophysics Data System (ADS)

    Pennucci, Timothy; Demorest, Paul; Ransom, Scott M.; North American Nanohertz Observatory for Gravitational Waves (NANOGrav)

    2015-01-01

    The use of backend instrumentation capable of real-time coherent dedispersion of relatively large fractional bandwidths has become commonplace in pulsar astronomy. However, along with the desired increase in sensitivity to pulsars' broadband signals, a larger instantaneous bandwidth brings a number of potentially aggravating effects that can lead to degraded timing precision. In the case of high-precision timing experiments, such as the one being carried out by the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), subtle effects such as unmodeled intrinsic profile evolution with frequency, interstellar scattering, and dispersion measure variation are potentially capable of reducing the experiment's sensitivity to a gravitational wave signal. In order to account for some of these complications associated with wideband observations, we augmented the traditional algorithm by which the fundamental timing quantities are measured. Our new measurement algorithm accommodates an arbitrary two-dimensional model ``portrait'' of a pulsar's total intensity as a function of observing frequency and rotational phase, and simultaneously determines the time-of-arrival (TOA), the dispersion measure (DM), and per-frequency-channel amplitudes that account for interstellar scintillation. Our publicly available python code incorporates a Gaussian-component modeling routine that allows for independent component evolution with frequency, a ``fiducial component'', and the inclusion of scattering. Here, we will present results from the application of our wideband measurement scheme to the suite of NANOGrav millisecond pulsars, which aimed to determine the level at which the experiment is being harmed by unmodeled profile evolution. We have found thus far, and expect to continue to find, that our new measurements are at least as good as those from traditional techniques. At a minimum, by largely reducing the volume of TOAs we will decrease the computational demand associated with probing posterior distributions in the search for gravitational waves. The development of this algorithm is well-motivated by the promise of even larger fractional bandwidth receiver systems in the future of pulsar astronomy.
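
    As background for the wideband measurement described above, the short sketch below evaluates the cold-plasma dispersion delay across a band and the per-channel rotational-phase shifts that a two-dimensional template has to absorb jointly with the TOA. The dispersion constant is the standard 4.148808 ms GHz^2 pc^-1 cm^3; the channel frequencies, spin period and reference frequency in the example are arbitrary illustrative values.

      import numpy as np

      K_DM_MS = 4.148808   # ms GHz^2 / (pc cm^-3)

      def channel_delays_ms(freqs_ghz, dm):
          # Dispersion delay of each channel relative to infinite frequency.
          return K_DM_MS * dm * freqs_ghz ** -2.0

      def channel_phase_shifts(freqs_ghz, dm, period_ms, ref_freq_ghz):
          # Pulse phase shift of each channel relative to the reference frequency, in turns.
          delays = channel_delays_ms(freqs_ghz, dm) - K_DM_MS * dm * ref_freq_ghz ** -2.0
          return (delays / period_ms) % 1.0

      freqs = np.linspace(1.1, 1.9, 8)    # GHz, e.g. an L-band receiver split into 8 channels
      print(channel_phase_shifts(freqs, dm=10.0, period_ms=5.0, ref_freq_ghz=1.5))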

  13. Hybrid Quantum-Classical Approach to Quantum Optimal Control.

    PubMed

    Li, Jun; Yang, Xiaodong; Peng, Xinhua; Sun, Chang-Pu

    2017-04-14

    A central challenge in quantum computing is to identify more computational problems for which utilization of quantum resources can offer significant speedup. Here, we propose a hybrid quantum-classical scheme to tackle the quantum optimal control problem. We show that the most computationally demanding part of gradient-based algorithms, namely, computing the fitness function and its gradient for a control input, can be accomplished by the process of evolution and measurement on a quantum simulator. By posing queries to and receiving answers from the quantum simulator, classical computing devices update the control parameters until an optimal control solution is found. To demonstrate the quantum-classical scheme in experiment, we use a seven-qubit nuclear magnetic resonance system, on which we have succeeded in optimizing state preparation without involving classical computation of the large Hilbert space evolution.

  14. Computational and experimental studies of LEBUs at high device Reynolds numbers

    NASA Technical Reports Server (NTRS)

    Bertelrud, Arild; Watson, R. D.

    1988-01-01

    The present paper summarizes computational and experimental studies for large-eddy breakup devices (LEBUs). LEBU optimization (using a computational approach considering compressibility, Reynolds number, and the unsteadiness of the flow) and experiments with LEBUs at high Reynolds numbers in flight are discussed. The measurements include streamwise as well as spanwise distributions of local skin friction. The unsteady flows around the LEBU devices and far downstream are characterized by strain-gage measurements on the devices and hot-wire readings downstream. Computations are made with available time-averaged and quasi-stationary techniques to find suitable device profiles with minimum drag.

  15. Temporal modulation transfer functions in auditory receptor fibres of the locust ( Locusta migratoria L.).

    PubMed

    Prinz, P; Ronacher, B

    2002-08-01

    The temporal resolution of auditory receptors of locusts was investigated by applying noise stimuli with sinusoidal amplitude modulations and by computing temporal modulation transfer functions. These transfer functions showed mostly bandpass characteristics, which are rarely found in other species at the level of receptors. From the upper cut-off frequencies of the modulation transfer functions the minimum integration times were calculated. Minimum integration times showed no significant correlation with the receptor spike rates but depended strongly on the body temperature. At 20 degrees C the average minimum integration time was 1.7 ms, dropping to 0.95 ms at 30 degrees C. The values found in this study correspond well to the range of minimum integration times found in birds and mammals. Gap detection is another standard paradigm to investigate temporal resolution. In locusts and other grasshoppers, application of this paradigm yielded values of the minimum detectable gap widths that are approximately twice as large as the minimum integration times reported here.

  16. Hybrid Topological Lie-Hamiltonian Learning in Evolving Energy Landscapes

    NASA Astrophysics Data System (ADS)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    In this Chapter, a novel bidirectional algorithm for hybrid (discrete + continuous-time) Lie-Hamiltonian evolution in adaptive energy landscape-manifold is designed and its topological representation is proposed. The algorithm is developed within a geometrically and topologically extended framework of Hopfield's neural nets and Haken's synergetics (it is currently designed in Mathematica, although with small changes it could be implemented in Symbolic C++ or any other computer algebra system). The adaptive energy manifold is determined by the Hamiltonian multivariate cost function H, based on the user-defined vehicle-fleet configuration matrix W, which represents the pseudo-Riemannian metric tensor of the energy manifold. Search for the global minimum of H is performed using random signal differential Hebbian adaptation. This stochastic gradient evolution is driven (or, pulled-down) by `gravitational forces' defined by the 2nd Lie derivatives of H. Topological changes of the fleet matrix W are observed during the evolution and its topological invariant is established. The evolution stops when the W-topology breaks down into several connectivity-components, followed by topology-breaking instability sequence (i.e., a series of phase transitions).

  17. Launch window analysis of satellites in high eccentricity or large circular orbits

    NASA Technical Reports Server (NTRS)

    Renard, M. L.; Bhate, S. K.; Sridharan, R.

    1973-01-01

    Numerical methods and computer programs for studying the stability and evolution of orbits of large eccentricity are presented. Methods for determining launch windows and target dates are developed. Mathematical models are prepared to analyze the characteristics of specific missions.

  18. Computational analysis of particle reinforced viscoelastic polymer nanocomposites - statistical study of representative volume element

    NASA Astrophysics Data System (ADS)

    Hu, Anqi; Li, Xiaolin; Ajdari, Amin; Jiang, Bing; Burkhart, Craig; Chen, Wei; Brinson, L. Catherine

    2018-05-01

    The concept of representative volume element (RVE) is widely used to determine the effective material properties of random heterogeneous materials. In the present work, the RVE is investigated for the viscoelastic response of particle-reinforced polymer nanocomposites in the frequency domain. The smallest RVE size and the minimum number of realizations at a given volume size for both structural and mechanical properties are determined for a given precision using the concept of margin of error. It is concluded that using the mean of many realizations of a small RVE instead of a single large RVE can retain the desired precision of a result with much lower computational cost (up to three orders of magnitude reduced computation time) for the property of interest. Both the smallest RVE size and the minimum number of realizations for a microstructure with higher volume fraction (VF) are larger compared to those of one with lower VF at the same desired precision. Similarly, a clustered structure is shown to require a larger minimum RVE size as well as a larger number of realizations at a given volume size compared to the well-dispersed microstructures.
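
    As an indication of how the margin-of-error criterion translates into a realization count, the sketch below estimates the smallest number of RVE realizations for which the sample mean of a property (for instance a storage modulus at one frequency) is expected to fall within a chosen relative error of the true mean at a given confidence level, using the usual normal-approximation formula with the standard deviation taken from pilot realizations. The 2% relative error and 95% confidence in the example are illustrative values, not the thresholds used in the study above.

      import numpy as np
      from scipy import stats

      def minimum_realizations(pilot_values, rel_error=0.05, confidence=0.95):
          # n >= (z * s / E)^2 with E the absolute margin of error around the pilot mean.
          s = np.std(pilot_values, ddof=1)
          e = rel_error * abs(np.mean(pilot_values))
          z = stats.norm.ppf(0.5 + confidence / 2.0)
          return int(np.ceil((z * s / e) ** 2))

      pilot = np.random.default_rng(3).normal(loc=2.5, scale=0.3, size=20)   # pilot property values
      print(minimum_realizations(pilot, rel_error=0.02, confidence=0.95))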

  19. 20 CFR 704.103 - Removal of certain minimums when computing or paying compensation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Removal of certain minimums when computing or... PROVISIONS FOR LHWCA EXTENSIONS Defense Base Act § 704.103 Removal of certain minimums when computing or... benefits are to be computed under section 9 of the LHWCA, 33 U.S.C. 909, shall not apply in computing...

  20. Constrained multi-objective optimization of storage ring lattices

    NASA Astrophysics Data System (ADS)

    Husain, Riyasat; Ghodke, A. D.

    2018-03-01

    Storage ring lattice optimization is a class of constrained multi-objective optimization problem where, in addition to low beam emittance, a large dynamic aperture for good injection efficiency and improved beam lifetime are also desirable. Convergence and computation times are of great concern for the optimization algorithms, as various objectives are to be optimized and a number of accelerator parameters must be varied over a large span with several constraints. In this paper, a study of storage ring lattice optimization using differential evolution is presented. The optimization results are compared with the two most widely used optimization techniques in accelerators: genetic algorithm and particle swarm optimization. It is found that differential evolution produces a better Pareto optimal front, in reasonable computation time, between two conflicting objectives: beam emittance and the dispersion function in the straight section. Differential evolution was used extensively for the optimization of the linear and nonlinear lattices of Indus-2 to explore various operational modes within the magnet power supply capabilities.
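
    Because the record above centres on differential evolution, the sketch below runs SciPy's differential_evolution on a toy two-variable surrogate: stand-ins for "emittance" and "dispersion" folded into a single weighted objective, with a nonlinear "tune" constraint. The objective, bounds and constraint are invented for illustration and are unrelated to the actual Indus-2 lattice; a genuine multi-objective study would keep the objectives separate and construct a Pareto front.

      import numpy as np
      from scipy.optimize import differential_evolution, NonlinearConstraint

      def objective(x):
          # Toy surrogates: x could represent two quadrupole strengths.
          emittance = (x[0] - 1.0) ** 2 + 0.5 * (x[1] + 0.3) ** 2
          dispersion = (x[0] * x[1] - 0.1) ** 2
          return emittance + 10.0 * dispersion

      tune = NonlinearConstraint(lambda x: x[0] + 2.0 * x[1], -1.0, 1.0)   # keep a surrogate "tune" in band

      result = differential_evolution(objective, bounds=[(-2, 2), (-2, 2)],
                                      constraints=(tune,), seed=1)
      print(result.x, result.fun)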

  1. Using traveling salesman problem algorithms for evolutionary tree construction.

    PubMed

    Korostensky, C; Gonnet, G H

    2000-07-01

    The construction of evolutionary trees is one of the major problems in computational biology, mainly due to its complexity. We present a new tree construction method that constructs a tree with minimum score for a given set of sequences, where the score is the amount of evolution measured in PAM distances. To do this, the problem of tree construction is reduced to the Traveling Salesman Problem (TSP). The input for the TSP algorithm is the set of pairwise distances of the sequences, and the output is a circular tour through the optimal, unknown tree plus the minimum score of the tree. The circular order and the score can be used to construct the topology of the optimal tree. Our method can be used for any scoring function that correlates with the amount of change along the branches of an evolutionary tree; for instance, it could also be used for parsimony scores, but it cannot be used for a least-squares fit of distances. A TSP solution reduces the space of all possible trees to 2n. Using this order, we can guarantee that we reconstruct a correct evolutionary tree if the absolute value of the error of each distance measurement is smaller than a bound set by the length of the shortest edge in the tree. For data sets with large errors, a dynamic programming approach is used to reconstruct the tree. Finally, simulations and experiments with real data are shown.
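
    To illustrate the reduction, the sketch below builds a circular order over a pairwise distance matrix with a cheap nearest-neighbour heuristic and reports the tour length. In the method above this role is played by a proper TSP solver over PAM distances, so the greedy construction here is only an illustrative stand-in for obtaining a circular order of the sequences.

      import numpy as np

      def nearest_neighbour_tour(dist):
          # Greedy TSP heuristic over a symmetric distance matrix with zero diagonal.
          n = dist.shape[0]
          tour, unvisited = [0], set(range(1, n))
          while unvisited:
              nxt = min(unvisited, key=lambda j: dist[tour[-1], j])
              tour.append(nxt)
              unvisited.remove(nxt)
          return tour

      def tour_length(dist, tour):
          return sum(dist[tour[i], tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      # e.g. with a PAM-like distance matrix D (symmetric, zero diagonal):
      # order = nearest_neighbour_tour(D); print(order, tour_length(D, order))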

  2. Global optimization algorithms to compute thermodynamic equilibria in large complex systems with performance considerations

    DOE PAGES

    Piro, M. H. A.; Simunovic, S.

    2016-03-17

    Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum, to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum, and that this is achieved with satisfactory computational performance, becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N^3) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.

  3. Global optimization algorithms to compute thermodynamic equilibria in large complex systems with performance considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piro, M. H. A.; Simunovic, S.

    Several global optimization methods are reviewed that attempt to ensure that the integral Gibbs energy of a closed isothermal isobaric system is a global minimum, to satisfy the necessary and sufficient conditions for thermodynamic equilibrium. In particular, the integral Gibbs energy function of a multicomponent system containing non-ideal phases may be highly non-linear and non-convex, which makes finding a global minimum a challenge. Consequently, a poor numerical approach may lead one to the false belief of equilibrium. Furthermore, confirming that one reaches a global minimum, and that this is achieved with satisfactory computational performance, becomes increasingly more challenging in systems containing many chemical elements and a correspondingly large number of species and phases. Several numerical methods that have been used for this specific purpose are reviewed with a benchmark study of three of the more promising methods using five case studies of varying complexity. A modification of the conventional Branch and Bound method is presented that is well suited to a wide array of thermodynamic applications, including complex phases with many constituents and sublattices, and ionic phases that must adhere to charge neutrality constraints. Also, a novel method is presented that efficiently solves the system of linear equations by exploiting the unique structure of the Hessian matrix, which reduces the calculation from an O(N^3) operation to an O(N) operation. As a result, this combined approach demonstrates efficiency, reliability and capabilities that are favorable for integration of thermodynamic computations into multi-physics codes with inherent performance considerations.

  4. Observation of quantum criticality with ultracold atoms in optical lattices

    NASA Astrophysics Data System (ADS)

    Zhang, Xibo

    As biological problems are becoming more complex and data growing at a rate much faster than that of computer hardware, new and faster algorithms are required. This dissertation investigates computational problems arising in two of the fields: comparative genomics and epigenomics, and employs a variety of computational techniques to address the problems. One fundamental question in the studies of chromosome evolution is whether the rearrangement breakpoints are happening at random positions or along certain hotspots. We investigate the breakpoint reuse phenomenon, and show the analyses that support the more recently proposed fragile breakage model as opposed to the conventional random breakage models for chromosome evolution. The identification of syntenic regions between chromosomes forms the basis for studies of genome architectures, comparative genomics, and evolutionary genomics. The previous synteny block reconstruction algorithms could not be scaled to a large number of mammalian genomes being sequenced; neither did they address the issue of generating non-overlapping synteny blocks suitable for analyzing rearrangements and evolutionary history of large-scale duplications prevalent in plant genomes. We present a new unified synteny block generation algorithm based on A-Bruijn graph framework that overcomes these shortcomings. In the epigenome sequencing, a sample may contain a mixture of epigenomes and there is a need to resolve the distinct methylation patterns from the mixture. Many sequencing applications, such as haplotype inference for diploid or polyploid genomes, and metagenomic sequencing, share the similar objective: to infer a set of distinct assemblies from reads that are sequenced from a heterogeneous sample and subsequently aligned to a reference genome. We model the problem from both a combinatorial and a statistical angles. First, we describe a theoretical framework. A linear-time algorithm is then given to resolve a minimum number of assemblies that are consistent with all reads, substantially improving on previous algorithms. An efficient algorithm is also described to determine a set of assemblies that is consistent with a maximum subset of the reads, a previously untreated problem. We then prove that allowing nested reads or permitting mismatches between reads and their assemblies renders these problems NP-hard. Second, we describe a mixture model-based approach, and applied the model for the detection of allele-specific methylations.

  5. On the subduction of oxygenated surface water in submesoscale cold filaments off Peru.

    NASA Astrophysics Data System (ADS)

    Thomsen, Soeren; Kanzow, Torsten; Colas, Francois; Echevin, Vincent; Krahmann, Gerd

    2015-04-01

    The Peruvian upwelling regime is characterized by pronounced submesoscale variability including filaments and sharp density fronts. Submesoscale frontal processes can drive large vertical velocities and enhance vertical tracer fluxes in the upper ocean. The associated high temporal and spatial variability poses a large challenge to observational approaches targeting submesoscale processes. In this study the role of submesoscale processes for both the ventilation of the near-coastal oxygen minimum zone off Peru and the physical-biogeochemical coupling at these scales is investigated. For our study we use satellite based sea surface temperature measurements in combination with multiple high-resolution glider observations of temperature, salinity, oxygen and chlorophyll fluorescence carried out in January and February 2013 off Peru near 14°S during active upwelling. Additionally, high-resolution regional ocean circulation model outputs (ROMS) are analysed. At the beginning of our observations a previously upwelled, productive and highly oxygenated body of water is found within the mixed layer. Subsequently, a cold filament forms and the waters are moved offshore. After the decay of the filament and the relaxation of the upwelling front, the oxygen enriched surface water is found within the previously less oxygenated thermocline, suggesting the occurrence of frontal subduction. A numerical model simulation is used to analyse the evolution of passive tracers and Lagrangian floats within several upwelling filaments, whose vertical structure and hydrographic properties agree well with the observations. The simulated temporal evolution of the tracers and floats supports our interpretation that the subduction of previously upwelled water indeed occurs within cold filaments off Peru. Filaments are common features within eastern boundary upwelling systems, which all encompass large oxygen minimum zones. However, most state-of-the-art large- and regional-scale physical-biogeochemical ocean models do not resolve submesoscale filaments and the associated downward transport of oxygen and other solutes. Even if the observed subduction event only reaches into the still oxygenated thermocline, the associated ventilation mechanism likely influences the shape and depth of the upper boundary of oxygen minimum zones, which would probably be even shallower without this process.

  6. Complexity of the Quantum Adiabatic Algorithm

    NASA Astrophysics Data System (ADS)

    Hen, Itay

    2013-03-01

    The Quantum Adiabatic Algorithm (QAA) has been proposed as a mechanism for efficiently solving optimization problems on a quantum computer. Since adiabatic computation is analog in nature and does not require the design and use of quantum gates, it can be thought of as a simpler and perhaps more profound method for performing quantum computations that might also be easier to implement experimentally. While these features have generated substantial research in QAA, to date there is still a lack of solid evidence that the algorithm can outperform classical optimization algorithms. Here, we discuss several aspects of the quantum adiabatic algorithm: We analyze the efficiency of the algorithm on several ``hard'' (NP) computational problems. Studying the size dependence of the typical minimum energy gap of the Hamiltonians of these problems using quantum Monte Carlo methods, we find that while for most problems the minimum gap decreases exponentially with the size of the problem, indicating that the QAA is not more efficient than existing classical search algorithms, for other problems there is evidence to suggest that the gap may be polynomial near the phase transition. We also discuss applications of the QAA to ``real life'' problems and how they can be implemented on currently available (albeit prototypical) quantum hardware such as ``D-Wave One'', which imposes serious restrictions on the types of problems that may be tested. Finally, we discuss different approaches to find improved implementations of the algorithm such as local adiabatic evolution, adaptive methods, local search in Hamiltonian space and others.

  7. 20 CFR 229.41 - When a spouse can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE When... annuity rate under the overall minimum. A spouse's inclusion in the computation of the overall minimum...

  8. MEvoLib v1.0: the first molecular evolution library for Python.

    PubMed

    Álvarez-Jarreta, Jorge; Ruiz-Pesini, Eduardo

    2016-10-28

    Molecular evolution studies involve many different hard computational problems solved, in most cases, with heuristic algorithms that provide a nearly optimal solution. Hence, diverse software tools exist for the different stages involved in a molecular evolution workflow. We present MEvoLib, the first molecular evolution library for Python, providing a framework to work with different tools and methods involved in the common tasks of molecular evolution workflows. In contrast with already existing bioinformatics libraries, MEvoLib is focused on the stages involved in molecular evolution studies, enclosing the set of tools with a common purpose in a single high-level interface with fast access to their frequent parameterizations. The gene clustering from partial or complete sequences has been improved with a new method that integrates accessible external information (e.g. GenBank's features data). Moreover, MEvoLib adjusts the fetching process from NCBI databases to optimize the download bandwidth usage. In addition, it has been implemented using parallelization techniques to cope with even large-case scenarios. MEvoLib is the first library for Python designed to facilitate molecular evolution research for both expert and novice users. Its unique interface for each common task comprises several tools with their most used parameterizations. It also includes a method that takes advantage of biological knowledge to improve the gene partition of sequence datasets. Additionally, its implementation incorporates parallelization techniques to reduce computational costs when handling very large input datasets.

  9. The effect of tidal forces on the minimum energy configurations of the full three-body problem

    NASA Astrophysics Data System (ADS)

    Levine, Edward

    We investigate the evolution of minimum energy configurations for the Full Three Body Problem (3BP). A stable ternary asteroid system will gradually become unstable due to the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect and an unpredictable trajectory will ensue. Through the interaction of tidal torques, energy in the system will dissipate in the form of heat until a stable minimum energy configuration is reached. We present a simulation that describes the dynamical evolution of three bodies under the mutual effects of gravity and tidal torques. Simulations show that bodies do not get stuck in local minima and transition to the predicted minimum energy configuration.

  10. RNAdualPF: software to compute the dual partition function with sample applications in molecular evolution theory.

    PubMed

    Garcia-Martin, Juan Antonio; Bayegan, Amir H; Dotu, Ivan; Clote, Peter

    2016-10-19

    RNA inverse folding is the problem of finding one or more sequences that fold into a user-specified target structure s0, i.e. whose minimum free energy secondary structure is identical to the target s0. Here we consider the ensemble of all RNA sequences that have low free energy with respect to a given target s0. We introduce the program RNAdualPF, which computes the dual partition function Z*, defined as the sum of Boltzmann factors exp(-E(a,s0)/RT) of all RNA nucleotide sequences a compatible with target structure s0. Using RNAdualPF, we efficiently sample RNA sequences that approximately fold into s0, where additionally the user can specify IUPAC sequence constraints at certain positions, and whether to include dangles (energy terms for stacked, single-stranded nucleotides). Moreover, since we also compute the dual partition function Z*(k) over all sequences having GC-content k, the user can require that all sampled sequences have a precise, specified GC-content. Using Z*, we compute the dual expected energy 〈E*〉, and use it to show that natural RNAs from the Rfam 12.0 database have higher minimum free energy than expected, thus suggesting that functional RNAs are under evolutionary pressure to be only marginally thermodynamically stable. We show that C. elegans precursor microRNA (pre-miRNA) is significantly non-robust with respect to mutations, by comparing the robustness of each wild type pre-miRNA sequence with 2000 [resp. 500] sequences of the same GC-content generated by RNAdualPF, which approximately [resp. exactly] fold into the wild type target structure. We confirm and strengthen earlier findings that precursor microRNAs and bacterial small noncoding RNAs display plasticity, a measure of structural diversity. We describe RNAdualPF, which rapidly computes the dual partition function Z* and samples sequences having low energy with respect to a target structure, allowing sequence constraints and specified GC-content. Using different inverse folding software, another group had earlier shown that pre-miRNA is mutationally robust, even controlling for compositional bias. Our opposite conclusion suggests a cautionary note that computationally based insights into molecular evolution may heavily depend on the software used. C/C++ software for RNAdualPF is available at http://bioinformatics.bc.edu/clotelab/RNAdualPF.
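
    The definition of the dual partition function can be illustrated with a brute-force toy in Python. This is not RNAdualPF's algorithm: it uses an invented one-parameter energy model (a fixed energy per base pair of the target) and simply enumerates all compatible sequences of a short dot-bracket structure.

        from itertools import product
        from math import exp

        RT = 0.616  # kcal/mol near 37 C (assumed value)
        PAIRS = {"AU", "UA", "GC", "CG", "GU", "UG"}

        def dual_partition_function(structure, pair_energy=-2.0):
            """Brute-force Z* = sum over sequences compatible with `structure`
            of exp(-E(a, s0)/RT), with a toy energy of `pair_energy` per pair."""
            stack, pairs = [], []
            for i, c in enumerate(structure):     # recover base pairs from dot-bracket
                if c == "(":
                    stack.append(i)
                elif c == ")":
                    pairs.append((stack.pop(), i))
            Z = 0.0
            for seq in product("ACGU", repeat=len(structure)):
                if all(seq[i] + seq[j] in PAIRS for i, j in pairs):
                    Z += exp(-pair_energy * len(pairs) / RT)
            return Z

        print(dual_partition_function("((..))"))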

  11. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  12. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  13. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  14. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true When a child can no longer be included in... Entitlement Under the Overall Minimum Ends § 229.42 When a child can no longer be included in computing an annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate...

  15. 20 CFR 225.15 - Overall Minimum PIA.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Security Act based on combined railroad and social security earnings. The Overall Minimum PIA is used in computing the social security overall minimum guaranty amount. The overall minimum guaranty rate annuity... INSURANCE AMOUNT DETERMINATIONS PIA's Used in Computing Employee, Spouse and Divorced Spouse Annuities § 225...

  16. Dynamic remapping of parallel computations with varying resource demands

    NASA Technical Reports Server (NTRS)

    Nicol, D. M.; Saltz, J. H.

    1986-01-01

    A large class of computational problems is characterized by frequent synchronization, and computational requirements which change as a function of time. When such a problem must be solved on a message passing multiprocessor machine, the combination of these characteristics lead to system performance which decreases in time. Performance can be improved with periodic redistribution of computational load; however, redistribution can exact a sometimes large delay cost. We study the issue of deciding when to invoke a global load remapping mechanism. Such a decision policy must effectively weigh the costs of remapping against the performance benefits. We treat this problem by constructing two analytic models which exhibit stochastically decreasing performance. One model is quite tractable; we are able to describe the optimal remapping algorithm, and the optimal decision policy governing when to invoke that algorithm. However, computational complexity prohibits the use of the optimal remapping decision policy. We then study the performance of a general remapping policy on both analytic models. This policy attempts to minimize a statistic W(n) which measures the system degradation (including the cost of remapping) per computation step over a period of n steps. We show that as a function of time, the expected value of W(n) has at most one minimum, and that when this minimum exists it defines the optimal fixed-interval remapping policy. Our decision policy appeals to this result by remapping when it estimates that W(n) is minimized. Our performance data suggests that this policy effectively finds the natural frequency of remapping. We also use the analytic models to express the relationship between performance and remapping cost, number of processors, and the computation's stochastic activity.
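
    The W(n) decision rule described above can be sketched directly in Python. The degradation series and remap cost below are made-up inputs; the policy simply remaps at the first step where W(n) stops decreasing, which (per the abstract) coincides with its unique minimum when one exists.

        def remap_when_w_minimized(step_degradation, remap_cost):
            """Track W(n) = (sum of degradation over n steps + remap cost) / n
            and trigger a remap once W(n) has passed its minimum."""
            total, best_w, best_n = 0.0, float("inf"), None
            for n, d in enumerate(step_degradation, start=1):
                total += d
                w = (total + remap_cost) / n
                if w < best_w:
                    best_w, best_n = w, n
                elif w > best_w:          # W(n) is increasing again: remap now
                    return best_n, best_w
            return best_n, best_w

        # degradation that grows over time (performance decays between remaps)
        degr = [0.1 * k for k in range(1, 40)]
        print(remap_when_w_minimized(degr, remap_cost=10.0))   # minimum near n = 14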

  17. VizieR Online Data Catalog: Evolution of rotating very massive LMC stars (Kohler, 2015)

    NASA Astrophysics Data System (ADS)

    Kohler, K.; Langer, N.; de Koter, A.; de Mink, S. E.; Crowther, P. A.; Evans, C. J.; Grafener, G.; Sana, H.; Sanyal, D.; Schneider, F. R. N.; Vink, J. S.

    2014-11-01

    A dense model grid with chemical composition appropriate for the Large Magellanic Cloud is presented. A one-dimensional hydrodynamic stellar evolution code was used to compute our models on the main sequence, taking into account rotation, transport of angular momentum by magnetic fields and stellar wind mass loss. We present stellar evolution models with initial masses of 70-500M⊙ and with initial surface rotational velocities of 0-550km/s. (2 data files).

  18. The first sub-70 min non-interacting WD-BD system: EPIC212235321

    NASA Astrophysics Data System (ADS)

    Casewell, S. L.; Braker, I. P.; Parsons, S. G.; Hermes, J. J.; Burleigh, M. R.; Belardi, C.; Chaushev, A.; Finch, N. L.; Roy, M.; Littlefair, S. P.; Goad, M.; Dennihy, E.

    2018-05-01

    We present the discovery of the shortest period, non-interacting, white dwarf-brown dwarf post-common-envelope binary known. The K2 light curve shows that the system, EPIC 212235321, has a period of 68.2 min and is not eclipsing, but does show a large reflection effect due to the irradiation of the brown dwarf by the white dwarf primary. Spectra show hydrogen, magnesium, and calcium emission features from the brown dwarf's irradiated hemisphere, and the mass indicates the spectral type is likely to be L3. Despite having a period substantially lower than the cataclysmic variable period minimum, this system is likely a pre-cataclysmic binary, recently emerged from the common envelope. These systems are rare, but provide limits on the lowest mass object that can survive common-envelope evolution, and provide information about the evolution of white dwarf progenitors and post-common-envelope evolution.

  19. Challenges to assessing connectivity between massive populations of the Australian plague locust

    PubMed Central

    Chapuis, Marie-Pierre; Popple, Julie-Anne M.; Berthier, Karine; Simpson, Stephen J.; Deveson, Edward; Spurgin, Peter; Steinbauer, Martin J.; Sword, Gregory A.

    2011-01-01

    Linking demographic and genetic dispersal measures is of fundamental importance for movement ecology and evolution. However, such integration can be difficult, particularly for highly fecund species that are often the target of management decisions guided by an understanding of population movement. Here, we present an example of how the influence of large population sizes can preclude genetic approaches from assessing demographic population structuring, even at a continental scale. The Australian plague locust, Chortoicetes terminifera, is a significant pest, with populations on the eastern and western sides of Australia having been monitored and managed independently to date. We used microsatellites to assess genetic variation in 12 C. terminifera population samples separated by up to 3000 km. Traditional summary statistics indicated high levels of genetic diversity and a surprising lack of population structure across the entire range. An approximate Bayesian computation treatment indicated that levels of genetic diversity in C. terminifera corresponded to effective population sizes conservatively composed of tens of thousands to several million individuals. We used these estimates and computer simulations to estimate the minimum rate of dispersal, m, that could account for the observed range-wide genetic homogeneity. The rate of dispersal between both sides of the Australian continent could be several orders of magnitude lower than that typically considered as required for the demographic connectivity of populations. PMID:21389030

  20. Mammal body size evolution in North America and Europe over 20 Myr: similar trends generated by different processes.

    PubMed

    Huang, Shan; Eronen, Jussi T; Janis, Christine M; Saarinen, Juha J; Silvestro, Daniele; Fritz, Susanne A

    2017-02-22

    Because body size interacts with many fundamental biological properties of a species, body size evolution can be an essential component of the generation and maintenance of biodiversity. Here we investigate how body size evolution can be linked to the clade-specific diversification dynamics in different geographical regions. We analyse an extensive body size dataset of Neogene large herbivores (covering approx. 50% of the 970 species in the orders Artiodactyla and Perissodactyla) in Europe and North America in a Bayesian framework. We reconstruct the temporal patterns of body size in each order on each continent independently, and find significant increases of minimum size in three of the continental assemblages (except European perissodactyls), suggesting an active selection for larger bodies. Assessment of trait-correlated birth-death models indicates that the common trend of body size increase is generated by different processes in different clades and regions. Larger-bodied artiodactyl species on both continents tend to have higher origination rates, and both clades in North America show strong links between large bodies and low extinction rate. Collectively, our results suggest a strong role of species selection and perhaps of higher-taxon sorting in driving body size evolution, and highlight the value of investigating evolutionary processes in a biogeographic context. © 2017 The Author(s).

  1. Mammal body size evolution in North America and Europe over 20 Myr: similar trends generated by different processes

    PubMed Central

    Eronen, Jussi T.; Janis, Christine M.; Saarinen, Juha J.

    2017-01-01

    Because body size interacts with many fundamental biological properties of a species, body size evolution can be an essential component of the generation and maintenance of biodiversity. Here we investigate how body size evolution can be linked to the clade-specific diversification dynamics in different geographical regions. We analyse an extensive body size dataset of Neogene large herbivores (covering approx. 50% of the 970 species in the orders Artiodactyla and Perissodactyla) in Europe and North America in a Bayesian framework. We reconstruct the temporal patterns of body size in each order on each continent independently, and find significant increases of minimum size in three of the continental assemblages (except European perissodactyls), suggesting an active selection for larger bodies. Assessment of trait-correlated birth-death models indicates that the common trend of body size increase is generated by different processes in different clades and regions. Larger-bodied artiodactyl species on both continents tend to have higher origination rates, and both clades in North America show strong links between large bodies and low extinction rate. Collectively, our results suggest a strong role of species selection and perhaps of higher-taxon sorting in driving body size evolution, and highlight the value of investigating evolutionary processes in a biogeographic context. PMID:28202809

  2. Compiling Planning into Quantum Optimization Problems: A Comparative Study

    DTIC Science & Technology

    2015-06-07

    and Sipser, M. 2000. Quantum computation by adiabatic evolution. arXiv:quant-ph/0001106. Fikes, R. E., and Nilsson, N. J. 1972. STRIPS: A new...become available: quantum annealing. Quantum annealing is one of the most accessible quantum algorithms for a computer science audience not versed...in quantum computing because of its close ties to classical optimization algorithms such as simulated annealing. While large-scale universal quantum

  3. Quantum population and entanglement evolution in photosynthetic process

    NASA Astrophysics Data System (ADS)

    Zhu, Jing

    Applications of the concepts of quantum information theory are usually related to the powerful and counter-intuitive quantum mechanical effects of superposition, interference and entanglement. In this thesis, I examine the role of coherence and entanglement in complex chemical systems. The research has focused mainly on two related projects: the first is the development of a theoretical model to explain recent ultrafast experiments on excitonic migration in photosynthetic complexes that show long-lived coherence on the order of hundreds of femtoseconds, and the second is the development of the Grover algorithm for global optimization of complex systems. The first part can be divided into two sections. The first section investigates the theoretical framework for the transfer of electronic excitation energy through the Fenna-Matthews-Olson (FMO) pigment-protein complex. The newly developed modified scaled hierarchical equations of motion (HEOM) approach is employed for simulating the open quantum system. The second section investigates the evolution of entanglement in the FMO complex based on the simulation results obtained via the scaled HEOM approach. We examine the role of multipartite entanglement in the FMO complex by direct computation of the convex roof optimization for a number of different measures, including pairwise, triplet, quadruple and quintuple-site entanglement. Our results support the hypothesis that multipartite entanglement is maximal primarily along the two distinct electronic energy transfer pathways. The second part of this thesis can also be separated into two sections. The first section demonstrates that a modified Grover quantum algorithm can be applied to real problems of finding a global minimum using modest numbers of quantum bits. Calculations of the global minimum of simple test functions and Lennard-Jones clusters have been carried out on a quantum computer simulator using a modified Grover algorithm. The second section implements the basic quantum logic gates on arrays of trapped ultracold polar molecules serving as qubits for the quantum computer. Multi-Target Optimal Control Theory (MTOCT) is utilized as a means of manipulating the initial-to-target transition probability via an external laser field. The detailed calculation is applied to the SrO molecule, an ideal candidate in proposed quantum computers using arrays of trapped ultracold polar molecules.

  4. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing.

    PubMed

    Xu, Jason; Minin, Vladimir N

    2015-07-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes.
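
    The sparse-recovery idea underlying the compressed sensing framework can be illustrated with a generic orthogonal matching pursuit toy in Python. This is not the authors' method (which couples recovery with generating-function evaluations of branching processes); the measurement matrix, sparsity level and tolerances below are arbitrary choices for the example.

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
            residual, support = y.copy(), []
            x = np.zeros(A.shape[1])
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
                support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef
            x[support] = coef
            return x

        rng = np.random.default_rng(1)
        n, m, k = 200, 40, 3                  # state-space size, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.1, 0.5, k)
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        x_hat = omp(A, A @ x_true, k)
        print(np.allclose(x_hat, x_true, atol=1e-6))          # exact recovery expected here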

  5. Efficient Transition Probability Computation for Continuous-Time Branching Processes via Compressed Sensing

    PubMed Central

    Xu, Jason; Minin, Vladimir N.

    2016-01-01

    Branching processes are a class of continuous-time Markov chains (CTMCs) with ubiquitous applications. A general difficulty in statistical inference under partially observed CTMC models arises in computing transition probabilities when the discrete state space is large or uncountable. Classical methods such as matrix exponentiation are infeasible for large or countably infinite state spaces, and sampling-based alternatives are computationally intensive, requiring integration over all possible hidden events. Recent work has successfully applied generating function techniques to computing transition probabilities for linear multi-type branching processes. While these techniques often require significantly fewer computations than matrix exponentiation, they also become prohibitive in applications with large populations. We propose a compressed sensing framework that significantly accelerates the generating function method, decreasing computational cost up to a logarithmic factor by only assuming the probability mass of transitions is sparse. We demonstrate accurate and efficient transition probability computations in branching process models for blood cell formation and evolution of self-replicating transposable elements in bacterial genomes. PMID:26949377

  6. Thruput Analysis of AFLC CYBER 73 Computers.

    DTIC Science & Technology

    1981-12-01

    Ref 2:14). This decision permitted a fast conversion effort with minimum programmer/analyst experience (Ref 34). Recently, as the conversion effort...converted (Ref 1:2). Moreover, many of the large data-file and machine-time-consuming systems were not included in the earlier...by LMT personnel revealed that during certain periods, i.e., 0000-0800, the machine is normally reserved for the large resource-consuming programs

  7. Dynamics of flexible bodies in tree topology - A computer oriented approach

    NASA Technical Reports Server (NTRS)

    Singh, R. P.; Vandervoort, R. J.; Likins, P. W.

    1984-01-01

    An approach suited for automatic generation of the equations of motion for large mechanical systems (i.e., large space structures, mechanisms, robots, etc.) is presented. The system topology is restricted to a tree configuration. The tree is defined as an arbitrary set of rigid and flexible bodies connected by hinges characterizing relative translations and rotations of two adjoining bodies. The equations of motion are derived via Kane's method. The resulting equation set is of minimum dimension. Dynamical equations are imbedded in a computer program called TREETOPS. Extensive control simulation capability is built in the TREETOPS program. The simulation is driven by an interactive set-up program resulting in an easy to use analysis tool.

  8. Gradient spectral analysis of solar radio flare superevents

    NASA Astrophysics Data System (ADS)

    Rosa, R. R.; Veronese, T. B.; Sych, R. A.; Bolzan, M. A.; Sandri, S. A.; Drummond, I. A.; Becceneri, J. C.; Sawant, H. S.

    2011-12-01

    Some complex solar active regions exhibit rare and sudden transitions that occur over time intervals that are short compared to the characteristic time scales of their evolution. Usually, extreme radio emission is driven by a latent nonlinear process involving magnetic reconnection among coronal loops, and such extreme events (e.g., X-class flares and coronal mass ejections) express the presence of plasma and magnetic activity usually hidden inside the solar convective layer. Recently, the scaling exponent obtained from Detrended Fluctuation Analysis has been used to characterize the formation of solar flare superevents - SFS (integrated flux of radiation greater than 1.5 J/m2) when observed in the decimetric range of 1-3 GHz (Veronese et al., 2011). Here, we show a complementary computational analysis of four X-class solar flares observed at 17 GHz with the Nobeyama Radioheliograph. Our analysis is based on the combination of DFA and Gradient Spectral Analysis (GSA), which can be used to characterize the evolution of SFSs under the condition that the emission threshold is large enough (fmax > 300 S.F.U.) and the solar flux unit variability is greater than 50% of the average taken from the minimum flux to the extreme value. Preliminary studies of the gradient spectra of Nobeyama data at 17 GHz can be found in Sawant et al. (JASTP 73(11), 2011). Future applications of GSA to the images to be observed with the Brazilian Decimetric Array (BDA) are discussed.
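
    As a hedged illustration of the DFA step mentioned above (standard DFA-1, not the authors' code), the Python sketch below computes the scaling exponent of a time series from the slope of log F(s) versus log s; the scale choices and the white-noise test signal are arbitrary.

        import numpy as np

        def dfa_exponent(x, scales=(4, 8, 16, 32, 64)):
            """DFA-1: integrate the series, detrend linearly in windows of size s,
            and return the slope of log F(s) vs log s."""
            y = np.cumsum(x - np.mean(x))                 # integrated profile
            F = []
            for s in scales:
                rms = []
                for i in range(len(y) // s):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    rms.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(rms)))
            slope, _ = np.polyfit(np.log(scales), np.log(F), 1)
            return slope

        print(dfa_exponent(np.random.default_rng(2).normal(size=4096)))  # ~0.5 for white noise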

  9. Macroscopic dielectric function within time-dependent density functional theory—Real time evolution versus the Casida approach

    NASA Astrophysics Data System (ADS)

    Sander, Tobias; Kresse, Georg

    2017-02-01

    Linear optical properties can be calculated by solving the time-dependent density functional theory equations. Linearization of the equation of motion around the ground state orbitals results in the so-called Casida equation, which is formally very similar to the Bethe-Salpeter equation. Alternatively one can determine the spectral functions by applying an infinitely short electric field in time and then following the evolution of the electron orbitals and the evolution of the dipole moments. The long wavelength response function is then given by the Fourier transformation of the evolution of the dipole moments in time. In this work, we compare the results and performance of these two approaches for the projector augmented wave method. To allow for large time steps and still rely on a simple difference scheme to solve the differential equation, we correct for the errors in the frequency domain, using a simple analytic equation. In general, we find that both approaches yield virtually indistinguishable results. For standard density functionals, the time evolution approach is clearly superior to the solution of the Casida equation with respect to computational performance. However, for functionals including nonlocal exchange, the direct solution of the Casida equation is usually much more efficient, even though it scales less favorably with the system size. We relate this to the large computational prefactors in evaluating the nonlocal exchange, which renders the time evolution algorithm fairly inefficient.
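
    The delta-kick post-processing described above can be sketched generically in Python (this is not tied to the projector augmented wave implementation): a dipole-moment time series is damped and Fourier-transformed to give a long-wavelength spectrum. The toy dipole signal, damping constant and frequency grid are assumptions made only for the example.

        import numpy as np

        def absorption_spectrum(dipole, dt, kick_strength, damping=0.1):
            """Damped sine transform of the dipole response to a delta kick."""
            t = np.arange(len(dipole)) * dt
            signal = (dipole - dipole[0]) * np.exp(-damping * t)   # damping smooths finite-time artefacts
            omega = np.linspace(0.0, 2.0, 500)
            spectrum = np.array([np.trapz(np.sin(w * t) * signal, t) for w in omega]) / kick_strength
            return omega, spectrum

        # toy dipole oscillating at a single excitation frequency of 0.5 (arbitrary units)
        dt, steps = 0.1, 2000
        t = np.arange(steps) * dt
        omega, spec = absorption_spectrum(0.01 * np.sin(0.5 * t), dt, kick_strength=0.01)
        print(omega[np.argmax(spec)])   # peaks near 0.5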

  10. Thermal and mass implications of magmatic evolution in the Lassen volcanic region, California, and minimum constraints on basalt influx to the lower crust

    USGS Publications Warehouse

    Guffanti, M.; Clynne, M.A.; Muffler, L.J.P.

    1996-01-01

    We have analyzed the heat and mass demands of a petrologic model of basalt-driven magmatic evolution in which variously fractionated mafic magmas mix with silicic partial melts of the lower crust. We have formulated steady state heat budgets for two volcanically distinct areas in the Lassen region: the large, late Quaternary, intermediate to silicic Lassen volcanic center and the nearby, coeval, less evolved Caribou volcanic field. At Caribou volcanic field, heat provided by cooling and fractional crystallization of 52 km3 of basalt is more than sufficient to produce 10 km3 of rhyolitic melt by partial melting of lower crust. Net heat added by basalt intrusion at Caribou volcanic field is equivalent to an increase in lower crustal heat flow of ~7 mW m-2, indicating that the field is not a major crustal thermal anomaly. Addition of cumulates from fractionation is offset by removal of erupted partial melts. A minimum basalt influx of 0.3 km3 (km2 Ma)-1 is needed to supply Caribou volcanic field. Our methodology does not fully account for an influx of basalt that remains in the crust as derivative intrusives. On the basis of comparison to deep heat flow, the input of basalt could be ~3 to 7 times the amount we calculate. At Lassen volcanic center, at least 203 km3 of mantle-derived basalt is needed to produce 141 km3 of partial melt and drive the volcanic system. Partial melting mobilizes lower crustal material, augmenting the magmatic volume available for eruption at Lassen volcanic center; thus the erupted volume of 215 km3 exceeds the calculated basalt input of 203 km3. The minimum basalt input of 1.6 km3 (km2 Ma)-1 is >5 times the minimum influx to the Caribou volcanic field. Basalt influx high enough to sustain considerable partial melting, coupled with locally high extension rate, is a crucial factor in development of Lassen volcanic center; in contrast, Caribou volcanic field has failed to develop into a large silicic center primarily because basalt supply there has been insufficient.
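
    To make the heat-budget reasoning concrete, here is a rough order-of-magnitude check in Python; the densities, heat capacities, latent heats and temperature intervals are generic assumed values, not numbers taken from the study, and the result only illustrates why 52 km3 of basalt can comfortably supply the heat to generate 10 km3 of crustal melt.

        # Assumed, textbook-style values (NOT from the paper)
        rho = 2800.0          # kg m^-3, basalt and crust
        cp = 1100.0           # J kg^-1 K^-1
        L_basalt = 4.0e5      # J kg^-1, latent heat released on crystallization
        L_melt = 3.0e5        # J kg^-1, latent heat absorbed on crustal melting

        Q_in = rho * 52e9 * (cp * 400.0 + L_basalt)    # 52 km^3 basalt cooling ~400 K and crystallizing
        Q_out = rho * 10e9 * (cp * 300.0 + L_melt)     # 10 km^3 of crust heated ~300 K and melted
        print(Q_in / 1e18, Q_out / 1e18, Q_in > Q_out) # heat supply comfortably exceeds demand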

  11. U-series dating and classification of the Apidima 2 hominin from Mani Peninsula, Southern Greece.

    PubMed

    Bartsiokas, Antonis; Arsuaga, Juan Luis; Aubert, Maxime; Grün, Rainer

    2017-08-01

    Laser ablation U-series dating results on a human cranial bone fragment from Apidima, on the western coast of the Mani Peninsula, Southern Greece, indicate a minimum age of 160,000 years. The dated cranial fragment belongs to Apidima 2, which preserves the facial skeleton and a large part of the braincase, lacking the occipital bone. The morphology of the preserved regions of the cranium, and especially that of the facial skeleton, indicates that the fossil belongs to the Neanderthal clade. The dating of the fossil at a minimum age of 160,000 years shows that most of the Neanderthal traits were already present in MIS 6 and perhaps earlier. This makes Apidima 2 the earliest known fossil with a clear Neanderthal facial morphology. Together with the nearby younger Neanderthal specimens from Lakonis and Kalamakia, the Apidima crania are of crucial importance for the evolution of Neanderthals in the area during the Middle to Late Pleistocene. It can be expected that systematic direct dating of the other human fossils from this area will elucidate our understanding of Neanderthal evolution and demise. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Loef, P.A.; Smed, T.; Andersson, G.

    The minimum singular value of the power flow Jacobian matrix has been used as a static voltage stability index, indicating the distance between the studied operating point and the steady state voltage stability limit. In this paper a fast method to calculate the minimum singular value and the corresponding (left and right) singular vectors is presented. The main advantages of the developed algorithm are the small amount of computation time needed, and that it only requires information available from an ordinary program for power flow calculations. Furthermore, the proposed method fully utilizes the sparsity of the power flow Jacobian matrix and hence the memory requirements for the computation are low. These advantages are preserved when applied to various submatrices of the Jacobian matrix, which can be useful in constructing special voltage stability indices. The developed algorithm was applied to small test systems as well as to a large (real size) system with over 1000 nodes, with satisfactory results.
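
    A minimal Python sketch of a sparsity-exploiting scheme in the same spirit (not the paper's algorithm): one sparse LU factorization of the Jacobian is reused in an inverse iteration to estimate the smallest singular value. The test matrix, iteration count and random seed are arbitrary; a production code would add convergence checks.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def min_singular_value(J, iters=50):
            """Estimate the smallest singular value of sparse J by inverse iteration,
            alternately solving J y = v and J^T u = y with one LU factorization."""
            lu = spla.splu(J.tocsc())
            v = np.random.default_rng(0).normal(size=J.shape[0])
            v /= np.linalg.norm(v)
            for _ in range(iters):
                y = lu.solve(v)                      # J y = v
                u = lu.solve(y, trans="T")           # J^T u = y
                v = u / np.linalg.norm(u)
            y = lu.solve(v)
            return 1.0 / np.linalg.norm(y)           # ||J^{-1} u_min|| -> 1 / sigma_min

        J = sp.diags(np.linspace(1.0, 10.0, 500)) + 0.1 * sp.random(500, 500, density=0.01, random_state=3)
        print(min_singular_value(J), np.linalg.svd(J.toarray(), compute_uv=False).min())  # compare with dense SVD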

  13. Origine et developpement des industries de la langue (Origin and Development of Language Utilities). Publication K-8.

    ERIC Educational Resources Information Center

    L'Homme, Marie-Claude

    The evolution of "language utilities," a concept confined largely to the francophone world and relating to the uses of language in computer science and the use of computer science for languages, is chronicled. The language utilities are of three types: (1) tools for language development, primarily dictionary databases and related tools;…

  14. Implicit solvers for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, Dimitri J.

    1991-01-01

    Implicit methods for unstructured mesh computations are developed and tested. The approximate system which arises from the Newton-linearization of the nonlinear evolution operator is solved by using the preconditioned generalized minimum residual technique. Three different preconditioners are investigated: the incomplete LU factorization (ILU), block diagonal factorization, and symmetric successive over-relaxation (SSOR). The preconditioners have been optimized to have good vectorization properties. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also investigated. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
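
    A minimal sketch of the ILU-preconditioned GMRES idea using generic SciPy sparse tools (not the paper's unstructured-mesh solver); the random sparse matrix is a stand-in for a Newton-linearized operator, and the drop tolerance and restart length are arbitrary choices.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        n = 1000
        rng = np.random.default_rng(4)
        A = sp.random(n, n, density=0.002, random_state=4, format="csr") + 4 * sp.eye(n, format="csr")
        b = rng.normal(size=n)

        ilu = spla.spilu(A.tocsc(), drop_tol=1e-4, fill_factor=10)        # incomplete LU factors
        M = spla.LinearOperator((n, n), matvec=ilu.solve)                 # preconditioner M ~ A^{-1}

        x, info = spla.gmres(A, b, M=M, restart=30, maxiter=200)
        print(info, np.linalg.norm(A @ x - b))                            # info == 0 means converged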

  15. The Surface Density Distribution in the Solar Nebula

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    2004-01-01

    The commonly used minimum mass power law representation of the pre-solar nebula is reanalyzed using a new cumulative-mass-model. This model predicts a smoother surface density approximation compared with methods based on direct computation of surface density. The density is quantified using two independent analytical formulations. First, a best-fit transcendental function is applied directly to the basic planetary data. Next a solution to the time-dependent disk evolution equation is parametrically adapted to the solar nebula data. The latter model is shown to be a good approximation to the finite-size early Solar Nebula, and by extension to other extra solar protoplanetary disks.

  16. Trends of atmospheric circulation during singular hot days in Europe

    NASA Astrophysics Data System (ADS)

    Jézéquel, Aglaé; Cattiaux, Julien; Naveau, Philippe; Radanovics, Sabine; Ribes, Aurélien; Vautard, Robert; Vrac, Mathieu; Yiou, Pascal

    2018-05-01

    The influence of climate change on mid-latitudes atmospheric circulation is still very uncertain. The large internal variability makes it difficult to extract any statistically significant signal regarding the evolution of the circulation. Here we propose a methodology to calculate dynamical trends tailored to the circulation of specific days by computing the evolution of the distances between the circulation of the day of interest and the other days of the time series. We compute these dynamical trends for two case studies of the hottest days recorded in two different European regions (corresponding to the heat-waves of summer 2003 and 2010). We use the NCEP reanalysis dataset, an ensemble of CMIP5 models, and a large ensemble of a single model (CESM), in order to account for different sources of uncertainty. While we find a positive trend for most models for 2003, we cannot conclude for 2010 since the models disagree on the trend estimates.
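
    A schematic Python version of the distance-based methodology described above (the data, grid size and distance metric are invented stand-ins; the study uses reanalysis and model circulation fields): for a chosen day, compute the distance between its circulation field and every other day, then regress the annual-mean distance on the year to obtain a dynamical trend.

        import numpy as np

        def analogue_distance_trend(fields, years, target_index):
            """Trend (per year) of the mean distance between one day's circulation
            field and all other days; `fields` is (n_days, n_gridpoints)."""
            dist = np.linalg.norm(fields - fields[target_index], axis=1)
            dist[target_index] = np.nan                     # exclude the day itself
            yrs = np.unique(years)
            annual = np.array([np.nanmean(dist[years == y]) for y in yrs])
            slope, intercept = np.polyfit(yrs, annual, 1)
            return slope

        rng = np.random.default_rng(7)
        years = np.repeat(np.arange(1950, 2010), 90)        # 90 summer days per year (fake)
        fields = rng.normal(size=(years.size, 300))         # fake circulation anomaly maps
        print(analogue_distance_trend(fields, years, target_index=years.size - 1))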

  17. 20 CFR 404.260 - Special minimum primary insurance amounts.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    .... 404.260 Section 404.260 Employees' Benefits SOCIAL SECURITY ADMINISTRATION FEDERAL OLD-AGE, SURVIVORS AND DISABILITY INSURANCE (1950- ) Computing Primary Insurance Amounts Special Minimum Primary... compute your primary insurance amount, if the special minimum primary insurance amount described in § 404...

  18. Experimental determination of Ramsey numbers.

    PubMed

    Bian, Zhengbing; Chudak, Fabian; Macready, William G; Clark, Lane; Gaitan, Frank

    2013-09-27

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.

  19. Experimental Determination of Ramsey Numbers

    NASA Astrophysics Data System (ADS)

    Bian, Zhengbing; Chudak, Fabian; Macready, William G.; Clark, Lane; Gaitan, Frank

    2013-09-01

    Ramsey theory is a highly active research area in mathematics that studies the emergence of order in large disordered structures. Ramsey numbers mark the threshold at which order first appears and are extremely difficult to calculate due to their explosive rate of growth. Recently, an algorithm that can be implemented using adiabatic quantum evolution has been proposed that calculates the two-color Ramsey numbers R(m,n). Here we present results of an experimental implementation of this algorithm and show that it correctly determines the Ramsey numbers R(3,3) and R(m,2) for 4≤m≤8. The R(8,2) computation used 84 qubits of which 28 were computational qubits. This computation is the largest experimental implementation of a scientifically meaningful adiabatic evolution algorithm that has been done to date.

  20. The revolution in data gathering systems

    NASA Technical Reports Server (NTRS)

    Cambra, J. M.; Trover, W. F.

    1975-01-01

    Data acquisition systems used in NASA's wind tunnels from the 1950's through the present time are summarized as a baseline for assessing the impact of minicomputers and microcomputers on data acquisition and data processing. Emphasis is placed on the cyclic evolution in computer technology which transformed the central computer system and led, finally, to the distributed computer system. Other developments discussed include: medium scale integration, large scale integration, combining the functions of data acquisition and control, and micro and minicomputers.

  1. Binary weight distributions of some Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Arnold, S.

    1992-01-01

    The binary weight distributions of the (7,5) and (15,9) Reed-Solomon (RS) codes and their duals are computed using the MacWilliams identities. Several mappings of symbols to bits are considered and those offering the largest binary minimum distance are found. These results are then used to compute bounds on the soft-decoding performance of these codes in the presence of additive Gaussian noise. These bounds are useful for finding large binary block codes with good performance and for verifying the performance obtained by specific soft-coding algorithms presently under development.
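
    The MacWilliams step can be made concrete with a short sketch. The example below applies the standard binary MacWilliams transform (via Krawtchouk polynomials) to the [7,4] Hamming code rather than to the RS-derived binary codes studied in the paper; the code and its dual's weight distribution are well-known facts used only to check the routine.

        from math import comb

        def macwilliams_dual(A):
            """Given the weight distribution A[0..n] of a binary code C, return the
            weight distribution of its dual: B_j = (1/|C|) * sum_i A_i * K_j(i)."""
            n = len(A) - 1
            size_C = sum(A)
            def krawtchouk(j, i):
                return sum((-1) ** s * comb(i, s) * comb(n - i, j - s) for s in range(j + 1))
            return [sum(A[i] * krawtchouk(j, i) for i in range(n + 1)) / size_C
                    for j in range(n + 1)]

        # weight distribution of the [7,4] Hamming code; its dual is the [7,3] simplex code
        A_hamming = [1, 0, 0, 7, 7, 0, 0, 1]
        print(macwilliams_dual(A_hamming))   # expect [1, 0, 0, 0, 7, 0, 0, 0]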

  2. ON THE ROLE OF REPETITIVE MAGNETIC RECONNECTIONS IN EVOLUTION OF MAGNETIC FLUX ROPES IN SOLAR CORONA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Sanjay; Bhattacharyya, R.; Joshi, Bhuwan

    Parker's magnetostatic theorem, extended to astrophysical magnetofluids with large magnetic Reynolds number, supports ceaseless regeneration of current sheets and, hence, spontaneous magnetic reconnections recurring in time. Consequently, a scenario is possible where the repeated reconnections provide an autonomous mechanism governing emergence of coherent structures in astrophysical magnetofluids. In this work, such a scenario is explored by performing numerical computations commensurate with the magnetostatic theorem. In particular, the computations explore the evolution of a flux rope governed by repeated reconnections in a magnetic geometry resembling bipolar loops of solar corona. The revealed morphology of the evolution process—including onset and ascent of the rope, reconnection locations, and the associated topology of the magnetic field lines—agrees with observations, and thus substantiates physical realizability of the advocated mechanism.

  3. VizieR Online Data Catalog: Evolution of solar irradiance during Holocene (Vieira+, 2011)

    NASA Astrophysics Data System (ADS)

    Vieira, L. E. A.; Solanki, S. K.; Krivova, N. A.; Usoskin, I.

    2011-05-01

    This is a composite total solar irradiance (TSI) time series for 9495BC to 2007AD constructed as described in Sect. 3.3 of the paper. Since the TSI is the main external heat input into the Earth's climate system, a consistent record covering as long a period as possible is needed for climate models. This was our main motivation for constructing this composite TSI time series. In order to produce a representative time series, we divided the Holocene into four periods according to the available data for each period. Table 4 (see below) summarizes the periods considered and the models available for each period. After the end of the Maunder Minimum we compute daily values, while prior to the end of the Maunder Minimum we compute 10-year averages. For the period for which both solar disk magnetograms and continuum images are available (period 1) we employ the SATIRE-S reconstruction (Krivova et al. 2003A&A...399L...1K; Wenzler et al. 2006A&A...460..583W). The SATIRE-T reconstruction (Krivova et al. 2010JGRA..11512112K) is used from the beginning of the Maunder Minimum (approximately 1640AD) to 1977AD. Prior to 1640AD reconstructions are based on cosmogenic isotopes (this paper). Different models of the Earth's geomagnetic field are available before and after approximately 5000BC. Therefore we treat periods 3 and 4 (before and after 5000BC) separately. Further details can be found in the paper. We emphasize that the reconstructions based on different proxies have different time resolutions. (1 data file).

  4. Simulation of Quantum Many-Body Dynamics for Generic Strongly-Interacting Systems

    NASA Astrophysics Data System (ADS)

    Meyer, Gregory; Machado, Francisco; Yao, Norman

    2017-04-01

    Recent experimental advances have enabled the bottom-up assembly of complex, strongly interacting quantum many-body systems from individual atoms, ions, molecules and photons. These advances open the door to studying dynamics in isolated quantum systems as well as the possibility of realizing novel out-of-equilibrium phases of matter. Numerical studies provide insight into these systems; however, computational time and memory usage limit common numerical methods such as exact diagonalization to relatively small Hilbert spaces of dimension 2^15. Here we present progress toward a new software package for dynamical time evolution of large generic quantum systems on massively parallel computing architectures. By projecting large sparse Hamiltonians into a much smaller Krylov subspace, we are able to compute the evolution of strongly interacting systems with Hilbert space dimension nearing 2^30. We discuss and benchmark different design implementations, such as matrix-free methods and GPU based calculations, using both pre-thermal time crystals and the Sachdev-Ye-Kitaev model as examples. We also include a simple symbolic language to describe generic Hamiltonians, allowing simulation of diverse quantum systems without any modification of the underlying C and Fortran code.
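
    The Krylov-subspace projection idea can be sketched in a few lines of Python; this is a bare-bones, generic illustration (not the package described above, which targets massively parallel C/Fortran back ends), and the toy Hamiltonian, subspace size and time step are arbitrary.

        import numpy as np
        from scipy.linalg import expm
        import scipy.sparse as sp

        def krylov_expm_apply(H, psi, dt, m=30):
            """Approximate exp(-i H dt)|psi> by projecting H onto an m-dimensional
            Krylov subspace (Arnoldi) and exponentiating the small projection."""
            n = psi.size
            Q = np.zeros((n, m), dtype=complex)
            h = np.zeros((m, m), dtype=complex)
            beta = np.linalg.norm(psi)
            Q[:, 0] = psi / beta
            for j in range(m):
                w = H @ Q[:, j]
                for i in range(j + 1):                 # orthogonalize against previous vectors
                    h[i, j] = np.vdot(Q[:, i], w)
                    w = w - h[i, j] * Q[:, i]
                if j + 1 < m:
                    nrm = np.linalg.norm(w)
                    if nrm < 1e-12:                    # invariant subspace found
                        break
                    h[j + 1, j] = nrm
                    Q[:, j + 1] = w / nrm
            small = expm(-1j * dt * h)                 # exponential of the m x m projection
            return beta * (Q @ small[:, 0])

        A = sp.random(512, 512, density=0.02, random_state=5)
        H = (A + A.T) * 0.5                            # toy sparse Hermitian "Hamiltonian"
        psi0 = np.zeros(512, dtype=complex)
        psi0[0] = 1.0
        print(np.linalg.norm(krylov_expm_apply(H, psi0, dt=1.0)))   # stays ~1: near-unitary evolution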

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bazante, Alexandre P., E-mail: abazante@chem.ufl.edu; Bartlett, Rodney J.; Davidson, E. R.

    The benzene radical anion is studied with ab initio coupled-cluster theory in large basis sets. Unlike the usual assumption, we find that, at the level of theory investigated, the minimum energy geometry is non-planar with tetrahedral distortion at two opposite carbon atoms. The anion is well known for its instability to auto-ionization, which poses computational challenges to determining its properties. Despite the importance of the benzene radical anion, the considerable attention it has received in the literature so far has failed to address the details of its structure and shape-resonance character at a high level of theory. Here, we examine the dynamic Jahn-Teller effect and its impact on the anion potential energy surface. We find that a minimum energy geometry of C2 symmetry is located below one D2h stationary point on a C2h pseudo-rotation surface. The applicability of standard wave function methods to an unbound anion is assessed with the stabilization method. The isotropic hyperfine splitting constants (A_iso) are computed and compared to data obtained from electron spin resonance experiments. Satisfactory agreement with experiment is obtained with coupled-cluster theory and large basis sets such as cc-pCVQZ.

  6. Mapping the universe.

    PubMed

    Geller, M J; Huchra, J P

    1989-11-17

    Maps of the galaxy distribution in the nearby universe reveal large coherent structures. The extent of the largest features is limited only by the size of the survey. Voids with a density typically 20 percent of the mean and with diameters of 5000 km s(-1) are present in every survey large enough to contain them. Many galaxies lie in thin sheet-like structures. The largest sheet detected so far is the "Great Wall" with a minimum extent of 60 h(-1) Mpc x 170 h(-1) Mpc, where h is the Hubble constant in units of 100 km s(-1) Mpc(-1). The frequent occurrence of these structures is one of several serious challenges to our current understanding of the origin and evolution of the large-scale distribution of matter in the universe.

  7. A hybrid optimization algorithm to explore atomic configurations of TiO 2 nanoparticles

    DOE PAGES

    Inclan, Eric J.; Geohegan, David B.; Yoon, Mina

    2017-10-17

    In this paper we present a hybrid algorithm comprising differential evolution coupled with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) quasi-Newton optimization algorithm, for the purpose of identifying a broad range of (meta)stable TinO2n nanoparticles, as an example system, described by a Buckingham interatomic potential. The potential and its gradient are modified to be piece-wise continuous to enable use of these continuous-domain, unconstrained algorithms, thereby improving compatibility. To measure computational effectiveness, a regression on known structures is used. This approach defines effectiveness as the ability of an algorithm to produce a set of structures whose energy distribution follows the regression as the size n of TinO2n increases, such that the shape of the distribution is consistent with the algorithm's stated goals. Our calculation demonstrates that the hybrid algorithm finds global minimum configurations more effectively than the differential evolution algorithms widely employed in the field of materials science. Specifically, the hybrid algorithm is shown to reproduce the global minimum energy structures reported in the literature up to n = 5, and retains good agreement with the regression up to n = 25. For 25 < n < 100, where literature structures are unavailable, the hybrid effectively obtains structures at lower energies per TiO2 unit as the system size increases.
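
    The global-then-local strategy can be sketched with SciPy's generic optimizers. This is only an illustration of the hybrid idea, not the paper's implementation: a Lennard-Jones cluster is used as a stand-in objective instead of the Buckingham TinO2n potential, and the bounds, seed and iteration counts are arbitrary.

        import numpy as np
        from scipy.optimize import differential_evolution, minimize

        def lennard_jones(x):
            """Total LJ energy of a small cluster; x holds the flattened 3D coordinates."""
            p = x.reshape(-1, 3)
            d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)
            r = d[np.triu_indices(len(p), k=1)]
            return np.sum(4.0 * (r ** -12 - r ** -6))

        # Global stage: differential evolution; local stage: polish the best candidate with BFGS.
        n_atoms = 4
        bounds = [(-2.0, 2.0)] * (3 * n_atoms)
        de = differential_evolution(lennard_jones, bounds, maxiter=200, seed=6, polish=False)
        local = minimize(lennard_jones, de.x, method="BFGS")
        print(de.fun, local.fun)   # the LJ4 global minimum is about -6.0 (tetrahedron)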

  8. The Seismic Tool-Kit (STK): an open source software for seismology and signal processing.

    NASA Astrophysics Data System (ADS)

    Reymond, Dominique

    2016-04-01

    We present an open source software project (GNU public license), named STK: Seismic ToolKit, dedicated mainly to seismology and signal processing. The STK project, started in 2007, is hosted by SourceForge.net and counts more than 19,500 downloads at the time of writing. The STK project is composed of two main branches. First, a graphical interface dedicated to signal processing in the SAC format (SAC_ASCII and SAC_BIN), where the signal can be plotted, zoomed, filtered, integrated, differentiated, etc. (a large variety of IIR and FIR filters is provided). The estimation of the spectral density of the signal is performed via the Fourier transform, with visualization of the Power Spectral Density (PSD) in linear or log scale, and also the evolutive time-frequency representation (or sonagram). Three-component signals can also be processed to estimate their polarization properties, either for a given window or for evolving windows along the time axis. This polarization analysis is useful for extracting polarized noise, and for differentiating P waves, Rayleigh waves, Love waves, etc. Second, a panel of utility programs is provided for working in terminal mode, with basic programs for computing azimuth and distance in spherical geometry, inter/auto-correlation, spectral density, time-frequency analysis for an entire directory of signals, focal planes and principal component axes, the radiation pattern of P waves, polarization analysis of different waves (including noise), under/over-sampling of signals, cubic-spline smoothing, and linear/nonlinear regression analysis of data sets. A MINimum library of Linear AlGebra (MIN-LINAG) is also provided for the main matrix operations: QR/QL decomposition, Cholesky solution of linear systems, finding eigenvalues/eigenvectors, QR-solve/eigen-solve of linear equation systems, etc. STK is developed in C/C++, mainly under Linux OS, and it has also been partially implemented under MS-Windows. Useful links: http://sourceforge.net/projects/seismic-toolkit/ http://sourceforge.net/p/seismic-toolkit/wiki/browse_pages/
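
    One of the operations listed above, PSD estimation, can be illustrated with a few lines of Python (SciPy's Welch estimator rather than STK's own FFT-based routine); the sampling rate and the synthetic trace are assumptions made only for the example.

        import numpy as np
        from scipy.signal import welch

        fs = 100.0                                    # sampling rate in Hz (assumed)
        t = np.arange(0, 600, 1.0 / fs)
        trace = np.sin(2 * np.pi * 0.2 * t) + 0.5 * np.random.default_rng(8).normal(size=t.size)
        f, psd = welch(trace, fs=fs, nperseg=4096)    # averaged periodogram estimate of the PSD
        print(f[np.argmax(psd)])                      # dominant peak near 0.2 Hz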

  9. Mesoproterozoic Archaeoellipsoides: akinetes of heterocystous cyanobacteria

    NASA Technical Reports Server (NTRS)

    Golubic, S.; Sergeev, V. N.; Knoll, A. H.

    1995-01-01

    The genus Archaeoellipsoides Horodyski & Donaldson comprises large (up to 135 micrometers long) ellipsoidal and rod-shaped microfossils commonly found in silicified peritidal carbonates of Mesoproterozoic age. Based on morphometric and sedimentary comparisons with the akinetes of modern bloom-forming Anabaena species, Archaeoellipsoides is interpreted as the fossilized remains of akinetes produced by planktic heterocystous cyanobacteria. These fossils set a minimum date for the evolution of derived cyanobacteria capable of marked cell differentiation, and they corroborate geochemical evidence indicating that atmospheric oxygen levels were well above 1% of present day levels 1,500 million years ago.

  10. Evolution of the heteroharmonic strategy for target-range computation in the echolocation of Mormoopidae

    PubMed Central

    Mora, Emanuel C.; Macías, Silvio; Hechavarría, Julio; Vater, Marianne; Kössl, Manfred

    2013-01-01

    Echolocating bats use the time elapsed from biosonar pulse emission to the arrival of echo (defined as echo-delay) to assess target-distance. Target-distance is represented in the brain by delay-tuned neurons that are classified as either “heteroharmonic” or “homoharmonic.” Heteroharmonic neurons respond more strongly to pulse-echo pairs in which the timing of the pulse is given by the fundamental biosonar harmonic while the timing of echoes is provided by one (or several) of the higher order harmonics. On the other hand, homoharmonic neurons are tuned to the echo delay between similar harmonics in the emitted pulse and echo. It is generally accepted that heteroharmonic computations are advantageous over homoharmonic computations; i.e., heteroharmonic neurons receive information from call and echo in different frequency-bands which helps to avoid jamming between pulse and echo signals. Heteroharmonic neurons have been found in two species of the family Mormoopidae (Pteronotus parnellii and Pteronotus quadridens) and in Rhinolophus rouxi. Recently, it was proposed that heteroharmonic target-range computations are a primitive feature of the genus Pteronotus that was preserved in the evolution of the genus. Here, we review recent findings on the evolution of echolocation in Mormoopidae, and try to link those findings to the evolution of the heteroharmonic computation strategy (HtHCS). We stress the hypothesis that the ability to perform heteroharmonic computations evolved separately from the ability to use long constant-frequency echolocation calls, high duty cycle echolocation, and Doppler Shift Compensation. Also, we present the idea that heteroharmonic computations might have been of advantage for categorizing prey size, hunting eared insects, and living in large conspecific colonies. We make five testable predictions that might help future investigations to clarify the evolution of the heteroharmonic echolocation in Mormoopidae and other families. PMID:23781209

  11. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  12. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  13. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  14. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  15. 20 CFR 229.47 - Child's benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Child's benefit. 229.47 Section 229.47... OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.47 Child's benefit. If a child is included in the computation of the overall minimum, a child's benefit of 50 percent times the Overall...

  16. 20 CFR 229.42 - When a child can no longer be included in computing an annuity rate under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... annuity rate under the overall minimum. A child's inclusion in the computation of the overall minimum rate... second month after the month the child's disability ends, if the child is 18 years old or older, and not...

  17. Smaller beaks for colder winters: Thermoregulation drives beak size evolution in Australasian songbirds.

    PubMed

    Friedman, Nicholas R; Harmáčková, Lenka; Economo, Evan P; Remeš, Vladimír

    2017-08-01

    Birds' beaks play a key role in foraging, and most research on their size and shape has focused on this function. Recent findings suggest that beaks may also be important for thermoregulation, and this may drive morphological evolution as predicted by Allen's rule. However, the role of thermoregulation in the evolution of beak size across species remains largely unexplored. In particular, it remains unclear whether the need for retaining heat in the winter or dissipating heat in the summer plays the greater role in selection for beak size. Comparative studies are needed to evaluate the relative importance of these functions in beak size evolution. We addressed this question in a clade of birds exhibiting wide variation in their climatic niche: the Australasian honeyeaters and allies (Meliphagoidea). Across 158 species, we compared species' climatic conditions extracted from their ranges to beak size measurements in a combined spatial-phylogenetic framework. We found that winter minimum temperature was positively correlated with beak size, while summer maximum temperature was not. This suggests that while diet and foraging behavior may drive evolutionary changes in beak shape, changes in beak size can also be explained by the beak's role in thermoregulation, and winter heat retention in particular. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.

  18. The Independent Evolution Method Is Not a Viable Phylogenetic Comparative Method

    PubMed Central

    2015-01-01

    Phylogenetic comparative methods (PCMs) use data on species traits and phylogenetic relationships to shed light on evolutionary questions. Recently, Smaers and Vinicius suggested a new PCM, Independent Evolution (IE), which purportedly employs a novel model of evolution based on Felsenstein’s Adaptive Peak Model. The authors found that IE improves upon previous PCMs by producing more accurate estimates of ancestral states, as well as separate estimates of evolutionary rates for each branch of a phylogenetic tree. Here, we document substantial theoretical and computational issues with IE. When data are simulated under a simple Brownian motion model of evolution, IE produces severely biased estimates of ancestral states and changes along individual branches. We show that these branch-specific changes are essentially ancestor-descendant or “directional” contrasts, and draw parallels between IE and previous PCMs such as “minimum evolution”. Additionally, while comparisons of branch-specific changes between variables have been interpreted as reflecting the relative strength of selection on those traits, we demonstrate through simulations that regressing IE estimated branch-specific changes against one another gives a biased estimate of the scaling relationship between these variables, and provides no advantages or insights beyond established PCMs such as phylogenetically independent contrasts. In light of our findings, we discuss the results of previous papers that employed IE. We conclude that Independent Evolution is not a viable PCM, and should not be used in comparative analyses. PMID:26683838

  19. Computing smallest intervention strategies for multiple metabolic networks in a boolean model.

    PubMed

    Lu, Wei; Tamura, Takeyuki; Song, Jiangning; Akutsu, Tatsuya

    2015-02-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad bacteria and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online.
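
    The article's actual formulation is an ILP, but the Boolean producibility model it builds on can be illustrated with a tiny, hypothetical example. The Python sketch below (all reaction and compound names are invented) propagates producibility to a fixed point and brute-forces the smallest reaction set whose removal blocks the target in N1 while leaving it producible in N2; it is only a toy stand-in for the paper's ILP method.

```python
# Toy illustration of the Boolean MKMN condition (not the paper's ILP):
# find the smallest knockout that blocks the target in N1 but not in N2.
from itertools import combinations

def producible(reactions, sources, target, knocked_out=frozenset()):
    """Forward-propagate producibility in the Boolean model."""
    have = set(sources)
    changed = True
    while changed:
        changed = False
        for name, (subs, prods) in reactions.items():
            if name in knocked_out:
                continue
            if set(subs) <= have and not set(prods) <= have:
                have |= set(prods)
                changed = True
    return target in have

def min_knockout(N1, N2, sources, target):
    all_rxns = sorted(set(N1) | set(N2))
    for k in range(len(all_rxns) + 1):            # try smallest sets first
        for ko in combinations(all_rxns, k):
            ko = frozenset(ko)
            if (not producible(N1, sources, target, ko)
                    and producible(N2, sources, target, ko)):
                return sorted(ko)
    return None

# Hypothetical networks: N2 has an extra pathway (R3, R4) that N1 lacks
N1 = {"R1": (["A"], ["B"]), "R2": (["B"], ["T"])}
N2 = {"R1": (["A"], ["B"]), "R2": (["B"], ["T"]),
      "R3": (["A"], ["C"]), "R4": (["C"], ["T"])}
print(min_knockout(N1, N2, sources={"A"}, target="T"))   # e.g. ['R1']
```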

  20. Formation and evolution of galaxies: the role of their environment

    NASA Astrophysics Data System (ADS)

    Boselli, Alessandro

    2016-08-01

    The new panoramic detectors on large telescopes, together with the highest-performing space missions, have allowed us to complete large surveys of the Universe at different wavelengths and thus to study the relationships between the different galaxy components at various epochs. At the same time, increasing computing power has allowed us to simulate the evolution of galaxies since their formation at a resolution never reached before. In this article I briefly describe how the comparison between the most recent observations and the predictions of models and simulations has changed our view of the process of galaxy formation and evolution.

  1. Electrohydrodynamic coalescence of droplets using an embedded potential flow model

    NASA Astrophysics Data System (ADS)

    Garzon, M.; Gray, L. J.; Sethian, J. A.

    2018-03-01

    The coalescence, and subsequent satellite formation, of two inviscid droplets is studied numerically. The initial drops are taken to be of equal and different sizes, and simulations have been carried out with and without the presence of an electrical field. The main computational challenge is the tracking of a free surface that changes topology. Coupling level set and boundary integral methods with an embedded potential flow model, we seamlessly compute through these singular events. As a consequence, the various coalescence modes that appear depending upon the relative ratio of the parent droplets can be studied. Computations of first stage pinch-off, second stage pinch-off, and complete engulfment are analyzed and compared to recent numerical studies and laboratory experiments. Specifically, we study the evolution of bridge radii and the related scaling laws, the minimum drop radii evolution from coalescence to satellite pinch-off, satellite sizes, and the upward stretching of the near cylindrical protrusion at the droplet top. Clear evidence of partial coalescence self-similarity is presented for parent droplet ratios between 1.66 and 4. This has been possible due to the fact that computational initial conditions only depend upon the mother droplet size, in contrast with laboratory experiments where the difficulty in establishing the same initial physical configuration is well known. The presence of electric forces changes the coalescence patterns, and it is possible to control the satellite droplet size by tuning the electrical field intensity. All of the numerical results are in very good agreement with recent laboratory experiments for water droplet coalescence.

  2. Solution Methods for Certain Evolution Equations

    NASA Astrophysics Data System (ADS)

    Vega-Guzman, Jose Manuel

    Solution methods for certain linear and nonlinear evolution equations are presented in this dissertation. Emphasis is placed mainly on the analytical treatment of nonautonomous differential equations, which are challenging to solve despite the numerical and symbolic computational software available. Ideas from transformation theory are adopted, allowing one to solve the problems under consideration from a non-traditional perspective. First, the Cauchy initial value problem is considered for a class of nonautonomous and inhomogeneous linear diffusion-type equations on the entire real line. Explicit transformations are used to reduce the equations under study to their corresponding standard forms, emphasizing natural relations with certain Riccati- (and/or Ermakov-) type systems. These relations give solvability results for the Cauchy problem of the parabolic equation considered. The superposition principle allows this problem to be solved formally from an unconventional point of view. An eigenfunction expansion approach is also considered for this general evolution equation. Examples considered to corroborate the efficacy of the proposed solution methods include the Fokker-Planck equation, the Black-Scholes model and the one-factor Gaussian Hull-White model. The results obtained in the first part are used to solve the Cauchy initial value problem for a certain inhomogeneous Burgers-type equation. The connection between linear (diffusion-type) and nonlinear (Burgers-type) parabolic equations is stressed in order to establish a strong commutative relation. Traveling wave solutions of a nonautonomous Burgers equation are also investigated. Finally, the minimum-uncertainty squeezed states for quantum harmonic oscillators are constructed explicitly. They are derived by the action of the corresponding maximal kinematical invariance group on the standard ground state solution. It is shown that the product of the variances attains the required minimum value only at the instants at which one variance is a minimum and the other is a maximum, when the squeezing of one of the variances occurs. Such explicit construction is possible due to the relation between the diffusion-type equation studied in the first part and the time-dependent Schrödinger equation. A modification of the radiation field operators for squeezed photons in a perfect cavity is also suggested with the help of a nonstandard solution of Heisenberg's equation of motion.

  3. 50 CFR Figures 18a, 18b and 18c to... - Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large...

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts and Leading Edge Cut; Large Frame TED Escape Opening; Minimum Dimensions Using All-Points...—Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED...

  4. 50 CFR Figures 18a, 18b and 18c to... - Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large...

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts and Leading Edge Cut; Large Frame TED Escape Opening; Minimum Dimensions Using All-Points...—Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED...

  5. 50 CFR Figures 18a, 18b and 18c to... - Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts and Leading Edge Cut; Large Frame TED Escape Opening; Minimum Dimensions Using All-Points...—Large Frame TED Escape Opening; Minimum Dimensions Using All-Bar Cuts (Triangular Cuts); Large Frame TED...

  6. Earthquake fracture energy inferred from kinematic rupture models on extended faults

    USGS Publications Warehouse

    Tinti, E.; Spudich, P.; Cocco, M.

    2005-01-01

    We estimate fracture energy on extended faults for several recent earthquakes by retrieving dynamic traction evolution at each point on the fault plane from slip history imaged by inverting ground motion waveforms. We define the breakdown work (Wb) as the excess of work over some minimum traction level achieved during slip. Wb is equivalent to "seismological" fracture energy (G) in previous investigations. Our numerical approach uses slip velocity as a boundary condition on the fault. We employ a three-dimensional finite difference algorithm to compute the dynamic traction evolution in the time domain during the earthquake rupture. We estimate Wb by calculating the scalar product between dynamic traction and slip velocity vectors. This approach does not require specifying a constitutive law and assuming dynamic traction to be collinear with slip velocity. If these vectors are not collinear, the inferred breakdown work depends on the initial traction level. We show that breakdown work depends on the square of slip. The spatial distribution of breakdown work in a single earthquake is strongly correlated with the slip distribution. Breakdown work density and its integral over the fault, breakdown energy, scale with seismic moment according to a power law (with exponent 0.59 and 1.18, respectively). Our estimates of breakdown work range between 4 × 10⁵ and 2 × 10⁷ J/m² for earthquakes having moment magnitudes between 5.6 and 7.2. We also compare our inferred values with geologic surface energies. This comparison might suggest that breakdown work for large earthquakes goes primarily into heat production. Copyright 2005 by the American Geophysical Union.

  7. Combined AIE/EBE/GMRES approach to incompressible flows. [Adaptive Implicit-Explicit/Grouped Element-by-Element/Generalized Minimum Residuals

    NASA Technical Reports Server (NTRS)

    Liou, J.; Tezduyar, T. E.

    1990-01-01

    Adaptive implicit-explicit (AIE), grouped element-by-element (GEBE), and generalized minimum residuals (GMRES) solution techniques for incompressible flows are combined. In this approach, the GEBE and GMRES iteration methods are employed to solve the equation systems resulting from the implicitly treated elements, and therefore no direct solution effort is involved. The benchmarking results demonstrate that this approach can substantially reduce the CPU time and memory requirements in large-scale flow problems. Although the description of the concepts and the numerical demonstration are based on the incompressible flows, the approach presented here is applicable to larger class of problems in computational mechanics.
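
    As a generic illustration of the GMRES building block used here (the AIE/GEBE finite-element machinery itself is not reproduced), the Python sketch below solves a sparse nonsymmetric system iteratively with SciPy; the matrix, right-hand side, and incomplete-LU preconditioner are arbitrary assumptions standing in for the element-level systems described above.

```python
# Minimal, generic GMRES illustration (not the AIE/GEBE implementation):
# solve a sparse nonsymmetric system iteratively, without a direct solve.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import gmres, spilu, LinearOperator

n = 2000
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete-LU preconditioner as a simple stand-in for element-by-element ideas
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x, info = gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"GMRES info={info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```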

  8. NEW EVIDENCE FOR CHARGE-SIGN-DEPENDENT MODULATION DURING THE SOLAR MINIMUM OF 2006 TO 2009

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Felice, V.; Munini, R.; Vos, E. E.

    The PAMELA space experiment, in orbit since 2006, has measured cosmic rays (CRs) through the most recent period of minimum solar activity with the magnetic field polarity as A < 0. During this entire time, galactic electrons and protons have been detected down to 70 MV and 400 MV, respectively, and their differential variation in intensity with time has been monitored with unprecedented accuracy. These observations are used to show how differently electrons and protons responded to the quiet modulation conditions that prevailed from 2006 to 2009. It is well known that particle drifts, as one of four major mechanisms for the solar modulation of CRs, cause charge-sign-dependent solar modulation. Periods of minimum solar activity provide optimal conditions in which to study these drift effects. The observed behavior is compared to the solutions of a three-dimensional model for CRs in the heliosphere, including drifts. The numerical results confirm that the difference in the evolution of electron and proton spectra during the last prolonged solar minimum is attributed to a large extent to particle drifts. We therefore present new evidence of charge-sign-dependent solar modulation, with a perspective on its peculiarities for the observed period from 2006 to 2009.

  9. Relationship between fluid bed aerosol generator operation and the aerosol produced

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, R.L.; Yerkes, K.

    1980-12-01

    The relationships between bed operation in a fluid bed aerosol generator and aerosol output were studied. A two-inch diameter fluid bed aerosol generator (FBG) was constructed using stainless steel powder as a fluidizing medium. Fly ash from coal combustion was aerosolized and the influence of FBG operating parameters on aerosol mass median aerodynamic diameter (MMAD), geometric standard deviation (σg) and concentration was examined. In an effort to extend observations on large fluid beds to small beds using fine bed particles, minimum fluidizing velocities and elutriation constants were computed. Although FBG minimum fluidizing velocity agreed well with calculations, FBG elutriation constant did not. The results of this study show that the properties of aerosols produced by a FBG depend on fluid bed height and air flow through the bed after the minimum fluidizing velocity is exceeded.

  10. The Minimum-Mass Surface Density of the Solar Nebula using the Disk Evolution Equation

    NASA Technical Reports Server (NTRS)

    Davis, Sanford S.

    2005-01-01

    The Hayashi minimum-mass power law representation of the pre-solar nebula (Hayashi 1981, Prog. Theor. Phys. 70, 35) is revisited using analytic solutions of the disk evolution equation. A new cumulative-planetary-mass model (an integrated form of the surface density) is shown to predict a smoother surface density compared with methods based on direct estimates of surface density from planetary data. First, a best-fit transcendental function is applied directly to the cumulative planetary mass data with the surface density obtained by direct differentiation. Next a solution to the time-dependent disk evolution equation is parametrically adapted to the planetary data. The latter model indicates a decay rate of r^(-1/2) in the inner disk followed by a rapid decay which results in a sharper outer boundary than predicted by the minimum mass model. The model is shown to be a good approximation to the finite-size early Solar Nebula and by extension to extrasolar protoplanetary disks.

  11. Locality for quantum systems on graphs depends on the number field

    NASA Astrophysics Data System (ADS)

    Hall, H. Tracy; Severini, Simone

    2013-07-01

    Adapting a definition of Aaronson and Ambainis (2005 Theory Comput. 1 47-79), we call a quantum dynamics on a digraph saturated Z-local if the nonzero transition amplitudes specifying the unitary evolution are in exact correspondence with the directed edges (including loops) of the digraph. This idea appears recurrently in a variety of contexts including angular momentum, quantum chaos, and combinatorial matrix theory. Complete characterization of the digraph properties that allow such a process to exist is a long-standing open question that can also be formulated in terms of minimum rank problems. We prove that saturated Z-local dynamics involving complex amplitudes occur on a proper superset of the digraphs that allow restriction to the real numbers or, even further, the rationals. Consequently, among these fields, complex numbers guarantee the largest possible choice of topologies supporting a discrete quantum evolution. A similar construction separates complex numbers from the skew field of quaternions. The result proposes a concrete ground for distinguishing between complex and quaternionic quantum mechanics.

  12. The interaction between giant gaseous protoplanets and the primitive solar nebula

    NASA Technical Reports Server (NTRS)

    Cameron, A. G. W.

    1979-01-01

    The manner in which a giant gaseous protoplanet becomes embedded in the primitive solar nebula determines surface boundary conditions which must be used in studying the evolution of such objects. On the one hand, if the system resembles a contact binary system, then the envelope of the protoplanet should approach the entropy of the surrounding nebula. On the other hand angular momentum transfer by resonance and tidal effects between the nebula and the protoplanet may cause the nebula to exhibit a zone of avoidance near the protoplanet, thus inhibiting exchange of material. This problem has been studied with a computer program developed by D. N. C. Lin which simulates disk hydrodynamics by particle motions with dissipation. These studies suggest that for expected values of the protoplanet/protosun mass ratios, significant inhibition of mass exchange is likely, so that it is a reasonable next step to undertake protoplanet evolution studies with the imposition of minimum protoplanet surface temperatures.

  13. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    PubMed

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.

  14. Design of transonic airfoil sections using a similarity theory

    NASA Technical Reports Server (NTRS)

    Nixon, D.

    1978-01-01

    A study of the available methods for transonic airfoil and wing design indicates that the most powerful technique is the numerical optimization procedure. However, the computer time for this method is relatively large because of the amount of computation required in the searches during optimization. The optimization method requires that base and calibration solutions be computed to determine a minimum drag direction. The design space is then computationally searched in this direction; it is these searches that dominate the computation time. A recent similarity theory allows certain transonic flows to be calculated rapidly from the base and calibration solutions. In this paper the application of the similarity theory to design problems is examined with the object of at least partially eliminating the costly searches of the design optimization method. An example of an airfoil design is presented.

  15. EON: software for long time simulations of atomic scale systems

    NASA Astrophysics Data System (ADS)

    Chill, Samuel T.; Welborn, Matthew; Terrell, Rye; Zhang, Liang; Berthet, Jean-Claude; Pedersen, Andreas; Jónsson, Hannes; Henkelman, Graeme

    2014-07-01

    The EON software is designed for simulations of the state-to-state evolution of atomic scale systems over timescales greatly exceeding that of direct classical dynamics. States are defined as collections of atomic configurations from which a minimization of the potential energy gives the same inherent structure. The time evolution is assumed to be governed by rare events, where transitions between states are uncorrelated and infrequent compared with the timescale of atomic vibrations. Several methods for calculating the state-to-state evolution have been implemented in EON, including parallel replica dynamics, hyperdynamics and adaptive kinetic Monte Carlo. Global optimization methods, including simulated annealing, basin hopping and minima hopping are also implemented. The software has a client/server architecture where the computationally intensive evaluations of the interatomic interactions are calculated on the client-side and the state-to-state evolution is managed by the server. The client supports optimization for different computer architectures to maximize computational efficiency. The server is written in Python so that developers have access to the high-level functionality without delving into the computationally intensive components. Communication between the server and clients is abstracted so that calculations can be deployed on a single machine, clusters using a queuing system, large parallel computers using a message passing interface, or within a distributed computing environment. A generic interface to the evaluation of the interatomic interactions is defined so that empirical potentials, such as in LAMMPS, and density functional theory as implemented in VASP and GPAW can be used interchangeably. Examples are given to demonstrate the range of systems that can be modeled, including surface diffusion and island ripening of adsorbed atoms on metal surfaces, molecular diffusion on the surface of ice and global structural optimization of nanoparticles.
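
    To make the rare-event picture concrete, the sketch below (Python, not EON code) shows the rejection-free kinetic Monte Carlo step that methods such as adaptive KMC build on: an escape event is chosen with probability proportional to its rate and the clock advances by an exponentially distributed increment. The rates are hypothetical placeholders for what a client would compute from saddle-point searches.

```python
# Sketch of one rejection-free kinetic Monte Carlo step (illustrative only).
import math
import random

def kmc_step(rates):
    """Pick an event with probability proportional to its rate and return
    (event_index, time_increment)."""
    total = sum(rates)
    r = random.random() * total
    acc = 0.0
    for i, k in enumerate(rates):
        acc += k
        if r < acc:
            break
    dt = -math.log(1.0 - random.random()) / total   # exponential waiting time
    return i, dt

# Hypothetical escape rates (1/s) out of the current state
rates = [1.2e6, 3.4e5, 8.0e3]
event, dt = kmc_step(rates)
print(f"chose event {event}, advanced time by {dt:.3e} s")
```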

  16. We introduce an algorithm for the simultaneous reconstruction of faults and slip fields.

    NASA Astrophysics Data System (ADS)

    Volkov, D.

    2017-12-01

    We introduce an algorithm for the simultaneous reconstruction of faults and slip fields on those faults. We define a regularized functional to be minimized for the reconstruction. We prove that the minimum of that functional converges to the unique solution of the related fault inverse problem. Due to inherent uncertainties in measurements, rather than seeking a deterministic solution to the fault inverse problem, we consider a Bayesian approach. The advantage of such an approach is that we obtain a way of quantifying uncertainties as part of our final answer. On the downside, this Bayesian approach leads to a very large computation. To contend with the size of this computation we developed an algorithm for the numerical solution to the stochastic minimization problem which can be easily implemented on a parallel multi-core platform and we discuss techniques to save on computational time. After showing how this algorithm performs on simulated data and assessing the effect of noise, we apply it to measured data. The data was recorded during a slow slip event in Guerrero, Mexico.

  17. SANs and Large Scale Data Migration at the NASA Center for Computational Sciences

    NASA Technical Reports Server (NTRS)

    Salmon, Ellen M.

    2004-01-01

    Evolution and migration are a way of life for provisioners of high-performance mass storage systems that serve high-end computers used by climate and Earth and space science researchers: the compute engines come and go, but the data remains. At the NASA Center for Computational Sciences (NCCS), disk and tape SANs are deployed to provide high-speed I/O for the compute engines and the hierarchical storage management systems. Along with gigabit Ethernet, they also enable the NCCS's latest significant migration: the transparent transfer of 300 TB of legacy HSM data into the new Sun SAM-QFS cluster.

  18. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    PubMed

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case there exist several ones. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose a MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.

  19. The QSE-Reduced Nuclear Reaction Network for Silicon Burning

    NASA Astrophysics Data System (ADS)

    Hix, W. Raphael; Parete-Koon, Suzanne T.; Freiburghaus, Christian; Thielemann, Friedrich-Karl

    2007-09-01

    Iron and neighboring nuclei are formed in massive stars shortly before core collapse and during their supernova outbursts, as well as during thermonuclear supernovae. Complete and incomplete silicon burning are responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. Because of the large number of nuclei involved, accurate modeling of silicon burning is computationally expensive. However, examination of the physics of silicon burning has revealed that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present a new hybrid equilibrium-network scheme which takes advantage of this quasi-equilibrium in order to reduce the number of independent variables calculated. This allows accurate prediction of the nuclear abundance evolution, deleptonization, and energy generation at a greatly reduced computational cost when compared to a conventional nuclear reaction network. During silicon burning, the resultant QSE-reduced network is approximately an order of magnitude faster than the full network it replaces and requires the tracking of less than a third as many abundance variables, without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multidimensional applications.

  20. Trinity to Trinity 1945-2015

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moniz, Ernest; Carr, Alan; Bethe, Hans

    The Trinity Test of July 16, 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer Los Alamos National Laboratory's goal is to do this virtually, in 3D. Trinity was the culmination of a fantastic effort of groundbreaking science and engineering by hundreds of men and women at Los Alamos and other Manhattan Project sites. It took them less than two years to change the world. The Laboratory is marking the 70th anniversary of the Trinity Test because it not only ushered in the Nuclear Age, but with it the origin of today’s advanced supercomputing. We live in the Age of Supercomputers due in large part to nuclear weapons science here at Los Alamos. National security science, and nuclear weapons science in particular, at Los Alamos National Laboratory have provided a key motivation for the evolution of large-scale scientific computing. Beginning with the Manhattan Project there has been a constant stream of increasingly significant, complex problems in nuclear weapons science whose timely solutions demand larger and faster computers. The relationship between national security science at Los Alamos and the evolution of computing is one of interdependence.

  1. Trinity to Trinity 1945-2015

    ScienceCinema

    Moniz, Ernest; Carr, Alan; Bethe, Hans; Morrison, Phillip; Ramsay, Norman; Teller, Edward; Brixner, Berlyn; Archer, Bill; Agnew, Harold; Morrison, John

    2018-01-16

    The Trinity Test of July 16, 1945 was the first full-scale, real-world test of a nuclear weapon; with the new Trinity supercomputer Los Alamos National Laboratory's goal is to do this virtually, in 3D. Trinity was the culmination of a fantastic effort of groundbreaking science and engineering by hundreds of men and women at Los Alamos and other Manhattan Project sites. It took them less than two years to change the world. The Laboratory is marking the 70th anniversary of the Trinity Test because it not only ushered in the Nuclear Age, but with it the origin of today’s advanced supercomputing. We live in the Age of Supercomputers due in large part to nuclear weapons science here at Los Alamos. National security science, and nuclear weapons science in particular, at Los Alamos National Laboratory have provided a key motivation for the evolution of large-scale scientific computing. Beginning with the Manhattan Project there has been a constant stream of increasingly significant, complex problems in nuclear weapons science whose timely solutions demand larger and faster computers. The relationship between national security science at Los Alamos and the evolution of computing is one of interdependence.

  2. On the critical flame radius and minimum ignition energy for spherical flame initiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zheng; Burke, M. P.; Ju, Yiguang

    2011-01-01

    Spherical flame initiation from an ignition kernel is studied theoretically and numerically using different fuel/oxygen/helium/argon mixtures (fuel: hydrogen, methane, and propane). The emphasis is placed on investigating the critical flame radius controlling spherical flame initiation and its correlation with the minimum ignition energy. It is found that the critical flame radius is different from the flame thickness and the flame ball radius and that their relationship depends strongly on the Lewis number. Three different flame regimes in terms of the Lewis number are observed and a new criterion for the critical flame radius is introduced. For mixtures with Lewis number larger than a critical Lewis number above unity, the critical flame radius is smaller than the flame ball radius but larger than the flame thickness. As a result, the minimum ignition energy can be substantially over-predicted (under-predicted) based on the flame ball radius (the flame thickness). The results also show that the minimum ignition energy for successful spherical flame initiation is proportional to the cube of the critical flame radius. Furthermore, preferential diffusion of heat and mass (i.e. the Lewis number effect) is found to play an important role in both spherical flame initiation and flame kernel evolution after ignition. It is shown that the critical flame radius and the minimum ignition energy increase significantly with the Lewis number. Therefore, for transportation fuels with large Lewis numbers, blending of small molecule fuels or thermal and catalytic cracking will significantly reduce the minimum ignition energy.

  3. Baseline Water Demand at Forward Operating Bases

    DTIC Science & Technology

    2013-09-15

    population, often equaling or exceeding the military population: • Brigade: 6000 soldiers • Battalion: 1000 soldiers • Company: 150 soldiers. ERDC/CERL TR...requirements for a company outpost (COP) of 120 personnel (PAX) in the format that the computer tool generates. This tool generates a basic sus...facilities world-wide through several large contractors. One contractor, Kellogg, Brown, and Root (KBR), used a minimum planning factor of 18.4

  4. Directional cultural change by modification and replacement of memes.

    PubMed

    Cardoso, Gonçalo C; Atwell, Jonathan W

    2011-01-01

    Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations--modification of memes--to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. © 2010 The Author(s). Evolution© 2010 The Society for the Study of Evolution.

  5. Computing Smallest Intervention Strategies for Multiple Metabolic Networks in a Boolean Model

    PubMed Central

    Lu, Wei; Song, Jiangning; Akutsu, Tatsuya

    2015-01-01

    This article considers the problem whereby, given two metabolic networks N1 and N2, a set of source compounds, and a set of target compounds, we must find the minimum set of reactions whose removal (knockout) ensures that the target compounds are not producible in N1 but are producible in N2. Similar studies exist for the problem of finding the minimum knockout with the smallest side effect for a single network. However, if technologies of external perturbations are advanced in the near future, it may be important to develop methods of computing the minimum knockout for multiple networks (MKMN). Flux balance analysis (FBA) is efficient if a well-polished model is available. However, that is not always the case. Therefore, in this article, we study MKMN in Boolean models and an elementary mode (EM)-based model. Integer linear programming (ILP)-based methods are developed for these models, since MKMN is NP-complete for both the Boolean model and the EM-based model. Computer experiments are conducted with metabolic networks of Clostridium perfringens SM101 and Bifidobacterium longum DJO10A, respectively known as bad bacteria and good bacteria for the human intestine. The results show that larger networks are more likely to have MKMN solutions. However, solving for these larger networks takes a very long time, and often the computation cannot be completed. This is reasonable, because small networks do not have many alternative pathways, making it difficult to satisfy the MKMN condition, whereas in large networks the number of candidate solutions explodes. Our developed software minFvskO is available online. PMID:25684199

  6. Iterative Minimum Variance Beamformer with Low Complexity for Medical Ultrasound Imaging.

    PubMed

    Deylami, Ali Mohades; Asl, Babak Mohammadzadeh

    2018-06-04

    The minimum variance beamformer (MVB) improves the resolution and contrast of medical ultrasound images compared with the delay-and-sum (DAS) beamformer. The weight vector of this beamformer must be calculated for each imaging point independently, at the cost of increased computational complexity. The large number of required calculations limits the use of this beamformer in real-time systems. A beamformer is proposed based on the MVB with lower computational complexity while preserving its advantages. This beamformer avoids matrix inversion, which is the most complex part of the MVB, by solving the optimization problem iteratively. The received signals from two imaging points close together do not vary much in medical ultrasound imaging. Therefore, using the previously optimized weight vector for one point as the initial weight vector for the new neighboring point can improve the convergence speed and decrease the computational complexity. The proposed method was applied to several data sets, and it has been shown that the method can regenerate the results obtained by the MVB while the order of complexity is decreased from O(L³) to O(L²). Copyright © 2018 World Federation for Ultrasound in Medicine and Biology. Published by Elsevier Inc. All rights reserved.
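
    For reference, the closed-form minimum variance (Capon) weights that the iterative scheme is designed to approximate can be sketched as below in Python; the subarray length, synthetic snapshot data, and diagonal loading are assumptions, and the paper's actual contribution (the warm-started iterative solver) is not reproduced here.

```python
# Standard closed-form minimum variance weights, shown only as the reference
# quantity the iterative beamformer approximates. Dimensions/data are made up.
import numpy as np

L = 16                                        # subarray length (assumed)
rng = np.random.default_rng(0)
snapshots = rng.standard_normal((L, 200))     # stand-in for delayed RF data
R = snapshots @ snapshots.conj().T / 200      # sample spatial covariance
R += 1e-2 * np.trace(R) / L * np.eye(L)       # diagonal loading for stability
a = np.ones(L)                                # steering vector after delays

w = np.linalg.solve(R, a)                     # R^{-1} a via a linear solve
w /= a.conj() @ w                             # w = R^{-1}a / (a^H R^{-1} a)
output = w.conj() @ snapshots                 # beamformed output samples
print(w.shape, output.shape)
```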

  7. Interspecific geographic range size-body size relationship and the diversification dynamics of Neotropical furnariid birds.

    PubMed

    Inostroza-Michael, Oscar; Hernández, Cristián E; Rodríguez-Serrano, Enrique; Avaria-Llautureo, Jorge; Rivadeneira, Marcelo M

    2018-05-01

    Among the earliest macroecological patterns documented is the relationship between geographic range size and body size, characterized by a minimum geographic range size imposed by the species' body size. This boundary for the geographic range size increases linearly with body size and has been proposed to have implications for lineage evolution and conservation. Nevertheless, the macroevolutionary processes involved in the origin of this boundary and its consequences for lineage diversification have been poorly explored. We evaluate the macroevolutionary consequences of the difference (hereafter the distance) between the observed and the minimum range sizes required by the species' body size, to untangle its role in the diversification of a Neotropical species-rich bird clade using trait-dependent diversification models. We show that speciation rate is a positive hump-shaped function of the distance to the lower boundary. Species with the highest and lowest distances to the minimum range size had lower speciation rates, while species at intermediate distances had the highest speciation rates. Further, our results suggest that the distance to the minimum range size is a macroevolutionary constraint that affects the diversification process responsible for the origin of this macroecological pattern in a more complex way than previously envisioned. © 2018 The Author(s). Evolution © 2018 The Society for the Study of Evolution.

  8. A 20-year period of orthotopic liver transplantation activity in a single center: a time series analysis performed using the R Statistical Software.

    PubMed

    Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U

    2009-05-01

    In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center for a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs to adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R Statistical Software (Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLT/mo for 2007 calculated with the Holt-Winters exponential smoothing applied to the previous period 1987-2006 helped to identify the months where there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
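
    The study itself used the R Statistical Software; a comparable Holt-Winters fit and 12-month forecast can be sketched in Python with statsmodels, as below, where the monthly OLT counts are synthetic placeholders rather than the center's data.

```python
# Holt-Winters exponential smoothing on a synthetic monthly series, as a
# rough Python analogue of the R workflow described above (not their data).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(1)
idx = pd.date_range("1987-01", periods=240, freq="MS")     # 20 years, monthly
counts = pd.Series(2 + 0.01 * np.arange(240)
                   + np.sin(np.arange(240) * 2 * np.pi / 12)
                   + rng.normal(0, 0.5, 240), index=idx).clip(lower=0)

fit = ExponentialSmoothing(counts, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
forecast_next_year = fit.forecast(12)          # predicted procedures per month
print(forecast_next_year.round(1))
```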

  9. Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster.

    PubMed

    Jindal, Shweta; Chiriki, Siva; Bulusu, Satya S

    2017-05-28

    We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and speed of empirical potentials. For large sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform the global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics.

  10. Spherical harmonics based descriptor for neural network potentials: Structure and dynamics of Au147 nanocluster

    NASA Astrophysics Data System (ADS)

    Jindal, Shweta; Chiriki, Siva; Bulusu, Satya S.

    2017-05-01

    We propose a highly efficient method for fitting the potential energy surface of a nanocluster using a spherical harmonics based descriptor integrated with an artificial neural network. Our method achieves the accuracy of quantum mechanics and speed of empirical potentials. For large sized gold clusters (Au147), the computational time for accurate calculation of energy and forces is about 1.7 s, which is faster by several orders of magnitude compared to density functional theory (DFT). This method is used to perform the global minimum optimizations and molecular dynamics simulations for Au147, and it is found that its global minimum is not an icosahedron. The isomer that can be regarded as the global minimum is found to be 4 eV lower in energy than the icosahedron and is confirmed from DFT. The geometry of the obtained global minimum contains 105 atoms on the surface and 42 atoms in the core. A brief study on the fluxionality in Au147 is performed, and it is concluded that Au147 has a dynamic surface, thus opening a new window for studying its reaction dynamics.

  11. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 2 2010-04-01 2010-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  12. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 2 2011-04-01 2011-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  13. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 2 2012-04-01 2012-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  14. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 2 2014-04-01 2014-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  15. 20 CFR Appendix V to Subpart C of... - Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 2 2013-04-01 2013-04-01 false Computing the Special Minimum Primary Insurance Amount and Related Maximum Family Benefits V Appendix V to Subpart C of Part 404 Employees...- ) Computing Primary Insurance Amounts Pt. 404, Subpt. C, App. V Appendix V to Subpart C of Part 404—Computing...

  16. Rise and fall of political complexity in island South-East Asia and the Pacific.

    PubMed

    Currie, Thomas E; Greenhill, Simon J; Gray, Russell D; Hasegawa, Toshikazu; Mace, Ruth

    2010-10-14

    There is disagreement about whether human political evolution has proceeded through a sequence of incremental increases in complexity, or whether larger, non-sequential increases have occurred. The extent to which societies have decreased in complexity is also unclear. These debates have continued largely in the absence of rigorous, quantitative tests. We evaluated six competing models of political evolution in Austronesian-speaking societies using phylogenetic methods. Here we show that in the best-fitting model political complexity rises and falls in a sequence of small steps. This is closely followed by another model in which increases are sequential but decreases can be either sequential or in bigger drops. The results indicate that large, non-sequential jumps in political complexity have not occurred during the evolutionary history of these societies. This suggests that, despite the numerous contingent pathways of human history, there are regularities in cultural evolution that can be detected using computational phylogenetic methods.

  17. An experimental and computational evolution-based method to study a mode of co-evolution of overlapping open reading frames in the AAV2 viral genome.

    PubMed

    Kawano, Yasuhiro; Neeley, Shane; Adachi, Kei; Nakai, Hiroyuki

    2013-01-01

    Overlapping open reading frames (ORFs) in viral genomes undergo co-evolution; however, how individual amino acids coded by overlapping ORFs are structurally, functionally, and co-evolutionarily constrained remains difficult to address by conventional homologous sequence alignment approaches. We report here a new experimental and computational evolution-based methodology to address this question and report its preliminary application to elucidating a mode of co-evolution of the frame-shifted overlapping ORFs in the adeno-associated virus (AAV) serotype 2 viral genome. These ORFs encode both capsid VP protein and non-structural assembly-activating protein (AAP). To show proof of principle of the new method, we focused on the evolutionarily conserved QVKEVTQ and KSKRSRR motifs, a pair of overlapping heptapeptides in VP and AAP, respectively. In the new method, we first identified a large number of capsid-forming VP3 mutants and functionally competent AAP mutants of these motifs from mutant libraries by experimental directed evolution under no co-evolutionary constraints. We used Illumina sequencing to obtain a large dataset and then statistically assessed the viability of VP and AAP heptapeptide mutants. The obtained heptapeptide information was then integrated into an evolutionary algorithm, with which VP and AAP were co-evolved from random or native nucleotide sequences in silico. As a result, we demonstrate that these two heptapeptide motifs could exhibit high degeneracy if coded by separate nucleotide sequences, and elucidate how overlap-evoked co-evolutionary constraints play a role in making the VP and AAP heptapeptide sequences into the present shape. Specifically, we demonstrate that two valine (V) residues and β-strand propensity in QVKEVTQ are structurally important, the strongly negative and hydrophilic nature of KSKRSRR is functionally important, and overlap-evoked co-evolution imposes strong constraints on serine (S) residues in KSKRSRR, despite high degeneracy of the motifs in the absence of co-evolutionary constraints.
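
    The overlap-evoked coupling the authors analyze can be illustrated with a toy example (Python, assuming Biopython is available). The nucleotide sequence below is invented so that frame 0 encodes the VP motif QVKEVTQ, while the +1 frame encodes a different heptapeptide (not the real AAP motif, and not the AAV2 genome); a single substitution that is synonymous in the VP frame introduces a stop codon in the overlapping frame, which is the kind of constraint the method probes.

```python
# Toy illustration of overlap-evoked coupling between frame-shifted ORFs.
# The sequence is made up; it is not the AAV2 cap gene.
from Bio.Seq import Seq

nt = Seq("CAAGTTAAAGAAGTTACCCAAG")      # 22 nt, hypothetical

def two_frames(seq):
    frame0 = seq[0:21].translate()      # 7 codons in frame 0 -> 'QVKEVTQ'
    frame1 = seq[1:22].translate()      # 7 codons in the +1 frame
    return str(frame0), str(frame1)

print("wild type:", two_frames(nt))
# Substitution at position 6 (GTT -> GTG): synonymous (Val) in frame 0,
# but it turns the overlapping +1-frame codon TTA into the stop codon TGA.
mut = nt[:5] + "G" + nt[6:]
print("mutant   :", two_frames(mut))
```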

  18. Computing and Visualizing Reachable Volumes for Maneuvering Satellites

    NASA Astrophysics Data System (ADS)

    Jiang, M.; de Vries, W.; Pertica, A.; Olivier, S.

    2011-09-01

    Detecting and predicting maneuvering satellites is an important problem for Space Situational Awareness. The spatial envelope of all possible locations within reach of such a maneuvering satellite is known as the Reachable Volume (RV). As soon as custody of a satellite is lost, calculating the RV and its subsequent time evolution is a critical component in the rapid recovery of the satellite. In this paper, we present a Monte Carlo approach to computing the RV for a given object. Essentially, our approach samples all possible trajectories by randomizing thrust-vectors, thrust magnitudes and time of burn. At any given instance, the distribution of the "point-cloud" of the virtual particles defines the RV. For short orbital time-scales, the temporal evolution of the point-cloud can result in complex, multi-reentrant manifolds. Visualization plays an important role in gaining insight and understanding into this complex and evolving manifold. In the second part of this paper, we focus on how to effectively visualize the large number of virtual trajectories and the computed RV. We present a real-time out-of-core rendering technique for visualizing the large number of virtual trajectories. We also examine different techniques for visualizing the computed volume of probability density distribution, including volume slicing, convex hull and isosurfacing. We compare and contrast these techniques in terms of computational cost and visualization effectiveness, and describe the main implementation issues encountered during our development process. Finally, we will present some of the results from our end-to-end system for computing and visualizing RVs using examples of maneuvering satellites.
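
    A minimal sketch of the sampling step described above is given below in Python; thrust directions, magnitudes, and burn times are randomized as in the text, but the orbit propagator is replaced by a trivial field-free drift placeholder, and all numbers are hypothetical.

```python
# Monte Carlo sampling behind a reachable-volume point cloud (illustrative):
# randomize thrust direction, magnitude, and burn time, then propagate.
import numpy as np

rng = np.random.default_rng(42)

def random_unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def propagate(r, v, dt):
    """Placeholder propagator (straight-line coast); a real implementation
    would integrate two-body or higher-fidelity dynamics."""
    return r + v * dt, v

n = 10000
r0 = np.array([7000e3, 0.0, 0.0])               # initial position (m)
v0 = np.array([0.0, 7.5e3, 0.0])                # initial velocity (m/s)

dv_dir = random_unit_vectors(n)                 # random thrust directions
dv_mag = rng.uniform(0.0, 50.0, size=(n, 1))    # up to 50 m/s of delta-v
t_burn = rng.uniform(0.0, 3600.0, size=(n, 1))  # burn time within one hour

cloud = np.empty((n, 3))
for i in range(n):
    r, v = propagate(r0, v0, t_burn[i, 0])              # coast until the burn
    v = v + dv_mag[i] * dv_dir[i]                       # impulsive maneuver
    r, _ = propagate(r, v, 3600.0 - t_burn[i, 0])       # coast to epoch + 1 h
    cloud[i] = r

print("point-cloud bounding box (km):",
      (cloud.max(axis=0) - cloud.min(axis=0)) / 1e3)
```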

  19. cOSPREY: A Cloud-Based Distributed Algorithm for Large-Scale Computational Protein Design

    PubMed Central

    Pan, Yuchao; Dong, Yuxi; Zhou, Jingtian; Hallen, Mark; Donald, Bruce R.; Xu, Wei

    2016-01-01

    Finding the global minimum energy conformation (GMEC) in a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to the widely used protein design software OSPREY, to allow the original design framework to scale to commercial cloud infrastructures. We propose several novel designs that integrate both algorithmic and system optimizations, such as GMEC-specific pruning, state-search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches. PMID:27154509

  20. Unveiling the nature of dark matter with high redshift 21 cm line experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evoli, C.; Mesinger, A.; Ferrara, A., E-mail: carmelo.evoli@desy.de, E-mail: andrei.mesinger@sns.it, E-mail: andrea.ferrara@sns.it

    2014-11-01

    Observations of the redshifted 21 cm line from neutral hydrogen will open a new window on the early Universe. By influencing the thermal and ionization history of the intergalactic medium (IGM), annihilating dark matter (DM) can leave a detectable imprint in the 21 cm signal. Building on the publicly available 21cmFAST code, we compute the 21 cm signal for a 10 GeV WIMP DM candidate. The most pronounced role of DM annihilations is in heating the IGM earlier and more uniformly than astrophysical sources of X-rays. This leaves several unambiguous, qualitative signatures in the redshift evolution of the large-scale (k ≅ 0.1 Mpc^-1) 21 cm power amplitude: (i) the local maximum (peak) associated with IGM heating can be lower than the other maxima; (ii) the heating peak can occur while the IGM is in emission against the cosmic microwave background (CMB); (iii) there can be a dramatic drop in power (a global minimum) corresponding to the epoch when the IGM temperature is comparable to the CMB temperature. These signatures are robust to astrophysical uncertainties, and will be easily detectable with second generation interferometers. We also briefly show that decaying warm dark matter has a negligible role in heating the IGM.

  1. Computationally-Efficient Minimum-Time Aircraft Routes in the Presence of Winds

    NASA Technical Reports Server (NTRS)

    Jardin, Matthew R.

    2004-01-01

    A computationally efficient algorithm for minimizing the flight time of an aircraft in a variable wind field has been invented. The algorithm, referred to as Neighboring Optimal Wind Routing (NOWR), is based upon neighboring-optimal-control (NOC) concepts and achieves minimum-time paths by adjusting aircraft heading according to wind conditions at an arbitrary number of wind measurement points along the flight route. The NOWR algorithm may either be used in a fast-time mode to compute minimum-time routes prior to flight, or may be used in a feedback mode to adjust aircraft heading in real-time. By traveling minimum-time routes instead of direct great-circle (direct) routes, flights across the United States can save an average of about 7 minutes, and as much as one hour of flight time during periods of strong jet-stream winds. The neighboring optimal routes computed via the NOWR technique have been shown to be within 1.5 percent of the absolute minimum-time routes for flights across the continental United States. On a typical 450-MHz Sun Ultra workstation, the NOWR algorithm produces complete minimum-time routes in less than 40 milliseconds. This corresponds to a rate of 25 optimal routes per second. The closest comparable optimization technique runs approximately 10 times slower. Airlines currently use various trial-and-error search techniques to determine which of a set of commonly traveled routes will minimize flight time. These algorithms are too computationally expensive for use in real-time systems, or in systems where many optimal routes need to be computed in a short amount of time. Instead of operating in real-time, airlines will typically plan a trajectory several hours in advance using wind forecasts. If winds change significantly from forecasts, the resulting flights will no longer be minimum-time. The need for a computationally efficient wind-optimal routing algorithm is even greater in the case of new air-traffic-control automation concepts. For air-traffic-control automation, thousands of wind-optimal routes may need to be computed and checked for conflicts in just a few minutes. These factors motivated the need for a more efficient wind-optimal routing algorithm.

  2. Gradient gravitational search: An efficient metaheuristic algorithm for global optimization.

    PubMed

    Dash, Tirtharaj; Sahu, Prabhat K

    2015-05-30

    The adaptation of novel techniques from computational chemistry to problems involving large, flexible molecules is taking center stage with regard to algorithmic efficiency, computational cost, and accuracy. In this article, the gradient-based gravitational search (GGS) algorithm, which uses analytical gradients for fast minimization to the nearest local minimum, is reported. Its efficiency as a metaheuristic approach is compared with that of Gradient Tabu Search and other global-optimization algorithms such as Gravitational Search, Cuckoo Search, and Backtracking Search. Moreover, the GGS approach has also been applied to computational chemistry problems, namely finding the minimum potential energy of two-dimensional and three-dimensional off-lattice protein models. The simulation results reveal the relative stability and physical accuracy of the protein models at efficient computational cost. © 2015 Wiley Periodicals, Inc.
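
    The combination of a population-based search with analytical-gradient local minimization can be illustrated with a toy Python sketch; this is not the authors' GGS code, and the test function, population size, and update rule below are all assumptions made for illustration.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            """Toy multi-minimum objective (Himmelblau's function)."""
            return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

        def grad_f(x):
            """Analytical gradient of the objective, used for fast local descent."""
            a = x[0]**2 + x[1] - 11
            b = x[0] + x[1]**2 - 7
            return np.array([4*a*x[0] + 2*b, 2*a + 4*b*x[1]])

        def toy_gradient_gravitational_search(n_agents=20, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            pos = rng.uniform(-5, 5, size=(n_agents, 2))
            vel = np.zeros_like(pos)
            for it in range(n_iter):
                fit = np.array([f(p) for p in pos])
                # Heavier "mass" for better agents (normalized, as in gravitational search)
                worst, best = fit.max(), fit.min()
                mass = (worst - fit) / (worst - best + 1e-12)
                mass = mass / (mass.sum() + 1e-12)
                g = 10.0 * np.exp(-4.0 * it / n_iter)      # decaying gravitational constant
                for i in range(n_agents):
                    acc = np.zeros(2)
                    for j in range(n_agents):
                        if i == j:
                            continue
                        diff = pos[j] - pos[i]
                        acc += g * mass[j] * diff / (np.linalg.norm(diff) + 1e-12)
                    vel[i] = rng.uniform(0, 1) * vel[i] + acc
                    pos[i] = pos[i] + vel[i]
            # Gradient-based polishing of the best agent: the "gradient" ingredient of GGS
            best_agent = pos[np.argmin([f(p) for p in pos])]
            res = minimize(f, best_agent, jac=grad_f, method="BFGS")
            return res.x, res.fun

        if __name__ == "__main__":
            x_min, f_min = toy_gradient_gravitational_search()
            print("local minimum found near", x_min, "with value", f_min)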

  3. Computational method for the correction of proximity effect in electron-beam lithography (Poster Paper)

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Yuan; Owen, Gerry; Pease, Roger Fabian W.; Kailath, Thomas

    1992-07-01

    Dose correction is commonly used to compensate for the proximity effect in electron lithography. The computation of the required dose modulation is usually carried out using 'self-consistent' algorithms that work by solving a large number of simultaneous linear equations. However, there are two major drawbacks: the resulting correction is not exact, and the computation time is excessively long. A computational scheme, as shown in Figure 1, has been devised to eliminate these problems by deconvolution of the point spread function in the pattern domain. The method is iterative, based on a steepest-descent algorithm. The scheme has been successfully tested on a simple pattern with a minimum feature size of 0.5 micrometers, exposed on a MEBES tool at 10 keV in 0.2 micrometers of PMMA resist on a silicon substrate.
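
    A minimal numerical sketch of this kind of iterative, steepest-descent dose correction, assuming (for illustration only) a symmetric Gaussian point spread function and a binary target pattern; the grid size, blur width, and step size are arbitrary choices, not values from the paper.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def correct_dose(target, sigma=2.0, step=0.5, n_iter=200):
            """Iteratively adjust the dose so that PSF * dose approximates the target.

            For a symmetric PSF, the steepest-descent update for the least-squares
            misfit ||PSF*dose - target||^2 is dose <- dose + step * PSF*(target - PSF*dose).
            """
            dose = target.astype(float).copy()
            for _ in range(n_iter):
                exposure = gaussian_filter(dose, sigma)          # forward blur: PSF * dose
                residual = target - exposure
                dose += step * gaussian_filter(residual, sigma)  # gradient step (PSF is symmetric)
                np.clip(dose, 0.0, None, out=dose)               # doses cannot be negative
            return dose

        if __name__ == "__main__":
            # Toy pattern: two narrow lines that would blur into each other without correction.
            target = np.zeros((64, 64))
            target[:, 28:30] = 1.0
            target[:, 34:36] = 1.0
            dose = correct_dose(target)
            err_uncorrected = np.abs(gaussian_filter(target, 2.0) - target).max()
            err_corrected = np.abs(gaussian_filter(dose, 2.0) - target).max()
            print(f"max exposure error: uncorrected {err_uncorrected:.3f}, corrected {err_corrected:.3f}")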

  4. Computational domain length and Reynolds number effects on large-scale coherent motions in turbulent pipe flow

    NASA Astrophysics Data System (ADS)

    Feldmann, Daniel; Bauer, Christian; Wagner, Claus

    2018-03-01

    We present results from direct numerical simulations (DNS) of turbulent pipe flow at shear Reynolds numbers up to Reτ = 1500 using different computational domains with lengths up to ?. The objectives are to analyse the effect of the finite size of the periodic pipe domain on large flow structures as a function of Reτ and to assess a minimum ? required for relevant turbulent scales to be captured and a minimum Reτ for very large-scale motions (VLSM) to be analysed. Analysing one-point statistics revealed that the mean velocity profile is invariant for ?. The wall-normal location at which deviations occur in shorter domains changes strongly with increasing Reτ from the near-wall region to the outer layer, where VLSM are believed to live. The root mean square velocity profiles exhibit domain length dependencies for pipes shorter than 14R and 7R, depending on Reτ. For all Reτ, the higher-order statistical moments show only weak dependencies, and only for the shortest domain considered here. However, the analysis of one- and two-dimensional pre-multiplied energy spectra revealed that even for larger ?, not all physically relevant scales are fully captured, even though the aforementioned statistics are in good agreement with the literature. We found ? to be sufficiently large to capture VLSM-relevant turbulent scales in the considered range of Reτ based on our definition of an integral energy threshold of 10%. The requirement to capture at least 1/10 of the global maximum energy level is justified by a 14% increase in the streamwise turbulence intensity in the outer region between Reτ = 720 and 1500, which can be related to VLSM-relevant length scales. Based on this scaling anomaly, we found Reτ⪆1500 to be a necessary minimum requirement to investigate VLSM-related effects in pipe flow, even though the streamwise energy spectra do not yet indicate sufficient scale separation between the most energetic and the very long motions.

  5. The use of inexpensive computer-based scanning survey technology to perform medical practice satisfaction surveys.

    PubMed

    Shumaker, L; Fetterolf, D E; Suhrie, J

    1998-01-01

    The recent availability of inexpensive document scanners and optical character recognition technology has created the ability to process surveys in large numbers with a minimum of operator time. Programs, which allow computer entry of such scanned questionnaire results directly into PC based relational databases, have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys and a variety of other evidence gathering tools have been deployed.

  6. A FORTRAN program for determining aircraft stability and control derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1975-01-01

    A digital computer program written in FORTRAN IV for the estimation of aircraft stability and control derivatives is presented. The program uses a maximum likelihood estimation method, and two associated programs for routine, related data handling are also included. The three programs form a package that can be used by relatively inexperienced personnel to process large amounts of data with a minimum of manpower. This package was used to successfully analyze 1500 maneuvers on 20 aircraft, and is designed to be used without modification on as many types of computers as feasible. Program listings and sample check cases are included.

  7. Implicit solvers for unstructured meshes

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Mavriplis, Dimitri J.

    1991-01-01

    Implicit methods were developed and tested for unstructured mesh computations. The approximate system which arises from the Newton linearization of the nonlinear evolution operator is solved by using the preconditioned GMRES (Generalized Minimum Residual) technique. Three different preconditioners were studied, namely, the incomplete LU factorization (ILU), block diagonal factorization, and the symmetric successive over relaxation (SSOR). The preconditioners were optimized to have good vectorization properties. SSOR and ILU were also studied as iterative schemes. The various methods are compared over a wide range of problems. Ordering of the unknowns, which affects the convergence of these sparse matrix iterative methods, is also studied. Results are presented for inviscid and turbulent viscous calculations on single and multielement airfoil configurations using globally and adaptively generated meshes.
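
    For readers unfamiliar with the preconditioned GMRES idea, here is a small self-contained Python/SciPy sketch (unrelated to the unstructured-mesh solver itself) showing an ILU-preconditioned GMRES solve of a sparse linear system; the matrix is an arbitrary convection-diffusion-like example, not one of the paper's airfoil cases.

        import numpy as np
        import scipy.sparse as sp
        import scipy.sparse.linalg as spla

        def make_test_matrix(n=50):
            """Assemble a nonsymmetric sparse matrix (1-D convection-diffusion stencil)."""
            main = 2.0 * np.ones(n)
            lower = -1.2 * np.ones(n - 1)   # asymmetry mimics a convective term
            upper = -0.8 * np.ones(n - 1)
            return sp.diags([lower, main, upper], offsets=[-1, 0, 1], format="csc")

        A = make_test_matrix()
        b = np.ones(A.shape[0])

        # Incomplete LU factorization used as a preconditioner M ~ A^-1
        ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)

        x, info = spla.gmres(A, b, M=M, restart=20, maxiter=200)
        print("GMRES converged" if info == 0 else f"GMRES info = {info}",
              "| residual norm:", np.linalg.norm(b - A @ x))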

  8. The evolution of computer monitoring of real time data during the Atlas Centaur launch countdown

    NASA Technical Reports Server (NTRS)

    Thomas, W. F.

    1981-01-01

    In the last decade, improvements in computer technology have provided new 'tools' for controlling and monitoring critical missile systems. In this connection, computers have gradually taken a large role in monitoring all flights and ground systems on the Atlas Centaur. The wide body Centaur which will be launched in the Space Shuttle Cargo Bay will use computers to an even greater extent. It is planned to use the wide body Centaur to boost the Galileo spacecraft toward Jupiter in 1985. The critical systems which must be monitored prior to liftoff are examined. Computers have now been programmed to monitor all critical parameters continuously. At this time, there are two separate computer systems used to monitor these parameters.

  9. Computational neuroanatomy using brain deformations: From brain parcellation to multivariate pattern analysis and machine learning.

    PubMed

    Davatzikos, Christos

    2016-10-01

    The past 20 years have seen a mushrooming growth of the field of computational neuroanatomy. Much of this work has been enabled by the development and refinement of powerful, high-dimensional image warping methods, which have enabled detailed brain parcellation, voxel-based morphometric analyses, and multivariate pattern analyses using machine learning approaches. The evolution of these 3 types of analyses over the years has overcome many challenges. We present the evolution of our work in these 3 directions, which largely follows the evolution of this field. We discuss the progression from single-atlas, single-registration brain parcellation work to current ensemble-based parcellation; from relatively basic mass-univariate t-tests to optimized regional pattern analyses combining deformations and residuals; and from basic application of support vector machines to generative-discriminative formulations of multivariate pattern analyses, and to methods dealing with heterogeneity of neuroanatomical patterns. We conclude with discussion of some of the future directions and challenges. Copyright © 2016. Published by Elsevier B.V.

  10. Computational neuroanatomy using brain deformations: From brain parcellation to multivariate pattern analysis and machine learning

    PubMed Central

    Davatzikos, Christos

    2017-01-01

    The past 20 years have seen a mushrooming growth of the field of computational neuroanatomy. Much of this work has been enabled by the development and refinement of powerful, high-dimensional image warping methods, which have enabled detailed brain parcellation, voxel-based morphometric analyses, and multivariate pattern analyses using machine learning approaches. The evolution of these 3 types of analyses over the years has overcome many challenges. We present the evolution of our work in these 3 directions, which largely follows the evolution of this field. We discuss the progression from single-atlas, single-registration brain parcellation work to current ensemble-based parcellation; from relatively basic mass-univariate t-tests to optimized regional pattern analyses combining deformations and residuals; and from basic application of support vector machines to generative-discriminative formulations of multivariate pattern analyses, and to methods dealing with heterogeneity of neuroanatomical patterns. We conclude with discussion of some of the future directions and challenges. PMID:27514582

  11. Minimum Conflict Mainstreaming.

    ERIC Educational Resources Information Center

    Awen, Ed; And Others

    Computer technology is discussed as a tool for facilitating the implementation of the mainstreaming process. Minimum conflict mainstreaming/merging (MCM) is defined as an approach which utilizes computer technology to circumvent such structural obstacles to mainstreaming as transportation scheduling, screening and assignment of students, testing,…

  12. The structure of common-envelope remnants

    NASA Astrophysics Data System (ADS)

    Hall, Philip D.

    2015-05-01

    We investigate the structure and evolution of the remnants of common-envelope evolution in binary star systems. In a common-envelope phase, two stars become engulfed in a gaseous envelope and, under the influence of drag forces, spiral to smaller separations. They may merge to form a single star or the envelope may be ejected to leave the stars in a shorter period orbit. This process explains the short orbital periods of many observed binary systems, such as cataclysmic variables and low-mass X-ray binary systems. Despite the importance of these systems, and of common-envelope evolution to their formation, it remains poorly understood. Specifically, we are unable to confidently predict the outcome of a common-envelope phase from the properties at its onset. After presenting a review of work on stellar evolution, binary systems, common-envelope evolution and the computer programs used, we describe the results of three computational projects on common-envelope evolution. Our work specifically relates to the methods and prescriptions which are used for predicting the outcome. We use the Cambridge stellar-evolution code STARS to produce detailed models of the structure and evolution of remnants of common-envelope evolution. We compare different assumptions about the uncertain end-of-common envelope structure and envelope mass of remnants which successfully eject their common envelopes. In the first project, we use detailed remnant models to investigate whether planetary nebulae are predicted after common-envelope phases initiated by low-mass red giants. We focus on the requirement that a remnant evolves rapidly enough to photoionize the nebula and compare the predictions for different ideas about the structure at the end of a common-envelope phase. We find that planetary nebulae are possible for some prescriptions for the end-of-common envelope structure. In our second contribution, we compute a large set of single-star models and fit new formulae to the core radii of evolved stars. These formulae can be used to better compute the outcome of common-envelope evolution with rapid evolution codes. We find that the new formulae are necessary for accurate predictions of the properties of post-common envelope systems. Finally, we use detailed remnant models of massive stars to investigate whether hydrogen may be retained after a common-envelope phase to the point of core-collapse and so be observable in supernovae. We find that this is possible and thus common-envelope evolution may contribute to the formation of Type IIb supernovae.

  13. Principles of time evolution in classical physics

    NASA Astrophysics Data System (ADS)

    Güémez, J.; Fiolhais, M.

    2018-07-01

    We address principles of time evolution in classical mechanical/thermodynamical systems in translational and rotational motion, in three cases: when there is conservation of mechanical energy, when there is energy dissipation and when there is mechanical energy production. In the first case, the time derivative of the Hamiltonian vanishes. In the second one, when dissipative forces are present, the time evolution is governed by the minimum potential energy principle, or, equivalently, maximum increase of the entropy of the universe. Finally, in the third situation, when internal sources of work are available to the system, it evolves in time according to the principle of minimum Gibbs function. We apply the Lagrangian formulation to the systems, dealing with the non-conservative forces using restriction functions such as the Rayleigh dissipative function.
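
    As a reminder of the standard formalism invoked above (a textbook relation, not a result derived in this paper), the Lagrange equations extended with a Rayleigh dissipation function F take the form

        \frac{d}{dt}\left(\frac{\partial L}{\partial \dot{q}_i}\right)
          - \frac{\partial L}{\partial q_i}
          = -\frac{\partial F}{\partial \dot{q}_i},
        \qquad
        F = \frac{1}{2}\sum_j c_j \dot{q}_j^{2},

    so that, when the dissipative forces are the only non-conservative ones, the mechanical energy decreases at the rate dE/dt = -2F.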

  14. Slowly switching between environments facilitates reverse evolution in small populations

    NASA Astrophysics Data System (ADS)

    Tan, Longzhi; Gore, Jeff

    2011-03-01

    The rate at which a physical process occurs usually changes the behavior of a system. In thermodynamics, the reversibility of a process generally increases when it occurs at an infinitely slow rate. In biological evolution, adaptations to a new environment may be reversed by evolution in the ancestral environment. Such fluctuating environments are ubiquitous in nature, although how the rate of switching affects reverse evolution is unknown. Here we use a computational approach to quantify evolutionary reversibility as a function of the rate of switching between two environments. For small population sizes, which travel on landscapes as random walkers, we find that both genotypic and phenotypic reverse evolution increase at slow switching rates. However, slow switching of environments decreases evolutionary reversibility for a greedy walker, corresponding to large populations (extensive clonal interference). We conclude that the impact of the switching rate for biological evolution is more complicated than other common physical processes, and that a quantitative approach may yield significant insight into reverse evolution.
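
    The distinction between a random adaptive walker (small populations) and a greedy walker (large populations, strong clonal interference) can be made concrete with a toy Python simulation; the bit-string genotypes, random landscapes, and switching period below are illustrative assumptions, not the model actually used in this work.

        import numpy as np

        L = 8                      # genotype length (bits); 2**L genotypes
        rng = np.random.default_rng(0)
        # Two environments, each assigning a random fitness to every genotype.
        fitness = {env: rng.random(2 ** L) for env in (0, 1)}

        def neighbors(g):
            """All genotypes one mutation (bit flip) away from g."""
            return [g ^ (1 << k) for k in range(L)]

        def step(g, env, greedy):
            """One adaptive step: the greedy walker takes the best neighbor, the random
            walker tries a random neighbor and keeps it if fitness does not decrease."""
            f = fitness[env]
            if greedy:
                best = max(neighbors(g), key=lambda h: f[h])
                return best if f[best] > f[g] else g
            h = int(rng.choice(neighbors(g)))
            return h if f[h] >= f[g] else g

        def reverse_fraction(switch_period, greedy, n_runs=200):
            """Fraction of runs that return to the adapted ancestral genotype after
            evolving in a new environment and then back in the ancestral one."""
            reverted = 0
            for _ in range(n_runs):
                g = int(rng.integers(2 ** L))
                for _ in range(50):                 # adapt in the ancestral environment 0
                    g = step(g, 0, greedy)
                ancestral = g
                for _ in range(switch_period):      # adapt to the new environment 1
                    g = step(g, 1, greedy)
                for _ in range(switch_period):      # evolve back in the ancestral environment
                    g = step(g, 0, greedy)
                reverted += (g == ancestral)
            return reverted / n_runs

        for period in (2, 20):
            print(f"switch period {period:2d}: "
                  f"random walker reverts {reverse_fraction(period, greedy=False):.2f}, "
                  f"greedy walker reverts {reverse_fraction(period, greedy=True):.2f}")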

  15. The Earth System Model

    NASA Technical Reports Server (NTRS)

    Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol

    2003-01-01

    The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from component models - atmosphere, ocean, ice, land, chemistry, solid earth, etc. - merged together through a coupling program that is responsible for the exchange of data between the components. Climate models and future Earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in OSSE experiments for evaluation. The computing and storage requirements for the ESM appear to be daunting. However, the theoretical computing capability of the Japanese Earth Simulator is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.

  16. Tetrahedron Formation Control

    NASA Technical Reports Server (NTRS)

    Guzman, Jose J.

    2003-01-01

    Spacecraft flying in tetrahedron formations are excellent instrument platforms for electromagnetic and plasma studies. A minimum of four spacecraft - to establish a volume - is required to study some of the key regions of a planetary magnetic field. The usefulness of the measurements recorded is strongly affected by the tetrahedron orbital evolution. This paper considers the preliminary development of a general optimization procedure for tetrahedron formation control. The maneuvers are assumed to be impulsive and a multi-stage optimization method is employed. The stages include targeting to a fixed tetrahedron orientation, rotating and translating the tetrahedron, and/or varying the initial and final times. The number of impulsive maneuvers can also be varied. As the impulse locations and times change, new arcs are computed using a differential corrections scheme that varies the impulse magnitudes and directions. The result is a continuous trajectory with velocity discontinuities. The velocity discontinuities are then used to formulate the cost function. Direct optimization techniques are employed. The procedure is applied to the Magnetospheric Multiscale Mission (MMS) to compute preliminary formation control fuel requirements.

  17. On the Relevancy of Efficient, Integrated Computer and Network Monitoring in HEP Distributed Online Environment

    NASA Astrophysics Data System (ADS)

    Carvalho, D.; Gavillet, Ph.; Delgado, V.; Albert, J. N.; Bellas, N.; Javello, J.; Miere, Y.; Ruffinoni, D.; Smith, G.

    Large scientific equipment is controlled by computer systems whose complexity is growing, driven on the one hand by the volume and variety of the information, its distributed nature, and the sophistication of its treatment, and on the other hand by the fast evolution of the computer and network market. Such systems are generically called Large-Scale Distributed Data Intensive Information Systems, or Distributed Computer Control Systems (DCCS) for those dealing more with real-time control. Taking advantage of (or forced by) the distributed architecture, the tasks are more and more often implemented as client-server applications. In this framework, the monitoring of the computer nodes, the communications network and the applications becomes of primary importance for ensuring the safe running and guaranteed performance of the system. With the future generation of HEP experiments, such as those at the LHC, in view, it is proposed to integrate the various functions of DCCS monitoring into one general-purpose multi-layer system.

  18. Computational modeling of bedform evolution in rivers with implications for predictions of flood stage and bed evolution

    USGS Publications Warehouse

    Nelson, Jonathan M.; Shimizu, Yasuyuki; Giri, Sanjay; McDonald, Richard R.

    2010-01-01

    Uncertainties in flood stage prediction and bed evolution in rivers are frequently associated with the evolution of bedforms over a hydrograph. For the case of flood prediction, the evolution of the bedforms may alter the effective bed roughness, so predictions of stage and velocity based on assuming bedforms retain the same size and shape over a hydrograph will be incorrect. These same effects will produce errors in the prediction of the sediment transport and bed evolution, but in this latter case the errors are typically larger, as even small errors in the prediction of bedform form drag can produce very large errors in predicting the rates of sediment motion and the associated erosion and deposition. In situations where flows change slowly, it may be possible to use empirical results that relate bedform morphology to roughness and effective form drag to avoid these errors; but in many cases where the bedforms evolve rapidly and are in disequilibrium with the instantaneous flow, these empirical methods cannot be accurately applied. Over the past few years, computational models for bedform development, migration, and adjustment to varying flows have been developed and tested with a variety of laboratory and field data. These models, which are based on detailed multidimensional flow modeling incorporating large eddy simulation, appear to be capable of predicting bedform dimensions during steady flows as well as their time dependence during discharge variations. In the work presented here, models of this type are used to investigate the impacts of bedforms on stage and bed evolution in rivers during flood hydrographs. The method is shown to reproduce hysteresis in rating curves as well as other more subtle effects in the shape of flood waves. Techniques for combining the bedform evolution models with larger-scale models for river reach flow, sediment transport, and bed evolution are described and used to show the importance of including dynamic bedform effects in river modeling. In example calculations for a flood on the Kootenai River, errors of almost 1 m in predicted stage and errors of about a factor of two in the predicted maximum depths of erosion can be attributed to bedform evolution. Thus, treating bedforms explicitly in flood and bed evolution models can decrease uncertainty and increase the accuracy of predictions.

  19. Astrophysical cosmology

    NASA Astrophysics Data System (ADS)

    Bardeen, J. M.

    The last several years have seen a tremendous ferment of activity in astrophysical cosmology. Much of the theoretical impetus has come from particle physics theories of the early universe and candidates for dark matter, but what promise to be even more significant are improved direct observations of high z galaxies and intergalactic matter, deeper and more comprehensive redshift surveys, and the increasing power of computer simulations of the dynamical evolution of large scale structure. Upper limits on the anisotropy of the microwave background radiation are gradually getting tighter and constraining more severely theoretical scenarios for the evolution of the universe.

  20. Current-Sheet Formation and Reconnection at a Magnetic X Line in Particle-in-Cell Simulations

    NASA Technical Reports Server (NTRS)

    Black, C.; Antiochos, S. K.; Hesse, M.; Karpen, J. T.; Kuznetsova, M. M.; Zenitani, S.

    2011-01-01

    The integration of kinetic effects into macroscopic numerical models is currently of great interest to the heliophysics community, particularly in the context of magnetic reconnection. Reconnection governs the large-scale energy release and topological rearrangement of magnetic fields in a wide variety of laboratory, heliophysical, and astrophysical systems. We are examining the formation and reconnection of current sheets in a simple, two-dimensional X-line configuration using high-resolution particle-in-cell (PIC) simulations. The initial minimum-energy, potential magnetic field is perturbed by excess thermal pressure introduced into the particle distribution function far from the X line. Subsequently, the relaxation of this added stress leads self-consistently to the development of a current sheet that reconnects for imposed stress of sufficient strength. We compare the time-dependent evolution and final state of our PIC simulations with macroscopic magnetohydrodynamic simulations assuming both uniform and localized electrical resistivities (C. R. DeVore et al., this meeting), as well as with force-free magnetic-field equilibria in which the amount of reconnection across the X line can be constrained to be zero (ideal evolution) or optimal (minimum final magnetic energy). We will discuss implications of our results for understanding magnetic-reconnection onset and cessation at kinetic scales in dynamically formed current sheets, such as those occurring in the solar corona and terrestrial magnetotail.

  1. Theoretical studies of Resonance Enhanced Stimulated Raman Scattering (RESRS) of frequency doubled Alexandrite laser wavelength in cesium vapor

    NASA Technical Reports Server (NTRS)

    Lawandy, Nabil M.

    1987-01-01

    The third phase of research will focus on the propagation and energy extraction of the pump and SERS beams in a variety of configurations including oscillator structures. In order to address these questions, a numerical code capable of allowing for saturation and full transverse beam evolution is required. The method proposed is based on a discretized propagation energy extraction model which uses a Kirchhoff integral propagator coupled to the three-level Raman model already developed. The model will have the resolution required by diffraction limits and will use the previous density matrix results in the adiabatic following limit. Owing to its large computational requirements, such a code must be implemented on a vector array processor. One code on the Cyber is being tested by using previously understood two-level laser models as guidelines for interpreting the results. Two tests were implemented: the evolution of modes in a passive resonator and the evolution of a stable state of the adiabatically eliminated laser equations. These results show mode shapes and diffraction losses for the first case and relaxation oscillations for the second one. Finally, in order to characterize the computing methodology used to exploit the Cyber's computational speed, the time required to run both of the computations previously mentioned on the Cyber and on a VAX 730 must be measured. Also included is a short description of the current laser model (CAVITY.FOR) and a flow chart of the test computations.

  2. Evolution of the orbit of asteroid 4179 Toutatis over 11,550 years.

    NASA Astrophysics Data System (ADS)

    Zausaev, A. F.; Pushkarev, A. N.

    1994-05-01

    The Everhart method is used to study the evolution of the orbit of the asteroid 4179 Toutatis, a member of the Apollo group, over the time period 9300 B.C. to 2250 A.D. Minimum asteroid-Earth distances during the evolution are calculated. It is shown that the asteroid presents no danger to the Earth over the interval studied.

  3. Subscale Fast Cookoff Testing and Modeling for the Hazard Assessment of Large Rocket Motors

    DTIC Science & Technology

    2001-03-01

    [Excerpt from the report's front matter; no abstract was recovered. The listed tables and abbreviations reference heats-of-vaporization parameters for a two-liner phase transformation (complete liner sublimation and/or combined liner...), one- and two-dimensional (1-D, 2-D) models, and the ALE3D and ALEGRA 3-D Arbitrary-Lagrangian-Eulerian computer codes; a truncated passage describes modeling of the case-liner bond areas and the grain inner bore to explore the pre-ignition and ignition phases, as well as burning evolution in rocket motor fast cookoff.]

  4. The van Hove distribution function for Brownian hard spheres: Dynamical test particle theory and computer simulations for bulk dynamics

    NASA Astrophysics Data System (ADS)

    Hopkins, Paul; Fortini, Andrea; Archer, Andrew J.; Schmidt, Matthias

    2010-12-01

    We describe a test particle approach based on dynamical density functional theory (DDFT) for studying the correlated time evolution of the particles that constitute a fluid. Our theory provides a means of calculating the van Hove distribution function by treating its self and distinct parts as the two components of a binary fluid mixture, with the "self " component having only one particle, the "distinct" component consisting of all the other particles, and using DDFT to calculate the time evolution of the density profiles for the two components. We apply this approach to a bulk fluid of Brownian hard spheres and compare to results for the van Hove function and the intermediate scattering function from Brownian dynamics computer simulations. We find good agreement at low and intermediate densities using the very simple Ramakrishnan-Yussouff [Phys. Rev. B 19, 2775 (1979)] approximation for the excess free energy functional. Since the DDFT is based on the equilibrium Helmholtz free energy functional, we can probe a free energy landscape that underlies the dynamics. Within the mean-field approximation we find that as the particle density increases, this landscape develops a minimum, while an exact treatment of a model confined situation shows that for an ergodic fluid this landscape should be monotonic. We discuss possible implications for slow, glassy, and arrested dynamics at high densities.

  5. Computational Approaches to Viral Evolution and Rational Vaccine Design

    NASA Astrophysics Data System (ADS)

    Bhattacharya, Tanmoy

    2006-10-01

    Viral pandemics, including HIV, are a major health concern across the world. Experimental techniques available today have uncovered a great wealth of information about how these viruses infect, grow, and cause disease, as well as how our body attempts to defend itself against them. Nevertheless, due to the high variability and fast evolution of many of these viruses, the traditional method of developing vaccines by presenting a heuristically chosen strain to the body fails, and an effective intervention strategy still eludes us. A large amount of carefully curated genomic data on a number of these viruses is now available, often annotated with disease and immunological context. The availability of parallel computers has now made it possible to carry out a systematic analysis of this data within an evolutionary framework. I will describe, as an example, how computations on such data have allowed us to understand the origins and diversification of HIV, the causative agent of AIDS. On the practical side, computations on the same data are now being used to inform the choice or design of optimal vaccine strains.

  6. A Large number of fast cosmological simulations

    NASA Astrophysics Data System (ADS)

    Koda, Jun; Kazin, E.; Blake, C.

    2014-01-01

    Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulations are more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.

  7. Large-Scale Analysis Exploring Evolution of Catalytic Machineries and Mechanisms in Enzyme Superfamilies.

    PubMed

    Furnham, Nicholas; Dawson, Natalie L; Rahman, Syed A; Thornton, Janet M; Orengo, Christine A

    2016-01-29

    Enzymes, as biological catalysts, form the basis of all forms of life. How these proteins have evolved their functions remains a fundamental question in biology. Over 100 years of detailed biochemistry studies, combined with the large volumes of sequence and protein structural data now available, mean that we are able to perform large-scale analyses to address this question. Using a range of computational tools and resources, we have compiled information on all experimentally annotated changes in enzyme function within 379 structurally defined protein domain superfamilies, linking the changes observed in functions during evolution to changes in reaction chemistry. Many superfamilies show changes in function at some level, although one function often dominates one superfamily. We use quantitative measures of changes in reaction chemistry to reveal the various types of chemical changes occurring during evolution and to exemplify these by detailed examples. Additionally, we use structural information on the enzymes' active sites to examine how different superfamilies have changed their catalytic machinery during evolution. Some superfamilies have changed the reactions they perform without changing catalytic machinery. In others, large changes of enzyme function, in terms of both overall chemistry and substrate specificity, have been brought about by significant changes in catalytic machinery. Interestingly, in some superfamilies, relatives perform similar functions but with different catalytic machineries. This analysis highlights characteristics of functional evolution across a wide range of superfamilies, providing insights that will be useful in predicting the function of uncharacterised sequences and the design of new synthetic enzymes. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. 20 CFR 229.43 - When a divorced spouse can no longer be included in computing an annuity under the overall minimum.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... included in computing an annuity under the overall minimum. A divorced spouse's inclusion in the... spouse becomes entitled to a retirement or disability benefit under the Social Security Act based upon a...

  9. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    NASA Astrophysics Data System (ADS)

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely studied in many domains. These systems increasingly require volumetric data to be processed in real time, so performance is constrained by hardware resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real time, the triangle neighborhoods of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of the latter algorithm is that it computes the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method achieves a speedup factor of 5 compared to the sequential CPU implementation.
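
    The neighborhood (vicinity) computation itself is simple to state; the plain-Python sketch below (without the OpenCL parallelization that is the subject of the paper) finds, for each triangle of a mesh, the triangles sharing an edge with it. The tiny two-triangle mesh at the end is just a made-up example.

        from collections import defaultdict

        def triangle_neighbors(faces):
            """Return, for each triangle index, the list of triangles sharing an edge with it.

            faces: list of (i, j, k) vertex-index triples.
            """
            edge_to_faces = defaultdict(list)
            for fi, (a, b, c) in enumerate(faces):
                for u, v in ((a, b), (b, c), (c, a)):
                    edge_to_faces[frozenset((u, v))].append(fi)   # undirected edge key

            neighbors = [set() for _ in faces]
            for shared in edge_to_faces.values():
                for fi in shared:
                    neighbors[fi].update(f for f in shared if f != fi)
            return [sorted(n) for n in neighbors]

        if __name__ == "__main__":
            # Two triangles sharing the edge (1, 2).
            faces = [(0, 1, 2), (1, 3, 2)]
            print(triangle_neighbors(faces))   # [[1], [0]]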

  10. An Adaptive QSE-reduced Nuclear Reaction Network for Silicon Burning

    NASA Astrophysics Data System (ADS)

    Parete-Koon, Suzanne; Hix, William Raphael; Thielemann, Friedrich-Karl

    2010-02-01

    The nuclei of the "iron peak" are formed late in the evolution of massive stars and during supernovae. Silicon burning during these events is responsible for the production of a wide range of nuclei with atomic mass numbers from 28 to 64. The large number of nuclei involved makes accurate modeling of silicon burning computationally expensive. Examination of the physics of silicon burning reveals that the nuclear evolution is dominated by large groups of nuclei in mutual equilibrium. We present an improvement on our hybrid equilibrium-network scheme that takes advantage of this quasi-equilibrium (QSE) to reduce the number of independent variables calculated. Because the membership and number of these groups vary as the temperature, density and electron fraction change, achieving maximal efficiency requires dynamic adjustment of group number and membership. The resultant QSE-reduced network is up to 20 times faster than the full network it replaces without significant loss of accuracy. These reductions in computational cost and the number of species evolved make QSE-reduced networks well suited for inclusion within hydrodynamic simulations, particularly in multi-dimensional applications.

  11. Parallel Computational Protein Design.

    PubMed

    Zhou, Yichao; Donald, Bruce R; Zeng, Jianyang

    2017-01-01

    Computational structure-based protein design (CSPD) is an important problem in computational biology that aims to design or improve a prescribed protein function based on a protein structure template. It provides a practical tool for real-world protein engineering applications. A popular CSPD method that guarantees finding the global minimum energy conformation (GMEC) is to combine dead-end elimination (DEE) and A* tree search algorithms. However, in this framework, the A* search algorithm can run in exponential time in the worst case, which may become the computational bottleneck of a large-scale protein design process. To address this issue, we extend and add a new module to the OSPREY program previously developed in the Donald lab (Gainza et al., Methods Enzymol 523:87, 2013) to implement a GPU-based massively parallel A* algorithm for improving the protein design pipeline. By exploiting the modern GPU computational framework and optimizing the computation of the heuristic function for A* search, our new program, called gOSPREY, can provide speedups of up to four orders of magnitude in large protein design cases with a small memory overhead compared to the traditional A* search algorithm implementation, while still guaranteeing optimality. In addition, gOSPREY can be configured to run in a bounded-memory mode to tackle problems in which the conformation space is too large and the global optimal solution could not be computed previously. Furthermore, the GPU-based A* algorithm implemented in the gOSPREY program can be combined with state-of-the-art rotamer pruning algorithms such as iMinDEE (Gainza et al., PLoS Comput Biol 8:e1002335, 2012) and DEEPer (Hallen et al., Proteins 81:18-39, 2013) to also consider continuous backbone and side-chain flexibility.
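
    The A*-over-conformations idea can be illustrated independently of OSPREY/gOSPREY with a small Python sketch: positions are assigned rotamers one at a time, and an admissible lower bound (each unassigned position takes its best possible self plus pairwise energy) guides the search to the global minimum of a toy pairwise energy model. The energy tables below are random stand-ins, not real force-field terms.

        import heapq
        import itertools
        import numpy as np

        rng = np.random.default_rng(0)
        N_POS, N_ROT = 5, 4                                   # 5 design positions, 4 rotamers each
        E_self = rng.normal(size=(N_POS, N_ROT))              # one-body energies
        E_pair = rng.normal(size=(N_POS, N_ROT, N_POS, N_ROT))
        E_pair = (E_pair + E_pair.transpose(2, 3, 0, 1)) / 2  # symmetrize pairwise energies

        def total_energy(conf):
            e = sum(E_self[i, r] for i, r in enumerate(conf))
            e += sum(E_pair[i, conf[i], j, conf[j]]
                     for i in range(N_POS) for j in range(i + 1, N_POS))
            return e

        def lower_bound(partial):
            """Admissible bound: exact energy of the assigned prefix plus, for every
            unassigned position, the best-case self + pairwise contribution."""
            k = len(partial)
            e = sum(E_self[i, r] for i, r in enumerate(partial))
            e += sum(E_pair[i, partial[i], j, partial[j]]
                     for i in range(k) for j in range(i + 1, k))
            for i in range(k, N_POS):
                best = np.inf
                for r in range(N_ROT):
                    cand = E_self[i, r]
                    cand += sum(E_pair[i, r, j, partial[j]] for j in range(k))
                    cand += sum(E_pair[i, r, j, :].min() for j in range(i + 1, N_POS))
                    best = min(best, cand)
                e += best
            return e

        def astar_gmec():
            heap = [(lower_bound(()), ())]
            while heap:
                bound, partial = heapq.heappop(heap)
                if len(partial) == N_POS:
                    return partial, bound                     # first full leaf popped is the GMEC
                for r in range(N_ROT):
                    child = partial + (r,)
                    heapq.heappush(heap, (lower_bound(child), child))

        conf, e = astar_gmec()
        brute = min(itertools.product(range(N_ROT), repeat=N_POS), key=total_energy)
        print("A* GMEC:", conf, "energy", round(e, 4), "| brute force agrees:", conf == brute)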

  12. Post-Newtonian evolution of massive black hole triplets in galactic nuclei - III. A robust lower limit to the nHz stochastic background of gravitational waves

    NASA Astrophysics Data System (ADS)

    Bonetti, Matteo; Sesana, Alberto; Barausse, Enrico; Haardt, Francesco

    2018-06-01

    Inspiraling massive black hole binaries (MBHBs) forming in the aftermath of galaxy mergers are expected to be the loudest gravitational-wave (GW) sources relevant for pulsar-timing arrays (PTAs) at nHz frequencies. The incoherent overlap of signals from a cosmic population of MBHBs gives rise to a stochastic GW background (GWB) with characteristic strain around h_c ~ 10^-15 at a reference frequency of 1 yr^-1, although uncertainties around this value are large. Current PTAs are piercing into the GW amplitude range predicted by MBHB-population models, but no detection has been reported so far. To assess the future success prospects of PTA experiments, it is therefore important to estimate the minimum GWB level consistent with our current understanding of the formation and evolution of galaxies and massive black holes (MBHs). To this purpose, we couple a semi-analytic model of galaxy evolution and an extensive study of the statistical outcome of triple MBH interactions. We show that even in the most pessimistic scenario where all MBHBs stall before entering the GW-dominated regime, triple interactions resulting from subsequent galaxy mergers inevitably drive a considerable fraction of the MBHB population to coalescence. At frequencies relevant for PTA, the resulting GWB is only a factor of 2-3 suppressed compared to a fiducial model where binaries are allowed to merge over Gyr time-scales. Coupled with current estimates of the expected GWB amplitude range, our findings suggest that the minimum GWB from cosmic MBHBs is unlikely to be lower than h_c ~ 10^-16 (at f = 1 yr^-1), well within the expected sensitivity of projected PTAs based on future observations with FAST, MeerKAT, and SKA.

  13. A model for evolution of overlapping community networks

    NASA Astrophysics Data System (ADS)

    Karan, Rituraj; Biswal, Bibhu

    2017-05-01

    A model is proposed for the evolution of network topology in social networks with overlapping community structure. Starting from an initial community structure that is defined in terms of group affiliations, the model postulates that the subsequent growth and loss of connections is similar to Hebbian learning and unlearning in the brain and is governed by two dominant factors: the strength and frequency of interaction between the members, and the degree of overlap between different communities. The temporal evolution from an initial community structure to the current network topology can be described based on these two parameters. It is possible to quantify the growth that has occurred so far and to predict the final stationary state to which the network is likely to evolve. Applications in epidemiology, or to the spread of an email virus in a computer network and the identification of specific target nodes to control it, are envisaged. When collecting and analyzing large-scale, time-resolved data on social groups and communities, one faces a most basic question: how do communities evolve in time? This work aims to address this issue by developing a mathematical model for the evolution of community networks and studying it through computer simulation.

  14. Ways to improve your correlation functions

    NASA Technical Reports Server (NTRS)

    Hamilton, A. J. S.

    1993-01-01

    This paper describes a number of ways to improve on the standard method for measuring the two-point correlation function of large scale structure in the Universe. Issues addressed are: (1) the problem of the mean density, and how to solve it; (2) how to estimate the uncertainty in a measured correlation function; (3) minimum variance pair weighting; (4) unbiased estimation of the selection function when magnitudes are discrete; and (5) analytic computation of angular integrals in background pair counts.
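
    For orientation, here is a small Python sketch of the basic pair-count machinery that such improvements build on, using the widely used Landy-Szalay estimator with uniform pair weights. It is shown only as a generic baseline; it is not the specific estimator or weighting advocated in the paper, and the uniform random catalog stands in for a real selection function.

        import numpy as np
        from scipy.spatial.distance import pdist, cdist

        def xi_landy_szalay(data, randoms, bins):
            """Two-point correlation function xi(r) from normalized pair counts:
            xi = (DD - 2*DR + RR) / RR, with DD, DR, RR normalized by the number of pairs."""
            nd, nr = len(data), len(randoms)
            dd, _ = np.histogram(pdist(data), bins=bins)
            rr, _ = np.histogram(pdist(randoms), bins=bins)
            dr, _ = np.histogram(cdist(data, randoms).ravel(), bins=bins)
            dd = dd / (nd * (nd - 1) / 2)
            rr = rr / (nr * (nr - 1) / 2)
            dr = dr / (nd * nr)
            return (dd - 2 * dr + rr) / rr

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            box = 100.0
            data = rng.uniform(0, box, size=(2000, 3))      # unclustered mock "galaxies"
            randoms = rng.uniform(0, box, size=(4000, 3))   # random catalog tracing the same volume
            bins = np.linspace(1.0, 20.0, 11)
            xi = xi_landy_szalay(data, randoms, bins)
            print(np.round(xi, 3))   # consistent with zero for an unclustered sample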

  15. Online mass storage system detailed requirements document

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The requirements for an online high density magnetic tape data storage system that can be implemented in a multipurpose, multihost environment are set forth. The objective of the mass storage system is to provide a facility for the compact storage of large quantities of data and to make this data accessible to computer systems with minimum operator handling. The results of a market survey and analysis of candidate vendors who presently market high density tape data storage systems are included.

  16. Large-scale structure non-Gaussianities with modal methods

    NASA Astrophysics Data System (ADS)

    Schmittfull, Marcel

    2016-10-01

    Relying on a separable modal expansion of the bispectrum, the implementation of a fast estimator for the full bispectrum of a 3d particle distribution is presented. The computational cost of accurate bispectrum estimation is negligible relative to simulation evolution, so the bispectrum can be used as a standard diagnostic whenever the power spectrum is evaluated. As an application, the time evolution of gravitational and primordial dark matter bispectra was measured in a large suite of N-body simulations. The bispectrum shape changes characteristically when the cosmic web becomes dominated by filaments and halos, therefore providing a quantitative probe of 3d structure formation. Our measured bispectra are determined by ~ 50 coefficients, which can be used as fitting formulae in the nonlinear regime and for non-Gaussian initial conditions. We also compare the measured bispectra with predictions from the Effective Field Theory of Large Scale Structures (EFTofLSS).

  17. Recent Evolution of the Mont Saint-Michel Bay as seen by ALOS AVNIR-2 Data (ADEN AO 3643)

    NASA Astrophysics Data System (ADS)

    Deroin, Jean-Paul; Bilaudeau, Clelia; Deffontaines, Benoit

    2008-11-01

    The ALOS AVNIR-2 scene acquired on October 24, 2007 has been used for drawing a new map of the Mont Saint-Michel Bay. This area is characterised by a large dry-fallen tidal flat, one of the largest in the world. The tidal records indicate that the ALOS datatake was acquired in favorable conditions, the elevation of the sea at 2.56 m being very close to the theoretical minimum value (about 2.30 m). In these conditions, the largest tidal flat observed by a sun-synchronous satellite on the Mont Saint-Michel Bay is exposed.

  18. Optimizing Teleportation Cost in Distributed Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Zomorodi-Moghadam, Mariam; Houshmand, Mahboobeh; Houshmand, Monireh

    2018-03-01

    The presented work provides a procedure for optimizing the communication cost of a distributed quantum circuit (DQC) in terms of the number of qubit teleportations. Because of technology limitations which do not allow large quantum computers to work as a single processing element, distributed quantum computation is an appropriate solution to overcome this difficulty. Previous studies have applied ad-hoc solutions to distribute a quantum system for special cases and applications. In this study, a general approach is proposed to optimize the number of teleportations for a DQC consisting of two spatially separated and long-distance quantum subsystems. To this end, different configurations of locations for executing gates whose qubits are in distinct subsystems are considered and for each of these configurations, the proposed algorithm is run to find the minimum number of required teleportations. Finally, the configuration which leads to the minimum number of teleportations is reported. The proposed method can be used as an automated procedure to find the configuration with the optimal communication cost for the DQC. This cost can be used as a basic measure of the communication cost for future works in the distributed quantum circuits.
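
    A toy Python sketch of the optimization being described: for a circuit split across two partitions, every possible choice of execution side for the non-local two-qubit gates is enumerated, qubit locations are tracked, and the assignment needing the fewest teleportations is kept. The gate list, the two-partition layout, and the rule that a teleported qubit stays at its new location are all simplifying assumptions for illustration, not the paper's procedure.

        from itertools import product

        def min_teleportations(gates, home):
            """gates: list of (q1, q2) two-qubit gates in program order.
            home: dict mapping each qubit to its initial partition, 'A' or 'B'.
            Returns the minimum number of teleportations and the chosen execution sides."""
            nonlocal_idx = [i for i, (a, b) in enumerate(gates) if home[a] != home[b]]
            best = (float("inf"), None)
            for choice in product("AB", repeat=len(nonlocal_idx)):
                side = dict(zip(nonlocal_idx, choice))
                loc = dict(home)            # current location of every qubit
                cost = 0
                for i, (a, b) in enumerate(gates):
                    if loc[a] == loc[b]:
                        continue            # both operands already on the same side
                    target = side.get(i, loc[a])
                    for q in (a, b):
                        if loc[q] != target:
                            cost += 1       # teleport q to the execution side; it stays there
                            loc[q] = target
                if cost < best[0]:
                    best = (cost, choice)
            return best

        if __name__ == "__main__":
            home = {"q0": "A", "q1": "A", "q2": "B", "q3": "B"}
            gates = [("q0", "q2"), ("q0", "q1"), ("q0", "q2"), ("q2", "q3")]
            cost, choice = min_teleportations(gates, home)
            print("minimum teleportations:", cost, "| execution sides for non-local gates:", choice)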

  19. French Meteor Network for High Precision Orbits of Meteoroids

    NASA Technical Reports Server (NTRS)

    Atreya, P.; Vaubaillon, J.; Colas, F.; Bouley, S.; Gaillard, B.; Sauli, I.; Kwon, M. K.

    2011-01-01

    There is a lack of precise meteoroid orbits from video observations, as most meteor stations use off-the-shelf CCD cameras. Few meteoroid orbits with precise semi-major axes are available from the film photographic method. Precise orbits are necessary to compute the dust flux in the Earth's vicinity and to estimate the ejection time of the meteoroids accurately by comparing them with theoretical evolution models. We investigate the use of large CCD sensors to observe multi-station meteors and to compute precise orbits of these meteoroids. The ideal spatial and temporal resolution needed to reach an accuracy similar to that of photographic plates is discussed. Various problems arising from the use of large CCDs, such as increasing the spatial and temporal resolution at the same time and computational problems in finding the meteor position, are illustrated.

  20. Current algorithmic solutions for peptide-based proteomics data generation and identification.

    PubMed

    Hoopmann, Michael R; Moritz, Robert L

    2013-02-01

    Peptide-based proteomic data sets are ever increasing in size and complexity. These data sets provide computational challenges when attempting to quickly analyze spectra and obtain correct protein identifications. Database search and de novo algorithms must consider high-resolution MS/MS spectra and alternative fragmentation methods. Protein inference is a tricky problem when analyzing large data sets of degenerate peptide identifications. Combining multiple algorithms for improved peptide identification puts significant strain on computational systems when investigating large data sets. This review highlights some of the recent developments in peptide and protein identification algorithms for analyzing shotgun mass spectrometry data when encountering the aforementioned hurdles. Also explored are the roles that analytical pipelines, public spectral libraries, and cloud computing play in the evolution of peptide-based proteomics. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Comparative mapping and rapid karyotypic evolution in the genus helianthus.

    PubMed Central

    Burke, John M; Lai, Zhao; Salmaso, Marzia; Nakazato, Takuya; Tang, Shunxue; Heesacker, Adam; Knapp, Steven J; Rieseberg, Loren H

    2004-01-01

    Comparative genetic linkage maps provide a powerful tool for the study of karyotypic evolution. We constructed a joint SSR/RAPD genetic linkage map of the Helianthus petiolaris genome and used it, along with an integrated SSR genetic linkage map derived from four independent H. annuus mapping populations, to examine the evolution of genome structure between these two annual sunflower species. The results of this work indicate the presence of 27 colinear segments resulting from a minimum of eight translocations and three inversions. These 11 rearrangements are more than previously suspected on the basis of either cytological or genetic map-based analyses. Taken together, these rearrangements required a minimum of 20 chromosomal breakages/fusions. On the basis of estimates of the time since divergence of these two species (750,000-1,000,000 years), this translates into an estimated rate of 5.5-7.3 chromosomal rearrangements per million years of evolution, the highest rate reported for any taxonomic group to date. PMID:15166168

  2. Satellite broadcasting system study

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A study to develop a system model and computer program representative of broadcasting satellite systems employing community-type receiving terminals is reported. The program provides a user-oriented tool for evaluating performance/cost tradeoffs, synthesizing minimum cost systems for a given set of system requirements, and performing sensitivity analyses to identify critical parameters and technology. The performance/costing philosophy and what is meant by a minimum cost system are shown graphically. Topics discussed include the main line control program, the ground segment model, the space segment model, cost models, and launch vehicle selection. Several examples of minimum cost systems resulting from the computer program are presented. A listing of the computer program is also included.

  3. Kinetics and dynamics of near-resonant vibrational energy transfer in gas ensembles of atmospheric interest

    NASA Astrophysics Data System (ADS)

    McCaffery, Anthony J.

    2018-03-01

    This study of near-resonant, vibration-vibration (V-V) gas-phase energy transfer in diatomic molecules uses the theoretical/computational method of Marsh & McCaffery (2002, J. Chem. Phys. 117, 503; doi:10.1063/1.1489998). The method uses the angular momentum (AM) theoretical formalism to compute quantum-state populations within the component molecules of large, non-equilibrium gas mixtures as the component species proceed to equilibration. Computed quantum-state populations are displayed in a number of formats that reveal the detailed mechanism of the near-resonant V-V process. Further, the evolution of quantum-state populations for each species present may be followed as the number of collision cycles increases, displaying the kinetics of evolution for each quantum state of the ensemble's molecules. These features are illustrated for ensembles containing vibrationally excited N2 in H2, O2 and N2 initially in their ground states. This article is part of the theme issue 'Modern theoretical chemistry'.

  4. Towards an Autonomic Cluster Management System (ACMS) with Reflex Autonomicity

    NASA Technical Reports Server (NTRS)

    Truszkowski, Walt; Hinchey, Mike; Sterritt, Roy

    2005-01-01

    Cluster computing, whereby a large number of simple processors or nodes are combined together to apparently function as a single powerful computer, has emerged as a research area in its own right. The approach offers a relatively inexpensive means of providing a fault-tolerant environment and achieving significant computational capabilities for high-performance computing applications. However, the task of manually managing and configuring a cluster quickly becomes daunting as the cluster grows in size. Autonomic computing, with its vision to provide self-management, can potentially solve many of the problems inherent in cluster management. We describe the development of a prototype Autonomic Cluster Management System (ACMS) that exploits autonomic properties in automating cluster management and its evolution to include reflex reactions via pulse monitoring.

  5. A time series analysis performed on a 25-year period of kidney transplantation activity in a single center.

    PubMed

    Santori, G; Fontana, I; Bertocchi, M; Gasloli, G; Valente, U

    2010-05-01

    Following the example of many Western countries, where a "minimum volume rule" policy has been adopted as a quality parameter for complex surgical procedures, the Italian National Transplant Centre set the minimum number of kidney transplantation procedures/y at 30/center. The number of procedures performed in a single center over a long period may be treated as a time series to evaluate trends, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1983, and December 31, 2007, we performed 1376 procedures in adult or pediatric recipients from living or cadaveric donors. The greatest numbers of cases/y were performed in 1998 (n = 86) followed by 2004 (n = 82), 1996 (n = 75), and 2003 (n = 73). A time series analysis performed using the R statistical software (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an overall increasing trend after exponential smoothing as well as after seasonal decomposition. However, starting from 2005, we observed a decreasing trend in the series. Applying Holt-Winters exponential smoothing to the period 1983 to 2007 suggested 58 procedures for 2008, while in that year there were 52. The time series approach may be helpful to establish a minimum volume/y at a single-center level. Copyright (c) 2010 Elsevier Inc. All rights reserved.
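
    As a hedged illustration of the forecasting step described above, the Python sketch below applies Holt-type exponential smoothing to a purely hypothetical series of yearly procedure counts and projects the next year's volume; the library, the counts and the additive-trend model choice are illustrative assumptions, not the study's actual data or R workflow.

      import numpy as np
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      # Hypothetical yearly transplant counts (illustrative only, not the paper's data).
      counts = np.array([30, 34, 41, 45, 52, 60, 75, 86, 73, 82, 70, 65, 58], dtype=float)

      # Additive-trend (Holt) exponential smoothing; annual data carries no seasonal term.
      model = ExponentialSmoothing(counts, trend="add").fit()
      print(model.forecast(1))  # expected number of procedures for the following year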

  6. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

    Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve it can increase exponentially with the problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem, including the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto optimal solutions that give the maximum possible flow with minimum cost. The paper also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Computer simulations on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
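
    As a hedged illustration of the Adaptive Weight Approach mentioned above, the Python sketch below rescales the two objectives (flow to be maximized, cost to be minimized) by the current population's extremes so that the weighted fitness pulls the search toward the positive ideal point; the function name and the sample objective values are illustrative assumptions, not the authors' implementation.

      def adaptive_weight_fitness(population_objs):
          """population_objs: list of (flow, cost) pairs for the current GA population."""
          flows = [f for f, _ in population_objs]
          costs = [c for _, c in population_objs]
          f_max, f_min = max(flows), min(flows)
          c_max, c_min = max(costs), min(costs)
          # Adaptive weights from the current population's objective ranges;
          # guard against a zero range when all individuals coincide.
          w_flow = 1.0 / (f_max - f_min) if f_max > f_min else 0.0
          w_cost = 1.0 / (c_max - c_min) if c_max > c_min else 0.0
          # Reward flow above the population minimum and cost below the population maximum.
          return [w_flow * (f - f_min) + w_cost * (c_max - c) for f, c in population_objs]

      print(adaptive_weight_fitness([(10, 40), (8, 25), (12, 60)]))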

  7. Arterial cannula shape optimization by means of the rotational firefly algorithm

    NASA Astrophysics Data System (ADS)

    Tesch, K.; Kaczorowska, K.

    2016-03-01

    This article presents global optimization results of arterial cannula shapes by means of the newly modified firefly algorithm. The search for the optimal arterial cannula shape is necessary in order to minimize losses and prepare the flow that leaves the circulatory support system of a ventricle (i.e. blood pump) before it reaches the heart. A modification of the standard firefly algorithm, the so-called rotational firefly algorithm, is introduced. It is shown that the rotational firefly algorithm allows for better exploration of search spaces, which results in faster convergence and better solutions in comparison with its standard version. This is particularly pronounced for smaller population sizes. Furthermore, it maintains greater diversity of populations for a longer time. A small population size and a low number of iterations are necessary to keep the computational cost of the objective function to a minimum, since it comes from numerical solution of the nonlinear partial differential equations. Moreover, both versions of the firefly algorithm are compared to the state of the art, namely the differential evolution and covariance matrix adaptation evolution strategies.
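
    For orientation, the Python sketch below implements one move of the standard firefly algorithm (attraction toward brighter fireflies with distance-decaying attractiveness plus a small random step); it does not reproduce the rotational modification introduced in the paper, whose details are not given here, and all parameter values are illustrative assumptions.

      import numpy as np

      def firefly_step(x, brightness, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
          """x: (n, d) array of positions; brightness: (n,) objective values (higher = brighter)."""
          if rng is None:
              rng = np.random.default_rng(0)
          n, d = x.shape
          new_x = x.copy()
          for i in range(n):
              for j in range(n):
                  if brightness[j] > brightness[i]:          # move firefly i toward brighter j
                      r2 = np.sum((x[j] - x[i]) ** 2)
                      beta = beta0 * np.exp(-gamma * r2)     # attractiveness decays with distance
                      new_x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(d) - 0.5)
          return new_x

      x = np.random.default_rng(2).uniform(-1.0, 1.0, size=(5, 2))
      print(firefly_step(x, brightness=-np.sum(x ** 2, axis=1)))  # fireflies drift toward the origin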

  8. Mantle flow influence on subduction evolution

    NASA Astrophysics Data System (ADS)

    Chertova, Maria V.; Spakman, Wim; Steinberger, Bernhard

    2018-05-01

    The impact of remotely forced mantle flow on regional subduction evolution is largely unexplored. Here we investigate this by means of 3D thermo-mechanical numerical modeling using a regional modeling domain. We start with simplified models consisting of a 600 km (or 1400 km) wide subducting plate surrounded by other plates. Mantle inflow of ∼3 cm/yr is prescribed during 25 Myr of slab evolution on a subset of the domain boundaries while the other side boundaries are open. Our experiments show that the influence of imposed mantle flow on subduction evolution is the least for trench-perpendicular mantle inflow from either the back or front of the slab, leading to 10-50 km changes in slab morphology and trench position, while no strong slab dip changes were observed as compared to a reference model with no imposed mantle inflow. In experiments with trench-oblique mantle inflow we notice larger effects of slab bending and slab translation of the order of 100-200 km. Lastly, we investigate how subduction in the western Mediterranean region is influenced by remotely excited mantle flow that is computed by back-advection of a temperature and density model scaled from a global seismic tomography model. After 35 Myr of subduction evolution we find 10-50 km changes in slab position and slab morphology and a slight change in overall slab tilt. Our study shows that remotely forced mantle flow leads to secondary effects on slab evolution as compared to slab buoyancy and plate motion. Still, these secondary effects occur on scales (10-50 km) typical of the large-scale deformation of the overlying crust and thus may still be of considerable importance for understanding geological evolution.

  9. Number-phase minimum-uncertainty state with reduced number uncertainty in a Kerr nonlinear interferometer

    NASA Astrophysics Data System (ADS)

    Kitagawa, M.; Yamamoto, Y.

    1987-11-01

    An alternative scheme for generating amplitude-squeezed states of photons based on unitary evolution which can properly be described by quantum mechanics is presented. This scheme is a nonlinear Mach-Zehnder interferometer containing an optical Kerr medium. The quasi-probability density (QPD) and photon-number distribution of the output field are calculated, and it is demonstrated that the reduced photon-number uncertainty and enhanced phase uncertainty maintain the minimum-uncertainty product. A self-phase-modulation of the single-mode quantized field in the Kerr medium is described based on localized operators. The spatial evolution of the state is demonstrated by QPD in the Schroedinger picture. It is shown that photon-number variance can be reduced to a level far below the limit for an ordinary squeezed state, and that the state prepared using this scheme remains a number-phase minimum-uncertainty state until the maximum reduction of number fluctuations is surpassed.

  10. Million-body star cluster simulations: comparisons between Monte Carlo and direct N-body

    NASA Astrophysics Data System (ADS)

    Rodriguez, Carl L.; Morscher, Meagan; Wang, Long; Chatterjee, Sourav; Rasio, Frederic A.; Spurzem, Rainer

    2016-12-01

    We present the first detailed comparison between million-body globular cluster simulations computed with a Hénon-type Monte Carlo code, CMC, and a direct N-body code, NBODY6++GPU. Both simulations start from an identical cluster model with 10^6 particles, and include all of the relevant physics needed to treat the system in a highly realistic way. With the two codes 'frozen' (no fine-tuning of any free parameters or internal algorithms of the codes) we find good agreement in the overall evolution of the two models. Furthermore, we find that in both models, large numbers of stellar-mass black holes (>1000) are retained for 12 Gyr. Thus, the very accurate direct N-body approach confirms recent predictions that black holes can be retained in present-day, old globular clusters. We find only minor disagreements between the two models and attribute these to the small-N dynamics driving the evolution of the cluster core for which the Monte Carlo assumptions are less ideal. Based on the overwhelming general agreement between the two models computed using these vastly different techniques, we conclude that our Monte Carlo approach, which is more approximate, but dramatically faster compared to the direct N-body, is capable of producing an accurate description of the long-term evolution of massive globular clusters even when the clusters contain large populations of stellar-mass black holes.

  11. 26 CFR 1.55-1 - Alternative minimum taxable income.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 1 2010-04-01 2010-04-01 true Alternative minimum taxable income. 1.55-1... TAXES Tax Surcharge § 1.55-1 Alternative minimum taxable income. (a) General rule for computing alternative minimum taxable income. Except as otherwise provided by statute, regulations, or other published...

  12. Pre-main Sequence Evolution and the Hydrogen-Burning Minimum Mass

    NASA Astrophysics Data System (ADS)

    Nakano, Takenori

    There is a lower limit to the mass of the main-sequence stars (the hydrogen-burning minimum mass) below which the stars cannot replenish the energy lost from their surfaces with the energy released by the hydrogen burning in their cores. This is caused by the electron degeneracy in the stars which suppresses the increase of the central temperature with contraction. To find out the lower limit we need the accurate knowledge of the pre-main sequence evolution of very low-mass stars in which the effect of electron degeneracy is important. We review how Hayashi and Nakano (1963) carried out the first determination of this limit.

  13. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. These approaches are implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
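
    As a hedged illustration of the basic strategy whose efficiency is discussed here, the Python sketch below runs classic DE/rand/1/bin generations on a cheap quadratic test function standing in for the expensive Navier-Stokes evaluations; the parameter values and the toy objective are assumptions, not the paper's setup.

      import numpy as np

      def de_generation(pop, objective, F=0.8, CR=0.9, rng=np.random.default_rng(1)):
          """pop: (n, d) array of candidate designs; returns the next population."""
          n, d = pop.shape
          fitness = np.array([objective(x) for x in pop])
          new_pop = pop.copy()
          for i in range(n):
              choices = [k for k in range(n) if k != i]
              a, b, c = pop[rng.choice(choices, 3, replace=False)]
              mutant = a + F * (b - c)                     # differential mutation
              cross = rng.random(d) < CR
              cross[rng.integers(d)] = True                # ensure at least one component crosses
              trial = np.where(cross, mutant, pop[i])
              if objective(trial) <= fitness[i]:           # greedy one-to-one selection
                  new_pop[i] = trial
          return new_pop

      pop = np.random.default_rng(0).uniform(-5.0, 5.0, size=(20, 4))
      for _ in range(50):
          pop = de_generation(pop, objective=lambda x: float(np.sum(x ** 2)))
      print(min(np.sum(x ** 2) for x in pop))              # approaches 0 for the sphere function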

  14. Brain Evolution and Human Neuropsychology: The Inferential Brain Hypothesis

    PubMed Central

    Koscik, Timothy R.; Tranel, Daniel

    2013-01-01

    Collaboration between human neuropsychology and comparative neuroscience has generated invaluable contributions to our understanding of human brain evolution and function. Further cross-talk between these disciplines has the potential to continue to revolutionize these fields. Modern neuroimaging methods could be applied in a comparative context, yielding exciting new data with the potential of providing insight into brain evolution. Conversely, incorporating an evolutionary base into the theoretical perspectives from which we approach human neuropsychology could lead to novel hypotheses and testable predictions. In the spirit of these objectives, we present here a new theoretical proposal, the Inferential Brain Hypothesis, whereby the human brain is thought to be characterized by a shift from perceptual processing to inferential computation, particularly within the social realm. This shift is believed to be a driving force for the evolution of the large human cortex. PMID:22459075

  15. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.
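
    As a hedged sketch of such an iterative tuning loop, the Python fragment below rescales a bulk conductivity until a simulated conduction velocity matches a prescribed target, using the approximate square-root dependence of velocity on conductivity in continuum tissue models; the function names, starting value and toy velocity model are assumptions, not the algorithm published in the study.

      def tune_conductivity(simulate_velocity, v_target, g0=0.2, tol=1e-3, max_iter=20):
          """Rescale bulk conductivity g until the simulated velocity matches v_target."""
          g = g0
          for _ in range(max_iter):
              v = simulate_velocity(g)
              if abs(v - v_target) <= tol * v_target:
                  break
              g *= (v_target / v) ** 2      # invert the approximate v ~ sqrt(g) relation
          return g

      # Toy stand-in for a full simulation run: velocity proportional to sqrt(g).
      print(tune_conductivity(lambda g: 0.6 * g ** 0.5, v_target=0.7))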

  16. Holographic signatures of cosmological singularities.

    PubMed

    Engelhardt, Netta; Hertog, Thomas; Horowitz, Gary T

    2014-09-19

    To gain insight into the quantum nature of cosmological singularities, we study anisotropic Kasner solutions in gauge-gravity duality. The dual description of the bulk evolution towards the singularity involves N=4 super Yang-Mills theory on the expanding branch of deformed de Sitter space and is well defined. We compute two-point correlators of Yang-Mills operators of large dimensions using spacelike geodesics anchored on the boundary. The correlators show a strong signature of the singularity around horizon scales and decay at large boundary separation at different rates in different directions. More generally, the boundary evolution exhibits a process of particle creation similar to that in inflation. This leads us to conjecture that information on the quantum nature of cosmological singularities is encoded in long-wavelength features of the boundary wave function.

  17. An optical spectrum of a large isolated gas-phase PAH cation: C78H26+

    PubMed Central

    Zhen, Junfeng; Mulas, Giacomo; Bonnamy, Anthony; Joblin, Christine

    2016-01-01

    A gas-phase optical spectrum of a large polycyclic aromatic hydrocarbon (PAH) cation - C78H26+- in the 410-610 nm range is presented. This large all-benzenoid PAH should be large enough to be stable with respect to photodissociation in the harsh conditions prevailing in the interstellar medium (ISM). The spectrum is obtained via multi-photon dissociation (MPD) spectroscopy of cationic C78H26 stored in the Fourier Transform Ion Cyclotron Resonance (FT-ICR) cell using the radiation from a mid-band optical parametric oscillator (OPO) laser. The experimental spectrum shows two main absorption peaks at 431 nm and 516 nm, in good agreement with a theoretical spectrum computed via time-dependent density functional theory (TD-DFT). DFT calculations indicate that the equilibrium geometry, with the absolute minimum energy, is of lowered, nonplanar C2 symmetry instead of the more symmetric planar D2h symmetry that is usually the minimum for similar PAHs of smaller size. This kind of slightly broken symmetry could produce some of the fine structure observed in some diffuse interstellar bands (DIBs). It can also favor the folding of C78H26+ fragments and ultimately the formation of fullerenes. This study opens up the possibility to identify the most promising candidates for DIBs amongst large cationic PAHs. PMID:26942230

  18. Long-wave model for strongly anisotropic growth of a crystal step.

    PubMed

    Khenner, Mikhail

    2013-08-01

    A continuum model for the dynamics of a single step with the strongly anisotropic line energy is formulated and analyzed. The step grows by attachment of adatoms from the lower terrace, onto which atoms adsorb from a vapor phase or from a molecular beam, and the desorption is nonnegligible (the "one-sided" model). Via a multiscale expansion, we derived a long-wave, strongly nonlinear, and strongly anisotropic evolution PDE for the step profile. Written in terms of the step slope, the PDE can be represented in a form similar to a convective Cahn-Hilliard equation. We performed the linear stability analysis and computed the nonlinear dynamics. Linear stability depends on whether the stiffness is minimum or maximum in the direction of the step growth. It also depends nontrivially on the combination of the anisotropy strength parameter and the atomic flux from the terrace to the step. Computations show formation and coarsening of a hill-and-valley structure superimposed onto a long-wavelength profile, which independently coarsens. Coarsening laws for the hill-and-valley structure are computed for two principal orientations of a maximum step stiffness, the increasing anisotropy strength, and the varying atomic flux.

  19. Optimal design of structures with multiple design variables per group and multiple loading conditions on the personal computer

    NASA Technical Reports Server (NTRS)

    Nguyen, D. T.; Rogers, J. L., Jr.

    1986-01-01

    A finite element based programming system for minimum weight design of a truss-type structure subjected to displacement, stress, and lower and upper bounds on design variables is presented. The programming system consists of a number of independent processors, each performing a specific task. These processors, however, are interfaced through a well-organized data base, thus making the tasks of modifying, updating, or expanding the programming system much easier in a friendly environment provided by many inexpensive personal computers. The proposed software can be viewed as an important step in achieving a 'dummy' finite element for optimization. The programming system has been implemented on both large and small computers (such as VAX, CYBER, IBM-PC, and APPLE) although the focus is on the latter. Examples are presented to demonstrate the capabilities of the code. The present programming system can be used stand-alone or as part of the multilevel decomposition procedure to obtain optimum design for very large scale structural systems. Furthermore, other related research areas such as developing optimization algorithms (or in the larger level: a structural synthesis program) for future trends in using parallel computers may also benefit from this study.

  20. Rock climbing: A local-global algorithm to compute minimum energy and minimum free energy pathways.

    PubMed

    Templeton, Clark; Chen, Szu-Hua; Fathizadeh, Arman; Elber, Ron

    2017-10-21

    The calculation of minimum energy or minimum free energy paths is an important step in the quantitative and qualitative studies of chemical and physical processes. The computations of these coordinates present a significant challenge and have attracted considerable theoretical and computational interest. Here we present a new local-global approach to study reaction coordinates, based on a gradual optimization of an action. Like other global algorithms, it provides a path between known reactants and products, but it uses a local algorithm to extend the current path in small steps. The local-global approach does not require an initial guess to the path, a major challenge for global pathway finders. Finally, it provides an exact answer (the steepest descent path) at the end of the calculations. Numerical examples are provided for the Mueller potential and for a conformational transition in a solvated ring system.
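
    As an illustrative companion to the Mueller-potential example mentioned above, the Python sketch below traces a plain steepest-descent trajectory on the standard Mueller-Brown test surface, the kind of path a minimum-energy-path solver ultimately recovers; it is not the local-global, action-based algorithm of the paper, and the starting point and step size are arbitrary assumptions.

      import numpy as np

      # Standard Mueller-Brown potential parameters.
      A = np.array([-200.0, -100.0, -170.0, 15.0])
      a = np.array([-1.0, -1.0, -6.5, 0.7])
      b = np.array([0.0, 0.0, 11.0, 0.6])
      c = np.array([-10.0, -10.0, -6.5, 0.7])
      X0 = np.array([1.0, 0.0, -0.5, -1.0])
      Y0 = np.array([0.0, 0.5, 1.5, 1.0])

      def grad(p):
          """Gradient of the Mueller-Brown potential at point p = (x, y)."""
          x, y = p
          e = A * np.exp(a * (x - X0) ** 2 + b * (x - X0) * (y - Y0) + c * (y - Y0) ** 2)
          gx = np.sum(e * (2 * a * (x - X0) + b * (y - Y0)))
          gy = np.sum(e * (b * (x - X0) + 2 * c * (y - Y0)))
          return np.array([gx, gy])

      # Start slightly off a saddle region and follow the negative gradient downhill.
      p = np.array([0.2, 0.3])
      for _ in range(20000):
          p = p - 1e-4 * grad(p)
      print(p)   # ends near one of the Mueller-Brown minima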

  1. Beyond directed evolution - semi-rational protein engineering and design

    PubMed Central

    Lutz, Stefan

    2010-01-01

    Over the last two decades, directed evolution has transformed the field of protein engineering. The advances in understanding protein structure and function, in no insignificant part a result of directed evolution studies, are increasingly empowering scientists and engineers to devise more effective methods for manipulating and tailoring biocatalysts. Abandoning large combinatorial libraries, the focus has shifted to small, functionally-rich libraries and rational design. A critical component of the success of these emerging engineering strategies is the set of computational tools for the evaluation of protein sequence datasets and the analysis of conformational variations of amino acids in proteins. Highlighting the opportunities and limitations of such approaches, this review focuses on recent engineering and design examples that require screening or selection of small libraries. PMID:20869867

  2. Biological intuition in alignment-free methods: response to Posada.

    PubMed

    Ragan, Mark A; Chan, Cheong Xin

    2013-08-01

    A recent editorial in Journal of Molecular Evolution highlights opportunities and challenges facing molecular evolution in the era of next-generation sequencing. Abundant sequence data should allow more-complex models to be fit at higher confidence, making phylogenetic inference more reliable and improving our understanding of evolution at the molecular level. However, concern that approaches based on multiple sequence alignment may be computationally infeasible for large datasets is driving the development of so-called alignment-free methods for sequence comparison and phylogenetic inference. The recent editorial characterized these approaches as model-free, not based on the concept of homology, and lacking in biological intuition. We argue here that alignment-free methods have not abandoned models or homology, and can be biologically intuitive.

  3. Multi Agent Systems with Symbiotic Learning and Evolution using GNP

    NASA Astrophysics Data System (ADS)

    Eguchi, Toru; Hirasawa, Kotaro; Hu, Jinglu; Murata, Junichi

    Recently, various approaches based on Multi-Agent Systems (MAS), one of the most promising frameworks in Distributed Artificial Intelligence, have been studied to control large and complicated systems efficiently. Within this trend, Multi-Agent Systems with Symbiotic Learning and Evolution, named Masbiole, has been proposed. In Masbiole, symbiotic phenomena among creatures are considered in the process of learning and evolution of the MAS, so more flexible and sophisticated solutions can be expected than with conventional MAS. In this paper, we apply Masbiole to Iterative Prisoner's Dilemma Games (IPD Games) using Genetic Network Programming (GNP), a newly developed evolutionary computation method for constituting agents. Some characteristics of Masbiole using GNP in IPD Games are clarified.

  4. The linear regulator problem for parabolic systems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Kunisch, K.

    1983-01-01

    An approximation framework is presented for computation (in finite dimensional spaces) of Riccati operators that can be guaranteed to converge to the Riccati operator in feedback controls for abstract evolution systems in a Hilbert space. It is shown how these results may be used in the linear optimal regulator problem for a large class of parabolic systems.
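
    As a hedged finite-dimensional illustration of the object being approximated, the Python sketch below solves a continuous-time algebraic Riccati equation and forms the corresponding feedback gain for a small toy system; the matrices are illustrative assumptions, not a discretization of any particular parabolic system.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      A = np.array([[0.0, 1.0], [-2.0, -1.0]])   # toy state matrix
      B = np.array([[0.0], [1.0]])               # control input matrix
      Q = np.eye(2)                              # state weighting
      R = np.array([[1.0]])                      # control weighting

      P = solve_continuous_are(A, B, Q, R)       # Riccati solution
      K = np.linalg.solve(R, B.T @ P)            # optimal feedback gain, u = -K x
      print(K)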

  5. Modal analysis of circular Bragg fibers with arbitrary index profiles

    NASA Astrophysics Data System (ADS)

    Horikis, Theodoros P.; Kath, William L.

    2006-12-01

    A finite-difference approach based upon the immersed interface method is used to analyze the mode structure of Bragg fibers with arbitrary index profiles. The method allows general propagation constants and eigenmodes to be calculated to a high degree of accuracy, while computation times are kept to a minimum by exploiting sparse matrix algebra. The method is well suited to handle complicated structures comprised of a large number of thin layers with high-index contrast and simultaneously determines multiple eigenmodes without modification.

  6. Cultural macroevolution matters

    PubMed Central

    Gray, Russell D.

    2017-01-01

    Evolutionary thinking can be applied to both cultural microevolution and macroevolution. However, much of the current literature focuses on cultural microevolution. In this article, we argue that the growing availability of large cross-cultural datasets facilitates the use of computational methods derived from evolutionary biology to answer broad-scale questions about the major transitions in human social organization. Biological methods can be extended to human cultural evolution. We illustrate this argument with examples drawn from our recent work on the roles of Big Gods and ritual human sacrifice in the evolution of large, stratified societies. These analyses show that, although the presence of Big Gods is correlated with the evolution of political complexity, in Austronesian cultures at least, they do not play a causal role in ratcheting up political complexity. In contrast, ritual human sacrifice does play a causal role in promoting and sustaining the evolution of stratified societies by maintaining and legitimizing the power of elites. We briefly discuss some common objections to the application of phylogenetic modeling to cultural evolution and argue that the use of these methods does not require a commitment to either gene-like cultural inheritance or to the view that cultures are like vertebrate species. We conclude that the careful application of these methods can substantially enhance the prospects of an evolutionary science of human history. PMID:28739960

  7. Evolution of Protein Domain Repeats in Metazoa

    PubMed Central

    Schüler, Andreas; Bornberg-Bauer, Erich

    2016-01-01

    Repeats are ubiquitous elements of proteins and they play important roles in cellular function and during evolution. Repeats are, however, also notoriously difficult to capture computationally, and large-scale studies have so far had difficulties in linking genetic causes, structural properties and evolutionary trajectories of protein repeats. Here we apply recently developed methods for repeat detection and analysis to a large dataset comprising over a hundred metazoan genomes. We find that repeats in larger protein families experience generally very few insertions or deletions (indels) of repeat units but there is also a significant fraction of noteworthy volatile outliers with very high indel rates. Analysis of structural data indicates that repeats with an open structure and independently folding units are more volatile and more likely to be intrinsically disordered. Such disordered repeats are also significantly enriched in sites with a high functional potential such as linear motifs. Furthermore, the most volatile repeats have a high sequence similarity between their units. Since many volatile repeats also show signs of recombination, we conclude they are often shaped by concerted evolution. Intriguingly, many of these conserved yet volatile repeats are involved in host-pathogen interactions where they might foster fast but subtle adaptation in biological arms races. Key Words: protein evolution, domain rearrangements, protein repeats, concerted evolution. PMID:27671125

  8. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    PubMed

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    The Spiking Neural Networks for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. This parallel architecture is implemented in modern Field-Programmable Gate Array (FPGA) devices to provide high performance execution and flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution intends to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be potentially valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Oort spike comets with large perihelion distances

    NASA Astrophysics Data System (ADS)

    Królikowska, Małgorzata; Dybczyński, Piotr A.

    2017-12-01

    The complete sample of large-perihelion nearly-parabolic comets discovered during the period 1901-2010 is studied, starting with their orbit determination. Next, an orbital evolution that includes three perihelion passages (previous-observed-next) is investigated in which a full model of Galactic perturbations and perturbations from passing stars is incorporated. We show that the distribution of planetary perturbations suffered by actual large-perihelion comets during their passage through the Solar system has a deep, unexpected minimum around zero, which indicates a lack of 'almost unperturbed' comets. Using a series of simulations we show that this deep well is moderately resistant to some diffusion of the orbital elements of the analysed comets. It seems reasonable to assert that the observed stream of these large-perihelion comets experienced a series of specific planetary configurations when passing through the planetary zone. An analysis of the past dynamics of these comets clearly shows that dynamically new comets can appear only when their original semimajor axes are greater than 20 000 au. On the other hand, dynamically old comets are completely absent for semimajor axes longer than 40 000 au. We demonstrate that the observed 1/a_ori distribution exhibits a local minimum separating dynamically new from dynamically old comets. Long-term dynamical studies reveal a wide variety of orbital behaviour. Several interesting examples of the action of passing stars are also described, in particular the impact of Gliese 710, which will pass close to the Sun in the future. However, none of the obtained stellar perturbations is sufficient to change the dynamical status of the analysed comets.

  10. Influence of savanna fire on Australian monsoon season precipitation and circulation as simulated using a distributed computing environment

    NASA Astrophysics Data System (ADS)

    Lynch, Amanda H.; Abramson, David; Görgen, Klaus; Beringer, Jason; Uotila, Petteri

    2007-10-01

    Fires in the Australian savanna have been hypothesized to affect monsoon evolution, but the hypothesis is controversial and the effects have not been quantified. A distributed computing approach allows the development of a challenging experimental design that permits simultaneous variation of all fire attributes. The climate model simulations are distributed around multiple independent computer clusters in six countries, an approach that has potential for a range of other large simulation applications in the earth sciences. The experiment clarifies that savanna burning can shape the monsoon through two mechanisms. Boundary-layer circulation and large-scale convergence is intensified monotonically through increasing fire intensity and area burned. However, thresholds of fire timing and area are evident in the consequent influence on monsoon rainfall. In the optimal band of late, high intensity fires with a somewhat limited extent, it is possible for the wet season to be significantly enhanced.

  11. Using Supercomputers to Probe the Early Universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giorgi, Elena Edi

    For decades physicists have been trying to decipher the first moments after the Big Bang. Using very large telescopes, for example, scientists scan the skies and look at how fast galaxies move. Satellites study the relic radiation left from the Big Bang, called the cosmic microwave background radiation. And finally, particle colliders, like the Large Hadron Collider at CERN, allow researchers to smash protons together and analyze the debris left behind by such collisions. Physicists at Los Alamos National Laboratory, however, are taking a different approach: they are using computers. In collaboration with colleagues at the University of California San Diego, the Los Alamos researchers developed a computer code, called BURST, that can simulate conditions during the first few minutes of cosmological evolution.

  12. From evolutionary computation to the evolution of things.

    PubMed

    Eiben, Agoston E; Smith, Jim

    2015-05-28

    Evolution has provided a source of inspiration for algorithm designers since the birth of computers. The resulting field, evolutionary computation, has been successful in solving engineering tasks ranging in outlook from the molecular to the astronomical. Today, the field is entering a new phase as evolutionary algorithms that take place in hardware are developed, opening up new avenues towards autonomous machines that can adapt to their environment. We discuss how evolutionary computation compares with natural evolution and what its benefits are relative to other computing approaches, and we introduce the emerging area of artificial evolution in physical systems.

  13. Speedup of minimum discontinuity phase unwrapping algorithm with a reference phase distribution

    NASA Astrophysics Data System (ADS)

    Liu, Yihang; Han, Yu; Li, Fengjiao; Zhang, Qican

    2018-06-01

    In three-dimensional (3D) shape measurement based on phase analysis, the phase analysis process usually produces a wrapped phase map ranging from -π to π with 2π discontinuities, so a phase unwrapping algorithm is necessary to recover the continuous, natural phase map from which the 3D height distribution can be restored. The minimum discontinuity phase unwrapping algorithm can solve many different kinds of phase unwrapping problems, but its main drawback is that it requires a large amount of computation and has low efficiency in searching for the improving loop within the phase's discontinuity area. To overcome this drawback, a speedup of the minimum discontinuity phase unwrapping algorithm that uses the phase distribution on a reference plane is proposed. In the improved algorithm, before the minimum discontinuity phase unwrapping is carried out, an integer number K is calculated from the ratio of the wrapped phase to the natural phase on a reference plane. The jump counts of the unwrapped phase can then be reduced by adding 2Kπ, so the efficiency of the minimum discontinuity phase unwrapping algorithm is significantly improved. Both simulated and experimental results verify the feasibility of the proposed algorithm, and clearly show that it works very well and has high efficiency.
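
    As a hedged sketch of the pre-correction idea, the Python fragment below estimates a per-pixel integer K from a smooth reference phase and adds 2Kπ to the wrapped phase, so that far fewer discontinuities remain for the more expensive minimum discontinuity unwrapper; the function name and the toy one-dimensional check are illustrative assumptions.

      import numpy as np

      def precorrect_with_reference(wrapped, reference):
          """wrapped: phase in (-pi, pi]; reference: continuous phase of a reference plane."""
          k = np.round((reference - wrapped) / (2 * np.pi))
          # Correct wherever the object departs from the reference by less than pi.
          return wrapped + 2 * np.pi * k

      # Toy check: a linear phase ramp is recovered exactly from its wrapped version.
      true_phase = np.linspace(0.0, 40.0, 200)
      wrapped = np.angle(np.exp(1j * true_phase))
      print(np.allclose(precorrect_with_reference(wrapped, true_phase), true_phase))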

  14. Pan-European seasonal trends and recent changes of drought frequency and severity

    NASA Astrophysics Data System (ADS)

    Spinoni, Jonathan; Naumann, Gustavo; Vogt, Jürgen V.

    2017-01-01

    In recent decades drought has become one of the natural disasters with the most relevant impacts in Europe, and not only in water-scarce areas such as the Mediterranean that are prone to such events. As a complex natural phenomenon, drought is characterized by many hydro-meteorological aspects and by a large variety of possible impacts and definitions. This study focuses on meteorological drought, investigated using indicators that include precipitation and potential evapotranspiration (PET), i.e. the Standardized Precipitation Index (SPI) and the Standardized Precipitation-Evapotranspiration Index (SPEI). These indicators account for the lack of precipitation and the drying effect of hot temperatures; in this study they have been computed for short accumulation periods (3 months) to capture the seasonality of droughts. The input variables, monthly precipitation and temperature for 1950-2015, stem from daily gridded E-OBS data, and the indicators were computed on regular grids spanning the whole of Europe. PET was calculated from minimum and maximum temperatures using the Hargreaves-Samani formulation. Monthly precipitation and PET were then used to compute the SPI-3 and SPEI-3 time series. From these series, drought events were defined at the seasonal scale, and trends in the frequency and severity of droughts and extreme droughts were analyzed for the periods 1950-2015 and 1981-2015. According to the SPI (driven by precipitation), the results show a statistically significant tendency towards less frequent and severe drought events over North-Eastern Europe, especially in winter and spring, and a moderate opposite tendency over Southern Europe, especially in spring and summer. According to the SPEI (driven by precipitation and temperature), Northern Europe shows similar wetting patterns, while Southern and Eastern Europe show a more remarkable drying tendency, especially in summer and autumn. Both for frequency and severity, the evolution towards drier conditions is most pronounced in the last three decades over Central Europe in spring, the Mediterranean area in summer, and Eastern Europe in autumn.
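
    As a hedged illustration of the PET step described above, the Python sketch below evaluates the Hargreaves-Samani estimate PET = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin); the extraterrestrial radiation Ra (expressed in mm/day of equivalent evaporation) is supplied as an input, and the example values are illustrative assumptions.

      import numpy as np

      def hargreaves_samani_pet(tmax, tmin, ra):
          """Daily reference evapotranspiration (mm/day) from max/min temperature (deg C)."""
          tmean = 0.5 * (tmax + tmin)
          return 0.0023 * ra * (tmean + 17.8) * np.sqrt(np.maximum(tmax - tmin, 0.0))

      # Example: a warm summer day with Ra of about 16 mm/day equivalent.
      print(hargreaves_samani_pet(tmax=31.0, tmin=18.0, ra=16.0))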

  15. Quantum computation in the analysis of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Gomez, Richard B.; Ghoshal, Debabrata; Jayanna, Anil

    2004-08-01

    Recent research on quantum computation provides quantum algorithms with higher efficiency and speedup compared to their classical counterparts. In this paper, we present the results of our investigation of several applications of such quantum algorithms, especially Grover's search algorithm, in the analysis of hyperspectral data. We found many parallels with Grover's method in existing data processing work that makes use of classical spectral matching algorithms. Our efforts also included the study of several methods in hyperspectral image analysis where classical computation on large data sets could be replaced with quantum computation. The crux of the computational problem for a hyperspectral image data cube is to convert the large amount of data in high-dimensional space into real information. Currently, using the classical model, several time-consuming methods and steps are necessary to analyze these data, including animation, the Minimum Noise Fraction transform, the Pixel Purity Index algorithm, N-dimensional scatter plots, and identification of endmember spectra. If a quantum model of computation involving hyperspectral image data can be developed and formalized, it is highly likely that information retrieval from hyperspectral image data cubes would become much easier and the final information content much more meaningful and timely. In this case, dimensionality would not be a curse, but a blessing.
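
    As a hedged illustration of the quadratic speedup being invoked, the Python sketch below classically simulates Grover's search over N = 2^n items with a single marked index, using roughly (pi/4) * sqrt(N) oracle/diffusion iterations; the problem size and marked index are arbitrary assumptions, and the simulation is of course not a quantum implementation.

      import numpy as np

      def grover_success_probability(n_qubits=8, marked=42):
          """State-vector simulation of Grover search for one marked item out of 2**n_qubits."""
          N = 2 ** n_qubits
          amp = np.full(N, 1.0 / np.sqrt(N))           # uniform superposition
          iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
          for _ in range(iterations):
              amp[marked] *= -1.0                      # oracle: phase-flip the marked amplitude
              amp = 2 * amp.mean() - amp               # diffusion: inversion about the mean
          return amp[marked] ** 2                      # probability of measuring the marked item

      print(grover_success_probability())              # close to 1 after ~12 iterations for N = 256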

  16. DIRECTIONAL CULTURAL CHANGE BY MODIFICATION AND REPLACEMENT OF MEMES

    PubMed Central

    Cardoso, Gonçalo C.; Atwell, Jonathan W.

    2017-01-01

    Evolutionary approaches to culture remain contentious. A source of contention is that cultural mutation may be substantial and, if it drives cultural change, then current evolutionary models are not adequate. But we lack studies quantifying the contribution of mutations to directional cultural change. We estimated the contribution of one type of cultural mutations—modification of memes—to directional cultural change using an amenable study system: learned birdsongs in a species that recently entered an urban habitat. Many songbirds have higher minimum song frequency in cities, to alleviate masking by low-frequency noise. We estimated that the input of meme modifications in an urban songbird population explains about half the extent of the population divergence in song frequency. This contribution of cultural mutations is large, but insufficient to explain the entire population divergence. The remaining divergence is due to selection of memes or creation of new memes. We conclude that the input of cultural mutations can be quantitatively important, unlike in genetic evolution, and that it operates together with other mechanisms of cultural evolution. For this and other traits, in which the input of cultural mutations might be important, quantitative studies of cultural mutation are necessary to calibrate realistic models of cultural evolution. PMID:20722726

  17. Solar-Cycle Variability of Magnetosheath Fluctuations at Earth and Venus

    NASA Astrophysics Data System (ADS)

    Dwivedi, N. K.; Narita, Y.; Kovacs, P.

    2014-12-01

    The magnetosheath is the region between the bow shock and the magnetopause, and the magnetosheath plasma is mostly in a turbulent state. In the present investigation we closely examine the dependence of magnetosheath fluctuations on the solar cycle (solar maximum and solar minimum) at a magnetized planetary body (Earth) and compare them with an unmagnetized planetary body (Venus) at solar minimum. We use CLUSTER FGM data for the solar maximum (2001-2002) and solar minimum (2006-2008), and Venus fluxgate magnetometer data for the solar minimum (2006-2008), to perform a comparative statistical study of the energy spectra and probability density functions (PDFs) and to assess the spectral features of the magnetic fluctuations at both planetary bodies. In the comparison we study the relation between the inertial ranges of the spectra and the temporal scales of non-Gaussian magnetic fluctuations derived from the PDF analyses: the former can refer to turbulent cascade dynamics, while the latter may indicate intermittency. We first transform the magnetic field data into a mean-field-aligned coordinate system with respect to the large-scale magnetic field direction and then compute the power spectral density with the help of the Welch algorithm. The computed energy spectra of Earth's magnetosheath show a moderate variability with the solar cycle and have a broader inertial range. The energy spectra estimated for the solar minimum at Venus, however, give clear evidence of a spectral break in the vicinity of the ion gyroradius; beyond the break the energy spectra become steeper and show distinctive spectral scales, which is interpreted as the onset of the energy cascade. We also briefly address the influence of turbulence on plasma transport and the wave dynamics responsible for the spectral break, and predict spectral features of the energy spectra for the solar maximum at Venus based on the results obtained for the solar minimum. The research leading to these results has received funding from the European Community's Seventh Framework Programme ([FP7/2007-2013]) under grant agreement number 313038/STORM.
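
    As a hedged sketch of the spectral step, the Python fragment below applies Welch's averaged-periodogram method to a synthetic magnetic-field component; the sample rate, signal and segment length are illustrative assumptions, not the actual Cluster or Venus magnetometer processing parameters.

      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(0)
      fs = 22.0                                    # sample rate in Hz (illustrative)
      t = np.arange(0.0, 600.0, 1.0 / fs)
      # Synthetic fluctuation: a low-frequency wave plus broadband noise.
      b_field = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(t.size)

      freq, psd = welch(b_field, fs=fs, nperseg=2048)
      print(freq[np.argmax(psd)])                  # dominant frequency, close to 0.1 Hz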

  18. Minimum-noise production of translation factor eIF4G maps to a mechanistically determined optimal rate control window for protein synthesis

    PubMed Central

    Meng, Xiang; Firczuk, Helena; Pietroni, Paola; Westbrook, Richard; Dacheux, Estelle; Mendes, Pedro; McCarthy, John E.G.

    2017-01-01

    Gene expression noise influences organism evolution and fitness. The mechanisms determining the relationship between stochasticity and the functional role of translation machinery components are critical to viability. eIF4G is an essential translation factor that exerts strong control over protein synthesis. We observe an asymmetric, approximately bell-shaped, relationship between the average intracellular abundance of eIF4G and rates of cell population growth and global mRNA translation, with peak rates occurring at normal physiological abundance. This relationship fits a computational model in which eIF4G is at the core of a multi-component–complex assembly pathway. This model also correctly predicts a plateau-like response of translation to super-physiological increases in abundance of the other cap-complex factors, eIF4E and eIF4A. Engineered changes in eIF4G abundance amplify noise, demonstrating that minimum stochasticity coincides with physiological abundance of this factor. Noise is not increased when eIF4E is overproduced. Plasmid-mediated synthesis of eIF4G imposes increased global gene expression stochasticity and reduced viability because the intrinsic noise for this factor influences total cellular gene noise. The naturally evolved eIF4G gene expression noise minimum maps within the optimal activity zone dictated by eIF4G's mechanistic role. Rate control and noise are therefore interdependent and have co-evolved to share an optimal physiological abundance point. PMID:27928055

  19. Generation of anisotropy in turbulent flows subjected to rapid distortion

    NASA Astrophysics Data System (ADS)

    Clark, Timothy T.; Kurien, Susan; Rubinstein, Robert

    2018-01-01

    A computational tool for the anisotropic time-evolution of the spectral velocity correlation tensor is presented. We operate in the linear, rapid distortion limit of the mean-field-coupled equations. Each term of the equations is written in the form of an expansion to arbitrary order in the basis of irreducible representations of the SO(3) symmetry group. The computational algorithm for this calculation solves a system of coupled equations for the scalar weights of each generated anisotropic mode. The analysis demonstrates that rapid distortion rapidly but systematically generates higher-order anisotropic modes. To maintain a tractable computation, the maximum number of rotational modes to be used in a given calculation is specified a priori. The computed Reynolds stress converges to the theoretical result derived by Batchelor and Proudman [Quart. J. Mech. Appl. Math. 7, 83 (1954), 10.1093/qjmam/7.1.83] if a sufficiently large maximum number of rotational modes is utilized; more modes are required to recover the solution at later times. The emergence and evolution of the underlying multidimensional space of functions is presented here using a 64-mode calculation. Alternative implications for modeling strategies are discussed.

  20. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.

  1. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    NASA Astrophysics Data System (ADS)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    2016-06-01

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but at the same time requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulation, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.

  2. Exemplar for simulation challenges: Large-deformation micromechanics of Sylgard 184/glass microballoon syntactic foams.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Judith Alice; Long, Kevin Nicholas

    2018-05-01

    Sylgard® 184/Glass Microballoon (GMB) potting material is currently used in many NW systems. Analysts need a macroscale constitutive model that can predict material behavior under complex loading and damage evolution. To address this need, ongoing modeling and experimental efforts have focused on study of damage evolution in these materials. Micromechanical finite element simulations that resolve individual GMB and matrix components promote discovery and better understanding of the material behavior. With these simulations, we can study the role of the GMB volume fraction, time-dependent damage, behavior under confined vs. unconfined compression, and the effects of partial damage. These simulations are challenging and push the boundaries of capability even with the high performance computing tools available at Sandia. We summarize the major challenges and the current state of this modeling effort, as an exemplar of micromechanical modeling needs that can motivate advances in future computing efforts.

  3. Soft evolution of multi-jet final states

    DOE PAGES

    Gerwick, Erik; Schumann, Steffen; Höche, Stefan; ...

    2015-02-16

    We present a new framework for computing resummed and matched distributions in processes with many hard QCD jets. The intricate color structure of soft gluon emission at large angles renders resummed calculations highly non-trivial in this case. We automate all ingredients necessary for the color evolution of the soft function at next-to-leading-logarithmic accuracy, namely the selection of the color bases and the projections of color operators and Born amplitudes onto those bases. Explicit results for all QCD processes with up to 2 → 5 partons are given. We also devise a new tree-level matching scheme for resummed calculations which exploits a quasi-local subtraction based on the Catani-Seymour dipole formalism. We implement both resummation and matching in the Sherpa event generator. As a proof of concept, we compute the resummed and matched transverse-thrust distribution for hadronic collisions.

  4. Aerodynamic optimization of supersonic compressor cascade using differential evolution on GPU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aissa, Mohamed Hasanine; Verstraete, Tom; Vuik, Cornelis

    Differential Evolution (DE) is a powerful stochastic optimization method. Compared to gradient-based algorithms, DE is able to avoid local minima but at the same time requires more function evaluations. In turbomachinery applications, function evaluations are performed with time-consuming CFD simulation, which results in a long, unaffordable design cycle. Modern High Performance Computing systems, especially Graphics Processing Units (GPUs), are able to alleviate this inconvenience by accelerating the design evaluation itself. In this work we present a validated CFD solver running on GPUs, able to accelerate the design evaluation and thus the entire design process. An achieved speedup of 20x to 30x enabled the DE algorithm to run on a high-end computer instead of a costly large cluster. The GPU-enhanced DE was used to optimize the aerodynamics of a supersonic compressor cascade, achieving an aerodynamic loss minimization of 20%.

  5. Energy, time, and channel evolution in catastrophically disturbed fluvial systems

    USGS Publications Warehouse

    Simon, A.

    1992-01-01

    Specific energy is shown to decrease nonlinearly with time during channel evolution and provides a measure of reductions in available energy at the channel bed. Data from two sites show convergence towards a minimum specific energy with time. Time-dependent reductions in specific energy at a point act in concert with minimization of the rate of energy dissipation over a reach during channel evolution as the fluvial systems adjust to a new equilibrium.

  6. Non-invasive imaging techniques in assessing non-alcoholic fatty liver disease: a current status of available methods

    PubMed Central

    Lăpădat, AM; Jianu, IR; Ungureanu, BS; Florescu, LM; Gheonea, DI; Sovaila, S; Gheonea, IA

    2017-01-01

    Non-alcoholic fatty liver disease (NAFLD) is an ailment affecting a growing number of people worldwide that is diagnosed via non-invasive imaging techniques, at a time when minimizing the harm caused by medical procedures is rightfully emphasized and sought after more than ever before. Liver steatosis should not be taken lightly even if its evolution is largely benign, as it has the potential to develop into non-alcoholic steatohepatitis (NASH) or, more concerning, hepatic cirrhosis and hepatocellular carcinoma (HCC). Traditionally, liver biopsy has been the standard for diagnosing this particular liver disease, but nowadays a considerable number of imaging methods are available for diagnosing hepatosteatosis, and choosing the one appropriate to the clinical context is key. Although they differ in sensitivity and specificity for determining the hepatic fat fraction (FF), as well as in availability, operating difficulty, cost, and reproducibility, these imaging techniques are invaluable to any modern physician. Ultrasonography (US), computed tomography (CT), magnetic resonance imaging (MRI), elastography, and spectroscopy will be discussed in order to lay out the advantages and disadvantages of their diagnostic potential and application. Although imaging has given physicians valuable insight into the management of NAFLD, the current methods are far from perfect; given time, they will surely be improved and the use of liver biopsy may be removed entirely. PMID:28255371

  7. 26 CFR 1.383-2 - Limitations on certain capital losses and excess credits in computing alternative minimum tax...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 26 Internal Revenue 4 2010-04-01 2010-04-01 false Limitations on certain capital losses and excess credits in computing alternative minimum tax. [Reserved] 1.383-2 Section 1.383-2 Internal Revenue INTERNAL REVENUE SERVICE, DEPARTMENT OF THE TREASURY (CONTINUED) INCOME TAX (CONTINUED) INCOME TAXES Insolvency...

  8. 12 CFR Appendix A to Subpart A of... - Minimum Capital Components for Interest Rate and Foreign Exchange Rate Contracts

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... interest rate and foreign exchange rate contracts are computed on the basis of the credit equivalent amounts of such contracts. Credit equivalent amounts are computed for each of the following off-balance... Equivalent Amounts a. The minimum capital components for interest rate and foreign exchange rate contracts...

  9. On the formation of granulites

    USGS Publications Warehouse

    Bohlen, S.R.

    1991-01-01

    The tectonic settings for the formation and evolution of regional granulite terranes and the lowermost continental crust can be deduced from pressure-temperature-time (P-T-time) paths and constrained by petrological and geophysical considerations. P-T conditions deduced for regional granulites require transient, average geothermal gradients of greater than 35 °C km-1, implying minimum heat flow in excess of 100 mW m-2. Such high heat flow is probably caused by magmatic heating. Tectonic settings wherein such conditions are found include convergent plate margins, continental rifts, hot spots and at the margins of large, deep-seated batholiths. Cooling paths can be constrained by solid-solid and devolatilization equilibria and geophysical modelling. -from Author

  10. Subgrid-scale models for large-eddy simulation of rotating turbulent flows

    NASA Astrophysics Data System (ADS)

    Silvis, Maurits; Trias, Xavier; Abkar, Mahdi; Bae, Hyunji Jane; Lozano-Duran, Adrian; Verstappen, Roel

    2016-11-01

    This paper discusses subgrid models for large-eddy simulation of anisotropic flows using anisotropic grids. In particular, we are looking into ways to model not only the subgrid dissipation, but also transport processes, since these are expected to play an important role in rotating turbulent flows. We therefore consider subgrid-scale models of the form τ = -2νt S + μt (SΩ - ΩS), where the eddy viscosity νt is given by the minimum-dissipation model and μt represents a transport coefficient; S is the symmetric part of the velocity gradient and Ω the skew-symmetric part. To incorporate the effect of mesh anisotropy, the filter length is taken in such a way that it minimizes the difference between the turbulent stress in physical and computational space, where the physical space is covered by an anisotropic mesh and the computational space is isotropic. The resulting model is successfully tested for rotating homogeneous isotropic turbulence and rotating plane-channel flows. The research was largely carried out during the CTR SP 2016. M.S. and R.V. acknowledge the financial support to attend this Summer Program.
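
    As an illustration of the tensor algebra in the model form quoted above (and only that; the minimum-dissipation closure for νt is not reproduced here), a short numpy sketch with νt and μt supplied as given scalar coefficients:

      import numpy as np

      def subgrid_stress(grad_u, nu_t, mu_t):
          """tau = -2*nu_t*S + mu_t*(S@Omega - Omega@S), where S and Omega are the
          symmetric and skew-symmetric parts of the filtered velocity gradient (3x3)."""
          S = 0.5 * (grad_u + grad_u.T)      # strain-rate tensor
          Omega = 0.5 * (grad_u - grad_u.T)  # rotation-rate tensor
          return -2.0 * nu_t * S + mu_t * (S @ Omega - Omega @ S)

      # arbitrary velocity gradient and placeholder coefficients
      grad_u = np.array([[0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.5],
                         [0.2, 0.0, 0.0]])
      tau = subgrid_stress(grad_u, nu_t=1e-3, mu_t=5e-4)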

  11. Minimum Information about a Spinal Cord Injury Experiment: A Proposed Reporting Standard for Spinal Cord Injury Experiments

    PubMed Central

    Ferguson, Adam R.; Popovich, Phillip G.; Xu, Xiao-Ming; Snow, Diane M.; Igarashi, Michihiro; Beattie, Christine E.; Bixby, John L.

    2014-01-01

    Abstract The lack of reproducibility in many areas of experimental science has a number of causes, including a lack of transparency and precision in the description of experimental approaches. This has far-reaching consequences, including wasted resources and slowing of progress. Additionally, the large number of laboratories around the world publishing articles on a given topic make it difficult, if not impossible, for individual researchers to read all of the relevant literature. Consequently, centralized databases are needed to facilitate the generation of new hypotheses for testing. One strategy to improve transparency in experimental description, and to allow the development of frameworks for computer-readable knowledge repositories, is the adoption of uniform reporting standards, such as common data elements (data elements used in multiple clinical studies) and minimum information standards. This article describes a minimum information standard for spinal cord injury (SCI) experiments, its major elements, and the approaches used to develop it. Transparent reporting standards for experiments using animal models of human SCI aim to reduce inherent bias and increase experimental value. PMID:24870067

  12. The HART II International Workshop: An Assessment of the State-of-the-Art in Comprehensive Code Prediction

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2013-01-01

    Significant advancements in computational fluid dynamics (CFD) and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications have been achieved recently. Despite this, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this article, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. A companion article addresses the CFD/CSD coupled approach. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics-especially for the cases with HHC-and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  13. An Assessment of Comprehensive Code Prediction State-of-the-Art Using the HART II International Workshop Data

    NASA Technical Reports Server (NTRS)

    vanderWall, Berend G.; Lim, Joon W.; Smith, Marilyn J.; Jung, Sung N.; Bailly, Joelle; Baeder, James D.; Boyd, D. Douglas, Jr.

    2012-01-01

    Despite significant advancements in computational fluid dynamics and their coupling with computational structural dynamics (CSD, or comprehensive codes) for rotorcraft applications, CSD codes with their engineering level of modeling the rotor blade dynamics, the unsteady sectional aerodynamics and the vortical wake are still the workhorse for the majority of applications. This is especially true when a large number of parameter variations is to be performed and their impact on performance, structural loads, vibration and noise is to be judged in an approximate yet reliable and as accurate as possible manner. In this paper, the capabilities of such codes are evaluated using the HART II International Workshop database, focusing on a typical descent operating condition which includes strong blade-vortex interactions. Three cases are of interest: the baseline case and two cases with 3/rev higher harmonic blade root pitch control (HHC) with different control phases employed. One setting is for minimum blade-vortex interaction noise radiation and the other one for minimum vibration generation. The challenge is to correctly predict the wake physics, especially for the cases with HHC, and all the dynamics, aerodynamics, modifications of the wake structure and the aero-acoustics coming with it. It is observed that the comprehensive codes used today have a surprisingly good predictive capability when they appropriately account for all of the physics involved. The minimum requirements to obtain these results are outlined.

  14. Quantum Neural Nets

    NASA Technical Reports Server (NTRS)

    Zak, Michail; Williams, Colin P.

    1997-01-01

    The capacity of classical neurocomputers is limited by the number of classical degrees of freedom, which is roughly proportional to the size of the computer. By contrast, a hypothetical quantum neurocomputer can implement an exponentially large number of degrees of freedom within the same size. In this paper an attempt is made to reconcile the linear reversible structure of quantum evolution with nonlinear irreversible dynamics for neural nets.

  15. Mathematical and Computational Aspects of Multiscale Materials Modeling, Mathematics-Numerical analysis, Section II.A.a.3.4, Conference and symposia organization II.A.2.a

    DTIC Science & Technology

    2015-02-04

    dislocation dynamics models (DDD), continuum representations). Coupling of these models is difficult. Coupling of atomistics and DDD models has been... explored to some extent, but the coupling between DDD and continuum models of the evolution of large populations of dislocations is essentially unexplored

  16. QCDNUM: Fast QCD evolution and convolution

    NASA Astrophysics Data System (ADS)

    Botje, M.

    2011-02-01

    The QCDNUM program numerically solves the evolution equations for parton densities and fragmentation functions in perturbative QCD. Un-polarised parton densities can be evolved up to next-to-next-to-leading order in powers of the strong coupling constant, while polarised densities or fragmentation functions can be evolved up to next-to-leading order. Other types of evolution can be accessed by feeding alternative sets of evolution kernels into the program. A versatile convolution engine provides tools to compute parton luminosities, cross-sections in hadron-hadron scattering, and deep inelastic structure functions in the zero-mass scheme or in generalised mass schemes. Input to these calculations are either the QCDNUM evolved densities, or those read in from an external parton density repository. Included in the software distribution are packages to calculate zero-mass structure functions in un-polarised deep inelastic scattering, and heavy flavour contributions to these structure functions in the fixed flavour number scheme. Program summary: Program title: QCDNUM, version 17.00. Catalogue identifier: AEHV_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHV_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU Public Licence. No. of lines in distributed program, including test data, etc.: 45 736. No. of bytes in distributed program, including test data, etc.: 911 569. Distribution format: tar.gz. Programming language: Fortran-77. Computer: All. Operating system: All. RAM: Typically 3 Mbytes. Classification: 11.5. Nature of problem: Evolution of the strong coupling constant and parton densities, up to next-to-next-to-leading order in perturbative QCD. Computation of observable quantities by Mellin convolution of the evolved densities with partonic cross-sections. Solution method: Parametrisation of the parton densities as linear or quadratic splines on a discrete grid, and evolution of the spline coefficients by solving (coupled) triangular matrix equations with a forward substitution algorithm. Fast computation of convolution integrals as weighted sums of spline coefficients, with weights derived from user-given convolution kernels. Restrictions: Accuracy and speed are determined by the density of the evolution grid. Running time: Less than 10 ms on a 2 GHz Intel Core 2 Duo processor to evolve the gluon density and 12 quark densities at next-to-next-to-leading order over a large kinematic range.
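
    The solution method described above reduces, at each evolution step, to forward substitution on triangular systems of spline coefficients. A generic sketch of that kernel (illustrative only, not QCDNUM's Fortran internals):

      import numpy as np

      def forward_substitution(L, b):
          """Solve L y = b for lower-triangular L in O(n^2) by forward substitution."""
          n = len(b)
          y = np.zeros(n)
          for i in range(n):
              y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
          return y

      L = np.array([[2.0, 0.0, 0.0],
                    [1.0, 3.0, 0.0],
                    [0.5, 1.0, 4.0]])
      b = np.array([2.0, 5.0, 6.5])
      print(forward_substitution(L, b))  # agrees with np.linalg.solve(L, b)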

  17. Integrated computer-aided design using minicomputers

    NASA Technical Reports Server (NTRS)

    Storaasli, O. O.

    1980-01-01

    Computer-Aided Design/Computer-Aided Manufacturing (CAD/CAM), a highly interactive software, has been implemented on minicomputers at the NASA Langley Research Center. CAD/CAM software integrates many formerly fragmented programs and procedures into one cohesive system; it also includes finite element modeling and analysis, and has been interfaced via a computer network to a relational data base management system and offline plotting devices on mainframe computers. The CAD/CAM software system requires interactive graphics terminals operating at a minimum of 4800 bits/sec transfer rate to a computer. The system is portable and introduces 'interactive graphics', which permits the creation and modification of models interactively. The CAD/CAM system has already produced designs for a large area space platform, a national transonic facility fan blade, and a laminar flow control wind tunnel model. Besides the design/drafting element analysis capability, CAD/CAM provides options to produce an automatic program tooling code to drive a numerically controlled (N/C) machine. Reductions in time for design, engineering, drawing, finite element modeling, and N/C machining will benefit productivity through reduced costs, fewer errors, and a wider range of configuration.

  18. Computation of distribution of minimum resolution for log-normal distribution of chromatographic peak heights.

    PubMed

    Davis, Joe M

    2011-10-28

    General equations are derived for the distribution of minimum resolution between two chromatographic peaks, when peak heights in a multi-component chromatogram follow a continuous statistical distribution. The derivation draws on published theory by relating the area under the distribution of minimum resolution to the area under the distribution of the ratio of peak heights, which in turn is derived from the peak-height distribution. Two procedures are proposed for the equations' numerical solution. The procedures are applied to the log-normal distribution, which recently was reported to describe the distribution of component concentrations in three complex natural mixtures. For published statistical parameters of these mixtures, the distribution of minimum resolution is similar to that for the commonly assumed exponential distribution of peak heights used in statistical-overlap theory. However, these two distributions of minimum resolution can differ markedly, depending on the scale parameter of the log-normal distribution. Theory for the computation of the distribution of minimum resolution is extended to other cases of interest. With the log-normal distribution of peak heights as an example, the distribution of minimum resolution is computed when small peaks are lost due to noise or detection limits, and when the height of at least one peak is less than an upper limit. The distribution of minimum resolution shifts slightly to lower resolution values in the first case and to markedly larger resolution values in the second one. The theory and numerical procedure are confirmed by Monte Carlo simulation. Copyright © 2011 Elsevier B.V. All rights reserved.

  19. NeuroManager: a workflow analysis based simulation management engine for computational neuroscience

    PubMed Central

    Stockton, David B.; Santamaria, Fidel

    2015-01-01

    We developed NeuroManager, an object-oriented simulation management software engine for computational neuroscience. NeuroManager automates the workflow of simulation job submissions when using heterogeneous computational resources, simulators, and simulation tasks. The object-oriented approach (1) provides flexibility to adapt to a variety of neuroscience simulators, (2) simplifies the use of heterogeneous computational resources, from desktops to super computer clusters, and (3) improves tracking of simulator/simulation evolution. We implemented NeuroManager in MATLAB, a widely used engineering and scientific language, for its signal and image processing tools, prevalence in electrophysiology analysis, and increasing use in college Biology education. To design and develop NeuroManager we analyzed the workflow of simulation submission for a variety of simulators, operating systems, and computational resources, including the handling of input parameters, data, models, results, and analyses. This resulted in 22 stages of simulation submission workflow. The software incorporates progress notification, automatic organization, labeling, and time-stamping of data and results, and integrated access to MATLAB's analysis and visualization tools. NeuroManager provides users with the tools to automate daily tasks, and assists principal investigators in tracking and recreating the evolution of research projects performed by multiple people. Overall, NeuroManager provides the infrastructure needed to improve workflow, manage multiple simultaneous simulations, and maintain provenance of the potentially large amounts of data produced during the course of a research project. PMID:26528175

  1. Cloud computing task scheduling strategy based on improved differential evolution algorithm

    NASA Astrophysics Data System (ADS)

    Ge, Junwei; He, Qian; Fang, Yiqiu

    2017-04-01

    In order to optimize the cloud computing task scheduling scheme, an improved differential evolution algorithm for cloud computing task scheduling is proposed. First, a cloud computing task scheduling model and its fitness function are established; the improved differential evolution algorithm then optimizes this fitness function, using a generation-dependent dynamic selection strategy together with a dynamic mutation strategy to preserve both global and local search ability. Performance tests were carried out on the CloudSim simulation platform, and the experimental results show that the improved differential evolution algorithm reduces task execution time and saves user cost, providing a good implementation of optimal scheduling for cloud computing tasks.

  2. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion

    PubMed Central

    Du, Shichuan; Martinez, Aleix M.

    2013-01-01

    Abstract Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10–20 ms), even at low resolutions. Fear and anger are recognized the slowest (100–250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70–200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models. PMID:23509409

  3. Radiative Transfer and Satellite Remote Sensing of Cirrus Clouds Using FIRE-2-IFO Data

    NASA Technical Reports Server (NTRS)

    2000-01-01

    Under the support of this NASA grant, we have developed a new geometric-optics model (GOM2) for the calculation of the single-scattering and polarization properties of arbitrarily oriented hexagonal ice crystals. From comparisons with the results computed by the finite difference time domain (FDTD) method, we show that the novel geometric-optics model can be applied to the computation of the extinction cross section and single-scattering albedo for ice crystals with size parameters along the minimum dimension as small as approximately 6. We demonstrate that the present model converges to the conventional ray tracing method for large size parameters and produces single-scattering results close to those computed by the FDTD method for size parameters along the minimum dimension smaller than approximately 20. We demonstrate that neither the conventional geometric optics method nor the Lorenz-Mie theory can be used to approximate the scattering, absorption, and polarization features for hexagonal ice crystals with size parameters from approximately 5 to 20. Regarding satellite remote sensing algorithm development and validation, we have developed a numerical scheme to identify multilayer cirrus cloud systems using AVHRR data. We have applied this scheme to the satellite data collected over the FIRE-2-IFO area during nine overpasses within seven observation dates. Determination of the threshold values used in the detection scheme is based on statistical analyses of these satellite data.

  4. Framework to trade optimality for local processing in large-scale wavefront reconstruction problems.

    PubMed

    Haber, Aleksandar; Verhaegen, Michel

    2016-11-15

    We show that the minimum variance wavefront estimation problems permit localized approximate solutions, in the sense that the wavefront value at a point (excluding unobservable modes, such as the piston mode) can be approximated by a linear combination of the wavefront slope measurements in the point's neighborhood. This enables us to efficiently compute a wavefront estimate by performing a single sparse matrix-vector multiplication. Moreover, our results open the possibility for the development of wavefront estimators that can be easily implemented in a decentralized/distributed manner, and in which the estimate optimality can be easily traded for computational efficiency. We numerically validate our approach on Hudgin wavefront sensor geometries, and the results can be easily generalized to Fried geometries.
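
    The practical consequence of the localization result is that, once a sparse reconstruction matrix has been assembled offline, each wavefront estimate costs a single sparse matrix-vector product. A toy illustration follows; the matrix entries here are random placeholders, not the paper's minimum-variance weights.

      import numpy as np
      from scipy.sparse import random as sparse_random

      n_phase, n_slopes = 1000, 2000
      # sparse, localized reconstruction matrix built offline (placeholder values)
      R = sparse_random(n_phase, n_slopes, density=0.01, format="csr", random_state=0)
      slopes = np.random.default_rng(0).normal(size=n_slopes)  # wavefront slope measurements
      wavefront_estimate = R @ slopes                           # one sparse mat-vec per frame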

  5. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations

    PubMed Central

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2014-01-01

    Driven by recent advances in medical imaging, image segmentation and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios. PMID:24729986
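
    A hedged sketch of this kind of iterative conductivity tuning, assuming the common monodomain scaling that conduction velocity grows roughly with the square root of the bulk conductivity; simulate_velocity is a hypothetical stand-in for one forward simulation and is not part of the cited study.

      def tune_conductivity(simulate_velocity, v_target, sigma0=0.1, tol=1e-3, max_iter=20):
          """Adjust a bulk conductivity until the simulated conduction velocity
          matches v_target, using the scaling v ~ sqrt(sigma) to update sigma."""
          sigma = sigma0
          for _ in range(max_iter):
              v = simulate_velocity(sigma)
              if abs(v - v_target) / v_target < tol:
                  break
              sigma *= (v_target / v) ** 2  # v scales roughly with sqrt(sigma)
          return sigma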

  6. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum mixing time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
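
    Both quantities in this trade-off are straightforward to compute for a small chain. A sketch using the standard entropy-rate formula h = -sum_i pi_i sum_j P_ij log P_ij and the relaxation time 1/(1 - |lambda_2|) as a common proxy for the mixing time:

      import numpy as np

      def ks_entropy(P):
          """Entropy rate of an ergodic Markov chain with transition matrix P."""
          evals, evecs = np.linalg.eig(P.T)
          pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
          pi = pi / pi.sum()  # stationary distribution
          terms = np.where(P > 0, P * np.log(np.where(P > 0, P, 1.0)), 0.0)
          return -float(np.sum(pi[:, None] * terms))

      def relaxation_time(P):
          """1 / (1 - |lambda_2|), a standard proxy for the mixing time."""
          mags = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
          return 1.0 / (1.0 - mags[1])

      P = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
      print(ks_entropy(P), relaxation_time(P))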

  7. EVOLUTION OF THE MAGNETIC FIELD LINE DIFFUSION COEFFICIENT AND NON-GAUSSIAN STATISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snodin, A. P.; Ruffolo, D.; Matthaeus, W. H.

    The magnetic field line random walk (FLRW) plays an important role in the transport of energy and particles in turbulent plasmas. For magnetic fluctuations that are transverse or almost transverse to a large-scale mean magnetic field, theories describing the FLRW usually predict asymptotic diffusion of magnetic field lines perpendicular to the mean field. Such theories often depend on the assumption that one can relate the Lagrangian and Eulerian statistics of the magnetic field via Corrsin’s hypothesis, and additionally take the distribution of magnetic field line displacements to be Gaussian. Here we take an ordinary differential equation (ODE) model with these underlying assumptions and test how well it describes the evolution of the magnetic field line diffusion coefficient in 2D+slab magnetic turbulence, by comparisons to computer simulations that do not involve such assumptions. In addition, we directly test the accuracy of the Corrsin approximation to the Lagrangian correlation. Over much of the studied parameter space we find that the ODE model is in fairly good agreement with computer simulations, in terms of both the evolution and asymptotic values of the diffusion coefficient. When there is poor agreement, we show that this can be largely attributed to the failure of Corrsin’s hypothesis rather than the assumption of Gaussian statistics of field line displacements. The degree of non-Gaussianity, which we measure in terms of the kurtosis, appears to be an indicator of how well Corrsin’s approximation works.

  8. A minute fossil phoretic mite recovered by phase-contrast X-ray computed tomography.

    PubMed

    Dunlop, Jason A; Wirth, Stefan; Penney, David; McNeil, Andrew; Bradley, Robert S; Withers, Philip J; Preziosi, Richard F

    2012-06-23

    High-resolution phase-contrast X-ray computed tomography (CT) reveals the phoretic deutonymph of a fossil astigmatid mite (Acariformes: Astigmata) attached to a spider's carapace (Araneae: Dysderidae) in Eocene (44-49 Myr ago) Baltic amber. Details of appendages and a sucker plate were resolved, and the resulting three-dimensional model demonstrates the potential of tomography to recover morphological characters of systematic significance from even the tiniest amber inclusions without the need for a synchrotron. Astigmatids have an extremely sparse palaeontological record. We confirm one of the few convincing fossils, potentially the oldest record of Histiostomatidae. At 176 µm long, we believe this to be the smallest arthropod in amber to be CT-scanned as a complete body fossil, extending the boundaries for what can be recovered using this technique. We also demonstrate a minimum age for the evolution of phoretic behaviour among their deutonymphs, an ecological trait used by extant species to disperse into favourable environments. The occurrence of the fossil on a spider is noteworthy, as modern histiostomatids tend to favour other arthropods as carriers.

  9. A multi-populations multi-strategies differential evolution algorithm for structural optimization of metal nanoclusters

    NASA Astrophysics Data System (ADS)

    Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua

    2016-11-01

    Theoretically, the determination of the structure of a cluster is to search the global minimum on its potential energy surface. The global minimization problem is often nondeterministic-polynomial-time (NP) hard and the number of local minima grows exponentially with the cluster size. In this article, a multi-populations multi-strategies differential evolution algorithm has been proposed to search the globally stable structure of Fe and Cr nanoclusters. The algorithm combines a multi-populations differential evolution with an elite pool scheme to keep the diversity of the solutions and avoid prematurely trapping into local optima. Moreover, multi-strategies such as growing method in initialization and three differential strategies in mutation are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of our algorithm have been verified by comparing the results of Fe clusters with Cambridge Cluster Database. Meanwhile, the performance of our algorithm has been analyzed by comparing the convergence rate and energy evaluations with the classical DE algorithm. The multi-populations, multi-strategies mutation and growing method in initialization in our algorithm have been considered respectively. Furthermore, the structural growth pattern of Cr clusters has been predicted by this algorithm. The results show that the lowest-energy structure of Cr clusters contains many icosahedra, and the number of the icosahedral rings rises with increasing size.

  10. 25 CFR 542.14 - What are the minimum internal control standards for the cage?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for the cage? 542.14 Section 542.14 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.14 What are the minimum internal control standards for the cage? (a) Computer applications. For...

  11. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 25 Indians 2 2013-04-01 2013-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...

  12. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 25 Indians 2 2014-04-01 2014-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...

  13. 25 CFR 542.8 - What are the minimum internal control standards for pull tabs?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 25 Indians 2 2012-04-01 2012-04-01 false What are the minimum internal control standards for pull tabs? 542.8 Section 542.8 Indians NATIONAL INDIAN GAMING COMMISSION, DEPARTMENT OF THE INTERIOR HUMAN SERVICES MINIMUM INTERNAL CONTROL STANDARDS § 542.8 What are the minimum internal control standards for pull tabs? (a) Computer applications. For...

  14. Urban noise and the cultural evolution of bird songs.

    PubMed

    Luther, David; Baptista, Luis

    2010-02-07

    In urban environments, anthropogenic noise can interfere with animal communication. Here we study the influence of urban noise on the cultural evolution of bird songs. We studied three adjacent dialects of white-crowned sparrow songs over a 30-year time span. Urban noise, which is louder at low frequencies, increased during our study period and therefore should have created a selection pressure for songs with higher frequencies. We found that the minimum frequency of songs increased both within and between dialects during the 30-year time span. For example, the dialect with the highest minimum frequency is in the process of replacing another dialect that has lower frequency songs. Songs with the highest minimum frequency were favoured in this environment and should have the most effective transmission properties. We suggest that one mechanism that influences how dialects, and cultural traits in general, are selected and transmitted from one generation to the next is the dialect's ability to be effectively communicated in the local environment.

  15. The enchanted loom. [Book on evolution of intelligence

    NASA Technical Reports Server (NTRS)

    Jastrow, R.

    1981-01-01

    The evolution of intelligence began with the movement of Crossopterygian fish onto land. The eventual appearance of large dinosaurs eliminated all but the smallest of mammalian creatures, with the survivors forced to move only nocturnally, when enhanced olfactory and aural faculties were favored and involved a larger grey matter/body mass ratio than possessed by the dinosaurs. Additionally, the mammals made comparisons between the inputs of various senses, implying the presence of significant memory capacity and an ability to abstract survival information. More complex behavior occurred with the advent of tree dwellers (forward-looking eyes), hands, color vision, and the ability to grip and manipulate objects. An extra pound of brain evolved in the human skull in less than a million years. The neural processes that can lead to an action by a creature with a brain are mimicked by the basic AND and OR gates in computers, which are rapidly approaching the circuit density of the human brain. It is suggested that humans will eventually produce computers of higher intelligence than people possess, and computer spacecraft, alive in an electronic sense, will travel outward to explore the universe.

  16. New Parallel Algorithms for Landscape Evolution Model

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Zhang, H.; Shi, Y.

    2017-12-01

    Most landscape evolution models (LEM) developed in the last two decades solve the diffusion equation to simulate the transportation of surface sediments. This numerical approach is difficult to parallelize due to the computation of drainage area for each node, which requires a huge amount of communication when run in parallel. In order to overcome this difficulty, we developed two parallel algorithms for LEM with a stream net. One algorithm handles the partition of the grid with traditional methods and applies an efficient global reduction algorithm to compute drainage areas and transport rates for the stream net; the other algorithm is based on a new partition algorithm, which partitions the nodes in catchments between processes first, and then partitions the cells according to the partition of nodes. Both methods focus on decreasing communication between processes and take advantage of massive computing techniques, and numerical experiments show that they are both adequate to handle large-scale problems with millions of cells. We implemented the two algorithms in our program based on the widely used finite element library deal.II, so that it can be easily coupled with ASPECT.
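
    The communication-heavy step that the two algorithms reorganize is the drainage-area accumulation along the stream net. A serial sketch of that accumulation (illustrative only; the authors' implementation is parallel and built on deal.II):

      import numpy as np
      from collections import deque

      def drainage_area(receiver, cell_area):
          """Accumulate drainage area down a stream net. receiver[i] is the node that
          cell i drains into; receiver[i] == i marks an outlet. Returns, for each node,
          its own area plus the area of all upstream cells."""
          n = len(receiver)
          area = np.asarray(cell_area, dtype=float).copy()
          donors = np.zeros(n, dtype=int)          # number of upstream neighbours
          for i in range(n):
              if receiver[i] != i:
                  donors[receiver[i]] += 1
          queue = deque(i for i in range(n) if donors[i] == 0)
          while queue:                             # process nodes upstream-to-downstream
              i = queue.popleft()
              r = receiver[i]
              if r != i:
                  area[r] += area[i]
                  donors[r] -= 1
                  if donors[r] == 0:
                      queue.append(r)
          return area

      # tiny example: nodes 0 and 3 drain into 1, which drains into outlet 2
      print(drainage_area([1, 2, 2, 1], [1.0, 1.0, 1.0, 1.0]))  # -> [1. 3. 4. 1.]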

  17. HBLAST: Parallelised sequence similarity--A Hadoop MapReducable basic local alignment search tool.

    PubMed

    O'Driscoll, Aisling; Belogrudov, Vladislav; Carroll, John; Kropp, Kai; Walsh, Paul; Ghazal, Peter; Sleator, Roy D

    2015-04-01

    The recent exponential growth of genomic databases has resulted in the common task of sequence alignment becoming one of the major bottlenecks in the field of computational biology. It is typical for these large datasets and complex computations to require cost prohibitive High Performance Computing (HPC) to function. As such, parallelised solutions have been proposed but many exhibit scalability limitations and are incapable of effectively processing "Big Data" - the name attributed to datasets that are extremely large, complex and require rapid processing. The Hadoop framework, comprised of distributed storage and a parallelised programming framework known as MapReduce, is specifically designed to work with such datasets but it is not trivial to efficiently redesign and implement bioinformatics algorithms according to this paradigm. The parallelisation strategy of "divide and conquer" for alignment algorithms can be applied to both data sets and input query sequences. However, scalability is still an issue due to memory constraints or large databases, with very large database segmentation leading to additional performance decline. Herein, we present Hadoop Blast (HBlast), a parallelised BLAST algorithm that proposes a flexible method to partition both databases and input query sequences using "virtual partitioning". HBlast presents improved scalability over existing solutions and well balanced computational work load while keeping database segmentation and recompilation to a minimum. Enhanced BLAST search performance on cheap memory constrained hardware has significant implications for in field clinical diagnostic testing; enabling faster and more accurate identification of pathogenic DNA in human blood or tissue samples. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Performance evaluation of the inverse dynamics method for optimal spacecraft reorientation

    NASA Astrophysics Data System (ADS)

    Ventura, Jacopo; Romano, Marcello; Walter, Ulrich

    2015-05-01

    This paper investigates the application of the inverse dynamics in the virtual domain method to Euler angles, quaternions, and modified Rodrigues parameters for rapid optimal attitude trajectory generation for spacecraft reorientation maneuvers. The impact of the virtual domain and attitude representation is numerically investigated for both minimum time and minimum energy problems. Owing to the nature of the inverse dynamics method, it yields sub-optimal solutions for minimum time problems. Furthermore, the virtual domain improves the optimality of the solution, but at the cost of more computational time. The attitude representation also affects solution quality and computational speed. For minimum energy problems, the optimal solution can be obtained without the virtual domain with any considered attitude representation.

  19. Energetic Consistency and Coupling of the Mean and Covariance Dynamics

    NASA Technical Reports Server (NTRS)

    Cohn, Stephen E.

    2008-01-01

    The dynamical state of the ocean and atmosphere is taken to be a large dimensional random vector in a range of large-scale computational applications, including data assimilation, ensemble prediction, sensitivity analysis, and predictability studies. In each of these applications, numerical evolution of the covariance matrix of the random state plays a central role, because this matrix is used to quantify uncertainty in the state of the dynamical system. Since atmospheric and ocean dynamics are nonlinear, there is no closed evolution equation for the covariance matrix, nor for the mean state. Therefore approximate evolution equations must be used. This article studies theoretical properties of the evolution equations for the mean state and covariance matrix that arise in the second-moment closure approximation (third- and higher-order moment discard). This approximation was introduced by EPSTEIN [1969] in an early effort to introduce a stochastic element into deterministic weather forecasting, and was studied further by FLEMING [1971a,b], EPSTEIN and PITCHER [1972], and PITCHER [1977], also in the context of atmospheric predictability. It has since fallen into disuse, with a simpler one being used in current large-scale applications. The theoretical results of this article make a case that this approximation should be reconsidered for use in large-scale applications, however, because the second moment closure equations possess a property of energetic consistency that the approximate equations now in common use do not possess. A number of properties of solutions of the second-moment closure equations that result from this energetic consistency will be established.

  20. Evolution of complex higher brain centers and behaviors: behavioral correlates of mushroom body elaboration in insects.

    PubMed

    Farris, Sarah M

    2013-01-01

    Large, complex higher brain centers have evolved many times independently within the vertebrates, but the selective pressures driving these acquisitions have been difficult to pinpoint. It is well established that sensory brain centers become larger and more structurally complex to accommodate processing of a particularly important sensory modality. When higher brain centers such as the cerebral cortex become greatly expanded in a particular lineage, it is likely to support the coordination and execution of more complex behaviors, such as those that require flexibility, learning, and social interaction, in response to selective pressures that made these new behaviors advantageous. Vertebrate studies have established a link between complex behaviors, particularly those associated with sociality, and evolutionary expansions of telencephalic higher brain centers. Enlarged higher brain centers have convergently evolved in groups such as the insects, in which multimodal integration and learning and memory centers called the mushroom bodies have become greatly elaborated in at least four independent lineages. Is it possible that similar selective pressures acting on equivalent behavioral outputs drove the evolution of large higher brain centers in all bilaterians? Sociality has greatly impacted brain evolution in vertebrates such as primates, but it has not been a major driver of higher brain center enlargement in insects. However, feeding behaviors requiring flexibility and learning are associated with large higher brain centers in both phyla. Selection for the ability to support behavioral flexibility appears to be a common thread underlying the evolution of large higher brain centers, but the precise nature of these computations and behaviors may vary. © 2013 S. Karger AG, Basel.

  1. Factors Influencing Junior High School Teachers' Computer-Based Instructional Practices Regarding Their Instructional Evolution Stages

    ERIC Educational Resources Information Center

    Hsu, Ying-Shao; Wu, Hsin-Kai; Hwang, Fu-Kwun

    2007-01-01

    Sandholtz, Ringstaff, & Dwyer (1996) list five stages in the "evolution" of a teacher's capacity for computer-based instruction--entry, adoption, adaptation, appropriation and invention--which hereafter will be called the teacher's computer-based instructional evolution. In this study of approximately six hundred junior high school…

  2. SAChES: Scalable Adaptive Chain-Ensemble Sampling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Ray, Jaideep; Ebeida, Mohamed Salah

    We present the development of a parallel Markov Chain Monte Carlo (MCMC) method called SAChES, Scalable Adaptive Chain-Ensemble Sampling. This capability is targeted at Bayesian calibration of computationally expensive simulation models. SAChES involves a hybrid of two methods: Differential Evolution Monte Carlo followed by Adaptive Metropolis. Both methods involve parallel chains. Differential evolution allows one to explore high-dimensional parameter spaces using loosely coupled (i.e., largely asynchronous) chains. Loose coupling allows the use of large chain ensembles, with far more chains than the number of parameters to explore. This reduces per-chain sampling burden, enables high-dimensional inversions and the use of computationally expensive forward models. The large number of chains can also ameliorate the impact of silent errors, which may affect only a few chains. The chain ensemble can also be sampled to provide an initial condition when an aberrant chain is re-spawned. Adaptive Metropolis takes the best points from the differential evolution and efficiently hones in on the posterior density. The multitude of chains in SAChES is leveraged to (1) enable efficient exploration of the parameter space; and (2) ensure robustness to silent errors which may be unavoidable in extreme-scale computational platforms of the future. This report outlines SAChES, describes four papers that are the result of the project, and discusses some additional results.

  3. Finding a roadmap to achieve large neuromorphic hardware systems

    PubMed Central

    Hasler, Jennifer; Marr, Bo

    2013-01-01

    Neuromorphic systems are gaining increasing importance in an era where CMOS digital computing techniques are reaching physical limits. These silicon systems mimic extremely energy efficient neural computing structures, potentially both for solving engineering applications as well as understanding neural computation. Toward this end, the authors provide a glimpse at what the technology evolution roadmap looks like for these systems so that Neuromorphic engineers may gain the same benefit of anticipation and foresight that IC designers gained from Moore's law many years ago. Scaling of energy efficiency, performance, and size will be discussed as well as how the implementation and application space of Neuromorphic systems are expected to evolve over time. PMID:24058330

  4. Turbulent structures in wall-bounded shear flows observed via three-dimensional numerical simulators. [using the Illiac 4 computer

    NASA Technical Reports Server (NTRS)

    Leonard, A.

    1980-01-01

    Three recent simulations of turbulent shear flow bounded by a wall using the Illiac computer are reported. These are: (1) vibrating-ribbon experiments; (2) study of the evolution of a spot-like disturbance in a laminar boundary layer; and (3) investigation of turbulent channel flow. A number of persistent flow structures were observed, including streamwise and vertical vorticity distributions near the wall, low-speed and high-speed streaks, and local regions of intense vertical velocity. The role of these structures in, for example, the growth or maintenance of turbulence is discussed. The problem of representing the large range of turbulent scales in a computer simulation is also discussed.

  5. Minimum time acceleration of aircraft turbofan engines by using an algorithm based on nonlinear programming

    NASA Technical Reports Server (NTRS)

    Teren, F.

    1977-01-01

    Minimum time accelerations of aircraft turbofan engines are presented. The calculation of these accelerations was made by using a piecewise linear engine model, and an algorithm based on nonlinear programming. Use of this model and algorithm allows such trajectories to be readily calculated on a digital computer with a minimal expenditure of computer time.

  6. Topology of the South African stock market network across the 2008 financial crisis

    NASA Astrophysics Data System (ADS)

    Majapa, Mohamed; Gossel, Sean Joss

    2016-03-01

    This study uses the cross-correlations in the daily closing prices of the South African Top 100 companies listed on the JSE All share index (ALSI) from June 2003 to June 2013 to compute minimum spanning tree maps. In addition to the full sample, the analysis also uses three sub-periods to investigate the topological evolution before, during, and after the 2008 financial crisis. The findings show that although there is substantial clustering and homogeneity on the JSE, the most connected nodes are in the financial and resources sectors. The sub-sample results further reveal that the JSE network tree shrank in the run-up to, and during the financial crisis, and slowly expanded afterwards. In addition, the different clusters in the network are connected by various nodes that are significantly affected by diversification and credit market dynamics.
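
    A hedged sketch of the standard construction behind such maps (correlation matrix, the distance d_ij = sqrt(2(1 - rho_ij)), and a minimum spanning tree); this is the textbook recipe, not necessarily the authors' exact pipeline.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      def correlation_mst(returns):
          """Minimum-spanning-tree map of a market from a (days x stocks) matrix of
          daily returns, using the distance d_ij = sqrt(2 * (1 - rho_ij))."""
          rho = np.corrcoef(returns, rowvar=False)
          dist = np.sqrt(2.0 * (1.0 - np.clip(rho, -1.0, 1.0)))
          tree = minimum_spanning_tree(dist)            # sparse matrix of tree edges
          i, j = tree.nonzero()
          weights = tree.toarray()[i, j]
          return list(zip(i.tolist(), j.tolist(), weights.tolist()))

      # usage with synthetic returns for 5 stocks over 500 trading days
      edges = correlation_mst(np.random.default_rng(1).normal(size=(500, 5)))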

  7. Micromechanics and constitutive models for soft active materials with phase evolution

    NASA Astrophysics Data System (ADS)

    Wang, Binglian

    Soft active materials, such as shape memory polymers, liquid crystal elastomers, soft tissues, gels, etc., are materials that can deform largely in response to external stimuli. Micromechanics analysis of heterogeneous materials based on the finite element method is a typical numerical way to study the thermal-mechanical behaviors of soft active materials with phase evolution. However, constitutive models that can precisely describe the stress and strain fields of such materials during phase evolution cannot be found in the databases of commercial finite element analysis (FEA) tools such as ANSYS or Abaqus, even when the constitutive behavior of each individual phase, whether newly formed or original, is already well known. Developing a computationally efficient and general three-dimensional (3D) thermal-mechanical constitutive model for soft active materials with phase evolution that can be implemented in FEA is therefore in high demand. This paper first solves this problem theoretically by recording the deformation history of each individual phase during the phase evolution process, and makes the theory computationally efficient by treating all newly formed phase as a single effective phase with an effective deformation. A user material subroutine (UMAT) based on this theoretical constitutive model has been completed in this work; it can be added to the material database of Abaqus or ANSYS and readily used for most soft active materials with phase evolution. Model validation has also been carried out through comparison between micromechanical FEA and experiments on a particular composite material, a shape memory elastomeric composite (SMEC) consisting of an elastomeric matrix and a crystallizable fibre. The results show that the micromechanics and constitutive models developed in this paper for soft active materials with phase evolution are reliable.

  8. Modeling postshock evolution of large electropores

    NASA Astrophysics Data System (ADS)

    Neu, John C.; Krassowska, Wanda

    2003-02-01

    The Smoluchowski equation (SE), which describes the evolution of pores created by electric shocks, cannot be applied to modeling large and long-lived pores for two reasons: (1) it does not predict pores of radius above 20 nm without also predicting membrane rupture; (2) it does not predict postshock growth of pores. This study proposes a model in which pores are coupled by membrane tension, resulting in a nonlinear generalization of SE. The predictions of the model are explored using examples of homogeneous (all pore radii r are equal) and heterogeneous (0⩽r⩽rmax) distributions of pores. Pores in a homogeneous population either shrink to zero or assume a stable radius corresponding to the minimum of the bilayer energy. For a heterogeneous population, such a stable radius does not exist. All pores, except rmax, shrink to zero and rmax grows to infinity. However, the unbounded growth of rmax is not physical because the number of pores per cell decreases in time and the continuum model loses validity. When the continuum formulation is replaced by the discrete one, the model predicts the coarsening process: all pores, except rmax, shrink to zero and rmax assumes a stable radius. Thus, the model with tension-coupled pores does not predict membrane rupture and the predicted postshock growth of pores is consistent with experimental evidence.

  9. The role of a clinically based computer department of instruction in a school of medicine.

    PubMed

    Yamamoto, W S

    1991-10-01

    The evolution of activities and educational directions of a department of instruction in medical computer technology in a school of medicine are reviewed. During the 18 years covered, the society at large has undergone marked change in availability and use of computation in every aspect of medical care. It is argued that a department of instruction should be clinical and develop revenue sources based on patient care, perform technical services for the institution with a decentralized structure, and perform both health services and scientific research. Distinction should be drawn between utilization of computing in medical specialties, library function, and instruction in computer science. The last is the proper arena for the academic content of instruction and is best labelled as the philosophical basis of medical knowledge, in particular, its epistemology. Contemporary pressures for teaching introductory computer skills are probably temporary.

  10. Low-flow analysis and selected flow statistics representative of 1930-2002 for streamflow-gaging stations in or near West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2006-01-01

    Five time periods between 1930 and 2002 are identified as having distinct patterns of annual minimum daily mean flows (minimum flows). Average minimum flows increased around 1970 at many streamflow-gaging stations in West Virginia. Before 1930, however, there might have been a period of minimum flows greater than any period identified between 1930 and 2002. The effects of climate variability are probably the principal causes of the differences among the five time periods. Comparisons of selected streamflow statistics are made between values computed for the five identified time periods and values computed for the 1930-2002 interval for 15 streamflow-gaging stations. The average difference between statistics computed for the five time periods and the 1930-2002 interval decreases with increasing magnitude of the low-flow statistic. The greatest individual-station absolute difference was 582.5 percent greater for the 7-day 10-year low flow computed for 1970-1979 compared to the value computed for 1930-2002. The hydrologically based low flows indicate approximately equal or smaller absolute differences than biologically based low flows. The average 1-day 3-year biologically based low flow (1B3) and 4-day 3-year biologically based low flow (4B3) are less than the average 1-day 10-year hydrologically based low flow (1Q10) and 7-day 10-year hydrologic-based low flow (7Q10) respectively, and range between 28.5 percent less and 13.6 percent greater. Seasonally, the average difference between low-flow statistics computed for the five time periods and 1930-2002 is not consistent between magnitudes of low-flow statistics, and the greatest difference is for the summer (July 1-September 30) and fall (October 1-December 31) for the same time period as the greatest difference determined in the annual analysis. The greatest average difference between 1B3 and 4B3 compared to 1Q10 and 7Q10, respectively, is in the spring (April 1-June 30), ranging between 11.6 and 102.3 percent greater. Statistics computed for the individual station's record period may not represent the statistics computed for the period 1930 to 2002 because (1) station records are available predominantly after about 1970 when minimum flows were greater than the average between 1930 and 2002 and (2) some short-term station records are mostly during dry periods, whereas others are mostly during wet periods. A criterion-based sampling of the individual station's record periods at stations was taken to reduce the effects of statistics computed for the entire record periods not representing the statistics computed for 1930-2002. The criterion used to sample the entire record periods is based on a comparison between the regional minimum flows and the minimum flows at the stations. Criterion-based sampling of the available record periods was superior to record-extension techniques for this study because more stations were selected and areal distribution of stations was more widespread. Principal component and correlation analyses of the minimum flows at 20 stations in or near West Virginia identify three regions of the State encompassing stations with similar patterns of minimum flows: the Lower Appalachian Plateaus, the Upper Appalachian Plateaus, and the Eastern Panhandle. All record periods of 10 years or greater between 1930 and 2002 where the average of the regional minimum flows are nearly equal to the average for 1930-2002 are determined as representative of 1930-2002. 
Selected statistics are presented for the longest representative record period that matches the record period for 77 stations in West Virginia and 40 stations near West Virginia. These statistics can be used to develop equations for estimating flow in ungaged stream locations.
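
    To make the low-flow statistics concrete, the sketch below (not from the report) approximates an n-day T-year low flow such as the 7Q10 from a series of daily mean flows: annual minima of the n-day moving average are collected and the T-year value is taken as an empirical quantile. Agencies normally fit a log-Pearson Type III distribution to the annual minima instead, and climatic rather than calendar years are often used; the synthetic data are purely illustrative.

      import numpy as np
      import pandas as pd

      def n_day_t_year_low_flow(daily_flow: pd.Series, n: int, t_years: float) -> float:
          """Approximate the n-day, T-year low flow (e.g. 7Q10) from daily mean flows.

          daily_flow : daily mean discharge indexed by a DatetimeIndex.
          n          : averaging window in days (7 for 7Q10, 1 for 1Q10).
          t_years    : recurrence interval in years (10 for 7Q10).
          """
          rolling = daily_flow.rolling(window=n).mean()          # n-day moving average
          annual_min = rolling.groupby(daily_flow.index.year).min().dropna()
          p = 1.0 / t_years                                      # non-exceedance probability
          # empirical quantile of the annual minima; practice fits log-Pearson Type III
          return float(np.quantile(annual_min.values, p))

      # synthetic example (hypothetical flows, for illustration only)
      rng = np.random.default_rng(0)
      idx = pd.date_range("1930-01-01", "2002-12-31", freq="D")
      flows = pd.Series(rng.lognormal(mean=3.0, sigma=0.6, size=len(idx)), index=idx)
      print("approximate 7Q10:", n_day_t_year_low_flow(flows, n=7, t_years=10))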

  11. The Genome Sequence of Taurine Cattle: A Window to Ruminant Biology and Evolution

    USDA-ARS?s Scientific Manuscript database

    As a major step toward understanding the biology and evolution of ruminants, the cattle genome was sequenced to ~7x coverage using a combined whole genome shotgun and BAC skim approach. The cattle genome contains a minimum of 22,000 genes, with a core set of 14,345 orthologs found in seven mammalian...

  12. Faculty Development in Medicine: A Field in Evolution

    ERIC Educational Resources Information Center

    Skeff, Kelley M.; Stratos, Georgette A.; Mount, Jane F. S.

    2007-01-01

    This article focuses on the evolution of faculty development in medicine. Of note, improving teaching in medical education is not a new concept. At a minimum, it was seriously discussed by pioneers like George Miller and Steve Abrahamson as early as the 1950s [Simpson & Bland (2002). Stephen Abrahamson, PhD, ScD, educationist: A stranger in a kind…

  13. Effects of forebody geometry on subsonic boundary-layer stability

    NASA Technical Reports Server (NTRS)

    Dodbele, Simha S.

    1990-01-01

    As part of an effort to develop computational techniques for design of natural laminar flow fuselages, a computational study was made of the effect of forebody geometry on laminar boundary layer stability on axisymmetric body shapes. The effects of nose radius on the stability of the incompressible laminar boundary layer were computationally investigated using linear stability theory for body length Reynolds numbers representative of small and medium-sized airplanes. The steepness of the pressure gradient and the value of the minimum pressure (both functions of fineness ratio) govern the stability of laminar flow possible on an axisymmetric body at a given Reynolds number. It was found that to keep the laminar boundary layer stable for extended lengths, it is important to have a small nose radius. However, nose shapes with extremely small nose radii produce large pressure peaks at off-design angles of attack and can produce vortices which would adversely affect transition.

  14. Design of automata theory of cubical complexes with applications to diagnosis and algorithmic description

    NASA Technical Reports Server (NTRS)

    Roth, J. P.

    1972-01-01

    The following problems are considered: (1) methods for development of logic design together with algorithms, so that it is possible to compute a test for any failure in the logic design, if such a test exists, and developing algorithms and heuristics for the purpose of minimizing the computation for tests; and (2) a method of design of logic for ultra LSI (large scale integration). It was discovered that the so-called quantum calculus can be extended to render it possible: (1) to describe the functional behavior of a mechanism component by component, and (2) to compute tests for failures, in the mechanism, using the diagnosis algorithm. The development of an algorithm for the multioutput two-level minimization problem is presented and the program MIN 360 was written for this algorithm. The program has options of mode (exact minimum or various approximations), cost function, cost bound, etc., providing flexibility.

  15. Crustal dynamics project data analysis, 1991: VLBI geodetic results, 1979 - 1990

    NASA Technical Reports Server (NTRS)

    Ma, C.; Ryan, J. W.; Caprette, D. S.

    1992-01-01

    The Goddard VLBI group reports the results of analyzing 1412 Mark II data sets acquired from fixed and mobile observing sites through the end of 1990 and available to the Crustal Dynamics Project. Three large solutions were used to obtain Earth rotation parameters, nutation offsets, global source positions, site velocities, and baseline evolution. Site positions are tabulated on a yearly basis from 1979 through 1992. Site velocities are presented in both geocentric Cartesian coordinates and topocentric coordinates. Baseline evolution is plotted for 175 baselines. Rates are computed for earth rotation and nutation parameters. Included are 104 sources, 88 fixed stations and mobile sites, and 688 baselines.

  16. Large-scale transportation network congestion evolution prediction using deep learning theory.

    PubMed

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of the approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration process. With the development of Intelligent Transportation Systems (ITS) and Internet of Things (IoT), transportation data become more and more ubiquitous. This triggers a series of data-driven studies to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous amounts of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy reaches as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation.

  17. Large-Scale Transportation Network Congestion Evolution Prediction Using Deep Learning Theory

    PubMed Central

    Ma, Xiaolei; Yu, Haiyang; Wang, Yunpeng; Wang, Yinhai

    2015-01-01

    Understanding how congestion at one location can cause ripples throughout a large-scale transportation network is vital for transportation researchers and practitioners to pinpoint traffic bottlenecks for congestion mitigation. Traditional studies rely on either mathematical equations or simulation techniques to model traffic congestion dynamics. However, most of the approaches have limitations, largely due to unrealistic assumptions and cumbersome parameter calibration process. With the development of Intelligent Transportation Systems (ITS) and Internet of Things (IoT), transportation data become more and more ubiquitous. This triggers a series of data-driven studies to investigate transportation phenomena. Among them, deep learning theory is considered one of the most promising techniques to tackle tremendous amounts of high-dimensional data. This study attempts to extend deep learning theory into large-scale transportation network analysis. A deep Restricted Boltzmann Machine and Recurrent Neural Network architecture is utilized to model and predict traffic congestion evolution based on Global Positioning System (GPS) data from taxis. A numerical study in Ningbo, China is conducted to validate the effectiveness and efficiency of the proposed method. Results show that the prediction accuracy reaches as high as 88% within less than 6 minutes when the model is implemented in a Graphic Processing Unit (GPU)-based parallel computing environment. The predicted congestion evolution patterns can be visualized temporally and spatially through a map-based platform to identify the vulnerable links for proactive congestion mitigation. PMID:25780910

  18. Effects of Energy Dissipation in the Sphere-Restricted Full Three-Body Problem

    NASA Astrophysics Data System (ADS)

    Gabriel, T. S. J.

    Recently, the classical N-Body Problem has been adjusted to account for celestial bodies made of constituents of finite density. By imposing a minimum on the achievable distance between particles, minimum energy resting states are allowed by the problem. The Full N-Body Problem allows for the dissipation of mechanical energy through surface-surface interactions via impacts or by way of tidal deformation. Barring exogenous forces and allowing for the dissipation of energy, these systems have discrete, and sometimes multiple, minimum energy states for a given angular momentum. Building the dynamical framework of such finite density systems is a necessary process in outlining the evolution of rubble pile asteroids and other gravitational-granular systems such as protoplanetary discs, and potentially planetary rings, from a theoretical point of view. In all cases, resting states are expected to occur as a necessary step in the ongoing processes of solar system formation and evolution. Previous studies of this problem have been performed in the N=3 case where the bodies are indistinguishable spheres, with all possible relative equilibria and their stability having been identified as a function of the angular momentum of the system. These studies uncovered that at certain levels of angular momentum there exist two minimum energy states, a global and a local minimum. Thus a question of interest is in which of these states a dissipative system would preferentially settle and the sensitivity of results to changes in dissipation parameters. Assuming equal-sized, perfectly-rigid bodies, this study investigates the dynamical evolution of three spheres under the influence of mutual gravity and impact mechanics as a function of dissipation parameters. A purpose-written, C-based, Hard Sphere Discrete Element Method code has been developed to integrate trajectories and resolve contact mechanics as grains evolve into minimum energy configurations. By testing many randomized initial conditions, statistics are measured regarding minimum energy states for a given angular momentum range. A trend in the Sphere-Restricted Full Three-Body Problem producing an end state of one configuration over another is found as a function of angular momentum and restitution.
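
    For readers unfamiliar with the impact-resolution step of a hard-sphere code, the sketch below shows the standard collision rule along the line of centers with a coefficient of restitution. It is a generic textbook update, not the purpose-written code described above; the masses, positions, and restitution value are illustrative.

      import numpy as np

      def resolve_hard_sphere_impact(x1, x2, v1, v2, m1, m2, epsilon):
          """Resolve an instantaneous impact between two rigid spheres.

          x1, x2 : positions at contact; v1, v2 : velocities just before impact.
          epsilon : coefficient of restitution (1 = elastic, 0 = perfectly plastic).
          Only the velocity component along the line of centers is changed.
          """
          n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit normal along line of centers
          v_rel_n = np.dot(v1 - v2, n)              # approach speed along the normal
          if v_rel_n <= 0.0:                        # spheres separating: no impulse
              return v1, v2
          j = (1.0 + epsilon) * v_rel_n / (1.0 / m1 + 1.0 / m2)   # impulse magnitude
          return v1 - (j / m1) * n, v2 + (j / m2) * n

      # equal spheres approaching head-on with restitution 0.5 (illustrative numbers)
      v1p, v2p = resolve_hard_sphere_impact(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
                                            np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0]),
                                            1.0, 1.0, 0.5)
      print(v1p, v2p)   # -> [-0.5 0. 0.] and [0.5 0. 0.]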

  19. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model, tested in the Transonic Dynamic Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating frequency content of the unsteady pressures and evaluation of oscillatory shock structures were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
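
    The spectral comparisons mentioned above are typically made with Welch power spectral density estimates of the pressure-tap histories. The sketch below illustrates such a comparison on synthetic signals; the sampling rate, tone frequencies, and segment length are stand-ins and are not taken from the study.

      import numpy as np
      from scipy.signal import welch

      fs = 10000.0                              # sampling rate in Hz (illustrative)
      t = np.arange(0.0, 1.4, 1.0 / fs)
      # stand-ins for a computed pressure-tap history and the matching tunnel measurement
      p_cfd = np.sin(2 * np.pi * 180.0 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
      p_tdt = np.sin(2 * np.pi * 185.0 * t) + 0.3 * np.random.default_rng(2).normal(size=t.size)

      # Welch PSD estimates; the RMS level follows from integrating the PSD over frequency
      f, psd_cfd = welch(p_cfd, fs=fs, nperseg=4096)
      _, psd_tdt = welch(p_tdt, fs=fs, nperseg=4096)
      rms_cfd = np.sqrt(np.trapz(psd_cfd, f))
      rms_tdt = np.sqrt(np.trapz(psd_tdt, f))
      print("peak frequencies:", f[np.argmax(psd_cfd)], f[np.argmax(psd_tdt)])
      print("RMS levels:", rms_cfd, rms_tdt)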

  20. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  1. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  2. 26 CFR 1.6655-3 - Adjusted seasonal installment method.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... TAX (CONTINUED) INCOME TAXES Additions to the Tax, Additional Amounts, and Assessable Penalties § 1... under § 1.6655-2 apply to the computation of taxable income (and resulting tax) for purposes of... applying to alternative minimum taxable income, tentative minimum tax, and alternative minimum tax, the...

  3. 5 CFR 844.303 - Minimum disability annuity.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Minimum disability annuity. 844.303... Annuity § 844.303 Minimum disability annuity. Notwithstanding any other provision of this part, an annuity payable under this part cannot be less than the amount of an annuity computed under 5 U.S.C. 8415...

  4. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  5. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  6. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  7. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  8. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  9. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  10. 20 CFR 229.55 - Reduction for spouse social security benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for spouse social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.55 Reduction for spouse social security benefit. A spouse benefit under the overall minimum, after any...

  11. 20 CFR 229.56 - Reduction for child's social security benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Reduction for child's social security benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.56 Reduction for child's social security benefit. A child's benefit under the overall minimum, after any...

  12. Dynamic modeling and ascent flight control of Ares-I Crew Launch Vehicle

    NASA Astrophysics Data System (ADS)

    Du, Wei

    This research focuses on dynamic modeling and ascent flight control of large flexible launch vehicles such as the Ares-I Crew Launch Vehicle (CLV). A complete set of six-degrees-of-freedom dynamic models of the Ares-I, incorporating its propulsion, aerodynamics, guidance and control, and structural flexibility, is developed. NASA's Ares-I reference model and the SAVANT Simulink-based program are utilized to develop a Matlab-based simulation and linearization tool for an independent validation of the performance and stability of the ascent flight control system of large flexible launch vehicles. A linearized state-space model as well as a non-minimum-phase transfer function model (which is typical for flexible vehicles with non-collocated actuators and sensors) are validated for ascent flight control design and analysis. This research also investigates fundamental principles of flight control analysis and design for launch vehicles, in particular the classical "drift-minimum" and "load-minimum" control principles. It is shown that an additional feedback of angle-of-attack can significantly improve overall performance and stability, especially in the presence of unexpected large wind disturbances. For a typical "non-collocated actuator and sensor" control problem for large flexible launch vehicles, non-minimum-phase filtering of "unstably interacting" bending modes is also shown to be effective. The uncertainty model of a flexible launch vehicle is derived. The robust stability of an ascent flight control system design, which directly controls the inertial attitude-error quaternion and also employs the non-minimum-phase filters, is verified by the framework of structured singular value (mu) analysis. Furthermore, nonlinear coupled dynamic simulation results are presented for a reference model of the Ares-I CLV as another validation of the feasibility of the ascent flight control system design. Another important issue for a single main engine launch vehicle is stability under malfunction of the roll control system. The roll motion of the Ares-I Crew Launch Vehicle under nominal flight conditions is actively stabilized by its roll control system employing thrusters. This dissertation describes the ascent flight control design problem of Ares-I in the event of disabled or failed roll control. A simple pitch/yaw control logic is developed for such a technically challenging problem by exploiting the inherent versatility of a quaternion-based attitude control system. The proposed scheme requires only the desired inertial attitude quaternion to be re-computed using the actual uncontrolled roll angle information to achieve an ascent flight trajectory identical to the nominal flight case with active roll control. Another approach that utilizes a simple adjustment of the proportional-derivative gains of the quaternion-based flight control system without active roll control is also presented. This approach does not require the re-computation of the desired inertial attitude quaternion. A linear stability criterion is developed for proper adjustments of attitude and rate gains. The linear stability analysis results are validated by nonlinear simulations of the ascent flight phase. However, the first approach, requiring a simple modification of the desired attitude quaternion, is recommended for the Ares-I as well as other launch vehicles in the event of no active roll control.
Finally, the method derived to stabilize a large flexible launch vehicle in the event of uncontrolled roll drift is generalized as a modified attitude quaternion feedback law. It is used to stabilize an axisymmetric rigid body by two independent control torques.
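
    As a rough illustration of the quaternion-based attitude-error feedback discussed above, the sketch below computes an error quaternion and a proportional-derivative torque command. The scalar-last convention, the gains, and the short-rotation check are assumptions made for the example and do not reproduce the dissertation's actual control design.

      import numpy as np

      def quat_conj(q):
          # scalar-last convention: q = [x, y, z, w]
          return np.array([-q[0], -q[1], -q[2], q[3]])

      def quat_mult(a, b):
          ax, ay, az, aw = a
          bx, by, bz, bw = b
          return np.array([aw*bx + ax*bw + ay*bz - az*by,
                           aw*by - ax*bz + ay*bw + az*bx,
                           aw*bz + ax*by - ay*bx + az*bw,
                           aw*bw - ax*bx - ay*by - az*bz])

      def pd_attitude_control(q_desired, q_actual, omega, Kp, Kd):
          """Proportional-derivative torque command from the attitude-error quaternion."""
          q_err = quat_mult(quat_conj(q_desired), q_actual)   # error rotation, body frame
          if q_err[3] < 0.0:                                  # take the short rotation
              q_err = -q_err
          return -Kp * q_err[:3] - Kd * omega                 # commanded torque

      # illustrative call: small roll-axis attitude error and small body rates
      q_des = np.array([0.0, 0.0, 0.0, 1.0])
      q_act = np.array([0.05, 0.0, 0.0, np.sqrt(1.0 - 0.05**2)])
      torque = pd_attitude_control(q_des, q_act, omega=np.array([0.01, 0.0, 0.0]),
                                   Kp=50.0, Kd=200.0)
      print(torque)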

  13. Exploring the luminosity evolution and stellar mass assembly of 2SLAQ luminous red galaxies between redshifts 0.4 and 0.8

    NASA Astrophysics Data System (ADS)

    Banerji, Manda; Ferreras, Ignacio; Abdalla, Filipe B.; Hewett, Paul; Lahav, Ofer

    2010-03-01

    We present an analysis of the evolution of 8625 luminous red galaxies (LRGs) between z = 0.4 and 0.8 in the 2dF and Sloan Digital Sky Survey LRG and QSO (2SLAQ) survey. The LRGs are split into redshift bins and the evolution of both the luminosity and stellar mass function with redshift is considered and compared to the assumptions of a passive evolution scenario. We draw attention to several sources of systematic error that could bias the evolutionary predictions made in this paper. While the inferred evolution is found to be relatively unaffected by the exact choice of spectral evolution model used to compute K + e corrections, we conclude that photometric errors could be a source of significant bias in colour-selected samples such as this, in particular when using parametric maximum likelihood based estimators. We find that the evolution of the most massive LRGs is consistent with the assumptions of passive evolution and that the stellar mass assembly of the LRGs is largely complete by z ~ 0.8. Our findings suggest that massive galaxies with stellar masses above 10^11 solar masses must have undergone merging and star formation processes at a very early stage (z >~ 1). This supports the emerging picture of downsizing in both the star formation as well as the mass assembly of early-type galaxies. Given that our spectroscopic sample covers an unprecedentedly large volume and probes the most massive end of the galaxy mass function, we find that these observational results present a significant challenge for many current models of galaxy formation.

  14. Exploiting Identical Generators in Unit Commitment

    DOE PAGES

    Knueven, Ben; Ostrowski, Jim; Watson, Jean -Paul

    2017-12-14

    Here, we present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down-time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually non-dominated solutions. We study the impact of aggregation on two large-scale UC instances, one from the academic literature and another based on real-world operator data. Our computational tests demonstrate that when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Further, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.

  15. Exploiting Identical Generators in Unit Commitment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knueven, Ben; Ostrowski, Jim; Watson, Jean -Paul

    Here, we present sufficient conditions under which thermal generators can be aggregated in mixed-integer linear programming (MILP) formulations of the unit commitment (UC) problem, while maintaining feasibility and optimality for the original disaggregated problem. Aggregating thermal generators with identical characteristics (e.g., minimum/maximum power output, minimum up/down-time, and cost curves) into a single unit reduces redundancy in the search space induced by both exact symmetry (permutations of generator schedules) and certain classes of mutually non-dominated solutions. We study the impact of aggregation on two large-scale UC instances, one from the academic literature and another based on real-world operator data. Our computational tests demonstrate that when present, identical generators can negatively affect the performance of modern MILP solvers on UC formulations. Further, we show that our reformulation of the UC MILP through aggregation is an effective method for mitigating this source of computational difficulty.

  16. Effect of impurity doping on tunneling conductance in AB-stacked bi-layer graphene: A tight-binding study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rout, G. C., E-mail: siva1987@iopb.res.in, E-mail: skp@iopb.res.in, E-mail: gcr@iopb.res.in; Sahu, Sivabrata; Panda, S. K.

    2016-04-13

    We report here a microscopic tight-binding model calculation for AB-stacked bilayer graphene in the presence of a biasing potential between the two layers and impurity effects, to study the evolution of the total density of states with special emphasis on the opening of a band gap near the Dirac point. We have calculated the electron Green's functions for both the A and B sub-lattices by the Zubarev technique. The imaginary part of the Green's function gives the partial and total density of states of electrons. The density of states is computed numerically for 1000 × 1000 grid points of the electron momentum. The evolution of the opening of the band gap near the van Hove singularities as well as near the Dirac point is investigated by varying the different interlayer hoppings and the biasing potentials. The interlayer hopping splits the density of states at the van Hove singularities and produces a V-shaped gap near the Dirac point. Further, the biasing potential introduces a U-shaped gap near the Dirac point with a density minimum at the applied potential (i.e., at V/2).
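
    A minimal sketch of this kind of calculation is given below: a four-band tight-binding Hamiltonian for biased AB-stacked bilayer graphene is diagonalized on a momentum grid and the eigenvalues are histogrammed into a density of states. The hopping parameters, the bias, the grid (a square grid standing in for the Brillouin zone, much coarser than the 1000 × 1000 grid above), and the basis convention with the dimer coupling between B1 and A2 are illustrative assumptions rather than the authors' model.

      import numpy as np

      a = 1.0                        # nearest-neighbour C-C distance (arbitrary units)
      gamma0, gamma1 = 3.0, 0.4      # intralayer and interlayer (dimer) hoppings, eV
      V = 0.2                        # interlayer bias, eV (illustrative)

      # nearest-neighbour vectors of a single graphene layer
      deltas = a * np.array([[1.0, 0.0],
                             [-0.5, np.sqrt(3) / 2],
                             [-0.5, -np.sqrt(3) / 2]])

      def hamiltonian(kx, ky):
          f = sum(np.exp(1j * (kx * d[0] + ky * d[1])) for d in deltas)
          # basis (A1, B1, A2, B2); gamma1 couples the stacked B1-A2 dimer sites
          return np.array([[V / 2, -gamma0 * f, 0.0, 0.0],
                           [-gamma0 * np.conj(f), V / 2, -gamma1, 0.0],
                           [0.0, -gamma1, -V / 2, -gamma0 * f],
                           [0.0, 0.0, -gamma0 * np.conj(f), -V / 2]], dtype=complex)

      nk = 120                       # momentum grid (coarse, for illustration)
      ks = np.linspace(-np.pi / a, np.pi / a, nk)
      energies = []
      for kx in ks:
          for ky in ks:
              energies.extend(np.linalg.eigvalsh(hamiltonian(kx, ky)))

      dos, edges = np.histogram(energies, bins=400, density=True)
      centers = 0.5 * (edges[:-1] + edges[1:])
      mask = np.abs(centers) < 1.0
      print("DOS minimum near the Dirac point at E ~", centers[mask][np.argmin(dos[mask])])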

  17. Site-directed protein recombination as a shortest-path problem.

    PubMed

    Endelman, Jeffrey B; Silberg, Jonathan J; Wang, Zhen-Gang; Arnold, Frances H

    2004-07-01

    Protein function can be tuned using laboratory evolution, in which one rapidly searches through a library of proteins for the properties of interest. In site-directed recombination, n crossovers are chosen in an alignment of p parents to define a set of p(n + 1) peptide fragments. These fragments are then assembled combinatorially to create a library of p^(n+1) proteins. We have developed a computational algorithm to enrich these libraries in folded proteins while maintaining an appropriate level of diversity for evolution. For a given set of parents, our algorithm selects crossovers that minimize the average energy of the library, subject to constraints on the length of each fragment. This problem is equivalent to finding the shortest path between nodes in a network, for which the global minimum can be found efficiently. Our algorithm has a running time of O(N^3 p^2 + N^2 n) for a protein of length N. Adjusting the constraints on fragment length generates a set of optimized libraries with varying degrees of diversity. By comparing these optima for different sets of parents, we rapidly determine which parents yield the lowest energy libraries.
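
    The shortest-path formulation can be made concrete with a small dynamic program over a layered graph of (fragment index, crossover position) nodes, which is equivalent to a shortest-path search. In the sketch below the fragment energy function is a toy stand-in; the published method scores fragments from disrupted contacts in the parent structures.

      def optimal_crossovers(frag_energy, N, n, Lmin, Lmax):
          """Pick n crossover positions minimizing total fragment energy.

          frag_energy(i, j) : energy of the fragment covering residues i..j-1.
          N : protein length; n : crossovers; Lmin/Lmax : fragment length bounds.
          best[k][j] = minimum energy of splitting residues 0..j-1 into k fragments.
          Assumes at least one feasible partition exists.
          """
          INF = float("inf")
          best = [[INF] * (N + 1) for _ in range(n + 2)]
          back = [[-1] * (N + 1) for _ in range(n + 2)]
          best[0][0] = 0.0
          for k in range(1, n + 2):                  # n crossovers -> n + 1 fragments
              for j in range(1, N + 1):
                  if k <= n and j == N:              # only the last fragment may end at N
                      continue
                  for i in range(max(0, j - Lmax), j - Lmin + 1):
                      cand = best[k - 1][i] + frag_energy(i, j)
                      if cand < best[k][j]:
                          best[k][j], back[k][j] = cand, i
          cuts, j = [], N                            # walk predecessors back from N
          for k in range(n + 1, 0, -1):
              j = back[k][j]
              cuts.append(j)
          return best[n + 1][N], sorted(c for c in cuts if 0 < c < N)

      # toy energy: penalize any fragment that strictly contains position 40 or 80
      def toy_energy(i, j):
          return (j - i) * 0.1 + (5.0 if i < 40 < j or i < 80 < j else 0.0)

      print(optimal_crossovers(toy_energy, N=120, n=2, Lmin=20, Lmax=80))   # -> (12.0, [40, 80])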

  18. A combined NLP-differential evolution algorithm approach for the optimization of looped water distribution systems

    NASA Astrophysics Data System (ADS)

    Zheng, Feifei; Simpson, Angus R.; Zecchin, Aaron C.

    2011-08-01

    This paper proposes a novel optimization approach for the least cost design of looped water distribution systems (WDSs). Three distinct steps are involved in the proposed optimization approach. In the first step, the shortest-distance tree within the looped network is identified using the Dijkstra graph theory algorithm, for which an extension is proposed to find the shortest-distance tree for multisource WDSs. In the second step, a nonlinear programming (NLP) solver is employed to optimize the pipe diameters for the shortest-distance tree (chords of the shortest-distance tree are allocated the minimum allowable pipe sizes). Finally, in the third step, the original looped water network is optimized using a differential evolution (DE) algorithm seeded with diameters in the proximity of the continuous pipe sizes obtained in step two. As such, the proposed optimization approach combines the traditional deterministic optimization technique of NLP with the emerging evolutionary algorithm DE via the proposed network decomposition. The proposed methodology has been tested on four looped WDSs with the number of decision variables ranging from 21 to 454. Results obtained show the proposed approach is able to find optimal solutions with significantly less computational effort than other optimization techniques.
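
    A minimal sketch of the seeding idea in the third step is shown below using SciPy's differential_evolution, whose init argument accepts a user-supplied initial population: candidate diameters are generated around the continuous step-two solution and mapped onto discrete pipe sizes. The cost function, the feasibility penalty, and the "NLP" diameters are placeholders, not a hydraulic network model.

      import numpy as np
      from scipy.optimize import differential_evolution

      pipe_sizes = np.array([100.0, 150.0, 200.0, 250.0, 300.0, 400.0, 500.0])   # mm
      n_pipes = 8
      # stand-in for the continuous diameters returned by the step-two NLP solver
      nlp_diameters = np.array([130.0, 210.0, 160.0, 480.0, 260.0, 110.0, 340.0, 220.0])

      def cost(x):
          # decision variables are continuous indices mapped to the nearest pipe size
          d = pipe_sizes[np.clip(np.round(x).astype(int), 0, len(pipe_sizes) - 1)]
          capital = np.sum(1.1e-3 * d ** 1.5)                # placeholder cost curve
          head_deficit = max(0.0, 2000.0 - np.sum(d))        # placeholder feasibility check
          return capital + 1e3 * head_deficit                # penalize infeasible designs

      # seed the initial population in the proximity of the (rounded) NLP diameters
      seed_idx = np.array([np.argmin(np.abs(pipe_sizes - d)) for d in nlp_diameters], dtype=float)
      rng = np.random.default_rng(0)
      pop = np.clip(seed_idx + rng.normal(scale=1.0, size=(30, n_pipes)), 0, len(pipe_sizes) - 1)

      bounds = [(0, len(pipe_sizes) - 1)] * n_pipes
      result = differential_evolution(cost, bounds, init=pop, maxiter=200, seed=1, polish=False)
      print(pipe_sizes[np.clip(np.round(result.x).astype(int), 0, len(pipe_sizes) - 1)])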

  19. Solar photospheric network properties and their cycle variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thibault, K.; Charbonneau, P.; Béland, M., E-mail: kim@astro.umontreal.ca-a, E-mail: paulchar@astro.umontreal.ca-b, E-mail: michel.beland@calculquebec.ca-c

    We present a numerical simulation of the formation and evolution of the solar photospheric magnetic network over a full solar cycle. The model exhibits realistic behavior as it produces large, unipolar concentrations of flux in the polar caps, a power-law flux distribution with index –1.69, a flux replacement timescale of 19.3 hr, and supergranule diameters of 20 Mm. The polar behavior is especially telling of model accuracy, as it results from lower-latitude activity, and accumulates the residues of any potential modeling inaccuracy and oversimplification. In this case, the main oversimplification is the absence of a polar sink for the flux, causing an amount of polar cap unsigned flux larger than expected by almost one order of magnitude. Nonetheless, our simulated polar caps carry the proper signed flux and dipole moment, and also show a spatial distribution of flux in good qualitative agreement with recent high-latitude magnetographic observations by Hinode. After the last cycle emergence, the simulation is extended until the network has recovered its quiet Sun initial condition. This permits an estimate of the network relaxation time toward the baseline state characterizing extended periods of suppressed activity, such as the Maunder Grand Minimum. Our simulation results indicate a network relaxation time of 2.9 yr, setting October 2011 as the earliest time after which the last solar activity minimum could have qualified as a Maunder-type Minimum. This suggests that photospheric magnetism did not reach its baseline state during the recent extended minimum between cycles 23 and 24.

  20. Intermediate-mass-ratio black-hole binaries: numerical relativity meets perturbation theory.

    PubMed

    Lousto, Carlos O; Nakano, Hiroyuki; Zlochower, Yosef; Campanelli, Manuela

    2010-05-28

    We study black-hole binaries in the intermediate-mass-ratio regime 0.01≲q≲0.1 with a new technique that makes use of nonlinear numerical trajectories and efficient perturbative evolutions to compute waveforms at large radii for the leading and nonleading (ℓ, m) modes. As a proof-of-concept, we compute waveforms for q=1/10. We discuss applications of these techniques for LIGO and VIRGO data analysis and the possibility that our technique can be extended to produce accurate waveform templates from a modest number of fully nonlinear numerical simulations.

  1. Computing Trimmed, Mean-Camber Surfaces At Minimum Drag

    NASA Technical Reports Server (NTRS)

    Lamar, John E.; Hodges, William T.

    1995-01-01

    VLMD computer program determines subsonic mean-camber surfaces of trimmed noncoplanar planforms with minimum vortex drag at a specified lift coefficient. Up to two planforms can be designed together. The method used is that of the subsonic vortex lattice; the chord loading specification, ranging from rectangular to triangular, is left to the user. Program is versatile and has been applied to isolated wings, wing/canard configurations, tandem wings, and wing/winglet configurations. Written in FORTRAN.

  2. Evolution of orbits of the Apollo group asteroids over 11550 years.

    NASA Astrophysics Data System (ADS)

    Zausaev, A. F.; Pushkarev, A. N.

    The Everhart method is used to study the evolution of the orbits of 20 asteroids of the Apollo group over the time period from 9300 B.C. to 2250 A.D. Minimum distances of the asteroids to the major planets over the evolution process are calculated. The stability of resonances with Venus and Earth over the 9300 B.C. to 2250 A.D. time period is shown. Theoretical coordinates of radiants for the initial and final integration times are presented.

  3. Towards Dynamic Remote Data Auditing in Computational Clouds

    PubMed Central

    Khurram Khan, Muhammad; Anuar, Nor Badrul

    2014-01-01

    Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have a great potential. Enterprise and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the lack of control and physical possession of the data owners. To address this issue, researchers have now focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage system and also present a new data structure capable of efficiently supporting dynamic data operations like append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable to large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server. PMID:25121114

  4. Towards dynamic remote data auditing in computational clouds.

    PubMed

    Sookhak, Mehdi; Akhunzada, Adnan; Gani, Abdullah; Khurram Khan, Muhammad; Anuar, Nor Badrul

    2014-01-01

    Cloud computing is a significant shift of computational paradigm where computing as a utility and storing data remotely have a great potential. Enterprise and businesses are now more interested in outsourcing their data to the cloud to lessen the burden of local data storage and maintenance. However, the outsourced data and the computation outcomes are not continuously trustworthy due to the lack of control and physical possession of the data owners. To address this issue, researchers have now focused on designing remote data auditing (RDA) techniques. The majority of these techniques, however, are only applicable to static archive data and cannot audit dynamically updated outsourced data. We propose an effectual RDA technique based on algebraic signature properties for cloud storage system and also present a new data structure capable of efficiently supporting dynamic data operations like append, insert, modify, and delete. Moreover, this data structure empowers our method to be applicable to large-scale data with minimum computation cost. The comparative analysis with the state-of-the-art RDA schemes shows that the proposed scheme is secure and highly efficient in terms of the computation and communication overhead on the auditor and server.
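
    The algebraic-signature property such schemes rely on is easy to illustrate: the signature is linear in the block symbols, so the signature of a sum of blocks equals the sum of their signatures, and an auditor can check an aggregated server response against signatures it keeps locally. The sketch below uses a prime field rather than the GF(2^w) arithmetic of practical schemes, and its constants and protocol are illustrative only.

      # Algebraic signature over a prime field: sig(b) = sum_i b[i] * alpha**i (mod p).
      # Linearity gives sig(b + c) = sig(b) + sig(c) (mod p), so the server can return
      # one aggregated block for many challenged blocks and the auditor verifies it
      # against the aggregated signatures it stored beforehand.
      P = 2_147_483_647          # Mersenne prime used as the field modulus (illustrative)
      ALPHA = 65_537             # fixed signature element (illustrative)

      def signature(block, p=P, alpha=ALPHA):
          sig, power = 0, 1
          for symbol in block:               # block = sequence of field elements
              sig = (sig + symbol * power) % p
              power = (power * alpha) % p
          return sig

      def add_blocks(b1, b2, p=P):
          return [(x + y) % p for x, y in zip(b1, b2)]

      # auditor keeps the signatures; the server later proves it still holds both blocks
      blk1 = [10, 20, 30, 40]
      blk2 = [7, 7, 7, 7]
      stored_sigs = [signature(blk1), signature(blk2)]

      server_response = add_blocks(blk1, blk2)         # aggregated block from the server
      assert signature(server_response) == sum(stored_sigs) % P
      print("audit passed")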

  5. Fixation of slightly beneficial mutations: effects of life history.

    PubMed

    Vindenes, Yngvild; Lee, Aline Magdalena; Engen, Steinar; Saether, Bernt-Erik

    2010-04-01

    Recent studies of rates of evolution have revealed large systematic differences among organisms with different life histories, both within and among taxa. Here, we consider how life history may affect the rate of evolution via its influence on the fixation probability of slightly beneficial mutations. Our approach is based on diffusion modeling for a finite, stage-structured population with stochastic population dynamics. The results, which are verified by computer simulations, demonstrate that even with complex population structure just two demographic parameters are sufficient to give an accurate approximation of the fixation probability of a slightly beneficial mutation. These are the reproductive value of the stage in which the mutation first occurs and the demographic variance of the population. The demographic variance also determines what influence population size has on the fixation probability. This model represents a substantial generalization of earlier models, covering a large range of life histories.

  6. Beyond the Baseline 1991: Proceedings of the Space Station Evolution Symposium. Volume 1: Space Station Freedom, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    This report contains the individual presentations delivered at the Space Station Evolution Symposium. The results of Space Station Freedom Advanced Studies provide a road map for the evolution of Freedom in terms of user requirements, utilization and operations concepts, and growth options for distributed systems. Regarding these specific systems, special attention is given to: highlighting changes made during restructuring; description of growth paths through the follow-on and evolution phases; identification of minimum impact provisions to allow flexibility in the baseline; and identification of enhancing and enabling technologies.

  7. Analysis of high-order SNP barcodes in mitochondrial D-loop for chronic dialysis susceptibility.

    PubMed

    Yang, Cheng-Hong; Lin, Yu-Da; Chuang, Li-Yeh; Chang, Hsueh-Wei

    2016-10-01

    Positively identifying disease-associated single nucleotide polymorphism (SNP) markers in genome-wide studies entails the complex association analysis of a huge number of SNPs. Such large numbers of SNP barcodes (SNP/genotype combinations) continue to pose serious computational challenges, especially for high-dimensional data. We propose a novel SNP barcode exploration method based on differential evolution, termed IDE (improved differential evolution). IDE uses a "top combination strategy" to improve the ability of differential evolution to explore high-order SNP barcodes in high-dimensional data. We simulate disease data and use real chronic dialysis data to test four global optimization algorithms. In 48 simulated disease models, we show that IDE outperforms existing global optimization algorithms in terms of exploring ability and power to detect the specific SNP/genotype combinations with a maximum difference between cases and controls. In real data, we show that IDE can be used to evaluate the relative effects of each individual SNP on disease susceptibility. IDE generated significant SNP barcodes with less computational complexity than the other algorithms, making IDE ideally suited for analysis of high-order SNP barcodes. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Einstein@Home search for periodic gravitational waves in early S5 LIGO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, B. P.; Abbott, R.; Adhikari, R.

    This paper reports on an all-sky search for periodic gravitational waves from sources such as deformed isolated rapidly spinning neutron stars. The analysis uses 840 hours of data from 66 days of the fifth LIGO science run (S5). The data were searched for quasimonochromatic waves with frequencies f in the range from 50 to 1500 Hz, with a linear frequency drift f (measured at the solar system barycenter) in the range -f/τ

  9. The Evolution of Alternative Rural Development Strategies in Ethiopia; Implications for Employment and Income Distribution. African Rural Employment Paper No. 12.

    ERIC Educational Resources Information Center

    Tecle, Tesfai

    As Ethiopia has designed and implemented numerous intensive (geographically concentrated) and minimum-package rural development programs between 1967-75, the purpose of this monograph is to: (1) trace the evolution of these package projects; (2) analyze package performances; and (3) identify the implications for Ethiopian planners and policy…

  10. A Case Study of the De Novo Evolution of a Complex Odometric Behavior in Digital Organisms

    PubMed Central

    Grabowski, Laura M.; Bryson, David M.; Dyer, Fred C.; Pennock, Robert T.; Ofria, Charles

    2013-01-01

    Investigating the evolution of animal behavior is difficult. The fossil record leaves few clues that would allow us to recapitulate the path that evolution took to build a complex behavior, and the large population sizes and long time scales required prevent us from re-evolving such behaviors in a laboratory setting. We present results of a study in which digital organisms–self-replicating computer programs that are subject to mutations and selection–evolved in different environments that required information about past experience for fitness-enhancing behavioral decisions. One population evolved a mechanism for step-counting, a surprisingly complex odometric behavior that was only indirectly related to enhancing fitness. We examine in detail the operation of the evolved mechanism and the evolutionary transitions that produced this striking example of a complex behavior. PMID:23577113

  11. Turbomachinery Airfoil Design Optimization Using Differential Evolution

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    An aerodynamic design optimization procedure that is based on an evolutionary algorithm known as Differential Evolution is described. Differential Evolution is a simple, fast, and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems, including highly nonlinear systems with discontinuities and multiple local optima. The method is combined with a Navier-Stokes solver that evaluates the various intermediate designs and provides inputs to the optimization procedure. An efficient constraint handling mechanism is also incorporated. Results are presented for the inverse design of a turbine airfoil from a modern jet engine. The capability of the method to search large design spaces and obtain the optimal airfoils in an automatic fashion is demonstrated. Substantial reductions in the overall computing time requirements are achieved by using the algorithm in conjunction with neural networks.

  12. Evolution of domain walls in the early universe. Ph.D. Thesis - Chicago Univ.

    NASA Technical Reports Server (NTRS)

    Kawano, Lawrence

    1989-01-01

    The evolution of domain walls in the early universe is studied via 2-D computer simulation. The walls are initially configured on a triangular lattice and then released from the lattice, their evolution driven by wall curvature and by the universal expansion. The walls attain an average velocity of about 0.3c and their surface area per volume (as measured in comoving coordinates) goes down with a slope of -1 with respect to conformal time, regardless of whether the universe is matter or radiation dominated. The additional influence of vacuum pressure causes the energy density to fall away from this slope and steepen, thus allowing a situation in which domain walls can constitute a significant portion of the energy density of the universe without provoking an unacceptably large perturbation upon the microwave background.

  13. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  14. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  15. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  16. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  17. 20 CFR 229.50 - Age reduction in employee or spouse benefit.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Age reduction in employee or spouse benefit... RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.50 Age reduction in employee or spouse benefit. (a) When age reduction applies. The employee overall minimum...

  18. SOLAR WIND HEAVY IONS OVER SOLAR CYCLE 23: ACE/SWICS MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lepri, S. T.; Landi, E.; Zurbuchen, T. H.

    2013-05-01

    Solar wind plasma and compositional properties reflect the physical properties of the corona and its evolution over time. Studies comparing the previous solar minimum with the most recent, unusual solar minimum indicate that significant environmental changes are occurring globally on the Sun. For example, the magnetic field decreased 30% between the last two solar minima, and the ionic charge states of O have been reported to change toward lower values in the fast wind. In this work, we systematically and comprehensively analyze the compositional changes of the solar wind during cycle 23 from 2000 to 2010 while the Sun moved from solar maximum to solar minimum. We find a systematic change of C, O, Si, and Fe ionic charge states toward lower ionization distributions. We also discuss long-term changes in elemental abundances and show that there is a ~50% decrease of heavy ion abundances (He, C, O, Si, and Fe) relative to H as the Sun went from solar maximum to solar minimum. During this time, the relative abundances in the slow wind remain organized by their first ionization potential. We discuss these results and their implications for models of the evolution of the solar atmosphere, and for the identification of the fast and slow wind themselves.

  19. More efficient evolutionary strategies for model calibration with watershed model for demonstration

    NASA Astrophysics Data System (ADS)

    Baggett, J. S.; Skahill, B. E.

    2008-12-01

    Evolutionary strategies allow automatic calibration of more complex models than traditional gradient based approaches, but they are more computationally intensive. We present several efficiency enhancements for evolution strategies, many of which are not new, but when combined have been shown to dramatically decrease the number of model runs required for calibration of synthetic problems. To reduce the number of expensive model runs we employ a surrogate objective function for an adaptively determined fraction of the population at each generation (Kern et al., 2006). We demonstrate improvements to the adaptive ranking strategy that increase its efficiency while sacrificing little reliability and further reduce the number of model runs required in densely sampled parts of parameter space. Furthermore, we include a gradient individual in each generation that is usually not selected when the search is in a global phase or when the derivatives are poorly approximated, but when selected near a smooth local minimum can dramatically increase convergence speed (Tahk et al., 2007). Finally, the selection of the gradient individual is used to adapt the size of the population near local minima. We show, by incorporating these enhancements into the Covariance Matrix Adaptation Evolution Strategy (CMAES; Hansen, 2006), that their synergetic effect is greater than their individual parts. This hybrid evolutionary strategy exploits smooth structure when it is present but degrades to an ordinary evolutionary strategy, at worst, if smoothness is not present. Calibration of 2D-3D synthetic models with the modified CMAES requires approximately 10%-25% of the model runs of ordinary CMAES. Preliminary demonstration of this hybrid strategy will be shown for watershed model calibration problems. Hansen, N. (2006). The CMA Evolution Strategy: A Comparing Review. In J.A. Lozano, P. Larrañaga, I. Inza and E. Bengoetxea (Eds.). Towards a new evolutionary computation. Advances in estimation of distribution algorithms. pp. 75-102, Springer. Kern, S., N. Hansen and P. Koumoutsakos (2006). Local Meta-Models for Optimization Using Evolution Strategies. In Ninth International Conference on Parallel Problem Solving from Nature PPSN IX, Proceedings, pp. 939-948, Berlin: Springer. Tahk, M., Woo, H., and Park, M. (2007). A hybrid optimization of evolutionary and gradient search. Engineering Optimization, (39), 87-104.
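
    A minimal sketch of the surrogate idea, using the ask/tell interface of the pycma package, is given below: at each generation only a fraction of the offspring are evaluated with the expensive model and the rest with a cheap surrogate (here a nearest-neighbour lookup in an archive of true evaluations). This simplifies the adaptive-ranking scheme described above; the objective function, the fractions, and the solver options are assumptions for illustration.

      import numpy as np
      import cma                                    # pycma package

      def expensive_model(x):                       # stand-in for a watershed model run
          return float(np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.sin(5 * x) ** 2))

      archive_X, archive_y = [], []

      def surrogate(x):
          # cheap stand-in surrogate: value of the nearest archived true evaluation
          d = [np.linalg.norm(x - xa) for xa in archive_X]
          return archive_y[int(np.argmin(d))]

      es = cma.CMAEvolutionStrategy(np.zeros(6), 0.5,
                                    {"popsize": 12, "seed": 3, "maxiter": 50, "verbose": -9})
      true_fraction = 0.25                          # fraction evaluated with the real model

      while not es.stop():
          X = es.ask()
          n_true = max(2, int(true_fraction * len(X)))
          values = []
          for i, x in enumerate(X):
              if i < n_true or len(archive_X) < 5:  # expensive run for a subset
                  y = expensive_model(np.asarray(x))
                  archive_X.append(np.asarray(x))
                  archive_y.append(y)
              else:                                 # surrogate for the rest
                  y = surrogate(np.asarray(x))
              values.append(y)
          es.tell(X, values)

      print("best parameters found:", es.result.xbest)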

  20. Nemo: an evolutionary and population genetics programming framework.

    PubMed

    Guillaume, Frédéric; Rougemont, Jacques

    2006-10-15

    Nemo is an individual-based, genetically explicit and stochastic population computer program for the simulation of population genetics and life-history trait evolution in a metapopulation context. It comes as both a C++ programming framework and an executable program file. Its object-oriented programming design gives it the flexibility and extensibility needed to implement a large variety of forward-time evolutionary models. It provides developers with abstract models allowing them to implement their own life-history traits and life-cycle events. Nemo offers a large panel of population models, from the Island model to lattice models with demographic or environmental stochasticity and a variety of already implemented traits (deleterious mutations, neutral markers and more), life-cycle events (mating, dispersal, aging, selection, etc.) and output operators for saving data and statistics. It runs on all major computer platforms including parallel computing environments. The source code, binaries and documentation are available under the GNU General Public License at http://nemo2.sourceforge.net.

  1. Inflation expels runaways

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bachlechner, Thomas C.

    We argue that moduli stabilization generically restricts the evolution following transitions between weakly coupled de Sitter vacua and can induce a strong selection bias towards inflationary cosmologies. The energy density of domain walls between vacua typically destabilizes Kähler moduli and triggers a runaway towards large volume. This decompactification phase can collapse the new de Sitter region unless a minimum amount of inflation occurs after the transition. A stable vacuum transition is guaranteed only if the inflationary expansion generates overlapping past light cones for all observable modes originating from the reheating surface, which leads to an approximately flat and isotropic universe. High scale inflation is vastly favored. Finally, our results point towards a framework for studying parameter fine-tuning and inflationary initial conditions in flux compactifications.

  2. Inflation expels runaways

    NASA Astrophysics Data System (ADS)

    Bachlechner, Thomas C.

    2016-12-01

    We argue that moduli stabilization generically restricts the evolution following transitions between weakly coupled de Sitter vacua and can induce a strong selection bias towards inflationary cosmologies. The energy density of domain walls between vacua typically destabilizes Kähler moduli and triggers a runaway towards large volume. This decompactification phase can collapse the new de Sitter region unless a minimum amount of inflation occurs after the transition. A stable vacuum transition is guaranteed only if the inflationary expansion generates overlapping past light cones for all observable modes originating from the reheating surface, which leads to an approximately flat and isotropic universe. High scale inflation is vastly favored. Our results point towards a framework for studying parameter fine-tuning and inflationary initial conditions in flux compactifications.

  3. Inflation expels runaways

    DOE PAGES

    Bachlechner, Thomas C.

    2016-12-30

    We argue that moduli stabilization generically restricts the evolution following transitions between weakly coupled de Sitter vacua and can induce a strong selection bias towards inflationary cosmologies. The energy density of domain walls between vacua typically destabilizes Kähler moduli and triggers a runaway towards large volume. This decompactification phase can collapse the new de Sitter region unless a minimum amount of inflation occurs after the transition. A stable vacuum transition is guaranteed only if the inflationary expansion generates overlapping past light cones for all observable modes originating from the reheating surface, which leads to an approximately flat and isotropic universe. High scale inflation is vastly favored. Finally, our results point towards a framework for studying parameter fine-tuning and inflationary initial conditions in flux compactifications.

  4. Prognostic significance of lesion size for glioblastoma multiforme.

    PubMed

    Reeves, G I; Marks, J E

    1979-08-01

    From March 1974 to December 1976, 56 patients with glioblastoma multiforme had precraniotomy computed tomography (CT) scans from which the lesion size was determined by measuring the cross-sectional area. Thirty-two patients underwent surgery followed by irradiation, and 24 had surgery followed by irradiation and chemotherapy. There was no difference in survival between the 16 patients with small lesions and the 16 patients with large lesions in the surgery plus radiation alone group, nor in the 16 patients with small and 8 patients with large lesions in the surgery, radiation and chemotherapy group. Minimum follow-up was one year. Other possible prognostic factors including age, tumor grade, radiation dose, and performance status were comparable for each subgroup. Lesion size in glioblastoma multiforme appears unrelated to prognosis.

  5. The evolution of altruism in spatial threshold public goods games via an insurance mechanism

    NASA Astrophysics Data System (ADS)

    Zhang, Jianlei; Zhang, Chunyan

    2015-05-01

    The persistence of cooperation in public goods situations has become an important puzzle for researchers. This paper considers threshold public goods games where the option of insurance is provided for players from the standpoint of diversification of risk, envisaging the possibility of multiple strategies in such scenarios. In this setting, the provision point is defined in terms of the minimum number of contributors in one threshold public goods game, below which the game fails. In the presence of risk and insurance, more contributions are motivated if (1) only cooperators can opt to be insured and thus their contribution loss in the aborted games can be (partly or fully) covered by the insurance; (2) insured cooperators obtain larger compensation, at lower values of the threshold point (the required minimum number of contributors). Moreover, the results suggest the dominance of insured defectors, who are promoted by the more profitable benefits they draw from insurance. We provide results of extensive computer simulations in the realm of spatial games (random regular networks and scale-free networks here), and support this study with analytical results for well-mixed populations. Our study is expected to establish a causal link between the widespread altruistic behaviors and the existing insurance system.

  6. Analyzing systemic risk using non-linear marginal expected shortfall and its minimum spanning tree

    NASA Astrophysics Data System (ADS)

    Song, Jae Wook; Ko, Bonggyun; Chang, Woojin

    2018-02-01

    The aim of this paper is to propose a new theoretical framework for analyzing systemic risk using the marginal expected shortfall (MES) and its correlation-based minimum spanning tree (MST). First, we develop two parametric models of MES with their closed-form solutions based on the Capital Asset Pricing Model. Our models are derived from the non-symmetric quadratic form, which allows them to consolidate the non-linear relationship between the stock and market returns. Secondly, we present evidence of the utility of our models and of a possible association between the non-linear relationship and the emergence of severe systemic risk by considering the US financial system as a benchmark. In this context, the evolution of MES can also be regarded as a reasonable proxy of systemic risk. Lastly, we analyze the structural properties of the systemic risk using the MST based on the computed series of MES. The topology of the MST conveys the presence of sectoral clustering and strong co-movements of systemic risk led by a few hubs during the crisis. Specifically, we discover that the Depositories are the majority sector leading the connections during the Non-Crisis period, whereas the Broker-Dealers are the majority during the Crisis period.
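
    A minimal sketch of the correlation-based MST construction described above, assuming a hypothetical array `mes` of MES series (rows are time steps, columns are institutions); the correlation-to-distance map used here is the common sqrt(2(1-rho)) convention, not necessarily the exact one used by the authors.

      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      def mes_mst(mes):
          # mes: (T x N) array of MES estimates; columns are institutions (assumed layout)
          corr = np.corrcoef(mes, rowvar=False)          # N x N correlation matrix
          dist = np.sqrt(2.0 * (1.0 - corr))             # common correlation-to-distance map
          np.fill_diagonal(dist, 0.0)
          return minimum_spanning_tree(dist).toarray()   # N-1 tree edges as a dense matrix

      # Example with random placeholders standing in for MES series:
      edges = mes_mst(np.random.default_rng(0).normal(size=(500, 8)))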

  7. Harmonic Fourier beads method for studying rare events on rugged energy surfaces.

    PubMed

    Khavrutskii, Ilja V; Arora, Karunesh; Brooks, Charles L

    2006-11-07

    We present a robust, distributable method for computing minimum free energy paths of large molecular systems with rugged energy landscapes. The method, which we call harmonic Fourier beads (HFB), exploits the Fourier representation of a path in an appropriate coordinate space and proceeds iteratively by evolving a discrete set of harmonically restrained path points (beads) to generate positions for the next path. The HFB method does not require explicit knowledge of the free energy to locate the path. To compute the free energy profile along the final path we employ an umbrella sampling method in two generalized dimensions. The proposed HFB method is anticipated to aid the study of rare events in biomolecular systems. Its utility is demonstrated with an application to conformational isomerization of the alanine dipeptide in the gas phase.

  8. Chemical reactivity parameters (HSAB) applied to magma evolution and ore formation

    NASA Astrophysics Data System (ADS)

    Vigneresse, Jean-Louis

    2012-11-01

    Magmas are commonly described through the usual content of 10 major oxides. This requires a complex dimensional plot. Concepts of hard-soft acid-base (HSAB) interactions allow estimating the chemical reactivity of elements through quantities such as electronegativity (i.e., the chemical potential with opposite sign), hardness and electrophilicity. For complex systems, those values result from equalization methods, i.e. the equalization of the respective chemical potentials, or from ab-initio computations through density functional theory (DFT). They help to characterize silicate magmas by a single value describing their reactivity. The principles of minimum electrophilicity (mEP), maximum hardness (MHP) and minimum polarizability (mPP) indicate trends towards regions of higher stability. Those parameters are plotted within a fitness landscape diagram, highlighting toward which principle reactions trend. Major oxides, main minerals and magmas determine the respective fields in which natural rocks evolve. Three poles are identified, represented by silica and alkalis, whereas oxidation forms the third trend. Mantle-derived rocks show a large variation in electrophilicity compared to hardness. They present all the characteristics of a closed chemical system, being simply described by the Gibbs free energy. Conversely, rocks contaminated within the continental crust show a large variation in hardness between a silica pole and an alkaline pole, defining two separate trends. These trends show the character of an open chemical system, requiring a grand potential description (i.e. taking into account the difference in chemical potential). The terms open and closed system refer to the thermodynamic description, implying contamination for the crust and recycling for the mantle. The specific role of alkalis contrasts with other cations, pointing to their behavior in modifying silicate polymer structures. A second application deals with the reactivity of the melt and its fluid phase. It leads to a better understanding of the mechanisms that control sequestration and transport of metals within the different phases during igneous activity. Based on high gas/melt partitioning for metals and similar reactivity, the gaseous phase is more attractive for metals than silicate melts. The presence of halogens in the fluid phase tends to reinforce hardness, making the fluid phase attractive for hard metals such as Sn or W. Conversely, the presence of S decreases the hardness of the fluid phase, which becomes attractive for soft metals such as Au, Ag and Cu.
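
    For reference, the standard conceptual-DFT definitions behind these reactivity indices (stated here in their common finite-difference form; the author's exact conventions may differ) are, with I the ionization energy and A the electron affinity,

      \mu = -\chi \simeq -\frac{I + A}{2}, \qquad \eta \simeq \frac{I - A}{2}, \qquad \omega = \frac{\mu^{2}}{2\eta},

    where \mu is the chemical potential, \chi the electronegativity, \eta the hardness and \omega the electrophilicity.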

  9. Effect of electromagnetic radiation on the coils used in aneurysm embolization.

    PubMed

    Lv, Xianli; Wu, Zhongxue; Li, Youxiang

    2014-06-01

    This study evaluated the effects of electromagnetic radiation in our daily lives on the coils used in aneurysm embolization. Faraday's electromagnetic induction principle was applied to analyze the effects of electromagnetic radiation on the coils used in aneurysm embolization. To induce a current of 0.5 mA, the level required to stimulate peripheral nerves, in platinum coils smaller than 5 mm, the minimum magnetic field is 0.86 μT. To induce a current of 0.5 mA in platinum coils with a hair dryer, the minimum aneurysm radius is 2.5 mm (a 5 mm aneurysm). To induce a current of 0.5 mA in platinum coils with a computer or TV, the minimum aneurysm radius is 8.6 mm (approximately a 17 mm aneurysm). The minimum magnetic field is much larger than the flux densities produced by a computer or TV, and the minimum aneurysm radius required at the field levels produced by a computer or TV is much larger than most aneurysm sizes. At present, the electromagnetic radiation encountered in daily life does not produce a harmful reaction in intracranial coils. Patients with coiled aneurysms are advised to avoid using hair dryers. This theoretical result needs to be confirmed by further detailed investigations. Doctors should give patients additional instructions before the procedure, based on this study.
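
    A back-of-the-envelope sketch of the kind of Faraday's-law estimate described above; the frequency, field amplitude and loop resistance below are illustrative assumptions, not values taken from the study.

      import math

      def induced_current(B0, f, radius, resistance):
          # Peak current induced in a single circular loop of the given radius [m]
          # by a sinusoidal field of amplitude B0 [T] and frequency f [Hz].
          area = math.pi * radius**2
          emf = 2.0 * math.pi * f * B0 * area    # peak EMF for B(t) = B0*sin(2*pi*f*t)
          return emf / resistance

      # Hypothetical numbers: a 50 Hz, 2 mT field, a 2.5 mm loop, 1 ohm coil resistance.
      print(induced_current(B0=2e-3, f=50.0, radius=2.5e-3, resistance=1.0))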

  10. Effect of Electromagnetic Radiation on the Coils Used in Aneurysm Embolization

    PubMed Central

    Lv, Xianli; Wu, Zhongxue; Li, Youxiang

    2014-01-01

    This study evaluated the effects of electromagnetic radiation in our daily lives on the coils used in aneurysm embolization. Faraday's electromagnetic induction principle was applied to analyze the effects of electromagnetic radiation on the coils used in aneurysm embolization. To induce a current of 0.5 mA, the level required to stimulate peripheral nerves, in platinum coils smaller than 5 mm, the minimum magnetic field is 0.86 μT. To induce a current of 0.5 mA in platinum coils with a hair dryer, the minimum aneurysm radius is 2.5 mm (a 5 mm aneurysm). To induce a current of 0.5 mA in platinum coils with a computer or TV, the minimum aneurysm radius is 8.6 mm (approximately a 17 mm aneurysm). The minimum magnetic field is much larger than the flux densities produced by a computer or TV, and the minimum aneurysm radius required at the field levels produced by a computer or TV is much larger than most aneurysm sizes. At present, the electromagnetic radiation encountered in daily life does not produce a harmful reaction in intracranial coils. Patients with coiled aneurysms are advised to avoid using hair dryers. This theoretical result needs to be confirmed by further detailed investigations. Doctors should give patients additional instructions before the procedure, based on this study. PMID:24976203

  11. Computer display and manipulation of biological molecules

    NASA Technical Reports Server (NTRS)

    Coeckelenbergh, Y.; Macelroy, R. D.; Hart, J.; Rein, R.

    1978-01-01

    This paper describes a computer model that was designed to investigate the conformation of molecules, macromolecules and subsequent complexes. Utilizing an advanced 3-D dynamic computer display system, the model is sufficiently versatile to accommodate a large variety of molecular input and to generate data for multiple purposes such as visual representation of conformational changes, and calculation of conformation and interaction energy. Molecules can be built on the basis of several levels of information. These include the specification of atomic coordinates and connectivities and the grouping of building blocks and duplicated substructures using symmetry rules found in crystals and polymers such as proteins and nucleic acids. Called AIMS (Ames Interactive Molecular modeling System), the model is now being used to study pre-biotic molecular evolution toward life.

  12. Paleo movement of continents, mantle dynamics and large wander of the rotational pole

    NASA Astrophysics Data System (ADS)

    Greff-Lefftz, M.; Besse, J.

    2010-12-01

    Polar wander is known to be mainly linked to mass distribution changes in the Earth's mantle or at its surface, and more particularly to the evolution of subduction zones. On the one hand, the peri-Pacific subductions seem to be a quite permanent feature of the Earth's history at least since the Paleozoic, while the "Tethyan" subductions have a complex history with successive collisions of continental blocks (Hercynian, Kimmerian, Indian) and episodic rebirth of E-W subduction zones. We investigate plate motion during the last 350 million years in a reference frame where Africa is fixed, this last plate being a central plate from which most continents diverged since Pangea break-up. The exact amount of subduction is unknown before 120 Ma and we try to estimate it from the study of the subduction volcanism in the past and plate motion history, when available. Assuming that the subducted slabs sink vertically into the mantle and taking into account large-scale upwellings derived from present-day tomography and intra-plate volcanism in the past, we compute the time variation of mantle density heterogeneities since 350 Ma. By conservation of the angular momentum of the Earth, the temporal evolution of the rotational axis, with respect to the fixed Africa, is computed and compared to the Apparent Polar Wander (APW) observed by paleomagnetism since 280 Ma. We find that a major trend of the computed APW can be described as successive oscillatory clockwise or counter-clockwise motions and that the cusps (around 230 Ma and 170 Ma), both in the observed Africa APW and in the computed pole, are essentially due to the Hercynian (340-300 Ma) and Kimmerian (270-230 Ma) continental collisions.

  13. An Early Quantum Computing Proposal

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Stephen Russell; Alexander, Francis Joseph; Barros, Kipton Marcos

    The D-Wave 2X is the third generation of quantum processing created by D-Wave. NASA (with Google and USRA) and Lockheed Martin (with USC) both own D-Wave systems. Los Alamos National Laboratory (LANL) purchased a D-Wave 2X in November 2015. The D-Wave 2X processor contains (nominally) 1152 quantum bits (or qubits) and is designed to specifically perform quantum annealing, which is a well-known method for finding a global minimum of an optimization problem. This methodology is based on direct execution of a quantum evolution in experimental quantum hardware. While this can be a powerful method for solving particular kinds of problems, it also means that the D-Wave 2X processor is not a general computing processor and cannot be programmed to perform a wide variety of tasks. It is a highly specialized processor, well beyond what NNSA currently thinks of as an “advanced architecture.” A D-Wave is best described as a quantum optimizer. That is, it uses quantum superposition to find the lowest energy state of a system by repeated doses of power and settling stages. The D-Wave produces multiple solutions to any suitably formulated problem, one of which is the lowest energy state solution (global minimum). Mapping problems onto the D-Wave requires defining an objective function to be minimized and then encoding that function in the Hamiltonian of the D-Wave system. The quantum annealing method is then used to find the lowest energy configuration of the Hamiltonian using the current D-Wave Two, two-level, quantum processor. This is not always an easy thing to do, and the D-Wave Two has significant limitations that restrict problem sizes that can be run and algorithmic choices that can be made. Furthermore, as more people are exploring this technology, it has become clear that it is very difficult to come up with general approaches to optimization that can both utilize the D-Wave and that can do better than highly developed algorithms on conventional computers for specific applications. These are all fundamental challenges that must be overcome for the D-Wave, or similar, quantum computing technology to be broadly applicable.
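
    As an illustration of the kind of objective such an annealer minimizes, the sketch below defines a tiny QUBO (quadratic unconstrained binary optimization) problem and finds its ground state by classical brute force; the coefficients are made up, and no D-Wave programming interface is implied.

      from itertools import product

      # Hypothetical QUBO: E(x) = sum over (i, j) of Q[i, j] * x_i * x_j, with x_i in {0, 1}
      Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): 2.0, (0, 1): 2.0, (1, 2): -3.0}

      def energy(x):
          return sum(c * x[i] * x[j] for (i, j), c in Q.items())

      best = min(product([0, 1], repeat=3), key=energy)
      print(best, energy(best))    # the global minimum an annealer is meant to find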

  14. Time evolution of gamma rays from supernova remnants

    NASA Astrophysics Data System (ADS)

    Gaggero, Daniele; Zandanel, Fabio; Cristofari, Pierre; Gabici, Stefano

    2018-04-01

    We present a systematic phenomenological study focused on the time evolution of the non-thermal radiation - from radio waves to gamma rays - emitted by typical supernova remnants via hadronic and leptonic mechanisms, for two classes of progenitors: thermonuclear and core-collapse. To this aim, we develop a numerical tool designed to model the evolution of the cosmic ray spectrum inside a supernova remnant, and compute the associated multi-wavelength emission. We demonstrate the potential of this tool in the context of future population studies based on large collection of high-energy gamma-ray data. We discuss and explore the relevant parameter space involved in the problem, and focus in particular on their impact on the maximum energy of accelerated particles, in order to study the effectiveness and duration of the PeVatron phase. We outline the crucial role of the ambient medium through which the shock propagates during the remnant evolution. In particular, we point out the role of dense clumps in creating a significant hardening in the hadronic gamma-ray spectrum.

  15. Cyndi: a multi-objective evolution algorithm based method for bioactive molecular conformational generation.

    PubMed

    Liu, Xiaofeng; Bai, Fang; Ouyang, Sisheng; Wang, Xicheng; Li, Honglin; Jiang, Hualiang

    2009-03-31

    Conformation generation is a ubiquitous problem in molecular modelling. Many applications require sampling the broad molecular conformational space or perceiving the bioactive conformers to ensure success. Numerous in silico methods have been proposed in an attempt to resolve the problem, ranging from deterministic to non-deterministic and systematic to stochastic ones. In this work, we describe an efficient conformation sampling method named Cyndi, which is based on a multi-objective evolution algorithm. The conformational perturbation is subjected to evolutionary operation on the genome encoded with dihedral torsions. Various objectives are designated to render the generated Pareto optimal conformers to be energy-favoured as well as evenly scattered across the conformational space. An optional objective concerning the degree of molecular extension is added to achieve geometrically extended or compact conformations which have been observed to impact the molecular bioactivity (J Comput-Aided Mol Des 2002, 16:105-112). Testing the performance of Cyndi against a test set consisting of 329 small molecules reveals an average minimum RMSD of 0.864 Å to corresponding bioactive conformations, indicating Cyndi is highly competitive against other conformation generation methods. Meanwhile, the high-speed performance (0.49 +/- 0.18 seconds per molecule) renders Cyndi a practical toolkit for conformational database preparation and facilitates subsequent pharmacophore mapping or rigid docking. A copy of the precompiled executable of Cyndi and the test set molecules in mol2 format are accessible in Additional file 1. On the basis of the MOEA algorithm, we present a new, highly efficient conformation generation method, Cyndi, and report the results of validation and performance studies comparing it with four other methods. The results reveal that Cyndi is capable of generating geometrically diverse conformers and outperforms the four other multiple-conformer generators in reproducing the bioactive conformations of the 329 structures. The speed advantage indicates Cyndi is a powerful alternative method for extensive conformational sampling and large-scale conformer database preparation.

  16. Method and Apparatus for Powered Descent Guidance

    NASA Technical Reports Server (NTRS)

    Acikmese, Behcet (Inventor); Blackmore, James C. L. (Inventor); Scharf, Daniel P. (Inventor)

    2013-01-01

    A method and apparatus for landing a spacecraft having thrusters with non-convex constraints is described. The method first computes a solution to a minimum error landing problem with convexified constraints, then applies that solution to a minimum fuel landing problem with convexified constraints. The result is a solution that is a minimum error and minimum fuel solution that is also a feasible solution to the analogous system with non-convex thruster constraints.

  17. Evolutionary Topology of a Currency Network in Asia

    NASA Astrophysics Data System (ADS)

    Feng, Xiaobing; Wang, Xiaofan

    Although there has recently been extensive research on currency networks using the minimum spanning tree approach, knowledge about the actual evolution of the currency web in Asia is still limited. In this paper, we study the structural evolution of an Asian currency network using daily exchange rate data. We find that the correlation between Asian currencies and the US Dollar, the previous regional key currency, has become weaker and that intra-Asia interactions have increased. This becomes more salient after China's exchange rate reform. Differently from previous studies, we further reveal that it is trade volume, the national wealth gap and countries' growth cycles that have contributed to the evolutionary topology of the minimum spanning tree. These findings provide a valuable platform for theoretical modeling and further analysis.

  18. Simulations of the Formation and Evolution of X-ray Clusters

    NASA Astrophysics Data System (ADS)

    Bryan, G. L.; Klypin, A.; Norman, M. L.

    1994-05-01

    We describe results from a set of Omega = 1 Cold plus Hot Dark Matter (CHDM) and Cold Dark Matter (CDM) simulations. We examine the formation and evolution of X-ray clusters in a cosmological setting with sufficient numbers to perform statistical analysis. We find that CDM, normalized to COBE, seems to produce too many large clusters, both in terms of the luminosity (dn/dL) and temperature (dn/dT) functions. The CHDM simulation produces fewer clusters and the temperature distribution (our numerically most secure result) matches observations where they overlap. The computed cluster luminosity function drops below observations, but we are almost surely underestimating the X-ray luminosity. Because of the lower fluctuations in CHDM, there are only a small number of bright clusters in our simulation volume; however, we can use the simulated clusters to fix the relation between temperature and velocity dispersion, allowing us to use collisionless N-body codes to probe larger length scales with correspondingly brighter clusters. The hydrodynamic simulations have been performed with a hybrid particle-mesh scheme for the dark matter and a high resolution grid-based piecewise parabolic method for the adiabatic gas dynamics. This combination has been implemented for massively parallel computers, allowing us to achieve grids as large as 512^3.

  19. Probing the mutational interplay between primary and promiscuous protein functions: a computational-experimental approach.

    PubMed

    Garcia-Seisdedos, Hector; Ibarra-Molero, Beatriz; Sanchez-Ruiz, Jose M

    2012-01-01

    Protein promiscuity is of considerable interest due to its role in adaptive metabolic plasticity, its fundamental connection with molecular evolution and also because of its biotechnological applications. Current views on the relation between primary and promiscuous protein activities stem largely from laboratory evolution experiments aimed at increasing promiscuous activity levels. Here, on the other hand, we attempt to assess the main features of the simultaneous modulation of the primary and promiscuous functions during the course of natural evolution. The computational/experimental approach we propose for this task involves the following steps: a function-targeted, statistical coupling analysis of evolutionary data is used to determine a set of positions likely linked to the recruitment of a promiscuous activity for a new function; a combinatorial library of mutations on this set of positions is prepared and screened for both the primary and the promiscuous activities; a partial-least-squares reconstruction of the full combinatorial space is carried out; finally, an approximation to the Pareto set of variants with optimal primary/promiscuous activities is derived. Application of the approach to the emergence of folding catalysis in thioredoxin scaffolds reveals an unanticipated scenario: diverse patterns of primary/promiscuous activity modulation are possible, including a moderate (but likely significant in a biological context) simultaneous enhancement of both activities. We show that this scenario can be most simply explained on the basis of the conformational diversity hypothesis, although alternative interpretations cannot be ruled out. Overall, the results reported may help clarify the mechanisms of the evolution of new functions. From a different viewpoint, the partial-least-squares-reconstruction/Pareto-set-prediction approach we have introduced provides the computational basis for an efficient directed-evolution protocol aimed at the simultaneous enhancement of several protein features and should therefore open new possibilities in the engineering of multi-functional enzymes.

  20. Probing the Mutational Interplay between Primary and Promiscuous Protein Functions: A Computational-Experimental Approach

    PubMed Central

    Garcia-Seisdedos, Hector; Ibarra-Molero, Beatriz; Sanchez-Ruiz, Jose M.

    2012-01-01

    Protein promiscuity is of considerable interest due to its role in adaptive metabolic plasticity, its fundamental connection with molecular evolution and also because of its biotechnological applications. Current views on the relation between primary and promiscuous protein activities stem largely from laboratory evolution experiments aimed at increasing promiscuous activity levels. Here, on the other hand, we attempt to assess the main features of the simultaneous modulation of the primary and promiscuous functions during the course of natural evolution. The computational/experimental approach we propose for this task involves the following steps: a function-targeted, statistical coupling analysis of evolutionary data is used to determine a set of positions likely linked to the recruitment of a promiscuous activity for a new function; a combinatorial library of mutations on this set of positions is prepared and screened for both the primary and the promiscuous activities; a partial-least-squares reconstruction of the full combinatorial space is carried out; finally, an approximation to the Pareto set of variants with optimal primary/promiscuous activities is derived. Application of the approach to the emergence of folding catalysis in thioredoxin scaffolds reveals an unanticipated scenario: diverse patterns of primary/promiscuous activity modulation are possible, including a moderate (but likely significant in a biological context) simultaneous enhancement of both activities. We show that this scenario can be most simply explained on the basis of the conformational diversity hypothesis, although alternative interpretations cannot be ruled out. Overall, the results reported may help clarify the mechanisms of the evolution of new functions. From a different viewpoint, the partial-least-squares-reconstruction/Pareto-set-prediction approach we have introduced provides the computational basis for an efficient directed-evolution protocol aimed at the simultaneous enhancement of several protein features and should therefore open new possibilities in the engineering of multi-functional enzymes. PMID:22719242

  1. Universal adiabatic quantum computation via the space-time circuit-to-Hamiltonian construction.

    PubMed

    Gosset, David; Terhal, Barbara M; Vershynina, Anna

    2015-04-10

    We show how to perform universal adiabatic quantum computation using a Hamiltonian which describes a set of particles with local interactions on a two-dimensional grid. A single parameter in the Hamiltonian is adiabatically changed as a function of time to simulate the quantum circuit. We bound the eigenvalue gap above the unique ground state by mapping our model onto the ferromagnetic XXZ chain with kink boundary conditions; the gap of this spin chain was computed exactly by Koma and Nachtergaele using its q-deformed version of SU(2) symmetry. We also discuss a related time-independent Hamiltonian which was shown by Janzing to be capable of universal computation. We observe that in the limit of large system size, the time evolution is equivalent to the exactly solvable quantum walk on Young's lattice.

  2. Universal Adiabatic Quantum Computation via the Space-Time Circuit-to-Hamiltonian Construction

    NASA Astrophysics Data System (ADS)

    Gosset, David; Terhal, Barbara M.; Vershynina, Anna

    2015-04-01

    We show how to perform universal adiabatic quantum computation using a Hamiltonian which describes a set of particles with local interactions on a two-dimensional grid. A single parameter in the Hamiltonian is adiabatically changed as a function of time to simulate the quantum circuit. We bound the eigenvalue gap above the unique ground state by mapping our model onto the ferromagnetic XXZ chain with kink boundary conditions; the gap of this spin chain was computed exactly by Koma and Nachtergaele using its q-deformed version of SU(2) symmetry. We also discuss a related time-independent Hamiltonian which was shown by Janzing to be capable of universal computation. We observe that in the limit of large system size, the time evolution is equivalent to the exactly solvable quantum walk on Young's lattice.

  3. Enforcing dust mass conservation in 3D simulations of tightly coupled grains with the PHANTOM SPH code

    NASA Astrophysics Data System (ADS)

    Ballabio, G.; Dipierro, G.; Veronesi, B.; Lodato, G.; Hutchison, M.; Laibe, G.; Price, D. J.

    2018-06-01

    We describe a new implementation of the one-fluid method in the SPH code PHANTOM to simulate the dynamics of dust grains in gas protoplanetary discs. We revise and extend previously developed algorithms by computing the evolution of a new fluid quantity that produces a more accurate and numerically controlled evolution of the dust dynamics. Moreover, by limiting the stopping time of uncoupled grains that violate the assumptions of the terminal velocity approximation, we avoid fatal numerical errors in mass conservation. We test and validate our new algorithm by running 3D SPH simulations of a large range of disc models with tightly and marginally coupled grains.

  4. Predictive biophysical modeling and understanding of the dynamics of mRNA translation and its evolution

    PubMed Central

    Zur, Hadas; Tuller, Tamir

    2016-01-01

    mRNA translation is the fundamental process of decoding the information encoded in mRNA molecules by the ribosome for the synthesis of proteins. The centrality of this process in various biomedical disciplines such as cell biology, evolution and biotechnology, encouraged the development of dozens of mathematical and computational models of translation in recent years. These models aimed at capturing various biophysical aspects of the process. The objective of this review is to survey these models, focusing on those based and/or validated on real large-scale genomic data. We consider aspects such as the complexity of the models, the biophysical aspects they regard and the predictions they may provide. Furthermore, we survey the central systems biology discoveries reported on their basis. This review demonstrates the fundamental advantages of employing computational biophysical translation models in general, and discusses the relative advantages of the different approaches and the challenges in the field. PMID:27591251

  5. Inflation with a graceful exit in a random landscape

    NASA Astrophysics Data System (ADS)

    Pedro, F. G.; Westphal, A.

    2017-03-01

    We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.

  6. Application of modern computer-aided technologies in the production of individual bone graft: A case report.

    PubMed

    Mirković, Sinisa; Budak, Igor; Puskar, Tatjana; Tadić, Ana; Sokac, Mario; Santosi, Zeljko; Djurdjević-Mirković, Tatjana

    2015-12-01

    Autologous bone (bone derived from the patient himself) is considered to be the "gold standard" in the treatment of bone defects and a partially atrophic alveolar ridge. However, large defects and bone losses are difficult to restore in this manner, because extraction of large amounts of autologous tissue can cause donor-site problems. Alternatively, data from a computed tomographic (CT) scan can be used to shape a precise 3D homologous bone block using a computer-aided design-computer-aided manufacturing (CAD-CAM) system. A 63-year-old male patient was referred to the Clinic of Dentistry of Vojvodina in Novi Sad because of tooth loss in the right lateral region of the lower jaw. Clinical examination revealed a pronounced resorption of the residual ridge of the lower jaw in the aforementioned region, both horizontal and vertical. After clinical examination, the patient was referred for a 3D cone-beam CT (CBCT) scan that enables visualization of bony structures and accurate measurement of dimensions of the residual alveolar ridge. Considering the large extent of bone resorption, the required ridge augmentation was more than 3 mm in height and 2 mm in width along a length of some 2 cm, so the use of granular material was excluded. After consulting prosthodontists and engineers from the Faculty of Technical Sciences in Novi Sad, we decided to fabricate an individual (custom) bovine-derived bone graft designed according to the obtained 3D CBCT scan. Application of 3D CBCT images, computer-aided systems and software in manufacturing custom bone grafts represents the most recent method of guided bone regeneration. This method substantially reduces recovery time and carries minimal risk of postoperative complications, yet the results fully satisfy the requirements of both the patient and the therapist.

  7. Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.

    PubMed

    Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado

    2017-01-01

    Exact stochastic simulation is an indispensable tool for a quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire and updating the system state according to a probability that is proportional to the reaction propensity. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection sampling on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides a favorable scaling for the computational complexity in simulating large reaction networks. We benchmark our new algorithm against state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
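
    A simplified sketch of the bound-and-reject selection step described above: candidates are drawn in proportion to precomputed propensity upper bounds, and the exact propensity is evaluated only to accept or reject the candidate. The grouping (composition) of reactions and the firing-time bookkeeping of the full exact algorithm are deliberately omitted here.

      import random

      def select_reaction(upper_bounds, exact_propensity):
          # upper_bounds: list of propensity upper bounds, one per reaction
          # exact_propensity: function index -> exact propensity at the current state
          total = sum(upper_bounds)
          while True:
              r = random.random() * total          # candidate drawn proportional to its bound
              acc = 0.0
              for j, b in enumerate(upper_bounds):
                  acc += b
                  if r <= acc:
                      break
              if random.random() * upper_bounds[j] <= exact_propensity(j):
                  return j                          # accepted with probability a_j / bound_j

      # Toy usage with made-up bounds and propensities:
      bounds = [1.2, 0.5, 3.0]
      print(select_reaction(bounds, lambda j: 0.8 * bounds[j]))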

  8. Legume genome evolution viewed through the Medicago truncatula and Lotus japonicus genomes

    PubMed Central

    Cannon, Steven B.; Sterck, Lieven; Rombauts, Stephane; Sato, Shusei; Cheung, Foo; Gouzy, Jérôme; Wang, Xiaohong; Mudge, Joann; Vasdewani, Jayprakash; Schiex, Thomas; Spannagl, Manuel; Monaghan, Erin; Nicholson, Christine; Humphray, Sean J.; Schoof, Heiko; Mayer, Klaus F. X.; Rogers, Jane; Quétier, Francis; Oldroyd, Giles E.; Debellé, Frédéric; Cook, Douglas R.; Retzel, Ernest F.; Roe, Bruce A.; Town, Christopher D.; Tabata, Satoshi; Van de Peer, Yves; Young, Nevin D.

    2006-01-01

    Genome sequencing of the model legumes, Medicago truncatula and Lotus japonicus, provides an opportunity for large-scale sequence-based comparison of two genomes in the same plant family. Here we report synteny comparisons between these species, including details about chromosome relationships, large-scale synteny blocks, microsynteny within blocks, and genome regions lacking clear correspondence. The Lotus and Medicago genomes share a minimum of 10 large-scale synteny blocks, each with substantial collinearity and frequently extending the length of whole chromosome arms. The proportion of genes syntenic and collinear within each synteny block is relatively homogeneous. Medicago–Lotus comparisons also indicate similar and largely homogeneous gene densities, although gene-containing regions in Mt occupy 20–30% more space than Lj counterparts, primarily because of larger numbers of Mt retrotransposons. Because the interpretation of genome comparisons is complicated by large-scale genome duplications, we describe synteny, synonymous substitutions and phylogenetic analyses to identify and date a probable whole-genome duplication event. There is no direct evidence for any recent large-scale genome duplication in either Medicago or Lotus but instead a duplication predating speciation. Phylogenetic comparisons place this duplication within the Rosid I clade, clearly after the split between legumes and Salicaceae (poplar). PMID:17003129

  9. One-Dimensional Convective Thermal Evolution Calculation Using a Modified Mixing Length Theory: Application to Saturnian Icy Satellites

    NASA Astrophysics Data System (ADS)

    Kamata, Shunichi

    2018-01-01

    Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection, for a bottom-heated convective layer. Adopting this new definition of l, I investigate the thermal evolution of Saturnian icy satellites, Dione and Enceladus, under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a thick global subsurface ocean suggested from geophysical analyses. Dynamical tides may be able to account for such an amount of heat, though the reference viscosity of Dione's ice and the ammonia content of Dione's ocean need to be very high. Otherwise, a thick global ocean in Dione cannot be maintained, implying that its shell is not in a minimum stress state.

  10. A METHOD FOR COUPLING DYNAMICAL AND COLLISIONAL EVOLUTION OF DUST IN CIRCUMSTELLAR DISKS: THE EFFECT OF A DEAD ZONE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Charnoz, Sebastien; Taillifet, Esther, E-mail: charnoz@cea.fr

    Dust is a major component of protoplanetary and debris disks as it is the main observable signature of planetary formation. However, since dust dynamics are size-dependent (because of gas drag or radiation pressure), any attempt to understand the full dynamical evolution of circumstellar dusty disks that neglects the coupling of collisional evolution with dynamical evolution is thwarted because of the feedback between these two processes. Here, a new hybrid Lagrangian/Eulerian code is presented that overcomes some of these difficulties. The particles representing 'dust clouds' are tracked individually in a Lagrangian way. This system is then mapped on an Eulerian spatial grid, inside the cells of which the local collisional evolutions are computed. Finally, the system is remapped back into a collection of discrete Lagrangian particles, keeping their number constant. An application example of dust growth in a turbulent protoplanetary disk at 1 AU is presented. First, the growth of dust is considered in the absence of a dead zone and the vertical distribution of dust is self-consistently computed. It is found that the mass is rapidly dominated by particles about a fraction of a millimeter in size. Then the same case with an embedded dead zone is investigated and it is found that coagulation is much more efficient and produces, in a short timescale, 1-10 cm dust pebbles that dominate the mass. These pebbles may then be accumulated into embryo-sized objects inside large-scale turbulent structures as shown recently.

  11. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography.

    PubMed

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-04-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model's template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO.

  12. Impact of number of repeated scans on model observer performance for a low-contrast detection task in computed tomography

    PubMed Central

    Ma, Chi; Yu, Lifeng; Chen, Baiyu; Favazza, Christopher; Leng, Shuai; McCollough, Cynthia

    2016-01-01

    Channelized Hotelling observer (CHO) models have been shown to correlate well with human observers for several phantom-based detection/classification tasks in clinical computed tomography (CT). A large number of repeated scans were used to achieve an accurate estimate of the model’s template. The purpose of this study is to investigate how the experimental and CHO model parameters affect the minimum required number of repeated scans. A phantom containing 21 low-contrast objects was scanned on a 128-slice CT scanner at three dose levels. Each scan was repeated 100 times. For each experimental configuration, the low-contrast detectability, quantified as the area under receiver operating characteristic curve, Az, was calculated using a previously validated CHO with randomly selected subsets of scans, ranging from 10 to 100. Using Az from the 100 scans as the reference, the accuracy from a smaller number of scans was determined. Our results demonstrated that the minimum number of repeated scans increased when the radiation dose level decreased, object size and contrast level decreased, and the number of channels increased. As a general trend, it increased as the low-contrast detectability decreased. This study provides a basis for the experimental design of task-based image quality assessment in clinical CT using CHO. PMID:27284547

  13. Solving SAT Problem Based on Hybrid Differential Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Kunqi; Zhang, Jingmin; Liu, Gang; Kang, Lishan

    The satisfiability (SAT) problem is an NP-complete problem. Based on an analysis of the problem, SAT is translated equivalently into an optimization problem of minimizing an objective function. A hybrid differential evolution algorithm is proposed to solve the satisfiability problem. It makes full use of the strong local search capacity of the hill-climbing algorithm and the strong global search capability of the differential evolution algorithm, which compensates for their respective disadvantages, improves the efficiency of the algorithm and avoids stagnation. The experimental results show that the hybrid algorithm is efficient in solving the SAT problem.
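
    A minimal sketch of the idea above (not the authors' algorithm): treat SAT as minimizing the number of unsatisfied clauses, explore globally with differential evolution over real vectors thresholded to truth values, and refine each candidate with a bit-flip hill climb. The 3-SAT instance and all parameters are illustrative.

      import random

      clauses = [[1, -2, 3], [-1, 2], [2, 3], [-3, -1]]   # literal k>0: var k true; k<0: negated
      n_vars = 3

      def unsat(a):        # objective: number of unsatisfied clauses
          return sum(not any((lit > 0) == a[abs(lit) - 1] for lit in c) for c in clauses)

      def hill_climb(a):   # greedy bit-flip local search
          best, improved = unsat(a), True
          while improved:
              improved = False
              for i in range(n_vars):
                  a[i] = not a[i]
                  if unsat(a) < best:
                      best, improved = unsat(a), True
                  else:
                      a[i] = not a[i]               # undo non-improving flip
          return a, best

      def de_sat(pop_size=10, gens=50, F=0.8, CR=0.9):
          pop = [[random.random() for _ in range(n_vars)] for _ in range(pop_size)]
          for _ in range(gens):
              for i in range(pop_size):
                  a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
                  trial = [a[k] + F * (b[k] - c[k]) if random.random() < CR else pop[i][k]
                           for k in range(n_vars)]
                  cand, score = hill_climb([v > 0.5 for v in trial])
                  if score <= unsat([v > 0.5 for v in pop[i]]):
                      pop[i] = [1.0 if v else 0.0 for v in cand]
                  if score == 0:
                      return cand                   # satisfying assignment found
          return min(([v > 0.5 for v in p] for p in pop), key=unsat)

      print(de_sat())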

  14. Large Eddy Simulation of a Turbulent Jet

    NASA Technical Reports Server (NTRS)

    Webb, A. T.; Mansour, Nagi N.

    2001-01-01

    Here we present the results of a Large Eddy Simulation of a non-buoyant jet issuing from a circular orifice in a wall, and developing in neutral surroundings. The effects of the subgrid scales on the large eddies have been modeled with the dynamic large eddy simulation model applied to the fully 3D domain in spherical coordinates. The simulation captures the unsteady motions of the large-scales within the jet as well as the laminar motions in the entrainment region surrounding the jet. The computed time-averaged statistics (mean velocity, concentration, and turbulence parameters) compare well with laboratory data without invoking an empirical entrainment coefficient as employed by line integral models. The use of the large eddy simulation technique allows examination of unsteady and inhomogeneous features such as the evolution of eddies and the details of the entrainment process.

  15. Computation of the target state and feedback controls for time optimal consensus in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Mulla, Ameer K.; Patil, Deepak U.; Chakraborty, Debraj

    2018-02-01

    N identical agents with bounded inputs aim to reach a common target state (consensus) in the minimum possible time. Algorithms for computing this time-optimal consensus point, the control law to be used by each agent and the time taken for the consensus to occur, are proposed. Two types of multi-agent systems are considered, namely (1) coupled single-integrator agents on a plane and (2) double-integrator agents on a line. At the initial time instant, each agent is assumed to have access to the state information of all the other agents. An algorithm, using convexity of attainable sets and Helly's theorem, is proposed to compute the final consensus target state and the minimum time to achieve this consensus. Further, parts of the computation are parallelised amongst the agents such that each agent has to perform computations of O(N^2) run time complexity. Finally, local feedback time-optimal control laws are synthesised to drive each agent to the target point in minimum time. During this part of the operation, the controller for each agent uses measurements of only its own states and does not need to communicate with any neighbouring agents.
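
    For the planar single-integrator case with a common input bound, the geometric content of the problem can be illustrated as follows (a standard special case stated here for intuition, not necessarily the paper's exact formulation): with dynamics \dot{x}_i = u_i and \|u_i\| \le u_{\max}, the minimum consensus time is

      T^{*} = \min_{x} \max_{1 \le i \le N} \frac{\|x_i(0) - x\|}{u_{\max}},

    so the time-optimal consensus point x^{*} is the center of the minimum enclosing ball of the initial positions and T^{*} is its radius divided by u_{\max}.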

  16. Computer simulations of optimum boost and buck-boost converters

    NASA Technical Reports Server (NTRS)

    Rahman, S.

    1982-01-01

    The development of mathematical models suitable for minimum-weight boost and buck-boost converter designs is presented. The facility of an augmented Lagrangian (ALAG) multiplier-based nonlinear programming technique is demonstrated for minimum-weight design optimizations of boost and buck-boost power converters. ALAG-based computer simulation results for those two minimum-weight designs are discussed. Certain important features of ALAG are presented in the framework of a comprehensive design example for boost and buck-boost power converter design optimization. The study provides refreshing design insight into power converters and presents such information as weight and loss profiles of various semiconductor components and magnetics as a function of the switching frequency.
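
    A toy augmented-Lagrangian (ALAG) loop of the kind referred to above, applied to a stand-in problem: minimize a weight-like objective w(x) subject to a single equality constraint g(x) = 0. The objective, constraint and parameters are illustrative assumptions; the actual converter models are not reproduced here.

      import numpy as np
      from scipy.optimize import minimize

      def w(x):                     # stand-in "component weight" objective
          return x[0]**2 + 2.0 * x[1]**2

      def g(x):                     # stand-in design constraint (e.g. a required loss level)
          return x[0] + x[1] - 1.0

      lam, rho = 0.0, 10.0          # multiplier estimate and penalty parameter
      x = np.array([0.0, 0.0])
      for _ in range(20):
          # minimize the augmented Lagrangian for the current multiplier estimate
          x = minimize(lambda y: w(y) + lam * g(y) + 0.5 * rho * g(y)**2, x).x
          lam += rho * g(x)         # first-order multiplier update
      print(x, w(x), g(x))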

  17. Practical aspects of protein co-evolution.

    PubMed

    Ochoa, David; Pazos, Florencio

    2014-01-01

    Co-evolution is a fundamental aspect of Evolutionary Theory. At the molecular level, co-evolutionary linkages between protein families have been used as indicators of protein interactions and functional relationships for a long time. Due to the complexity of the problem and the amount of genomic data required for these approaches to achieve good performances, it took a relatively long time from the appearance of the first ideas and concepts to the quotidian application of these approaches and their incorporation into the standard toolboxes of bioinformaticians and molecular biologists. Today, these methodologies are mature (both in terms of performance and usability/implementation), and the genomic information that feeds them is large enough to allow their general application. This review tries to summarize the current landscape of co-evolution-based methodologies, with a strong emphasis on describing interesting cases where their application to important biological systems, alone or in combination with other computational and experimental approaches, has allowed new insight to be gained into them.

  18. Practical aspects of protein co-evolution

    PubMed Central

    Ochoa, David; Pazos, Florencio

    2014-01-01

    Co-evolution is a fundamental aspect of Evolutionary Theory. At the molecular level, co-evolutionary linkages between protein families have been used as indicators of protein interactions and functional relationships for a long time. Due to the complexity of the problem and the amount of genomic data required for these approaches to achieve good performances, it took a relatively long time from the appearance of the first ideas and concepts to the quotidian application of these approaches and their incorporation into the standard toolboxes of bioinformaticians and molecular biologists. Today, these methodologies are mature (both in terms of performance and usability/implementation), and the genomic information that feeds them is large enough to allow their general application. This review tries to summarize the current landscape of co-evolution-based methodologies, with a strong emphasis on describing interesting cases where their application to important biological systems, alone or in combination with other computational and experimental approaches, has allowed new insight to be gained into them. PMID:25364721

  19. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes.

    PubMed

    Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo

    2010-09-15

    Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
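
    A minimal sketch of the Minimum Curvilinearity distance estimate described above: build the minimum spanning tree of the Euclidean distance graph of the samples, then take pairwise path lengths over the tree as the curvilinear distances fed to embedding or clustering. The data below are random placeholders.

      import numpy as np
      from scipy.spatial.distance import pdist, squareform
      from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

      def minimum_curvilinear_distances(X):
          d = squareform(pdist(X))                   # full Euclidean distance matrix
          mst = minimum_spanning_tree(d)             # sparse tree with n-1 edges
          # path lengths over the undirected tree approximate curvilinear distances
          return shortest_path(mst, directed=False)

      X = np.random.default_rng(1).normal(size=(30, 100))   # 30 samples, 100 features
      D_mc = minimum_curvilinear_distances(X)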

  20. Constraining Landscape History and Glacial Erosivity Using Paired Cosmogenic Nuclides in Upernavik, Northwest Greenland

    NASA Technical Reports Server (NTRS)

    Corbett, Lee B.; Bierman, Paul R.; Graly, Joseph A.; Neumann, Thomas A.; Rood, Dylan H.

    2013-01-01

    High-latitude landscape evolution processes have the potential to preserve old, relict surfaces through burial by cold-based, nonerosive glacial ice. To investigate landscape history and age in the high Arctic, we analyzed in situ cosmogenic ¹⁰Be and ²⁶Al in 33 rocks from Upernavik, northwest Greenland. We sampled adjacent bedrock-boulder pairs along a 100 km transect at elevations up to 1000 m above sea level. Bedrock samples gave significantly older apparent exposure ages than corresponding boulder samples, and minimum limiting ages increased with elevation. Two-isotope (²⁶Al/¹⁰Be) calculations on 20 of the 33 samples yielded minimum limiting exposure durations up to 112 k.y., minimum limiting burial durations up to 900 k.y., and minimum limiting total histories up to 990 k.y. The prevalence of ¹⁰Be and ²⁶Al inherited from previous periods of exposure, especially in bedrock samples at high elevation, indicates that these areas record long and complex surface exposure histories, including significant periods of burial with little subglacial erosion. The long total histories suggest that these high-elevation surfaces were largely preserved beneath cold-based, nonerosive ice or snowfields for at least the latter half of the Quaternary. Because of high concentrations of inherited nuclides, only the six youngest boulder samples appear to record the timing of ice retreat. These six samples suggest deglaciation of the Upernavik coast at 11.3 +/- 0.5 ka (average +/- 1 standard deviation). There is no difference in deglaciation age along the 100 km sample transect, indicating that the ice-marginal position retreated rapidly at rates of approximately 120 m yr⁻¹.
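
    The simplest form of the two-nuclide bookkeeping behind such minimum exposure and burial estimates (neglecting erosion and muon production; the study's actual model may be more detailed) treats each nuclide concentration as production during exposure followed by decay during burial,

      N = \frac{P}{\lambda}\left(1 - e^{-\lambda t_{\mathrm{exp}}}\right) e^{-\lambda t_{\mathrm{bur}}},

    applied separately to ¹⁰Be and ²⁶Al with their own production rates P and decay constants \lambda; because ²⁶Al decays faster, the measured ²⁶Al/¹⁰Be ratio falls below the production ratio during burial, which is what constrains the minimum burial duration.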

  1. Large-eddy simulation of sand dune morphodynamics

    NASA Astrophysics Data System (ADS)

    Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team

    2015-11-01

    Sand dunes are natural features that form under the complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate the suspended-sediment and bed-load transports, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and simulated about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grants EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.

  2. Effect of Variable Spatial Scales on USLE-GIS Computations

    NASA Astrophysics Data System (ADS)

    Patil, R. J.; Sharma, S. K.

    2017-12-01

    Use of appropriate spatial scale is very important in Universal Soil Loss Equation (USLE) based spatially distributed soil erosion modelling. This study aimed at assessment of annual rates of soil erosion at different spatial scales/grid sizes and analysing how changes in spatial scales affect USLE-GIS computations using simulation and statistical variabilities. Efforts have been made in this study to recommend an optimum spatial scale for further USLE-GIS computations for management and planning in the study area. The present research study was conducted in Shakkar River watershed, situated in Narsinghpur and Chhindwara districts of Madhya Pradesh, India. Remote Sensing and GIS techniques were integrated with Universal Soil Loss Equation (USLE) to predict spatial distribution of soil erosion in the study area at four different spatial scales viz; 30 m, 50 m, 100 m, and 200 m. Rainfall data, soil map, digital elevation model (DEM) and an executable C++ program, and satellite image of the area were used for preparation of the thematic maps for various USLE factors. Annual rates of soil erosion were estimated for 15 years (1992 to 2006) at four different grid sizes. The statistical analysis of four estimated datasets showed that sediment loss dataset at 30 m spatial scale has a minimum standard deviation (2.16), variance (4.68), percent deviation from observed values (2.68 - 18.91 %), and highest coefficient of determination (R2 = 0.874) among all the four datasets. Thus, it is recommended to adopt this spatial scale for USLE-GIS computations in the study area due to its minimum statistical variability and better agreement with the observed sediment loss data. This study also indicates large scope for use of finer spatial scales in spatially distributed soil erosion modelling.
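
    For reference, the factor product underlying these USLE-GIS computations, evaluated per grid cell at the chosen resolution, is the conventional

      A = R \cdot K \cdot L \cdot S \cdot C \cdot P,

    where A is the mean annual soil loss, R the rainfall erosivity factor, K the soil erodibility factor, L and S the slope length and steepness factors, C the cover-management factor and P the support-practice factor.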

  3. System enhancements of Mesoscale Analysis and Space Sensor (MASS) computer system

    NASA Technical Reports Server (NTRS)

    Hickey, J. S.; Karitani, S.

    1985-01-01

    The interactive information processing for the mesoscale analysis and space sensor (MASS) program is reported. The development and implementation of new spaceborne remote sensing technology to observe and measure atmospheric processes is described. The space measurements and conventional observational data are processed together to gain an improved understanding of the mesoscale structure and dynamical evolution of the atmosphere relative to cloud development and precipitation processes. A Research Computer System consisting of three primary computers was developed (HP-1000F, Perkin-Elmer 3250, and Harris/6) which provides a wide range of capabilities for processing and displaying interactively large volumes of remote sensing data. The development of a MASS data base management and analysis system on the HP-1000F computer and extending these capabilities by integration with the Perkin-Elmer and Harris/6 computers using the MSFC's Apple III microcomputer workstations is described. The objectives are: to design hardware enhancements for computer integration and to provide data conversion and transfer between machines.

  4. Large-deformation modal coordinates for nonrigid vehicle dynamics

    NASA Technical Reports Server (NTRS)

    Likins, P. W.; Fleischer, G. E.

    1972-01-01

    The derivation of minimum-dimension sets of discrete-coordinate and hybrid-coordinate equations of motion of a system consisting of an arbitrary number of hinge-connected rigid bodies assembled in tree topology is presented. These equations are useful for the simulation of dynamical systems that can be idealized as tree-like arrangements of substructures, with each substructure consisting of either a rigid body or a collection of elastically interconnected rigid bodies restricted to small relative rotations at each connection. Thus, some of the substructures represent elastic bodies subjected to small strains or local deformations, but possibly large gross deformations. In the hybrid formulation, distributed coordinates, referred to herein as large-deformation modal coordinates, are used for the deformations of these substructures. The equations are in a form suitable for incorporation into one or more computer programs to be used as multipurpose tools in the simulation of spacecraft and other complex electromechanical systems.

  5. Resolving Properties of Polymers and Nanoparticle Assembly through Coarse-Grained Computational Studies.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grest, Gary S.

    2017-09-01

    Coupled length and time scales determine the dynamic behavior of polymers and polymer nanocomposites and underlie their unique properties. To resolve the properties over large time and length scales it is imperative to develop coarse-grained models which retain the atomistic specificity. Here we probe the degree of coarse graining required to simultaneously retain significant atomistic details and access large length and time scales. The degree of coarse graining in turn sets the minimum length scale instrumental in defining polymer properties and dynamics. Using polyethylene as a model system, we probe how the coarse-graining scale affects the measured dynamics with different numbers of methylene groups per coarse-grained bead. Using these models we simulate polyethylene melts for times over 500 ms to study the viscoelastic properties of well-entangled polymer melts and large nanoparticle assembly as the nanoparticles are driven close enough to form nanostructures.

  6. On the impacts of computing daily temperatures as the average of the daily minimum and maximum temperatures

    NASA Astrophysics Data System (ADS)

    Villarini, Gabriele; Khouakhi, Abdou; Cunningham, Evan

    2017-12-01

    Daily temperature values are generally computed as the average of the daily minimum and maximum observations, which can lead to biases in the estimation of daily averaged values. This study examines the impacts of these biases on the calculation of climatology and trends in temperature extremes at 409 sites in North America with at least 25 years of complete hourly records. Our results show that the calculation of daily temperature based on the average of minimum and maximum daily readings leads to an overestimation of the daily values of 10+% when focusing on extremes and on values above (below) high (low) thresholds. Moreover, the effects of the data processing method on trend estimation are generally small, even though the use of the daily minimum and maximum readings reduces the power of trend detection (about 5-10% fewer trends detected in comparison with the reference data).
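    The comparison described above is easy to reproduce on synthetic data: compute the "min-max" daily mean (Tmin + Tmax)/2 and the mean of all hourly readings, then look at the difference. The hourly series below is a made-up stand-in for a station record, not the North American data used in the study.

```python
import numpy as np
import pandas as pd

# Minimal sketch: bias of (Tmin+Tmax)/2 relative to the true hourly mean.
rng = np.random.default_rng(1)
hours = pd.date_range("2000-01-01", periods=24 * 365, freq="60min")
diurnal = 6.0 * np.sin(2 * np.pi * (hours.hour - 9) / 24)       # asymmetric diurnal cycle
seasonal = 12.0 * np.sin(2 * np.pi * hours.dayofyear / 365)
temps = pd.Series(15 + seasonal + diurnal + rng.normal(0, 1.5, len(hours)),
                  index=hours)

daily_true = temps.resample("D").mean()                           # reference daily mean
daily_minmax = (temps.resample("D").min() + temps.resample("D").max()) / 2

bias = daily_minmax - daily_true
print("mean bias:", bias.mean().round(3), "deg C")
print("bias on the 10 warmest days:",
      bias[daily_true.nlargest(10).index].mean().round(3), "deg C")
```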

  7. Temperature fine-tunes Mediterranean Arabidopsis thaliana life-cycle phenology geographically.

    PubMed

    Marcer, A; Vidigal, D S; James, P M A; Fortin, M-J; Méndez-Vigo, B; Hilhorst, H W M; Bentsink, L; Alonso-Blanco, C; Picó, F X

    2018-01-01

    To understand how adaptive evolution in life-cycle phenology operates in plants, we need to unravel the effects of geographic variation in putative agents of natural selection on life-cycle phenology by considering all key developmental transitions and their co-variation patterns. We address this goal by quantifying the temperature-driven and geographically varying relationship between seed dormancy and flowering time in the annual Arabidopsis thaliana across the Iberian Peninsula. We used data on genetic variation in two major life-cycle traits, seed dormancy (DSDS50) and flowering time (FT), in a collection of 300 A. thaliana accessions from the Iberian Peninsula. The geographically varying relationship between life-cycle traits and minimum temperature, a major driver of variation in DSDS50 and FT, was explored with geographically weighted regressions (GWR). The environmentally varying correlation between DSDS50 and FT was analysed by means of sliding window analysis across a minimum temperature gradient. Maximum local adjustments between minimum temperature and life-cycle traits were obtained in the southwest Iberian Peninsula, an area with the highest minimum temperatures. In contrast, in off-southwest locations, the effects of minimum temperature on DSDS50 were rather constant across the region, whereas those of minimum temperature on FT were more variable, with peaks of strong local adjustments of GWR models in central and northwest Spain. Sliding window analysis identified a minimum temperature turning point in the relationship between DSDS50 and FT around a minimum temperature of 7.2 °C. Above this minimum temperature turning point, the variation in the FT/DSDS50 ratio became rapidly constrained and the negative correlation between FT and DSDS50 did not increase any further with increasing minimum temperatures. The southwest Iberian Peninsula emerges as an area where variation in life-cycle phenology appears to be restricted by the duration and severity of the hot summer drought. The temperature-driven varying relationship between DSDS50 and FT detected environmental boundaries for the co-evolution between FT and DSDS50 in A. thaliana. In the context of global warming, we conclude that A. thaliana phenology from the southwest Iberian Peninsula, determined by early flowering and deep seed dormancy, might become the most common life-cycle phenotype for this annual plant in the region. © 2017 German Botanical Society and The Royal Botanical Society of the Netherlands.
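    The sliding-window correlation idea used above can be illustrated with a few lines of code: sort accessions along a minimum-temperature gradient, then compute the trait correlation within overlapping windows. The trait values below are synthetic placeholders and the window size is arbitrary, chosen only for illustration.

```python
import numpy as np
import pandas as pd

# Minimal sketch of a sliding-window correlation between two traits (FT and
# DSDS50) along a minimum-temperature gradient; all data are synthetic.
rng = np.random.default_rng(2)
n = 300
tmin = np.sort(rng.uniform(-2, 12, n))            # minimum temperature gradient
ft = 120 - 5 * tmin + rng.normal(0, 8, n)         # flowering time (days)
dsds50 = 20 + 4 * tmin + rng.normal(0, 6, n)      # seed dormancy (DSDS50)

df = pd.DataFrame({"tmin": tmin, "ft": ft, "dsds50": dsds50})

window = 60                                        # accessions per window (assumed)
rows = []
for start in range(0, n - window + 1, 10):
    w = df.iloc[start:start + window]
    rows.append({"tmin_mid": w["tmin"].median(),
                 "r_ft_dsds50": w["ft"].corr(w["dsds50"])})
profile = pd.DataFrame(rows)
print(profile)   # how the FT-DSDS50 correlation changes along the tmin gradient
```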

  8. Digital data collection in paleoanthropology.

    PubMed

    Reed, Denné; Barr, W Andrew; Mcpherron, Shannon P; Bobe, René; Geraads, Denis; Wynn, Jonathan G; Alemseged, Zeresenay

    2015-01-01

    Understanding patterns of human evolution across space and time requires synthesizing data collected by independent research teams, and this effort is part of a larger trend to develop cyber infrastructure and e-science initiatives. At present, paleoanthropology cannot easily answer basic questions about the total number of fossils and artifacts that have been discovered, or exactly how those items were collected. In this paper, we examine the methodological challenges to data integration, with the hope that mitigating the technical obstacles will further promote data sharing. At a minimum, data integration efforts must document what data exist and how the data were collected (discovery), after which we can begin standardizing data collection practices with the aim of achieving combined analyses (synthesis). This paper outlines a digital data collection system for paleoanthropology. We review the relevant data management principles for a general audience and supplement this with technical details drawn from over 15 years of paleontological and archeological field experience in Africa and Europe. The system outlined here emphasizes free open-source software (FOSS) solutions that work on multiple computer platforms; it builds on recent advances in open-source geospatial software and mobile computing. © 2015 Wiley Periodicals, Inc.

  9. Adsorption and solvation of ethanol at the water liquid-vapor interface: a molecular dynamics study

    NASA Technical Reports Server (NTRS)

    Wilson, M. A.; Pohorille, A.

    1997-01-01

    The free energy profiles of methanol and ethanol at the water liquid-vapor interface at 310K were calculated using molecular dynamics computer simulations. Both alcohols exhibit a pronounced free energy minimum at the interface and, therefore, have positive adsorption at this interface. The surface excess was computed from the Gibbs adsorption isotherm and was found to be in good agreement with experimental results. Neither compound exhibits a free energy barrier between the bulk and the surface adsorbed state. Scattering calculations of ethanol molecules from a gas phase thermal distribution indicate that the mass accommodation coefficient is 0.98, and the molecules become thermalized within 10 ps of striking the interface. It was determined that the formation of the solvation structure around the ethanol molecule at the interface is not the rate-determining step in its uptake into water droplets. The motion of an ethanol molecule in a water lamella was followed for 30 ns. The time evolution of the probability distribution of finding an ethanol molecule that was initially located at the interface is very well described by the diffusion equation on the free energy surface.

  10. On the Improvement of Convergence Performance for Integrated Design of Wind Turbine Blade Using a Vector Dominating Multi-objective Evolution Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, L.; Wang, T. G.; Wu, J. H.; Cheng, G. P.

    2016-09-01

    A novel multi-objective optimization algorithm incorporating evolution strategies and vector mechanisms, referred to as VD-MOEA, is proposed and applied to the aerodynamic-structural integrated design of wind turbine blades. In the algorithm, a set of uniformly distributed vectors is constructed to guide the population toward the Pareto front rapidly while maintaining population diversity with high efficiency. For example, two- and three-objective designs of a 1.5 MW wind turbine blade are subsequently carried out for the optimization objectives of maximum annual energy production, minimum blade mass, and minimum extreme root thrust. The results show that the Pareto optimal solutions can be obtained in a single simulation run and are uniformly distributed in the objective space, maximally maintaining population diversity. In comparison to conventional evolutionary algorithms, VD-MOEA displays a dramatic improvement in both convergence and diversity preservation for handling complex problems with many variables, objectives, and constraints. This provides a reliable high-performance optimization approach for the aerodynamic-structural integrated design of wind turbine blades.
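    The Pareto machinery underlying any such multi-objective EA can be sketched in a few lines: a dominance test and extraction of the non-dominated front from a scored population. This is not the VD-MOEA algorithm itself (the vector-guidance step is omitted), and the objective values for (negative annual energy production, blade mass, root thrust) below are hypothetical.

```python
import numpy as np

# Minimal sketch of Pareto dominance and front extraction (all objectives minimized).
def dominates(a, b):
    """True if solution a is at least as good as b everywhere and strictly better somewhere."""
    return np.all(a <= b) and np.any(a < b)

def pareto_front(objs):
    """Indices of non-dominated rows of an (n_solutions, n_objectives) array."""
    front = []
    for i, a in enumerate(objs):
        if not any(dominates(objs[j], a) for j in range(len(objs)) if j != i):
            front.append(i)
    return front

rng = np.random.default_rng(3)
objs = np.column_stack([
    -rng.uniform(5.5e6, 6.5e6, 50),    # negative annual energy production (maximize AEP)
    rng.uniform(20e3, 30e3, 50),       # blade mass [kg]
    rng.uniform(300e3, 420e3, 50),     # extreme root thrust [N]
])
print("non-dominated designs:", pareto_front(objs))
```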

  11. Signal Processing for a Lunar Array: Minimizing Power Consumption

    NASA Technical Reports Server (NTRS)

    D'Addario, Larry; Simmons, Samuel

    2011-01-01

    Motivation for the study: (1) a Lunar Radio Array for low-frequency, high-redshift Dark Ages/Epoch of Reionization observations (z = 6-50, f = 30-200 MHz); (2) high-precision cosmological measurements of 21 cm H I line fluctuations; (3) probing the universe before first star formation and providing information about the intergalactic medium and the evolution of large-scale structures; (4) testing whether the current cosmological model accurately describes the Universe before reionization. The Lunar Radio Array concept is (1) a radio interferometer based on the far side of the Moon, (1a) necessary for precision measurements, (1b) shielded from Earth-based and solar RFI, (1c) with no permanent ionosphere; (2) a minimum collecting area of approximately 1 square km and a brightness sensitivity of 10 mK; (3) several technologies must be developed before deployment. The power needed to process signals from a large array of nonsteerable elements is not prohibitive, even for the Moon, and even in current technology. Two different concepts have been proposed: (1) the Dark Ages Radio Interferometer (DALI) and (2) the Lunar Array for Radio Cosmology (LARC).

  12. Information Extraction from Large-Multi-Layer Social Networks

    DTIC Science & Technology

    2015-08-06

    …mization [4]. Methods that fall into this category include spectral algorithms, modularity methods, and methods that rely on statistical inference… Snijders and Chris Baerveldt, "A multilevel network study of the effects of delinquent behavior on friendship evolution," Journal of Mathematical Sociology… [10] Ulrike Luxburg, "A tutorial on spectral clustering," Statistics and Computing, vol. 17, no. 4, pp. 395–416, Dec. 2007. [11] R. A. Fisher, "On…

  13. Quantum dynamics of the Einstein-Rosen wormhole throat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kunstatter, Gabor; Peltola, Ari; Louko, Jorma

    2011-02-15

    We consider the polymer quantization of the Einstein wormhole throat theory for an eternal Schwarzschild black hole. We numerically solve the difference equation describing the quantum evolution of an initially Gaussian, semiclassical wave packet. As expected from previous work on loop quantum cosmology, the wave packet remains semiclassical until it nears the classical singularity at which point it enters a quantum regime in which the fluctuations become large. The expectation value of the radius reaches a minimum as the wave packet is reflected from the origin and emerges to form a near-Gaussian but asymmetrical semiclassical state at late times. The value of the minimum depends in a nontrivial way on the initial mass/energy of the pulse, its width, and the polymerization scale. For wave packets that are sufficiently narrow near the bounce, the semiclassical bounce radius is obtained. Although the numerics become difficult to control in this limit, we argue that for pulses of finite width the bounce persists as the polymerization scale goes to zero, suggesting that in this model the loop quantum gravity effects mimicked by polymer quantization do not play a crucial role in the quantum bounce.

  14. Computational evolution: taking liberties.

    PubMed

    Correia, Luís

    2010-09-01

    Evolution has, for a long time, inspired computer scientists to produce computer models mimicking its behavior. Evolutionary algorithm (EA) is one of the areas where this approach has flourished. EAs have been used to model and study evolution, but they have been especially developed for their aptitude as optimization tools for engineering. Developed models are quite simple in comparison with their natural sources of inspiration. However, since EAs run on computers, we have the freedom, especially in optimization models, to test approaches both realistic and outright speculative, from the biological point of view. In this article, we discuss different common evolutionary algorithm models, and then present some alternatives of interest. These include biologically inspired models, such as co-evolution and, in particular, symbiogenetics and outright artificial operators and representations. In each case, the advantages of the modifications to the standard model are identified. The other area of computational evolution, which has allowed us to study basic principles of evolution and ecology dynamics, is the development of artificial life platforms for open-ended evolution of artificial organisms. With these platforms, biologists can test theories by directly manipulating individuals and operators, observing the resulting effects in a realistic way. An overview of the most prominent of such environments is also presented. If instead of artificial platforms we use the real world for evolving artificial life, then we are dealing with evolutionary robotics (ERs). A brief description of this area is presented, analyzing its relations to biology. Finally, we present the conclusions and identify future research avenues in the frontier of computation and biology. Hopefully, this will help to draw the attention of more biologists and computer scientists to the benefits of such interdisciplinary research.

  15. Domain decomposition methods for the parallel computation of reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1988-01-01

    Domain decomposition is a natural route to parallel computing for partial differential equation solvers. Subdomains of which the original domain of definition is comprised are assigned to independent processors at the price of periodic coordination between processors to compute global parameters and maintain the requisite degree of continuity of the solution at the subdomain interfaces. In the domain-decomposed solution of steady multidimensional systems of PDEs by finite difference methods using a pseudo-transient version of Newton iteration, the only portion of the computation which generally stands in the way of efficient parallelization is the solution of the large, sparse linear systems arising at each Newton step. For some Jacobian matrices drawn from an actual two-dimensional reacting flow problem, comparisons are made between relaxation-based linear solvers and also preconditioned iterative methods of Conjugate Gradient and Chebyshev type, focusing attention on both iteration count and global inner product count. The generalized minimum residual method with block-ILU preconditioning is judged the best serial method among those considered, and parallel numerical experiments on the Encore Multimax demonstrate for it approximately 10-fold speedup on 16 processors.
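    The "generalized minimum residual method with (block-)ILU preconditioning" judged best above can be sketched with standard sparse solvers. The example uses SciPy's generic ILU rather than a block-ILU tailored to a reacting-flow Jacobian, and the matrix is a random diagonally dominated stand-in rather than an actual Jacobian from the study.

```python
import numpy as np
from scipy.sparse import random as sparse_random, identity
from scipy.sparse.linalg import gmres, spilu, LinearOperator

# Minimal sketch of GMRES with ILU preconditioning on a synthetic sparse system.
rng = np.random.default_rng(4)
n = 500
A = (sparse_random(n, n, density=0.01, random_state=4, format="csc")
     + 10.0 * identity(n, format="csc")).tocsc()   # diagonally dominated test matrix
b = rng.normal(size=n)

ilu = spilu(A, drop_tol=1e-4)                      # incomplete LU factorization
M = LinearOperator((n, n), matvec=ilu.solve)       # preconditioner action, M ~ A^{-1}

x, info = gmres(A, b, M=M, restart=50)
print("converged" if info == 0 else f"info={info}",
      "| residual norm:", np.linalg.norm(A @ x - b))
```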

  16. A computational imaging target specific detectivity metric

    NASA Astrophysics Data System (ADS)

    Preece, Bradley L.; Nehmetallah, George

    2017-05-01

    Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to play a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs available today or proposed for the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. Therefore, the detectivity metric is designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
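    The linear matched-filter (Hotelling-type) SNR named above is a closed-form quantity, SNR² = sᵀC⁻¹s, for a known target signature s and noise covariance C. The signature and covariance below are synthetic placeholders; the study's "computational space" standardization details are not reproduced here.

```python
import numpy as np

# Minimal sketch of the optimal linear matched-filter SNR for a known target.
rng = np.random.default_rng(5)
n = 64                                                    # measurement dimension
s = np.exp(-0.5 * ((np.arange(n) - 32) / 4.0) ** 2)       # target signature (assumed)
L = rng.normal(scale=0.1, size=(n, n))
C = 0.05 * np.eye(n) + L @ L.T                            # symmetric positive-definite noise covariance

w = np.linalg.solve(C, s)                                 # matched-filter weights C^{-1} s
snr = np.sqrt(s @ w)                                      # SNR = sqrt(s^T C^{-1} s)
print("matched-filter SNR:", round(float(snr), 2))
```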

  17. Equalization of energy density in boiling water reactors (as exemplified by WB-50). Development and testing of WB -50 computational model on the basis of MCU-RR code

    NASA Astrophysics Data System (ADS)

    Chertkov, Yu B.; Disyuk, V. V.; Pimenov, E. Yu; Aksenova, N. V.

    2017-01-01

    Within the framework of research into the possibility and prospects of power density equalization in boiling water reactors (exemplified by WB-50), work was undertaken to improve the prior computational model of the WB-50 reactor implemented in the MCU-RR software. Analysis of prior work showed that critical-state calculations have a deviation of calculated reactivity exceeding ±0.3% (ΔKef/Kef) for minimum concentrations of boric acid in the reactor water, reaching 2% for maximum concentration values. The axial coefficient of nonuniform burnup distribution reaches high values in the WB-50 reactor; thus, the computational model needed refinement to take into account burnup inhomogeneity along the fuel assembly height. At this stage, computational results with a mean square deviation of less than 0.7% (ΔKef/Kef) and a dispersion of design values of ±1% (ΔK/K) are deemed acceptable. Further lowering of these parameters apparently requires root-cause analysis of such large values and more attention to experimental measurement techniques.

  18. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    NASA Astrophysics Data System (ADS)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE, such as X-ray computed tomography (CT) and ultrasonic and eddy-current flaw characterization and imaging. In many applications, it is common to have a bias toward the solution with minimum squared L2 norm without any physical justification. When it is known a priori that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the squared L2 norm yields an image that is equally in agreement with the available data while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency of the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support-minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data from compact scatterers without knowledge of the zero-of-time. The main drawback of this type of inversion is its computational cost. In order to make this type of constrained inversion available for common use, work needs to be performed in three areas: (1) exploitation of state-of-the-art parallel computation, (2) improvement of the theoretical formulation of the scattering process for better computational efficiency, and (3) development of better methods for guiding the nonlinear inversion. (Abstract shortened by UMI.)

  19. Stochastic hybrid systems for studying biochemical processes.

    PubMed

    Singh, Abhyudai; Hespanha, João P

    2010-11-13

    Many protein and mRNA species occur at low molecular counts within cells, and hence are subject to large stochastic fluctuations in copy numbers over time. Development of computationally tractable frameworks for modelling stochastic fluctuations in population counts is essential to understand how noise at the cellular level affects biological function and phenotype. We show that stochastic hybrid systems (SHSs) provide a convenient framework for modelling the time evolution of population counts of different chemical species involved in a set of biochemical reactions. We illustrate recently developed techniques that allow fast computations of the statistical moments of the population count, without having to run computationally expensive Monte Carlo simulations of the biochemical reactions. Finally, we review different examples from the literature that illustrate the benefits of using SHSs for modelling biochemical processes.
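    For the simplest reaction networks, the "moments without Monte Carlo" idea above reduces to a small set of ODEs. The sketch below integrates the exact mean and variance equations for a linear birth-death gene expression model; the rate constants are hypothetical and this is only a special case of the general SHS moment machinery described in the paper.

```python
from scipy.integrate import solve_ivp

# Minimal sketch: moment ODEs for production at rate k and degradation at rate g*x.
k, g = 20.0, 0.5          # hypothetical rates [molecules/min], [1/min]

def moment_odes(t, y):
    mean, var = y
    dmean = k - g * mean
    dvar = k + g * mean - 2 * g * var     # exact for this linear birth-death process
    return [dmean, dvar]

sol = solve_ivp(moment_odes, (0.0, 20.0), y0=[0.0, 0.0])
mean_ss, var_ss = sol.y[0, -1], sol.y[1, -1]
print("steady state: mean ~", round(mean_ss, 1), " variance ~", round(var_ss, 1))
# The stationary law here is Poisson(k/g), so mean = variance = 40.
```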

  20. Heavy Analysis and Light Virtualization of Water Use Data with Python

    NASA Astrophysics Data System (ADS)

    Kim, H.; Bijoor, N.; Famiglietti, J. S.

    2014-12-01

    Water utilities possess a large amount of water data that could be used to inform urban ecohydrology, management decisions, and conservation policies, but such data are rarely analyzed owing to the difficulty of analysis, visualization, and interpretation. We have developed a high-performance computing resource for this purpose. We partnered with 6 water agencies in Orange County, which provided 10 years of parcel-level monthly water use billing data for a pilot study. The first challenge we overcame was to correct human errors and unify the many different data formats across agencies. Second, we tested and applied experimental approaches to the data, including complex calculations, with high efficiency. Third, we developed a method to refine the data so they can be browsed along a time-series index and/or via geospatial queries with high efficiency, no matter how large the data. Python scientific libraries were the best match for handling arbitrary data sets in our environment. Further milestones include agency entry, sets of formulae, and maintaining 15M rows × 70 columns of data with high performance for CPU-bound processes. To deal with billions of rows, we built an analysis virtualization stack by leveraging IPython parallel computing. With this architecture, each agency can be treated as one computing node or virtual machine that maintains its own data sets. For example, a big agency could use a large node, and a small agency could use a micro node. Under the minimum required raw data specs, more agencies could be analyzed. The program developed in this study simplifies data analysis, visualization, and interpretation of large water datasets, and can be used to analyze large data volumes from water agencies nationally or worldwide.
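    Since the abstract names Python directly, a tiny sketch of the kind of parcel-level cleaning and aggregation it describes is given below. The table of (agency, parcel, month, use) records is synthetic, and the cleaning rule and column names are placeholders rather than the project's actual schema.

```python
import numpy as np
import pandas as pd

# Minimal sketch: clean parcel-level monthly billing records, then aggregate
# along a time index and by agency, in the spirit of the milestones above.
rng = np.random.default_rng(6)
months = pd.date_range("2004-01-01", "2013-12-01", freq="MS")
records = pd.DataFrame({
    "agency": rng.choice([f"agency_{i}" for i in range(6)], size=50_000),
    "parcel_id": rng.integers(0, 5_000, size=50_000),
    "month": rng.choice(months, size=50_000),
    "use_ccf": rng.gamma(shape=2.0, scale=6.0, size=50_000),   # hundred cubic feet
})

# "Refine" step: drop impossible readings (hypothetical validity range).
clean = records[records["use_ccf"].between(0, 500)].copy()

# Time-series and agency aggregations.
monthly_by_agency = (clean.groupby(["agency", "month"])["use_ccf"]
                          .sum().unstack("agency").sort_index())
print(monthly_by_agency.head())
print("per-parcel mean monthly use:",
      clean.groupby("parcel_id")["use_ccf"].mean().mean().round(2), "ccf")
```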

  1. A Lagrangian analysis of cold cloud clusters and their life cycles with satellite observations

    PubMed Central

    Esmaili, Rebekah Bradley; Tian, Yudong; Vila, Daniel Alejandro; Kim, Kyu-Myong

    2018-01-01

    Cloud movement and evolution signify the complex water and energy transport in the atmosphere-ocean-land system. Detecting, clustering, and tracking clouds as semi-coherent cluster objects enables study of their evolution which can complement climate model simulations and enhance satellite retrieval algorithms, where there are large gaps between overpasses. Using an area-overlap cluster tracking algorithm, in this study we examine the trajectories, horizontal extent, and brightness temperature variations of millions of individual cloud clusters over their lifespan, from infrared satellite observations at 30-minute, 4-km resolution, for a period of 11 years. We found that the majority of cold clouds were both small and short-lived and that their frequency and location are influenced by El Niño. More importantly, this large sample of individually tracked clouds shows their horizontal size and temperature evolution. Longer lived clusters tended to achieve their temperature and size maturity milestones at different times, while these stages often occurred simultaneously in shorter lived clusters. On average, clusters with this lag also exhibited a greater rainfall contribution than those where minimum temperature and maximum size stages occurred simultaneously. Furthermore, by examining the diurnal cycle of cluster development over Africa and the Indian subcontinent, we observed differences in the local timing of the maximum occurrence at different life cycle stages. Over land there was a strong diurnal peak in the afternoon while over the ocean there was a semi-diurnal peak composed of longer-lived clusters in the early morning hours and shorter-lived clusters in the afternoon. Building on regional specific work, this study provides a long-term, high-resolution, and global survey of object-based cloud characteristics. PMID:29744257
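    The area-overlap tracking idea used here can be sketched with generic image tools: threshold a brightness-temperature frame to find cold clusters, label them, and link labels across consecutive frames by counting shared pixels. The threshold, minimum overlap, and the two synthetic frames below are illustrative assumptions, not the satellite data or the exact algorithm of the study.

```python
import numpy as np
from scipy import ndimage

# Minimal sketch of area-overlap cluster tracking between two frames.
def label_cold_clusters(tb, threshold=235.0):
    """Label contiguous pixels colder than the threshold (in K)."""
    labels, n = ndimage.label(tb < threshold)
    return labels, n

def link_by_overlap(labels_t0, labels_t1, min_overlap_px=5):
    """Map each cluster at t0 to its best-overlapping cluster at t1."""
    links = {}
    for lab in range(1, labels_t0.max() + 1):
        overlap = labels_t1[labels_t0 == lab]
        overlap = overlap[overlap > 0]
        if overlap.size >= min_overlap_px:
            links[lab] = int(np.bincount(overlap).argmax())
    return links

# Two synthetic frames: a warm background with two cold blobs that drift eastward.
frame0 = np.full((200, 200), 260.0)
yy, xx = np.mgrid[0:200, 0:200]
for cy, cx in [(60, 50), (140, 120)]:
    frame0 -= 40.0 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 200.0)
frame1 = np.roll(frame0, shift=5, axis=1)

l0, _ = label_cold_clusters(frame0)
l1, _ = label_cold_clusters(frame1)
print(link_by_overlap(l0, l1))     # e.g. {1: 1, 2: 2}
```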

  2. Three-phase Interstellar Medium in Galaxies Resolving Evolution with Star Formation and Supernova Feedback (TIGRESS): Algorithms, Fiducial Model, and Convergence

    NASA Astrophysics Data System (ADS)

    Kim, Chang-Goo; Ostriker, Eve C.

    2017-09-01

    We introduce TIGRESS, a novel framework for multi-physics numerical simulations of the star-forming interstellar medium (ISM) implemented in the Athena MHD code. The algorithms of TIGRESS are designed to spatially and temporally resolve key physical features, including: (1) the gravitational collapse and ongoing accretion of gas that leads to star formation in clusters; (2) the explosions of supernovae (SNe), both near their progenitor birth sites and from runaway OB stars, with time delays relative to star formation determined by population synthesis; (3) explicit evolution of SN remnants prior to the onset of cooling, which leads to the creation of the hot ISM; (4) photoelectric heating of the warm and cold phases of the ISM that tracks the time-dependent ambient FUV field from the young cluster population; (5) large-scale galactic differential rotation, which leads to epicyclic motion and shears out overdense structures, limiting large-scale gravitational collapse; (6) accurate evolution of magnetic fields, which can be important for vertical support of the ISM disk as well as angular momentum transport. We present tests of the newly implemented physics modules, and demonstrate application of TIGRESS in a fiducial model representing the solar neighborhood environment. We use a resolution study to demonstrate convergence and evaluate the minimum resolution Δx required to correctly recover several ISM properties, including the star formation rate, wind mass-loss rate, disk scale height, turbulent and Alfvénic velocity dispersions, and volume fractions of warm and hot phases. For the solar neighborhood model, all these ISM properties are converged at Δx ≤ 8 pc.

  3. A Lagrangian analysis of cold cloud clusters and their life cycles with satellite observations.

    PubMed

    Esmaili, Rebekah Bradley; Tian, Yudong; Vila, Daniel Alejandro; Kim, Kyu-Myong

    2016-10-16

    Cloud movement and evolution signify the complex water and energy transport in the atmosphere-ocean-land system. Detecting, clustering, and tracking clouds as semi-coherent cluster objects enables study of their evolution which can complement climate model simulations and enhance satellite retrieval algorithms, where there are large gaps between overpasses. Using an area-overlap cluster tracking algorithm, in this study we examine the trajectories, horizontal extent, and brightness temperature variations of millions of individual cloud clusters over their lifespan, from infrared satellite observations at 30-minute, 4-km resolution, for a period of 11 years. We found that the majority of cold clouds were both small and short-lived and that their frequency and location are influenced by El Niño. More importantly, this large sample of individually tracked clouds shows their horizontal size and temperature evolution. Longer lived clusters tended to achieve their temperature and size maturity milestones at different times, while these stages often occurred simultaneously in shorter lived clusters. On average, clusters with this lag also exhibited a greater rainfall contribution than those where minimum temperature and maximum size stages occurred simultaneously. Furthermore, by examining the diurnal cycle of cluster development over Africa and the Indian subcontinent, we observed differences in the local timing of the maximum occurrence at different life cycle stages. Over land there was a strong diurnal peak in the afternoon while over the ocean there was a semi-diurnal peak composed of longer-lived clusters in the early morning hours and shorter-lived clusters in the afternoon. Building on regional specific work, this study provides a long-term, high-resolution, and global survey of object-based cloud characteristics.

  4. A Lagrangian Analysis of Cold Cloud Clusters and Their Life Cycles With Satellite Observations

    NASA Technical Reports Server (NTRS)

    Esmaili, Rebekah Bradley; Tian, Yudong; Vila, Daniel Alejandro; Kim, Kyu-Myong

    2016-01-01

    Cloud movement and evolution signify the complex water and energy transport in the atmosphere-ocean-land system. Detecting, clustering, and tracking clouds as semi coherent cluster objects enables study of their evolution which can complement climate model simulations and enhance satellite retrieval algorithms, where there are large gaps between overpasses. Using an area-overlap cluster tracking algorithm, in this study we examine the trajectories, horizontal extent, and brightness temperature variations of millions of individual cloud clusters over their lifespan, from infrared satellite observations at 30-minute, 4-km resolution, for a period of 11 years. We found that the majority of cold clouds were both small and short-lived and that their frequency and location are influenced by El Nino. More importantly, this large sample of individually tracked clouds shows their horizontal size and temperature evolution. Longer lived clusters tended to achieve their temperature and size maturity milestones at different times, while these stages often occurred simultaneously in shorter lived clusters. On average, clusters with this lag also exhibited a greater rainfall contribution than those where minimum temperature and maximum size stages occurred simultaneously. Furthermore, by examining the diurnal cycle of cluster development over Africa and the Indian subcontinent, we observed differences in the local timing of the maximum occurrence at different life cycle stages. Over land there was a strong diurnal peak in the afternoon while over the ocean there was a semi-diurnal peak composed of longer-lived clusters in the early morning hours and shorter-lived clusters in the afternoon. Building on regional specific work, this study provides a long-term, high-resolution, and global survey of object-based cloud characteristics.

  5. Molecules-in-molecules fragment-based method for the calculation of chiroptical spectra of large molecules: Vibrational circular dichroism and Raman optical activity spectra of alanine polypeptides.

    PubMed

    Jose, K V Jovan; Raghavachari, Krishnan

    2016-12-01

    The molecules-in-molecules (MIM) fragment-based method has recently been adapted to evaluate the chiroptical (vibrational circular dichroism [VCD] and Raman optical activity [ROA]) spectra of large molecules such as peptides. In the MIM-VCD and MIM-ROA methods, the relevant higher energy derivatives of the parent molecule are assembled from the corresponding derivatives of smaller fragment subsystems. In addition, the missing long-range interfragment interactions are accounted for at a computationally less expensive level of theory (MIM2). In this work we employed the MIM-VCD and MIM-ROA fragment-based methods to explore the evolution of the chiroptical spectroscopic characteristics of 3₁₀-helix, α-helix, β-hairpin, γ-turn, and β-extended conformers of gas-phase polyalanine (chain length n = 6-14). The different conformers of polyalanine show distinctive features in the MIM chiroptical spectra, and the associated spectral intensities increase with system size. For a better understanding of the site-specific effects on the vibrational spectra, isotopic substitutions were also performed employing the MIM method. An increasing redshift with the number of isotopically labeled ¹³C=O functional groups in the peptide molecule was seen. For larger polypeptides, we implemented the two-step-MIM model to circumvent the high computational expense associated with the evaluation of chiroptical spectra at a high level of theory using large basis sets. The chiroptical spectra of the α-(alanine)₂₀ polypeptide obtained using the two-step-MIM model, including continuum solvation effects, show good agreement with the full calculations and experiment. This benchmark study suggests that the MIM-fragment approach can assist in predicting and interpreting chiroptical spectra of large polypeptides. © 2016 Wiley Periodicals, Inc.

  6. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) by applying a double-precision computation of a variable-parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a VPLM with time-dependent, non-stationary parameters and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us to identify the robustness of applying nonlinear time series theory to forecasting with the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
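    One simple way to see what a "reliable computation time" means in practice is to iterate the map in double precision and in much higher precision, and record the first step at which the two runs disagree by more than a tolerance. The sketch below does this for the fixed-parameter logistic map; the parameter values, tolerance, and the use of mpmath as the high-precision reference are assumptions for illustration, not the paper's procedure.

```python
from mpmath import mp, mpf

# Minimal sketch of estimating a reliable computation time for the logistic map.
mp.dps = 60                      # 60 significant digits for the reference run

def reliable_steps(x0=0.2, r=4.0, n_max=200, tol=1e-3):
    x_double = x0
    x_ref = mpf(x0)              # same starting double value, tracked in high precision
    for n in range(1, n_max + 1):
        x_double = r * x_double * (1.0 - x_double)
        x_ref = mpf(r) * x_ref * (1 - x_ref)
        if abs(x_double - float(x_ref)) > tol:
            return n             # double precision no longer reliable after n steps
    return n_max

print("reliable for ~", reliable_steps(), "iterations")
```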

  7. Computability, Gödel's incompleteness theorem, and an inherent limit on the predictability of evolution

    PubMed Central

    Day, Troy

    2012-01-01

    The process of evolutionary diversification unfolds in a vast genotypic space of potential outcomes. During the past century, there have been remarkable advances in the development of theory for this diversification, and the theory's success rests, in part, on the scope of its applicability. A great deal of this theory focuses on a relatively small subset of the space of potential genotypes, chosen largely based on historical or contemporary patterns, and then predicts the evolutionary dynamics within this pre-defined set. To what extent can such an approach be pushed to a broader perspective that accounts for the potential open-endedness of evolutionary diversification? There have been a number of significant theoretical developments along these lines but the question of how far such theory can be pushed has not been addressed. Here a theorem is proven demonstrating that, because of the digital nature of inheritance, there are inherent limits on the kinds of questions that can be answered using such an approach. In particular, even in extremely simple evolutionary systems, a complete theory accounting for the potential open-endedness of evolution is unattainable unless evolution is progressive. The theorem is closely related to Gödel's incompleteness theorem, and to the halting problem from computability theory. PMID:21849390

  8. On the Minimum Core Mass for Giant Planet Formation

    NASA Astrophysics Data System (ADS)

    Piso, Ana-Maria; Youdin, Andrew; Murray-Clay, Ruth

    2013-07-01

    The core accretion model proposes that giant planets form by the accretion of gas onto a solid protoplanetary core. Previous studies have found that there exists a "critical core mass" past which hydrostatic solutions can no longer be found and unstable atmosphere collapse occurs. This core mass is typically quoted to be around 10 M⊕. In standard calculations of the critical core mass, planetesimal accretion deposits enough heat to alter the luminosity of the atmosphere, increasing the core mass required for the atmosphere to collapse. In this study we consider the limiting case in which planetesimal accretion is negligible and Kelvin-Helmholtz contraction dominates the luminosity evolution of the planet. We develop a two-layer atmosphere model with an inner convective region and an outer radiative zone that matches onto the protoplanetary disk, and we determine the minimum core mass for a giant planet to form within the typical disk lifetime for a variety of disk conditions. We denote this mass as the critical core mass. The absolute minimum core mass required to nucleate atmosphere collapse is ~8 M⊕ at 5 AU and steadily decreases to ~3.5 M⊕ at 100 AU, for an ideal diatomic gas with a solar composition and a standard ISM opacity law. Lower opacity and disk temperature significantly reduce the critical core mass, while a decrease in the mean molecular weight of the nebular gas results in a larger critical core mass. Our results yield lower mass cores than corresponding studies for large planetesimal accretion rates.

  9. Free energy decomposition of protein-protein interactions.

    PubMed

    Noskov, S Y; Lim, C

    2001-08-01

    A free energy decomposition scheme has been developed and tested on antibody-antigen and protease-inhibitor binding for which accurate experimental structures were available for both free and bound proteins. Using the x-ray coordinates of the free and bound proteins, the absolute binding free energy was computed assuming additivity of three well-defined, physical processes: desolvation of the x-ray structures, isomerization of the x-ray conformation to a nearby local minimum in the gas-phase, and subsequent noncovalent complex formation in the gas phase. This free energy scheme, together with the Generalized Born model for computing the electrostatic solvation free energy, yielded binding free energies in remarkable agreement with experimental data. Two assumptions commonly used in theoretical treatments; viz., the rigid-binding approximation (which assumes no conformational change upon complexation) and the neglect of vdW interactions, were found to yield large errors in the binding free energy. Protein-protein vdW and electrostatic interactions between complementary surfaces over a relatively large area (1400-1700 Å²) were found to drive antibody-antigen and protease-inhibitor binding.

  10. Interferometer for Space Station Windows

    NASA Technical Reports Server (NTRS)

    Hall, Gregory

    2003-01-01

    Inspection of space station windows for micrometeorite damage would be a difficult task in situ using current inspection techniques. Commercially available optical profilometers and inspection systems are relatively large, about the size of a desktop computer tower, and require a stable platform to inspect the test object. Also, many devices currently available are designed for laboratory or controlled environments and require external computer control. This paper presents an approach using a highly developed optical interferometer to inspect the windows from inside the space station itself using a self-contained, hand-held device. The interferometer would be capable, at a minimum, of detecting damage as small as one ten-thousandth of an inch in diameter and depth while interrogating a relatively large area. The device is still at the proof-of-concept stage. The background section of this paper discusses the current state of the art of profilometers as well as the desired configuration of the self-contained, hand-held device. Then, a discussion of the developments and findings that will allow the configuration change is given, with suggested approaches appearing in the proof-of-concept section.

  11. History of the numerical aerodynamic simulation program

    NASA Technical Reports Server (NTRS)

    Peterson, Victor L.; Ballhaus, William F., Jr.

    1987-01-01

    The Numerical Aerodynamic Simulation (NAS) program has reached a milestone with the completion of the initial operating configuration of the NAS Processing System Network. This achievement is the first major milestone in the continuing effort to provide a state-of-the-art supercomputer facility for the national aerospace community and to serve as a pathfinder for the development and use of future supercomputer systems. The underlying factors that motivated the initiation of the program are first identified and then discussed. These include the emergence and evolution of computational aerodynamics as a powerful new capability in aerodynamics research and development, the computer power required for advances in the discipline, the complementary nature of computation and wind tunnel testing, and the need for the government to play a pathfinding role in the development and use of large-scale scientific computing systems. Finally, the history of the NAS program is traced from its inception in 1975 to the present time.

  12. A comparison of approaches for finding minimum identifying codes on graphs

    NASA Astrophysics Data System (ADS)

    Horan, Victoria; Adachi, Steve; Bak, Stanley

    2016-05-01

    In order to formulate mathematical conjectures likely to be true, a number of base cases must be determined. However, many combinatorial problems are NP-hard and the computational complexity makes this research approach difficult using a standard brute force approach on a typical computer. One sample problem explored is that of finding a minimum identifying code. To work around the computational issues, a variety of methods are explored and consist of a parallel computing approach using MATLAB, an adiabatic quantum optimization approach using a D-Wave quantum annealing processor, and lastly using satisfiability modulo theory (SMT) and corresponding SMT solvers. Each of these methods requires the problem to be formulated in a unique manner. In this paper, we address the challenges of computing solutions to this NP-hard problem with respect to each of these methods.
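    For very small graphs, the base cases mentioned above can be generated with a direct brute-force search, which also makes the problem definition concrete: a code C is identifying if every vertex's closed neighborhood intersected with C is nonempty and distinct from every other vertex's. The sketch below is only practical for tiny graphs and uses a 6-cycle purely as an illustration; it is not one of the parallel, quantum-annealing, or SMT approaches compared in the paper.

```python
from itertools import combinations

# Minimal brute-force sketch of the minimum identifying code problem.
def closed_neighborhoods(adj):
    return {v: {v} | set(nbrs) for v, nbrs in adj.items()}

def is_identifying_code(code, nbhd):
    signatures = [frozenset(nbhd[v] & code) for v in nbhd]
    nonempty = all(sig for sig in signatures)
    distinct = len(set(signatures)) == len(signatures)
    return nonempty and distinct

def minimum_identifying_code(adj):
    nbhd = closed_neighborhoods(adj)
    vertices = sorted(adj)
    for k in range(1, len(vertices) + 1):          # smallest codes first
        for cand in combinations(vertices, k):
            if is_identifying_code(set(cand), nbhd):
                return set(cand)
    return None      # e.g. graphs with "twin" vertices admit no identifying code

cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(minimum_identifying_code(cycle6))     # a 6-cycle has a minimum code of size 3
```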

  13. Basilar-membrane responses to broadband noise modeled using linear filters with rational transfer functions.

    PubMed

    Recio-Spinoso, Alberto; Fan, Yun-Hui; Ruggero, Mario A

    2011-05-01

    Basilar-membrane responses to white Gaussian noise were recorded using laser velocimetry at basal sites of the chinchilla cochlea with characteristic frequencies near 10 kHz and first-order Wiener kernels were computed by cross correlation of the stimuli and the responses. The presence or absence of minimum-phase behavior was explored by fitting the kernels with discrete linear filters with rational transfer functions. Excellent fits to the kernels were obtained with filters with transfer functions including zeroes located outside the unit circle, implying nonminimum-phase behavior. These filters accurately predicted basilar-membrane responses to other noise stimuli presented at the same level as the stimulus for the kernel computation. Fits with all-pole and other minimum-phase discrete filters were inferior to fits with nonminimum-phase filters. Minimum-phase functions predicted from the amplitude functions of the Wiener kernels by Hilbert transforms were different from the measured phase curves. These results, which suggest that basilar-membrane responses do not have the minimum-phase property, challenge the validity of models of cochlear processing, which incorporate minimum-phase behavior. © 2011 IEEE
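    The minimum-phase question examined above comes down to where the zeros of a fitted discrete rational transfer function lie relative to the unit circle. The sketch below checks that property for hand-picked coefficients; the coefficients are arbitrary examples, not filters fitted to Wiener kernels.

```python
import numpy as np

# Minimal sketch of a minimum-phase test for H(z) = B(z)/A(z).
def is_minimum_phase(b, a):
    """True if all zeros (numerator roots) and poles (denominator roots)
    lie strictly inside the unit circle."""
    zeros, poles = np.roots(b), np.roots(a)
    return bool(np.all(np.abs(zeros) < 1.0) and np.all(np.abs(poles) < 1.0))

b_min = [1.0, -0.5]                # zero at z = 0.5  -> minimum phase
b_nonmin = [1.0, -2.0]             # zero at z = 2.0  -> nonminimum phase
a = [1.0, -0.3]                    # stable pole at z = 0.3
print(is_minimum_phase(b_min, a), is_minimum_phase(b_nonmin, a))   # True False
```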

  14. Fluid thermodynamics control thermal weakening during earthquake rupture.

    NASA Astrophysics Data System (ADS)

    Acosta, M.; Passelegue, F. X.; Schubnel, A.; Violay, M.

    2017-12-01

    Although fluids are pervasive among tectonic faults, thermo-hydro-mechanical couplings during earthquake slip remain unclear. We report full dynamic records of stick-slip events, performed on saw-cut Westerly Granite samples loaded under triaxial conditions at stresses representative of the upper continental crust (σ3' ≈ 70 MPa). Three fluid pressure conditions were tested: dry, low, and high pressure (i.e., Pf = 0, 1, and 25 MPa). Friction (μ) evolution recorded at 10 MHz sampling frequency showed that, for a single event, μ initially increased from its static pre-stress level μ0 to a peak value μp; it then abruptly dropped to a minimum dynamic value μd before recovering to its residual value μr, after which the fault reloaded elastically. Under dry and low fluid-pressure conditions, dynamic friction (μd) was extremely low (~0.2) and co-seismic slip (δ) was large (~250 and ~200 μm, respectively) due to flash heating (FH) and melting of asperities, as supported by microstructures. Conversely, at Pf = 25 MPa, μd was higher (~0.45), δ was smaller (~80 μm), and frictional melting was not found. We calculated flash temperatures at asperity contacts including heat buffering by the on-fault fluid. Considering the isobaric evolution of water's thermodynamic properties with rising temperature showed that pressurized water controlled fault heating and weakening, through sharp variations of specific heat (cpw) and density (ρw) at water's phase transitions. Injecting the computed flash temperatures into a slip-on-a-plane model for thermal pressurization (TP) showed that: (i) if Pf was low enough that frictional heating induced a liquid/vapour phase transition, FH operated, allowing very low μd during earthquakes; (ii) conversely, if Pf was high enough that shear heating induced a sharp phase transition directly from liquid to the supercritical state, an extraordinary rise in water's specific heat acted as a major energy sink, inhibiting FH and limiting TP, allowing higher dynamic fault strengths. Further extrapolation of this simplified model to mid- and lower-crustal depths shows that the large cpw rise during phase transitions makes TP the dominant weakening mechanism up to 5 km depth. Increasing depth allows somewhat larger shear stress and a reduced cpw rise, and thus substantial shear heating at low slip rates, favouring FH for fault weakening.

  15. Sensitivity of Austempering Heat Treatment of Ductile Irons to Changes in Process Parameters

    NASA Astrophysics Data System (ADS)

    Boccardo, A. D.; Dardati, P. M.; Godoy, L. A.; Celentano, D. J.

    2018-06-01

    Austempered ductile iron (ADI) is frequently obtained by means of a three-step austempering heat treatment. The parameters of this process play a crucial role in the microstructure of the final product. This paper considers the influence of some process parameters (i.e., the initial microstructure of ductile iron and the thermal cycle) on key features of the heat treatment (such as the minimum required time for austenitization and austempering and the microstructure of the final product). A computational simulation of the austempering heat treatment is reported in this work, which accounts for a coupled thermo-metallurgical behavior in terms of the evolution of temperature at the scale of the part being investigated (the macroscale) and the evolution of phases at the scale of microconstituents (the microscale). The paper focuses on the sensitivity of the process by looking at a sensitivity index and scatter plots. The sensitivity indices are determined by using a technique based on the variance of the output. The results of this study indicate that both the initial microstructure and the thermal cycle parameters play a key role in the production of ADI. This work also provides a guideline to help select appropriate process parameter values to obtain parts with a required microstructural characteristic.

  16. A Microworld Approach to the Formalization of Musical Knowledge.

    ERIC Educational Resources Information Center

    Honing, Henkjan

    1993-01-01

    Discusses the importance of applying computational modeling and artificial intelligence techniques to music cognition and computer music research. Recommends three uses of microworlds to trim computational theories to their bare minimum, allowing for better and easier comparison. (CFR)

  17. Automated design of minimum drag light aircraft fuselages and nacelles

    NASA Technical Reports Server (NTRS)

    Smetana, F. O.; Fox, S. R.; Karlin, B. E.

    1982-01-01

    The constrained minimization algorithm of Vanderplaats is applied to the problem of designing minimum drag faired bodies such as fuselages and nacelles. Body drag is computed by a variation of the Hess-Smith code. This variation includes a boundary layer computation. The encased payload provides arbitrary geometric constraints, specified a priori by the designer, below which the fairing cannot shrink. The optimization may include engine cooling air flows entering and exhausting through specific port locations on the body.
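    The structure of such a constrained design problem is easy to sketch: choose fairing radii at a few body stations to minimize a drag estimate, subject to the fairing never shrinking below the payload envelope. The drag surrogate, payload radii, and station count below are invented placeholders standing in for the Hess-Smith/boundary-layer drag computation and the Vanderplaats optimizer used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of constrained minimum-drag fairing design with a crude surrogate.
stations = np.linspace(0.0, 1.0, 8)                        # normalized body stations
payload_radius = 0.08 + 0.12 * np.sin(np.pi * stations)    # envelope that must be enclosed

def drag_surrogate(r):
    ds = stations[1] - stations[0]
    wetted_area = np.sum(2 * np.pi * r) * ds               # skin-friction proxy
    bluntness = np.sum(np.diff(r, 2) ** 2)                 # pressure-drag proxy
    return wetted_area + 50.0 * bluntness

result = minimize(
    drag_surrogate,
    x0=payload_radius + 0.05,                              # feasible starting fairing
    method="SLSQP",
    constraints=[{"type": "ineq", "fun": lambda r: r - payload_radius}],
    bounds=[(0.0, 0.5)] * len(stations),
)
print("optimized radii:", np.round(result.x, 3))
```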

  18. Recent Evolution of the Introductory Curriculum in Computing.

    ERIC Educational Resources Information Center

    Tucker, Allen B.; Garnick, David K.

    1991-01-01

    Traces the evolution of introductory computing courses for undergraduates based on the Association for Computing Machinery (ACM) guidelines published in "Curriculum 78." Changes in the curricula are described, including the role of discrete mathematics and theory; and the need for a broader model for designing introductory courses is…

  19. PARALLEL EVOLUTION OF QUASI-SEPARATRIX LAYERS AND ACTIVE REGION UPFLOWS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mandrini, C. H.; Cristiani, G. D.; Nuevo, F. A.

    2015-08-10

    Persistent plasma upflows were observed with Hinode’s EUV Imaging Spectrometer (EIS) at the edges of active region (AR) 10978 as it crossed the solar disk. We analyze the evolution of the photospheric magnetic and velocity fields of the AR, model its coronal magnetic field, and compute the location of magnetic null-points and quasi-sepratrix layers (QSLs) searching for the origin of EIS upflows. Magnetic reconnection at the computed null points cannot explain all of the observed EIS upflow regions. However, EIS upflows and QSLs are found to evolve in parallel, both temporarily and spatially. Sections of two sets of QSLs, called outer and inner, are found associated to EIS upflow streams having different characteristics. The reconnection process in the outer QSLs is forced by a large-scale photospheric flow pattern, which is present in the AR for several days. We propose a scenario in which upflows are observed, provided that a large enough asymmetry in plasma pressure exists between the pre-reconnection loops and lasts as long as a photospheric forcing is at work. A similar mechanism operates in the inner QSLs; in this case, it is forced by the emergence and evolution of the bipoles between the two main AR polarities. Our findings provide strong support for the results from previous individual case studies investigating the role of magnetic reconnection at QSLs as the origin of the upflowing plasma. Furthermore, we propose that persistent reconnection along QSLs does not only drive the EIS upflows, but is also responsible for the continuous metric radio noise-storm observed in AR 10978 along its disk transit by the Nançay Radio Heliograph.

  20. Language evolution and human-computer interaction

    NASA Technical Reports Server (NTRS)

    Grudin, Jonathan; Norman, Donald A.

    1991-01-01

    Many of the issues that confront designers of interactive computer systems also appear in natural language evolution. Natural languages and human-computer interfaces share as their primary mission the support of extended 'dialogues' between responsive entities. Because in each case one participant is a human being, some of the pressures operating on natural languages, causing them to evolve in order to better support such dialogue, also operate on human-computer 'languages' or interfaces. This does not necessarily push interfaces in the direction of natural language - since one entity in this dialogue is not a human, this is not to be expected. Nonetheless, by discerning where the pressures that guide natural language evolution also appear in human-computer interaction, we can contribute to the design of computer systems and obtain a new perspective on natural languages.

  1. The Evolution of the Earth's Mantle Structure and Surface and Core-mantle Boundary Heat Flux since the Paleozoic

    NASA Astrophysics Data System (ADS)

    Zhang, N.; Zhong, S.

    2010-12-01

    The cause for and time evolution of the seismically observed African and Pacific slow anomalies (i.e., superplumes) are still unclear, with two competing proposals. First, the African and Pacific superplumes have remained largely unchanged for at least the last 300 Ma and possibly much longer. Second, the African superplume formed sometime after the formation of Pangea (i.e., ~330 Ma ago) and the mantle in the African hemisphere was dominated by cold downwelling structures before and during the assembly of Pangea, while the Pacific superplume has been stable for the Pangea supercontinent cycle (i.e., globally a degree-1 structure before the Pangea formation). Here, we construct a plate motion history back to 450 Ma and use it as time-dependent surface boundary conditions in 3-dimensional spherical models of thermochemical mantle convection to study the evolution of mantle structure as well as the surface and core-mantle boundary heat flux. Our results for the mantle structures suggest that while the mantle in the African hemisphere before the assembly of Pangea is dominated by the cold downwelling structure resulting from plate convergence between Gondwana and Laurussia, it is unlikely that the bulk of the African superplume structure can be formed before ~240 Ma (i.e., ~100 Ma after the assembly of Pangea). The evolution of mantle structure has implications for heat flux at the surface and core-mantle boundary (CMB). Our results show that while the plate motion controls the surface heat flux, the major cold downwellings control the core-mantle boundary heat flux. A notable feature in surface heat flux from our models is that the surface heat flux peaks at ~100 Ma but decreases over the last 100 Ma due to the breakup of Pangea and its subsequent plate evolution. The CMB heat flux in the equatorial regions shows two minima, during the periods 320-250 Ma and 120-84 Ma. The first minimum clearly results from the disappearance of a major cold downwelling above the CMB below Pangea after the assembly of Pangea ends the subduction and convergence between Gondwana and Laurussia. The second minimum arises because the break-up of Pangea leads to subduction of much smaller and younger oceanic lithosphere in the equatorial regions of the CMB. Considering the recent suggestion that CMB heat flux in the equatorial regions controls the frequency of magnetic polarity reversals (Olson et al., 2010), our results have important implications for the Kiaman Reversal Superchron and Cretaceous Normal Superchron.

  2. Global- to Micro-Scale Evolution of the Pinatubo Aerosol: Using Composite Data Sets to Build the Picture and Assess Consistency of Different Measurements

    NASA Technical Reports Server (NTRS)

    Russell, P. B.; Pueschel, R. F.; Livingston, J. M.; Bergstrom, R.; Lawless, James G. (Technical Monitor)

    1994-01-01

    This paper brings together the experimental evidence required to build realistic models of the global evolution of physical, chemical, and optical properties of the aerosol resulting from the 1991 Pinatubo volcanic eruption. Such models are needed to compute the effects of the aerosol on atmospheric chemistry, dynamics, radiation, and temperature. Whereas there is now a large and growing body of post-Pinatubo measurements by a variety of techniques, some results are in conflict, and a self-consistent, unified picture is needed, along with an assessment of remaining uncertainties. This paper examines data from photometers, radiometers, impactors, optical counters/sizers, and lidars operated on the ground, aircraft, balloons, and spacecraft.

  3. A single-degenerate channel for the progenitors of Type Ia supernovae with different metallicities

    NASA Astrophysics Data System (ADS)

    Meng, X.; Chen, X.; Han, Z.

    2009-06-01

    A single-degenerate channel for the progenitors of Type Ia supernovae (SNe Ia) is currently accepted, in which a carbon-oxygen white dwarf (CO WD) accretes hydrogen-rich material from its companion, increases its mass to the Chandrasekhar mass limit and then explodes as a SN Ia. Incorporating the prescription of Hachisu et al. for the accretion efficiency into Eggleton's stellar evolution code, and assuming that the prescription is valid for all metallicities, we performed binary stellar evolution calculations for more than 25000 close WD binaries with metallicities Z = 0.06, 0.05, 0.04, 0.03, 0.02, 0.01, 0.004, 0.001, 0.0003 and 0.0001. For our calculations, the companions are assumed to be unevolved or slightly evolved stars (WD + MS). As a result, the initial parameter spaces for SNe Ia at various Z are presented in the orbital period-secondary mass (log P^i, M_2^i) plane. Our study shows that both the initial mass of the secondary and the initial orbital period increase with metallicity. Thus, the minimum mass of the CO WD for SNe Ia decreases with metallicity Z. The difference in the minimum mass may be as large as 0.24 Msolar for different Z. Adopting the results above, we studied the birth rate of SNe Ia for various Z via a binary population synthesis approach. If a single starburst is assumed, SNe Ia occur systematically earlier and the peak value of the birth rate is larger for high Z. The Galactic birth rate from the WD + MS channel is lower than (but comparable to) that inferred from observations. Our study indicates that supernovae like SN 2002ic will not occur in extremely low-metallicity environments, if the delayed dynamical-instability model is appropriate.

  4. Review of integrated digital systems: evolution and adoption

    NASA Astrophysics Data System (ADS)

    Fritz, Lawrence W.

    The factors that are influencing the evolution of photogrammetric and remote sensing technology as it transitions into fully integrated digital systems are reviewed. These factors include societal pressures for new, more timely digital products from the Spatial Information Sciences and the adoption of rapid technological advancements in digital processing hardware and software. Current major developments in leading government mapping agencies of the USA, such as the Digital Production System (DPS) modernization programme at the Defense Mapping Agency, and the Automated Nautical Charting System II (ANCS-II) programme and Integrated Digital Photogrammetric Facility (IDPF) at NOAA/National Ocean Service, illustrate the significant benefits to be realized. These programmes are examples of different levels of integrated systems that have been designed to produce digital products. They provide insights into the management complexities to be considered for very large integrated digital systems. In recognition of computer industry trends, a knowledge-based architecture for managing the complexity of the very large spatial information systems of the future is proposed.

  5. Morphologic Evolution of the Mount St. Helens Crater Area, Washington

    NASA Technical Reports Server (NTRS)

    Beach, G. L.

    1985-01-01

    The large rockslide-avalanche that preceded the eruption of Mount St. Helens on 18 May 1980 removed approximately 2.8 cubic km of material from the summit and north flank of the volcano, forming a horseshoe-shaped crater 2.0 km wide and 3.9 km long. A variety of erosional and depositional processes, notably mass wasting and gully development, acted to modify the topographic configuration of the crater area. To document this morphologic evolution, a series of annual large-scale topographic maps is being produced as a base for comparative geomorphic analysis. Four topographic maps of the Mount St. Helens crater area at a scale of 1:4000 were produced by the National Mapping Division of the U.S. Geological Survey. Stereo aerial photography for the maps was obtained on 23 October 1980, 10 September 1981, 1 September 1982, and 17 August 1983. To quantify topographic changes in the study area, each topographic map is being digitized and corresponding X, Y, and Z values from successive maps are being computer-compared.

  6. Duality quantum algorithm efficiently simulates open quantum systems

    PubMed Central

    Wei, Shi-Jie; Ruan, Dong; Long, Gui-Lu

    2016-01-01

    Because of inevitable coupling with the environment, nearly all practical quantum systems are open systems, where the evolution is not necessarily unitary. In this paper, we propose a duality quantum algorithm for simulating Hamiltonian evolution of an open quantum system. In contrast to unitary evolution in a usual quantum computer, the evolution operator in a duality quantum computer is a linear combination of unitary operators. In this duality quantum algorithm, the time evolution of the open quantum system is realized by using Kraus operators, which are naturally implemented in a duality quantum computer. This duality quantum algorithm has two distinct advantages compared to existing quantum simulation algorithms with unitary evolution operations. Firstly, the query complexity of the algorithm is O(d^3), in contrast to O(d^4) in the existing unitary simulation algorithm, where d is the dimension of the open quantum system. Secondly, by using a truncated Taylor series of the evolution operators, this duality quantum algorithm provides an exponential improvement in precision compared with the previous unitary simulation algorithm. PMID:27464855
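
    As a concrete (purely classical) illustration of the non-unitary, Kraus-operator evolution mentioned above, the NumPy sketch below applies a single-qubit amplitude-damping channel to a density matrix; the damping rate and input state are arbitrary example choices, and this is not the duality quantum algorithm itself.

      import numpy as np

      gamma = 0.3                                   # example damping probability
      K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
      K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
      kraus = [K0, K1]                              # amplitude-damping channel

      # Completeness: sum_k K_k^dagger K_k = I, so the map preserves the trace
      assert np.allclose(sum(K.conj().T @ K for K in kraus), np.eye(2))

      rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)    # |+><+| input state
      rho_out = sum(K @ rho @ K.conj().T for K in kraus)         # rho -> sum_k K rho K^dagger
      print(np.trace(rho_out).real)                              # still 1, though not unitary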

  7. Computational modeling of radiofrequency ablation: evaluation on ex vivo data using ultrasound monitoring

    NASA Astrophysics Data System (ADS)

    Audigier, Chloé; Kim, Younsu; Dillow, Austin; Boctor, Emad M.

    2017-03-01

    Radiofrequency ablation (RFA) is the most widely used minimally invasive ablative therapy for liver cancer, but it is challenged by a lack of patient-specific monitoring. Inter-patient tissue variability and the presence of blood vessels make the prediction of the RFA difficult. A monitoring tool which can be personalized for a given patient during the intervention would be helpful to achieve a complete tumor ablation. However, clinicians do not have access to such a tool, which results in incomplete treatment and a large number of recurrences. Computational models can simulate the phenomena and mechanisms governing this therapy. The temperature evolution as well as the resulting ablation can be modeled. When combined with intraoperative measurements, computational modeling becomes an accurate and powerful tool to gain quantitative understanding and to enable improvements in the ongoing clinical settings. This paper shows how computational models of RFA can be evaluated using intra-operative measurements. First, simulations are used to demonstrate the feasibility of the method, which is then evaluated on two ex vivo datasets. RFA is simulated on a simplified geometry to generate realistic longitudinal temperature maps and the resulting necrosis. Computed temperatures are compared with the temperature evolution recorded using thermometers, and with temperatures monitored by ultrasound (US) in a 2D plane containing the ablation tip. Two ablations are performed on two cadaveric bovine livers, and we achieve an error of 2.2 °C on average between the computed and the thermistor temperatures, and of 1.4 °C and 2.7 °C on average between the temperatures computed and monitored by US during the ablation at two different time points (t = 240 s and t = 900 s).
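
    A minimal sketch of the kind of temperature-evolution computation discussed above: explicit finite differences for 1D heat diffusion around an ablation tip. All parameters (diffusivity, heating rate, grid, duration) are illustrative stand-ins, not the patient-specific model evaluated in the paper.

      import numpy as np

      nx, dx, dt = 101, 1e-3, 0.05          # 10 cm domain, 1 mm spacing, 0.05 s time step
      alpha = 1.4e-7                        # approximate thermal diffusivity of soft tissue, m^2/s
      T = np.full(nx, 37.0)                 # body temperature, deg C
      source = np.zeros(nx)
      source[nx // 2] = 2.0                 # hypothetical heating rate at the tip, deg C/s

      for step in range(int(240 / dt)):     # simulate 240 s of ablation
          lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
          T[1:-1] += dt * (alpha * lap + source[1:-1])
          T[0], T[-1] = 37.0, 37.0          # fixed far-field boundaries

      print(round(T[nx // 2], 1))           # computed tip temperature after 240 s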

  8. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    PubMed

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
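
    The per-site likelihoods that BEAGLE-style libraries accelerate reduce, at their core, to Felsenstein pruning over a tree. The sketch below evaluates one alignment column on a two-taxon tree under the Jukes-Cantor model; the branch lengths, observed states, and uniform root frequencies are example values and nothing here uses the BEAGLE API itself.

      import numpy as np

      def jc69_transition(t):
          """Jukes-Cantor transition probability matrix for branch length t."""
          p_same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
          p_diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
          return np.where(np.eye(4, dtype=bool), p_same, p_diff)

      # Observed states at one column: A at taxon 1, G at taxon 2 (order A, C, G, T)
      tip1, tip2 = np.eye(4)[0], np.eye(4)[2]
      P1, P2 = jc69_transition(0.1), jc69_transition(0.2)   # example branch lengths

      # Partial likelihoods at the root: elementwise product of the children's messages
      root_partials = (P1 @ tip1) * (P2 @ tip2)
      site_likelihood = np.dot(np.full(4, 0.25), root_partials)  # uniform base frequencies
      print(site_likelihood)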

  9. BEAGLE: An Application Programming Interface and High-Performance Computing Library for Statistical Phylogenetics

    PubMed Central

    Ayres, Daniel L.; Darling, Aaron; Zwickl, Derrick J.; Beerli, Peter; Holder, Mark T.; Lewis, Paul O.; Huelsenbeck, John P.; Ronquist, Fredrik; Swofford, David L.; Cummings, Michael P.; Rambaut, Andrew; Suchard, Marc A.

    2012-01-01

    Abstract Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software. PMID:21963610

  10. "Simulated molecular evolution" or computer-generated artifacts?

    PubMed

    Darius, F; Rojas, R

    1994-11-01

    1. The authors define a function with value 1 for the positive examples and 0 for the negative ones. They fit a continuous function but do not deal at all with the error margin of the fit, which is almost as large as the function values they compute. 2. The term "quality" for the value of the fitted function gives the impression that some biological significance is associated with values of the fitted function strictly between 0 and 1, but there is no justification for this kind of interpretation, and finding the point where the fit achieves its maximum does not make sense. 3. By neglecting the error margin the authors try to optimize the fitted function using differences in the second, third, fourth, and even fifth decimal place which have no statistical significance. 4. Even if such a fit could profit from more data points, the authors should first prove that the region of interest has some kind of smoothness, that is, that a continuous fit makes any sense at all. 5. "Simulated molecular evolution" is a misnomer. We are dealing here with random search. Since the margin of error is so large, the fitted function does not provide statistically significant information about the points in search space where strings with cleavage sites could be found. This implies that the method is a highly unreliable stochastic search in the space of strings, even if the neural network is capable of learning some simple correlations. 6. For these kinds of problems with so few data points, classical statistical methods are clearly superior to the neural networks used as a "black box" by the authors, which in the way they are structured provide a model with an error margin as large as the numbers being computed. 7. And finally, even if someone would provide us with a function which separates strings with cleavage sites from strings without them perfectly, so-called simulated molecular evolution would not be better than random selection. Since a perfect fit would only produce exactly ones or zeros, starting a search in a region of space where all strings in the neighborhood get the value zero would not provide any kind of directional information for new iterations. We would just skip from one point to the other in a typical random walk manner.

  11. A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.

    2016-12-01

    Owing to model complexity and a large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is firstly conducted to obtain preliminary influential parameters via analysis of variance. The number of parameters was greatly reduced from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis method, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden in the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that high values of the efficiency criteria did not indicate excellent performance on the hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well. However, the lowest and maximum annual daily runoffs were underestimated, and most of the seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well, while slight underestimation occurs for large floods. The work in this study helps to further the multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
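
    A toy version of the variance-based (Sobol-type) first-order index used in the second step above: the fraction of output variance explained by freezing one parameter at a time. The two-parameter model and sample sizes are invented for illustration; the study itself applied this to the DHSVM parameters.

      import numpy as np

      rng = np.random.default_rng(0)

      def model(x1, x2):
          return 4.0 * x1 + x2 ** 2          # hypothetical two-parameter model

      n_outer, n_inner = 2000, 200
      x1 = rng.uniform(0, 1, size=(n_outer, n_inner))
      x2 = rng.uniform(0, 1, size=(n_outer, n_inner))
      var_y = model(x1, x2).var()

      # First-order index of x_i: variance of the conditional mean E[Y | x_i]
      S1 = model(x1[:, :1], x2).mean(axis=1).var() / var_y   # x1 fixed per row, x2 varies
      S2 = model(x1, x2[:, :1]).mean(axis=1).var() / var_y   # x2 fixed per row, x1 varies
      print(round(S1, 2), round(S2, 2))      # x1 dominates for these coefficients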

  12. Phylogenetic review of tonal sound production in whales in relation to sociality

    PubMed Central

    May-Collado, Laura J; Agnarsson, Ingi; Wartzok, Douglas

    2007-01-01

    Background It is widely held that in toothed whales, high frequency tonal sounds called 'whistles' evolved in association with 'sociality' because in delphinids they are used in a social context. Recently, whistles were hypothesized to be an evolutionary innovation of social dolphins (the 'dolphin hypothesis'). However, both 'whistles' and 'sociality' are broad concepts each representing a conglomerate of characters. Many non-delphinids, whether solitary or social, produce tonal sounds that share most of the acoustic characteristics of delphinid whistles. Furthermore, hypotheses of character correlation are best tested in a phylogenetic context, which has hitherto not been done. Here we summarize data from over 300 studies on cetacean tonal sounds and social structure and phylogenetically test existing hypotheses on their co-evolution. Results Whistles are 'complex' tonal sounds of toothed whales that demark a more inclusive clade than the social dolphins. Whistles are also used by some riverine species that live in simple societies, and have been lost twice within the social delphinoids, all observations that are inconsistent with the dolphin hypothesis as stated. However, cetacean tonal sounds and sociality are intertwined: (1) increased tonal sound modulation significantly correlates with group size and social structure; (2) changes in tonal sound complexity are significantly concentrated on social branches. Also, duration and minimum frequency correlate as do group size and mean minimum frequency. Conclusion Studying the evolutionary correlation of broad concepts, rather than that of their component characters, is fraught with difficulty, while limits of available data restrict the detail in which component character correlations can be analyzed in this case. Our results support the hypothesis that sociality influences the evolution of tonal sound complexity. The level of social and whistle complexity are correlated, suggesting that complex tonal sounds play an important role in social communication. Minimum frequency is higher in species with large groups, and correlates negatively with duration, which may reflect the increased distances over which non-social species communicate. Our findings are generally stable across a range of alternative phylogenies. Our study points to key species where future studies would be particularly valuable for enriching our understanding of the interplay of acoustic communication and sociality. PMID:17692128

  13. Power limits for microbial life.

    PubMed

    LaRowe, Douglas E; Amend, Jan P

    2015-01-01

    To better understand the origin, evolution, and extent of life, we seek to determine the minimum flux of energy needed for organisms to remain viable. Despite the difficulties associated with direct measurement of the power limits for life, it is possible to use existing data and models to constrain the minimum flux of energy required to sustain microorganisms. Here, we apply a bioenergetic model to a well characterized marine sedimentary environment in order to quantify the amount of power organisms use in an ultralow-energy setting. In particular, we show a direct link between power consumption in this environment and the amount of biomass (cells cm^-3) found in it. The power supply resulting from the aerobic degradation of particulate organic carbon (POC) at IODP Site U1370 in the South Pacific Gyre is between ~10^-12 and 10^-16 W cm^-3. The rates of POC degradation are calculated using a continuum model, while Gibbs energies have been computed using geochemical data describing the sediment as a function of depth. Although laboratory-determined values of maintenance power do a poor job of representing the amount of biomass in U1370 sediments, the number of cells per cm^3 can be well captured using a maintenance power of 190 zW cell^-1, two orders of magnitude lower than the lowest value reported in the literature. In addition, we have combined cell counts and calculated power supplies to determine that, on average, the microorganisms at Site U1370 require 50-3500 zW cell^-1, with most values under ~300 zW cell^-1. Furthermore, our analysis indicates that the absolute minimum power requirement for a single cell to remain viable is on the order of 1 zW cell^-1.
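
    The per-cell power figures quoted above follow from a simple division of the volumetric power supply by the cell density. The sketch below repeats that arithmetic with example values taken from the ranges quoted in the abstract; they are not the site-specific depth profiles used in the study.

      power_supply_w_per_cm3 = 1e-14    # aerobic POC degradation, W cm^-3 (within the quoted range)
      cells_per_cm3 = 3e4               # hypothetical cell density for one sediment horizon

      watts_per_cell = power_supply_w_per_cm3 / cells_per_cm3
      print(f"{watts_per_cell / 1e-21:.0f} zW per cell")   # ~330 zW cell^-1 in this example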

  14. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  15. 25 CFR 542.10 - What are the minimum internal control standards for keno?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... keno? (a) Computer applications. For any computer applications utilized, alternate documentation and/or... restricted transaction log or computer storage media concurrently with the generation of the ticket. (3) Keno personnel shall be precluded from having access to the restricted transaction log or computer storage media...

  16. Comparative phyloinformatics of virus genes at micro and macro levels in a distributed computing environment.

    PubMed

    Singh, Dadabhai T; Trehan, Rahul; Schmidt, Bertil; Bretschneider, Timo

    2008-01-01

    Preparedness for a possible global pandemic caused by viruses such as the highly pathogenic influenza A subtype H5N1 has become a global priority. In particular, it is critical to monitor the appearance of any new emerging subtypes. Comparative phyloinformatics can be used to monitor, analyze, and possibly predict the evolution of viruses. However, in order to utilize the full functionality of available analysis packages for large-scale phyloinformatics studies, a team of computer scientists, biostatisticians and virologists is needed--a requirement which cannot be fulfilled in many cases. Furthermore, the time complexities of many of the algorithms involved lead to prohibitive runtimes on sequential computer platforms. This has so far hindered the use of comparative phyloinformatics as a commonly applied tool in this area. In this paper the graphically oriented workflow design system called Quascade and its efficient usage for comparative phyloinformatics are presented. In particular, we focus on how this task can be effectively performed in a distributed computing environment. As a proof of concept, the designed workflows are used for the phylogenetic analysis of neuraminidase of H5N1 isolates (micro level) and influenza viruses (macro level). The results of this paper are hence twofold. Firstly, this paper demonstrates the usefulness of a graphical user interface system to design and execute complex distributed workflows for large-scale phyloinformatics studies of virus genes. Secondly, the analysis of neuraminidase at different levels of complexity provides valuable insights into this virus's tendency for geographically based clustering in the phylogenetic tree and also shows the importance of glycan sites in its molecular evolution. The current study demonstrates the efficiency and utility of workflow systems providing a biologist-friendly approach to complex biological dataset analysis using high performance computing. In particular, the utility of the platform Quascade for deploying distributed and parallelized versions of a variety of computationally intensive phylogenetic algorithms has been shown. Secondly, the analysis of the utilized H5N1 neuraminidase datasets at macro and micro levels has clearly indicated a pattern of spatial clustering of the H5N1 viral isolates based on geographical distribution rather than temporal or host range based clustering.

  17. Coronal and heliospheric magnetic flux circulation and its relation to open solar flux evolution

    NASA Astrophysics Data System (ADS)

    Lockwood, Mike; Owens, Mathew J.; Imber, Suzanne M.; James, Matthew K.; Bunce, Emma J.; Yeoman, Timothy K.

    2017-06-01

    Solar cycle 24 is notable for three features that can be found in previous cycles but which have been unusually prominent: (1) sunspot activity was considerably greater in the northern/southern hemisphere during the rising/declining phase; (2) accumulation of open solar flux (OSF) during the rising phase was modest, but rapid in the early declining phase; (3) the heliospheric current sheet (HCS) tilt showed large fluctuations. We show that these features had a major influence on the progression of the cycle. All flux emergence causes a rise then a fall in OSF, but only OSF with foot points in opposing hemispheres progresses the solar cycle via the evolution of the polar fields. Emergence in one hemisphere, or symmetric emergence without some form of foot point exchange across the heliographic equator, causes poleward migrating fields of both polarities in one or both (respectively) hemispheres, which temporarily enhance OSF but do not advance the polar field cycle. The heliospheric field observed near Mercury and Earth reflects the asymmetries in emergence. Using magnetograms, we find evidence that the poleward magnetic flux transport (of both polarities) is modulated by the HCS tilt, revealing an effect on the OSF loss rate. The declining phase rise in OSF was caused by strong emergence in the southern hemisphere with an anomalously low HCS tilt. This implies that the recent fall in the southern polar field will be sustained and that the peak OSF has limited implications for the polar field at the next sunspot minimum and hence for the amplitude of cycle 25. Plain Language Summary: There is growing interest in being able to predict the evolution in solar conditions on a better basis than past experience, which is necessarily limited. Two of the key features of the solar magnetic cycle are that the polar fields reverse just after the peak of each sunspot cycle and that the polar field that has accumulated by the time of each sunspot minimum is a good indicator of the amplitude of the following cycle. Thus, understanding the evolution of the polar fields becomes crucial. We here use observations of the magnetic fields at the surface of the Sun and from satellites near Earth and Mercury to identify how three unusually pronounced features of the most recent solar cycle have revealed that not all the magnetic flux emerging in sunspot regions progresses the evolution of the polar fields. The results have important implications for our understanding and prediction of the long-term evolution of the Sun and the "space climate" it produces near Earth, which will influence the design and performance of several of humankind's operational systems such as spacecraft, long pipelines, and power grids.

  18. The global evolution of the primordial solar nebula

    NASA Technical Reports Server (NTRS)

    Ruden, S. P.; Lin, D. N. C.

    1986-01-01

    Complete radial, time-dependent calculations of the structure and evolution of the primordial solar nebula during the viscous diffusion stage are presented. The viscous stress is derived from analytic one-zone models of the vertical nebular structure based on detailed grain opacities. Comparisons with full numerical integrations indicate that the effective viscous alpha parameter is about 0.01. The evolution time of a minimum mass nebula is one million yr or less. The flow pattern of fluid elements in the disk is examined, and the implications of the results for the theory of the formation of the solar system are discussed.

  19. Beyond the Baseline 1991: Proceedings of the Space Station Evolution Symposium. Volume 2: Space Station Freedom, part 2

    NASA Technical Reports Server (NTRS)

    1991-01-01

    Individual presentations delivered at the Space Station Evolution Symposium in League City, Texas, on August 6, 7, and 8, 1991 are given in viewgraph form. Personnel responsible for Advanced Systems Studies and Advanced Development within the Space Station Freedom Program reported on the results of their work to date. Special attention is given to highlighting changes made during restructuring; a description of the growth paths through the follow-on and evolution stages; identification of the minimum impact provisions to allow flexibility in the baseline; and identification of enhancing and enabling technologies.

  20. Adapting Teaching Strategies To Encompass New Technologies.

    ERIC Educational Resources Information Center

    Oravec, Jo Ann

    2001-01-01

    The explosion of special-purpose computing devices--Internet appliances, handheld computers, wireless Internet, networked household appliances--challenges business educators attempting to provide computer literacy education. At a minimum, they should address connectivity, expanded applications, and social and public policy implications of these…

  1. Quasi-static responses and variational principles in gradient plasticity

    NASA Astrophysics Data System (ADS)

    Nguyen, Quoc-Son

    2016-12-01

    Gradient models have been much discussed in the literature for the study of time-dependent or time-independent processes such as visco-plasticity, plasticity and damage. This paper is devoted to the theory of Standard Gradient Plasticity at small strain. A general and consistent mathematical description available for common time-independent behaviours is presented. Our attention is focussed on the derivation of general results such as the description of the governing equations for the global response and the derivation of related variational principles in terms of the energy and the dissipation potentials. It is shown that the quasi-static response under a loading path is a solution of an evolution variational inequality as in classical plasticity. The rate problem and the rate minimum principle are revisited. A time-discretization by the implicit scheme of the evolution equation leads to the increment problem. An increment of the response associated with a load increment is a solution of a variational inequality and also satisfies a minimum principle if the energy potential is convex. The increment minimum principle deals with stable solutions of the variational inequality. Some numerical methods are discussed in view of the numerical simulation of the quasi-static response.

  2. Learning Evolution and the Nature of Science Using Evolutionary Computing and Artificial Life

    ERIC Educational Resources Information Center

    Pennock, Robert T.

    2007-01-01

    Because evolution in natural systems happens so slowly, it is difficult to design inquiry-based labs where students can experiment and observe evolution in the way they can when studying other phenomena. New research in evolutionary computation and artificial life provides a solution to this problem. This paper describes a new A-Life software…

  3. Genetic dissection of hybrid incompatibilities between Drosophila simulans and D. mauritiana. I. Differential accumulation of hybrid male sterility effects on the X and autosomes.

    PubMed Central

    Tao, Yun; Chen, Sining; Hartl, Daniel L; Laurie, Cathy C

    2003-01-01

    The genetic basis of hybrid incompatibility in crosses between Drosophila mauritiana and D. simulans was investigated to gain insight into the evolutionary mechanisms of speciation. In this study, segments of the D. mauritiana third chromosome were introgressed into a D. simulans genetic background and tested as homozygotes for viability, male fertility, and female fertility. The entire third chromosome was covered with partially overlapping segments. Many segments were male sterile, while none were female sterile or lethal, confirming previous reports of the rapid evolution of hybrid male sterility (HMS). A statistical model was developed to quantify the HMS accumulation. In comparison with previous work on the X chromosome, we estimate that the X has approximately 2.5 times the density of HMS factors as the autosomes. We also estimate that the whole genome contains approximately 15 HMS "equivalents"--i.e., 15 times the minimum number of incompatibility factors necessary to cause complete sterility. Although some caveats for the quantitative estimate of a 2.5-fold density difference are described, this study supports the notion that the X chromosome plays a special role in the evolution of reproductive isolation. Possible mechanisms of a "large X" effect include selective fixation of new mutations that are recessive or partially recessive and the evolution of sex-ratio distortion systems. PMID:12930747

  4. Genetic dissection of hybrid incompatibilities between Drosophila simulans and D. mauritiana. I. Differential accumulation of hybrid male sterility effects on the X and autosomes.

    PubMed

    Tao, Yun; Chen, Sining; Hartl, Daniel L; Laurie, Cathy C

    2003-08-01

    The genetic basis of hybrid incompatibility in crosses between Drosophila mauritiana and D. simulans was investigated to gain insight into the evolutionary mechanisms of speciation. In this study, segments of the D. mauritiana third chromosome were introgressed into a D. simulans genetic background and tested as homozygotes for viability, male fertility, and female fertility. The entire third chromosome was covered with partially overlapping segments. Many segments were male sterile, while none were female sterile or lethal, confirming previous reports of the rapid evolution of hybrid male sterility (HMS). A statistical model was developed to quantify the HMS accumulation. In comparison with previous work on the X chromosome, we estimate that the X has approximately 2.5 times the density of HMS factors as the autosomes. We also estimate that the whole genome contains approximately 15 HMS "equivalents"--i.e., 15 times the minimum number of incompatibility factors necessary to cause complete sterility. Although some caveats for the quantitative estimate of a 2.5-fold density difference are described, this study supports the notion that the X chromosome plays a special role in the evolution of reproductive isolation. Possible mechanisms of a "large X" effect include selective fixation of new mutations that are recessive or partially recessive and the evolution of sex-ratio distortion systems.

  5. Pressure fluctuation generated by the interaction of blade and tongue

    NASA Astrophysics Data System (ADS)

    Zheng, Lulu; Dou, Hua-Shu; Chen, Xiaoping; Zhu, Zuchao; Cui, Baoling

    2018-02-01

    Pressure fluctuation around the tongue has a large effect on the stable operation of a centrifugal pump. In this paper, the Reynolds-averaged Navier-Stokes equations (RANS) and the RNG k-epsilon turbulence model are employed to simulate the flow in a pump. The flow field in the centrifugal pump is computed for a range of flow rates. The simulation results have been compared with the experimental data and good agreement has been achieved. In order to study the interaction of the tongue with the impeller, fifteen monitor probes are evenly distributed circumferentially at three radii around the tongue. The pressure distribution is investigated at various blade positions as the blade approaches and leaves the tongue region. Results show that the pressure signal fluctuates strongly around the tongue, and more intensely near the tongue surface. At the design condition, the standard deviation of the pressure fluctuation is at its minimum. At large flow rates, the enlarged low-pressure region at the blade trailing edge increases the pressure fluctuation amplitude and the pressure spectra at the monitor probes. The minimum pressure is obtained when the blade faces the tongue. It is found that the amplitude of the pressure fluctuation strongly depends on the blade position at large flow rates, and that the pressure fluctuation is caused by the relative movement between the blades and the tongue. At small flow rates, the pressure fluctuation depends mainly on the structure of the vortex flow at the blade passage exit, in addition to the influence of the relative position between the blade and the tongue.

  6. Emergence of Complexity in Protein Functions and Metabolic Networks

    NASA Technical Reports Server (NTRS)

    Pohorille, Andzej

    2009-01-01

    In modern organisms proteins perform a majority of cellular functions, such as chemical catalysis, energy transduction and transport of material across cell walls. Although great strides have been made towards understanding protein evolution, a meaningful extrapolation from contemporary proteins to their earliest ancestors is virtually impossible. In an alternative approach, the origin of water-soluble proteins was probed by synthesizing very large libraries of random amino acid sequences and subsequently subjecting them to in vitro evolution. In combination with computer modeling and simulations, these experiments allow us to address a number of fundamental questions about the origins of proteins. Can functionality emerge from random sequences of proteins? How did the initial repertoire of functional proteins diversify to facilitate new functions? Did this diversification proceed primarily through drawing novel functionalities from random sequences or through evolution of already existing proto-enzymes? Did protein evolution start from a pool of proteins defined by a frozen accident, and could other collections of proteins have started a different evolutionary pathway? Although we do not have definitive answers to these questions, important clues have been uncovered. Considerable progress has also been achieved in understanding the origins of membrane proteins. We will address this issue using the example of ion channels - proteins that mediate transport of ions across cell walls. Remarkably, despite the overall complexity of these proteins in contemporary cells, their structural motifs are quite simple, with α-helices being most common. By combining results of experimental and computer simulation studies on synthetic models and simple, natural channels, I will show that, even though architectures of membrane proteins are not nearly as diverse as those of water-soluble proteins, they are sufficiently flexible to adapt readily to the functional demands arising during evolution.

  7. Estuarine wetland evolution including sea-level rise and infrastructure effects.

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jose Fernando; Trivisonno, Franco; Rojas, Steven Sandi; Riccardi, Gerardo; Stenta, Hernan; Saco, Patricia Mabel

    2015-04-01

    Estuarine wetlands are an extremely valuable resource in terms of biotic diversity, flood attenuation, storm surge protection, groundwater recharge, filtering of surface flows and carbon sequestration. On a large scale the survival of these systems depends on the slope of the land and a balance between the rates of accretion and sea-level rise, but local man-made flow disturbances can have comparable effects. Climate change predictions for most of Australia include an accelerated sea-level rise, which may challenge the survival of estuarine wetlands. Furthermore, coastal infrastructure poses an additional constraint on the adaptive capacity of these ecosystems. Numerical models are increasingly being used to assess wetland dynamics and to help manage some of these situations. We present results of a wetland evolution model that is based on computed values of hydroperiod and tidal range that drive vegetation preference. Our first application simulates the long-term evolution of an Australian wetland heavily constricted by infrastructure that is undergoing the effects of predicted accelerated sea-level rise. The wetland presents a vegetation zonation sequence of mudflats - mangrove - saltmarsh from the seaward margin and up the topographic gradient, but is also affected by compartmentalization due to internal road embankments and culverts that effectively attenuate tidal input to the upstream compartments. For this reason, the evolution model includes a 2D hydrodynamic module which is able to handle man-made flow controls and spatially varying roughness. It continually simulates tidal inputs into the wetland and computes annual values of hydroperiod and tidal range to update vegetation distribution based on preference for the hydrodynamic conditions of the different vegetation types. It also computes soil accretion rates and updates roughness coefficient values according to evolving vegetation types. In order to explore in more detail the magnitude of flow attenuation due to roughness and its effects on the computation of tidal range and hydroperiod, we performed numerical experiments simulating floodplain flow on the side of a tidal creek using different roughness values. Even though the values of roughness that produce appreciable changes in hydroperiod and tidal range are relatively high, they are within the range expected for some of the wetland vegetation. Both applications of the model show that flow attenuation can play a major role in wetland hydrodynamics and that its effects must be considered when predicting wetland evolution under climate change scenarios, particularly in situations where existing infrastructure affects the flow.

  8. The large-scale structure of the Universe.

    PubMed

    Springel, Volker; Frenk, Carlos S; White, Simon D M

    2006-04-27

    Research over the past 25 years has led to the view that the rich tapestry of present-day cosmic structure arose during the first instants of creation, where weak ripples were imposed on the otherwise uniform and rapidly expanding primordial soup. Over 14 billion years of evolution, these ripples have been amplified to enormous proportions by gravitational forces, producing ever-growing concentrations of dark matter in which ordinary gases cool, condense and fragment to make galaxies. This process can be faithfully mimicked in large computer simulations, and tested by observations that probe the history of the Universe starting from just 400,000 years after the Big Bang.

  9. Efficient receiver tuning using differential evolution strategies

    NASA Astrophysics Data System (ADS)

    Wheeler, Caleb H.; Toland, Trevor G.

    2016-08-01

    Differential evolution (DE) is a powerful and computationally inexpensive optimization strategy that can be used to search an entire parameter space or to converge quickly on a solution. The Kilopixel Array Pathfinder Project (KAPPa) is a heterodyne receiver system delivering 5 GHz of instantaneous bandwidth in the tuning range of 645-695 GHz. The fully automated KAPPa receiver test system finds optimal receiver tuning using performance feedback and DE. We present an adaptation of DE for use in rapid receiver characterization. The KAPPa DE algorithm is written in Python 2.7 and is fully integrated with the KAPPa instrument control, data processing, and visualization code. KAPPa develops the technologies needed to realize heterodyne focal plane arrays containing 1000 pixels. Finding optimal receiver tuning by investigating large parameter spaces is one of many challenges facing the characterization phase of KAPPa. This is a difficult task using by-hand techniques. Characterizing or tuning in an automated fashion without the need for human intervention is desirable for future large-scale arrays. While many optimization strategies exist, DE is ideal for time and performance constraints because it can be set to converge to a solution rapidly with minimal computational overhead. We discuss how DE is utilized in the KAPPa system, evaluate its performance, and look toward the future of 1000-pixel array receivers, considering how the KAPPa DE system might be applied.
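
    A minimal sketch of tuning-by-differential-evolution in the spirit of the record above, using SciPy's stock optimizer rather than the KAPPa project's own code; the two "bias" parameters and the noise-temperature surface are invented stand-ins for real receiver tuning quantities.

      import numpy as np
      from scipy.optimize import differential_evolution

      def receiver_noise(params):
          """Hypothetical noise-temperature surface with a single broad optimum."""
          v_bias, i_bias = params
          return 40.0 + 120.0 * (v_bias - 2.1) ** 2 + 80.0 * (i_bias - 0.35) ** 2

      bounds = [(0.0, 4.0), (0.0, 1.0)]  # allowed bias ranges (arbitrary units)
      result = differential_evolution(receiver_noise, bounds, maxiter=50, tol=1e-6, seed=1)
      print(result.x, result.fun)        # converges near (2.1, 0.35), ~40 in noise units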
We discuss how DE is utilized in the KAPPa system and discuss its performance and look toward the future of 1000 pixel array receivers and consider how the KAPPa DE system might be applied.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/19227365','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/19227365"><span>Structural and mechanical properties of glassy water in nanoscale confinement.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lombardo, Thomas G; Giovambattista, Nicolás; Debenedetti, Pablo G</p> <p>2009-01-01</p> <p>We investigate the structure and mechanical properties of glassy water confined between silica-based surfaces with continuously tunable hydrophobicity and hydrophilicity by computing and analyzing minimum energy, mechanically stable configurations (inherent structures). The structured silica substrate imposes long-range order on the first layer of water molecules under hydrophobic confinement at high density (p > or = 1.0 g cm(-3)). This proximal layer is also structured in hydrophilic confinement at very low density (p approximately 0.4 g cm(-3)). The ordering of water next to the hydrophobic surface greatly enhances the mechanical strength of thin films (0.8 nm). This leads to a substantial stress anisotropy; the transverse strength of the film exceeds the normal strength by 500 MPa. The large transverse strength results in a minimum in the equation of state of the energy landscape that does not correspond to a mechanical instability, but represents disruption of the ordered layer of water next to the wall. In addition, we find that the mode of mechanical failure is dependent on the type of confinement. Under large lateral strain, water confined by hydrophilic surfaces preferentially forms voids in the middle of the film and fails cohesively. In contrast, water under hydrophobic confinement tends to form voids near the walls and fails by loss of adhesion.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.ncbi.nlm.nih.gov/pubmed/18586750','PUBMED'); return false;" href="https://www.ncbi.nlm.nih.gov/pubmed/18586750"><span>Guided genome halving: hardness, heuristics and the history of the Hemiascomycetes.</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zheng, Chunfang; Zhu, Qian; Adam, Zaky; Sankoff, David</p> <p>2008-07-01</p> <p>Some present day species have incurred a whole genome doubling event in their evolutionary history, and this is reflected today in patterns of duplicated segments scattered throughout their chromosomes. These duplications may be used as data to 'halve' the genome, i.e. to reconstruct the ancestral genome at the moment of doubling, but the solution is often highly nonunique. To resolve this problem, we take account of outgroups, external reference genomes, to guide and narrow down the search. We improve on a previous, computationally costly, 'brute force' method by adapting the genome halving algorithm of El-Mabrouk and Sankoff so that it rapidly and accurately constructs an ancestor close the outgroups, prior to a local optimization heuristic. 
Detection of timescales in evolving complex systems

PubMed Central

Darst, Richard K.; Granell, Clara; Arenas, Alex; Gómez, Sergio; Saramäki, Jari; Fortunato, Santo

2016-01-01

Most complex systems are intrinsically dynamic in nature. The evolution of a dynamic complex system is typically represented as a sequence of snapshots, where each snapshot describes the configuration of the system at a particular instant of time. This is often done by using constant intervals, but a better approach would be to define dynamic intervals that match the evolution of the system's configuration. To this end, we propose a method that aims at detecting evolutionary changes in the configuration of a complex system, and generates intervals accordingly. We show that evolutionary timescales can be identified by looking for peaks in the similarity between the sets of events on consecutive time intervals of data. Tests on simple toy models reveal that the technique is able to detect evolutionary timescales of time-varying data both when the evolution is smooth and when it changes sharply. This is further corroborated by analyses of several real datasets. Our method is scalable to extremely large datasets and is computationally efficient. This allows a quick, parameter-free detection of multiple timescales in the evolution of a complex system. PMID:28004820
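The central step, scoring the similarity between the sets of events observed on consecutive intervals and looking for peaks, can be illustrated compactly. The sketch below uses the Jaccard index and a naive local-maximum test; both choices are assumptions for illustration, not necessarily the similarity measure or peak criterion of the published method.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of events."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a or b) else 1.0

def consecutive_similarity(snapshots):
    """Similarity between each pair of consecutive snapshots (event sets)."""
    return [jaccard(snapshots[i], snapshots[i + 1])
            for i in range(len(snapshots) - 1)]

def similarity_peaks(sims):
    """Indices where the similarity curve has a simple local maximum."""
    return [i for i in range(1, len(sims) - 1)
            if sims[i] > sims[i - 1] and sims[i] > sims[i + 1]]

# toy usage: a sharp configuration change between the 3rd and 4th snapshots
snapshots = [{1, 2, 3}, {1, 2, 3, 4}, {1, 2, 4}, {7, 8, 9}, {7, 8, 9, 10}]
sims = consecutive_similarity(snapshots)   # dips where the event sets change abruptly
```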
Compressed quantum simulation of the Ising model.

PubMed

Kraus, B

2011-12-16

Jozsa et al. [Proc. R. Soc. A 466, 809 (2010)] have shown that a matchgate circuit running on n qubits can be compressed to a universal quantum computation on log(n)+3 qubits. Here, we show how this compression can be employed to simulate the Ising interaction of a 1D chain consisting of n qubits using a universal quantum computer running on log(n) qubits. We demonstrate how the adiabatic evolution can be realized on this exponentially smaller system and how the magnetization, which displays a quantum phase transition, can be measured. This shows that the quantum phase transition of very large systems can be observed experimentally with current technology.

Climatic variability effects on summer cropping systems of the Iberian Peninsula

NASA Astrophysics Data System (ADS)

Capa-Morocho, M.; Rodríguez-Fonseca, B.; Ruiz-Ramos, M.

2012-04-01

Climate variability and changes in the frequency of extreme events have a direct impact on crop yield and damages. Projections of climate anomalies at monthly and yearly timescales allow a cropping system (crops, varieties and management) to be adapted to take advantage of favorable conditions or to reduce the effect of adverse conditions. The objective of this work is to develop indices to evaluate the effect of climatic variability on summer cropping systems of the Iberian Peninsula, relating yield variability to climate variability and extending the work of Rodríguez-Puebla (2004). This paper analyses the evolution of the yield anomalies of irrigated maize in several representative agricultural locations in Spain with contrasting temperature and precipitation regimes, and compares it to the evolution of different patterns of climate variability, extending the methodology of Porter and Semenov (2005). To simulate maize yields, observed daily data of radiation, maximum and minimum temperature and precipitation were used. These data were obtained from the State Meteorological Agency of Spain (AEMET). Time series of simulated maize yields were computed with the CERES-maize model for periods ranging from 22 to 49 years, depending on the observed climate data available for each location. The computed standardized yield anomalies were projected on different oceanic and atmospheric anomaly fields, and the resulting patterns were compared with a set of documented patterns from the National Oceanic and Atmospheric Administration (NOAA). The results can also be useful for climate change impact assessment, providing a scientific basis for the selection of climate change scenarios where combined natural and forced variability represent a hazard for agricultural production. Interpretation of impact projections would also be enhanced.
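The standardized yield anomalies referred to above are z-scores of the simulated yield series. A minimal sketch follows; the yield numbers are synthetic, and any detrending the study may apply before standardization is omitted.

```python
import numpy as np

def standardized_anomalies(series):
    """z-score anomalies of a yearly series: (x - mean) / standard deviation."""
    x = np.asarray(series, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

# toy usage with a synthetic yield series (t/ha); the real inputs would be
# CERES-maize yields simulated from observed daily weather data
yields = [11.2, 10.8, 12.1, 9.7, 11.5, 10.2, 12.4]
anoms = standardized_anomalies(yields)   # positive values are above-average years
```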
[Activities of Research Institute for Advanced Computer Science]

NASA Technical Reports Server (NTRS)

Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

2001-01-01

The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have a major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

Modeling the long-term evolution of space debris

DOEpatents

Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.

2017-03-07

A space object modeling system that models the evolution of space debris is provided. The modeling system simulates the interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects by fitting a curve that identifies a time of closest approach between the simulation times and calculating the positions of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
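The conjunction-screening step in that abstract, fitting a curve to sampled separations to locate a time of closest approach, can be illustrated with a quadratic fit. The three-sample parabola and the collision threshold below are simplifications for illustration, not the patented system's actual curve fit or criteria.

```python
import numpy as np

def closest_approach(times, separations):
    """Fit a parabola to three (time, separation) samples bracketing a minimum
    and return the estimated time and distance of closest approach."""
    t = np.asarray(times, dtype=float)
    d = np.asarray(separations, dtype=float)
    a, b, c = np.polyfit(t, d, 2)           # d(t) ~ a*t^2 + b*t + c
    t_min = -b / (2.0 * a)                   # vertex of the parabola
    d_min = float(np.polyval([a, b, c], t_min))
    return t_min, d_min

# usage: separations (km) of two objects sampled at three simulation times (s)
t_ca, d_ca = closest_approach([0.0, 60.0, 120.0], [12.4, 8.9, 10.7])
if d_ca < 5.0:   # illustrative collision threshold, not the patent's criterion
    print("flag this pair for debris modeling")
```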
Undecidability and Irreducibility Conditions for Open-Ended Evolution and Emergence.

PubMed

Hernández-Orozco, Santiago; Hernández-Quiroz, Francisco; Zenil, Hector

2018-01-01

Is undecidability a requirement for open-ended evolution (OEE)? Using methods derived from algorithmic complexity theory, we propose robust computational definitions of open-ended evolution and the adaptability of computable dynamical systems. Within this framework, we show that decidability imposes absolute limits on the stable growth of complexity in computable dynamical systems. Conversely, systems that exhibit (strong) open-ended evolution must be undecidable, establishing undecidability as a requirement for such systems. Complexity is assessed in terms of three measures: sophistication, coarse sophistication, and busy beaver logical depth. These three complexity measures assign low complexity values to random (incompressible) objects. As time grows, the stated complexity measures allow for the existence of complex states during the evolution of a computable dynamical system. We show, however, that finding these states involves undecidable computations. We conjecture that for similar complexity measures that assign low complexity values, decidability imposes comparable limits on the stable growth of complexity, and that such behavior is necessary for nontrivial evolutionary systems. We show that the undecidability of adapted states imposes novel and unpredictable behavior on the individuals or populations being modeled. Such behavior is irreducible. Finally, we offer an example of a system, first proposed by Chaitin, that exhibits strong OEE.
Molecular Evolution in Historical Perspective.

PubMed

Suárez-Díaz, Edna

2016-12-01

In the 1960s, advances in protein chemistry and molecular genetics provided new means for the study of biological evolution. Amino acid sequencing, nucleic acid hybridization, zone gel electrophoresis, and immunochemistry were some of the experimental techniques that brought new perspectives to the study of the patterns and mechanisms of evolution. New concepts, such as the molecular evolutionary clock, and the discovery of unexpected molecular phenomena, like the presence of repetitive sequences in eukaryotic genomes, eventually led to the realization that evolution might occur at a different pace at the organismic and the molecular levels, and according to different mechanisms. These developments sparked important debates between defenders of the molecular and organismic approaches. The most vocal confrontations focused on the relation between primates and humans, and on the neutral theory of molecular evolution. By the 1980s and 1990s, the construction of large protein and DNA sequence databases, and the development of computer-based statistical tools, facilitated the coming together of molecular and evolutionary biology. Although in its contemporary form the field of molecular evolution can be traced back to the last five decades, the field has deep roots in twentieth century experimental life sciences. For historians of science, the origins and consolidation of molecular evolution provide a privileged field for the study of scientific debates, the relation between technological advances and scientific knowledge, and the connection between science and broader social concerns.
The role of disc self-gravity in circumbinary planet systems - I. Disc structure and evolution

NASA Astrophysics Data System (ADS)

Mutter, Matthew M.; Pierens, Arnaud; Nelson, Richard P.

2017-03-01

We present the results of two-dimensional hydrodynamic simulations of self-gravitating circumbinary discs around binaries whose parameters match those of the circumbinary planet-hosting systems Kepler-16, Kepler-34 and Kepler-35. Previous work has shown that non-self-gravitating discs in these systems form an eccentric, precessing inner cavity due to tidal truncation by the binary, and planets which form at large radii migrate until stalling at this cavity. Whilst this scenario appears to provide a natural explanation for the observed orbital locations of the circumbinary planets, previous simulations have failed to match the observed planet orbital parameters. The aim of this work is to examine the role of self-gravity in modifying circumbinary disc structure as a function of disc mass, prior to considering the evolution of embedded circumbinary planets. In agreement with previous work, we find that for disc masses between one and five times the minimum mass solar nebula (MMSN), disc self-gravity leads to modest changes in the structure and evolution of circumbinary discs. Increasing the disc mass to 10 or 20 MMSN leads to two dramatic changes in disc structure. First, the scale of the inner cavity shrinks substantially, bringing its outer edge closer to the binary. Second, in addition to the eccentric inner cavity, additional precessing eccentric ring-like features develop in the outer regions of the discs. If planet formation starts early in the disc lifetime, these changes will have a significant impact on the formation and evolution of planets and precursor material.
Steady RANS methodology for calculating pressure drop in an in-line molten salt compact crossflow heat exchanger

DOE PAGES

Carasik, Lane B.; Shaver, Dillon R.; Haefner, Jonah B.; ...

2017-08-21

The development of molten salt cooled reactors (MSRs) and fluoride-salt cooled high temperature reactors (FHRs) requires the use of advanced design tools for the primary heat exchanger design. Due to geometric and flow characteristics, compact (pitch-to-diameter ratios equal to or less than 1.25) heat exchangers with a crossflow arrangement can become desirable for these reactors. Unfortunately, the available experimental data are limited for compact tube bundles or banks in crossflow. Computational fluid dynamics can be used to alleviate the lack of experimental data in these tube banks. Previous computational efforts have primarily focused on large S/D ratios (larger than 1.4) using unsteady Reynolds-averaged Navier-Stokes and large eddy simulation frameworks. These approaches are useful, but have large computational requirements that make comprehensive design studies impractical. A CFD study was therefore conducted with steady RANS in an effort to provide a starting point for future design work. The study was performed for an in-line tube bank geometry with FLiBe (LiF-BeF2), a frequently selected molten salt, as the working fluid. Based on the estimated pressure drops and the pressure and velocity distributions in the domain, an appropriate meshing strategy was determined and presented. Periodic boundaries in the spanwise direction, transverse to the flow, were determined to be an appropriate boundary condition for reduced computational domains. The domain size was investigated and a minimum of two flow channels per domain is recommended to ensure the relevant behavior is captured. Finally, the standard low-Re κ-ε (Lien) turbulence model was determined to be the most appropriate for steady RANS of this case at the time of writing.
Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.

PubMed

Frieden, B Roy; Gatenby, Robert A

2011-01-01

Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first-order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to a minimum information state. The minimum leads to a predicted power law governing in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: from an information maximum to minimum. The progressive information loss is evident in the accumulating mutations, disordered morphology, and functional decline characteristic of human cancers. The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.

Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

PubMed Central

Wang, Di; Kleinberg, Robert D.

2009-01-01

Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4, ... It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}^n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
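To make the QUBO objective concrete: an instance asks for the 0/1 vector x minimizing x^T Q x. The sketch below evaluates that objective and finds the exact minimum by enumeration for a tiny instance; it is a brute-force reference only and does not implement the flow-based bounds C2 and C3 discussed in the paper, and the matrix shown is a made-up toy.

```python
import itertools
import numpy as np

def qubo_value(Q, x):
    """Value of the quadratic form x^T Q x for a 0/1 vector x."""
    x = np.asarray(x)
    return float(x @ Q @ x)

def qubo_brute_force(Q):
    """Exact minimum of a QUBO by enumeration (only feasible for small n)."""
    n = Q.shape[0]
    best_x, best_v = None, float("inf")
    for bits in itertools.product((0, 1), repeat=n):
        v = qubo_value(Q, bits)
        if v < best_v:
            best_x, best_v = bits, v
    return best_x, best_v

# toy 3-variable instance; diagonal entries play the role of linear terms
Q = np.array([[-2.0,  1.0,  0.5],
              [ 0.0, -1.0,  1.5],
              [ 0.0,  0.0, -0.5]])
x_opt, v_opt = qubo_brute_force(Q)
```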
20 CFR 229.45 - Employee benefit.

Code of Federal Regulations, 2011 CFR

2011-04-01

... 20 Employees' Benefits 1 2011-04-01 2011-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

20 CFR 229.45 - Employee benefit.

Code of Federal Regulations, 2010 CFR

2010-04-01

... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

20 CFR 229.45 - Employee benefit.

Code of Federal Regulations, 2014 CFR

2014-04-01

... 20 Employees' Benefits 1 2014-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...
20 CFR 229.45 - Employee benefit.

Code of Federal Regulations, 2013 CFR

2013-04-01

... 20 Employees' Benefits 1 2013-04-01 2012-04-01 true Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

20 CFR 229.45 - Employee benefit.

Code of Federal Regulations, 2012 CFR

2012-04-01

... 20 Employees' Benefits 1 2012-04-01 2012-04-01 false Employee benefit. 229.45 Section 229.45 Employees' Benefits RAILROAD RETIREMENT BOARD REGULATIONS UNDER THE RAILROAD RETIREMENT ACT SOCIAL SECURITY OVERALL MINIMUM GUARANTEE Computation of the Overall Minimum Rate § 229.45 Employee benefit. The original...

Pyrrole multimers and pyrrole-acetylene hydrogen bonded complexes studied in N2 and para-H2 matrixes using matrix isolation infrared spectroscopy and ab initio computations

NASA Astrophysics Data System (ADS)

Sarkar, Shubhra; Ramanathan, N.; Gopi, R.; Sundararajan, K.

2017-12-01

The hydrogen-bonded interactions of pyrrole multimers and acetylene-pyrrole complexes were studied in N2 and p-H2 matrixes. DFT computations showed that a T-shaped geometry for the pyrrole dimer and cyclic structures for the trimer and tetramer, stabilized by N-H⋯π interactions, were the most stable structures. The experimental vibrational wavenumbers observed in N2 and p-H2 matrixes for the pyrrole multimers were correlated with the computed wavenumbers. Computations performed at the MP2/aug-cc-pVDZ level of theory showed that C2H2 and C4H5N form 1:1 hydrogen-bonded complexes stabilized by a C-H⋯π interaction (Complex A), an N-H⋯π interaction (Complex B) and a π⋯π interaction (Complex C), where the first complex is the global minimum and the latter two are the first and second local minima, respectively. Experimentally, the 1:1 C2H2-C4H5N complexes A (global minimum) and B (first local minimum) were identified from the shifts in the N-H stretching, N-H bending and C-H bending regions of pyrrole and the C-H asymmetric stretching and bending regions of C2H2 in N2 and p-H2 matrixes. Computations were also performed for the higher complexes; two minima were found for the 1:2 C2H2-C4H5N complexes and three minima for the 2:1 C2H2-C4H5N complexes. Experimentally, the global-minimum 1:2 and 2:1 C2H2-C4H5N complexes were identified in N2 and p-H2 matrixes.
25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

Code of Federal Regulations, 2013 CFR

2013-04-01

... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

Code of Federal Regulations, 2012 CFR

2012-04-01

... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...

25 CFR 542.11 - What are the minimum internal control standards for pari-mutuel wagering?

Code of Federal Regulations, 2014 CFR

2014-04-01

... percentage of the handle. (b) Computer applications. For any computer applications utilized, alternate.... In case of computer failure between the pari-mutuel book and the hub, no tickets shall be manually... writer/cashier shall sign on and the computer shall document gaming operation name (or identification...
Evolution of Collective Behaviour in an Artificial World Using Linguistic Fuzzy Rule-Based Systems

PubMed Central

Demšar, Jure; Lebar Bajec, Iztok

2017-01-01

Collective behaviour is a fascinating and easily observable phenomenon, attractive to a wide range of researchers. In biology, computational models have been extensively used to investigate various properties of collective behaviour, such as the transfer of information across the group, the benefits of grouping (defence against predation, foraging), the group decision-making process, and group behaviour types. The question of 'why', however, remains largely unanswered. Here the interest lies in which pressures led to the evolution of such behaviour, and evolutionary computational models have already been used to test various biological hypotheses. Most of these models use genetic algorithms to tune the parameters of previously presented non-evolutionary models, but very few attempt to evolve collective behaviour from scratch. Of these, the successful attempts display clumping or swarming behaviour. Empirical evidence suggests that in fish schools there exist three classes of behaviour: swarming, milling and polarized. In this paper we present a novel, artificial-life-like evolutionary model, where individual agents are governed by linguistic fuzzy rule-based systems, which is capable of evolving all three classes of behaviour. PMID:28045964
Gradient optimization of finite projected entangled pair states

NASA Astrophysics Data System (ADS)

Liu, Wen-Yuan; Dong, Shao-Jun; Han, Yong-Jian; Guo, Guang-Can; He, Lixin

2017-05-01

Projected entangled pair states (PEPS) methods have been proven to be powerful tools for solving strongly correlated quantum many-body problems in two dimensions. However, due to the high computational scaling with the virtual bond dimension D, practical applications of PEPS are often limited to rather small bond dimensions, which may not be large enough for some highly entangled systems, for instance, frustrated systems. Optimization of the ground state using the imaginary time evolution method with a simple update scheme may reach larger bond dimensions, but the accuracy of the rough approximation to the environment of the local tensors is questionable. Here, we demonstrate that combining the imaginary time evolution method with a simple update, Monte Carlo sampling techniques and gradient optimization offers an efficient method to calculate the PEPS ground state. By taking advantage of massively parallel computing, we can study quantum systems with larger bond dimensions, up to D = 10, without resorting to any symmetry. Benchmark tests of the method on the J1-J2 model give impressive accuracy compared with exact results.
A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

PubMed

Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

2013-10-01

The minimum spanning tree (MST) problem is to find a minimum-weight edge subset connecting all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral edge sets, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps and obtain the solutions of the MST problem in a proper length range with O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms.
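For reference, the minimum spanning tree problem is solvable in polynomial time on a conventional computer by greedy algorithms. The sketch below is a compact Kruskal implementation with union-find, shown as the classical baseline for the problem the DNA algorithm addresses; the graph is a toy example, not data from the paper.

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: edges are (weight, u, v) tuples over vertices 0..n-1.
    Returns the list of MST edges and the total weight."""
    parent = list(range(n))

    def find(x):                       # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):      # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # adding this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total

# toy usage: 4 vertices, 5 weighted edges
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
tree, weight = kruskal_mst(4, edges)   # weight == 6 for this toy graph
```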
A fast, parallel algorithm to solve the basic fluvial erosion/transport equations

NASA Astrophysics Data System (ADS)

Braun, J.

2012-04-01

Quantitative models of landform evolution are commonly based on the solution of a set of equations representing the processes of fluvial erosion, transport and deposition, which leads to predictions of the geometry of a river channel network and its evolution through time. The river network is often regarded as the backbone of any surface processes model (SPM) that might include other physical processes acting at a range of spatial and temporal scales along hillslopes. The basic laws of fluvial erosion require the computation of local (slope) and non-local (drainage area) quantities at every point of a given landscape, a computationally expensive operation which limits the resolution of most SPMs. I present here an algorithm to compute the various components required in the parameterization of fluvial erosion (and transport) and thus solve the basic fluvial geomorphic equation, which is very efficient because it is O(n) (the number of required arithmetic operations is linearly proportional to the number of nodes defining the landscape) and fully parallelizable (the computational cost decreases in direct inverse proportion to the number of processors used to solve the problem). The algorithm is ideally suited for use on the latest multi-core processors. Using this new technique, geomorphic problems can be solved at an unprecedented resolution (typically of the order of 10,000 x 10,000 nodes) while keeping the computational cost reasonable (of order 1 s per time step). Furthermore, I will show that the algorithm is applicable to any regular or irregular representation of the landform, and is such that the temporal evolution of the landform can be discretized by a fully implicit time-marching algorithm, making it unconditionally stable. I will demonstrate that such an efficient algorithm is ideally suited to produce a fully predictive SPM that links observationally based parameterizations of small-scale processes to the evolution of large-scale features of the landscapes on geological time scales. It can also be used to model surface processes at the continental or planetary scale and be linked to lithospheric or mantle flow models to predict the potential interactions between tectonics driving surface uplift in orogenic areas, mantle flow producing dynamic topography on continental scales and surface processes.
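The O(n) character of such algorithms comes from the fact that, once each node knows its single downstream receiver, drainage area can be accumulated in one sweep over an ordering that visits downstream nodes before their donors. The sketch below shows only that accumulation step, with toy receiver and cell-area inputs; it follows the general stack-ordering idea and is not the full implicit erosion solver described in the abstract.

```python
def drainage_area(receiver, cell_area):
    """Accumulate drainage area in O(n) given each node's downstream receiver.
    A node that drains to itself (receiver[i] == i) is a local base level."""
    n = len(receiver)
    donors = [[] for _ in range(n)]            # upstream neighbours of each node
    for i, r in enumerate(receiver):
        if r != i:
            donors[r].append(i)

    # ordering that places downstream nodes before the nodes draining into them
    order = []
    stack = [i for i in range(n) if receiver[i] == i]   # base-level nodes
    while stack:
        node = stack.pop()
        order.append(node)
        stack.extend(donors[node])

    # sweep from the most upstream nodes down, adding each area to its receiver
    area = list(cell_area)
    for node in reversed(order):
        r = receiver[node]
        if r != node:
            area[r] += area[node]
    return area

# toy usage: 5 cells in a line, all ultimately draining toward cell 0
receiver = [0, 0, 1, 2, 3]
print(drainage_area(receiver, [1.0] * 5))   # cell 0 collects all 5 area units
```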
The constructal law of design and evolution in nature

PubMed Central

Bejan, Adrian; Lorente, Sylvie

2010-01-01

Constructal theory is the view that (i) the generation of images of design (pattern, rhythm) in nature is a phenomenon of physics and (ii) this phenomenon is covered by a principle (the constructal law): 'for a finite-size flow system to persist in time (to live) it must evolve such that it provides greater and greater access to the currents that flow through it'. This law is about the necessity of design to occur, and about the time direction of the phenomenon: the tape of the design evolution 'movie' runs such that existing configurations are replaced by globally easier-flowing configurations. The constructal law has two useful sides: the prediction of natural phenomena and the strategic engineering of novel architectures, based on the constructal law, i.e. not by mimicking nature. We show that the emergence of scaling laws in inanimate (geophysical) flow systems is the same phenomenon as the emergence of allometric laws in animate (biological) flow systems. Examples are lung design, animal locomotion, vegetation, river basins, turbulent flow structure, self-lubrication and natural multi-scale porous media. This article outlines the place of the constructal law as a self-standing law in physics, which covers all the ad hoc (and contradictory) statements of optimality such as minimum entropy generation, maximum entropy generation, minimum flow resistance, maximum flow resistance, minimum time, minimum weight, uniform maximum stresses and characteristic organ sizes. Nature is configured to flow and move as a conglomerate of 'engine and brake' designs. PMID:20368252