NASA Astrophysics Data System (ADS)
Urano, Ryo; Okamoto, Yuko
2015-12-01
We propose a replica-exchange method (REM) that does not use pseudorandom numbers. For this purpose, we first give the conditional probability for the Gibbs sampling replica-exchange method (GSREM), which is based on the heat-bath method. In GSREM, replica exchange is performed according to a conditional probability based on the weights of states, using pseudorandom numbers. From this conditional probability, we derive a new method, the deterministic replica-exchange method (DETREM), which produces the thermal equilibrium distribution from a differential equation instead of pseudorandom numbers. The method satisfies the detailed balance condition through the conditional probability of the Gibbs heat-bath method, and its results therefore reproduce the Boltzmann distribution within the conditions of that probability. We confirmed that REM and DETREM yield equivalent results for the two-dimensional Ising model. DETREM avoids the problem of choosing pseudorandom-number seeds in parallel REM computations and provides an analytic formulation of REM in terms of a differential equation.
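The heat-bath (Gibbs) conditional probability that GSREM builds on can be sketched in a few lines. This is a generic illustration of the standard heat-bath swap rule, with hypothetical inverse temperatures and energies; DETREM's differential-equation replacement for the random draw is not implemented here.

```python
import math

def gibbs_exchange_prob(beta_i, beta_j, e_i, e_j):
    """Heat-bath (Gibbs) probability of swapping replicas i and j.

    The unswapped configuration has weight exp(-beta_i*e_i - beta_j*e_j)
    and the swapped one exp(-beta_i*e_j - beta_j*e_i); the conditional
    probability w_swap / (w_swap + w_stay) reduces to a logistic in
    delta = (beta_i - beta_j) * (e_i - e_j).
    """
    delta = (beta_i - beta_j) * (e_i - e_j)
    return 1.0 / (1.0 + math.exp(-delta))

# Equal temperatures: swapping changes nothing, so the probability is 1/2
p_even = gibbs_exchange_prob(1.0, 1.0, 2.0, 3.0)
# Colder replica (larger beta) holding the higher energy: swap is very likely
p_swap = gibbs_exchange_prob(2.0, 1.0, 5.0, 0.0)
```

In ordinary GSREM this probability is compared against a pseudorandom uniform draw; the abstract's point is that DETREM replaces that draw with a deterministic differential equation while preserving detailed balance.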
NASA Astrophysics Data System (ADS)
Plunkett, G.; McDermott, C.; Swindles, G. T.; Brown, D. M.
2013-02-01
Climate change, whether gradual or sudden, has frequently been invoked as a causal factor to explain many aspects of cultural change during the prehistoric and early historic periods. Critiquing such theories has often proven difficult, not least because of the imprecise dating of many aspects of the palaeoclimate and archaeological records and the difficulty of merging the two strands of research. Here we consider one example from the archaeological record - peatland site construction in Ireland - which has previously been interpreted in terms of social response to climate change, and examine whether close scrutiny of the archaeological and palaeoenvironmental records upholds the climatically deterministic hypotheses. We evaluate evidence for phasing in the temporal distribution of trackways and related sites in Irish peatlands, of which more than 3500 examples have been recorded, through the examination of ~350 dendrochronological and 14C dates from these structures. The role of climate change in influencing when such sites were constructed is assessed by comparing, visually and statistically, the frequency of sites over the last 4500 years with well-dated, multiproxy climate reconstructions from Irish peatlands. We demonstrate that national patterns of “peatland activity” exist, indicating that the construction of sites in bogs was neither a constant nor a random phenomenon. Phases of activity (i.e. periods in which the number of structures increased), as well as the ‘lulls’ that separate them, show no consistent correlation with periods of wetter or drier conditions on the bogs, suggesting that the impetus for the start or cessation of such activity was not climatically determined. We propose that the trigger(s) for peatland site construction in Ireland must instead be sought within the wider, contemporary social background. Perhaps not surprisingly, a comparison with archaeological and palynological evidence shows that peatland activity tends to occur at
Grace, Matthew; Lowry, Thomas Stephen; Arnold, Bill Walter; James, Scott Carlton; Gray, Genetha Anne; Ahlmann, Michael
2008-08-01
Uncertainty in site characterization arises from a lack of data and knowledge about a site and includes uncertainty in the boundary conditions; uncertainty in the characteristics, location, and behavior of major features within an investigation area (e.g., major faults as barriers or conduits); uncertainty in the geologic structure; and differences in numerical implementation (e.g., 2-D versus 3-D, finite difference versus finite element, grid resolution, deterministic versus stochastic, etc.). Since the true condition at a site can never be known, selection of the best conceptual model is very difficult. In addition, limiting the understanding to a single conceptualization too early in the process, or before data can support that conceptualization, may lead to unwarranted confidence in a characterization, and to data collection efforts and field investigations that are misdirected and/or redundant. Using a series of numerical modeling experiments, this project examined the application and use of information criteria within the site characterization process. The numerical experiments are based on models of varying complexity that were developed to represent one of two synthetically developed groundwater sites: (1) a fully hypothetical site representing a complex, multi-layer, multi-faulted setting, and (2) a site based on the Horonobe site in northern Japan. Each of the synthetic sites was modeled in detail to provide increasingly informative 'field' data over successive iterations to the representing numerical models. The representing numerical models were calibrated to the synthetic site data and then ranked and compared using several different information criteria approaches. Results show that, for the early phases of site characterization, low-parameterized models ranked highest while more complex models generally ranked lowest. In addition, predictive capabilities were also better with the low-parameterized models. For the
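The model-ranking step described above can be illustrated with the two most common information criteria, AIC and BIC. The model names, log-likelihoods, parameter counts, and observation count below are invented for illustration; only the criterion formulas are standard.

```python
import math

def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    """Bayesian information criterion: k ln n - 2 ln L; penalizes
    extra parameters more heavily as the number of observations grows."""
    return k * math.log(n) - 2 * log_likelihood

# Hypothetical calibration results: (name, log-likelihood, n parameters)
models = [("homogeneous", -120.0, 3),
          ("layered", -112.0, 8),
          ("fully heterogeneous", -110.5, 25)]
n_obs = 40  # hypothetical number of head/concentration observations

ranked = sorted(models, key=lambda m: bic(m[1], m[2], n_obs))
for name, ll, k in ranked:
    print(f"{name}: AIC={aic(ll, k):.1f}  BIC={bic(ll, k, n_obs):.1f}")
```

With these invented numbers the sparsely parameterized model ranks first under BIC even though the complex model fits best, which mirrors the abstract's finding that low-parameterized models rank highest early in characterization, when data are few.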
NASA Astrophysics Data System (ADS)
Agrinier, Pierre; Javoy, Marc
2016-09-01
Two methods are available to evaluate equilibrium isotope fractionation factors between exchange sites or phases from partial isotope exchange experiments. The first, developed by Northrop and Clayton (1966), is designed for isotope exchange between two sites (hereafter, the N&C method); the second, from Zheng et al. (1994), refines the first to account for a third exchanging site (hereafter, the Z method). In this paper, we use a simple model of kinetic isotope exchange in a three-site system (such as hydroxysilicates, where oxygen occurs in OH and non-OH groups, as in muscovite, chlorite or serpentine, plus water or calcite) to explore the behavior of the N&C and Z methods. We show that these two methods lead to significant biases that cannot be detected with the usual graphical tests proposed by the authors. Our model shows that the biases originate because isotopes are fractionated between all of the exchanging sites. In fact, the variable mobility (or exchangeability) of isotopes in and between the exchange sites only controls the amplitude of the bias; it is not essential to the production of the bias, as previously suggested. Setting two of the three exchange sites at isotopic equilibrium a priori removes the bias and is thus required for future partial exchange experiments to produce accurate and unbiased extrapolated equilibrium fractionation factors. Applying our model to published partial oxygen isotope exchange experiments on three-site systems (muscovite-calcite, Chacko et al., 1996; chlorite-water, Cole and Ripley, 1998; and serpentine-water, Saccocia et al., 2009) shows that the extrapolated equilibrium fractionation factors (reported as 1000 ln(α)) obtained with either the N&C or the Z method can be biased by several per mil in a few cases. These problematic cases may arise because the experiments were conducted at low temperature and did not reach high
NASA Astrophysics Data System (ADS)
Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.
2006-12-01
Sandia National Laboratories (Sandia), a U.S. Department of Energy national laboratory, has over 30 years of experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time-schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. To surmount these difficulties, Sandia has developed a system that combines commercially available codes with existing legacy codes for probabilistic safety assessment modeling, facilitating technology transfer and making the most of limited funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after import/export control and copyright requirements are addressed. From a programmatic view, it is easier to utilize existing codes than to develop new ones. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software that meets the rigors of both domestic regulatory requirements and international peer review. Therefore, revitalization of deterministic legacy codes, together with adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in GoldSim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC-sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer
Two-site exchange revisited: a new method for extracting exchange parameters in biological systems.
Mulkern, R V; Bleier, A R; Adzamli, I K; Spencer, R G; Sandor, T; Jolesz, F A
1989-01-01
A new analysis is presented which links real volume fractions, relaxation rates, and intracompartmental exchange rates directly with apparent volume fractions and relaxation rates obtained from biexponential fits of transverse magnetization decay curves. The analysis differs from previous methods in that measurements from two paramagnetic doping levels are used to close the two-site exchange equations. Both the new method and one previously described by Herbst and Goldstein (HG) have been applied to paramagnetically doped whole-blood data sets. Significant differences in the calculated exchange parameters are found between the two methods. A small dependence of the intracellular relaxation rate on extracellular paramagnetic agent concentration, assumed nonexistent with the HG method, is inferred from the new analysis. The analysis was also applied to published data on perfused rat hearts, and we obtained a limited assessment of two-site exchange in this system. PMID:2713436
Exchange interactions in systems with multiple magnetic sites.
Paul, Satadal; Misra, Anirban
2010-06-24
Nonequivalent magnetic interactions in systems with multiple magnetic centers can be explored through a proper description of exchange coupling. The magnetic exchange coupling constant (J) in systems with two magnetic sites is reliably estimated with the Heisenberg-Dirac-van Vleck (HDVV) model through the broken-symmetry (BS) approach within a density functional theory (DFT) framework. However, in systems with multiple magnetic centers, exchange coupling constants evaluated through state-of-the-art techniques are often inadequate to produce a correct fingerprint of the nature of the magnetic interactions. This work suggests a new scheme to estimate exchange coupling constants in such systems. In this strategy, the distribution of spins on the magnetic sites in the ground state is computed. On the basis of this spin mapping, exchange coupling constants between specific pairs are estimated through the BS-DFT approach while all other paramagnetic atoms are kept magnetically inactive. The effect of the magnetically inert paramagnetic sites is nonetheless already taken into account by the spin mapping, which is further justified by expressing the HDVV Hamiltonian in terms of spin density operators. We apply this technique to the hypothetical benchmark systems H3He3 and H4He4, followed by real molecules: the cationic manganese trimer, 1,3,5-benzenetriyltris(N-tert-butyl nitroxide), and a pentanuclear manganese complex. The results are concordant with the established nature of the magnetic interactions in these systems. This strategy differs from the most popular scheme for computing J in systems with multiple magnetic centers in that it avoids forming a large matrix out of different spin configurations, and thus provides a reliable and computationally economical way to address magnetic interactions in nonisotropic systems with multiple magnetic sites. PMID:20496941
Youngs, R.R.; Coppersmith, K.J.; Stephenson, D.E.; Silva, W.
1991-12-31
Ground motion assessments are presented for evaluation of the seismic safety of K-Reactor at the Savannah River Site. Two earthquake sources are identified as the most significant to seismic hazard at the site, a M 7.5 earthquake occurring in Charleston, South Carolina, and a M 5 event occurring in the site vicinity. These events control the low frequency and high frequency portions of the spectrum, respectively. Three major issues were identified in the assessment of ground motions for the Savannah River site; specification of the appropriate stress drop for the Charleston source earthquake, specification of the appropriate levels of soil damping at large depths for site response analyses, and the appropriateness of western US recordings for specification of ground motions in the eastern US.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
... Notice of Proposed Information Collection: Exchange Programs Alumni Web Site Registration, DS-7006 ACTION... burden on those who are to respond. Abstract of Proposed Collection The Exchange Programs Alumni Web site requires information to process users' voluntary requests for participation in the Web site. Other...
Mansoor, K; Maley, M; Demir, Z; Hoffman, F
2001-08-08
Lawrence Livermore National Laboratory (LLNL) is a large Superfund site in California that is implementing an extensive ground water remediation program. The site is underlain by a thick sequence of heterogeneous alluvial sediments, and defining ground-water flow pathways in this complex geologic setting is difficult. To better evaluate these pathways, a deterministic approach was applied to define hydrostratigraphic units (HSUs) on the basis of identifiable hydraulic behavior and contaminant migration trends. The conceptual model based on this approach indicates that groundwater flow and contaminant transport occur within packages of sediments bounded by thin, low-permeability confining layers. To aid in the development of the remediation program, a three-dimensional finite-element model was developed for two of the HSUs at LLNL. The primary objectives of this model are to test the conceptual model numerically and to provide well-field management support for the large ground-water remediation system. The model was successfully calibrated to 12 years of ground water flow and contaminant transport data. These results confirm that the thin, low-permeability confining layers within the heterogeneous alluvial sediments are the dominant hydraulic control on flow and transport. The calibrated model is currently being applied to better manage the large site-wide ground water extraction system by optimizing the location of new extraction wells, managing pumping rates for extraction wells, and providing performance estimates for long-term planning and budgeting.
NASA Astrophysics Data System (ADS)
Stephens, Michael B.; Follin, Sven; Petersson, Jesper; Isaksson, Hans; Juhlin, Christopher; Simeonov, Assen
2015-06-01
This paper presents a review of the data sets and methodologies used to construct deterministic models for the spatial distribution of deformation zones and intervening fracture domains in 3-D space at Forsmark, Fennoscandian Shield, Sweden. These models formed part of the investigations to characterize this site, recently proposed as a repository for the storage of spent nuclear fuel in Sweden. The pronounced spatial variability in the distribution of bedrock structures, formed under ductile (lower amphibolite- or greenschist-facies) and subsequently brittle conditions, was controlled by two factors: first, the multiphase reactivation, around and after 1.8 Ga, of older ductile structures with a strong anisotropy formed under higher-temperature conditions at 1.87-1.86 Ga; and second, the release of rock stresses in connection with loading and unloading cycles after 1.6 Ga. The spatial variability in bedrock structures is accompanied by significant heterogeneity in the hydraulic flow properties, the most transmissive fractures being sub-horizontal or gently dipping. Although the bedrock structures at Forsmark are ancient features, the present-day aperture of fractures and their hydraulic transmissivity are inferred to be influenced by the current stress state. It is apparent that the aperture of fractures can change throughout geological time as the stress field evolves. For this reason, the assessment of the long-term (up to 100,000 years) safety of a site for the storage of spent nuclear fuel in crystalline bedrock requires an evaluation of all fractures at the site, not only the currently open fractures that are connected and conductive to groundwater flow. This study also highlights the need to integrate structural data from the ground surface and boreholes with magnetic field and seismic reflection data of high spatial resolution during the characterization of structures at a possible site for the storage of spent nuclear fuel in
NASA Astrophysics Data System (ADS)
Ritzi, Robert W.; Soltanian, Mohamad Reza
2015-12-01
In the method of deterministic geostatistics (sensu Isaaks and Srivastava, 1988), highly resolved data sets are used to compute sample spatial-bivariate statistics within a deterministic framework. The general goal is to observe what real, highly resolved sample spatial-bivariate correlation looks like when it is well quantified in naturally occurring sedimentary aquifers, and to understand how this correlation structure (i.e., its shape and correlation range) is related to independent and physically quantifiable attributes of the sedimentary architecture. The approach has evolved through work by Rubin (1995, 2003), Barrash and Clemo (2002), Ritzi et al. (2004, 2007, 2013), Dai et al. (2005), and Ramanathan et al. (2010). In this evolution, equations for sample statistics have been developed that allow tracking the facies types at the heads and tails of lag vectors. The goal is to observe, and thereby understand, how aspects of the sedimentary architecture affect the well-supported sample statistics. The approach has been used to study heterogeneity at a number of sites, representing a variety of depositional environments, with highly resolved data sets. What have we learned? We offer and support the opinion that the single most important insight derived from these studies is that the structure of spatial-bivariate correlation is essentially the cross-transition probability structure determined by the sedimentary architecture. More than one scale of hierarchical sedimentary architecture has been represented in these studies, and a hierarchy of cross-transition probability structures was found to define the correlation structure in all cases. This insight allows the contributions from different scales of the sedimentary architecture to be decomposed, and has led to a more fundamental understanding of mass transport processes, including mechanical dispersion of solutes within aquifers and the time-dependent retardation of reactive solutes. These processes can now be
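The cross-transition probability statistics described above can be illustrated for a one-dimensional indicator sequence. The estimator below is the simple empirical one and the two-facies log is invented; published formulations track head and tail facies of lag vectors in essentially this way.

```python
import numpy as np

def transition_prob(facies, lag):
    """Empirical transition probabilities t_jk(h): the probability that
    the facies at the tail of a lag-h vector is k, given facies j at
    the head. Returns the category labels and the matrix T (rows = j)."""
    a = np.asarray(facies)
    heads, tails = a[:-lag], a[lag:]
    cats = np.unique(a)
    T = np.zeros((len(cats), len(cats)))
    for i, j in enumerate(cats):
        mask = heads == j
        if mask.any():
            for m, k in enumerate(cats):
                T[i, m] = np.mean(tails[mask] == k)
    return cats, T

# Hypothetical facies log (0 = sand, 1 = mud), lag of one sample
facies_log = [0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0]
cats, T = transition_prob(facies_log, 1)  # each row of T sums to 1
```

Repeating this over a range of lags yields the transition-probability curves whose shapes and correlation ranges the studies above relate to facies proportions and mean lengths of the sedimentary architecture.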
Deterministic Walks with Choice
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
Mechanism of extracellular ion exchange and binding-site occlusion in a sodium/calcium exchanger.
Liao, Jun; Marinelli, Fabrizio; Lee, Changkeun; Huang, Yihe; Faraldo-Gómez, José D; Jiang, Youxing
2016-06-01
Na+/Ca2+ exchangers use the Na+ electrochemical gradient across the plasma membrane to extrude intracellular Ca2+ and play a central role in Ca2+ homeostasis. Here, we elucidate their mechanisms of extracellular ion recognition and exchange through a structural analysis of the exchanger from Methanococcus jannaschii (NCX_Mj) bound to Na+, Ca2+ or Sr2+ in various occupancies and in an apo state. This analysis defines the binding mode and relative affinity of these ions, establishes the structural basis for the anticipated 3:1 Na+/Ca2+-exchange stoichiometry and reveals the conformational changes at the onset of the alternating-access transport mechanism. An independent analysis of the dynamics and conformational free-energy landscape of NCX_Mj in different ion-occupancy states, based on enhanced-sampling molecular dynamics simulations, demonstrates that the crystal structures reflect mechanistically relevant, interconverting conformations. These calculations also reveal the mechanism by which the outward-to-inward transition is controlled by the ion occupancy, thereby explaining the emergence of strictly coupled Na+/Ca2+ antiport. PMID:27183196
Processes Impacting Atmosphere-Surface Exchanges at Arctic Terrestrial Sites
NASA Astrophysics Data System (ADS)
Persson, Ola; Grachev, Andrey; Konopleva, Elena; Cox, Chris; Stone, Robert; Crepinsek, Sara; Shupe, Matthew; Uttal, Taneil
2015-04-01
Surface energy fluxes are key to the annual cycle of near-surface and soil temperature and biologic activity in the Arctic. While these energy fluxes are undoubtedly changing to produce the changes observed in the Arctic ecosystem over the last few decades, measurements have generally not been available to quantify what processes are regulating these fluxes and what is determining the characteristics of these annual cycles. The U.S. National Oceanic and Atmospheric Administration has established, or contributed to the establishment of, several terrestrial "supersites" around the perimeter of the Arctic Ocean at which detailed measurements of atmospheric structure, surface fluxes, and soil thermal properties are being made. These sites include Barrow, Alaska; Eureka and Alert, Canada; and Tiksi, Russia. Atmospheric structure measurements vary, but include radiosoundings at all sites and remote sensing of clouds at two sites. Additionally, fluxes of sensible heat and momentum are measured at all of the sites, while fluxes of moisture and CO2 are measured at two of the sites. Soil temperatures are also measured in the upper 120 cm at all sites, which is deep enough to define the soil active layer. The sites have been operating for between 3 years (Tiksi) and 24 years (Barrow). While all sites are located north of 71° N, the summer vegetation ranges from lush tundra grasses to rocky soils with little vegetation. This presentation will illustrate some of the atmospheric processes that are key for determining the annual energy and temperature cycles at these sites, and some of the key characteristics that lead to differences in, for instance, the length of the summer soil active layer between the sites. Atmospheric features and processes such as cloud characteristics, snowfall, downslope wind events, and sea-breezes have impacts on the annual energy cycle. The presence of a "zero curtain" period, when autumn surface temperature remains approximately constant at the freezing point
31 CFR 330.8 - Payment or redemption-exchange by a TRS Site.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with the instructions set forth in 31 CFR part 321. The transmittals must be accompanied by appropriate... TRS Site. 330.8 Section 330.8 Money and Finance: Treasury Regulations Relating to Money and Finance... SHARES) § 330.8 Payment or redemption-exchange by a TRS Site. Specially endorsed securities that an...
Spires, Renee; Punch, Timothy; McCabe, Daniel
2009-02-11
The Department of Energy (DOE) has developed, modeled, and tested several different ion exchange media and column designs for cesium removal. One elutable resin and one non-elutable resin were considered for this salt processing application. Deployment of non-elutable Crystalline Silicotitanate and elutable Resorcinol Formaldehyde in several different column configurations was assessed in a formal Systems Engineering Evaluation (SEE). Salt solutions were selected that would allow a grouping of non-compliant tanks to be closed. Tests were run with the elutable resin to determine compatibility with the resin configuration required for an in-tank ion exchange system. Models were run to estimate the ion exchange cycles required with the two resins in several column configurations. Material balance calculations were performed to estimate the impact on the High Level Waste (HLW) system at the Savannah River Site (SRS). Conceptual process diagrams were used to support the hazard analysis, and data from the hazard analysis were used to determine the relative impact on safety. This report discusses the technical inputs, SEE methods, results, and path forward to complete the technical maturation of ion exchange.
Rosenfeld, Daniel E.; Kwak, Kyungwon; Gengeliczki, Zsolt
2010-01-01
Hydrogen-bonded complexes between phenol and phenylacetylene are studied using ultrafast two-dimensional infrared (2D IR) chemical exchange spectroscopy. Phenylacetylene has two possible π hydrogen-bonding acceptor sites (phenyl or acetylene) that compete for hydrogen bond donors in solution at room temperature. The OD stretch frequency of deuterated phenol is sensitive to the acceptor site to which it is bound. The appearance of off-diagonal peaks between the two vibrational frequencies in the 2D IR spectrum reports on the exchange process between the two competitive hydrogen-bonding sites of phenol-phenylacetylene complexes in the neat phenylacetylene solvent. The chemical exchange process occurs in ∼5 ps, and is assigned to direct hydrogen bond migration along the phenylacetylene molecule. Other non-migration mechanisms are ruled out by performing 2D IR experiments on phenol dissolved in a phenylacetylene/carbon tetrachloride mixed solvent. The observation of direct hydrogen bond migration can have implications for macromolecular systems. PMID:20121275
Tubulin exchanges divalent cations at both guanine nucleotide-binding sites.
Correia, J J; Beth, A H; Williams, R C
1988-08-01
The tubulin heterodimer binds a molecule of GTP at the nonexchangeable nucleotide-binding site (N-site) and either GDP or GTP at the exchangeable nucleotide-binding site (E-site). Mg2+ is known to be tightly linked to the binding of GTP at the E-site (Correia, J. J., Baty, L. T., and Williams, R. C., Jr. (1987) J. Biol. Chem. 262, 17278-17284). Measurements of the exchange of Mn2+ for bound Mg2+ (as monitored by atomic absorption and EPR) demonstrate that tubulin which has GDP at the E-site possesses one high-affinity metal-binding site and that tubulin which has GTP at the E-site possesses two such sites. The apparent association constants are 0.7-1.1 × 10^6 M^-1 for Mg2+ and approximately 4.1-4.9 × 10^7 M^-1 for Mn2+. Divalent cations do bind to GDP at the E-site, but with much lower affinity (2.0-2.3 × 10^3 M^-1 for Mg2+ and 3.9-6.6 × 10^3 M^-1 for Mn2+). These data suggest that divalent cations are involved in GTP binding to both the N- and E-sites of tubulin. The N-site metal exchanges slowly (k_app = 0.020 min^-1), suggesting a mechanism involving protein "breathing" or heterodimer dissociation. The N-site metal exchange rate is independent of the concentration of protein and metal, an observation consistent with the possibility that a dynamic breathing process is the rate-limiting step. The exchange of Mn2+ for Mg2+ has no effect on the secondary structure of tubulin at 4 °C or on the ability of tubulin to form microtubules. These results have important consequences for the interpretation of distance measurements within the tubulin dimer using paramagnetic ions. They are also relevant to the detailed mechanism of divalent cation release from microtubules after GTP hydrolysis. PMID:3392036
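The reported constants translate directly into site occupancies and exchange time scales. A minimal sketch, using the paper's values of K ≈ 1 × 10^6 M^-1 for Mg2+ and k_app = 0.020 min^-1 for N-site exchange; the free-metal concentration below is an arbitrary choice for illustration.

```python
import math

def fraction_bound(K_assoc, free_metal):
    """Equilibrium fractional occupancy of a single metal-binding site
    (single-site binding isotherm); K_assoc in M^-1, free_metal in M."""
    return K_assoc * free_metal / (1.0 + K_assoc * free_metal)

def exchange_progress(k_app, t):
    """Fraction of N-site metal exchanged after time t, assuming the
    simple first-order kinetics implied by a single apparent rate."""
    return 1.0 - math.exp(-k_app * t)

occupancy = fraction_bound(1.0e6, 1.0e-6)  # half-occupied when [M] = 1/K
half_time = math.log(2) / 0.020            # minutes, for N-site exchange
```

The ~35 min half-time for N-site exchange contrasts with the much faster E-site equilibration, consistent with the abstract's proposal of a slow "breathing" or dissociation-limited mechanism at the N-site.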
Dunn, Katherine A.; Jiang, Wenyi; Field, Christopher; Bielawski, Joseph P.
2013-01-01
Adequate modeling of mitochondrial sequence evolution is an essential component of mitochondrial phylogenomics (comparative mitogenomics). There is wide recognition within the field that lineage-specific aspects of mitochondrial evolution should be accommodated through lineage-specific amino-acid exchangeability matrices (e.g., mtMam for mammalian data). However, such a matrix must be applied to all sites and this implies that all sites are subject to the same, or largely similar, evolutionary constraints. This assumption is unjustified. Indeed, substantial differences are expected to arise from three-dimensional structures that impose different physiochemical environments on individual amino acid residues. The objectives of this paper are (1) to investigate the extent to which amino acid evolution varies among sites of mitochondrial proteins, and (2) to assess the potential benefits of explicitly modeling such variability. To achieve this, we developed a novel method for partitioning sites based on amino acid physiochemical properties. We apply this method to two datasets derived from complete mitochondrial genomes of mammals and fish, and use maximum likelihood to estimate amino acid exchangeabilities for the different groups of sites. Using this approach we identified large groups of sites evolving under unique physiochemical constraints. Estimates of amino acid exchangeabilities differed significantly among such groups. Moreover, we found that joint estimates of amino acid exchangeabilities do not adequately represent the natural variability in evolutionary processes among sites of mitochondrial proteins. Significant improvements in likelihood are obtained when the new matrices are employed. We also find that maximum likelihood estimates of branch lengths can be strongly impacted. We provide sets of matrices suitable for groups of sites subject to similar physiochemical constraints, and discuss how they might be used to analyze real data. We also discuss how
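A property-based partitioning of alignment sites can be sketched with a standard hydropathy scale. The Kyte-Doolittle values are a well-known published scale, but the grouping rule, cutoff, and toy alignment below are illustrative assumptions, not the authors' actual partitioning scheme.

```python
# Kyte-Doolittle hydropathy values (a standard scale); the authors'
# actual physiochemical properties and thresholds may differ.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def partition_sites(alignment, cutoff=0.0):
    """Assign each alignment column to a 'hydrophobic' or 'hydrophilic'
    group by the mean hydropathy of the residues observed there."""
    groups = {"hydrophobic": [], "hydrophilic": []}
    for i in range(len(alignment[0])):
        col = [seq[i] for seq in alignment if seq[i] in KD]
        mean = sum(KD[aa] for aa in col) / len(col)
        groups["hydrophobic" if mean > cutoff else "hydrophilic"].append(i)
    return groups

# Toy three-sequence alignment; real input would be mitochondrial proteins
groups = partition_sites(["MKVDA", "MRVES", "MKVDA"])
```

Separate amino-acid exchangeability matrices would then be estimated by maximum likelihood for each group of columns, which is the step the abstract reports as yielding significant likelihood improvements over a single joint matrix.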
ERIC Educational Resources Information Center
Mital, Monika; Israel, D.; Agarwal, Shailja
2010-01-01
Purpose: The purpose of this paper is to examine the mediating effect of trust on the relationship between the type of information exchange (IE) and information disclosure (ID) on social networking web sites (SNWs). Design/methodology/approach: Constructs were developed for type of IE and trust. To understand the mediating role of trust a…
"Actually, I Wanted to Learn": Study-Related Knowledge Exchange on Social Networking Sites
ERIC Educational Resources Information Center
Wodzicki, Katrin; Schwammlein, Eva; Moskaliuk, Johannes
2012-01-01
Social media open up multiple options to add a new dimension to learning and knowledge processes. Particularly, social networking sites allow students to connect formal and informal learning settings. Students can find like-minded people and organize informal knowledge exchange for educational purposes. However, little is known about in which way…
NASA Astrophysics Data System (ADS)
Beyer, C.; Höper, H.
2015-04-01
During the last decades an increasing area of drained peatlands has been rewetted. Especially in Germany, rewetting is the principal treatment on cutover sites when peat extraction is finished. The objectives are bog restoration and the reduction of greenhouse gas (GHG) emissions. The first sites were rewetted in the 1980s. Thus, there is a good opportunity to study long-term effects of rewetting on greenhouse gas exchange, which has not been done so far on temperate cutover peatlands. Moreover, Sphagnum cultivation may become a new way to use cutover peatlands and agriculturally used peatlands, as it permits the economical use of bogs under wet conditions. The climate impact of such measures has not been studied yet. We conducted a field study on the exchange of carbon dioxide, methane and nitrous oxide at three rewetted sites with a gradient from dry to wet conditions and at a Sphagnum cultivation site in NW Germany over the course of more than 2 years. Gas fluxes were measured using transparent and opaque closed chambers. The ecosystem respiration (CO2) and the net ecosystem exchange (CO2) were modelled at a high temporal resolution. Measured and modelled values agreed well. Annually cumulated gas flux rates, net ecosystem carbon balances (NECB) and global warming potential (GWP) balances were determined. The annual net ecosystem exchange (CO2) varied strongly at the rewetted sites (from -201.7 ± 126.8 to 29.7 ± 112.7 g CO2-C m-2 a-1) due to differing weather conditions, water levels and vegetation. The Sphagnum cultivation site was a sink of CO2 (-118.8 ± 48.1 and -78.6 ± 39.8 g CO2-C m-2 a-1). The annual CH4 balances ranged between 16.2 ± 2.2 and 24.2 ± 5.0 g CH4-C m-2 a-1 at two inundated sites, while one rewetted site with a comparatively low water level and the Sphagnum farming site showed CH4 fluxes close to 0. The net N2O fluxes were low and not significantly different between the four sites. The annual NECB was between -185.5 ± 126.9 and 49
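The GWP balances mentioned above combine the three gas balances into a single CO2-equivalent figure. A minimal sketch of that conversion follows; the 100-year GWP factors and the example numbers are assumptions for illustration, not values taken from this study.

```python
# Hedged sketch: combining annual gas balances into a CO2-equivalent GHG
# balance. Fluxes are reported in g C (or N) m-2 a-1, so each gas is first
# converted to full molecular mass. GWP factors are the common 100-year
# values (IPCC AR5) and are assumptions here, not from the paper.

GWP_CH4 = 28.0   # 100-yr global warming potential of CH4 (assumed)
GWP_N2O = 265.0  # 100-yr global warming potential of N2O (assumed)

def co2_equivalents(co2_c, ch4_c, n2o_n):
    """g CO2-eq m-2 a-1 from fluxes given as g CO2-C, g CH4-C, g N2O-N."""
    co2 = co2_c * 44.0 / 12.0   # C -> CO2 (molar mass ratio)
    ch4 = ch4_c * 16.0 / 12.0   # C -> CH4
    n2o = n2o_n * 44.0 / 28.0   # N -> N2O
    return co2 + GWP_CH4 * ch4 + GWP_N2O * n2o

# Example: a site taking up 100 g CO2-C but emitting 20 g CH4-C per m2 and year
# still has a positive (warming) CO2-equivalent balance.
print(co2_equivalents(-100.0, 20.0, 0.0))
```

This illustrates why the inundated sites with high CH4 emissions can be a net climate burden despite being CO2 sinks.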
Boltz, J.C.
1992-09-01
EXCHANGE is published monthly by the Idaho National Engineering Laboratory (INEL), a multidisciplinary facility operated for the US Department of Energy (DOE). The purpose of EXCHANGE is to inform computer users about recent changes and innovations in both the mainframe and personal computer environments and how these changes can affect work being performed at DOE facilities.
Membrane Contact Sites: Complex Zones for Membrane Association and Lipid Exchange
Quon, Evan; Beh, Christopher T.
2015-01-01
Lipid transport between membranes within cells involves vesicle and protein carriers, but as agents of nonvesicular lipid transfer, the role of membrane contact sites has received increasing attention. As zones for lipid metabolism and exchange, various membrane contact sites mediate direct associations between different organelles. In particular, membrane contact sites linking the plasma membrane (PM) and the endoplasmic reticulum (ER) represent important regulators of lipid and ion transfer. In yeast, cortical ER is stapled to the PM through membrane-tethering proteins, which establish a direct connection between the membranes. In this review, we consider passive and facilitated models for lipid transfer at PM–ER contact sites. Besides the tethering proteins, we examine the roles of an additional repertoire of lipid and protein regulators that prime and propagate PM–ER membrane association. We conclude that instead of being simple mediators of membrane association, regulatory components of membrane contact sites have complex and multilayered functions. PMID:26949334
Deterministic hierarchical networks
NASA Astrophysics Data System (ADS)
Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.
2016-06-01
It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most of the studies for complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, it provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
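The two hallmark quantities named above, the local clustering coefficient and the degree distribution, can be computed exactly for any deterministic graph. The sketch below does so for a toy graph in pure Python; the graph is hypothetical and is not the specific hierarchical family analyzed in the paper.

```python
# Sketch: exact computation of local clustering coefficient and degree
# distribution for a small graph given as an adjacency dict. The toy graph
# (a hub joined to two triangles) is illustrative only, NOT the deterministic
# hierarchical family studied in the paper.

from collections import Counter

def clustering(adj, v):
    """Fraction of pairs of v's neighbours that are themselves linked."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i, a in enumerate(nbrs)
                for b in nbrs[i + 1:] if b in adj[a])
    return 2.0 * links / (k * (k - 1))

def degree_distribution(adj):
    """Histogram of node degrees: degree -> number of nodes."""
    return Counter(len(nbrs) for nbrs in adj.values())

adj = {
    0: [1, 2, 3, 4],        # hub
    1: [0, 2], 2: [0, 1],   # first triangle with the hub
    3: [0, 4], 4: [0, 3],   # second triangle with the hub
}
print(clustering(adj, 1))        # both of node 1's neighbours are linked -> 1.0
print(degree_distribution(adj))  # Counter({2: 4, 4: 1})
```

For a deterministic construction, such exact formulas replace the ensemble averages needed for stochastic models, which is precisely the advantage the abstract describes.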
NASA Astrophysics Data System (ADS)
Trefan, Gyorgy
1993-01-01
The goal of this thesis is to contribute to the ambitious program of founding statistical physics on chaos. We build a deterministic model of Brownian motion and provide a microscopic derivation of the Fokker-Planck equation. Since the Brownian motion of a particle is the result of the competing processes of diffusion and dissipation, we create a model where both diffusion and dissipation originate from the same deterministic mechanism--the deterministic interaction of that particle with its environment. We show that standard diffusion, which is the basis of the Fokker-Planck equation, rests on the Central Limit Theorem and, consequently, on the possibility of deriving it from a deterministic process with a quickly decaying correlation function. The sensitive dependence on initial conditions, one of the defining properties of chaos, ensures this rapid decay. We carefully address the problem of deriving dissipation from the interaction of a particle with a fully deterministic nonlinear bath, which we term the booster. We show that the solution of this problem essentially rests on the linear response of a booster to an external perturbation. This raises a long-standing problem concerned with Kubo's Linear Response Theory and the strong criticism against it by van Kampen. Kubo's theory is based on a perturbation treatment of the Liouville equation, which, in turn, is expected to be totally equivalent to a first-order perturbation treatment of single trajectories. Since the boosters are chaotic, and chaos is essential to generate diffusion, the single trajectories are highly unstable and do not respond linearly to weak external perturbation. We adopt chaotic maps as boosters of a Brownian particle, and therefore address the problem of the response of a chaotic booster to an external perturbation. We notice that a fully chaotic map is characterized by an invariant measure which is a continuous function of the control parameters of the map
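The central claim that a chaotic map with quickly decaying correlations can generate standard diffusion is easy to illustrate numerically. The sketch below is a toy, not the specific booster model of the thesis: a fully chaotic logistic map supplies ±1 kicks, and the ensemble variance of the displacement grows roughly linearly in time.

```python
# Toy illustration of diffusion generated by deterministic chaos: kicks read
# off a fully chaotic logistic map drive a walker; rapidly decaying
# correlations make the walk diffusive, so displacement variance grows
# roughly linearly in time. Illustrative sketch only, not the thesis's model.

def logistic(x):
    """Fully chaotic logistic map x -> 4x(1 - x)."""
    return 4.0 * x * (1.0 - x)

def displacement(x0, steps, burn=50):
    """Walker whose +/-1 kicks are determined by the chaotic state."""
    x, pos = x0, 0.0
    for _ in range(burn):               # discard the transient
        x = logistic(x)
    for _ in range(steps):
        x = logistic(x)
        pos += 1.0 if x > 0.5 else -1.0
    return pos

def ensemble_var(t, n=500):
    """Displacement variance over an ensemble of initial conditions."""
    d = [displacement((i + 0.5) / n, t) for i in range(n)]
    m = sum(d) / n
    return sum((v - m) ** 2 for v in d) / n

# For normal (linear-in-time) diffusion, var(4t)/var(t) should be near 4.
print(ensemble_var(400) / ensemble_var(100))
```

Sensitive dependence on initial conditions decorrelates the nearby starting points within a few iterations, which is exactly the mechanism the thesis invokes for the rapid decay of correlations.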
NASA Astrophysics Data System (ADS)
Mahan, H. R.; Wagle, P.; Bajgain, R.; Zhou, Y.; Basara, J. B.; Xiao, X.; Duckles, J. M.; Steiner, J. L.; Starks, P. J.; Northup, B. K.
2014-12-01
Quantifying methane (CH4), carbon dioxide (CO2), and water vapor fluxes between the land surface and the boundary layer using the eddy covariance method has many applications across several disciplines. Three eddy flux towers have been established over no-till winter wheat (Triticum aestivum L.), and native and improved pastures at the USDA ARS Grazinglands Research Laboratory, El Reno, OK. An additional tower will be established in fall 2014 over till winter wheat. Each flux site is equipped with an eddy covariance system, PhenoCam, COSMOS, and in-situ observations of soil and atmospheric state variables. The objective of this research is to measure, compare, and model the land-atmosphere exchange of CO2, water vapor, and CH4 in different land cover types and management practices (till vs no-till, grazing vs no-grazing, native vs improved pasture). Models that focus on net ecosystem CO2 exchange (NEE), gross primary production (GPP), evapotranspiration (ET), and CH4 fluxes can be improved by the cross verification of these measurements. Another application will be to link the in-situ measurements with satellite remote sensing in order to scale up flux measurements from small spatial scales to local and regional scales. Preliminary data analysis from the native grassland site revealed that the CH4 flux was negligible (~0) and increased significantly when cattle were introduced into the site. In summer 2014, daily ET magnitude was about 4-5 mm day-1 and the NEE magnitude was 4-5 g C day-1 at the native grassland site. Further analysis of data for all the sites over longer temporal periods will enhance understanding of the biotic and abiotic factors that govern carbon, water, and energy exchanges between the land surface and atmosphere under different land cover and management systems. The research findings will help predict the responses of these ecosystems to management practices and global environmental change in the future.
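The eddy covariance method underlying these towers computes the turbulent flux as the covariance of vertical wind speed and the scalar concentration. A minimal sketch, using synthetic numbers rather than data from the El Reno towers:

```python
# Minimal sketch of the eddy-covariance flux calculation: the turbulent flux
# of a scalar is the mean product of the fluctuations of vertical wind (w)
# and scalar concentration (c) about their means. The sample values below
# are synthetic, not measurements from the towers described in the abstract.

def eddy_flux(w, c):
    """Covariance of w and c: mean of (w - wbar) * (c - cbar)."""
    n = len(w)
    wbar = sum(w) / n
    cbar = sum(c) / n
    return sum((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c)) / n

w = [0.1, -0.2, 0.3, -0.1, 0.2, -0.3]           # vertical wind fluctuations, m/s
c = [400.2, 399.9, 400.4, 400.0, 400.3, 399.8]  # CO2 mixing ratio, ppm
# Positive flux -> net upward transport (emission); negative -> uptake.
print(eddy_flux(w, c))
```

In practice the averaging runs over high-frequency (e.g. 10-20 Hz) samples within a 30-minute window, with additional corrections, but the covariance above is the core of the method.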
Vascular Patterns in Iguanas and Other Squamates: Blood Vessels and Sites of Thermal Exchange.
Porter, William Ruger; Witmer, Lawrence M
2015-01-01
Squamates use the circulatory system to regulate body and head temperatures during both heating and cooling. The flexibility of this system, which possibly exceeds that of endotherms, offers a number of physiological mechanisms to gain or retain heat (e.g., increase peripheral blood flow and heart rate, cooling the head to prolong basking time for the body) as well as to shed heat (modulate peripheral blood flow, expose sites of thermal exchange). Squamates also have the ability to establish and maintain the same head-to-body temperature differential that birds, crocodilians, and mammals demonstrate, but without a discrete rete or other vascular physiological device. Squamates offer important anatomical and phylogenetic evidence for the inference of the blood vessels of dinosaurs and other extinct archosaurs in that they shed light on the basal diapsid condition. Given this basal positioning, squamates likewise inform and constrain the range of physiological thermoregulatory mechanisms that may have been found in Dinosauria. Unfortunately, the literature on squamate vascular anatomy is limited. Cephalic vascular anatomy of green iguanas (Iguana iguana) was investigated using a differential-contrast, dual-vascular injection (DCDVI) technique and high-resolution X-ray microcomputed tomography (μCT). Blood vessels were digitally segmented to create a surface representation of vascular pathways. Known sites of thermal exchange, consisting of the oral, nasal, and orbital regions, were given special attention due to their role in brain and cephalic thermoregulation. Blood vessels to and from sites of thermal exchange were investigated to detect conserved vascular patterns and to assess their ability to deliver cooled blood to the dural venous sinuses. Arteries within sites of thermal exchange were found to deliver blood directly and through collateral pathways. The venous drainage was found to have multiple pathways that could influence neurosensory tissue temperature
Deterministic Bilinear System Identification
NASA Astrophysics Data System (ADS)
Lee, Cheh-Han; Juang, Jer-Nan
2013-12-01
A unified identification method is proposed for the realization of deterministic continuous-time/discrete-time bilinear models from input and output measurement data. A generalized Hankel matrix is formed with the output measurements obtained by applying a set of repeated input sequences to a bilinear system. A computational procedure is developed to extract a time-varying discrete-time state-space model from the generalized Hankel matrix. The bilinear system models are realized by transforming the identified time-varying discrete-time model to the bilinear models. Numerical simulations are given to show the effectiveness of the proposed identification method.
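The Hankel-matrix step is the heart of such realization procedures: shifted output measurements are stacked into a matrix whose anti-diagonals are constant. The sketch below shows the generic scalar construction; the paper's "generalized" Hankel matrix, built from repeated input experiments on a bilinear system, follows the same stacking idea but with block entries.

```python
# Sketch of the generic Hankel-matrix construction used in system
# realization: H[i][j] = y[i + j], so every anti-diagonal is constant.
# This is the plain scalar version; the paper's generalized block-Hankel
# matrix from repeated input sequences extends the same idea.

def hankel(y, rows, cols):
    """rows x cols Hankel matrix from the output sequence y."""
    assert rows + cols - 1 <= len(y), "not enough output samples"
    return [[y[i + j] for j in range(cols)] for i in range(rows)]

y = [1, 2, 3, 4, 5, 6]   # synthetic output sequence
for row in hankel(y, 3, 4):
    print(row)
# [1, 2, 3, 4]
# [2, 3, 4, 5]
# [3, 4, 5, 6]
```

In subspace/realization methods, a rank factorization (e.g. SVD) of this matrix then yields the state-space model; that step is omitted here.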
Chengjiang Mao
1996-12-31
In typical AI systems, we employ so-called non-deterministic reasoning (NDR), which resorts to systematic search with backtracking in the search spaces defined by knowledge bases (KBs). A notable property of NDR is that it facilitates programming, especially for difficult AI problems such as natural language processing, for which it is hard to find algorithms that tell computers what to do at every step. However, the poor efficiency of NDR remains an open problem. Our work aims at overcoming this efficiency problem.
NASA Astrophysics Data System (ADS)
Zhang, Jingjing; Kitova, Elena N.; Li, Jun; Eugenio, Luiz; Ng, Kenneth; Klassen, John S.
2016-01-01
The application of hydrogen/deuterium exchange mass spectrometry (HDX-MS) to localize ligand binding sites in carbohydrate-binding proteins is described. Proteins from three bacterial toxins, the B subunit homopentamers of Cholera toxin and Shiga toxin type 1 and a fragment of Clostridium difficile toxin A, and their interactions with native carbohydrate receptors, GM1 pentasaccharides (β-Gal-(1→3)-β-GalNAc-(1→4)[α-Neu5Ac-(2→3)]-β-Gal-(1→4)-Glc), Pk trisaccharide (α-Gal-(1→4)-β-Gal-(1→4)-Glc) and CD-grease (α-Gal-(1→3)-β-Gal-(1→4)-β-GlcNAcO(CH2)8CO2CH3), respectively, served as model systems for this study. Comparison of the differences in deuterium uptake for peptic peptides produced in the absence and presence of ligand revealed regions of the proteins that are protected against deuterium exchange upon ligand binding. Notably, protected regions generally coincide with the carbohydrate binding sites identified by X-ray crystallography. However, ligand binding can also result in increased deuterium exchange in other parts of the protein, presumably through allosteric effects. Overall, the results of this study suggest that HDX-MS can serve as a useful tool for localizing the ligand binding sites in carbohydrate-binding proteins. However, a detailed interpretation of the changes in deuterium exchange upon ligand binding can be challenging because of the presence of ligand-induced changes in protein structure and dynamics.
NASA Astrophysics Data System (ADS)
Wright, L. Paige; Zhang, Leiming
2015-03-01
The bidirectional air-surface exchange of gaseous elemental mercury (GEM) and existing measurements of compensation points over a variety of canopy types are reviewed. Deposition and emission of GEM depend on several factors such as the type of canopy, temperature, season, atmospheric GEM concentrations, and meteorological conditions, with compensation points varying between 0.5 and 33 ng m-3. Emissions tend to increase from the spring to summer seasons as GEM accumulates in the foliage of the vegetation. A strong dependence on solar radiation has been observed, with higher emissions under light conditions. A bidirectional air-surface exchange flux model is proposed for estimating GEM fluxes at a two-hourly time resolution for the National Atmospheric Deposition Program's Atmospheric Mercury Network (AMNet) sites. Compared to the unidirectional dry deposition model used in Zhang et al. (2012), two additional parameters, stomatal and soil emission potential, were needed in the bidirectional model and were chosen based on knowledge gained in the literature review and model sensitivity test results. Application of this bidirectional model to AMNet sites has produced annual net deposition fluxes comparable to those estimated in Zhang et al. (2012) at the majority of the sites. In this study, the net GEM dry deposition has been estimated separately for each dominant land use type surrounding each site, and this approach is also recommended for future calculations for easy application of the results to assessments of mercury effects on various ecosystems.
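The compensation-point concept behind any bidirectional scheme can be sketched in a few lines: the net flux is proportional to the difference between the ambient concentration and the surface compensation point, divided by a total transfer resistance. The function and parameter values below are illustrative assumptions, not the parameterization of the model described in this abstract.

```python
# Hedged sketch of the compensation-point idea in a bidirectional exchange
# model: flux ~ -(C_air - C_comp) / R. Deposition is negative by convention.
# Resistance and concentration values are illustrative assumptions only;
# the actual AMNet model uses stomatal and soil emission potentials per
# land use type.

def bidirectional_flux(c_air, c_comp, resistance):
    """Net flux (deposition negative) in concentration units * m/s."""
    return -(c_air - c_comp) / resistance

# Ambient GEM above the compensation point -> net deposition (negative flux).
print(bidirectional_flux(c_air=1.6, c_comp=1.0, resistance=100.0))
# Ambient GEM below the compensation point -> net emission (positive flux).
print(bidirectional_flux(c_air=0.5, c_comp=1.0, resistance=100.0))
```

This sign structure is what lets a single formulation capture both the deposition and emission regimes the review documents.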
O2 activation by binuclear Cu sites: Noncoupled versus exchange coupled reaction mechanisms
NASA Astrophysics Data System (ADS)
Chen, Peng; Solomon, Edward I.
2004-09-01
Binuclear Cu proteins play vital roles in O2 binding and activation in biology and can be classified into coupled and noncoupled binuclear sites based on the magnetic interaction between the two Cu centers. Coupled binuclear Cu proteins include hemocyanin, tyrosinase, and catechol oxidase. These proteins have two Cu centers strongly magnetically coupled through direct bridging ligands that provide a mechanism for the 2-electron reduction of O2 to a μ-η²:η² side-on peroxide bridged species. This side-on bridged peroxo-CuII2 species is activated for electrophilic attack on the phenolic ring of substrates. Noncoupled binuclear Cu proteins include peptidylglycine α-hydroxylating monooxygenase and dopamine β-monooxygenase. These proteins have binuclear Cu active sites that are distant, that exhibit no exchange interaction, and that activate O2 at a single Cu center to generate a reactive CuII/O2 species for H-atom abstraction from the C-H bond of substrates. O2 intermediates in the coupled binuclear Cu enzymes can be trapped and studied spectroscopically. Possible intermediates in noncoupled binuclear Cu proteins can be defined through correlation to mononuclear CuII/O2 model complexes. The different intermediates in these two classes of binuclear Cu proteins exhibit different reactivities that correlate with their different electronic structures and exchange coupling interactions between the binuclear Cu centers. These studies provide insight into the role of exchange coupling between the Cu centers in their reaction mechanisms.
An antibody binding site on cytochrome c defined by hydrogen exchange and two-dimensional NMR
Paterson, Y.; Englander, S.W.; Roder, H.
1990-08-17
The interaction of a protein antigen, horse cytochrome c (cyt c), with a monoclonal antibody has been studied by hydrogen-deuterium (H-D) exchange labeling and two-dimensional nuclear magnetic resonance (2D NMR) methods. The H-exchange rate of residues in three discontiguous regions of the cyt c polypeptide backbone was slowed by factors up to 340-fold in the antibody-antigen complex compared with free cyt c. The protected residues, 36 to 38, 59, 60, 64 to 67, 100, and 101, and their hydrogen-bond acceptors, are brought together in the three-dimensional structure to form a contiguous, largely exposed protein surface with an area of about 750 square angstroms. The interaction site determined in this way is consistent with prior epitope mapping studies and includes several residues that were not previously identified. The hydrogen exchange labeling approach can be used to map binding sites on small proteins in antibody-antigen complexes and may be applicable to protein-protein and protein-ligand interactions in general.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
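The qualitative difference the abstract describes, a soft stochastic encoder for IB versus a hard deterministic one for DIB, can be sketched with a KL-divergence cost. The function names are mine and the formulation is simplified (the full DIB objective also includes a log-prior term over clusters); this is an illustration of the soft-versus-hard distinction, not the paper's algorithm.

```python
# Simplified sketch of the IB vs DIB encoder distinction. Cost of assigning
# input x to cluster t is the KL divergence between p(y|x) and the cluster's
# predictive distribution p(y|t). IB softmaxes over this cost; DIB picks the
# argmin. The full DIB update also weights clusters by log q(t) -- omitted
# here for brevity, so this is an illustrative simplification.

import math

def kl(p, q):
    """KL divergence between two discrete distributions (same support)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ib_soft_encoder(p_y_given_x, p_y_given_t, beta):
    """Soft (stochastic) assignment: q(t|x) ~ exp(-beta * KL cost)."""
    w = [math.exp(-beta * kl(p_y_given_x, pt)) for pt in p_y_given_t]
    z = sum(w)
    return [wi / z for wi in w]

def dib_hard_encoder(p_y_given_x, p_y_given_t):
    """Deterministic assignment: index of the cluster with minimal KL cost."""
    costs = [kl(p_y_given_x, pt) for pt in p_y_given_t]
    return costs.index(min(costs))

p_y_given_x = [0.8, 0.2]                 # what x predicts about y
clusters = [[0.7, 0.3], [0.2, 0.8]]      # the two clusters' predictions
print(ib_soft_encoder(p_y_given_x, clusters, beta=2.0))  # soft weights
print(dib_hard_encoder(p_y_given_x, clusters))           # -> 0
```

As beta grows, the softmax weights concentrate on the argmin cluster, which is one way to see how the deterministic encoder emerges as a limit of the stochastic one.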
Sulfate-chloride exchange by lobster hepatopancreas is regulated by pH-sensitive modifier sites
Cattey, M.A.; Ahearn, G.A.; Gerencser, G.A. (Univ. of Florida, Gainesville)
1991-03-15
³⁵SO₄²⁻ uptake by Atlantic lobster (Homarus americanus) hepatopancreatic epithelial brush border membrane vesicles (BBMV) was stimulated by internal Cl⁻, but not by internal HCO₃⁻ or external Na⁺. Sulfate-chloride exchange was stimulated by inside-positive, and inhibited by inside-negative, transmembrane K⁺ diffusion potentials. ³⁵SO₄²⁻-Cl⁻ exchange was strongly inhibited by 4,4′-diisothiocyanostilbene-2,2′-disulfonic acid (DIDS), 4-acetamido-4′-isothiocyanostilbene-2,2′-disulfonic acid (SITS), and thiosulfate. Chloride, bicarbonate, furosemide, and bumetanide slightly, yet significantly, cis-inhibited ³⁵SO₄²⁻-Cl⁻ exchange. Altering bilateral pH from 8.0 to 5.4 stimulated ³⁵SO₄²⁻-Cl⁻ exchange when vesicles were loaded with Cl⁻, but reduced bilateral pH alone or the presence of pH gradients did not affect ³⁵SO₄²⁻ transport in the absence of internal Cl⁻. ³⁶Cl uptake into SO₄²⁻-loaded BBMV was stimulated by an internal negative membrane potential and inhibited when the interior was electrically positive. A model is proposed in which SO₄²⁻-Cl⁻ exchange is regulated by internal and external pH-sensitive modifier sites on the anion antiporter and by coupling to the electrogenic 2 Na⁺/1 H⁺ antiporter on the same membrane.
Deterministic Switching in Bismuth Ferrite Nanoislands.
Morelli, Alessio; Johann, Florian; Burns, Stuart R; Douglas, Alan; Gregg, J Marty
2016-08-10
We report deterministic selection of the polarization variant in bismuth ferrite (BiFeO3) nanoislands via a two-step scanning probe microscopy procedure. The polarization orientation in a nanoisland is toggled to the desired variant after a reset operation by scanning a conductive atomic force probe in contact over the surface while a bias is applied. The final polarization variant is determined by the direction of the inhomogeneous in-plane trailing field associated with the moving probe tip. This work provides the framework for better control of switching in rhombohedral ferroelectrics and for a deeper understanding of exchange coupling in multiferroic nanoscale heterostructures toward the realization of magnetoelectric devices. PMID:27454612
Site-specific transformation of Drosophila via phiC31 integrase-mediated cassette exchange.
Bateman, Jack R; Lee, Anne M; Wu, C-ting
2006-06-01
Position effects can complicate transgene analyses. This is especially true when comparing transgenes that have inserted randomly into different genomic positions and are therefore subject to varying position effects. Here, we introduce a method for the precise targeting of transgenic constructs to predetermined genomic sites in Drosophila using the φC31 integrase system in conjunction with recombinase-mediated cassette exchange (RMCE). We demonstrate the feasibility of this system using two donor cassettes, one carrying the yellow gene and the other carrying GFP. At all four genomic sites tested, we observed exchange of donor cassettes with an integrated target cassette carrying the mini-white gene. Furthermore, because RMCE-mediated integration of the donor cassette is necessarily accompanied by loss of the target cassette, we were able to identify integrants simply by the loss of mini-white eye color. Importantly, this feature of the technology will permit integration of unmarked constructs into Drosophila, even those lacking functional genes. Thus, φC31 integrase-mediated RMCE should greatly facilitate transgene analysis as well as permit new experimental designs. PMID:16547094
Controlling factors of biosphere-atmosphere ammonia exchange at a semi-natural peatland site
NASA Astrophysics Data System (ADS)
Brummer, C.; Richter, U.; Smith, J. J.; Delorme, J. P.; Kutsch, W. L.
2014-12-01
Recent advancements in laser spectrometry offer new opportunities to investigate the net biosphere-atmosphere exchange of ammonia. During a three-month field campaign from February to May 2014, we tested the performance of a quantum cascade laser within an eddy-covariance setup. The laser was operated at a semi-natural peatland site that is surrounded by highly fertilized agricultural land and intensive livestock production (~1 km distance). Ammonia concentrations were highly variable between 2 and almost 100 ppb with an average value of 15 ppb. Different concentration patterns could be identified. The variability was closely linked to the timing of management practices and the prevailing local climate, particularly wind direction, temperature and surface wetness, with the latter indicating higher non-stomatal uptake under wet conditions leading to decreased concentrations. Average ammonia fluxes were around -15 ng N m-2 s-1 at the beginning of the campaign in February and shifted towards a neutral average exchange regime of -1 to 0 ng N m-2 s-1 in April and May. Intriguingly, during the time of decreasing ammonia uptake, concentrations were rising considerably, which clearly indicated N saturation in the predominant vegetation such as bog heather, purple moor-grass, and cotton grass. The cumulative net uptake for the period of investigation was ~300 g N ha-1. This stresses the importance of a thorough method inter-comparison, e.g. with denuder systems in combination with dry deposition modeling. As previous results from the latter methods showed an annual uptake of ~9 kg N ha-1 for the same site, the implementation of adequate ammonia compensation point parameterizations becomes crucial in surface-atmosphere exchange schemes for bog vegetation. Through their high temporal resolution, robustness and continuous measurement mode, quantum cascade lasers will help assess the effects of atmospheric N loads on vulnerable N-limited ecosystems such as peatlands.
Li, Zhongsen; Moon, Bryan P; Xing, Aiqiu; Liu, Zhan-Bin; McCardell, Richard P; Damude, Howard G; Falco, S Carl
2010-10-01
Recombinase-mediated DNA cassette exchange (RMCE) has been successfully used to insert transgenes at previously characterized genomic sites in plants. Following the same strategy, groups of transgenes can be stacked to the same site through multiple rounds of RMCE. A gene-silencing cassette, designed to simultaneously silence soybean (Glycine max) genes fatty acid ω-6 desaturase 2 (FAD2) and acyl-acyl carrier protein thioesterase 2 (FATB) to improve oleic acid content, was first inserted by RMCE at a precharacterized genomic site in soybean. Selected transgenic events were subsequently retransformed with the second DNA construct containing a Yarrowia lipolytica diacylglycerol acyltransferase gene (DGAT1) to increase oil content by the enhancement of triacylglycerol biosynthesis and three other genes, a Corynebacterium glutamicum dihydrodipicolinate synthetase gene (DHPS), a barley (Hordeum vulgare) high-lysine protein gene (BHL8), and a truncated soybean cysteine synthase gene (CGS), to improve the contents of the essential amino acids lysine and methionine. Molecular characterization confirmed that the second RMCE successfully stacked the four overexpression cassettes to the previously integrated FAD2-FATB gene-silencing cassette. Phenotypic analyses indicated that all the transgenes expressed expected phenotypes. PMID:20720171
Impact: a case study examining the closure of a large urban fixed site needle exchange in Canada
2010-01-01
Introduction In 2008, one of the oldest fixed site needle exchanges in a large urban city in Canada was closed due to community pressure. This service had been in existence for over 20 years. Case Description This case study focuses on the consequences of the switch to mobile needle exchange services immediately after the closure and examines the impact of the closure on changes in risk behavior related to drug use, needle distribution and access to services. The context surrounding the closure was also examined. Discussion and Evaluation After the closure of the fixed site exchange, access to needle exchange services decreased, as evidenced by the sharp decline in the number of clients reached and the numbers of needles distributed and collected monthly. Reports related to needle reuse and the selling of syringes suggest changes in risk behaviors. Thousands of needles remain unaccounted for in the community. To date, a new fixed site has not been found. Conclusion Closing the fixed site needle exchange had an adverse effect on already vulnerable clients and reduced access to comprehensive harm reduction services. While official public policy supports a fixed site, politicization of the issue has meant a significant setback for harm reduction, with reduced potential to meet public health targets related to reducing the spread of blood-borne diseases. This situation is unacceptable from a public health perspective. PMID:20500870
Haghighat-Khah, Roya Elaine; Scaife, Sarah; Martins, Sara; St John, Oliver; Matzen, Kelly Jean; Morrison, Neil; Alphey, Luke
2015-01-01
Genetically engineered insects are being evaluated as potential tools to decrease the economic and public health burden of mosquitoes and agricultural pest insects. Here we describe a new tool for the reliable and targeted genome manipulation of pest insects for research and field release using recombinase mediated cassette exchange (RMCE) mechanisms. We successfully demonstrated the established ΦC31-RMCE method in the yellow fever mosquito, Aedes aegypti, which is the first report of RMCE in mosquitoes. A new variant of this RMCE system, called iRMCE, combines the ΦC31-att integration system and Cre or FLP-mediated excision to remove extraneous sequences introduced as part of the site-specific integration process. Complete iRMCE was achieved in two important insect pests, Aedes aegypti and the diamondback moth, Plutella xylostella, demonstrating the transferability of the system across a wide phylogenetic range of insect pests. PMID:25830287
Liu, Chongxuan; Zachara, John M; Smith, Steve C
2004-02-01
A theoretical and experimental study of cation exchange in high ionic strength electrolytes was performed using pristine subsurface sediments from the U.S. Department of Energy Hanford site. These sediments are representative of the site contaminated sediments impacted by release of high level waste (HLW) solutions containing 137Cs+ in NaNO3 brine. The binary exchange behavior of Cs+-Na+, Cs+-K+, and Na+-K+ was measured over a range in electrolyte concentration. Vanselow selectivity coefficients (Kv) that were calculated from the experimental data using Pitzer model ion activity corrections for aqueous species showed monotonic increases with increasing electrolyte concentrations. The influence of electrolyte concentration was greater on the exchange of Na+-Cs+ than K+-Cs+, an observation consistent with the differences in ion hydration energy of the exchanging cations. A previously developed two-site ion exchange model [Geochimica et Cosmochimica Acta 66 (2002) 193] was modified to include solvent (water) activity changes in the exchanger phase through application of the Gibbs-Duhem equation. This water activity-corrected model described well the ionic strength effect on binary Cs+ exchange, and was extended to the ternary exchange system of Cs+-Na+-K+ on the pristine sediment. The model was also used to predict 137Cs+ distribution between sediment and aqueous phase (Kd) beneath a leaked HLW tank in Hanford's S-SX tank farm using the analytical aqueous data from the field and the binary ion exchange coefficients for the pristine sediment. The Kd predictions closely followed the trend in the field data and were improved by consideration of water activity effects that were considerable in certain regions of the vadose zone plume. PMID:14734247
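As an illustration of the quantity central to the abstract above, the following sketch computes a Vanselow selectivity coefficient for a homovalent binary exchange (e.g., Cs+ displacing Na+). The activity coefficients are placeholders standing in for the Pitzer-model corrections used in the study, and all numerical values are hypothetical.

```python
# Illustrative Vanselow selectivity coefficient (Kv) for the homovalent binary
# exchange  A+ + B-X  <=>  A-X + B+  (e.g. Cs+ displacing Na+ on a sediment).
# gamma_A/gamma_B are placeholder activity coefficients standing in for the
# Pitzer-model corrections used in the paper; input values are hypothetical.

def vanselow_kv(x_AX, x_BX, m_A, m_B, gamma_A=1.0, gamma_B=1.0):
    """Kv = (x_AX * a_B) / (x_BX * a_A) for uni-univalent exchange.

    x_AX, x_BX : mole fractions of A and B on the exchanger (sum to 1)
    m_A, m_B   : aqueous molalities of A+ and B+
    gamma_A/B  : aqueous activity coefficients for A+ and B+
    """
    a_A = gamma_A * m_A  # aqueous activity of A+
    a_B = gamma_B * m_B  # aqueous activity of B+
    return (x_AX * a_B) / (x_BX * a_A)

# Example: the exchanger strongly prefers trace A+ (like Cs+) even though the
# solution is dominated by B+ (like Na+), giving a large Kv.
kv = vanselow_kv(x_AX=0.6, x_BX=0.4, m_A=1e-4, m_B=1.0)
```

The study's finding that Kv rises with electrolyte concentration would enter here through the concentration dependence of the activity coefficients and the exchanger-phase water activity.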
Deterministic methods in radiation transport
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4-5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Multiparty Controlled Deterministic Secure Quantum Communication Through Entanglement Swapping
NASA Astrophysics Data System (ADS)
Dong, Li; Xiu, Xiao-Ming; Gao, Ya-Jun; Chi, Feng
A three-party controlled deterministic secure quantum communication scheme through entanglement swapping is first proposed. In the scheme, the sender needs to prepare a class of Greenberger-Horne-Zeilinger (GHZ) states, which are used as the quantum channel. The two communicators may communicate securely under the control of the controller if the quantum channel is safe. The roles of the sender, the receiver, and the controller can be exchanged owing to the symmetry of the quantum channel. Unlike other controlled quantum secure communication schemes, this scheme requires less additional classical information for transferring the secret information. Finally, it is generalized to a multiparty controlled deterministic secure quantum communication scheme.
Shape-Controlled Deterministic Assembly of Nanowires.
Zhao, Yunlong; Yao, Jun; Xu, Lin; Mankin, Max N; Zhu, Yinbo; Wu, Hengan; Mai, Liqiang; Zhang, Qingjie; Lieber, Charles M
2016-04-13
Large-scale, deterministic assembly of nanowires and nanotubes with rationally controlled geometries could expand the potential applications of one-dimensional nanomaterials in bottom-up integrated nanodevice arrays and circuits. Control of the positions of straight nanowires and nanotubes has been achieved using several assembly methods, although simultaneous control of position and geometry has not been realized. Here, we demonstrate a new concept combining simultaneous assembly and guided shaping to achieve large-scale, high-precision, shape-controlled deterministic assembly of nanowires. We lithographically pattern U-shaped trenches and then shear-transfer nanowires to the patterned substrate wafers, where the trenches serve to define the positions and shapes of the transferred nanowires. Studies using semicircular trenches defined by electron-beam lithography yielded U-shaped nanowires with radii of curvature defined by the inner surface of the trenches. Wafer-scale deterministic assembly produced U-shaped nanowires at >430,000 sites with a yield of ∼90%. In addition, mechanistic studies and simulations demonstrate that shaping results primarily in elastic deformation of the nanowires and clearly show the diameter-dependent limits achievable for accessible forces. Last, this approach was used to assemble U-shaped three-dimensional nanowire field-effect transistor bioprobe arrays containing 200 individually addressable nanodevices. By combining the strengths of wafer-scale top-down fabrication with the diverse and tunable properties of one-dimensional building blocks in novel structural configurations, shape-controlled deterministic nanowire assembly is expected to enable new applications in many areas, including nanobioelectronics and nanophotonics. PMID:26999059
Deterministic multidimensional nonuniform gap sampling
NASA Astrophysics Data System (ADS)
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
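As a concrete illustration of deterministic gap sampling, the sketch below builds a sampling schedule whose gaps grow smoothly (sinusoidally) across the Nyquist grid, mimicking the average behavior of Poisson-gap sampling. This is not the gap equation from the paper; the sine weighting, the bisection tuning of the amplitude, and all parameter choices are assumptions made for illustration.

```python
import math

def deterministic_gap_schedule(grid_size, target, max_iter=60):
    """Deterministic nonuniform sampling sketch: gaps between sampled grid
    points follow a sine weighting (small gaps early, large gaps late),
    mimicking the average behaviour of Poisson-gap sampling. The amplitude
    lam is tuned by bisection so roughly `target` points are returned.
    Illustrative scheme only, not the gap equation derived in the paper.
    """
    def sample(lam):
        points, x = [], 0.0
        while x < grid_size and len(points) < grid_size:
            points.append(int(x))
            # gap >= 1; the sine term enlarges gaps toward the end of the grid
            x += 1.0 + lam * math.sin((math.pi / 2.0) * (x + 0.5) / grid_size)
        return points

    lo, hi = 0.0, 4.0 * grid_size / max(target, 1)
    pts = sample(lo)
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        pts = sample(lam)
        if len(pts) == target:
            break
        if len(pts) > target:
            lo = lam   # too many points: enlarge the gaps
        else:
            hi = lam   # too few points: shrink the gaps
    return pts

# 25% sampling of a 256-point grid, fully deterministic (no random seed).
schedule = deterministic_gap_schedule(grid_size=256, target=64)
```

Because the schedule is deterministic, rerunning it always yields the same point set, which is the reproducibility advantage the abstract highlights over randomly drawn schemes.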
NASA Astrophysics Data System (ADS)
Yang, Li-Jun; Cao, Jun-Peng; Yang, Wen-Li
2015-10-01
We propose an integrable spin-1/2 Heisenberg model in which the exchange couplings and Dzyaloshinsky-Moriya interactions are site-dependent. By employing the quantum inverse scattering method, we obtain the eigenvalues and the Bethe ansatz equation of the system with the periodic boundary condition. Furthermore, we obtain the exact solution and study the boundary effect of the system with the anti-periodic boundary condition via the off-diagonal Bethe ansatz. The operator identities of the transfer matrix at the inhomogeneous points are proved at the operator level, and the T-Q relation is constructed from them. From this relation we obtain the energy spectrum of the system. The corresponding eigenstates are also constructed. We find an interesting coherent state that is induced by the topological boundary. Project supported by the National Natural Science Foundation of China (Grant Nos. 11174335, 11375141, 11374334, and 11434013) and the National Program for Basic Research of China and the Fund from the Chinese Academy of Sciences.
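To make the class of Hamiltonian concrete, the sketch below exact-diagonalizes a small spin-1/2 chain with site-dependent exchange couplings and z-axis Dzyaloshinsky-Moriya (DM) terms under periodic boundary conditions. It does not reproduce the paper's Bethe-ansatz solution; the coupling values and the z-axis choice for the DM vectors are illustrative assumptions.

```python
import numpy as np

# Brute-force exact diagonalization of a small spin-1/2 chain with
# site-dependent exchange J[j] and z-axis DM terms D[j], periodic boundaries.
# Illustrates the Hamiltonian only; the Bethe-ansatz machinery of the paper
# is not reproduced, and the couplings below are hypothetical.

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def op_at(op, j, N):
    """Embed a single-site operator at site j of an N-site chain."""
    mats = [I2] * N
    mats[j] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def heisenberg_dm(N, J, D):
    """H = sum_j [ J[j] S_j.S_{j+1} + D[j] (Sx_j Sy_{j+1} - Sy_j Sx_{j+1}) ]."""
    H = np.zeros((2**N, 2**N), dtype=complex)
    for j in range(N):
        k = (j + 1) % N  # periodic boundary condition
        for a, b, c in [(sx, sx, J[j]), (sy, sy, J[j]), (sz, sz, J[j]),
                        (sx, sy, D[j]), (sy, sx, -D[j])]:
            H += c * op_at(a, j, N) @ op_at(b, k, N)
    return H

N = 6
J = [1.0 + 0.1 * j for j in range(N)]   # hypothetical site-dependent couplings
D = [0.3] * N                           # hypothetical DM strength
H = heisenberg_dm(N, J, D)
energies = np.linalg.eigvalsh(H)        # real spectrum of the Hermitian H
```

For chains of this size the full spectrum is available directly; the value of the Bethe-ansatz solution in the paper is that it gives the spectrum analytically for arbitrary length.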
Lee, Hong Jo; Lee, Hyung Chul; Kim, Young Min; Hwang, Young Sun; Park, Young Hyun; Park, Tae Sub; Han, Jae Yong
2016-02-01
Targeted genome recombination has been applied in diverse research fields and has a wide range of possible applications. In particular, the discovery of specific loci in the genome that support robust and ubiquitous expression of integrated genes and the development of genome-editing technology have facilitated rapid advances in various scientific areas. In this study, we produced transgenic (TG) chickens that can undergo recombinase-mediated gene cassette exchange (RMCE), one of the site-specific recombination technologies, and confirmed RMCE in TG chicken-derived cells. As a result, we established TG chicken lines that have flippase (Flp) recognition target (FRT) pairs in the chicken genome, mediated by piggyBac transposition. The transgene integration patterns were diverse in each TG chicken line, and this integration diversity resulted in diverse levels of expression of exogenous genes in each tissue of the TG chickens. In addition, the replaced gene cassette was expressed successfully and maintained by RMCE at the FRT-predominant loci of TG chicken-derived cells. These results indicate that targeted genome recombination technology with RMCE could be adaptable to TG chicken models and that the technology would be applicable to specific gene regulation by cis-element insertion and customized expression of functional proteins at predicted levels without epigenetic influence. PMID:26443821
Leclerc, Monique Y.
2014-11-17
This final report presents the main activities and results of the project “A Carbon Flux Super Site: New Insights and Innovative Atmosphere-Terrestrial Carbon Exchange Measurements and Modeling” from 10/1/2006 to 9/30/2014. It describes the new AmeriFlux tower site (Aiken) at the Savannah River Site (SC) and its instrumentation; long-term eddy-covariance, sodar, microbarograph, soil, and other measurements at the site; intensive tracer-experiment field campaigns at the Carbon Flux Super Site, SC, in 2009 and at the ARM-CF site, Lamont, OK; and experiments in Plains, GA. The main results on tracer experiments and modeling, on low-level jet characteristics and their impact on fluxes, on gravity waves and their influence on eddy fluxes, and other results are briefly described in the report.
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high- and low-density phases separated by a simple transition. If positions and velocities are continuous variables, the model shows self-organized criticality driven by the slowest car.
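The integer-valued case can be sketched as a deterministic cellular automaton on a ring: each car accelerates by one unit per step up to a maximum speed but never moves further than the gap to the car ahead. With no random braking the update is fully deterministic. The parameters and initial condition below are illustrative choices, not taken from the paper.

```python
# Minimal deterministic cellular-automaton traffic model on a ring road, in
# the spirit of the integer-valued model described: v <- min(v + 1, gap, v_max)
# with no random deceleration, so the dynamics are fully deterministic.

def step(positions, velocities, road_length, v_max=5):
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])  # cars by position
    new_v = list(velocities)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]                       # next car on ring
        gap = (positions[ahead] - positions[i] - 1) % road_length
        new_v[i] = min(velocities[i] + 1, gap, v_max)      # deterministic rule
    new_p = [(positions[i] + new_v[i]) % road_length for i in range(n)]
    return new_p, new_v

def simulate(n_cars, road_length, steps, v_max=5):
    positions = [i * (road_length // n_cars) for i in range(n_cars)]
    velocities = [0] * n_cars
    for _ in range(steps):
        positions, velocities = step(positions, velocities, road_length, v_max)
    return positions, velocities

# Low density: every car accelerates to free flow at v_max.
p_low, v_low = simulate(n_cars=10, road_length=100, steps=50)
# High density: the gap constraint caps every car at a low common speed.
p_high, v_high = simulate(n_cars=50, road_length=100, steps=50)
```

Running the two densities side by side exhibits the high- and low-density phases the abstract refers to: free flow at v_max in the dilute case and congested flow limited by the gaps in the dense case.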
NASA Astrophysics Data System (ADS)
Willett, Roger D.; Gomez-Garcia, Carlos J.
2007-09-01
The title compound, Cu(TIM)CuBr4 (where TIM is a macrocyclic ligand), is a member of the Cu(TIM)MX4 family, which contains linear chain structures with ⋯Cu⋯X-M-X⋯Cu⋯X-M-X⋯ linkages. This chain structure defines an alternating-exchange/alternating-site 1D system. For M=Cu, alternating FM/AFM chains are formed with JFM > |JAFM|. Structural and magnetic data are presented, along with an analysis of the exchange pathways.
Duignan, M.; Nash, C.
2010-03-31
A principal goal at the Savannah River Site (SRS) is to safely dispose of the large volume of liquid nuclear waste held in many storage tanks. In-tank ion exchange (IX) columns are being considered for cesium removal. The spherical form of resorcinol formaldehyde ion exchange resin (sRF) is being evaluated for decontamination of dissolved saltcake waste at SRS, which is generally lower in potassium and organic components than Hanford waste. The sRF performance with SRS waste was evaluated in two phases: resin batch contacts and IX column testing with both simulated and actual dissolved salt waste. The tests, equipment, and results are discussed.
Trafficking of Na+/Ca2+ exchanger to the site of persistent inflammation in nociceptive afferents.
Scheff, Nicole N; Gold, Michael S
2015-06-01
Persistent inflammation results in an increase in the amplitude and duration of depolarization-evoked Ca(2+) transients in putative nociceptive afferents. Previous data indicated that these changes were the result of neither increased neuronal excitability nor an increase in the amplitude of depolarization. Subsequent data also ruled out an increase in voltage-gated Ca(2+) currents and recruitment of Ca(2+)-induced Ca(2+) release. Parametric studies indicated that the inflammation-induced increase in the duration of the evoked Ca(2+) transient required a relatively large and long-lasting increase in the concentration of intracellular Ca(2+) implicating the Na(+)/Ca(2+) exchanger (NCX), a major Ca(2+) extrusion mechanism activated with high intracellular Ca(2+) loads. The contribution of NCX to the inflammation-induced increase in the evoked Ca(2+) transient in rat sensory neurons was tested using fura-2 AM imaging and electrophysiological recordings. Changes in NCX expression and protein were assessed with real-time PCR and Western blot analysis, respectively. An inflammation-induced decrease in NCX activity was observed in a subpopulation of putative nociceptive neurons innervating the site of inflammation. The time course of the decrease in NCX activity paralleled that of the inflammation-induced changes in nociceptive behavior. The change in NCX3 in the cell body was associated with a decrease in NCX3 protein in the ganglia, an increase in the peripheral nerve (sciatic) yet no change in the central root. This single response to inflammation is associated with changes in at least three different segments of the primary afferent, all of which are likely to contribute to the dynamic response to persistent inflammation. PMID:26041911
NASA Astrophysics Data System (ADS)
Zhe Sun, Phillip; Livio Longo, Dario; Hu, Wei; Xiao, Gang; Wu, Renhua
2014-08-01
pH-sensitive chemical exchange saturation transfer (CEST) MRI holds great promise for in vivo applications. However, the CEST effect depends not only on the exchange rate, and hence pH, but also on the contrast agent concentration, which must be determined independently for pH quantification. Ratiometric CEST MRI normalizes the concentration effect by comparing CEST measurements of multiple labile protons to simplify pH determination. Iopamidol, a commonly used x-ray contrast agent, has been explored as a ratiometric CEST agent for imaging pH. However, iopamidol's CEST properties have not been fully determined, and their determination is important for optimization and quantification of iopamidol pH imaging. Our study numerically solved iopamidol's multi-site pH-dependent chemical exchange properties. We found that iopamidol CEST MRI is suitable for measuring pH between 6 and 7.5, even though T1 and T2 measurements varied substantially with pH and concentration. The pH MRI precision depended on pH and concentration: the standard deviation of pH determined from MRI was 0.2 and 0.4 pH units for 40 and 20 mM iopamidol solutions at pH 6, and it improved to less than 0.1 unit for pH above 7. Moreover, we determined base-catalyzed chemical exchange rates for the 2-hydroxypropanamido (ksw = 1.2 × 10^(pH-4.1)) and amide (ksw = 1.2 × 10^(pH-4.6)) protons, which are statistically different from each other (P < 0.01, ANCOVA); understanding of these should help guide in vivo translation of iopamidol pH imaging.
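The two base-catalyzed rate relations quoted in the abstract can be evaluated directly. Note that because both pools scale with the same agent concentration, a ratiometric comparison of their CEST effects cancels concentration while retaining pH sensitivity (through the nonlinear dependence of saturation efficiency on exchange rate); the raw rate ratio itself is a pH-independent constant, 10^0.5 ≈ 3.2.

```python
# The abstract's base-catalyzed exchange-rate relations for iopamidol's two
# labile proton pools (rates in s^-1):
#   2-hydroxypropanamido protons: ksw = 1.2 * 10**(pH - 4.1)
#   amide protons:                ksw = 1.2 * 10**(pH - 4.6)

def k_hydroxypropanamido(pH):
    return 1.2 * 10 ** (pH - 4.1)

def k_amide(pH):
    return 1.2 * 10 ** (pH - 4.6)

# Tabulate both rates over the usable range pH 6 to 7.5 reported in the paper.
rates = {pH: (k_hydroxypropanamido(pH), k_amide(pH))
         for pH in (6.0, 6.5, 7.0, 7.5)}
```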
NASA Astrophysics Data System (ADS)
McKinley, J. P.; Zachara, J. M.; Smith, S. C.; Liu, C.
2007-01-01
Nuclear waste that bore 90Sr2+ was accidentally leaked into the vadose zone at the Hanford site and was immobilized at relatively shallow depths in sediments containing little apparent clay or silt-sized components. Sr2+, 90Sr2+, Mg2+, and Ca2+ were desorbed, and the total inorganic carbon concentration was monitored, during the equilibration of this sediment with varying concentrations of Na+ and Ca2+. A cation exchange model previously developed for similar sediments was applied to these results as a predictor of final solution compositions. The model included binary exchange reactions for the four operant cations and an equilibrium dissolution/precipitation reaction for calcite. The model successfully predicted the desorption data. The contaminated sediment was also examined using digital autoradiography, a sensitive tool for imaging the distribution of radioactivity. The exchanger phase containing 90Sr was found to consist of smectite formed from weathering of mesostasis glass in basaltic lithic fragments. These clasts are a significant component of Hanford formation sands. The relatively small but significant cation exchange capacity of these sediments was thus a consequence of reaction with physically sequestered clays in sediment that contained essentially no fine-grained material. The nature of this exchange component explained the relatively slow (scale of days) evolution of desorption solutions. The experimental and model results indicated that there is little risk of migration of 90Sr2+ to the water table.
NASA Astrophysics Data System (ADS)
McKinley, J. P.; Zachara, J. M.; Smith, S. C.; Liu, C.
2005-12-01
Nuclear waste that bore 90Sr2+ was accidentally leaked into the vadose zone at the Hanford site, and was immobilized at relatively shallow depths in sediments containing little apparent clay or silt-sized components. We desorbed Sr2+, 90Sr2+, Mg2+, and Ca2+, and monitored total inorganic carbon concentration during the equilibration of this sediment with varying concentrations of Na+ and Ca2+. A cation exchange model previously developed for similar sediments was applied to these results as a predictor of final solution compositions. The model included binary exchange reactions for the four operant cations and an equilibrium dissolution/precipitation reaction for calcite. The model produced an excellent prediction for desorption data. We also examined the contaminated sediment using digital autoradiography, a sensitive tool for imaging the distribution of radioactivity. The exchanger phase containing 90Sr was found to consist of smectite formed from weathering of mesostasis glass in basaltic lithic fragments. These clasts are a significant component of Hanford formation sands. The relatively small but significant cation exchange capacity of these sediments was thus a consequence of reaction with physically sequestered clays in a sediment that contained essentially no fine-grained material. The nature of this exchange component explains the relatively slow (scale of days) evolution of desorption solutions. The experimental and model results indicate that there is little risk of migration of 90Sr2+ to the water table.
Deterministic relativistic quantum bit commitment
NASA Astrophysics Data System (ADS)
Adlam, Emily; Kent, Adrian
2015-06-01
We describe new unconditionally secure bit commitment schemes whose security is based on Minkowski causality and the monogamy of quantum entanglement. We first describe an ideal scheme that is purely deterministic, in the sense that neither party needs to generate any secret randomness at any stage. We also describe a variant that allows the committer to proceed deterministically, requires only local randomness generation from the receiver, and allows the commitment to be verified in the neighborhood of the unveiling point. We show that these schemes still offer near-perfect security in the presence of losses and errors, which can be made perfect if the committer uses an extra single random secret bit. We discuss scenarios where these advantages are significant.
Espada, Alfonso; Broughton, Howard; Jones, Spencer; Chalmers, Michael J; Dodge, Jeffrey A
2016-03-10
Computational assessment of the IL-17A structure identified two distinct binding pockets, the β-hairpin pocket and the α-helix pocket. The β-hairpin pocket was hypothesized to be the site of binding for peptide macrocycles. Support for this hypothesis was obtained using HDX-MS, which revealed protection from exchange only within the β-hairpin pocket. These data represent the first direct structural evidence of a small-molecule binding site on IL-17A that functions to disrupt the interaction with its receptor. PMID:26854023
NASA Astrophysics Data System (ADS)
Kaneta, Yasunori; Ishino, Shiori; Chen, Ying; Iwata, Shuichi; Iwase, Akihiro
2011-10-01
To clarify theoretically the relationship between magnetic properties and defect structure in the FeRh intermetallic compound, energy band calculations are performed based on density functional theory. Under the assumption that the majority defect structure is a site exchange between Fe and Rh atoms, the total energy for various magnetic structures is evaluated within a supercell of 2×2×2 cubic cells. For a site-exchange defect pair of nearest-neighbor Fe and Rh atoms at a density of 12.5%/f.u. (f.u.: formula unit), the total energy increases by 1.91 eV/pair in the anti-ferromagnetic structure and 0.88 eV/pair in the ferromagnetic structure. Although the anti-ferromagnetic structure is the stable state at low temperatures in defect-free FeRh, it becomes unstable as the site-exchange defect density increases. The threshold defect density to stabilize the ferromagnetic state is estimated to be 0.8%/f.u. This phenomenon is expected in ion-irradiated FeRh.
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines are affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes are expected to provide important competitive advantages for U.S. industry.
Site-directed point mutations in embryonic stem cells: a gene-targeting tag-and-exchange strategy.
Askew, G R; Doetschman, T; Lingrel, J B
1993-01-01
Sequential gene targeting was used to introduce point mutations into one alpha 2 isoform Na,K-ATPase homolog in mouse embryonic stem (ES) cells. In the first round of targeted replacement, the gene was tagged with selectable markers by insertion of a Neor/HSV-tk gene cassette, and this event was selected for by gain of neomycin (G418) resistance. In the second targeted replacement event, the tagged genomic sequence was exchanged with a vector consisting of homologous genomic sequences carrying five site-directed nucleotide substitutions. Embryonic stem cell clones modified by exchange with the mutation vector were selected for loss of the HSV-tk gene by resistance to ganciclovir. Candidate clones were further screened and identified by polymerase chain reaction and Southern blot analysis. By this strategy, the endogenous alpha 2 isoform Na,K-ATPase gene was altered to encode two other amino acids so that the enzyme is resistant to inhibition by cardiac glycosides while maintaining its transmembrane ion-pumping function. Since the initial tagging event and the subsequent mutation-exchange event are independent of one another, a tagged cell line can be used to generate a variety of mutant lines by exchange with various mutation vectors at the tagged locus. This method should be useful for testing specific mutations introduced into the genomes of tissue culture cells and animals and for developing animal models encompassing the mutational variability of known genetic disorders. PMID:8391633
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-06
...The Department of State is seeking Office of Management and Budget (OMB) approval for the information collection described below. The purpose of this notice is to allow 60 days for public comment in the Federal Register preceding submission to OMB. We are conducting this process in accordance with the Paperwork Reduction Act of 1995. Title of Information Collection: Exchange Programs......
NASA Astrophysics Data System (ADS)
Biederman, J. A.; Scott, R. L.; Goulden, M.
2014-12-01
Climate change is predicted to increase the frequency and severity of water limitation, altering terrestrial ecosystems and their carbon exchange with the atmosphere. Here we compare site-level temporal sensitivity of annual carbon fluxes to interannual variations in water availability against cross-site spatial patterns over a network of 19 eddy covariance flux sites. This network represents one order of magnitude in mean annual productivity and includes western North American desert shrublands and grasslands, savannahs, woodlands, and forests with continuous records of 4 to 12 years. Our analysis reveals site-specific patterns not identifiable in prior syntheses that pooled sites. We interpret temporal variability as an indicator of ecosystem response to annual water availability due to fast-changing factors such as leaf stomatal response and microbial activity, while cross-site spatial patterns are used to infer ecosystem adjustment to climatic water availability through slow-changing factors such as plant community and organic carbon pools. Using variance decomposition, we directly quantify how terrestrial carbon balance depends on slow- and fast-changing components of gross ecosystem production (GEP) and total ecosystem respiration (TER). Slow factors explain the majority of variance in annual net ecosystem production (NEP) across the dataset, and their relative importance is greater at wetter, forest sites than desert ecosystems. Site-specific offsets from spatial patterns of GEP and TER explain one third of NEP variance, likely due to slow-changing factors not directly linked to water, such as disturbance. TER and GEP are correlated across sites as previously shown, but our site-level analysis reveals surprisingly consistent linear relationships between these fluxes in deserts and savannahs, indicating fast coupling of TER and GEP in more arid ecosystems. Based on the uncertainty associated with slow and fast factors, we suggest a framework for improved
Deterministic scale-free networks
NASA Astrophysics Data System (ADS)
Barabási, Albert-László; Ravasz, Erzsébet; Vicsek, Tamás
2001-10-01
Scale-free networks are abundant in nature and society, describing such diverse systems as the world wide web, the web of human sexual contacts, or the chemical network of a cell. All models used to generate a scale-free topology are stochastic, that is they create networks in which the nodes appear to be randomly connected to each other. Here we propose a simple model that generates scale-free networks in a deterministic fashion. We solve exactly the model, showing that the tail of the degree distribution follows a power law.
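The deterministic construction can be sketched directly: at each iteration the current network is tripled by adding two copies of itself, and the bottom-level nodes of the new copies are wired to the root of the original. The choice of which nodes count as "bottom" follows my reading of the hierarchical model; treat the details as an illustrative assumption.

```python
# Sketch of a deterministic scale-free construction: each iteration triples
# the network (two fresh copies) and connects the bottom-level nodes of the
# copies to the root of the original, so hubs of ever-larger degree emerge
# without any randomness. Node count grows as 3^n.

def iterate(edges, n_nodes, bottom, root=0):
    new_edges = list(edges)
    new_bottom = []
    for copy in (1, 2):                                   # two fresh copies
        off = copy * n_nodes
        new_edges += [(u + off, v + off) for (u, v) in edges]
        new_bottom += [b + off for b in bottom]
    new_edges += [(root, b) for b in new_bottom]          # wire bottoms to root
    return new_edges, 3 * n_nodes, new_bottom

edges, n_nodes, bottom = [], 1, [0]                       # iteration 0: 1 node
for _ in range(3):
    edges, n_nodes, bottom = iterate(edges, n_nodes, bottom)

# The root accumulates degree 2 + 4 + 8 + ... = 2*(2^n - 1), a deterministic hub.
root_degree = sum(1 for (u, v) in edges if u == 0 or v == 0)
```

Because the rule is fixed, every run produces the identical graph, in contrast to the stochastic models the abstract contrasts against, yet the degrees still span a broad (power-law-like) range.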
An exact solution for R2,eff in CPMG experiments in the case of two site chemical exchange
Baldwin, Andrew J.
2014-01-01
The Carr–Purcell–Meiboom–Gill (CPMG) experiment is widely used to quantitatively analyse the effects of chemical exchange on NMR spectra. In a CPMG experiment, the effective transverse relaxation rate, R2,eff, is typically measured as a function of the pulse frequency, νCPMG. Here, an exact expression for how R2,eff varies with νCPMG is derived for the commonly encountered scenario of two-site chemical exchange of in-phase magnetisation. This result, summarised in Appendix A, generalises a frequently used equation derived by Carver and Richards, published in 1972. The expression enables more rapid analysis of CPMG data by both speeding up calculation of R2,eff over numerical methods by a factor of ca. 130, and yields exact derivatives for use in data analysis. Moreover, the derivation provides insight into the physical principles behind the experiment. PMID:24852115
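The closed-form expression from the paper is not reproduced here; instead, the sketch below computes R2,eff numerically by propagating the Bloch-McConnell equations for two-site in-phase exchange through an idealized CPMG train, which is the kind of simulation such closed forms are typically validated against. All parameter values are hypothetical.

```python
import numpy as np

# Numerical CPMG dispersion for two-site in-phase chemical exchange, via
# Bloch-McConnell propagation (not the paper's closed-form result, which this
# kind of brute-force calculation is used to validate). Parameters are
# hypothetical; 180-degree pulses are treated as ideal (instantaneous).

def r2eff(nu_cpmg, pA, kex, dw, r2, t_rel=0.04):
    """Effective relaxation rate from a CPMG train of total length t_rel.

    pA  : population of the major state
    kex : exchange rate kAB + kBA (s^-1)
    dw  : chemical-shift difference between the two sites (rad/s)
    r2  : intrinsic transverse relaxation rate of both sites (s^-1)
    """
    pB = 1.0 - pA
    kab, kba = pB * kex, pA * kex
    # Free-precession Liouvillian acting on complex transverse magnetisation
    L = np.array([[-r2 - kab, kba],
                  [kab, -r2 - kba - 1j * dw]], dtype=complex)
    n_echo = max(1, int(round(nu_cpmg * t_rel)))       # echo units: tau-180-tau
    tau = t_rel / (2 * n_echo)
    w, V = np.linalg.eig(L)
    P = V @ np.diag(np.exp(w * tau)) @ np.linalg.inv(V)  # expm(L * tau)
    m = np.array([pA, pB], dtype=complex)
    for _ in range(n_echo):
        m = P @ m
        m = np.conj(m)        # ideal 180-degree pulse refocuses precession
        m = P @ m
    return -np.log(abs(m.sum())) / t_rel               # decay of total signal

# Dispersion: exchange broadening is refocused as the pulsing rate increases.
low = r2eff(nu_cpmg=50.0, pA=0.95, kex=2000.0, dw=800.0, r2=10.0)
high = r2eff(nu_cpmg=2000.0, pA=0.95, kex=2000.0, dw=800.0, r2=10.0)
```

The advantage of the exact expression derived in the paper is exactly what this sketch lacks: it avoids matrix propagation (reportedly ~130x faster) and yields analytic derivatives for fitting.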
Narsimhan, Karthik; Michaelis, Vladimir K; Mathies, Guinevere; Gunther, William R; Griffin, Robert G; Román-Leshkov, Yuriy
2015-02-11
The selective low temperature oxidation of methane is an attractive yet challenging pathway to convert abundant natural gas into value added chemicals. Copper-exchanged ZSM-5 and mordenite (MOR) zeolites have received attention due to their ability to oxidize methane into methanol using molecular oxygen. In this work, the conversion of methane into acetic acid is demonstrated using Cu-MOR by coupling oxidation with carbonylation reactions. The carbonylation reaction, known to occur predominantly in the 8-membered ring (8MR) pockets of MOR, is used as a site-specific probe to gain insight into important mechanistic differences existing between Cu-MOR and Cu-ZSM-5 during methane oxidation. For the tandem reaction sequence, Cu-MOR generated drastically higher amounts of acetic acid when compared to Cu-ZSM-5 (22 vs 4 μmol/g). Preferential titration with sodium showed a direct correlation between the number of acid sites in the 8MR pockets in MOR and acetic acid yield, indicating that methoxy species present in the MOR side pockets undergo carbonylation. Coupled spectroscopic and reactivity measurements were used to identify the genesis of the oxidation sites and to validate the migration of methoxy species from the oxidation site to the carbonylation site. Our results indicate that the Cu(II)-O-Cu(II) sites previously associated with methane oxidation in both Cu-MOR and Cu-ZSM-5 are oxidation active but carbonylation inactive. In turn, combined UV-vis and EPR spectroscopic studies showed that a novel Cu(2+) site is formed at Cu/Al <0.2 in MOR. These sites oxidize methane and promote the migration of the product to a Brønsted acid site in the 8MR to undergo carbonylation. PMID:25562431
Site selective syntheses of [(3)H]omeprazole using hydrogen isotope exchange chemistry.
Pollack, Scott R; Schenk, David J
2015-01-01
Omeprazole (Prilosec®) is a selective and irreversible proton pump inhibitor used to treat various medical conditions related to the production of excess stomach acids. It functions by suppressing secretion of those acids. Radiolabeled compounds are commonly employed in the drug discovery and development process to support efforts including library screening, target identification, receptor binding, assay development and validation, and safety assessment. Herein, we describe synthetic approaches to the controlled and selective labeling of omeprazole with tritium via hydrogen isotope exchange chemistry. The chemistry may also be used to prepare tritium labeled esomeprazole (Nexium®), the active pure (S)-enantiomer of omeprazole. PMID:26380956
Survivability of Deterministic Dynamical Systems
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-01-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states? We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures. PMID:27405955
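The survivability defined here lends itself to direct Monte Carlo estimation. The sketch below is my own illustration rather than the authors' code: it draws random initial conditions for a damped oscillator and counts the fraction of transients that never leave an assumed region of desirable states (|x| < 2).

```python
import random

def survives(x0, v0, t_max=20.0, dt=0.01, bound=2.0):
    """Integrate a damped oscillator x'' = -0.5*x' - x with Euler steps and
    report whether the transient stays inside the desirable region |x| < bound."""
    x, v = x0, v0
    for _ in range(int(t_max / dt)):
        a = -0.5 * v - x
        x += v * dt
        v += a * dt
        if abs(x) >= bound:
            return False
    return True

def survivability(n=1000, seed=1):
    """Fraction of uniformly drawn initial conditions whose transient survives."""
    rng = random.Random(seed)
    hits = sum(survives(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(n))
    return hits / n

print(survivability())
```

Because the initial-condition box extends beyond the desirable region, the estimate lies strictly between 0 and 1; a semi-analytic bound, as in the paper, would replace this sampling for linear systems.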
Comment on: Supervisory Asymmetric Deterministic Secure Quantum Communication
NASA Astrophysics Data System (ADS)
Kao, Shih-Hung; Tsai, Chia-Wei; Hwang, Tzonelih
2012-12-01
In 2010, Xiu et al. (Optics Communications 284:2065-2069, 2011) proposed several applications based on a new secure four-site distribution scheme using χ-type entangled states. This paper points out that one of these applications, namely, supervisory asymmetric deterministic secure quantum communication, is subject to an information leakage problem, in which the receiver can extract two bits of a three-bit secret message without the supervisor's permission. An enhanced protocol is proposed to resolve this problem.
Momentum, water vapor, and carbon dioxide exchange at a centrally located prairie site during FIFE
NASA Technical Reports Server (NTRS)
Verma, Shashi B.; Kim, Joon; Clement, Robert J.
1992-01-01
Eddy correlation measurements of momentum, water vapor, sensible heat, and CO2 fluxes were made at a centrally located plateau site in the FIFE study area from May to October 1987. Approximately 82 percent of the vegetation at the site was composed of several C4 grass species, with the remainder being C3 grasses, forbs, sedges, and woody plants. Precipitation was about normal during the study period, except for a three-week dry period in late July to early August that caused moisture stress conditions.
Are earthquakes deterministic or chaotic?
NASA Astrophysics Data System (ADS)
Rundle, John B.; Julian, Bruce R.; Turcotte, Donald L.
During the last decade, physicists and applied mathematicians have made substantial headway in understanding the dynamics of complex nonlinear systems. Progress has been possible due to the development of several new tools, including the renormalization group approach, phase portraits, and scaling methods (fractals). At the same time, mathematical geophysicists interested in earthquakes have begun to utilize these same concepts to generate models of faults and fractures. In order to bring these scientific communities together, it was decided to convene the workshop, Physics of Earthquake Faults: Deterministic or Chaotic?, held February 12-15 at the Asilomar conference center near Monterey, Calif. Thirty-six Earth scientists met with 15 physicists and applied mathematicians to discuss how recent advances in nonlinear systems might be applied to better understand earthquakes. Funding was provided by the Geodynamics Branch of the National Aeronautics and Space Administration, the National Science Foundation, and the Office of Basic Energy Sciences of the U.S. Department of Energy. Organizational and logistical support were provided by the U.S. Geological Survey.
Sharp, David W.
1980-01-01
In a coal gasification operation or similar conversion process carried out in the presence of an alkali metal-containing catalyst wherein solid particles containing alkali metal residues are produced, alkali metal constituents are recovered from the particles by contacting or washing them with an aqueous solution containing calcium or magnesium ions in an alkali metal recovery zone at a low temperature, preferably below about 249 °F. During the washing or leaching process, the calcium or magnesium ions displace alkali metal ions held by ion exchange sites in the particles, thereby liberating the ions and producing an aqueous effluent containing alkali metal constituents. The aqueous effluent from the alkali metal recovery zone is then recycled to the conversion process, where the alkali metal constituents serve as at least a portion of the alkali metal constituents which comprise the alkali metal-containing catalyst.
Beak, P.; Musick, T.J.; Chen, C.
1988-05-25
Reactions in which there is formal intramolecular transfer of an acidic deuterium to a site of halogen-lithium exchange could be interpreted to show that initial halogen-lithium exchange occurs faster than loss of the acidic deuterium. However, studies of the competition between halogen-metal exchange and deuterium loss for N-deuterio-N-alkyl-o-, -m-, and -p-halobenzamides are not consistent with that mechanism. They suggest an alternative in which initial loss of the acidic deuterium is followed by halogen-lithium exchange to give a dilithiated intermediate. Deuterium transfer to the site of halogen-lithium exchange then occurs by reaction of the dilithiated species intermolecularly with unreacted N-deuteriated amide. The halogen-lithium exchange is faster than complete mixing of the reactants and can occur either in an initially formed deprotonated complex or in a transient high local concentration of organolithium reagent. Evidence for both possibilities is provided. Two reactions from the literature in which halogen-lithium exchange appears to be faster than transfer of an acidic hydrogen have been reinvestigated and found to be interpretable in terms of similar sequences.
King, A.W.
1986-01-01
Ecological models of the seasonal exchange of carbon dioxide between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO2 concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. Two methods of extrapolation were tested. The first approach was a simple extrapolation that assumed relative within-biome homogeneity, and generated CO2 source functions that differed dramatically from published estimates of CO2 exchange. The differences were so great that the simple extrapolation was rejected as a means of incorporating site-specific models in a global CO2 source function. The second extrapolation explicitly incorporated within-biome variability in the abiotic variables that drive seasonal biosphere-atmosphere CO2 exchange. Simulated site-specific CO2 dynamics were treated as a function of multiple random variables. The predicted regional CO2 exchange is the computed expected value of simulated site-specific exchanges for that region times the area of the region. The test involved the regional extrapolation of a tundra and a coniferous forest carbon exchange model. Comparisons between the CO2 exchange estimated by extrapolation and published estimates of regional exchange for the latitude belt support the appropriateness of extrapolation by expected value.
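The second, expected-value extrapolation can be sketched as Monte Carlo averaging of a site model over the distribution of its abiotic drivers, then scaling by the region's area. The toy flux model, driver distributions, and area below are invented placeholders, not values from the study.

```python
import random

def site_co2_flux(temperature_c, soil_moisture):
    """Hypothetical site-specific model: seasonal CO2 exchange per unit area
    as a simple response to two abiotic drivers (placeholder functional form)."""
    return 0.8 * temperature_c * soil_moisture - 5.0

def regional_exchange(area_m2, n=100_000, seed=7):
    """Regional exchange = E[site flux over within-biome driver variability] * area."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        t = rng.gauss(10.0, 3.0)                       # temperature variability
        m = min(max(rng.gauss(0.6, 0.15), 0.0), 1.0)   # soil moisture in [0, 1]
        total += site_co2_flux(t, m)
    return (total / n) * area_m2

print(regional_exchange(1e12))  # e.g. a ~10^6 km^2 region
```

Treating the site output as a random variable in this way captures within-biome heterogeneity that a single "representative site" extrapolation would miss, which is the point of the paper's second method.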
Zherebker, Alexander; Kostyukevich, Yury; Kononikhin, Alexey; Roznyatovsky, Vitaliy A; Popov, Igor; Grishin, Yuri K; Perminova, Irina V; Nikolaev, Eugene
2016-04-21
Hydrogen/deuterium exchange coupled with high-resolution mass spectrometry has become a powerful analytical approach for structural investigations of complex organic matrices. Here we report the feasibility of the site-specific H/D exchange of non-labile hydrogens directly in the electrospray ionization (ESI) source, which was facilitated by an increase in the desolvation temperature from 200 °C up to 400 °C. We have found that the exchanges at non-labile sites were observed only for the model compounds capable of keto-enol tautomeric transformations (e.g., 2,3-, 2,4-dihydroxybenzoic acids, gallic acid, DOPA), and only when water was used as a solvent. We hypothesized that the detected additional exchanges were induced by the presence of hydroxyls in the sprayed water droplets generated in the negative ESI mode. It was indicative of the exchange reactions taking place in the sprayed droplets rather than in the gas phase. To support this hypothesis, the H/D exchange experiments were run in deuterated water under base-catalyzed conditions for three model compounds, which showed the most intensive exchanges in the MS experiments: DOPA, 2,4-DHB, and 5-acetylsalicylic acid. (2)H NMR spectroscopy has confirmed keto-enolic transformations of the model compounds leading to the specific labeling of the corresponding non-labile sites. We believe that the proposed technique will be useful for structural investigations of natural complex mixtures (e.g. proteins, humic substances) using site-specific H/D exchange. PMID:27002310
Köhler, S; Jungkunst, H F; Gutzler, C; Herrera, R; Gerold, G
2012-09-01
In the light of global change, the necessity to monitor atmospheric depositions that have relevant effects on ecosystems is ever increasing, particularly for tropical sites. For this study, atmospheric ionic depositions were measured on tropical Central Sulawesi at remote sites with both a conventional bulk water collector system (BWS collector) and a passive ion exchange resin collector system (IER collector). The principle of the IER collector, fixing all ionic depositions, i.e. anions and cations, offers certain advantages with respect to (1) post-deposition transformation processes, (2) low ionic concentrations and (3) low rainfall and associated particulate inputs, e.g. dust or sand. The ionic concentrations to be measured with BWS collectors may easily fall below detection limits under low deposition conditions, which are common for tropical sites of low land use intensity. Additionally, BWS collections are not as independent of the amount of rain fallen as are IER collections. For this study, the significant differences between both collectors found for nearly all measured elements were partly correlated with the rainfall pattern, i.e. for calcium, magnesium, potassium and sodium. However, the significant differences were, in most cases, not highly relevant. More relevant differences between the systems were found for aluminium and nitrate (434-484%). Almost five times higher values for nitrate clarified the advantage of the IER system, particularly for the low deposition rates that are a particularity of atmospheric ionic deposition at tropical sites of extensive land use. The monthly resolution of the IER data offers new insights into the temporal distribution of annual ionic depositions. Here, it did not follow the tropical rain pattern of a drier season within generally wet conditions. PMID:22865942
Vo, Uybach; Vajpai, Navratna; Flavell, Liz; Bobby, Romel; Breeze, Alexander L.; Embrey, Kevin J.; Golovanov, Alexander P.
2016-01-01
The activity of Ras is controlled by the interconversion between GTP- and GDP-bound forms partly regulated by the binding of the guanine nucleotide exchange factor Son of Sevenless (Sos). The details of Sos binding, leading to nucleotide exchange and subsequent dissociation of the complex, are not completely understood. Here, we used uniformly 15N-labeled Ras as well as [13C]methyl-Met,Ile-labeled Sos for observing site-specific details of Ras-Sos interactions in solution. Binding of various forms of Ras (loaded with GDP and mimics of GTP or nucleotide-free) at the allosteric and catalytic sites of Sos was comprehensively characterized by monitoring signal perturbations in the NMR spectra. The overall affinity of binding between these protein variants as well as their selected functional mutants was also investigated using intrinsic fluorescence. The data support a positive feedback activation of Sos by Ras·GTP with Ras·GTP binding as a substrate for the catalytic site of activated Sos more weakly than Ras·GDP, suggesting that Sos should actively promote unidirectional GDP → GTP exchange on Ras in preference of passive homonucleotide exchange. Ras·GDP weakly binds to the catalytic but not to the allosteric site of Sos. This confirms that Ras·GDP cannot properly activate Sos at the allosteric site. The novel site-specific assay described may be useful for design of drugs aimed at perturbing Ras-Sos interactions. PMID:26565026
Comito, Robert J; Fritzsching, Keith J; Sundell, Benjamin J; Schmidt-Rohr, Klaus; Dincă, Mircea
2016-08-17
The manufacture of advanced polyolefins has been critically enabled by the development of single-site heterogeneous catalysts. Metal-organic frameworks (MOFs) show great potential as heterogeneous catalysts that may be designed and tuned on the molecular level. In this work, exchange of zinc ions in Zn5Cl4(BTDD)3, H2BTDD = bis(1H-1,2,3-triazolo[4,5-b],[4',5'-i])dibenzo[1,4]dioxin) (MFU-4l) with reactive metals serves to establish a general platform for selective olefin polymerization in a high surface area solid promising for industrial catalysis. Characterization of polyethylene produced by these materials demonstrates both molecular and morphological control. Notably, reactivity approaches single-site catalysis, as evidenced by low polydispersity indices, and good molecular weight control. We further show that these new catalysts copolymerize ethylene and propylene. Uniform growth of the polymer around the catalyst particles provides a mechanism for controlling the polymer morphology, a relevant metric for continuous flow processes. PMID:27443860
Momentum, water vapor, and carbon dioxide exchange at a centrally located prairie site during FIFE
NASA Astrophysics Data System (ADS)
Verma, Shashi B.; Kim, Joon; Clement, Robert J.
1992-11-01
Eddy correlation measurements were made of fluxes of momentum, sensible heat, water vapor, and carbon dioxide at a centrally located plateau site in the FIFE study area during the period from May to October 1987. About 82% of the vegetation at the site was composed of several C4 grass species (big bluestem, Indian grass, switchgrass, tall dropseed, little bluestem, and blue grama), with the remainder being C3 grasses, sedges, forbs, and woody plants. The prairie was burned in mid-April and was not grazed. Precipitation during the study period was about normal, except for a 3-week dry period in late July to early August, which caused moisture stress conditions. The drag coefficient (Cd = u*²/ū², where u* is the friction velocity and ū is the mean wind speed at 2.25 m above the ground) of the prairie vegetation ranged from 0.0087 to 0.0099. The average d/zc and z0/zc (where d is the zero plane displacement, z0 is the roughness parameter, and zc is the canopy height) were estimated to be about 0.71 and 0.028, respectively. Information was developed on the aerodynamic conductance (ga) in terms of mean wind speed (measured at a reference height) for different periods in the growing season. During the early and peak growth stages, with favorable soil moisture, the daily evapotranspiration (ET) rates ranged from 3.9 to 6.6 mm d⁻¹. The ET rate during the dry period was between 2.9 and 3.8 mm d⁻¹. The value of the Priestley-Taylor coefficient (α), calculated as the ratio of the measured ET to the equilibrium ET, averaged around 1.26 when the canopy stomatal resistance (rc) was less than 100 s m⁻¹. When rc increased above 100 s m⁻¹, α decreased rapidly. The atmospheric CO2 flux data (eddy correlation) were used, in conjunction with estimated soil CO2 flux, to evaluate canopy photosynthesis (Pc). The dependence of Pc on photosynthetically active radiation (KPAR), vapor pressure deficit, and soil moisture was examined. Under nonlimiting soil moisture conditions, Pc was
Ligand binding and proton exchange dynamics in site-specific mutants of human myoglobin
Lambright, D.G.
1992-01-01
Site-specific mutagenesis was used to make substitutions of four residues in the distal heme pocket of human myoglobin: Val68, His64, Lys45, and Asp60. Strongly diffracting crystals of the conservative mutation K45R in the met aquo form were grown in the trigonal space group P3₂21 and the X-ray crystal structure determined at 1.6 Å resolution. The overall structure is similar to that of sperm whale met aquo myoglobin. Several of the mutant proteins were characterized by 2-D NMR spectroscopy. The NMR data suggest the structural changes are localized to the region of the mutation. The dynamics of ligand binding to myoglobin mutants were studied by transient absorption spectroscopy following photolysis of the CO complexes. Transient absorption kinetics and spectra on the ns to ms timescale were measured in aqueous solution from 280 K to 310 K and in 75% glycerol:water from 250 K to 310 K. Two significant basis spectra were obtained from singular value decomposition of the matrix of time dependent spectra. The information was used to obtain approximations for the extent of ligand rebinding and the kinetics of conformational relaxation. Except for K45R, substitutions at Lys45 or Asp60 produce changes in the kinetics for ligand rebinding. Replacement of Lys45 with Arg increases the rate of ligand rebinding from the protein matrix by a factor of 2, but does not alter the rates for ligand escape or entry into the protein or the dynamics of the conformational relaxation. Substitutions at His64 and Val68 influence the kinetics of ligand rebinding and the dynamics of conformational relaxation. The results do not support the hypothesis that ligand migration between the heme pocket and solvent is determined solely by fluctuations of Arg45 and His64 between open and closed conformations of the heme pocket but can be rationalized if ligand diffusion through the protein matrix involves multiple competing pathways.
NASA Astrophysics Data System (ADS)
Bhatia, G.; Bubier, J. L.
2001-05-01
Peatlands play a significant role in the global carbon cycle sequestering approximately one-third of the global pool of soil carbon. An increased understanding of the carbon cycle in these critical ecosystems is imperative to further our comprehension of the role they play in future global warming. Net ecosystem exchange (NEE) of carbon dioxide was measured at Mer Bleue Bog in Ottawa, Ontario, Canada from May through August 2000. Dominant species at Mer Bleue included Ledum groenlandicum, Chamaedaphne calyculata, Eriophorum vaginatum, Carex oligosperma and Sphagnum species. In order to understand the controls and variability of NEE a range of sites were considered, including a beaver pond, a bog and a poor fen. This study aimed at comparing overall seasonal patterns and ranges of NEE, photosynthesis and respiration and understanding the relationships with photosynthetically active radiation (PAR), water table, temperature, species composition and plant biomass. A clear Lexan and Teflon film climate-controlled chamber was used to measure the rate of respiration and photosynthesis on a bi-weekly basis in all sites. The chamber was attached to a LI-COR 6200 portable photosynthesis system, which included a LI-6250 infrared gas analyzer, quantum sensor and data logger. Shrouds of different mesh sizes were used to regulate the amount of light entering the chamber in order to measure NEE at a wide range of PAR. An opaque shroud was used to measure ecosystem respiration. Photosynthesis was calculated as the difference between NEE and respiration. Seasonal patterns showed a peak season from June 23rd through July 15th where higher PAR and temperature levels led to increased photosynthesis and respiration measurements. Although NEE rates at the sites varied, during peak season NEE ranged in increasing order: bog hummock and hollow (6 to -6.5 μmol CO2 m⁻² s⁻¹) < beaver pond (6 to -7 μmol CO2 m⁻² s⁻¹) < poor fen (10 to -8 μmol CO2 m⁻² s⁻¹).
Risk-based and deterministic regulation
Fischer, L.E.; Brown, N.W.
1995-07-01
Both risk-based and deterministic methods are used for regulating the nuclear industry to protect the public safety and health from undue risk. The deterministic method is one where performance standards are specified for each kind of nuclear system or facility. The deterministic performance standards address normal operations and design basis events, which include transient and accident conditions. The risk-based method uses probabilistic risk assessment methods to supplement the deterministic one by (1) addressing all possible events (including those beyond the design basis events), (2) using a systematic, logical process for identifying and evaluating accidents, and (3) considering alternative means to reduce accident frequency and/or consequences. Although both deterministic and risk-based methods have been successfully applied, there is a need for a better understanding of their applications and supportive roles. This paper describes the relationship between the two methods and how they are used to develop and assess regulations in the nuclear industry. Preliminary guidance is suggested for determining the need for using risk-based methods to supplement deterministic ones. However, it is recommended that more detailed guidance and criteria be developed for this purpose.
Stucker, Valerie; Ranville, James; Newman, Mark; Peacock, Aaron; Cho, Jaehyun; Hatfield, Kirk
2011-10-15
Laboratory tests and a field validation experiment were performed to evaluate anion exchange resins for uranium sorption and desorption in order to develop a uranium passive flux meter (PFM). The mass of uranium sorbed to the resin and corresponding masses of alcohol tracers eluted over the duration of the groundwater installation are then used to determine the groundwater and uranium contaminant fluxes. Laboratory based batch experiments were performed using Purolite A500, Dowex 21K and 21K XLT, Lewatit S6328 A resins and silver impregnated activated carbon to examine uranium sorption and extraction for each material. The Dowex resins had the highest uranium sorption, followed by Lewatit, Purolite and the activated carbon. Recoveries from all ion exchange resins were in the range of 94-99% for aqueous uranium in the environmentally relevant concentration range studied (0.01-200 ppb). Due to the lower price and well-characterized tracer capacity, Lewatit S6328 A was used for field-testing of PFMs at the DOE UMTRA site in Rifle, CO. The effect on the flux measurements of extractant (nitric acid)/resin ratio, and uranium loading were investigated. Higher cumulative uranium fluxes (as seen with concentrations > 1 μg U/g resin) yielded more homogeneous resin samples versus lower cumulative fluxes (< 1 μg U/g resin), which caused the PFM to have areas of localized concentration of uranium. Resin homogenization and larger volume extractions yield reproducible results for all levels of uranium fluxes. Although PFM design can be improved to measure flux and groundwater flow direction, the current methodology can be applied to uranium transport studies. PMID:21798572
Recent Achievements of the Neo-Deterministic Seismic Hazard Assessment in the CEI Region
Panza, G. F.; Kouteva, M.; Vaccari, F.; Peresan, A.; Romanelli, F.; Cioflan, C. O.; Radulian, M.; Marmureanu, G.; Paskaleva, I.; Gribovszki, K.; Varga, P.; Herak, M.; Zaichenco, A.; Zivcic, M.
2008-07-08
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales: regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contribution of the earthquake source and seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study some examples focused on the CEI region concerning both regional seismic hazard assessment and seismic microzonation of the selected metropolitan areas are shown.
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
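Deterministic deconvolution of this kind amounts to dividing the trace spectrum by the spectrum of the known source wavelet, with a water-level floor to stabilise bands where the wavelet carries little energy. The sketch below uses a synthetic ringing wavelet and invented parameters, not the study's air-acquired wavelet.

```python
import numpy as np

def deterministic_deconv(trace, wavelet, water_level=0.001):
    """Frequency-domain deterministic deconvolution: divide the trace
    spectrum by the known source-wavelet spectrum, with a water-level
    floor to stabilise bands where the wavelet has little energy."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    power = np.abs(W) ** 2
    power = np.maximum(power, water_level * power.max())
    return np.fft.irfft(T * np.conj(W) / power, n)

# Synthetic demo: a ringing causal wavelet smears two nearby reflectors;
# deconvolving with the known wavelet compresses them back toward spikes.
n = 512
k = np.arange(100)
wavelet = np.exp(-0.05 * k) * np.sin(0.4 * k)     # assumed ringing wavelet
reflectivity = np.zeros(n)
reflectivity[100], reflectivity[140] = 1.0, 0.7   # two closely spaced reflectors
trace = np.convolve(reflectivity, wavelet)[:n]
recovered = deterministic_deconv(trace, wavelet)
print(int(np.argmax(recovered)), round(float(recovered[140]), 2))
```

Because the operator is the measured wavelet itself rather than a statistical estimate, the division is exact wherever the wavelet has usable energy; only the water-level choice is a tuning parameter.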
Kamboj, Sunita; Cheng, Jing-Jy; Yu, Charley
2005-05-01
The dose assessments for sites containing residual radioactivity usually involve the use of computer models that employ input parameters describing the physical conditions of the contaminated and surrounding media and the living and consumption patterns of the receptors in analyzing potential doses to the receptors. The precision of the dose results depends on the precision of the input parameter values. The identification of sensitive parameters that have great influence on the dose results would help set priorities in research and information gathering for parameter values so that a more precise dose assessment can be conducted. Two methods of identifying site-specific sensitive parameters, deterministic and probabilistic, were compared by applying them to the RESRAD computer code for analyzing radiation exposure for a residential farmer scenario. The deterministic method has difficulty in evaluating the effect of simultaneous changes in a large number of input parameters on the model output results. The probabilistic method easily identified the most sensitive parameters, but the sensitivity measure of other parameters was obscured. The choice of sensitivity analysis method would depend on the availability of site-specific data. Generally speaking, the deterministic method would identify the same set of sensitive parameters as the probabilistic method when 1) the baseline values used in the deterministic method were selected near the mean or median value of each parameter and 2) the selected range of parameter values used in the deterministic method was wide enough to cover the 5th to 95th percentile values from the distribution of that parameter. PMID:15824576
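The contrast between the two methods can be illustrated on a toy dose model (a placeholder, not RESRAD, with invented parameter names and ranges): the deterministic approach varies one parameter at a time over its range with the others at midpoint, while the probabilistic approach samples all parameters simultaneously and ranks them by correlation with the output.

```python
import random

def dose(params):
    """Toy dose model: dose driven strongly by the soil-to-plant transfer
    factor and weakly by the other parameters (placeholder form)."""
    return 10.0 * params["transfer"] + 0.5 * params["density"] + 0.1 * params["depth"]

RANGES = {"transfer": (0.1, 1.0), "density": (1.2, 1.8), "depth": (0.1, 2.0)}

def deterministic_sensitivity():
    """One-at-a-time: swing each parameter over its range, others at midpoint."""
    mid = {k: (lo + hi) / 2 for k, (lo, hi) in RANGES.items()}
    return {k: abs(dose({**mid, k: hi}) - dose({**mid, k: lo}))
            for k, (lo, hi) in RANGES.items()}

def probabilistic_sensitivity(n=5000, seed=3):
    """Sample all parameters at once; rank inputs by |correlation| with dose."""
    rng = random.Random(seed)
    samples = [{k: rng.uniform(lo, hi) for k, (lo, hi) in RANGES.items()}
               for _ in range(n)]
    doses = [dose(s) for s in samples]
    dbar = sum(doses) / n
    out = {}
    for k in RANGES:
        xs = [s[k] for s in samples]
        xbar = sum(xs) / n
        cov = sum((x - xbar) * (d - dbar) for x, d in zip(xs, doses))
        varx = sum((x - xbar) ** 2 for x in xs)
        vard = sum((d - dbar) ** 2 for d in doses)
        out[k] = abs(cov / (varx * vard) ** 0.5)
    return out

print(deterministic_sensitivity())
print(probabilistic_sensitivity())
```

Both rankings put the dominant parameter first here; the paper's point is that with many interacting parameters the one-at-a-time swing obscures joint effects, while the sampling approach captures them at the cost of needing distributions for every input.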
Stochastic search with Poisson and deterministic resetting
NASA Astrophysics Data System (ADS)
Bhat, Uttam; De Bacco, Caterina; Redner, S.
2016-08-01
We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers that all start at x_0 seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T for 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
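As a quick numerical illustration of the setup described in this abstract, the sketch below simulates a single one-dimensional diffusing searcher with Poisson resetting and estimates the mean search time by Monte Carlo. All parameter values (x_0 = 1, r = 1, D = 0.5, the time step, and the number of trials) are illustrative choices, not taken from the paper.

```python
import random

def search_time(x0=1.0, r=1.0, dt=0.01, D=0.5, seed=None, max_steps=10**6):
    """One diffusing searcher started at x0 seeking a target at the origin,
    reset to x0 with rate r (Poisson resetting), discretized in time.
    Returns the first-passage time, or None if max_steps is exceeded."""
    rng = random.Random(seed)
    sigma = (2 * D * dt) ** 0.5        # diffusive step size
    x, t = x0, 0.0
    for _ in range(max_steps):
        if rng.random() < r * dt:      # Poisson reset event
            x = x0
        x += rng.gauss(0.0, sigma)     # Brownian increment
        t += dt
        if x <= 0.0:                   # target at the origin reached
            return t
    return None

# Crude Monte Carlo estimate of the mean search time
times = [search_time(seed=i) for i in range(200)]
times = [t for t in times if t is not None]
mean_T = sum(times) / len(times)
```

Sweeping r in such a simulation reproduces the qualitative picture above: a non-zero reset rate minimizes the mean search time for a single searcher in one dimension.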
Optimal partial deterministic quantum teleportation of qubits
Mista, Ladislav Jr.; Filip, Radim
2005-02-01
We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme realizing deterministically an optimal 1 → 2 asymmetric universal cloning, where one imperfect copy of the input state emerges at the sender's station while the other copy emerges at the receiver's possibly distant station. The optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on the partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that the measurement is an optimal two-qubit operation in the sense of the trade-off between the state disturbance and the information gain.
Deterministic evolutionary game dynamics in finite populations.
Altrock, Philipp M; Traulsen, Arne
2009-07-01
Evolutionary game dynamics describes the spreading of successful strategies in a population of reproducing individuals. Typically, the microscopic definition of strategy spreading is stochastic such that the dynamics becomes deterministic only in infinitely large populations. Here, we present a microscopic birth-death process that has a fully deterministic strong selection limit in well-mixed populations of any size. Additionally, under weak selection, from this process the frequency-dependent Moran process is recovered. This makes it a natural extension of the usual evolutionary dynamics under weak selection. We find simple expressions for the fixation probabilities and average fixation times of the process in evolutionary games with two players and two strategies. For cyclic games with two players and three strategies, we show that the resulting deterministic dynamics crucially depends on the initial condition in a nontrivial way. PMID:19658731
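The fixation probabilities mentioned in this abstract follow from the standard birth-death ratio formula for 2x2 games; the sketch below applies it with an exponential payoff-to-fitness mapping. The function name and parameter choices are ours, intended only to illustrate the standard calculation, not the paper's specific process.

```python
import math

def fixation_probability(N, payoff, w=1.0):
    """Fixation probability of a single A-mutant among N-1 B-players in a
    birth-death process, via the standard formula
        rho = 1 / (1 + sum_{k=1}^{N-1} prod_{j=1}^{k} T^-(j)/T^+(j)).
    With an exponential fitness mapping f = exp(w * pi), the ratio
    T^-(j)/T^+(j) reduces to exp(-w * (pi_A(j) - pi_B(j)))."""
    (a, b), (c, d) = payoff
    total = 1.0
    running = 1.0
    for j in range(1, N):
        # average payoffs with j A-players, excluding self-interaction
        pi_A = (a * (j - 1) + b * (N - j)) / (N - 1)
        pi_B = (c * j + d * (N - j - 1)) / (N - 1)
        running *= math.exp(-w * (pi_A - pi_B))
        total += running
    return 1.0 / total

# Sanity check: in a neutral game all payoffs are equal and rho = 1/N.
rho_neutral = fixation_probability(10, [[1, 1], [1, 1]])
```

For a dominant strategy (e.g. payoff [[2, 2], [1, 1]]) the same function returns a fixation probability above the neutral value 1/N, as expected.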
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoffs and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-served (FCFS) scheme. In particular, the sequence from a deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison covers both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than FCFS in both system performance and predictability.
Quantum secure direct communication and deterministic secure quantum communication
NASA Astrophysics Data System (ADS)
Long, Gui-Lu; Deng, Fu-Guo; Wang, Chuan; Li, Xi-Han; Wen, Kai; Wang, Wan-Ying
2007-07-01
In this review article, we survey the recent development of quantum secure direct communication (QSDC) and deterministic secure quantum communication (DSQC), both of which are used to transmit secret messages, including the criteria for QSDC, some interesting QSDC protocols, the DSQC protocols and QSDC network, etc. The difference between these two branches of quantum communication is that DSQC requires the two parties to exchange at least one bit of classical information for reading out the message in each qubit, whereas QSDC does not. They are attractive because they are deterministic; in particular, the QSDC protocol is fully quantum mechanical. With sophisticated quantum technology in the future, QSDC may become more and more popular. For ensuring the safety of QSDC with single photons and quantum information sharing of single qubits in a noisy channel, a quantum privacy amplification protocol has been proposed. It involves very simple CHC operations and reduces the information leakage to a negligibly small level. Moreover, with one-party quantum error correction, a relation has been established between classical linear codes and quantum one-party codes, making it convenient to transfer many good classical error correction codes to the quantum world. The one-party quantum error correction codes are especially designed for quantum dense coding and related QSDC protocols based on dense coding.
Deterministic mediated superdense coding with linear optics
NASA Astrophysics Data System (ADS)
Pavičić, Mladen
2016-02-01
We present a scheme of deterministic mediated superdense coding of entangled photon states employing only linear-optics elements. Ideally, we are able to deterministically transfer four messages by manipulating just one of the photons. Two degrees of freedom, polarization and spatial, are used. A new kind of source of heralded down-converted photon pairs conditioned on detection of another pair with an efficiency of 92% is proposed. Realistic probabilistic experimental verification of the scheme with such a source of preselected pairs is feasible with today's technology. We obtain the channel capacity of 1.78 bits for a full-fledged implementation.
Deterministic aggregation kinetics of superparamagnetic colloidal particles
NASA Astrophysics Data System (ADS)
Reynolds, Colin P.; Klop, Kira E.; Lavergne, François A.; Morrow, Sarah M.; Aarts, Dirk G. A. L.; Dullens, Roel P. A.
2015-12-01
We study the irreversible aggregation kinetics of superparamagnetic colloidal particles in two dimensions in the presence of an in-plane magnetic field at low packing fractions. Optical microscopy and image analysis techniques are used to follow the aggregation process and in particular study the packing fraction and field dependence of the mean cluster size. We compare these to the theoretically predicted scalings for diffusion limited and deterministic aggregation. It is shown that the aggregation kinetics for our experimental system is consistent with a deterministic mechanism, which thus shows that the contribution of diffusion is negligible.
Nine challenges for deterministic epidemic models
Roberts, Mick; Andreasen, Viggo; Lloyd, Alun; Pellis, Lorenzo
2016-01-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where super infection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern the uses and limitations of deterministic models as approximations to stochastic systems. PMID:25843383
A deterministic discrete ordinates transport proxy application
Energy Science and Technology Software Center (ESTSC)
2014-06-03
Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.
Statistical analysis of a deterministic stochastic orbit
Kaufman, Allan N.; Abarbanel, Henry D.I.; Grebogi, Celso
1980-05-01
If the solution of a deterministic equation is stochastic (in the sense of orbital instability), it can be subjected to a statistical analysis. This is illustrated for a coded orbit of the Chirikov mapping. Statistical dependence and the Markov assumption are tested. The Kolmogorov-Sinai entropy is related to the probability distribution for the orbit.
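For readers unfamiliar with the Chirikov mapping referenced in this abstract, the minimal sketch below (our own parameter choices, not the paper's coded orbit) iterates the standard map and performs the simplest statistical check on a strongly chaotic orbit: for large kick strength K the momentum values spread roughly uniformly over [0, 2π).

```python
import math

def chirikov_orbit(theta0, p0, K, n):
    """Iterate the Chirikov standard map
        p_{n+1}     = p_n + K * sin(theta_n)      (mod 2*pi)
        theta_{n+1} = theta_n + p_{n+1}           (mod 2*pi)
    and return the orbit as a list of (theta, p) pairs."""
    theta, p = theta0, p0
    orbit = []
    for _ in range(n):
        p = (p + K * math.sin(theta)) % (2 * math.pi)
        theta = (theta + p) % (2 * math.pi)
        orbit.append((theta, p))
    return orbit

# For K = 8 the chaotic sea covers essentially the whole phase space,
# so the time-averaged momentum of one orbit is close to pi.
orbit = chirikov_orbit(1.0, 0.3, K=8.0, n=20000)
mean_p = sum(p for _, p in orbit) / len(orbit)
```

Histogramming the orbit and testing statistical dependence between successive iterates is the kind of analysis the abstract describes.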
NASA Astrophysics Data System (ADS)
Deka, Ramesh Ch.; Kinkar Roy, Ram; Hirao, Kimihiko
2004-05-01
Lewis acidity of alkali cation-exchanged zeolite is studied using local reactivity descriptors based on the hard-soft acid-base (HSAB) concept. The local softness for nucleophilic attack (s_x^+), the local softness for electrophilic attack (s_x^-), and their ratio, called 'relative electrophilicity' (s_x^+/s_x^-), are calculated for the exchanged cations, and the Lewis acidity of the cations is found to decrease in the order Li+ > Na+ > K+ > Rb+ > Cs+. The calculated blue shift of the CO vibrational frequency (Δν) and the interaction energy of the CO molecule with alkali cation-exchanged zeolite clusters vary linearly with the s_x^+/s_x^- values.
McKinley, James P.; Zachara, John M.; Smith, Steven C.; Liu, Chongxuan
2007-01-15
Nuclear waste that bore 90Sr2+ was accidentally leaked into the vadose zone at the Hanford site, and was immobilized at relatively shallow depths in sediments containing little apparent clay or silt-sized components. Sr2+, 90Sr2+, Mg2+, and Ca2+ were desorbed and the total inorganic carbon concentration was monitored during the equilibration of this sediment with varying concentrations of Na+ and Ca2+. A cation exchange model previously developed for similar sediments was applied to these results as a predictor of final solution compositions. The model included binary exchange reactions for the four operant cations and an equilibrium dissolution/precipitation reaction for calcite. The model successfully predicted the desorption data. The contaminated sediment was also examined using digital autoradiography, a sensitive tool for imaging the distribution of radioactivity. The exchanger phase containing 90Sr was found to consist of smectite formed from weathering of mesostasis glass in basaltic lithic fragments. These clasts are a significant component of Hanford formation sands. The relatively small but significant cation exchange capacity of these sediments was thus a consequence of reaction with physically sequestered clays in sediment that contained essentially no fine-grained material. The nature of this exchange component explained the relatively slow (scale of days) evolution of desorption solutions. The experimental and model results indicated that there is little risk of migration of 90Sr2+ to the water table.
Schwalm, Christopher R.; Williams, Christopher A.; Schaefer, Kevin; Anderson, Ryan; Arain, A.; Baker, Ian; Lokupitiya, Erandathie; Barr, Alan; Black, T. A.; Gu, Lianhong; Riciutto, Dan M.
2010-12-01
Our current understanding of terrestrial carbon processes is represented in various models used to integrate and scale measurements of CO2 exchange from remote sensing and other spatiotemporal data. Yet assessments are rarely conducted to determine how well models simulate carbon processes across vegetation types and environmental conditions. Using standardized data from the North American Carbon Program we compare observed and simulated monthly CO2 exchange from 44 eddy covariance flux towers in North America and 22 terrestrial biosphere models. The analysis period spans 220 site-years, 10 biomes, and includes two large-scale drought events, providing a natural experiment to evaluate model skill as a function of drought and seasonality. We evaluate models' ability to simulate the seasonal cycle of CO2 exchange using multiple model skill metrics and analyze links between model characteristics, site history, and model skill. Overall model performance was poor; the difference between observations and simulations was 10 times observational uncertainty, with forested ecosystems better predicted than nonforested. Model-data agreement was highest in summer and in temperate evergreen forests. In contrast, model performance declined in spring and fall, especially in ecosystems with large deciduous components, and in dry periods during the growing season. Models used across multiple biomes and sites, the mean model ensemble, and a model using assimilated parameter values showed high consistency with observations. Models with the highest skill across all biomes all used prescribed canopy phenology, calculated NEE as the difference between GPP and ecosystem respiration, and did not use a daily time step.
Deterministic Analysis and Upscaling of Bromide Transport in a Heterogeneous Vadose Zone
Technology Transfer Automated Retrieval System (TEKTRAN)
Conservative solute transport experiments were conducted at a field plot and on an undisturbed soil core from the same site. The hydraulic and solute transport properties were extensively characterized so that the data could be analyzed from a deterministic perspective. To investigate the influence o...
Burba, P; Van den Bergh, J; Klockow, D
2001-11-01
Humic-rich hydrocolloids and their metal loading in selected German bog-waters have been characterized by a novel on-site approach. By use of an on-line multistage ultrafiltration (MST-UF) unit equipped with conventional polyethersulfone (PES)-based flat membranes (nominal cut-off 0.45, 0.22, and 0.1 microm, or 100, 50, 10, 5, 3 kDa) the hydrocolloids could be fractionated on-site in both sub-particulate and macromolecular size ranges. Characterization (dissolved organic carbon (DOC), metals) of the colloid fractions obtained this way was performed off-site by use of conventional instrumental methods (carbon analyzer, AAS, ICP-OES, and TXRF (total reflection X-ray fluorescence)). Major DOC fractions of the hydrocolloids studied were found to be in the size range <5 kDa. The assessed metals (Al, Cu, Fe, Mn, Pb, and Zn) were, however, predominantly enriched in the macromolecular and sub-particulate range, depending on the metal and the sample, respectively. In addition, metal species bound to these hydrocolloids were kinetically characterized on-site by use of competitive ligand (EDTA (ethylenediaminetetraacetate)) and metal (Cu(II)) exchange; the EDTA complexes formed and the metal ions exchanged were separated by means of a small time-controlled tangential-flow UF unit (cut-off 1 kDa). Bound metal fractions, in particular Al and Fe, reacted only slowly (500 to 1000 min) with EDTA; the conditional availability was 60-99%, depending on the hydrocolloid. In contrast, the Cu(II) exchange of colloid-bound metal species approached equilibrium within 5-10 min, with characteristic exchange constants, Kex, of the order of 0.01 to 90 for the metals (Fe
Are earthquakes an example of deterministic chaos?
NASA Technical Reports Server (NTRS)
Huang, Jie; Turcotte, Donald L.
1990-01-01
A simple mass-spring model is used to systematically examine the dynamical behavior introduced by fault zone heterogeneities. The model consists of two sliding blocks coupled to each other and to a constant velocity driver by elastic springs. The state of this system can be characterized by the positions of the two blocks relative to the driver. A simple static/dynamic friction law is used. When the system is symmetric, cyclic behavior is observed. For an asymmetric system, where the frictional forces for the two blocks are not equal, the solutions exhibit deterministic chaos. Chaotic windows occur repeatedly between regions of limit cycles on bifurcation diagrams. The model behavior is similar to that of the one-dimensional logistic map. The results provide substantial evidence that earthquakes are an example of deterministic chaos.
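The logistic-map analogy drawn in this abstract can be made concrete: the standard quantitative diagnostic of deterministic chaos is a positive Lyapunov exponent. The sketch below (our own illustration, not the paper's two-block model) estimates the exponent for the one-dimensional logistic map, whose value at r = 4 is known analytically to be ln 2.

```python
import math

def lyapunov_logistic(r, x0=0.2, n=100000, discard=1000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1-2x)| along the orbit. A positive
    exponent signals deterministic chaos (sensitivity to initial data)."""
    x = x0
    for _ in range(discard):                 # let transients die out
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        # the tiny additive constant guards log(0) if the orbit hits x = 1/2
        total += math.log(abs(r * (1 - 2 * x)) + 1e-300)
    return total / n

lam = lyapunov_logistic(4.0)   # analytic value at r = 4 is ln 2
```

In the periodic-window regime (e.g. r = 3.2, a stable 2-cycle) the same estimator returns a negative exponent, mirroring the limit-cycle windows seen in the slider-block bifurcation diagrams.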
Deterministic dynamics in the minority game
NASA Astrophysics Data System (ADS)
Jefferies, P.; Hart, M. L.; Johnson, N. F.
2002-01-01
The minority game (MG) behaves as a stochastically disturbed deterministic system due to the coin toss invoked to resolve tied strategies. Averaging over this stochasticity yields a description of the MG's deterministic dynamics via mapping equations for the strategy score and global information. The strategy-score map contains both restoring-force and bias terms, whose magnitudes depend on the game's quenched disorder. Approximate analytical expressions are obtained and the effect of "market impact" is discussed. The global-information map represents a trajectory on a de Bruijn graph. For small quenched disorder, a Eulerian trail represents a stable attractor. It is shown analytically how antipersistence arises. The response to perturbations and different initial conditions is also discussed.
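A bare-bones simulation makes the coin-toss element described in this abstract explicit; everything below (population size, memory length, number of strategies per agent) is an illustrative choice rather than the paper's analytic setup.

```python
import random

def minority_game(N=101, m=3, s=2, steps=2000, seed=0):
    """Minimal minority game: N agents each hold s random strategies
    (lookup tables from the m-bit global history to {+1, -1}). Each step
    every agent plays its best-scoring strategy, with ties broken by a
    coin toss -- the stochastic element averaged over in the paper.
    Returns the attendance time series A(t)."""
    rng = random.Random(seed)
    P = 2 ** m                                   # number of histories
    strategies = [[[rng.choice((-1, 1)) for _ in range(P)]
                   for _ in range(s)] for _ in range(N)]
    scores = [[0] * s for _ in range(N)]
    history = rng.randrange(P)
    attendance = []
    for _ in range(steps):
        A = 0
        for i in range(N):
            best = max(scores[i])
            tied = [k for k in range(s) if scores[i][k] == best]
            k = rng.choice(tied)                 # coin toss on ties
            A += strategies[i][k][history]
        minority = -1 if A > 0 else 1            # minority side wins
        for i in range(N):                       # virtual scoring of all
            for k in range(s):                   # strategies, win or lose
                if strategies[i][k][history] == minority:
                    scores[i][k] += 1
                else:
                    scores[i][k] -= 1
        history = ((history << 1) | (1 if minority == 1 else 0)) % P
        attendance.append(A)
    return attendance

A = minority_game()
```

Averaging many such runs over the coin-toss randomness is what yields the deterministic score and global-information maps discussed above.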
The deterministic and statistical Burgers equation
NASA Astrophysics Data System (ADS)
Fournier, J.-D.; Frisch, U.
Fourier-Lagrangian representations of the UV-region inviscid-limit solutions of the equations of Burgers (1939) are developed for deterministic and random initial conditions. The Fourier-mode amplitude behavior of the deterministic case is characterized by complex singularities with fast decrease, power-law preshocks with spectral indices of about -4/3, and shocks with a k^(-1) spectrum. In the random case, shocks are associated with a k^(-2) spectrum which overruns the smaller wavenumbers and appears immediately under Gaussian initial conditions. The use of the Hopf-Cole solution in the random case is illustrated in calculations of the law of energy decay by a modified Kida (1979) method. Graphs and diagrams of the results are provided.
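The Hopf-Cole solution mentioned in this abstract can be evaluated directly by quadrature. The sketch below (our own construction, with illustrative grid parameters) checks the quadrature against the exact self-similar solution u = x/(1+t) that follows from the linear initial condition u0(x) = x.

```python
import math

def burgers_cole_hopf(U0, x, t, nu, y_min, y_max, n=4000):
    """Evaluate the Hopf-Cole solution of the viscous Burgers equation
        u_t + u u_x = nu u_xx
    by trapezoid quadrature of
        u(x,t) = Int (x-y)/t exp(-G/(2 nu)) dy / Int exp(-G/(2 nu)) dy,
    where G(y; x, t) = U0(y) + (x-y)^2/(2 t) and U0 is the
    antiderivative of the initial condition u0."""
    dy = (y_max - y_min) / n
    num = den = 0.0
    for i in range(n + 1):
        y = y_min + i * dy
        w = 0.5 if i in (0, n) else 1.0          # trapezoid weights
        G = U0(y) + (x - y) ** 2 / (2 * t)
        e = math.exp(-G / (2 * nu)) * w          # underflows safely to 0
        num += (x - y) / t * e
        den += e
    return num / den

# For u0(x) = x (so U0(y) = y^2/2) the exact solution is u = x/(1+t),
# independent of the viscosity nu.
u = burgers_cole_hopf(lambda y: y * y / 2, x=1.0, t=1.0, nu=0.1,
                      y_min=-10.0, y_max=10.0)
```

Because G is quadratic for this initial condition, the quadrature reproduces the exact value essentially to machine precision.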
Ada programming guidelines for deterministic storage management
NASA Technical Reports Server (NTRS)
Auty, David
1988-01-01
Previous reports have established that a program can be written in the Ada language such that the program's storage management requirements are determinable prior to its execution. Specific guidelines for ensuring such deterministic usage of Ada dynamic storage requirements are described. Because requirements may vary from one application to another, guidelines are presented in a most-restrictive to least-restrictive fashion to allow the reader to match appropriate restrictions to the particular application area under investigation.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
King, A.W.
1986-01-01
Ecological models of the seasonal exchange of carbon dioxide (CO2) between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO2 concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. The primary disadvantage of this approach is the problem of extrapolating the site-specific models across large regions having considerable biotic, climatic, and edaphic heterogeneity. Two methods of extrapolation were tested. The first approach was a simple extrapolation that assumed relative within-biome homogeneity, and generated CO2 source functions that differed dramatically from published estimates of CO2 exchange. The second extrapolation explicitly incorporated within-biome variability in the abiotic variables that drive seasonal biosphere-atmosphere CO2 exchange.
NASA Astrophysics Data System (ADS)
Lüers, J.; Westermann, S.; Piel, K.; Boike, J.
2014-01-01
The annual variability of CO2 exchange in most ecosystems is primarily driven by the activities of plants and soil microorganisms. However, little is known about the carbon balance and its controlling factors outside the growing season in arctic regions dominated by soil freeze/thaw processes, long-lasting snow cover, and several months of darkness. This study presents a complete annual cycle of the CO2 net ecosystem exchange (NEE) dynamics for a High Arctic tundra area on the west coast of Svalbard based on eddy-covariance flux measurements. The annual cumulative CO2 budget is close to zero grams carbon per square meter per year, but shows a very strong seasonal variability. Four major CO2 exchange seasons have been identified. (1) During summer (ground snow-free), the CO2 exchange occurs mainly as a result of biological activity, with a predominance of strong CO2 assimilation by the ecosystem. (2) The autumn (ground snow-free or partly snow-covered) is dominated by CO2 respiration as a result of biological activity. (3) In winter and spring (ground snow-covered), low but persistent CO2 release occurs, overlain by considerable CO2 exchange events in both directions associated with changes of air masses and atmospheric air pressure. (4) In the snow melt season (pattern of snow-free and snow-covered areas), both meteorological and biological forcing result in a visible carbon uptake by the high arctic ecosystem. Data related to this article are archived under: http://doi.pangaea.de/10.1594/PANGAEA.809507.
NASA Astrophysics Data System (ADS)
Lüers, J.; Westermann, S.; Piel, K.; Boike, J.
2014-11-01
The annual variability of CO2 exchange in most ecosystems is primarily driven by the activities of plants and soil microorganisms. However, little is known about the carbon balance and its controlling factors outside the growing season in Arctic regions dominated by soil freeze/thaw processes, long-lasting snow cover, and several months of darkness. This study presents a complete annual cycle of the CO2 net ecosystem exchange (NEE) dynamics for a high Arctic tundra area on the west coast of Svalbard based on eddy covariance flux measurements. The annual cumulative CO2 budget is close to 0 g C m-2 yr-1, but displays a strong seasonal variability. Four major CO2 exchange seasons have been identified. (1) During summer (snow-free ground), the CO2 exchange occurs mainly as a result of biological activity, with a dominance of strong CO2 assimilation by the ecosystem. (2) The autumn (snow-free ground or partly snow-covered) is dominated by CO2 respiration as a result of biological activity. (3) In winter and spring (snow-covered ground), low but persistent CO2 release occurs, overlain by considerable CO2 exchange events in both directions associated with high wind speed and changes of air masses and atmospheric air pressure. (4) The snow melt season (pattern of snow-free and snow-covered areas) is associated with both meteorological and biological forcing, resulting in a carbon uptake by the high Arctic ecosystem. Data related to this article are archived at http://doi.pangaea.de/10.1594/PANGAEA.809507.
Song, Hongjian; Olsen, Ole H.; Persson, Egon; Rand, Kasper D.
2014-01-01
Factor VIIa (FVIIa) is a trypsin-like protease that plays an important role in initiating blood coagulation. Very limited structural information is available for the free, inactive form of FVIIa that circulates in the blood prior to vascular injury and the molecular details of its activity enhancement remain elusive. Here we have applied hydrogen/deuterium exchange mass spectrometry coupled to electron transfer dissociation to pinpoint individual residues in the heavy chain of FVIIa whose conformation and/or local interaction pattern changes when the enzyme transitions to the active form, as induced either by its cofactor tissue factor or a covalent active site inhibitor. Identified regulatory residues are situated at key sites across one continuous surface of the protease domain spanning the TF-binding helix across the activation pocket to the calcium binding site and are embedded in elements of secondary structure and at the base of flexible loops. Thus these residues are optimally positioned to mediate crosstalk between functional sites in FVIIa, particularly the cofactor binding site and the active site. Our results unambiguously show that the conformational allosteric activation signal extends to the EGF1 domain in the light chain of FVIIa, underscoring a remarkable intra- and interdomain allosteric regulation of this trypsin-like protease. PMID:25344622
Deterministic processes vary during community assembly for ecologically dissimilar taxa
Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.
2015-01-01
The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils but which differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that increased focus on dispersal limitation and biotic interactions are necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640
NASA Astrophysics Data System (ADS)
Esemann, H.; Förster, H.
1999-05-01
Far infrared (FIR) and X-ray absorption spectroscopy (XAS) assisted by computer modelling were tested for their aptitude to study ion exchange, cation siting, NO adsorption and redox behavior of CuZSM-5.
Schwalm, C.R.; Williams, C.A.; Schaefer, K.; Anderson, R.; Arain, M.A.; Baker, I.; Black, T.A.; Chen, G.; Ciais, P.; Davis, K. J.; Desai, A. R.; Dietze, M.; Dragoni, D.; Fischer, M.L.; Flanagan, L.B.; Grant, R.F.; Gu, L.; Hollinger, D.; Izaurralde, R.C.; Kucharik, C.; Lafleur, P.M.; Law, B.E.; Li, L.; Li, Z.; Liu, S.; Lokupitiya, E.; Luo, Y.; Ma, S.; Margolis, H.; Matamala, R.; McCaughey, H.; Monson, R. K.; Oechel, W. C.; Peng, C.; Poulter, B.; Price, D.T.; Riciutto, D.M.; Riley, W.J.; Sahoo, A.K.; Sprintsin, M.; Sun, J.; Tian, H.; Tonitto, C.; Verbeeck, H.; Verma, S.B.
2011-06-01
Our current understanding of terrestrial carbon processes is represented in various models used to integrate and scale measurements of CO2 exchange from remote sensing and other spatiotemporal data. Yet assessments are rarely conducted to determine how well models simulate carbon processes across vegetation types and environmental conditions. Using standardized data from the North American Carbon Program we compare observed and simulated monthly CO2 exchange from 44 eddy covariance flux towers in North America and 22 terrestrial biosphere models. The analysis period spans ~220 site-years, 10 biomes, and includes two large-scale drought events, providing a natural experiment to evaluate model skill as a function of drought and seasonality. We evaluate models' ability to simulate the seasonal cycle of CO2 exchange using multiple model skill metrics and analyze links between model characteristics, site history, and model skill. Overall model performance was poor; the difference between observations and simulations was ~10 times observational uncertainty, with forested ecosystems better predicted than nonforested. Model-data agreement was highest in summer and in temperate evergreen forests. In contrast, model performance declined in spring and fall, especially in ecosystems with large deciduous components, and in dry periods during the growing season. Models used across multiple biomes and sites, the mean model ensemble, and a model using assimilated parameter values showed high consistency with observations. Models with the highest skill across all biomes all used prescribed canopy phenology, calculated NEE as the difference between GPP and ecosystem respiration, and did not use a daily time step.
Minimal Deterministic Physicality Applied to Cosmology
NASA Astrophysics Data System (ADS)
Valentine, John S.
This report summarizes ongoing research and development since our 2012 foundation paper, including the emergent effects of a deterministic mechanism for fermion interactions: (1) the coherence of black holes and particles using a quantum chaotic model; (2) wide-scale (anti)matter prevalence from exclusion and weak interaction during the fermion reconstitution process; and (3) red-shift due to variations of vacuum energy density. We provide a context for Standard Model fields, and show how gravitation can be accountably unified in the same mechanism, but not as a unified field.
Deterministic convergence in iterative phase shifting
Luna, Esteban; Salas, Luis; Sohn, Erika; Ruiz, Elfego; Nunez, Juan M.; Herrera, Joel
2009-03-10
Previous implementations of the iterative phase shifting method, in which the phase of a test object is computed from measurements using a phase shifting interferometer with unknown positions of the reference, do not provide an accurate way of knowing when convergence has been attained. We present a new approach to this method that allows us to deterministically identify convergence. The method is tested with a home-built Fizeau interferometer that measures optical surfaces polished to λ/100 using the Hydra tool. The intrinsic quality of the measurements is better than 0.5 nm. Other possible applications for this technique include fringe projection or any problem where phase shifting is involved.
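For context, the non-iterative baseline that such iterative schemes generalize is the classic four-step algorithm with known π/2 phase shifts; the iterative method above relaxes the assumption that these shifts are known. A minimal sketch with a synthetic fringe signal (our own example, not the paper's data):

```python
import math

def four_step_phase(I):
    """Classic four-step phase-shifting retrieval. With frames
    I_k = A + B*cos(phi + k*pi/2) for k = 0..3, the phase is
        phi = atan2(I4 - I2, I1 - I3)
    (1-based frame indices), independent of the offset A and contrast B."""
    I1, I2, I3, I4 = I
    return math.atan2(I4 - I2, I1 - I3)

# Synthetic fringes with known background, contrast, and phase.
A, B, phi_true = 2.0, 1.0, 0.7
frames = [A + B * math.cos(phi_true + k * math.pi / 2) for k in range(4)]
phi = four_step_phase(frames)
```

When the actual shifts deviate from π/2, this estimator is biased, which is precisely the situation the iterative phase-shifting method addresses.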
Deterministic quantum computation with one photonic qubit
NASA Astrophysics Data System (ADS)
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, limited by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system with approximately 21 qubits.
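The normalized-trace readout underlying DQC1 can be mimicked classically: after the controlled-U gate acts on a control qubit in |+> and a maximally mixed n-qubit register, measuring the control in the σx (σy) basis gives outcome probabilities set by the real (imaginary) part of tr(U)/2^n. The sketch below (our own illustration, with an arbitrary example gate and shot count; sign conventions for σy vary) samples those probabilities to emulate a shot-noise-limited estimate.

```python
import cmath
import random

def dqc1_trace_estimate(U, shots=200000, seed=1):
    """Emulate the DQC1 readout statistics for a unitary U (list of lists).
    The control-qubit measurement obeys P(+1) = (1 + Re tr(U)/2^n) / 2 in
    the sigma_x basis, and analogously for Im in the sigma_y basis; we
    sample both to mimic a finite-shot experimental estimate."""
    d = len(U)
    tr = sum(U[i][i] for i in range(d)) / d      # exact normalized trace
    rng = random.Random(seed)
    px = (1 + tr.real) / 2
    py = (1 + tr.imag) / 2
    re = sum(1 if rng.random() < px else -1 for _ in range(shots)) / shots
    im = sum(1 if rng.random() < py else -1 for _ in range(shots)) / shots
    return complex(re, im)

# Example: a single-qubit phase gate U = diag(1, e^{i pi/4}).
U = [[1, 0], [0, cmath.exp(1j * cmath.pi / 4)]]
est = dqc1_trace_estimate(U)
exact = (1 + cmath.exp(1j * cmath.pi / 4)) / 2
```

The estimate converges to the normalized trace at the 1/sqrt(shots) rate, which is the sense in which the modulator resolution and shot noise limit the optical implementation.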
NASA Astrophysics Data System (ADS)
Kazazić, Saša; Bertoša, Branimir; Luić, Marija; Mikleušević, Goran; Tarnowski, Krzysztof; Dadlez, Michal; Narczyk, Marta; Bzowska, Agnieszka
2016-01-01
The biologically active form of purine nucleoside phosphorylase (PNP) from Escherichia coli (EC 2.4.2.1) is a homohexamer, assembled as a trimer of dimers. Upon binding of phosphate, neighboring monomers adopt different active site conformations, described as open and closed. To gain insight into the functions of the two distinct active site conformations, the virtually inactive Arg24Ala mutant was complexed with phosphate; all active sites were found to be in the open conformation. To understand how the sites of neighboring monomers communicate with each other, we have combined H/D exchange (H/DX) experiments with molecular dynamics (MD) simulations. Both methods point to the mobility of the enzyme, associated with a few flexible regions situated at the surface and within the dimer interface. Although H/DX provides only an average extent of deuterium uptake over all six active sites of the hexamer, it was able to reveal the dynamic mechanism of cross-talk between monomers, i.e. allostery. Using this technique, it was found that phosphate binding to the wild type (WT) arrests molecular motion in backbone fragments that are flexible in the ligand-free state. This was not the case for the Arg24Ala mutant. Upon nucleoside substrate/inhibitor binding, some release of the phosphate-induced arrest is observed for the WT, whereas the opposite effect occurs for the Arg24Ala mutant. MD simulations confirmed that phosphate is bound tightly in the closed active sites of the WT; conversely, in the open conformation of the WT active site, phosphate is bound loosely and moves towards the exit of the active site. In the Arg24Ala mutant binary complex, phosphate is likewise bound loosely.
Schwalm, Christopher R; Williams, Christopher A; Schaefer, Kevin; Anderson, Ryan; Arain, M A; Baker, Ian; Barr, Alan; Black, T Andrew; Chen, Guangsheng; Chen, Jing Ming; Ciais, Philippe; Davis, Kenneth J; Desai, Ankur R; Dietze, Michael; Dragoni, Danilo; Fischer, Marc; Flanagan, Lawrence; Grant, Robert; Gu, Lianghong; Hollinger, D; Izaurralde, Roberto C; Kucharik, Chris; Lafleur, Peter; Law, Beverly E; Li, Longhui; Li, Zhengpeng; Liu, Shuguang; Lokupitiya, Erandathie; Luo, Yiqi; Ma, Siyan; Margolis, Hank; Matamala, R; McCaughey, Harry; Monson, Russell K; Oechel, Walter C; Peng, Changhui; Poulter, Benjamin; Price, David T; Riciutto, Dan M; Riley, William; Sahoo, Alok Kumar; Sprintsin, Michael; Sun, Jianfeng; Tian, Hanqin; Tonitto, Christine; Verbeeck, Hans; Verma, Shashi B
2010-12-09
There is a continued need for models to improve consistency and agreement with observations [Friedlingstein et al., 2006], both overall and under the more frequent extreme climatic events related to global environmental change, such as drought [Trenberth et al., 2007]. Past validation studies of terrestrial biosphere models have focused on only a few models and sites, typically in close proximity and primarily in forested biomes [e.g., Amthor et al., 2001; Delpierre et al., 2009; Grant et al., 2005; Hanson et al., 2004; Granier et al., 2007; Ichii et al., 2009; Ito, 2008; Siqueira et al., 2006; Zhou et al., 2008]. Furthermore, assessing model-data agreement with respect to drought requires, in addition to high-quality observed CO2 exchange data, a reliable drought metric as well as a natural experiment across sites and drought conditions.
Beitia, Anton Oscar; Kuperman, Gilad; Delman, Bradley N; Shapiro, Jason S
2013-01-01
We evaluated the performance of LOINC® and RadLex standard terminologies for covering CT test names from three sites in a health information exchange (HIE) with the eventual goal of building an HIE-based clinical decision support system to alert providers of prior duplicate CTs. Given the goal, the most important parameter to assess was coverage for high frequency exams that were most likely to be repeated. We showed that both LOINC® and RadLex provided sufficient coverage for our use case through calculations of (a) high coverage of 90% and 94%, respectively for the subset of CTs accounting for 99% of exams performed and (b) high concept token coverage (total percentage of exams performed that map to terminologies) of 92% and 95%, respectively. With trends toward greater interoperability, this work may provide a framework for those wishing to map radiology site codes to a standard nomenclature for purposes of tracking resource utilization. PMID:24551324
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications gained a wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
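The geometric approximation invoked above is easy to illustrate: if a transition fires at the first grid point k·Δ with per-step probability p = 1 − e^(−λΔ), its mean firing time Δ/p converges to the exponential mean 1/λ as Δ → 0. A minimal sketch (parameter values are illustrative, not from the paper):

```python
import math

def geometric_mean_firing_time(lam, delta):
    """Mean firing time when a transition fires at the first grid point
    k*delta (k >= 1) with per-step probability p = 1 - exp(-lam*delta):
    the geometric distribution has mean 1/p steps, i.e. delta/p time units."""
    p = 1.0 - math.exp(-lam * delta)
    return delta / p

lam = 2.0   # illustrative rate; the exact exponential mean is 1/lam = 0.5
for delta in (0.5, 0.1, 0.01, 0.001):
    print(f"delta={delta:>6}: geometric mean = "
          f"{geometric_mean_firing_time(lam, delta):.4f}")
```

The mean overshoots 1/λ on a coarse grid and converges to it from above as the step shrinks, which is the sense in which geometric firing times approximate exponential ones "arbitrarily well".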
Ballistic annihilation and deterministic surface growth
NASA Astrophysics Data System (ADS)
Belitsky, Vladimir; Ferrari, Pablo A.
1995-08-01
A model of deterministic surface growth studied by Krug and Spohn, a model of the annihilating reaction A+B→inert studied by Elskens and Frisch, a one-dimensional three-color cyclic cellular automaton studied by Fisch, and a particular automaton that has the number 184 in the classification of Wolfram can be studied via a cellular automaton with stochastic initial data called ballistic annihilation. This automaton is defined by the following rules: At time t=0, one particle is put at each integer point of ℝ. To each particle, a velocity is assigned in such a way that it may be either +1 or -1 with probabilities 1/2, independent of the velocities of the other particles. As time goes on, each particle moves along ℝ at the velocity assigned to it and annihilates when it collides with another particle. In the present paper we compute the distribution of this automaton for each time t ∈ ℕ. We then use this result to obtain the hydrodynamic limit for the surface profile from the model of deterministic surface growth mentioned above. We also show the relation of this limit process to the process which we call the moving local minimum of Brownian motion. The latter is the process B^min_x, x ∈ ℝ, defined by B^min_x := min{B_y; x−1 ≤ y ≤ x+1} for every x ∈ ℝ, where B_x, x ∈ ℝ, is the standard Brownian motion with B_0 = 0.
Moment equations for a piecewise deterministic PDE
NASA Astrophysics Data System (ADS)
Bressloff, Paul C.; Lawley, Sean D.
2015-03-01
We analyze a piecewise deterministic PDE consisting of the diffusion equation on a finite interval Ω with randomly switching boundary conditions and diffusion coefficient. We proceed by spatially discretizing the diffusion equation using finite differences and constructing the Chapman-Kolmogorov (CK) equation for the resulting finite-dimensional stochastic hybrid system. We show how the CK equation can be used to generate a hierarchy of equations for the r-th moments of the stochastic field, which take the form of r-dimensional parabolic PDEs on Ω^r that couple to lower order moments at the boundaries. We explicitly solve the first and second order moment equations (r = 2). We then describe how the r-th moment of the stochastic PDE can be interpreted in terms of the splitting probability that r non-interacting Brownian particles all exit at the same boundary; although the particles are non-interacting, statistical correlations arise due to the fact that they all move in the same randomly switching environment. Hence the stochastic diffusion equation describes two levels of randomness: Brownian motion at the individual particle level and a randomly switching environment. Finally, in the limit of fast switching, we use a quasi-steady state approximation to reduce the piecewise deterministic PDE to an SPDE with multiplicative Gaussian noise in the bulk and a stochastically-driven boundary.
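A finite-difference caricature of such a system — the heat equation on [0,1] whose right boundary switches between a held value and a no-flux condition at exponentially distributed times — can be simulated directly. The geometry, rate, and boundary choices below are illustrative assumptions, not the paper's exact hybrid system:

```python
import random

def simulate_switching_diffusion(n=21, d=1.0, t_end=1.0, rate=5.0, seed=0):
    """Explicit finite-difference sketch of diffusion on [0,1] with
    u(0)=0 held fixed, while the right boundary switches between
    absorbing (u=1) and reflecting (no-flux) at exponential rate `rate`."""
    rng = random.Random(seed)
    dx = 1.0 / (n - 1)
    dt = 0.4 * dx * dx / d           # satisfies the explicit stability bound
    u = [0.0] * n
    state = 0                        # 0: u(1)=1 held, 1: reflecting
    next_switch = rng.expovariate(rate)
    t = 0.0
    while t < t_end:
        while t >= next_switch:      # realize the random boundary switches
            state ^= 1
            next_switch += rng.expovariate(rate)
        new = u[:]
        for i in range(1, n - 1):
            new[i] = u[i] + d * dt / dx**2 * (u[i+1] - 2*u[i] + u[i-1])
        new[0] = 0.0
        new[-1] = 1.0 if state == 0 else new[-2]   # the switching boundary
        u = new
        t += dt
    return u
```

Each run is one sample path of the stochastic field; moments such as those in the hierarchy above would be estimated by averaging many runs with different seeds.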
Deterministic prediction of surface wind speed variations
NASA Astrophysics Data System (ADS)
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within a practically tolerable margin of error.
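The flavour of this kind of deterministic, phase-space forecasting can be sketched with a nearest-neighbour predictor on a delay embedding. The synthetic series, embedding parameters, and neighbour count below are illustrative assumptions; the paper's models and wind data are not reproduced:

```python
def embed(series, dim, tau):
    """Delay-embed a scalar series into dim-dimensional state vectors."""
    return [tuple(series[i + j * tau] for j in range(dim))
            for i in range(len(series) - (dim - 1) * tau)]

def predict_next(history, dim=3, tau=1, k=4):
    """One-step forecast: average the successors of the k nearest
    neighbours of the current delay vector in the reconstructed
    phase space (a generic sketch of such predictors)."""
    vecs = embed(history, dim, tau)
    target = vecs[-1]
    candidates = vecs[:-1]           # earlier vectors with known successors
    def dist(v):
        return sum((a - b) ** 2 for a, b in zip(v, target))
    nearest = sorted(range(len(candidates)),
                     key=lambda i: dist(candidates[i]))[:k]
    # the successor of vector i is history[i + (dim-1)*tau + 1]
    return sum(history[i + (dim - 1) * tau + 1] for i in nearest) / k

# demo on a synthetic chaotic series (fully developed logistic map)
x, series = 0.3, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)
true_next = 4.0 * series[-1] * (1.0 - series[-1])
pred = predict_next(series)
```

When the underlying dynamics are deterministic and the attractor is well sampled, nearby states have nearby futures, which is exactly what the short-horizon accuracy reported above relies on.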
Deterministic Creation of Macroscopic Cat States
Lombardo, Daniel; Twamley, Jason
2015-01-01
Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157
Deterministic forward scatter from surface gravity waves.
Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale
2012-12-01
Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement. PMID:23231099
Brozek, Carl K.; Cozzolino, Anthony F.; Teat, Simon J.; Chen, Yu-Sheng; Dincă, Mircea
2013-09-23
We employed multiwavelength anomalous X-ray dispersion to determine the relative cation occupation at two crystallographically distinct metal sites in Fe^{2+}-, Cu^{2+}-, and Zn^{2+}-exchanged versions of the microporous metal–organic framework (MOF) known as MnMnBTT (BTT = 1,3,5-benzenetristetrazolate). By exploiting the dispersive differences between Mn, Fe, Cu, and Zn, the extent and location of cation exchange were determined from single crystal X-ray diffraction data sets collected near the K edges of Mn^{2+} and of the substituting metal, and at a wavelength remote from either edge as a reference. Comparing the anomalous dispersion between these measurements indicated that the extent of Mn^{2+} replacement depends on the identity of the substituting metal. We contrasted two unique methods to analyze this data with a conventional approach and evaluated their limitations with emphasis on the general application of this method to other heterometallic MOFs, where site-specific metal identification is fundamental to tuning catalytic and physical properties.
Hans Peter Schmid; Craig Wayson
2009-05-05
The primary objective of this project was to evaluate carbon exchange dynamics across a region of North America between the Great Plains and the East Coast. This region contains about 40 active carbon cycle research (AmeriFlux) sites in a variety of climatic and land-use settings, from upland forest to urban development. The core research involved a scaling strategy that uses measured fluxes of CO2, energy, water, and other biophysical and biometric parameters to train and calibrate surface-vegetation-atmosphere models, in conjunction with satellite (MODIS) derived drivers. To achieve matching of measured and modeled fluxes, the ecosystem parameters of the models will be adjusted to the dynamically variable flux-tower footprints following Schmid (1997). High-resolution vegetation index variations around the flux sites have been derived from Landsat data for this purpose. The calibrated models are being used in conjunction with MODIS data, atmospheric re-analysis data, and digital land-cover databases to derive ecosystem exchange fluxes over the study domain.
Baxter, Lisa K; Burke, Janet; Lunden, Melissa; Turpin, Barbara J; Rich, David Q; Thevenet-Morrison, Kelly; Hodas, Natasha; Özkaynak, Halûk
2013-01-01
Central-site monitors do not account for factors such as outdoor-to-indoor transport and human activity patterns that influence personal exposures to ambient fine particulate matter (PM2.5). We describe and compare different ambient PM2.5 exposure estimation approaches that incorporate human activity patterns and time-resolved location-specific particle penetration and persistence indoors. Four approaches were used to estimate exposures to ambient PM2.5 for application to the New Jersey Triggering of Myocardial Infarction Study. These include: Tier 1, central-site PM2.5 mass; Tier 2A, the Stochastic Human Exposure and Dose Simulation (SHEDS) model using literature-based air exchange rates (AERs); Tier 2B, the Lawrence Berkeley National Laboratory (LBNL) Aerosol Penetration and Persistence (APP) and Infiltration models; and Tier 3, the SHEDS model where AERs were estimated using the LBNL Infiltration model. Mean exposure estimates from Tier 2A, 2B, and 3 exposure modeling approaches were lower than Tier 1 central-site PM2.5 mass. Tier 2A estimates differed by season but not across the seven monitoring areas. Tier 2B and 3 geographical patterns appeared to be driven by AERs, while seasonal patterns appeared to be due to variations in PM composition and time activity patterns. These model results demonstrate heterogeneity in exposures that are not captured by the central-site monitor. PMID:23321856
NASA Astrophysics Data System (ADS)
Hadley, J. L.; Kuzeja, P. S.
2004-05-01
We measured carbon and water exchange by the eddy covariance method at a younger, drier deciduous forest and compared it to the well-known Harvard Forest deciduous site during two growing seasons (2002 and 2003) and an intervening dormant season. Forests at both sites are dominated by red oak (Quercus rubra) and red maple (Acer rubrum), but the younger forest is situated near a hilltop, as opposed to the long-term Harvard Forest site, which is in a lowland area within 100 m of a stream and about 200 m from a bog. The younger forest had a maximum tree age of about 44 years within 200 m of the eddy flux tower (owing to an intense fire in the autumn of 1957); this compares to maximum tree ages of 65 to 90 years, depending on exact location, near the long-term site. The younger, drier forest stored about 1.7 Mg C/ha from May 2002 through April 2003. We estimate that this was about 30% less than annual storage in the older, moister forest at the long-term site, but as the 12-month periods on which this comparison is based are not completely overlapping for the two sites, this comparison may change slightly. Light-saturated net ecosystem carbon uptake of both sites was about 22 μmol m⁻² s⁻¹ in June 2002, but by August the value for the drier site was only about 20 μmol m⁻² s⁻¹ compared to about 24 μmol m⁻² s⁻¹ for the long-term site, suggesting that water availability may have become a limiting factor for photosynthesis in the drier forest. At the younger site in 2003 compared to 2002, we estimated less C storage in May and June but more C storage in July, August and September, with an overall increase in growing season C storage of about 0.4 Mg/ha. Lower early-growing-season carbon storage in 2003 versus 2002 was associated with slightly lower net ecosystem carbon uptake at all light levels in June 2003 compared to a year earlier. Cloudy and cool weather in May and early June 2003 reduced C uptake directly by reducing light available for photosynthesis, and
NASA Astrophysics Data System (ADS)
Morawski, Markus; Reinert, Tilo; Meyer-Klaucke, Wolfram; Wagner, Friedrich E.; Tröger, Wolfgang; Reinert, Anja; Jäger, Carsten; Brückner, Gert; Arendt, Thomas
2015-12-01
Perineuronal nets (PNs) are a specialized form of brain extracellular matrix, consisting of negatively charged glycosaminoglycans, glycoproteins and proteoglycans in the direct microenvironment of neurons. Still, locally immobilized charges in the tissue have not been accessible so far to direct observations and quantifications. Here, we present a new approach to visualize and quantify fixed charge-densities on brain slices using a focused proton-beam microprobe in combination with ionic metallic probes. For the first time, we can provide quantitative data on the distribution and net amount of pericellularly fixed charge-densities, which, determined at 0.4-0.5 M, is much higher than previously assumed. PNs, thus, represent an immobilized ion exchanger with ion sorting properties high enough to partition mobile ions in accord with Donnan-equilibrium. We propose that fixed charge-densities in the brain are involved in regulating ion mobility, the volume fraction of extracellular space and the viscosity of matrix components.
Statistical properties of deterministic Bernoulli flows
Radunskaya, A.E.
1992-12-31
This thesis presents several new theorems about the stability and the statistical properties of deterministic chaotic flows. Many concrete systems known to exhibit deterministic chaos have so far been shown to be of a class known as Bernoulli flows. This class of flows is characterized by the Finitely Determined property, which can be checked in specific cases. The first theorem says that these flows can be modeled arbitrarily well for all time by continuous-time finite state Markov processes. In other words, it is theoretically possible to model the flow arbitrarily well by a computer equipped with a roulette wheel. There follows a stability result, which says that one can distort the measurements made on the processes without affecting the approximation. These results are then applied to the problem of distinguishing deterministic chaos from stochastic processes in the analysis of time series. The second part of the thesis deals with a specific set of examples. Although it has been possible to analyze specific systems to determine whether they lie in the class of Bernoulli systems, the standard techniques rely on the construction of expanding and contracting fibers in the phase space of the system. These fibers are then used to coordinatize the phase space and to prove the existence of a hyperbolic structure. Unfortunately such methods may fail in the general case, where smoothness conditions and a small singular set cannot be assumed. For example, suppose the standard billiard flow on a square table with a perfectly round obstacle, which is known to be Bernoulli, is replaced by a similar flow on a table with a bumpy fractal-like obstacle: a model perhaps closer to nature. It is shown that these fibers no longer exist and hence cannot be used in the standard manner to prove Bernoulliness or ergodicity. But, one can use the fact that the class of Bernoulli flows is closed in the d-bar metric to show that this billiard flow with a bumpy obstacle is in fact Bernoulli.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M
2004-12-08
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art for experimental work, but, in order to create ultimately a deterministic capability for such precision micromachining, another goal was to form a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode (''HYADES'') with a 3-D molecular dynamics simulator (''MDCASK'') in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates with the use of sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results will be presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion-beams as well as more traditional machining/manufacturing techniques for applications such as components in NIF targets, remote sensors, including
ERIC Educational Resources Information Center
Hamilton, Kendra
2004-01-01
Luann Wright, founder and president of NoIndoctrination.org, a Web site devoted to policing professors accused of harassing conservative students in their classrooms, firmly believes that what she's doing is a public service. "The university should be a market place of ideas, a safe place to explore a variety of perspectives," she says. "But I…
Central limit behavior of deterministic dynamical systems
NASA Astrophysics Data System (ADS)
Tirnakli, Ugur; Beck, Christian; Tsallis, Constantino
2007-04-01
We investigate the probability density of rescaled sums of iterates of deterministic dynamical systems, a problem relevant for many complex physical systems consisting of dependent random variables. A central limit theorem (CLT) is valid only if the dynamical system under consideration is sufficiently mixing. For the fully developed logistic map and a cubic map we analytically calculate the leading-order corrections to the CLT if only a finite number of iterates is added and rescaled, and find excellent agreement with numerical experiments. At the critical point of period doubling accumulation, a CLT is not valid anymore due to strong temporal correlations between the iterates. Nevertheless, we provide numerical evidence that in this case the probability density converges to a q-Gaussian, thus leading to a power-law generalization of the CLT. The above behavior is universal and independent of the order of the maximum of the map considered, i.e., relevant for large classes of critical dynamical systems.
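For the fully developed logistic map the rescaled-sum experiment is easy to reproduce. A standard computation via the conjugacy to the angle-doubling map shows that the iterates x_i − 1/2 are mutually uncorrelated under the invariant measure, so the limiting Gaussian has variance 1/8; the sketch below (sample sizes are illustrative choices) checks this numerically:

```python
import math
import random

def rescaled_sum(x0, n):
    """y = (sum of (x_i - 1/2) over n iterates) / sqrt(n) for the
    fully developed logistic map x -> 4x(1-x)."""
    x, s = x0, 0.0
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        s += x - 0.5
    return s / math.sqrt(n)

# many rescaled sums from random initial conditions
rng = random.Random(0)
samples = [rescaled_sum(rng.random(), 1000) for _ in range(2000)]
mean = sum(samples) / len(samples)
var = sum((y - mean) ** 2 for y in samples) / len(samples)
# the empirical variance should approach the limiting value 1/8
```

A histogram of `samples` would show the near-Gaussian shape; at the period-doubling critical point the same experiment instead produces the heavy-tailed q-Gaussian discussed above.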
Deterministic multi-zone ice accretion modeling
NASA Technical Reports Server (NTRS)
Yamaguchi, K.; Hansman, R. J., Jr.; Kazmierczak, M.
1991-01-01
The study focuses on a deterministic model of the surface roughness transition behavior of glaze ice and analyzes the initial smooth/rough transition location, bead formation, and the propagation of the transition location. Based on a hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary-layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.
Deterministic multi-zone ice accretion modeling
NASA Technical Reports Server (NTRS)
Yamaguchi, K.; Hansman, R. John, Jr.; Kazmierczak, Michael
1991-01-01
The focus here is on a deterministic model of the surface roughness transition behavior of glaze ice. The initial smooth/rough transition location, bead formation, and the propagation of the transition location are analyzed. Based on the hypothesis that the smooth/rough transition location coincides with the laminar/turbulent boundary layer transition location, a multizone model is implemented in the LEWICE code. In order to verify the effectiveness of the model, ice accretion predictions for simple cylinders calculated by the multizone LEWICE are compared to experimental ice shapes. The glaze ice shapes are found to be sensitive to the laminar surface roughness and bead thickness parameters controlling the transition location, while the ice shapes are found to be insensitive to the turbulent surface roughness.
Fast combinatorial optimization using generalized deterministic annealing
NASA Astrophysics Data System (ADS)
Acton, Scott T.; Ghosh, Joydeep; Bovik, Alan C.
1993-08-01
Generalized Deterministic Annealing (GDA) is a useful new tool for computing fast multi-state combinatorial optimization of difficult non-convex problems. By estimating the stationary distribution of simulated annealing (SA), GDA yields equivalent solutions to practical SA algorithms while providing a significant speed improvement. Using the standard GDA, the computational time of SA may be reduced by an order of magnitude, and, with a new implementation improvement, Windowed GDA, the time improvements reach two orders of magnitude with a trivial compromise in solution quality. The fast optimization of GDA has enabled expeditious computation of complex nonlinear image enhancement paradigms, such as the Piecewise Constant (PICO) regression examples used in this paper. To validate our analytical results, we apply GDA to the PICO regression problem and compare the results to other optimization methods. Several full image examples are provided that show successful PICO image enhancement using GDA in the presence of both Laplacian and Gaussian additive noise.
Deterministic polishing from theory to practice
NASA Astrophysics Data System (ADS)
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework - the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to the part's processing and labor costs as well as bottlenecks in production lines and frustration for managers, operators and customers. The polishing process consists of several interacting variables including: glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to get every variable under control while operating in a robust space can not only provide a deterministic polishing process which improves profitability but also produces a higher quality optic.
Targeted activation in deterministic and stochastic systems
NASA Astrophysics Data System (ADS)
Eisenhower, Bryan; Mezić, Igor
2010-02-01
Metastable escape is ubiquitous in many physical systems and is becoming a concern in engineering design as these designs (e.g., swarms of vehicles, coupled building energetics, nanoengineering, etc.) become more inspired by dynamics of biological, molecular and other natural systems. In light of this, we study a chain of coupled bistable oscillators which has two global conformations and we investigate how specialized or targeted disturbance is funneled in an inverse energy cascade and ultimately influences the transition process between the conformations. We derive a multiphase averaged approximation to these dynamics which illustrates the influence of actions in modal coordinates on the coarse behavior of this process. An activation condition that predicts how the disturbance influences the rate of transition is then derived. The prediction tools are derived for deterministic dynamics and we also present analogous behavior in the stochastic setting and show a divergence from Kramers activation behavior under targeted activation conditions.
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to a speed-varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA are proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
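Under a stationary regime, the classical synchronous average that the GSA generalizes reduces to folding the signal over its known cycle length and averaging. A minimal numpy sketch, assuming the period is known exactly in samples (i.e. the signal has already been angularly resampled):

```python
import numpy as np

def synchronous_average(x, period):
    """Classical synchronous average: fold the signal into segments of
    one known cycle `period` (in samples) and average them, so periodic
    (deterministic) components add coherently while random components
    average out. This is the stationary-regime baseline that the GSA
    extends to varying speed."""
    n_cycles = len(x) // period
    segments = np.reshape(x[:n_cycles * period], (n_cycles, period))
    return segments.mean(axis=0)
```

Periodic components add coherently, while broadband noise shrinks roughly as 1/sqrt(N) with the number N of averaged cycles.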
Not Available
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.
Development of a deterministic XML schema by resolving structure ambiguity of HL7 messages.
Huang, Ean-Wen; Wang, Da-Wei; Liou, Der-Ming
2005-10-01
Health Level 7 (HL7) is a standard for medical information exchange. It defines data transfers for the application systems in the healthcare environment. The extensible markup language (XML), in turn, is a standard for data exchange over the Internet. If exchange messages follow the content and the sequence defined by HL7 and are expressed in the XML format, the system may benefit from the advantages of both standards. In creating the XML schema, we found ambiguities in HL7 message structures that cause the XML schema to be non-deterministic. These ambiguous expressions can be summarized within 12 structures and can be replaced with equivalent or similar unambiguous structures. Finite state automata are used to verify expression equivalence. Applying this schema, an XML document may eliminate redundant segment group definitions and make the structure simple and easy to reproduce. In this paper, we discuss the methods and our experience in resolving ambiguity problems in HL7 messages to generate a deterministic XML schema. PMID:15993979
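The determinism test behind this kind of schema repair can be illustrated with a tiny finite-automaton check: a content model is ambiguous when some reachable state can consume the same symbol in more than one way. This is a toy sketch, not the paper's 12 HL7 structures; the segment names (OBX, NTE) are placeholders.

```python
from collections import deque

def is_deterministic(nfa, start):
    """An automaton given as {state: {symbol: set(next_states)}} is
    deterministic iff no reachable state maps a symbol to more than one
    successor; that is, in spirit, the check used to detect ambiguous
    content models. Toy illustration only."""
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for symbol, nxt in nfa.get(s, {}).items():
            if len(nxt) > 1:
                return False  # two ways to consume `symbol`: ambiguous
            for t in nxt:
                if t not in seen:
                    seen.add(t)
                    queue.append(t)
    return True
```

An ambiguous model (two alternatives both starting with the same segment) fails the check, while an equivalent rewritten model with merged prefixes passes it.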
Morawski, Markus; Reinert, Tilo; Meyer-Klaucke, Wolfram; Wagner, Friedrich E.; Tröger, Wolfgang; Reinert, Anja; Jäger, Carsten; Brückner, Gert; Arendt, Thomas
2015-01-01
Perineuronal nets (PNs) are a specialized form of brain extracellular matrix, consisting of negatively charged glycosaminoglycans, glycoproteins and proteoglycans in the direct microenvironment of neurons. Until now, however, locally immobilized charges in the tissue have not been accessible to direct observation and quantification. Here, we present a new approach to visualize and quantify fixed charge-densities on brain slices using a focused proton-beam microprobe in combination with ionic metallic probes. For the first time, we can provide quantitative data on the distribution and net amount of pericellularly fixed charge-densities, which, determined at 0.4–0.5 M, is much higher than previously assumed. PNs thus represent an immobilized ion exchanger with ion sorting properties high enough to partition mobile ions in accord with the Donnan equilibrium. We propose that fixed charge-densities in the brain are involved in regulating ion mobility, the volume fraction of extracellular space and the viscosity of matrix components. PMID:26621052
Deterministic and non-deterministic switching in chains of magnetic hysterons.
Tanasa, R; Stancu, A
2011-10-26
This paper presents a fundamental analysis of the hysteresis of a chain of single-domain ferromagnetic particles in perpendicular geometry as a prototype for ultra-high density memories. Due to long-range magnetostatic interactions the system has a complex hysteresis, but stable features can be found. The loop has a number of deterministic Barkhausen jumps and consequently a number of stable plateaus that could be used in multistate memories. The fundamental elements that sustain this behavior are shown and discussed. PMID:21969255
Deterministic Control of two Fermions in a Double Well
NASA Astrophysics Data System (ADS)
Lompe, Thomas; Murmann, Simon; Bergschneider, Andrea; Klinkhamer, Vincent; Zuern, Gerhard; Jochim, Selim
2014-05-01
The behavior of an ensemble of fermionic particles confined in a periodic potential is one of the richest topics of condensed matter physics. The simplest and most widely used theoretical description of such systems is provided by the Fermi-Hubbard Hamiltonian. We realize this Hamiltonian by deterministically preparing systems of two fermionic atoms trapped in a double well potential in a quantum state of our choice. We have studied the tunneling dynamics of this system as a function of the interparticle interactions and found good agreement with theoretical expectations. We have thus obtained a single-site addressable realization of the Fermi-Hubbard model where all parameters can be fully controlled and freely tuned. As a first experiment we prepared systems of one | ↑ > and one | ↓ > atom in the ground state of the double well, introduced repulsive (attractive) interparticle interactions and observed the crossover into a Mott-insulating (charge-density-wave) regime by measuring the occupation statistics of the individual sites. By adding a third well to the system this approach could be used to directly observe ordered charge-density-waves and antiferromagnetic ordering. Now at Massachusetts Institute of Technology.
Gao, W.
1994-01-01
High-resolution satellite data provide detailed, quantitative descriptions of land surface characteristics over large areas so that objective scale linkage becomes feasible. With the aid of satellite data, Sellers et al. and Wood and Lakshmi examined the linearity of processes scaled up from 30 m to 15 km. If the phenomenon is scale invariant, then the aggregated value of a function or flux is equivalent to the function computed from aggregated values of controlling variables. The linear relation may be realistic for limited land areas having no large surface contrasts to cause significant horizontal exchange. However, for areas with sharp surface contrasts, horizontal exchange and different dynamics in the atmospheric boundary may induce nonlinear interactions, such as at interfaces of land-water, forest-farm land, and irrigated crops-desert steppe. The linear approach, however, represents the simplest scenario, and is useful for developing an effective scheme for incorporating subgrid land surface processes into large-scale models. Our studies focus on coupling satellite data and ground measurements with a satellite-data-driven land surface model to parameterize surface fluxes for large-scale climate models. In this case study, we used surface spectral reflectance data from satellite remote sensing to characterize spatial and temporal changes in vegetation and associated surface parameters in an area of about 350 × 400 km covering the southern Great Plains (SGP) Cloud and Radiation Testbed (CART) site of the US Department of Energy's Atmospheric Radiation Measurement (ARM) Program.
Amor, J.; Swails, J.; Zhu, X.; Roy, C.; Nagai, H.; Ingmundson, A.; Cheng, X.; Kahn, R.
2005-01-01
The Legionella pneumophila protein RalF is secreted into host cytosol via the Dot/Icm type IV transporter where it acts to recruit ADP-ribosylation factor (Arf) to pathogen-containing phagosomes in the establishment of a replicative organelle. The presence in RalF of the Sec7 domain, present in all Arf guanine nucleotide exchange factors, has suggested that recruitment of Arf is an early step in pathogenesis. We have determined the crystal structure of RalF and of the isolated Sec7 domain and found that RalF is made up of two domains. The Sec7 domain is homologous to mammalian Sec7 domains. The C-terminal domain forms a cap over the active site in the Sec7 domain and contains a conserved folding motif, previously observed in adaptor subunits of vesicle coat complexes. The importance of the capping domain and of the glutamate in the 'glutamic finger,' conserved in all Sec7 domains, to RalF functions was examined using three different assays. These data highlight the functional importance of domains other than Sec7 in Arf guanine nucleotide exchange factors to biological activities and suggest novel mechanisms of regulation of those activities.
NASA Astrophysics Data System (ADS)
Tieman, Catherine; Rousseau, Valery
Highly frustrated quantum systems on lattices can exhibit a wide variety of phases. In addition to the usual Mott insulating and superfluid phases, these systems can also produce so-called "exotic phases", such as supersolid and valence-bond-solid phases. An example of a particularly frustrated lattice is the pyrochlore structure, which is formed by corner-sharing tetrahedra. Many real materials adopt this structure, for instance the crystal Cd2Re2O7, which exhibits superconducting properties. However, the complex structure of these materials, combined with the complexity of the dominant interactions that describe them, makes their analytical study difficult. Moreover, approximate methods, such as mean-field theory, fail to give a correct description of these systems. In this work, we report on the first exact quantum Monte Carlo study of a model of hard-core bosons on a pyrochlore lattice with six-site ring-exchange interactions, using the Stochastic Green Function (SGF) algorithm. We analyze the superfluid density and the structure factor as functions of the filling and ring-exchange interaction strength, and we map out the ground state phase diagram.
NASA Astrophysics Data System (ADS)
Zhang, Da; Zha, Xin-Wei; Duan, Ya-Jun; Wei, Zhao Hui
2016-01-01
We present a bidirectional remote state preparation scheme that uses a six-qubit maximally entangled state. The proposed protocol allows two distant parties to simultaneously and deterministically exchange their states under the control of a third remote party; the exchange cannot succeed without the controller's permission. Based on von Neumann measurements and Bell state measurements, Alice can transmit an arbitrary single-qubit state to Bob, while Bob can transmit an arbitrary single-qubit state to Alice via the control of the supervisor Charlie.
Zhang, Hong; Zou, Sheng; Chen, Xiyuan; Ding, Ming; Shan, Guangcun; Hu, Zhaohui; Quan, Wei
2016-07-25
We present a method for monitoring the atomic density number on site based on atomic spin-exchange relaxation. When the spin polarization P ≪ 1, the atomic density numbers can be estimated by measuring the magnetic resonance linewidth in an applied DC magnetic field using an all-optical atomic magnetometer. The density measurements showed that the experimental results and the theoretical predictions were in good agreement over the investigated temperature range from 413 K to 463 K, although the experimental results were approximately 1.5 to 2 times smaller than the theoretical predictions estimated from the saturated vapor pressure curve. These deviations were mainly induced by the radiative heat transfer efficiency, which inevitably led to a lower temperature in the cell than the set temperature. PMID:27464172
Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick
2015-01-01
Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data of jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future. PMID:26485278
King, W.D.
2000-08-23
As part of the Hanford River Protection Project Waste Treatment facility design contracted to BNFL, Inc., a sample of Savannah River Site (SRS) Tank 44 F waste solution was treated for the removal of technetium (as pertechnetate ion). Interest in treating the SRS sample for Tc removal resulted from the similarity between the Tank 44 F supernate composition and Hanford Envelope A supernate solutions. The Tank 44 F sample was available as a by-product of tests already conducted at the Savannah River Technology Center (SRTC) as part of the Alternative Salt Disposition Program for treatment of SRS wastes. Testing of the SRS sample resulted in considerable cost savings since it was not necessary to ship a sample of Hanford supernate to SRS.
NASA Astrophysics Data System (ADS)
Grant, J. D.; Soulsby, C.; Malcolm, I. A.; Gibbins, C.
2007-12-01
The Atlantic salmon's (Salmo salar L.) native Scottish headwater spawning grounds can be viewed as dynamic hot spots of biological productivity set within a hierarchical landscape sculpted by complex physico-chemical processes. Traditionally, controls on female spawning site selection have mainly been attributed to the sedimentary and hydraulic characteristics of available spawning habitat. In the UK, the influence of physico-chemical landscape hierarchies on spawning site selection is poorly understood. This study aims to provide a preliminary insight into the importance of stream hydrochemistry, at different hierarchical scales, for spawning site selection by Atlantic salmon in a Scottish braided river system. During the 2005 and 2006 spawning seasons, intensive surveys of dissolved oxygen, alkalinity and trace metals, together with continuous temperature monitoring, were undertaken under high and low flow conditions in the surface water network of the floodplain reaches. Using GPS data within a GIS framework, these data were related to the locations utilised by spawning fish surveyed on a daily basis in each year. Results indicated that patterns of groundwater - surface water (GW-SW) exchange were spatially and temporally dynamic, occurring at a range of scales across the channel floodplain system. A hierarchy of channel types could be differentiated on the basis of contrasting surface water quality and source water characteristics. These included channels dominated by sources such as groundwater, hillslope drainage and main-stem river water. Although most channels contained good hydraulic and sedimentary conditions, spawning was concentrated in those locations which displayed strong chemical groundwater signatures. In 2005, 64% and in 2006, 44% of spawning occurred in groundwater channel types. This study suggests that GW-SW interaction hierarchies may play an important role in determining site selection by spawning Atlantic salmon and sea trout.
Fukui, Y.; Doskey, P. V.; Environmental Research
1998-06-20
Emissions of nonmethane organic compounds (NMOCs) were measured by a static enclosure technique at a grassland site in the Midwestern United States during the growing seasons over a 2-year period. A mixture of nonmethane hydrocarbons (NMHCs) and oxygenated hydrocarbons (OxHCs) was emitted from the surface at rates exhibiting large seasonal and year-to-year variations. The average emission rate (and standard error) of the total NMOCs around noontime on sunny days during the growing seasons for the 2-year period was 1,300 ± 170 µg m⁻² h⁻¹ (mass of the total NMOCs per area of enclosed soil surface per hour) or 5.5 ± 0.9 µg g⁻¹ h⁻¹ (mass of the total NMOCs per mass of dry plant biomass in an enclosure per hour), with about 10% and 70% of the emissions being composed of tentatively identified NMHCs and OxHCs, respectively. Methanol was apparently derived from both the soil and vegetation and exhibited an average emission rate of 460 ± 73 µg m⁻² h⁻¹ (1.4 ± 0.2 µg g⁻¹ h⁻¹), which was the largest emission among the NMOCs. The year-to-year variation in the precipitation pattern greatly affected the NMOC emission rates. Emission rates normalized to biomass density exhibited a linear decrease as the growing season progressed. The emission rates of some NMOCs, particularly the OxHCs, from vegetation subjected to hypoxia, frost, and physical stresses were significantly greater than the average values observed at the site. Emissions of monoterpenes (α- and β-pinene, limonene, and myrcene) and cis-3-hexen-1-ol were accelerated during the flowering of the plants and were much greater than those predicted by algorithms that correlated emission rates with temperature. Herbaceous vegetation is estimated to contribute about 40% and 50% of the total NMOC and monoterpene emissions, respectively, in grasslands; the remaining contributions are from woody species within grasslands. Contributions of isoprene emissions from herbaceous vegetation in
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follow a state-space formulation. Models in the specific class of state-space approaches referred to as Reservoir Computing fix their state-transition function. The state space with the associated state-transition structure forms a reservoir, which is supposed to be sufficiently complex to capture a large number of features of the input stream that can potentially be exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of more or less ad hoc randomized model-building stages, with both researchers and practitioners having to rely on trial and error. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performance comparable to that of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proven theoretical limit.
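A sketch of such a deterministically constructed reservoir, assuming illustrative parameter values (cycle weight `r`, input scale `v`) and an input sign pattern taken from the decimal digits of pi; these choices are assumptions for illustration, not values specified in this abstract.

```python
import numpy as np

def simple_cycle_reservoir(n_res, r=0.9, v=0.5):
    """Deterministically constructed reservoir with simple cycle topology:
    unit i feeds unit (i+1) mod N, all cycle weights equal to r, and all
    input weights have magnitude v with deterministic signs (here derived
    from the digits of pi; supports n_res <= 20 in this sketch)."""
    W = np.zeros((n_res, n_res))
    for i in range(n_res):
        W[(i + 1) % n_res, i] = r  # single cycle of identical weights
    digits = "14159265358979323846"[:n_res]
    signs = np.array([1 if d in "02468" else -1 for d in digits])
    return W, v * signs

def run_reservoir(W, w_in, u):
    """Drive the reservoir with scalar input sequence u; collect states."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)
```

A linear readout trained by least squares on the collected states can then recover delayed versions of the input, which is the memory-capacity flavor of task mentioned in the abstract.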
Deterministic particle transport in a ratchet flow
NASA Astrophysics Data System (ADS)
Beltrame, Philippe; Makhoul, Mounia; Joelson, Maminirina
2016-01-01
This study is motivated by the issue of the pumping of particles through a periodically modulated channel. We focus on a simplified deterministic model of small-inertia particles within the Stokes flow framework that we call the "ratchet flow." A path-following method is employed in the parameter space in order to trace the scenario by which bounded periodic solutions give way to particle transport. Depending on whether the magnitude of the particle drag is moderate or large, two main transport mechanisms are identified, in which the role of the parity symmetry of the flow differs. For large drag, transport is induced by flow asymmetry, while for moderate drag, since the full transport solution bifurcation structure already exists for symmetric settings, flow asymmetry only makes the transport effective. We analyze the scenarios of current reversal for each mechanism as well as the role of synchronization. In particular we show that, for large drag, the particle drift is similar to phase slip in a synchronization problem.
Deredge, Daniel; Li, Jiawen; Johnson, Kenneth A; Wintrode, Patrick L
2016-05-01
New nonnucleoside analogs are being developed as part of a multi-drug regimen to treat hepatitis C viral infections. Particularly promising are inhibitors that bind to the surface of the thumb domain of the viral RNA-dependent RNA polymerase (NS5B). Numerous crystal structures have been solved showing small-molecule non-nucleoside inhibitors bound to the hepatitis C viral polymerase, but these structures alone do not define the mechanism of inhibition. Our prior kinetic analysis showed that nonnucleoside inhibitors binding to thumb site-2 (NNI2) do not block initiation or elongation of RNA synthesis; rather, they block the transition from initiation to elongation, which is thought to proceed with significant structural rearrangement of the enzyme-RNA complex. Here we have mapped the effect of three NNI2 inhibitors on the conformational dynamics of the enzyme using hydrogen/deuterium exchange kinetics. All three inhibitors rigidify an extensive allosteric network extending >40 Å from the binding site, thus providing a structural rationale for the observed disruption of the transition from distributive initiation to processive elongation. The two more potent inhibitors also suppress slow cooperative unfolding in the fingers extension-thumb interface and primer grip, which may contribute to their stronger inhibition. These results establish that NNI2 inhibitors act through long-range allosteric effects, reveal important conformational changes underlying normal polymerase function, and point the way to the design of more effective allosteric inhibitors that exploit this new information. PMID:27006396
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and directly acting factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have helped greatly in the analysis of nondeterministic networks. Previous research held that chaotic dynamics behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics behavior because they lack random factors. In this paper, we first adopt chaos theory to analyze traffic data collected from a typical deterministic network testbed, avionics full-duplex switched Ethernet (AFDX), and find that chaotic dynamics behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we apply mean field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling without any random network factors. Through studying the derived TDE, we propose that chaotic dynamics is an intrinsic property of network traffic, which can also be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion results in chaotic dynamics for a deterministic network, consistent with the expectation of the TDE. Our research will be helpful for analyzing the complicated dynamic behavior of traffic in deterministic networks and will contribute to network reliability design and analysis.
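The abstract does not reproduce the TDE itself, so the following sketch shows, as a stand-in, the standard numerical test for chaotic dynamics in a one-dimensional map: a positive largest Lyapunov exponent, estimated by averaging log|f'(x)| along an orbit. The logistic map used here is purely illustrative and is not the paper's traffic model.

```python
import math

def lyapunov_1d(f, df, x0, n=10000, burn=1000):
    """Largest Lyapunov exponent of a 1-D map, estimated as the orbit
    average of log|f'(x)|. A positive value is the usual numerical
    evidence that a dynamical system (here a toy map, not the paper's
    TDE) is chaotic."""
    x = x0
    for _ in range(burn):  # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n):
        x = f(x)
        acc += math.log(abs(df(x)))
    return acc / n
```

For the fully chaotic logistic map x -> 4x(1-x) the exponent converges to ln 2 ≈ 0.693, while for a parameter with a stable fixed point the estimate is negative.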
Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities
Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.
2012-03-29
A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. What is lacking is a unified conceptual framework for understanding why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory, we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within- and between-community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.
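The null-model logic behind such community phylogenetic analyses can be sketched as a standardized effect size: the observed mean pairwise distance of a community compared against communities drawn at random from the taxon pool. The toy version below fabricates the distance matrix and community membership for illustration; the paper's actual metrics and null models are more elaborate.

```python
import numpy as np

def ses_mpd(dist, members, n_null=999, seed=None):
    """Standardized effect size of mean pairwise (e.g. phylogenetic)
    distance: observed MPD of a community vs. a null distribution built
    by drawing equally sized random communities from the pool. Strongly
    negative values indicate phylogenetic clustering, the signature of
    deterministic environmental filtering."""
    rng = np.random.default_rng(seed)

    def mpd(idx):
        sub = dist[np.ix_(idx, idx)]
        iu = np.triu_indices(len(idx), 1)
        return sub[iu].mean()

    obs = mpd(members)
    pool, k = np.arange(len(dist)), len(members)
    null = np.array([mpd(rng.choice(pool, k, replace=False))
                     for _ in range(n_null)])
    return (obs - null.mean()) / null.std()
```

With a community made up of a tight cluster of closely related taxa, the observed MPD falls far below the null mean and the effect size is strongly negative.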
NASA Astrophysics Data System (ADS)
Diebel, F.; Boguslawski, M.; Lučić, Nemanja M.; Jović Savić, Dragana M.; Denz, C.
2015-03-01
Light propagation in structured photonic media covers many fascinating wave phenomena resulting from the band structure of the underlying lattice. Recently, the focus turned towards deterministic aperiodic structures exhibiting distinctive band gap properties. To experimentally study these effects, optical induction of photonic refractive index landscapes turned out to be the method of choice to fabricate these structures. In this contribution, we present a paradigm change of photonic lattice design by introducing a holographic optical induction method based on pixel-like spatially multiplexed single-site nondiffracting Bessel beams. This technique allows realizing a huge class of two-dimensional photonic structures, including deterministic aperiodic golden-angle Vogel spirals, as well as Fibonacci lattices.
Self-avoiding modes of motion in a deterministic Lorentz lattice gas
NASA Astrophysics Data System (ADS)
Webb, B. Z.; Cohen, E. G. D.
2014-08-01
We study the motion of a particle on the two-dimensional honeycomb lattice, whose sites are occupied by either flipping rotators or flipping mirrors, which scatter the particle according to a deterministic rule. For both types of scatterers we find a new type of motion that has not been observed in a Lorentz lattice gas, where the particle's trajectory is a self-avoiding walk between returns to its initial position. We show that this behavior is a consequence of the deterministic scattering rule and the particular class of initial scatterer configurations we consider. Since self-avoiding walks are one of the main tools used to model the growth of crystals and polymers, the particle's motion in this class of systems is potentially important for the study of these processes.
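The flavor of such a deterministic scattering rule can be illustrated with a minimal square-lattice analogue (the paper itself works on the honeycomb lattice, and the mirror conventions below are illustrative assumptions, not the paper's model):

```python
def run_flipping_mirrors(steps, start=(0, 0), heading=(1, 0)):
    """Deterministic walk on Z^2: every site holds a mirror, initially '/',
    that reflects the particle and then flips to '\\' (and vice versa)."""
    # reflection tables for the two mirror orientations
    slash = {(1, 0): (0, 1), (0, 1): (1, 0), (-1, 0): (0, -1), (0, -1): (-1, 0)}
    backslash = {(1, 0): (0, -1), (0, -1): (1, 0), (-1, 0): (0, 1), (0, 1): (-1, 0)}
    mirrors = {}  # site -> '/' or '\\'; unvisited sites default to '/'
    pos, d = start, heading
    path = [pos]
    for _ in range(steps):
        m = mirrors.get(pos, '/')
        d = (slash if m == '/' else backslash)[d]
        mirrors[pos] = '\\' if m == '/' else '/'  # flip after scattering
        pos = (pos[0] + d[0], pos[1] + d[1])
        path.append(pos)
    return path
```

With a uniform initial configuration the trajectory is a self-avoiding staircase, echoing the self-avoiding behavior described in the abstract.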
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze optical properties and plasmonic field enhancements in large aperiodic nanostructures. We introduce an extension of the Generalized Ohm's Law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering better understanding of field localizations and improved substrate design controllability. Generalized Ohm's Law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method. PMID:21164839
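The deterministic aperiodic sequences named above can be generated directly; a short sketch using the standard definitions (substitution rule for the Fibonacci word, binary "11"-pair parity for Rudin-Shapiro), not code from the paper:

```python
def fibonacci_word(iterations):
    """Fibonacci word via the substitution rule A -> AB, B -> A, from 'A'."""
    word = "A"
    for _ in range(iterations):
        word = "".join("AB" if c == "A" else "A" for c in word)
    return word

def rudin_shapiro(n):
    """+1/-1 term: parity of the number of (possibly overlapping)
    '11' pairs in the binary expansion of n."""
    bits = bin(n)[2:]
    pairs = sum(1 for a, b in zip(bits, bits[1:]) if a == b == "1")
    return -1 if pairs % 2 else 1
```

In the optical context, the two letters (or signs) would map to the presence or type of a scatterer at each lattice position.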
Neo-deterministic seismic hazard assessment in North Africa
NASA Astrophysics Data System (ADS)
Mourabit, T.; Abou Elenean, K. M.; Ayadi, A.; Benouar, D.; Ben Suleman, A.; Bezzeghoud, M.; Cheddadi, A.; Chourak, M.; ElGabry, M. N.; Harbi, A.; Hfaiedh, M.; Hussein, H. M.; Kacem, J.; Ksentini, A.; Jabour, N.; Magrin, A.; Maouche, S.; Meghraoui, M.; Ousadou, F.; Panza, G. F.; Peresan, A.; Romdhane, N.; Vaccari, F.; Zuccolo, E.
2014-04-01
North Africa is one of the most earthquake-prone areas of the Mediterranean. Many devastating earthquakes, some of them tsunami-triggering, inflicted heavy loss of life and considerable economic damage to the region. In order to mitigate the destructive impact of the earthquakes, the regional seismic hazard in North Africa is assessed using the neo-deterministic, multi-scenario methodology (NDSHA) based on the computation of synthetic seismograms, using the modal summation technique, at a regular grid of 0.2 × 0.2°. This is the first study aimed at producing NDSHA maps of North Africa including five countries: Morocco, Algeria, Tunisia, Libya, and Egypt. The key input data for the NDSHA algorithm are earthquake sources, seismotectonic zonation, and structural models. In the preparation of the input data, it has been really important to go beyond the national borders and to adopt a coherent strategy all over the area. Thanks to the collaborative efforts of the teams involved, it has been possible to properly merge the earthquake catalogues available for each country to define with homogeneous criteria the seismogenic zones, the characteristic focal mechanism associated with each of them, and the structural models used to model wave propagation from the sources to the sites. As a result, reliable seismic hazard maps are produced in terms of maximum displacement (Dmax), maximum velocity (Vmax), and design ground acceleration.
Mahesh, S K; Rao, P Prabhakar; Thomas, Mariyam; Francis, T Linda; Koshy, Peter
2013-12-01
Stannate-based pyrochlore-type red phosphors CaGd(1-x)SnNbO7:xEu(3+), Ca(1-y)Sr(y)Gd(1-x)SnNbO7:xEu(3+), and Ca(0.8-x)Sr0.2GdSnNbO(7+δ):xEu(3+) were prepared via the conventional solid-state method. The influence of cation substitution and activator site control on the photoluminescence properties of these phosphors is elucidated using powder X-ray diffraction, Rietveld analysis, Raman spectral analysis, and photoluminescence excitation and emission spectra. The Eu(3+) luminescence in the quaternary pyrochlore lattice serves as a very good structural probe for the detection of short-range disorder in the lattice, which is otherwise not detected by the normal powder X-ray diffraction technique. The Eu(3+) emission due to the magnetic dipole transition ((5)D0-(7)F1 MD) is modified with the increase in europium concentration in the quaternary pyrochlore red phosphors. The (5)D0-(7)F1 MD transition splitting is not observable for low Eu(3+) doping because of the short-range disorder in the pyrochlore lattice. The appearance of narrow peaks in the Raman spectra confirms that short-range disorder in the crystal lattice disappears with progressive europium doping. By using Sr as a network modifier ion in place of Ca we were able to increase the f-f transition intensities and the europium quenching concentration. The influence of the effective positive charge of the central Eu(3+) ion when it replaces a metal ion of lower oxidation state such as Ca(2+) was also investigated. The relative intensities of the A1g (∼500 cm(-1)) and F2g (∼330 cm(-1)) Raman vibrational modes are inverted when Eu(3+) ions replace Ca(2+) ions instead of Gd(3+), as trivalent europium ions attract the electron cloud of oxygen ions more strongly than divalent calcium ions. The positive-charge effect of Eu(3+) greatly strengthens the charge transfer band and (7)F0-(5)L6 transition intensities of the Ca0.7Sr0.2GdSnNbO7+δ:0.1Eu(3+) phosphor relative to those of the Ca0.8Sr0.2Gd0.9SnNbO7:0.1Eu(3+) phosphor. Our
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Shirk, M; Gilmer, G; Rubenchik, A
2004-09-24
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-dimensional features with 20-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels. For deterministic fabrication of high energy-density physics (HEDP) targets, it will be necessary both to fabricate features in a wide variety of materials as well as to understand and simulate the fabrication process. We continue to investigate, both in experiment and in modeling, the ablation/surface-modification processes that occur with the use of laser pulses that are near the ablation threshold fluence. During the first two years, we studied ablation of metals, and we used sub-ps laser pulses, because pulses shorter than the electron-phonon relaxation time offered the most precise control of the energy that can be deposited into a metal surface. The use of sub-ps laser pulses also allowed a decoupling of the energy-deposition process from the ensuing movement/ablation of the atoms from the solid, which simplified the modeling. We investigated the ablation of material from copper, gold, and nickel substrates. We combined the power of the 1-D hydrocode "HYADES" with the state-of-the-art, 3-D molecular dynamics simulations "MDCASK" in our studies. For FY04, we have stretched ourselves to investigate laser ablation of carbon, including chemically-assisted processes. We undertook this research because the energy deposition that is required to perform direct sublimation of carbon is much higher than that to stimulate the reaction 2C + O2 → 2CO. Thus, extremely fragile carbon aerogels might survive the chemically-assisted process more readily than ablation via direct laser sublimation. We had planned to start by studying vitreous carbon and move onto carbon aerogels. We were able to obtain flat, high-quality vitreous carbon, which was easy to work on
Deterministic Function Computation with Chemical Reaction Networks*
Chen, Ho-Lin; Doty, David; Soloveichik, David
2013-01-01
Chemical reaction networks (CRNs) formally model chemistry in a well-mixed solution. CRNs are widely used to describe information processing occurring in natural cellular regulatory networks, and with upcoming advances in synthetic biology, CRNs are a promising language for the design of artificial molecular control circuitry. Nonetheless, despite the widespread use of CRNs in the natural sciences, the range of computational behaviors exhibited by CRNs is not well understood. CRNs have been shown to be efficiently Turing-universal (i.e., able to simulate arbitrary algorithms) when allowing for a small probability of error. CRNs that are guaranteed to converge on a correct answer, on the other hand, have been shown to decide only the semilinear predicates (a multi-dimensional generalization of “eventually periodic” sets). We introduce the notion of function, rather than predicate, computation by representing the output of a function f: ℕ^k → ℕ^l by a count of some molecular species, i.e., if the CRN starts with x1, …, xk molecules of some “input” species X1, …, Xk, the CRN is guaranteed to converge to having f(x1, …, xk) molecules of the “output” species Y1, …, Yl. We show that a function f: ℕ^k → ℕ^l is deterministically computed by a CRN if and only if its graph {(x, y) ∈ ℕ^k × ℕ^l | f(x) = y} is a semilinear set. Finally, we show that each semilinear function f (a function whose graph is a semilinear set) can be computed by a CRN on input x in expected time O(polylog ‖x‖₁). PMID:25383068
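Deterministic function computation in this sense can be illustrated with the semilinear function f(x1, x2) = x1 + x2, computed by the two reactions X1 → Y and X2 → Y: no matter what order reactions fire in, the network converges to the correct output count. A minimal sketch (the uniform-random scheduler in `simulate_crn` is an illustrative assumption, not from the paper):

```python
import random

def simulate_crn(reactions, counts, seed=0, max_steps=10_000):
    """Run a CRN until no reaction is applicable.
    reactions: list of (consumed, produced) dicts of species -> count."""
    rng = random.Random(seed)
    for _ in range(max_steps):
        enabled = [r for r in reactions
                   if all(counts.get(s, 0) >= k for s, k in r[0].items())]
        if not enabled:
            return counts  # converged: no reaction can fire
        consumed, produced = rng.choice(enabled)
        for s, k in consumed.items():
            counts[s] -= k
        for s, k in produced.items():
            counts[s] = counts.get(s, 0) + k
    raise RuntimeError("did not converge within max_steps")

# f(x1, x2) = x1 + x2: each input molecule is converted into one output Y
ADD = [({"X1": 1}, {"Y": 1}), ({"X2": 1}, {"Y": 1})]
```

The output count of Y is the same for every execution order, which is exactly the "guaranteed to converge" notion of deterministic computation in the abstract.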
Reproducible and deterministic production of aspheres
NASA Astrophysics Data System (ADS)
Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian
2015-10-01
Aspheric lenses are ground in a single point cutting mode. Subsequently, different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For an economical production, metrology and correction steps need to be reduced. More deterministic grinding and polishing is mandatory. Single point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine. Machine improvements must focus on path accuracy and thermal expansion. Optimized design, materials and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements. Repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded. Position deviations evaluated in correction tables improve the path accuracy and that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time-controlled process. For plano and spherical polishing the amount of material removal during polishing is proportional to pressure, relative velocity and time (Preston). For the use of flexible tools on aspheres or freeform surfaces additional non-linear components are necessary. Satisloh ADAPT calculates a predicted removal function from lens geometry, tool geometry and process parameters with FEM. Additionally, the tool's local removal characteristic is determined in a simple test. By oscillating the tool on a plano or spherical sample of the same lens material, a trench is created. Its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can be predicted, reducing iteration and metrology steps.
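The Preston relation cited above (removal proportional to pressure, relative velocity and time) directly yields the dwell time needed per zone: t = removal / (k_p * p * v). A sketch under the simplifying assumptions of a constant Preston coefficient and uniform tool pressure and velocity (symbols and units are illustrative, and real flexible-tool polishing adds the non-linear terms the abstract mentions):

```python
def dwell_time_map(target_removal, preston_k, pressure, velocity):
    """Dwell time per zone under Preston's relation removal = k_p * p * v * t.
    target_removal: list of depths to remove (consistent units with k_p*p*v)."""
    rate = preston_k * pressure * velocity  # removal depth per unit time
    return [depth / rate for depth in target_removal]
```

Inverting the removal model like this is what makes dwell-time-controlled polishing deterministic: the machine's zone timings follow from the measured error map.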
Deterministic versus stochastic trends: Detection and challenges
NASA Astrophysics Data System (ADS)
Fatichi, S.; Barbosa, S. M.; Caporali, E.; Silva, M. E.
2009-09-01
The detection of a trend in a time series and the evaluation of its magnitude and statistical significance is an important task in geophysical research. This importance is amplified in climate change contexts, since trends are often used to characterize long-term climate variability and to quantify the magnitude and the statistical significance of changes in climate time series, both at global and local scales. Recent studies have demonstrated that the stochastic behavior of a time series can change the statistical significance of a trend, especially if the time series exhibits long-range dependence. The present study examines the trends in time series of daily average temperature recorded at 26 stations in the Tuscany region (Italy). In this study a new framework for trend detection is proposed. First, two parametric statistical tests, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, are applied in order to test for trend-stationary and difference-stationary behavior in the temperature time series. Then long-range dependence is assessed using different approaches, including wavelet analysis, heuristic methods and the fitting of fractionally integrated autoregressive moving average models. The trend detection results are further compared with the results obtained using nonparametric trend detection methods: the Mann-Kendall, Cox-Stuart and Spearman's ρ tests. This study confirms an increase in uncertainty when pronounced stochastic behaviors are present in the data. Nevertheless, for approximately one third of the analyzed records, the stochastic behavior itself cannot explain the long-term features of the time series, and a deterministic positive trend is the most likely explanation.
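Of the nonparametric tests listed, the Mann-Kendall test is simple to sketch. A minimal version without the tie and autocorrelation corrections that a serious analysis of daily temperature data would require:

```python
import math

def mann_kendall(series):
    """Mann-Kendall trend test (no tie correction): returns (S, z, p).
    S sums the signs of all pairwise differences; z uses the normal
    approximation with continuity correction; p is two-sided."""
    n = len(series)
    s = sum((series[j] > series[i]) - (series[j] < series[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / math.sqrt(var)
    elif s < 0:
        z = (s + 1) / math.sqrt(var)
    else:
        z = 0.0
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return s, z, p
```

As the abstract stresses, long-range dependence inflates the false-positive rate of such tests, which is why the study pairs them with stationarity tests and long-memory models.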
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should design and seek SPPCs that are practical in nature regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete training time restrictions into account as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that may take away from valuable training time. Practitioners may decrease the necessary time needed to realize potentiation by improving their subject's relative strength. PMID:26712510
Deterministic phase retrieval employing spherical illumination
NASA Astrophysics Data System (ADS)
Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.
2015-05-01
Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination. This type of illumination allows a direct processing of the captured intensities with DPRTs for recovering the phase. Furthermore, it has been shown that intensities for DPRTs can be acquired from systems that use spherical illumination as well. However, this type of illumination presents a major setback for DPRTs: the captured intensities change their size for each position of the detector on the propagation axis. In order to apply the DPRTs, the captured intensities have to be rescaled. If this rescaling is not carried out properly, it can increase the error sensitivity of the final phase result. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction pattern of under-, in-, and over-focus intensities. The employment of the PLM allows capturing the corresponding intensities without displacing the detector. Moreover, with the proposed optical system we can control accurately the magnification of the captured intensities. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to the resizing of the images. In order to prove our claims, the corresponding numerical experiments are carried out. These simulations show that the phases retrieved with spherical illumination are accurate and comparable with those that employ plane-wave illumination. We demonstrate that, with the employment of the PLM, the proposed optical system has several advantages: the optical system is compact, the beam size on the detector plane is controlled accurately, and the errors coming from mechanical motion can be suppressed easily.
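One widely used DPRT, the transport-of-intensity equation (TIE), reduces under uniform illumination to a Poisson problem, phi'' = -(2*pi/lambda) * (dI/dz)/I, which can be inverted spectrally from the through-focus intensity stack. A 1-D sketch with periodic boundaries, shown as an illustration of the general DPRT machinery rather than the authors' PLM-based method (the pure-Python DFT keeps it self-contained but is O(N^2)):

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * math.pi * k * j / n) for j in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * j / n) for k in range(n)) / n
            for j in range(n)]

def tie_phase_1d(didz, intensity, wavelength, dx):
    """Uniform-intensity 1-D TIE step: solve phi'' = -(2*pi/wavelength)*(dI/dz)/I
    with periodic boundaries via spectral inversion; returns zero-mean phase."""
    n = len(didz)
    rhs = [-(2 * math.pi / wavelength) * d / intensity for d in didz]
    R = dft(rhs)
    phi_hat = [0j] * n  # k = 0 (mean phase) is left at zero
    for k in range(1, n):
        kk = k if k <= n // 2 else k - n           # signed frequency index
        ksq = (2 * math.pi * kk / (n * dx)) ** 2   # squared spatial frequency
        phi_hat[k] = R[k] / (-ksq)
    return [z.real for z in idft(phi_hat)]
```

The axial derivative dI/dz is what the under-/in-/over-focus intensity stack estimates by finite differences, which is why accurate, uniformly scaled intensities matter.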
A Method to Separate Stochastic and Deterministic Information from Electrocardiograms
NASA Astrophysics Data System (ADS)
Gutiérrez, R. M.; Sandoval, L. A.
2005-01-01
In this work we present a new method to separate stochastic and deterministic information contained in an electrocardiogram (ECG), which may provide new sources of information for diagnostic purposes. We assume that the ECG contains information corresponding to many different processes related to the cardiac activity, as well as contamination from different sources related to the measurement procedure and the nature of the observed system itself. The method starts with the application of an improved archetypal analysis to separate the mentioned stochastic and deterministic information. From the stochastic point of view we analyze Renyi entropies, and with respect to the deterministic perspective we calculate the autocorrelation function and the corresponding correlation time. We show that healthy and pathologic information may be stochastic and/or deterministic, can be identified by different measures, and can be located in different parts of the ECG.
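The Renyi entropies used for the stochastic component have a direct definition, H_alpha = log(sum p_i^alpha) / (1 - alpha), with the Shannon entropy as the alpha -> 1 limit. A minimal sketch (in practice the probabilities would come from a histogram of the separated ECG component):

```python
import math

def renyi_entropy(probs, alpha):
    """Renyi entropy H_alpha = log(sum p_i^alpha) / (1 - alpha);
    reduces to the Shannon entropy as alpha -> 1."""
    if abs(alpha - 1.0) < 1e-12:
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs if p > 0)) / (1.0 - alpha)
```

Varying alpha weights rare versus common amplitude values differently, which is what makes the family useful for characterizing the stochastic part of the signal.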
NASA Astrophysics Data System (ADS)
Mamadou, Ossenatou; Gourlez de la Motte, Louis; De Ligne, Anne; Bernard, Heineisch; Aubinet, Marc
2016-04-01
Although widely used to measure CO2 and other gas fluxes, the eddy covariance technique still needs methodological improvements. This research focuses on the high frequency loss corrections, which are especially important when using a closed-path infrared gas analyzer. We compared three approaches to implement these corrections for CO2 fluxes and evaluated their impact on the carbon balance at the Dorinne Terrestrial Observatory (DTO), an intensively grazed grassland site in Belgium. The carbon balance at DTO is also the object of a separate analysis (Gourlez de la Motte et al., Geophysical Research Abstract, Vol. 18, EGU2016-6813-1, 2016). In the first approach, the computation of correction factors was based on the measured sensible heat cospectra ('local' cospectra), whereas the other two were based on theoretical models (Kaimal et al., 1972). The correction approaches were validated by comparing the nighttime eddy covariance CO2 fluxes corrected with each approach and in situ soil respiration measurements. We found that the local cospectra differed from the Kaimal theoretical shape, although the site could not be considered 'difficult' (i.e., fairly flat, homogeneous, low vegetation, sufficient measurement height), appearing less peaked in the inertial subrange. This difference greatly affected the correction factor, especially for night fluxes. Night fluxes measured by eddy covariance were found to be in good agreement with in situ soil respiration measurements when corrected with local cospectra and to be overestimated when corrected with Kaimal cospectra. As the difference between correction factors was larger in stable than unstable conditions, this acts as a selective systematic error and has an important impact on annual fluxes. On the basis of a 4-year average, at DTO, the errors reach 71-150 g C m-2 y-1 for net ecosystem exchange (NEE), 280-562 g C m-2 y-1 for total ecosystem respiration (TER) and 209-412 g C m-2 y-1 for gross primary productivity (GPP
Frego, Lee; Davidson, Walter
2006-01-01
HXMS (hydrogen/deuterium exchange mass spectrometry) of the glucocorticoid receptor ligand-binding domain (GR LBD) complexed with the agonist dexamethasone and the antagonist RU-486 is described. Variations in the rates of exchange were observed in regions consistent with the published crystal structures of GR LBD complexed with RU-486 when compared with the GR dexamethasone complex. We also report the HXMS results for agonist-bound GR LBD with the coactivator transcriptional intermediary factor 2 (TIF2) and antagonist-bound GR LBD with nuclear receptor corepressor (NCoR). Alterations in exchange rates observed for agonist-bound GR LBD with TIF2 present were consistent with the published crystal structural contacts for the complex. Alterations in exchange rates observed for antagonist-bound GR LBD with NCoR were a subset of those observed with TIF2 binding, suggesting a common or overlapping binding site for coactivator and corepressor. PMID:16600964
Integral-transport-based deterministic brachytherapy dose calculations
NASA Astrophysics Data System (ADS)
Zhou, Chuanyu; Inanc, Feyzi
2003-01-01
We developed a transport-equation-based deterministic algorithm for computing three-dimensional brachytherapy dose distributions. The deterministic algorithm is based on the integral transport equation. The algorithm provides the capability of computing dose distributions for multiple isotropic point and/or volumetric sources in a homogeneous/heterogeneous medium. The algorithm results have been benchmarked against results from the literature and MCNP results for isotropic point sources and volumetric sources.
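The simplest building block of such a calculation, the primary (uncollided) dose from isotropic point sources with inverse-square spreading and exponential attenuation, can be sketched as below. This is a single-kernel illustration only, not the integral-transport algorithm itself, which additionally accounts for scatter and heterogeneities; the attenuation coefficient `mu` and source strengths are illustrative:

```python
import math

def dose_at(point, sources, mu):
    """Primary dose at `point` from isotropic point sources, each given as
    (strength, (x, y, z)): strength * exp(-mu*r) / (4*pi*r^2), summed."""
    total = 0.0
    for strength, pos in sources:
        r = math.dist(point, pos)
        total += strength * math.exp(-mu * r) / (4.0 * math.pi * r * r)
    return total
```

Superposition over many such kernels is what makes the multiple-source case a straightforward extension of the single-source benchmark.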
HyDRa: control of parameters for deterministic polishing.
Ruiz, E; Salas, L; Sohn, E; Luna, E; Herrera, J; Quiros, F
2013-08-26
Deterministic hydrodynamic polishing with HyDRa requires a precise control of polishing parameters, such as propelling air pressure, slurry density, slurry flux and tool height. We describe the HyDRa polishing system and show how precise, deterministic polishing can be achieved in terms of the control of these parameters. The polishing results of an 84 cm hyperbolic mirror are presented to illustrate how the stability of these parameters is important to obtain high-quality surfaces. PMID:24105579
Structural deterministic safety factors selection criteria and verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in resistive and applied stress probability distributions. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier of the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate-to-yield stresses. A deterministic safety index was derived from the combined safety factors, and the corresponding reliability showed that the deterministic method is not reliability-sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
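For the normal-distribution case, a safety index of this kind reduces to the standard first-order reliability index beta = (mu_R - mu_S) / sqrt(sigma_R^2 + sigma_S^2). A sketch under the assumption of independent, normally distributed resistive (R) and applied (S) stresses; this is the textbook formula, not necessarily the exact index derived in the study:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index for independent normal resistive (R)
    and applied (S) stresses: beta = (mu_R - mu_S)/sqrt(sigma_R^2 + sigma_S^2)."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """P(R < S) = Phi(-beta) under the normal model."""
    return 0.5 * math.erfc(beta / math.sqrt(2))
```

The mapping from safety index to failure probability is what lets a deterministic factor be audited in reliability terms, the connection the abstract draws.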
Graham, Brian W; Tao, Yeqing; Dodge, Katie L; Thaxton, Carly T; Olaso, Danae; Young, Nicolas L; Marshall, Alan G; Trakselis, Michael A
2016-06-10
The archaeal minichromosomal maintenance (MCM) helicase from Sulfolobus solfataricus (SsoMCM) is a model for understanding structural and mechanistic aspects of DNA unwinding. Although interactions of the encircled DNA strand within the central channel provide an accepted mode for translocation, interactions with the excluded strand on the exterior surface have mostly been ignored with regard to DNA unwinding. We have previously proposed an extension of the traditional steric exclusion model of unwinding to also include significant contributions with the excluded strand during unwinding, termed steric exclusion and wrapping (SEW). The SEW model hypothesizes that the displaced single strand tracks along paths on the exterior surface of hexameric helicases to protect single-stranded DNA (ssDNA) and stabilize the complex in a forward unwinding mode. Using hydrogen/deuterium exchange monitored by Fourier transform ion cyclotron resonance MS, we have probed the binding sites for ssDNA, using multiple substrates targeting both the encircled and excluded strand interactions. In each experiment, we have obtained >98.7% sequence coverage of SsoMCM from >650 peptides (5-30 residues in length) and are able to identify interacting residues on both the interior and exterior of SsoMCM. Based on identified contacts, positively charged residues within the external waist region were mutated and shown to generally lower DNA unwinding without negatively affecting the ATP hydrolysis. The combined data globally identify binding sites for ssDNA during SsoMCM unwinding as well as validating the importance of the SEW model for hexameric helicase unwinding. PMID:27044751
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features and more accurate representation of geologic features, as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
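With the source wavelet known (here, measured in air), deterministic deconvolution is exact in the idealized noise-free case. A minimal time-domain sketch by polynomial long division, since convolution is polynomial multiplication; real GPR processing would instead work in the frequency domain with regularization (e.g. a water level) against noise:

```python
def deconvolve(trace, wavelet):
    """Recover reflectivity r from trace = conv(r, wavelet) by synthetic
    (polynomial long) division; exact for noise-free data with wavelet[0] != 0."""
    n = len(trace) - len(wavelet) + 1
    r = [0.0] * n
    rem = list(trace)  # running remainder
    for i in range(n):
        r[i] = rem[i] / wavelet[0]
        for j, w in enumerate(wavelet):
            rem[i + j] -= r[i] * w
    return r
```

Collapsing each reflection back to a spike is what sharpens the vertical resolution and velocity picks reported in the abstract.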
Deterministic assembly processes govern bacterial community structure in the Fynbos, South Africa.
Moroenyane, I; Chimphango, S B M; Wang, J; Kim, H-K; Adams, Jonathan Miles
2016-08-01
The Mediterranean Fynbos vegetation of South Africa is well known for its high levels of diversity, endemism, and the existence of very distinct plant communities on different soil types. Studies have documented the broad taxonomic classification and diversity patterns of soil microbial diversity, but none has focused on the community assembly processes. We hypothesised that bacterial phylogenetic community structure in the Fynbos is highly governed by deterministic processes. We sampled soils in four Fynbos vegetation types and examined bacterial communities using the Illumina HiSeq platform with the 16S rRNA gene marker. UniFrac analysis showed that the community clustered strongly by vegetation type, suggesting a history of evolutionary specialisation in relation to habitats or plant communities. The standardised beta mean nearest taxon distance (ses.βNTD) index showed no association with vegetation type. However, the overall phylogenetic signal indicates that distantly related OTUs do tend to co-occur. Both NTI (nearest taxon index) and ses.βNTD deviated significantly from null models, indicating that deterministic processes were important in the assembly of bacterial communities. Furthermore, ses.βNTD was significantly higher than null expectations, indicating that co-occurrence of related bacterial lineages (over-dispersion in phylogenetic beta diversity) is determined by the differences in environmental conditions among the sites, even though the co-occurrence pattern did not correlate with any measured environmental parameter, except for a weak correlation with soil texture. We suggest that in the Fynbos, there are frequent shifts of niches by bacterial lineages, which then become constrained and evolutionarily conserved in their new environments. Overall, this study sheds light on the relative roles of both deterministic and neutral processes in governing bacterial communities in the Fynbos. It seems that deterministic processes play a major
Elliptical quantum dots as on-demand single photons sources with deterministic polarization states
NASA Astrophysics Data System (ADS)
Teng, Chu-Hsiang; Zhang, Lei; Hill, Tyler A.; Demory, Brandon; Deng, Hui; Ku, Pei-Cheng
2015-11-01
In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.
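The degree of linear polarization reported for the QD emission is conventionally computed from the extremal intensities of a polarizer scan, DOP = (Imax - Imin)/(Imax + Imin). A one-line sketch (assuming `i_max` and `i_min` are background-corrected measured intensities; this is the standard definition, not code from the paper):

```python
def degree_of_linear_polarization(i_max, i_min):
    """DOP_L = (I_max - I_min) / (I_max + I_min) from polarizer-scan extrema."""
    return (i_max - i_min) / (i_max + i_min)
```

A value near 1 with the intensity maximum along the ellipse's long axis is the signature described in the abstract.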
Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion
NASA Astrophysics Data System (ADS)
Majda, Andrew J.; Tong, Xin T.
2016-05-01
Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in truncated geophysical models. These important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied, and it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied, in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.
Cummins, David J; Espada, Alfonso; Novick, Scott J; Molina-Martin, Manuel; Stites, Ryan E; Espinosa, Juan Felix; Broughton, Howard; Goswami, Devrishi; Pascal, Bruce D; Dodge, Jeffrey A; Chalmers, Michael J; Griffin, Patrick R
2016-06-21
Hydrogen/deuterium exchange coupled with mass spectrometry (HDX-MS) is an information-rich biophysical method for the characterization of protein dynamics. Successful applications of differential HDX-MS include the characterization of protein-ligand binding. A single differential HDX-MS data set (protein ± ligand) often comprises more than 40 individual HDX-MS experiments. To eliminate laborious manual processing of samples, and to minimize random and gross errors, automated systems for HDX-MS analysis have become routine in many laboratories. However, an automated system, while less prone to random errors introduced by human operators, may have systematic errors that go unnoticed without proper detection. Although the application of automated (and manual) HDX-MS has become common, there are only a handful of studies reporting the systematic evaluation of the performance of HDX-MS experiments, and no reports have been published describing a cross-site comparison of HDX-MS experiments. Here, we describe an automated HDX-MS platform that operates with a parallel, two-trap, two-column configuration that has been installed in two remote laboratories. To understand the performance of the system both within and between laboratories, we have designed and completed a test-retest repeatability study for differential HDX-MS experiments implemented at each of two laboratories, one in Florida and the other in Spain. This study provided sufficient data to do both within- and between-laboratory variability assessments. Initial results revealed a systematic run-order effect within one of the two systems. Therefore, the study was repeated, and this time the conclusion was that the experimental conditions were successfully replicated with minimal systematic error. PMID:27224086
NASA Astrophysics Data System (ADS)
Carey, S. K.; Drewitt, G. B.
2013-12-01
The oil sands mining industry in Canada has made a commitment to restore disturbed areas to an equivalent capability to that which existed prior to mining. Certification requires successful reclamation, which can in part be evaluated through long-term ecosystem studies. A reclamation site, informally named South Bison Hill (SBH), has had growing-season water, energy and carbon fluxes measured via the eddy covariance method for 10 years since establishment. SBH was capped with a 0.2 m peat-glacial till mixture overlying 0.8 m of reworked glacial till soil. The site was seeded with a barley cultivar (Hordeum spp.) in the summer of 2002 and later planted to white spruce (Picea glauca) and aspen (Populus spp.) in the summer/fall of 2004. Since 2007, the major species atop SBH has been aspen, and by 2012 the trees were on average ~4 m in height. Climatically, mean growing-season temperature did not vary greatly, yet there was considerable difference in rainfall among years, with 2012 having the greatest rainfall at 321 mm, whereas 2011 and 2007 were notably dry at 180 and 178 mm, respectively. The partitioning of energy varied among years, but the fraction of latent heat as a portion of net radiation increased with the establishment of aspen, along with concomitant increases in LAI and growing-season net ecosystem exchange (NEE). Peak growing-season ET was smallest in 2004 at 2.3 mm/d and greatest in 2010 at ~3.9 mm/d. ET rates showed a marked increase in 2008 corresponding with the increase in LAI attributed to the aspen cover. Since the establishment of a surface cover and vegetation in 2003, SBH has been a growing-season sink for carbon dioxide. Values of NEE follow similar patterns to those of ET, with values gradually becoming more negative (greater carbon uptake) as the aspen forest established. Comparison with other disturbed and undisturbed boreal aspen stands shows that SBH exhibits similar water, energy and carbon flux patterns during the growing season.
DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS
J. E. MOREL
1999-06-01
The purposes of this paper are to: present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950s to the present; discuss the current status and capabilities of deterministic transport codes at Los Alamos; and discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.
Estimating the epidemic threshold on networks by deterministic connections
Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. Moreover, these models also account for generic nonuniform stochastic connections and heterogeneous community structure. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
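The bounding idea can be illustrated numerically. The sketch below assumes the standard mean-field SIS threshold τ_c = 1/λ_max(A) and a synthetic toy network (neither taken from the paper): dropping the nonnegative stochastic part of the expected adjacency matrix can only lower the largest eigenvalue, so the deterministic connections alone yield an upper bound on the threshold.

```python
import numpy as np

# Toy network: a deterministic 0/1 backbone plus stochastic links
# represented by their connection probabilities (all values assumed).
rng = np.random.default_rng(0)
n = 50
det = np.triu((rng.random((n, n)) < 0.08), 1).astype(float)
det = det + det.T                              # deterministic connections
prob = np.triu(rng.random((n, n)) * 0.03, 1)   # stochastic connection probabilities
prob = prob + prob.T

lam_full = np.max(np.linalg.eigvalsh(det + prob))  # expected full network
lam_det = np.max(np.linalg.eigvalsh(det))          # deterministic part only

# Mean-field SIS threshold: tau_c = 1 / lambda_max(A).  Since the stochastic
# part is entrywise nonnegative, lambda_max(det) <= lambda_max(det + prob),
# so the deterministic connections give an upper bound on the threshold.
tau_full = 1.0 / lam_full
tau_upper = 1.0 / lam_det
assert lam_det <= lam_full and tau_full <= tau_upper
```

The paper's actual bounds are model-specific inequalities; this only shows why restricting attention to the deterministic subgraph is informative at all.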
Deterministic transformations of multipartite entangled states with tensor rank 2
Turgut, S.; Guel, Y.; Pak, N. K.
2010-01-15
Transformations involving only local operations assisted with classical communication are investigated for multipartite entangled pure states having tensor rank 2. All necessary and sufficient conditions for the possibility of deterministically converting truly multipartite, rank-2 states into each other are given. Furthermore, a chain of local operations that successfully achieves the transformation has been identified for all allowed transformations. The identified chains have two nice features: (1) each party needs to carry out at most one local operation and (2) all of these local operations are also deterministic transformations by themselves. Finally, it is found that there are disjoint classes of states, all of which can be identified by a single real parameter, which remain invariant under deterministic transformations.
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs with entanglement both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon can establish one and only one perfect correlation, and thus deterministically create a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol shows a higher security, similar to the six-state protocol, under individual attacks. Our scheme allows a robust implementation under the current technology.
Exchange fluctuation theorem for correlated quantum systems.
Jevtic, Sania; Rudolph, Terry; Jennings, David; Hirono, Yuji; Nakayama, Shojun; Murao, Mio
2015-10-01
We extend the exchange fluctuation theorem for energy exchange between thermal quantum systems beyond the assumption of molecular chaos, and describe the nonequilibrium exchange dynamics of correlated quantum states. The relation quantifies how the tendency for systems to equilibrate is modified in high-correlation environments. In addition, a more abstract approach leads us to a "correlation fluctuation theorem". Our results elucidate the role of measurement disturbance for such scenarios. We show a simple application by finding a semiclassical maximum work theorem in the presence of correlations. We also present a toy example of qubit-qudit heat exchange, and find that non-classical behaviours such as deterministic energy transfer and anomalous heat flow are reflected in our exchange fluctuation theorem. PMID:26565174
Deterministic entanglement distillation for secure double-server blind quantum computation.
Sheng, Yu-Bo; Zhou, Lan
2015-01-01
Blind quantum computation (BQC) provides an efficient method for the client who does not have enough sophisticated technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimum quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to the pure and clean Bell state shared by two servers. Here, we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double-server BQC protocol. This protocol can obtain the pure maximally entangled Bell state. The success probability can reach 100% in principle. The distilled maximally entangled states can be retained to perform the BQC protocol subsequently. The parties who perform the distillation protocol do not need to exchange classical information, and they learn nothing from the client. This makes the protocol unconditionally secure and suitable for the future BQC protocol. PMID:25588565
Peng, Hai Yang; Li, Yong Feng; Lin, Wei Nan; Wang, Yu Zhan; Gao, Xing Yu; Wu, Tom
2012-01-01
Intensive investigations have been launched worldwide on the resistive switching (RS) phenomena in transition metal oxides due to both fascinating science and potential applications in next generation nonvolatile resistive random access memory (RRAM) devices. It is noteworthy that most of these oxides are strongly correlated electron systems, and their electronic properties are critically affected by the electron-electron interactions. Here, using NiO as an example, we show that rationally adjusting the stoichiometry and the associated defect characteristics enables controlled room temperature conversions between two distinct RS modes, i.e., nonvolatile memory switching and volatile threshold switching, within a single device. Moreover, from first-principles calculations and x-ray absorption spectroscopy studies, we found that the strong electron correlations and the exchange interactions between Ni and O orbitals play deterministic roles in the RS operations. PMID:22679556
Not Available
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.
ERIC Educational Resources Information Center
Schulz, Laura E.; Hooppell, Catherine; Jenkins, Adrianna C.
2008-01-01
Three studies look at whether the assumption of causal determinism (the assumption that, all else being equal, causes generate effects deterministically) affects children's imitation of modeled actions. These studies show that even when the frequency of an effect is matched, both preschoolers (N = 60; M = 56 months) and toddlers (N = 48; M = 18 months)…
Risk-based versus deterministic explosives safety criteria
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, "DoD Ammunition and Explosives Safety Standard." Risk-based criteria selected for comparison are those used by the government of Switzerland, "Technical Requirements for the Storage of Ammunition (TLM 75)." The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.
A difference characteristic for one-dimensional deterministic systems
NASA Astrophysics Data System (ADS)
Shahverdian, A. Yu.; Apkarian, A. V.
2007-06-01
A numerical characteristic for one-dimensional deterministic systems reflecting its higher order difference structure is introduced. The comparison with Lyapunov exponent is given. A difference analogy for Eggleston theorem as well as an estimate for Hausdorff dimension of the difference attractor, formulated in terms of the new characteristic is proved.
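For context on the comparison the abstract mentions, the ordinary Lyapunov exponent of a one-dimensional map can be estimated numerically. The sketch below uses the logistic map as a standard example; the map, parameters, and iteration counts are illustrative choices, not a calculation from the paper.

```python
import math

# Numerical Lyapunov exponent of the logistic map x -> r*x*(1-x):
# the time average of log |f'(x)| along a trajectory.
def lyapunov_logistic(r, x0=0.3, n_transient=1000, n_iter=100_000):
    x = x0
    for _ in range(n_transient):              # discard the transient
        x = r * x * (1 - x)
    s = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        s += math.log(abs(r * (1 - 2 * x)))   # log |f'(x)| = log |r(1-2x)|
    return s / n_iter

print(lyapunov_logistic(4.0))  # theory predicts ln 2 ≈ 0.693 (chaotic)
print(lyapunov_logistic(3.2))  # negative: stable period-2 orbit
```

A positive exponent signals chaos; the paper's difference characteristic is an alternative indicator built from higher-order differences of the orbit.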
Techniques to quantify the sensitivity of deterministic model uncertainties
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar; Unwin, S.D.
1989-04-01
Several existing methods for the assessment of the sensitivity of output uncertainty distributions generated by deterministic computer models to the uncertainty distributions assigned to the input parameters are reviewed and new techniques are proposed. Merits and limitations of the various techniques are examined by detailed application to the suppression pool aerosol removal code (SPARC).
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
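Since graph-state adjacency matrices are binary, the invertibility condition above is naturally a statement over GF(2), and can be checked by Gaussian elimination mod 2. The sketch below is a generic checker with hypothetical 2×2 example matrices; it is not code from the paper.

```python
def invertible_gf2(M):
    """Check invertibility of a 0/1 matrix over GF(2) by Gaussian elimination."""
    M = [row[:] for row in M]           # work on a copy
    n = len(M)
    if any(len(row) != n for row in M):
        return False                    # only square matrices can be invertible
    row = 0
    for col in range(n):
        # find a pivot row with a 1 in this column
        pivot = next((r for r in range(row, n) if M[r][col] & 1), None)
        if pivot is None:
            return False                # rank deficient over GF(2)
        M[row], M[pivot] = M[pivot], M[row]
        for r in range(n):              # eliminate the column elsewhere (XOR = mod-2 add)
            if r != row and M[r][col] & 1:
                M[r] = [a ^ b for a, b in zip(M[r], M[row])]
        row += 1
    return True

# Hypothetical reduced adjacency matrices between senders and receivers:
print(invertible_gf2([[1, 1], [0, 1]]))  # True: condition satisfied
print(invertible_gf2([[1, 1], [1, 1]]))  # False: singular over GF(2)
```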
From deterministic cellular automata to coupled map lattices
NASA Astrophysics Data System (ADS)
García-Morales, Vladimir
2016-07-01
A general mathematical method is presented for the systematic construction of coupled map lattices (CMLs) out of deterministic cellular automata (CAs). The entire CA rule space is addressed by means of a universal map for CAs that we have recently derived and that does not depend on any freely adjustable parameters. The CMLs thus constructed are termed real-valued deterministic cellular automata (RDCA) and encompass all deterministic CAs in rule space in the asymptotic limit κ → 0 of a continuous parameter κ. Thus, RDCAs generalize CAs in such a way that they constitute CMLs when κ is finite and nonvanishing. In the limit κ → ∞ all RDCAs are shown to exhibit a global homogeneous fixed point that attracts all initial conditions. A new bifurcation is discovered for RDCAs and its location is exactly determined from the linear stability analysis of the global quiescent state. In this bifurcation, fuzziness gradually begins to intrude on a purely deterministic CA-like dynamics. The mathematical method presented allows one to gain insight into some highly nontrivial behavior found after the bifurcation.
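For readers unfamiliar with the target objects, a coupled map lattice in its most common form is a ring of local maps with diffusive coupling. The sketch below is this generic CML, not the paper's universal-map RDCA construction; the local map, coupling strength, and lattice size are all illustrative choices.

```python
import math

# Generic coupled map lattice: diffusive nearest-neighbour coupling of
# local logistic maps on a ring (illustrative, not the RDCA construction).
def cml_step(x, eps, f):
    n = len(x)
    fx = [f(v) for v in x]
    # convex combination: (1-eps) local update + eps/2 from each neighbour
    return [(1 - eps) * fx[i] + 0.5 * eps * (fx[(i - 1) % n] + fx[(i + 1) % n])
            for i in range(n)]

f = lambda v: 4.0 * v * (1.0 - v)                       # chaotic local map
x = [0.1 + 0.8 * math.sin(i) ** 2 for i in range(32)]   # initial lattice state
for _ in range(200):
    x = cml_step(x, eps=0.3, f=f)
assert all(0.0 <= v <= 1.0 for v in x)  # convexity keeps the state in [0, 1]
```

The paper's contribution is to obtain such lattices systematically from CA rules, with κ interpolating between discrete CA dynamics and real-valued CML dynamics.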
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory. PMID:19257417
Paskaleva, Ivanka; Kouteva, Mihaela; Vaccari, Franco; Panza, Giuliano F.
2008-07-08
The earthquake record and the Code for design and construction in seismic regions in Bulgaria have shown that the territory of the Republic of Bulgaria is exposed to a high seismic risk due to local shallow and regional strong intermediate-depth seismic sources. The available strong-motion database is quite limited, and therefore not at all representative of the real hazard. The application of the neo-deterministic seismic hazard assessment procedure to two main Bulgarian cities has been able to supply a significant database of synthetic strong motions for the target sites, applicable for earthquake engineering purposes. The main advantage of the applied deterministic procedure is the possibility of taking into account, simultaneously and correctly, the contributions to the earthquake ground motion at the target sites of both the seismic source and the seismic wave propagation through the crossed media. We discuss in this study the results of some recent applications of the neo-deterministic seismic microzonation procedure to the cities of Sofia and Russe. The validation of the theoretically modeled seismic input against Eurocode 8 and the few available records at these sites is discussed.
Estimation of seismic ground motions using deterministic approach for major cities of Gujarat
NASA Astrophysics Data System (ADS)
Shukla, J.; Choudhury, D.
2012-06-01
A deterministic seismic hazard analysis has been carried out for various sites of the major cities (Ahmedabad, Surat, Bhuj, Jamnagar and Junagadh) of the Gujarat region in India to compute the seismic hazard exceeding a certain level in terms of peak ground acceleration (PGA) and to estimate maximum possible PGA at each site at bed rock level. The seismic sources in Gujarat are very uncertain and recurrence intervals of regional large earthquakes are not well defined. Because the instrumental records of India specifically in the Gujarat region are far from being satisfactory for modeling the seismic hazard using the probabilistic approach, an attempt has been made in this study to accomplish it through the deterministic approach. In this regard, all small and large faults of the Gujarat region were evaluated to obtain major fault systems. The empirical relations suggested by earlier researchers for the estimation of maximum magnitude of earthquake motion with various properties of faults like length, surface area, slip rate, etc. have been applied to those faults to obtain the maximum earthquake magnitude. For the analysis, seven different ground motion attenuation relations (GMARs) of strong ground motion have been utilized to calculate the maximum horizontal ground accelerations for each major city of Gujarat. Epistemic uncertainties in the hazard computations are accounted for within a logic-tree framework by considering the controlling parameters like b-value, maximum magnitude and ground motion attenuation relations (GMARs). The corresponding deterministic spectra have been prepared for each major city for the 50th and 84th percentiles of ground motion occurrence. These deterministic spectra are further compared with the specified spectra of Indian design code IS:1893-Part I (2002) to validate them for further practical use. Close examination of the developed spectra reveals that the expected ground motion values become high for the Kachchh region i.e. Bhuj
Relevance of deterministic structures for modeling of transport: the Lauswiesen case study.
Händel, Falk; Dietrich, Peter
2012-01-01
Knowledge of site-specific contaminant transport processes is an essential requirement for performing various tasks concerning the protection and management of groundwater resources. However, prediction of their behavior is often difficult, especially in heterogeneous aquifers because of the lack of information about flow- and transport-governing subsurface structures and parameters. Hence, stochastic approaches have been developed and frequently used. However, extensive modeling studies on sedimentary structures have shown that consideration of hydrogeological subunits and their distribution can be essential for transport modeling. A case study from the intensely investigated Lauswiesen site is used to demonstrate that more accurate predictions are possible with improved knowledge of deterministic structures. Results of this case study using direct-push injection logging (DPIL) provide a more reliable characterization of hydraulic conductivity than sieve and flow meter data. PMID:22582812
Deterministically Polarized Fluorescence from Single Dye Molecules Aligned in Liquid Crystal Host
Lukishova, S.G.; Schmid, A.W.; Knox, R.; Freivald, P.; Boyd, R. W.; Stroud, Jr., C. R.; Marshall, K.L.
2005-09-30
We demonstrate, for the first time to our knowledge, deterministically polarized fluorescence from single dye molecules. Planar-aligned nematic liquid crystal hosts provide deterministic alignment of single dye molecules in a preferred direction.
Evaluation the initial estimators using deterministic minimum covariance determinant algorithm
NASA Astrophysics Data System (ADS)
Alrawashdeh, Mufda Jameel; Sabri, Shamsul Rijal Muhammad; Ismail, Mohd Tahir
2014-07-01
The aim of this study is to examine five initial estimators introduced by Hubert et al. [1], together with five additional new initial estimators, using the Deterministic Minimum Covariance Determinant (DetMCD) algorithm. The objective of DetMCD is to provide robust estimates of the location and scatter matrix parameters. Since these parameters are highly influenced by the presence of outliers, DetMCD was constructed as a highly robust algorithm to overcome the outlier problem. DetMCD proceeds from non-random subsets: it computes a small number of deterministic initial estimators, followed by concentration steps. Here, we compare the DetMCD algorithm based on two groups of estimators, one with the original five estimators of Hubert et al. and the other with the five new estimators. The determinant values of these estimators are examined to evaluate performance across several cases.
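The concentration ("C") step shared by MCD-type algorithms can be sketched as follows. This is a minimal illustration of the classical C-step on synthetic data with an arbitrary initial subset; it omits DetMCD's deterministic initial estimators, and all data and sizes are assumptions for the example.

```python
import numpy as np

# One concentration step: given a current h-subset, re-select the h points
# with smallest squared Mahalanobis distance to the subset's mean/covariance.
# A classical result guarantees the subset covariance determinant never increases.
def c_step(X, subset_idx, h):
    S = X[subset_idx]
    mu = S.mean(axis=0)
    inv = np.linalg.inv(np.cov(S, rowvar=False))
    d = np.einsum('ij,jk,ik->i', X - mu, inv, X - mu)  # squared Mahalanobis distances
    return np.argsort(d)[:h]

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
X[:10] += 8.0                     # plant 10 gross outliers
h = 75
idx = np.arange(h)                # a deliberately poor initial subset
det_init = np.linalg.det(np.cov(X[idx], rowvar=False))
for _ in range(10):               # iterate C-steps toward convergence
    idx = c_step(X, idx, h)
det_final = np.linalg.det(np.cov(X[idx], rowvar=False))
```

Running this, `det_final` is no larger than `det_init`; DetMCD's contribution is to replace random initial subsets with a handful of deterministic ones before concentrating.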
Deterministic blade row interactions in a centrifugal compressor stage
NASA Technical Reports Server (NTRS)
Kirtley, K. R.; Beach, T. A.
1991-01-01
The three-dimensional viscous flow in a low speed centrifugal compressor stage is simulated using an average passage Navier-Stokes analysis. The impeller discharge flow is of the jet/wake type with low momentum fluid in the shroud-pressure side corner coincident with the tip leakage vortex. This nonuniformity introduces periodic unsteadiness in the vane frame of reference. The effect of such deterministic unsteadiness on the time-mean is included in the analysis through the average passage stress, which allows the analysis of blade row interactions. The magnitude of the divergence of the deterministic unsteady stress is of the order of the divergence of the Reynolds stress over most of the span, from the impeller trailing edge to the vane throat. Although the potential effects on the blade trailing edge from the diffuser vane are small, strong secondary flows generated by the impeller degrade the performance of the diffuser vanes.
On the secure obfuscation of deterministic finite automata.
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
Deterministic remote two-qubit state preparation in dissipative environments
NASA Astrophysics Data System (ADS)
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Deterministic synthesis of mechanical NOON states in ultrastrong optomechanics
NASA Astrophysics Data System (ADS)
Macrí, V.; Garziano, L.; Ridolfo, A.; Di Stefano, O.; Savasta, S.
2016-07-01
We propose a protocol for the deterministic preparation of entangled mechanical NOON states. The system consists of two identical, optically coupled optomechanical systems. The protocol consists of two steps. In the first, one of the two optical resonators is excited by a resonant external π-like Gaussian optical pulse. When the optical excitation has coherently, partly transferred to the second cavity, the second step starts. It consists of sending simultaneously two additional π-like Gaussian optical pulses, one to each optical resonator, with specific frequencies. In the optomechanical ultrastrong coupling regime, when the coupling strength becomes a significant fraction of the mechanical frequency, we show that mechanical NOON states with quite high Fock numbers can be deterministically obtained. The operating range of this protocol is carefully analyzed. Calculations have been carried out taking into account the presence of decoherence, thermal noise, and imperfect cooling.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum source for quantum communication networks owing to its high capacity, its low loss rate, and its unusual ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during the transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication. PMID:26861681
Approaches to implementing deterministic models in a probabilistic framework
Talbott, D.V.
1995-04-01
The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares three approaches for incorporating deterministic models of physical parameters into probabilistic models: parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (e.g., high explosive, rocket propellant, ...) is then presented using a directed graph model.
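Of the three approaches, parameter range binning is the simplest to illustrate: the continuous output of a deterministic model is collapsed into a small set of discrete branches, each carrying the probability mass of its bin. The sketch below is a generic illustration of that idea, not code from the paper; the function name and bin edges are hypothetical.

```python
def bin_parameter(samples, edges):
    """Parameter range binning: collapse continuous deterministic-model outputs
    into discrete branches whose probabilities are the fraction of samples
    falling in each bin (edges define half-open bins [e_i, e_{i+1}))."""
    counts = [0] * (len(edges) - 1)
    for x in samples:
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return [c / len(samples) for c in counts]
```

Each resulting branch probability can then be attached to an edge of a directed graph model in place of the full deterministic calculation.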
Deterministic algorithm with agglomerative heuristic for location problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are among the most popular models of location theory. They are used for solving clustering problems and many practically important logistic problems, such as the optimal location of factories or warehouses, oil or gas wells, optimal offshore drilling for oil, and steam generators in heavy-oil fields. The authors propose a new deterministic heuristic algorithm based on ideas of Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are given in comparison with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable precision.
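For context, the k-means special case mentioned above can be sketched in a few lines. This is plain Lloyd's k-means with random seeding, purely illustrative; it is not the authors' deterministic agglomerative algorithm, and all names are hypothetical. The dependence on the random seed is exactly what deterministic variants aim to remove.

```python
import random

def kmeans(points, k, iters=50):
    """Lloyd's k-means on a list of equal-length coordinate tuples."""
    # Random seeding: the stochastic step that deterministic algorithms avoid.
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, c in enumerate(clusters):
            if c:
                centroids[i] = tuple(sum(vs) / len(c) for vs in zip(*c))
    return centroids
```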
Emergence of four dimensional quantum mechanics from a deterministic theory in 11 dimensions
NASA Astrophysics Data System (ADS)
Doyen, G.; Drakova, D.
2015-07-01
We develop a deterministic theory which accounts for the coupling of a high-dimensional continuum of environmental excitations (called gravonons) to a massive particle in a very localized and very weak fashion. For the model presented, Schrödinger's equation can be solved practically exactly in 11 spacetime dimensions, and the result demonstrates that, as a function of time, an incoming matter wave incident on a screen extinguishes, except at a single interaction center on the detection screen. This transition is reminiscent of the wave-particle duality arising from the "collapse" (also called "process one") postulated in the Copenhagen-von Neumann interpretation. In our theory it is replaced by a sticking process of the particle from the vacuum onto the surface of the detection screen. This situation was verified in experiments using massive molecules. In our theory this "wave-particle transition" is connected to the different dimensionalities of the space for particle motion and the gravonon dynamics, the latter propagating in the hidden dimensions of 11-dimensional spacetime. The fact that the particle is detected at apparently statistically determined points on the screen is traced back to the weakness and locality of the interaction with the gravonons, which allows coupling on the energy shell alone. Although the theory exhibits a completely deterministic "chooser" mechanism for single-site sticking, an apparent statistical character results, as found in the experiments, due to small heterogeneities in the atomic and gravonon structures.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population. PMID:26515172
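As a minimal illustration of the underlying deterministic model, the sketch below integrates the classical SIS equation dI/dt = β_e I (N − I)/N − γ_e I with rates that switch according to a pre-drawn path of the background environment. It is a forward-Euler toy, not the computational approach of the paper; the function names and the sojourn-list representation of the environment are assumptions.

```python
def sis_in_environment(beta, gamma, env_path, i0, n, dt=0.001):
    """Forward-Euler integration of the deterministic SIS model
        dI/dt = beta[e] * I * (n - I) / n - gamma[e] * I,
    where e is the current state of the background environment.
    env_path: list of (duration, env_state) sojourns of the Markov chain."""
    i = i0
    for duration, e in env_path:
        for _ in range(int(duration / dt)):
            i += dt * (beta[e] * i * (n - i) / n - gamma[e] * i)
    return i
```

In a constant environment with β > γ the trajectory settles at the endemic equilibrium N(1 − γ/β); environmental switching makes the infective count track a moving equilibrium instead.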
Deterministic entanglement of two neutral atoms via Rydberg blockade
Zhang, X. L.; Isenhower, L.; Gill, A. T.; Walker, T. G.; Saffman, M.
2010-09-15
We demonstrate the deterministic entanglement of two individually addressed neutral atoms using a Rydberg blockade mediated controlled-NOT gate. Parity oscillation measurements reveal a Bell state fidelity of F=0.58±0.04, which is above the entanglement threshold of F=0.5, without any correction for atom loss, and F=0.71±0.05 after correcting for background collisional losses. The fidelity results are shown to be in good agreement with a detailed error model.
Probabilistic vs deterministic views in facing natural hazards
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Coviello, Velio
2015-04-01
Natural hazards can be mitigated through active or passive measures. Among the latter countermeasures, Early Warning Systems (EWSs) are playing an increasingly significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures, and their cost-effectiveness. EWSs, however, inevitably and intrinsically imply the concept of probability of occurrence and/or probability of error. Science has long since accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said of other fields of knowledge, such as law or politics, with which scientists sometimes have to interact. These disciplines are in fact still tied to more deterministic views of life. The same is true of public opinion, which often demands, or even presumes, deterministic answers to its needs. As an example, it might be easy for people to feel completely safe simply because an EWS has been installed. It is also easy for an administrator or a politician to help spread this false sense of security, together with the idea of having dealt with the problem and done something definitive to face it. Can geoethics play a role in creating a link between the probabilistic world of nature and science and society's tendency toward a more deterministic view of things? Answering this question could help scientists feel more confident in planning and performing their research activities.
Deterministic form correction of extreme freeform optical surfaces
NASA Astrophysics Data System (ADS)
Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica
2015-10-01
The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
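The unaccelerated fission source iteration that the thesis accelerates is, at its core, a power iteration on the fission matrix, whose convergence rate degrades as the dominance ratio |λ2/λ1| approaches one. The sketch below shows only that baseline iteration, no acceleration; the matrix and names are illustrative, not from the thesis.

```python
def power_iteration(F, source, tol=1e-12, max_iters=10000):
    """Unaccelerated fission source (power) iteration for k*S = F*S:
    repeatedly apply the fission matrix F and renormalize the source,
    until the eigenvalue estimate k stops changing."""
    k_old = 0.0
    for _ in range(max_iters):
        new = [sum(row[j] * source[j] for j in range(len(source))) for row in F]
        k = sum(new) / sum(source)      # eigenvalue (k-effective) estimate
        source = [s / k for s in new]   # renormalize so the total source is fixed
        if abs(k - k_old) < tol:
            break
        k_old = k
    return k, source
```

Near a dominance ratio of one the error contracts by almost nothing per sweep, which is precisely the regime the fission matrix and FDSA accelerations target.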
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Circulant Graph Modeling Deterministic Small-World Networks
NASA Astrophysics Data System (ADS)
Zhao, Chenggui
In recent years, many research works have revealed some technological networks, including the Internet, to be small-world networks, which is attracting attention from computer scientists. One can decide whether or not a real network is small-world by whether it has the high local clustering and small average path distance that are the two distinguishing characteristics of small-world networks. So far, researchers have presented many small-world models by dynamically evolving a deterministic network into a small-world one through stochastically adding vertices and edges to the original network. Rather few works have focused on deterministic models. In this paper, the circulant graph, an important kind of Cayley graph, is proposed as a model of deterministic small-world networks, in view of its simple structure and significant adaptability. It is shown that the circulant graphs constructed in this paper take on the two expected characteristics of small worlds. This work should be useful because the circulant graph has served as a model of communication and computer networks. The small-world characteristic will be helpful in the design and analysis of their structure and performance.
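The two small-world diagnostics named above are straightforward to compute for a circulant graph. The sketch below builds C_n(j_1, ..., j_m) as adjacency lists and measures the average shortest-path distance by breadth-first search; it illustrates the definitions only and is not the construction analyzed in the paper.

```python
from collections import deque

def circulant_graph(n, jumps):
    """Adjacency lists of the circulant graph C_n(jumps):
    vertex v is linked to (v + j) mod n and (v - j) mod n for each jump j."""
    return {v: sorted({(v + j) % n for j in jumps} | {(v - j) % n for j in jumps})
            for v in range(n)}

def average_path_length(adj):
    """Mean shortest-path distance over ordered vertex pairs, via BFS."""
    n = len(adj)
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(dist.values())
    return total / (n * (n - 1))
```

Adding longer jumps shortens the average distance while the local clustering produced by short jumps persists, which is the small-world combination.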
Demographic noise can reverse the direction of deterministic selection.
Constable, George W A; Rogers, Tim; McKane, Alan J; Tarnita, Corina E
2016-08-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to [Formula: see text] theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
Spatiotemporal calibration and resolution refinement of output from deterministic models.
Gilani, Owais; McKay, Lisa A; Gregoire, Timothy G; Guan, Yongtao; Leaderer, Brian P; Holford, Theodore R
2016-06-30
Spatiotemporal calibration of output from deterministic models is an increasingly popular tool to more accurately and efficiently estimate the true distribution of spatial and temporal processes. Current calibration techniques have focused on a single source of data on observed measurements of the process of interest that are both temporally and spatially dense. Additionally, these methods often calibrate deterministic models available in grid-cell format with pixel sizes small enough that the centroid of the pixel closely approximates the measurement for other points within the pixel. We develop a modeling strategy that allows us to simultaneously incorporate information from two sources of data on observed measurements of the process (that differ in their spatial and temporal resolutions) to calibrate estimates from a deterministic model available on a regular grid. This method not only improves estimates of the pollutant at the grid centroids but also refines the spatial resolution of the grid data. The modeling strategy is illustrated by calibrating and spatially refining daily estimates of ambient nitrogen dioxide concentration over Connecticut for 1994 from the Community Multiscale Air Quality model (temporally dense grid-cell estimates on a large pixel size) using observations from an epidemiologic study (spatially dense and temporally sparse) and Environmental Protection Agency monitoring stations (temporally dense and spatially sparse). PMID:26790617
Deterministic generation of remote entanglement with active quantum feedback
NASA Astrophysics Data System (ADS)
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Bajpai, Alankriti; Chandrasekhar, Pujari; Govardhan, Savitha; Banerjee, Rahul; Moorthy, Jarugu Narasimha
2015-02-01
The metal ions in a neutral Zn-MOF constructed from the tritopic triacid H3L with inherent concave features, a rigid core, and peripheral flexibility are found to exist in two distinct SBUs, that is, 0D and 1D. This has allowed site-selective postsynthetic metal exchange (PSME) to be investigated and the reactivities of metal ions in two different environments in coordination polymers to be contrasted for the first time. Site-selective transmetalation of Zn ions in the discrete environment is shown to occur in a single-crystal-to-single-crystal (SCSC) fashion with metal ions such as Fe(3+), Ru(3+), Cu(2+), Co(2+), etc., whereas those that are part of the 1D SBU sustain structural integrity, leading to novel bimetallic MOFs, which are inaccessible by conventional approaches. To the best of our knowledge, site-selective postsynthetic exchange of an intraframework metal ion in a MOF that contains metal ions in discrete as well as polymeric SBUs is heretofore unprecedented. PMID:25533890
Lambry, Jean-Christophe; Stranava, Martin; Lobato, Laura; Martinkova, Marketa; Shimizu, Toru; Liebl, Ursula; Vos, Marten H
2016-01-01
An important question for the functioning of heme proteins is whether different ligands present within the protein moiety can readily exchange with heme-bound ligands. Studying the dynamics of the heme domain of the Escherichia coli sensor protein YddV upon dissociation of NO from the ferric heme by ultrafast spectroscopy, we demonstrate that when the hydrophobic leucine residue in the distal heme pocket is mutated to glycine, in a substantial fraction of the protein water replaces NO as an internal ligand in as fast as ∼4 ps. This process, which is near-barrierless and occurs orders of magnitude faster than the corresponding process in myoglobin, corresponds to a ligand swap of NO with a water molecule present in the heme pocket, as corroborated by molecular dynamics simulations. Our findings provide important new insight into ligand exchange in heme proteins that functionally interact with different external ligands. PMID:26651267
Sarker, Rafiquel; Grønborg, Mads; Cha, Boyoung; Mohan, Sachin; Chen, Yueping; Pandey, Akhilesh; Litchfield, David
2008-01-01
Na+/H+ exchanger 3 (NHE3) is the epithelial brush-border isoform responsible for most intestinal and renal Na+ absorption. Its activity is both up- and down-regulated under normal physiological conditions, and it is inhibited in most diarrheal diseases. NHE3 is phosphorylated under basal conditions and Ser/Thr phosphatase inhibitors stimulate basal exchange activity; however, the kinases involved are unknown. To identify kinases that regulate NHE3 under basal conditions, NHE3 was immunoprecipitated; LC-MS/MS of trypsinized NHE3 identified a novel phosphorylation site at S719 of the C terminus, which was predicted to be a casein kinase 2 (CK2) phosphorylation site. This was confirmed by an in vitro kinase assay. The NHE3-S719A mutant but not NHE3-S719D had reduced NHE3 activity due to less plasma membrane NHE3. This was due to reduced exocytosis plus decreased plasma membrane delivery of newly synthesized NHE3. Also, NHE3 activity was inhibited by the CK2 inhibitor 2-dimethylamino-4,5,6,7-tetrabromo-1H-benzimidazole (DMAT) when wild-type NHE3 was expressed in fibroblasts and Caco-2 cells, but the NHE3-S719 mutant was fully resistant to DMAT. CK2 bound to the NHE3 C-terminal domain, between amino acids 590 and 667, a site different from the site it phosphorylates. CK2 binds to the NHE3 C terminus and stimulates basal NHE3 activity by phosphorylating a separate single site on the NHE3 C terminus (S719), which affects NHE3 trafficking. PMID:18614797
Espinosa-Asuar, Laura; Escalante, Ana Elena; Gasca-Pineda, Jaime; Blaz, Jazmín; Peña, Lorena; Eguiarte, Luis E; Souza, Valeria
2015-06-01
The aim of this study was to determine the contributions of stochastic vs. deterministic processes in the distribution of microbial diversity in four ponds (Pozas Azules) within a temporally stable aquatic system in the Cuatro Cienegas Basin, State of Coahuila, Mexico. A sampling strategy for sites that were geographically delimited and had low environmental variation was applied to avoid obscuring distance effects. Aquatic bacterial diversity was characterized following a culture-independent approach (16S sequencing of clone libraries). The results showed a correlation between bacterial beta diversity (1-Sorensen) and geographic distance (distance decay of similarity), which indicated the influence of stochastic processes related to dispersion in the assembly of the ponds' bacterial communities. Our findings are the first to show the influence of dispersal limitation in the prokaryotic diversity distribution of Cuatro Cienegas Basin. PMID:26496618
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram γ(h), is appropriate for each framework. Although C(h) and γ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and γ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly, for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
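For concreteness, the two sample measures compared in the case studies can be computed for a regularly spaced series as follows. These are the textbook estimators under the classical expected-value definitions (under which γ(h) ≈ C(0) − C(h) for second-order stationary data); the code is illustrative, not from the paper.

```python
def sample_variogram(z, h):
    """Sample semivariogram gamma(h): half the mean squared difference
    between values h steps apart in a regularly spaced 1D series."""
    diffs = [(z[i] - z[i + h]) ** 2 for i in range(len(z) - h)]
    return sum(diffs) / (2 * len(diffs))

def sample_covariance(z, h):
    """Classical sample covariance C(h) at lag h, using the global mean."""
    m = sum(z) / len(z)
    prods = [(z[i] - m) * (z[i + h] - m) for i in range(len(z) - h)]
    return sum(prods) / len(prods)
```

For the alternating series below the identity γ(1) = C(0) − C(1) holds exactly, since C(0) = 0.25 and C(1) = −0.25.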
Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy
NASA Astrophysics Data System (ADS)
Kanamori, H.
2014-12-01
Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity. For example, the forecasts of the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6 Costa Rica earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, because of the nondeterministic elements, uncertainties are difficult to quantify. In some subduction zones, nondeterministic behavior dominates because of complex plate-boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data processing and telemetry technology can contribute to effective hazard mitigation through a scenario-earthquake approach and real-time warning. A scale-independent relation between the seismic moment M0 and the source duration t can be used for the design of average scenario earthquakes. However, outliers caused by variations in stress drop, radiation efficiency, and the aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. Recent developments in real-time technology will help seismologists cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.
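As an illustration of using a moment-duration scaling for average scenario earthquakes, the sketch below converts moment magnitude to seismic moment via the standard relation M0 = 10^(1.5 Mw + 9.1) N·m and applies a relation of the form t ∝ M0^(1/3); the prefactor c is an illustrative assumption, not a value from the abstract.

```python
def moment_from_mw(mw):
    """Seismic moment M0 in N*m from moment magnitude: M0 = 10**(1.5*Mw + 9.1)."""
    return 10.0 ** (1.5 * mw + 9.1)

def scenario_duration(mw, c=4.0e-6):
    """Illustrative source duration t (s), assuming the scale-independent form
    t = c * M0**(1/3); the prefactor c is a hypothetical value chosen only to
    give magnitudes of order tens to hundreds of seconds."""
    return c * moment_from_mw(mw) ** (1.0 / 3.0)

for mw in (7.6, 8.8, 9.0):
    print(mw, round(scenario_duration(mw), 1))
```

The cube-root form reflects the scale independence the abstract mentions: a tenfold increase in moment lengthens the average scenario duration by about 10^(1/3), with outliers in stress drop or rupture geometry breaking the trend.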
Deterministic side-branching during thermal dendritic growth
NASA Astrophysics Data System (ADS)
Mullis, Andrew M.
2015-06-01
The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.
Statistical methods of parameter estimation for deterministically chaotic time series.
Pisarenko, V F; Sornette, D
2004-03-01
We discuss the applicability of some standard statistical methods (the least-squares method, the maximum likelihood method, and the method of statistical moments for parameter estimation) to a deterministically chaotic low-dimensional dynamical system (the logistic map) contaminated by observational noise. A "segmentation fitting" maximum likelihood (ML) method is suggested to estimate the structural parameter of the logistic map along with the initial value x(1), treated as an additional unknown parameter. The segmentation fitting method, called "piece-wise" ML, is similar in spirit to the previously proposed "multiple shooting" method, but is simpler and has smaller bias. Comparisons with previously proposed techniques on simulated numerical examples give favorable results (at least for the investigated combinations of sample size N and noise level). Moreover, unlike some suggested techniques, our method does not require a priori knowledge of the noise variance. We also clarify the nature of the inherent difficulties in the statistical analysis of deterministically chaotic time series and the status of previously proposed Bayesian approaches. We note the trade-off between the need to use a large number of data points in the ML analysis to decrease the bias (to guarantee consistency of the estimation) and the unstable nature of dynamical trajectories, with exponentially fast loss of memory of the initial condition. The method of statistical moments for estimating the parameter of the logistic map is also discussed. This method seems to be the only method whose consistency for deterministically chaotic time series has so far been proved theoretically (not only numerically). PMID:15089376
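For context, the simplest of the standard methods the paper examines, one-step least squares, can be sketched as below: with observational noise entering the regressor it is slightly biased, which is part of the motivation for the segmentation-fitting ML approach. All parameter values are illustrative.

```python
import numpy as np

def logistic_orbit(a, x1, n):
    """Iterate the logistic map x_{t+1} = a * x_t * (1 - x_t)."""
    xs = [x1]
    for _ in range(n - 1):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return np.array(xs)

# Simulated data: a chaotic orbit plus observational noise (illustrative values)
rng = np.random.default_rng(1)
a_true = 3.8
y = logistic_orbit(a_true, 0.3, 1000) + rng.normal(0.0, 0.01, 1000)

# One-step least squares: regress y_{t+1} on u_t = y_t * (1 - y_t);
# noise in the regressor u_t makes the estimate slightly biased
u = y[:-1] * (1.0 - y[:-1])
a_hat = float(np.sum(y[1:] * u) / np.sum(u * u))
print(round(a_hat, 3))
```

At this noise level the bias is tiny; it grows with the noise variance, and the ML methods of the paper are designed to handle that regime without knowing the variance in advance.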
Demonstration of deterministic and high fidelity squeezing of quantum information
Yoshikawa, Jun-ichi; Takei, Nobuyuki; Furusawa, Akira; Hayashi, Toshiki; Akiyama, Takayuki; Huck, Alexander; Andersen, Ulrik L.
2007-12-15
By employing a recent proposal [R. Filip, P. Marek, and U.L. Andersen, Phys. Rev. A 71, 042308 (2005)] we experimentally demonstrate a universal, deterministic, and high-fidelity squeezing transformation of an optical field. It relies only on linear optics, homodyne detection, feedforward, and an ancillary squeezed vacuum state, thus direct interaction between a strong pump and the quantum state is circumvented. We demonstrate three different squeezing levels for a coherent state input. This scheme is highly suitable for the fault-tolerant squeezing transformation in a continuous variable quantum computer.
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems, the functions involved in the optimization process are black boxes with unknown analytical representations that are hard to evaluate. In this paper, a global optimization problem is considered in which both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version; its convergence conditions are studied, and numerical experiments on eight hundred test functions are presented.
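The flavor of deterministic Lipschitz global optimization can be conveyed with the classical one-dimensional Piyavskii-Shubert scheme, a much simpler relative of the paper's diagonal 'Divide-the-Best' algorithm. Unlike the paper's setting, this sketch assumes an overestimate of the Lipschitz constant of f itself is known.

```python
import math

def piyavskii(f, a, b, k, iters=80):
    """Piyavskii-Shubert sketch: repeatedly evaluate f at the minimizer of the
    piecewise-linear Lipschitz lower envelope built from cones of slope k."""
    pts = sorted([(a, f(a)), (b, f(b))])
    for _ in range(iters):
        best_lb, best_x = math.inf, None
        for (x0, f0), (x1, f1) in zip(pts, pts[1:]):
            # Lower bound and its location from the two Lipschitz cones
            lb = 0.5 * (f0 + f1) - 0.5 * k * (x1 - x0)
            if lb < best_lb:
                best_lb = lb
                best_x = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * k)
        pts.append((best_x, f(best_x)))
        pts.sort()
    return min(pts, key=lambda p: p[1])

# Classical test function with global minimum near x = 5.146, f = -1.90
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
x_min, f_min = piyavskii(f, 2.7, 7.5, k=5.0)
print(round(x_min, 3), round(f_min, 3))
```

The diagonal methods of the paper extend this idea to many dimensions and, crucially, estimate the (gradient's) Lipschitz constant adaptively instead of assuming it.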
Deterministic Single-Phonon Source Triggered by a Single Photon
NASA Astrophysics Data System (ADS)
Söllner, Immo; Midolo, Leonardo; Lodahl, Peter
2016-06-01
We propose a scheme that enables the deterministic generation of single phonons at gigahertz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on chip in an optomechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new optomechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nanofabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
Deterministic versus stochastic aspects of superexponential population growth models
NASA Astrophysics Data System (ADS)
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
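The deterministic part of the dichotomy is easy to make concrete: for the power-law growth ODE dx/dt = r x^p, solutions grow algebraically for p < 1, exponentially for p = 1, and explode in finite time for p > 1. A sketch of the standard closed forms (elementary calculus, not the paper's stochastic constructions):

```python
def blowup_time(x0, r, p):
    """Finite-time explosion of dx/dt = r*x**p for p > 1:
    t* = x0**(1 - p) / (r * (p - 1))."""
    assert p > 1
    return x0 ** (1.0 - p) / (r * (p - 1.0))

def solution(t, x0, r, p):
    """Closed-form solution of dx/dt = r*x**p for p != 1:
    x(t) = (x0**(1-p) - r*(p-1)*t)**(1/(1-p));
    algebraic growth for p < 1, blow-up as t -> t* for p > 1."""
    return (x0 ** (1.0 - p) - r * (p - 1.0) * t) ** (1.0 / (1.0 - p))

print(blowup_time(1.0, 1.0, 2.0))    # dx/dt = x**2, x(0)=1 explodes at t = 1
print(solution(0.5, 1.0, 1.0, 2.0))  # halfway to blow-up: x = 2.0
```

The stochastic versions in the paper replace this single trajectory with self-similar processes whose explosion behavior can differ pathwise from the deterministic prediction.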
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation have culminated in a new neutronics code, CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementations of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Non-deterministic analysis of ocean environment loads
Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping
1995-12-31
Ocean environment loads consist of the wind force, the sea wave force, and so on. The sea wave force not only has randomness but also has fuzziness. Hence a non-deterministic description of the wave environment must be used when designing an offshore structure or evaluating the safety of offshore structure members in service. To account for the randomness of sea waves, a wind-speed single-parameter sea wave spectrum is proposed in this paper, and a new fuzzy grading statistical method is given for treating the fuzziness of the sea wave height H and period T. Finally, the principle and procedure for calculating the fuzzy random sea wave spectrum are presented.
Deterministic Superreplication of One-Parameter Unitary Transformations
NASA Astrophysics Data System (ADS)
Dür, W.; Sekatski, P.; Skotiniotis, M.
2015-03-01
We show that one can deterministically generate, out of N copies of an unknown unitary operation, up to N² almost perfect copies. The result holds for all operations generated by a Hamiltonian with an unknown interaction strength. This generalizes a similar result in the context of phase-covariant cloning where, however, superreplication comes at the price of an exponentially reduced probability of success. We also show that multiple copies of unitary operations can be emulated by operations acting on a much smaller space, e.g., a magnetic field acting on a single n-level system allows one to emulate the action of the field on n² qubits.
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four-channel, VPH-grating spectrograph fed by two 400-fiber slit assemblies, whose construction and commissioning have now been completed at the Anglo-Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles by which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Application of deterministic chaos analysis to investigating CFB hydrodynamics
Yin, C.; Luo, Z.; Li, X.; Fang, M.; Ni, M.; Cen, K.
1997-12-31
This paper presents an application of deterministic chaos analysis to the behavior of a gas-solid circulating fluidized bed (CFB). Two improvements to the traditional algorithm are put forward: a rule and a mathematical model are presented to determine the no-scale interval, and an improved formula, with a corresponding recurrence formula, is given to calculate distances. Calculation results for different operating conditions indicate that the correlation dimension and Kolmogorov entropy can be employed to characterize fluidization regimes and their transitions, and may be used to detect abnormal conditions in a CFB.
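The core quantity in such analyses, the correlation integral of Grassberger and Procaccia, can be sketched as follows. This is a textbook version, not the paper's improved algorithm, and the logistic-map and white-noise series are illustrative stand-ins for fluidized-bed pressure signals.

```python
import numpy as np

def correlation_integral(series, m, r):
    """Grassberger-Procaccia C(r): fraction of pairs of m-dimensional
    delay vectors lying within distance r (maximum norm)."""
    n = len(series) - m + 1
    emb = np.column_stack([series[i:i + n] for i in range(m)])
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < r)

# Illustrative signals: a chaotic logistic-map series vs. white noise
rng = np.random.default_rng(2)
x, logistic = 0.4, []
for _ in range(800):
    x = 4.0 * x * (1.0 - x)
    logistic.append(x)

# Slope of log C(r) vs log r over a scaling range estimates the
# correlation dimension D2 (low for deterministic chaos, high for noise)
results = {}
r1, r2 = 0.02, 0.1
for name, s in [("logistic", np.array(logistic)), ("noise", rng.random(800))]:
    c1, c2 = correlation_integral(s, 3, r1), correlation_integral(s, 3, r2)
    results[name] = float(np.log(c2 / c1) / np.log(r2 / r1))
print({k: round(v, 2) for k, v in results.items()})
```

Choosing the scaling range (the paper's "no-scale interval" rule) is exactly the delicate step this crude two-point slope glosses over.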
Deterministic Single-Phonon Source Triggered by a Single Photon.
Söllner, Immo; Midolo, Leonardo; Lodahl, Peter
2016-06-10
We propose a scheme that enables the deterministic generation of single phonons at gigahertz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on chip in an optomechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new optomechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nanofabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus. PMID:27341236
Deterministic Ants in Labyrinth — Information Gained by Map Sharing
NASA Astrophysics Data System (ADS)
Malinowski, Janusz; Kantelhardt, Jan W.; Kułakowski, Krzysztof
2013-06-01
A few ant robots are placed in a labyrinth formed by a square lattice with a small number of corridors removed. The ants move according to a deterministic algorithm designed to explore all corridors. Each ant remembers the shape of the corridors it has visited. Once two ants meet, they share the information acquired so far. We evaluate how the time for an ant to acquire complete information depends on the number of ants, and how the corridor length known to an ant depends on time. Numerical results are presented in the form of scaling relations.
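A toy version of the setup can be sketched as follows: two ants perform deterministic depth-first exploration of a small grid labyrinth with a couple of corridors removed, learn the corridors visible from each visited cell, and pool their maps when they meet. All details (lattice size, removed corridors, the tie-breaking rule) are illustrative assumptions, not the paper's algorithm.

```python
from itertools import count

W = 4  # hypothetical 4x4 labyrinth; two corridors removed, grid stays connected
removed = {frozenset({(1, 0), (1, 1)}), frozenset({(2, 2), (3, 2)})}

def neighbors(c):
    x, y = c
    for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
        if 0 <= n[0] < W and 0 <= n[1] < W and frozenset({c, n}) not in removed:
            yield n

all_corridors = {frozenset({c, n}) for x in range(W) for y in range(W)
                 for c in [(x, y)] for n in neighbors(c)}

class Ant:
    def __init__(self, start):
        self.pos, self.stack, self.visited = start, [start], {start}
        self.known = set()
        self.look()
    def look(self):
        # An ant standing in a cell sees all corridors leaving it
        self.known |= {frozenset({self.pos, n}) for n in neighbors(self.pos)}
    def step(self):
        # Deterministic rule: enter the lexicographically smallest unvisited
        # cell, otherwise backtrack along the DFS stack
        fresh = sorted(n for n in neighbors(self.pos) if n not in self.visited)
        if fresh:
            self.stack.append(fresh[0])
        elif len(self.stack) > 1:
            self.stack.pop()
        else:
            return  # exploration finished
        self.pos = self.stack[-1]
        self.visited.add(self.pos)
        self.look()

ants = [Ant((0, 0)), Ant((3, 3))]
for t in count(1):
    for a in ants:
        a.step()
    if ants[0].pos == ants[1].pos:  # meeting: the ants share their maps
        ants[0].known = ants[1].known = ants[0].known | ants[1].known
    if any(a.known == all_corridors for a in ants):
        break
print("complete map after", t, "steps")
```

Scaling studies like the paper's would repeat this over many labyrinths and ant counts and fit how the completion time decreases as the number of ants grows.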
Deterministic controlled remote state preparation using partially entangled quantum channel
NASA Astrophysics Data System (ADS)
Chen, Na; Quan, Dong Xiao; Yang, Hong; Pei, Chang Xing
2016-04-01
In this paper, we propose a novel scheme for deterministic controlled remote state preparation (CRSP) of arbitrary two-qubit states. A suitably chosen partially entangled state is used as the quantum channel. With proper projective measurements carried out by the sender and the controller, the receiver can reconstruct the target state by means of an appropriate unitary operation. Unit success probability can be achieved for arbitrary two-qubit states. Unlike some previous CRSP schemes utilizing partially entangled channels, no auxiliary qubit is required in our scheme. We also show that the success probability is independent of the parameters of the partially entangled quantum channel.
Coupled Deterministic-Monte Carlo Transport for Radiation Portal Modeling
Smith, Leon E.; Miller, Erin A.; Wittman, Richard S.; Shaver, Mark W.
2008-01-14
Radiation portal monitors are being deployed, both domestically and internationally, to detect illicit movement of radiological materials concealed in cargo. Evaluation of the current and next generations of these radiation portal monitor (RPM) technologies is an ongoing process. 'Injection studies', which computationally superimpose the signature from threat materials onto empirical vehicle profiles collected at ports of entry, are often a component of the RPM evaluation process. However, measurement of realistic threat devices can be both expensive and time-consuming. Radiation transport methods that can predict the response of radiation detection sensors with high fidelity, and do so rapidly enough to allow the modeling of many different threat-source configurations, are a cornerstone of reliable evaluation results. Monte Carlo methods have been the primary tool of the detection community for these kinds of calculations, in no small part because they are particularly effective for calculating pulse-height spectra in gamma-ray spectrometers. However, computational times for problems with a high degree of scattering and absorption can be extremely long. Deterministic codes that discretize the transport in space, angle, and energy offer potential advantages in computational efficiency for these same kinds of problems, but the pulse-height calculations needed to predict gamma-ray spectrometer response are not readily accessible. These complementary strengths for radiation detection scenarios suggest that coupling Monte Carlo and deterministic methods could be beneficial in terms of computational efficiency. Pacific Northwest National Laboratory and its collaborators are developing a RAdiation Detection Scenario Analysis Toolbox (RADSAT) founded on this coupling approach. The deterministic core of RADSAT is Attila, a three-dimensional, tetrahedral-mesh code originally developed by Los Alamos National Laboratory, and since expanded and refined by Transpire, Inc. [1
Baldocchi, Dennis
2015-03-24
Eddy covariance fluxes of carbon dioxide, water vapor, and heat were measured continuously over an oak savanna and an annual grassland in California over a four-year period. These systems serve as representative sites for biomes in Mediterranean climates and experience considerable seasonal and inter-annual variability in temperature and precipitation. The sites hence serve as natural laboratories for how whole ecosystems will respond to warmer and drier conditions. The savanna proved to be a moderate carbon sink, taking up about 150 g C m^-2 y^-1, compared to the annual grassland, which tended to be carbon neutral and was often a source during drier years. But this carbon sink by the savanna came at a cost: the ecosystem used about 100 mm more water per year than the grassland, and because the savanna was darker and aerodynamically rougher, its air temperature was about 0.5 °C warmer. In addition to our flux measurements, we collected vast amounts of ancillary data to interpret the site and fluxes, making this a key site for model validation and parameterization. Datasets consist of terrestrial and airborne lidar for determining canopy structure, ground-penetrating radar data on root distribution, phenology cameras monitoring leaf area index and its seasonality, predawn water potential, soil moisture, stem diameter, and the physiological capacity of photosynthesis.
Fox, T.H. III; Richey, T. Jr.; Winders, G.R.
1962-10-23
A heat exchanger is designed for use in the transfer of heat between a radioactive fluid and a non-radioactive fluid. The exchanger employs a removable section containing the non-hazardous fluid extending into the section designed to contain the radioactive fluid. The removable section is provided with a construction to cancel out thermal stresses. The stationary section is pressurized to prevent leakage of the radioactive fluid and to maintain a safe, desirable level for this fluid. (AEC)
Vest, Joshua R; Abramson, Erika
2015-01-01
Health information exchange (HIE) systems facilitate access to patient information for a variety of health care organizations, end users, and clinical and organizational goals. While a complex intervention, organizations' usage of HIE is often conceptualized and measured narrowly. We sought to provide greater specificity to the concept of HIE as an intervention by formulating a typology of organizational HIE usage. We interviewed representatives of a regional health information organization and of health care organizations actively using HIE information to change patient utilization and costs. The resultant typology includes three dimensions: user role, usage initiation, and patient set. This approach to categorizing how health care organizations actually apply HIE information to clinical and business tasks provides greater clarity about HIE as an intervention and helps elucidate the conceptual linkage between HIE and organizational and patient outcomes. PMID:26958266
Groskinsky Link, B. L.; Cary, L.E.
1988-01-01
Stations were selected to monitor water discharge and water quality of streams in eastern Montana. This report describes the stations and indicates the availability of hydrologic data through 1985. Included are stations operated by organizations that do not belong to the National Water Data Exchange (NAWDEX) program operated by the U.S. Geological Survey. Each station description contains a narrative of the station's history, including location, drainage area, elevation, operator, period of record, type of equipment and instruments used at the station, and data availability. The data collected at each station have been identified according to type: water discharge, chemical quality, and suspended sediment. Descriptions are provided for 113 stations. These data have potential uses in characterizing small hydrologic basins, as well as other applications. A map of eastern Montana shows the locations of the selected stations. (USGS)
Ion exchange technology assessment report
Duhn, E.F.
1992-01-01
In the execution of its charter, the SRS Ion Exchange Technology Assessment Team has determined that ion exchange (IX) technology has evolved to the point where it should now be considered as a viable alternative to the SRS reference ITP/LW/PH process. The ion exchange media available today offer the ability to design ion exchange processing systems tailored to the unique physical and chemical properties of SRS soluble HLW's. The technical assessment of IX technology and its applicability to the processing of SRS soluble HLW has demonstrated that IX is unquestionably a viable technology. A task team was chartered to evaluate the technology of ion exchange and its potential for replacing the present In-Tank Precipitation and proposed Late Wash processes to remove Cs, Sr, and Pu from soluble salt solutions at the Savannah River Site. This report documents the ion exchange technology assessment and conclusions of the task team.
Ion exchange technology assessment report
Duhn, E.F.
1992-12-31
In the execution of its charter, the SRS Ion Exchange Technology Assessment Team has determined that ion exchange (IX) technology has evolved to the point where it should now be considered as a viable alternative to the SRS reference ITP/LW/PH process. The ion exchange media available today offer the ability to design ion exchange processing systems tailored to the unique physical and chemical properties of SRS soluble HLW's. The technical assessment of IX technology and its applicability to the processing of SRS soluble HLW has demonstrated that IX is unquestionably a viable technology. A task team was chartered to evaluate the technology of ion exchange and its potential for replacing the present In-Tank Precipitation and proposed Late Wash processes to remove Cs, Sr, and Pu from soluble salt solutions at the Savannah River Site. This report documents the ion exchange technology assessment and conclusions of the task team.
Hollinger, David Y.; Davidson, Eric A.; Richardson, Andrew D.; Dail, D. B.; Scott, N.
2013-03-25
Summary of research carried out under Interagency Agreement DE-AI02-07ER64355 with the USDA Forest Service at the Howland Forest AmeriFlux site in central Maine. Includes a list of publications resulting in part or whole from this support.
Central-site monitors do not account for factors such as outdoor-to-indoor transport and human activity patterns that influence personal exposures to ambient fine-particulate matter (PM_{2.5}). We describe and compare different ambient PM_{2.5} exposure estimation...
Predictability of normal heart rhythms and deterministic chaos
NASA Astrophysics Data System (ADS)
Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.
1993-04-01
The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded from 29 subjects falling into four groups: a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior together with the short-term predictability suggests that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely the 'first-step predictability' and the 'predictability decay rate', neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
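The Sugihara-May style of short-term prediction can be sketched with a simple nearest-neighbour analogue forecaster applied to differenced series, comparing a deterministic map against pure noise. This is a generic illustration with made-up signals, not the authors' implementation or their ECG data.

```python
import numpy as np

def nn_forecast(train, histories, m=3, k=4):
    """Predict the value following each m-history by averaging the successors
    of its k nearest neighbours in the training series (analogue forecasting,
    in the spirit of Sugihara and May)."""
    n = len(train) - m
    lib = np.array([train[i:i + m] for i in range(n)])
    nxt = np.array([train[i + m] for i in range(n)])
    preds = []
    for h in histories:
        idx = np.argsort(np.linalg.norm(lib - h, axis=1))[:k]
        preds.append(nxt[idx].mean())
    return np.array(preds)

rng = np.random.default_rng(3)
x, chaos = 0.234, []
for _ in range(600):
    x = 4.0 * x * (1.0 - x)
    chaos.append(x)

# Work with first differences, as the study does with R-R intervals
m, results = 3, {}
for name, s in [("chaos", np.diff(chaos)), ("noise", np.diff(rng.random(600)))]:
    train, test = s[:400], s[400:]
    hists = np.array([test[i:i + m] for i in range(len(test) - m)])
    pred = nn_forecast(train, hists, m=m)
    results[name] = float(np.corrcoef(pred, test[m:])[0, 1])
print({k: round(v, 2) for k, v in results.items()})
```

Note that differenced noise retains some linear (MA-type) predictability, which is why the study also compares against an autoregressive linear predictor and a Gaussian null hypothesis rather than against raw noise alone.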
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path toward the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor-quantum-dot qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S sources are attractive for the formation of qubit test structures due to the relatively low emittance of their ion beams and due to the potential energy associated with the ions' charge state, which can aid single-ion impact detection. Following ion implantation, dopant-specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal, but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Deterministic modelling and stochastic simulation of biochemical pathways using MATLAB.
Ullah, M; Schmidt, H; Cho, K H; Wolkenhauer, O
2006-03-01
The analysis of complex biochemical networks is conducted in two popular conceptual frameworks for modelling. The deterministic approach requires the solution of ordinary differential equations (ODEs, reaction rate equations) with concentrations as continuous state variables. The stochastic approach involves the simulation of differential-difference equations (chemical master equations, CMEs) with probabilities as variables, generating counts of molecules of chemical species as realisations of random variables drawn from the probability distribution described by the CMEs. Although numerous tools are available, many of them free, the modelling and simulation environment MATLAB is widely used in the physical and engineering sciences. We describe a collection of MATLAB functions to construct and solve ODEs for deterministic simulation and to implement realisations of CMEs for stochastic simulation using advanced MATLAB coding (Release 14). The program was successfully applied to pathway models from the literature for both cases. The results were compared to implementations using alternative tools for dynamic modelling and simulation of biochemical networks. The aim is to provide a concise set of MATLAB functions that encourage experimentation with systems biology models. All the script files are available from www.sbi.uni-rostock.de/publications_matlab-paper.html. PMID:16986253
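The same deterministic-vs-stochastic pairing can be sketched outside MATLAB. The Python fragment below compares the reaction-rate ODE for a first-order decay A → B with exact Gillespie (SSA) realisations of the corresponding CME; rate constants and molecule counts are illustrative.

```python
import numpy as np

# Irreversible decay A -> B with rate constant k (illustrative values)
k, a0, t_end = 0.5, 100, 10.0

def ode_solution(t):
    """Deterministic reaction-rate equation da/dt = -k*a, solved in closed form."""
    return a0 * np.exp(-k * t)

def gillespie(rng):
    """One exact SSA realisation of the CME: molecule count of A at t_end."""
    t, a = 0.0, a0
    while a > 0:
        t += rng.exponential(1.0 / (k * a))  # waiting time to the next reaction
        if t > t_end:
            break
        a -= 1
    return a

rng = np.random.default_rng(4)
final = [gillespie(rng) for _ in range(200)]
print(round(float(ode_solution(t_end)), 2), round(float(np.mean(final)), 2))
```

For a linear reaction the CME mean coincides with the ODE solution, so the ensemble average of the SSA runs should track the deterministic curve; for nonlinear kinetics the two frameworks can genuinely diverge, which is the paper's motivation for supporting both.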
Strongly Deterministic Population Dynamics in Closed Microbial Communities
NASA Astrophysics Data System (ADS)
Frentz, Zak; Kuehn, Seppe; Leibler, Stanislas
2015-10-01
Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with the stable structure of correlations in response to external perturbations, points towards the possibility of simple macroscopic laws governing microbial systems despite the numerous stochastic events present at microscopic levels.
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
Goluoglu, S.; Bentley, C.; Demeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H. L.
1998-01-14
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
A deterministic method for transient, three-dimensional neutron transport
Goluoglu, S.; Bentley, C.; DeMeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.
1998-05-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multi-dimensional neutronic systems.
Appropriate time scales for nonlinear analyses of deterministic jump systems
NASA Astrophysics Data System (ADS)
Suzuki, Tomoya
2011-06-01
In the real world, there are many phenomena that are derived from deterministic systems but which fluctuate with nonuniform time intervals. This paper discusses the appropriate time scales that can be applied to such systems to analyze their properties. The financial markets are an example of such systems, wherein price movements fluctuate with nonuniform time intervals. However, it is common to apply uniform time scales such as 1-min data and 1-h data to study price movements. This paper examines the validity of such time scales by using surrogate data tests to ascertain whether the deterministic properties of the original system can be identified from uniformly sampled data. The results show that uniform time samplings are often inappropriate for nonlinear analyses. However, for other systems such as neural spikes and Internet traffic packets, which produce similar outputs, uniform time samplings are quite effective in extracting the system properties. Nevertheless, uniform samplings often generate overlapping data, which can cause false rejections of surrogate data tests.
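The surrogate data test mentioned above can be sketched in a few lines (our minimal construction, not the paper's exact procedure): Fourier-transform surrogates preserve a series' power spectrum (its linear structure) while randomising phases, which destroys any deterministic nonlinearity; a discriminating statistic falling outside the surrogate distribution then rejects the linear-noise hypothesis.

```python
import numpy as np

# Phase-randomized (FT) surrogate: same power spectrum, random phases.
def ft_surrogate(x, rng):
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spectrum))
    phases[0] = 0.0    # keep the DC bin real
    phases[-1] = 0.0   # keep the Nyquist bin real (even-length series)
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=len(x))

# Time-reversal asymmetry, a simple statistic sensitive to nonlinearity.
def trev(x):
    return float(np.mean((x[1:] - x[:-1]) ** 3))

# Uniformly sampled deterministic series from the chaotic logistic map.
x = np.empty(2048)
x[0] = 0.3
for i in range(1, len(x)):
    x[i] = 3.9 * x[i - 1] * (1.0 - x[i - 1])

rng = np.random.default_rng(0)
surr_stats = [trev(ft_surrogate(x, rng)) for _ in range(99)]
# If trev(x) falls outside the surrogate range, linearity is rejected.
print(trev(x) < min(surr_stats) or trev(x) > max(surr_stats))
```

With 99 surrogates, a two-sided rank test of this kind gives a nominal significance level of 2%.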
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
Integrability of a deterministic cellular automaton driven by stochastic boundaries
NASA Astrophysics Data System (ADS)
Prosen, Tomaž; Mejía-Monasterio, Carlos
2016-05-01
We propose an interacting many-body space-time-discrete Markov chain model, which is composed of an integrable deterministic and reversible cellular automaton (rule 54 of Bobenko et al 1993 Commun. Math. Phys. 158 127) on a finite one-dimensional lattice (Z_2)^{×n}, and local stochastic Markov chains at the two lattice boundaries which provide chemical baths for absorbing or emitting the solitons. Ergodicity and mixing of this many-body Markov chain is proven for generic values of bath parameters, implying the existence of a unique nonequilibrium steady state. The latter is constructed exactly and explicitly in terms of a particularly simple form of matrix product ansatz which is termed a patch ansatz. This gives rise to an explicit computation of observables and k-point correlations in the steady state as well as the construction of a nontrivial set of local conservation laws. The feasibility of an exact solution for the full spectrum and eigenvectors (decay modes) of the Markov matrix is suggested as well. We conjecture that our ideas can pave the road towards a theory of integrability of boundary driven classical deterministic lattice systems.
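To make "deterministic and reversible cellular automaton" concrete, here is a hedged sketch using the generic second-order construction s(t+1) = f(neighbourhood at t) XOR s(t-1) with elementary rule 54; this illustrates reversibility of such bulk dynamics but is not necessarily the exact convention of Bobenko et al used in the paper.

```python
import random

# Wolfram-numbered elementary rule applied to 3-cell neighbourhoods.
RULE = 54

def step(prev, curr):
    """One second-order update on a ring: next = f(curr) XOR prev."""
    n = len(curr)
    nxt = []
    for i in range(n):
        neigh = (curr[(i - 1) % n] << 2) | (curr[i] << 1) | curr[(i + 1) % n]
        f = (RULE >> neigh) & 1
        nxt.append(f ^ prev[i])  # XOR with the past state makes the map invertible
    return curr, nxt

random.seed(0)
n = 20
a = [random.randint(0, 1) for _ in range(n)]
b = [random.randint(0, 1) for _ in range(n)]

p, c = a, b
for _ in range(50):
    p, c = step(p, c)          # evolve 50 steps forward

q, d = c, p                    # swap the pair to run the same rule backwards
for _ in range(50):
    q, d = step(q, d)
print(d == a and q == b)       # -> True: the initial condition is recovered
```

The backward run works because prev = f(curr) XOR next, i.e. the same update with the time-ordered pair swapped, which is exactly what deterministic reversibility means here.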
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load and the loading rate are the dominant uncertainties, in that order.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
NASA Astrophysics Data System (ADS)
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. So the distribution of translocation times of a given monomer is controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of initial polymer conformation. We refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation times of the m-th monomer, δt_m ~ m^1.5, is stronger than the thermal broadening, δt_m ~ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
Deterministic nature of the underlying dynamics of surface wind fluctuations
NASA Astrophysics Data System (ADS)
Sreelekshmi, R. C.; Asokan, K.; Satheesh Kumar, K.
2012-10-01
Modelling the fluctuations of the Earth's surface wind has a significant role in understanding the dynamics of the atmosphere, besides its impact on various fields ranging from agriculture to structural engineering. Most of the studies on the modelling and prediction of wind speed and power reported in the literature are based on statistical methods or the probabilistic distribution of the wind speed data. In this paper we investigate the suitability of a deterministic model to represent the wind speed fluctuations by employing tools of nonlinear dynamics. We have carried out a detailed nonlinear time series analysis of the daily mean wind speed data measured at Thiruvananthapuram (8.483° N, 76.950° E) from 2000 to 2010. The results of the analysis strongly suggest that the underlying dynamics is deterministic, low-dimensional and chaotic, suggesting the possibility of accurate short-term prediction. As most of the chaotic systems are confined to laboratories, this is another example of a naturally occurring time series showing chaotic behaviour.
Deterministic photon-emitter coupling in chiral photonic circuits
NASA Astrophysics Data System (ADS)
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light.
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Deterministic Chaos in the X-ray Sources
NASA Astrophysics Data System (ADS)
Grzedzielski, M.; Sukova, P.; Janiuk, A.
2015-12-01
Hardly any of the observed black hole accretion disks in X-ray binaries and active galaxies shows constant flux. When the local stochastic variations of the disk occur at specific regions where a resonant behaviour takes place, quasi-periodic oscillations (QPOs) appear. If the global structure of the flow and its non-linear hydrodynamics affects the fluctuations, the variability is chaotic in the sense of deterministic chaos. Our aim is to resolve the problem of the stochastic versus deterministic nature of black hole binary variability. We use both observational and analytic methods. We use recurrence analysis, studying the occurrence of long diagonal lines in the recurrence plots of the observed data series and comparing them to surrogate series. We analyze here the data of two X-ray binaries - XTE J1550-564 and GX 339-4 - observed by the Rossi X-ray Timing Explorer. In these sources, non-linear variability is expected because of the global conditions (such as the mean accretion rate) leading to the possible instability of an accretion disk. The thermal-viscous instability and fluctuations around the fixed-point solution occur at high accretion rate, when the radiation pressure gives the dominant contribution to the stress tensor.
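The recurrence-analysis idea, diagonal lines in the recurrence plot signalling determinism, can be sketched on toy data (our minimal version, not the authors' RXTE pipeline): delay-embed a scalar series, threshold pairwise distances, and compare the longest diagonal line for a deterministic series versus its shuffled counterpart.

```python
import numpy as np

# Recurrence matrix: R[i, j] is True when embedded points i and j are close.
def recurrence_matrix(x, dim=3, delay=1, eps=0.2):
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dist < eps

# Longest diagonal line off the main diagonal; long lines indicate that
# nearby trajectory segments evolve in parallel, i.e. determinism.
def longest_diagonal(R):
    best = 0
    for k in range(1, len(R)):
        run = 0
        for v in np.diag(R, k):
            run = run + 1 if v else 0
            best = max(best, run)
    return best

t = np.linspace(0.0, 20.0 * np.pi, 400)
x = np.sin(t)                                   # deterministic series
y = np.random.default_rng(0).permutation(x)     # shuffled, noise-like series
print(longest_diagonal(recurrence_matrix(x)) >
      longest_diagonal(recurrence_matrix(y)))   # -> True
```

The shuffled series has the same value distribution as the original, so the difference in diagonal-line structure isolates the temporal (deterministic) organisation, which is the logic behind comparing observed data with surrogates.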
Made-to-order nanocarbons through deterministic plasma nanotechnology
NASA Astrophysics Data System (ADS)
Ren, Yuping; Xu, Shuyan; Rider, Amanda Evelyn; Ostrikov, Kostya (Ken)
2011-02-01
Through a combinatorial approach involving experimental measurement and plasma modelling, it is shown that a high degree of control over diamond-like nanocarbon film sp3/sp2 ratio (and hence film properties) may be exercised, starting at the level of electrons (through modification of the plasma electron energy distribution function). Hydrogenated amorphous carbon nanoparticle films with high percentages of diamond-like bonds are grown using a middle-frequency (2 MHz) inductively coupled Ar + CH4 plasma. The sp3 fractions measured by X-ray photoelectron spectroscopy (XPS) and Raman spectroscopy in the thin films are explained qualitatively using sp3/sp2 ratios 1) derived from calculated sp3 and sp2 hybridized precursor species densities in a global plasma discharge model and 2) measured experimentally. It is shown that at high discharge power and lower CH4 concentrations, the sp3/sp2 fraction is higher. Our results suggest that a combination of predictive modeling and experimental studies is instrumental to achieve deterministically grown made-to-order diamond-like nanocarbons suitable for a variety of applications spanning from nano-magnetic resonance imaging to spin-flip quantum information devices. This deterministic approach can be extended to graphene, carbon nanotips, nanodiamond and other nanocarbon materials for a variety of applications.
Deterministic photon-emitter coupling in chiral photonic circuits.
Söllner, Immo; Mahmoodian, Sahand; Hansen, Sofie Lindskov; Midolo, Leonardo; Javadi, Alisa; Kiršanskė, Gabija; Pregnolato, Tommaso; El-Ella, Haitham; Lee, Eun Hye; Song, Jin Dong; Stobbe, Søren; Lodahl, Peter
2015-09-01
Engineering photon emission and scattering is central to modern photonics applications ranging from light harvesting to quantum-information processing. To this end, nanophotonic waveguides are well suited as they confine photons to a one-dimensional geometry and thereby increase the light-matter interaction. In a regular waveguide, a quantum emitter interacts equally with photons in either of the two propagation directions. This symmetry is violated in nanophotonic structures in which non-transversal local electric-field components imply that photon emission and scattering may become directional. Here we show that the helicity of the optical transition of a quantum emitter determines the direction of single-photon emission in a specially engineered photonic-crystal waveguide. We observe single-photon emission into the waveguide with a directionality that exceeds 90% under conditions in which practically all the emitted photons are coupled to the waveguide. The chiral light-matter interaction enables deterministic and highly directional photon emission for experimentally achievable on-chip non-reciprocal photonic elements. These may serve as key building blocks for single-photon optical diodes, transistors and deterministic quantum gates. Furthermore, chiral photonic circuits allow the dissipative preparation of entangled states of multiple emitters for experimentally achievable parameters, may lead to novel topological photon states and could be applied for directional steering of light. PMID:26214251
NASA Astrophysics Data System (ADS)
Richman, Barbara T.
Dwindling scientific and technical exchange between the United States and the Soviet Union and prospects for enhancing such exchanges were discussed at an August 2 hearing by the Foreign Affairs Committee of the U.S. House of Representatives. The committee also heard overviews on the United States' approach to international exchange of science and technology. The hearing was the first in a series on current and future international science and technology programs.Four of eight science and technology agreements with the USSR that have expired in the last 15 months, including one on space, have not been renewed. The remaining four agreements have been extended into 1987 and 1988. Two others, including one on oceanography, are scheduled to run out in 1984.
Denysenko, Dmytro; Jelic, Jelena; Reuter, Karsten; Volkmer, Dirk
2015-05-26
The isomorphous partial substitution of Zn(2+) ions in the secondary building unit (SBU) of MFU-4l leads to frameworks with the general formula [M(x)Zn(5-x)Cl4(BTDD)3], in which x≈2, M = Mn(II), Fe(II), Co(II), Ni(II), or Cu(II), and BTDD = bis(1,2,3-triazolato-[4,5-b],[4',5'-i])dibenzo-[1,4]-dioxin. Subsequent exchange of chloride ligands by nitrite, nitrate, triflate, azide, isocyanate, formate, acetate, or fluoride leads to a variety of MFU-4l derivatives, which have been characterized by using XRPD, EDX, IR, UV/Vis-NIR, TGA, and gas sorption measurements. Several MFU-4l derivatives show high catalytic activity in a liquid-phase oxidation of ethylbenzene to acetophenone with air under mild conditions, among which Co- and Cu derivatives with chloride side-ligands are the most active catalysts. Upon thermal treatment, several side-ligands can be transformed selectively into reactive intermediates without destroying the framework. Thus, at 300 °C, Co(II)-azide units in the SBU of Co-MFU-4l are converted into Co(II)-isocyanate under continuous CO gas flow, involving the formation of a nitrene intermediate. The reaction of Cu(II)-fluoride units with H2 at 240 °C leads to Cu(I) and proceeds through the heterolytic cleavage of the H2 molecule. PMID:25882594
Drury, C.R.
1988-02-02
A heat exchanger having primary and secondary conduits in heat-exchanging relationship is described comprising: at least one serpentine tube having parallel sections connected by reverse bends, the serpentine tube constituting one of the conduits; a group of open-ended tubes disposed adjacent to the parallel sections, the open-ended tubes constituting the other of the conduits, and forming a continuous mass of contacting tubes extending between and surrounding the serpentine tube sections; and means securing the mass of tubes together to form a predetermined cross-section of the entirety of the mass of open-ended tubes and tube sections.
Oyeyemi, Olayinka A; Sours, Kevin M; Lee, Thomas; Kohen, Amnon; Resing, Katheryn A; Ahn, Natalie G; Klinman, Judith P
2011-09-27
The technique of hydrogen-deuterium exchange coupled to mass spectrometry (HDX-MS) has been applied to a mesophilic (E. coli) dihydrofolate reductase under conditions that allow direct comparison to a thermophilic (B. stearothermophilus) ortholog, Ec-DHFR and Bs-DHFR, respectively. The analysis of hydrogen-deuterium exchange patterns within proteolytically derived peptides allows spatial resolution, while requiring a series of controls to compare orthologous proteins with only ca. 40% sequence identity. These controls include the determination of primary structure effects on intrinsic rate constants for HDX as well as the use of existing 3-dimensional structures to evaluate the distance of each backbone amide hydrogen to the protein surface. Only a single peptide from the Ec-DHFR is found to be substantially more flexible than the Bs-DHFR at 25 °C in a region located within the protein interior at the intersection of the cofactor and substrate-binding sites. The surrounding regions of the enzyme are either unchanged or more flexible in the thermophilic DHFR from B. stearothermophilus. The region with increased flexibility in Ec-DHFR corresponds to one of two regions previously proposed to control the enthalpic barrier for hydride transfer in Bs-DHFR [Oyeyemi et al. (2010) Proc. Natl. Acad. Sci. U.S.A. 107, 10074]. PMID:21859100
Daman, Ernest L.; McCallister, Robert A.
1979-01-01
A heat exchanger is provided having first and second fluid chambers for passing primary and secondary fluids. The chambers are spaced apart and have heat pipes extending from inside one chamber to inside the other chamber. A third chamber is provided for passing a purge fluid, and the heat pipe portion between the first and second chambers lies within the third chamber.
Quantum dissonance and deterministic quantum computation with a single qubit
NASA Astrophysics Data System (ADS)
Ali, Mazhar
2014-11-01
Mixed state quantum computation can perform certain tasks which are believed to be efficiently intractable on a classical computer. For a specific model of mixed state quantum computation, namely, deterministic quantum computation with a single qubit (DQC1), recent investigations suggest that quantum correlations other than entanglement might be responsible for the power of the DQC1 model. However, strictly speaking, the role of entanglement in this model of computation was not entirely clear. We provide conclusive evidence that there are instances where quantum entanglement is not present in any part of this model, yet we still have an advantage over classical computation. This establishes the fact that quantum dissonance (a kind of quantum correlation) present in fully separable (FS) states provides power to the DQC1 model.
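The DQC1 model itself is simple to simulate numerically: one pure control qubit, an n-qubit maximally mixed register, and a controlled-U, after which the control qubit's coherence encodes the normalised trace Tr(U)/2^n. The following hedged density-matrix sketch (ours, for illustration) checks that identity for a random 3-qubit unitary.

```python
import numpy as np

def dqc1_trace(U):
    """Simulate the DQC1 circuit and read Tr(U)/d off the control qubit."""
    d = U.shape[0]
    plus = 0.5 * np.ones((2, 2))                      # |+><+| control state
    rho = np.kron(plus, np.eye(d) / d)                # control (x) maximally mixed
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])            # controlled-U
    rho = CU @ rho @ CU.conj().T
    # Partial trace over the target register leaves the 2x2 control state.
    rho_c = np.trace(rho.reshape(2, d, 2, d), axis1=1, axis2=3)
    return 2.0 * rho_c[1, 0]                          # equals Tr(U) / d

# Haar-ish random unitary from the QR decomposition of a Gaussian matrix.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
Q, R = np.linalg.qr(A)
U = Q * (np.diag(R) / np.abs(np.diag(R)))

print(np.allclose(dqc1_trace(U), np.trace(U) / 8))    # -> True
```

In the laboratory the real and imaginary parts of this quantity would come from measuring the control qubit's σx and σy expectations, which is why a single clean qubit suffices.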
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, which causes Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number is analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment and education to reduce person-to-person contact.
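A standard way to carry out such a sensitivity analysis is via normalized forward sensitivity indices, S_p = (p / R0) * dR0/dp. The sketch below (a toy illustration with a hypothetical R0 = beta / (gamma + mu), not the authors' five-compartment Lassa model) computes these indices by central finite differences.

```python
# Hypothetical basic reproduction number for a simple transmission model:
# beta = transmission rate, gamma = recovery rate, mu = natural death rate.
def r0(params):
    return params["beta"] / (params["gamma"] + params["mu"])

# Normalized forward sensitivity index S_p = (p / R0) * dR0/dp,
# with the derivative estimated by a central finite difference.
def sensitivity_index(params, name, h=1e-6):
    up = dict(params); up[name] *= 1.0 + h
    dn = dict(params); dn[name] *= 1.0 - h
    dr0_dp = (r0(up) - r0(dn)) / (2.0 * h * params[name])
    return params[name] / r0(params) * dr0_dp

p = {"beta": 0.3, "gamma": 0.1, "mu": 0.02}
for name in p:
    print(name, round(sensitivity_index(p, name), 3))
# beta 1.0   (R0 scales linearly with the transmission rate)
# gamma -0.833, mu -0.167  (faster removal lowers R0)
```

An index of +1 for beta means a 10% rise in transmission raises R0 by 10%; the signed magnitudes are what rank parameters by importance, as in the abstract's conclusion.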
Deterministic Mutation Rate Variation in the Human Genome
Smith, Nick G.C.; Webster, Matthew T.; Ellegren, Hans
2002-01-01
Several studies of substitution rate variation have indicated that the local mutation rate varies over the mammalian genome. In the present study, we show significant variation in substitution rates within the noncoding part of the human genome using 4.7 Mb of human-chimpanzee pairwise comparisons. Moreover, we find a significant positive covariation of lineage-specific chimpanzee and human local substitution rates, and very similar mean substitution rates down the two lineages. The substitution rate variation is probably not caused by selection or biased gene conversion, and so we conclude that mutation rates vary deterministically across the noncoding nonrepetitive regions of the human genome. We also show that noncoding substitution rates are significantly affected by G+C base composition, partly because the base composition is not at equilibrium. PMID:12213772
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
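The peak-selection step described above can be sketched as follows (frame length and names are our assumptions, not the paper's SMS parameters): split the signal into non-overlapping frames and locate each frame's most prominent magnitude-spectrum peak, the position where a watermark bit would be embedded.

```python
import numpy as np

# Return the index of the most prominent magnitude-spectrum peak
# for each non-overlapping frame of the signal.
def prominent_peaks(signal, frame_len=256):
    peaks = []
    for f in range(len(signal) // frame_len):
        frame = signal[f * frame_len:(f + 1) * frame_len]
        mag = np.abs(np.fft.rfft(frame))
        peaks.append(int(np.argmax(mag[1:]) + 1))   # skip the DC bin
    return peaks

fs = 8000
t = np.arange(4 * 256) / fs
tone = np.sin(2.0 * np.pi * 1000.0 * t)             # 1 kHz test tone
print(prominent_peaks(tone))                        # -> [32, 32, 32, 32]
```

With fs = 8000 Hz and 256-sample frames the bin spacing is 31.25 Hz, so a 1 kHz tone lands exactly on bin 32 in every frame; embedding at such prominent peaks is what makes the mark survive perceptually transparent attacks like re-sampling and MP3 compression.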
Deterministic nonclassicality for quantum-mechanical oscillators in thermal states
NASA Astrophysics Data System (ADS)
Marek, Petr; Lachman, Lukáš; Slodička, Lukáš; Filip, Radim
2016-07-01
Quantum nonclassicality is the basic building block for the vast majority of quantum information applications, and methods of its generation are at the forefront of research. One of the obstacles any method needs to clear is the looming presence of decoherence and noise, which act against the nonclassicality and often erase it completely. In this paper we show that nonclassical states of a quantum harmonic oscillator initially in a thermal equilibrium state can be deterministically created by coupling it to a single two-level system. This can be achieved even in the absorption regime, in which the two-level system is initially in the ground state. The method is resilient to noise and may actually benefit from it, as witnessed by systems with higher thermal energy producing more nonclassical states.
Classification and unification of the microscopic deterministic traffic models.
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model, which can be important both for understanding human driving behaviors and for designing algorithms for autonomous vehicles. PMID:26565284
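For readers unfamiliar with the OV family, here is a hedged sketch of the classic optimal-velocity car-following model (Bando et al form), one member of the generalized OV family discussed above; the parameter values are illustrative, not taken from the paper.

```python
import math

# Optimal velocity function: monotonic in the headway h and zero at h = 0.
def V(h, v_max=2.0, h_c=2.0):
    return v_max * (math.tanh(h - h_c) + math.tanh(h_c)) / (1.0 + math.tanh(h_c))

# One forward-Euler step of dv_i/dt = a * (V(headway_i) - v_i) on a ring road.
def step(x, v, a=1.0, L=40.0, dt=0.01):
    n = len(x)
    acc = [a * (V((x[(i + 1) % n] - x[i]) % L) - v[i]) for i in range(n)]
    x = [(x[i] + v[i] * dt) % L for i in range(n)]
    v = [v[i] + acc[i] * dt for i in range(n)]
    return x, v

n, L = 20, 40.0
x = [i * L / n for i in range(n)]      # uniform spacing: headway exactly h_c
v = [0.0] * n
for _ in range(5000):
    x, v = step(x, v)
# The uniform flow relaxes to the homogeneous steady state v_i = V(h_c).
print(all(abs(vi - V(2.0)) < 1e-3 for vi in v))   # -> True
```

The homogeneous flow v_i = V(h) is exactly the kind of ground state around which the paper's master-model expansion is performed; varying density and sensitivity a then decides whether that state is stable or develops stop-and-go waves.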
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing numerous repetitive transport calculations essential for electron radiation exposure assessments for complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean free path and average trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made which have indicated that accuracy is not compromised at the expense of the computational speed.
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs: the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique has been employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniform ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456
Scaling mobility patterns and collective movements: Deterministic walks in lattices
NASA Astrophysics Data System (ADS)
Han, Xiao-Pu; Zhou, Tao; Wang, Bing-Hong
2011-05-01
Scaling mobility patterns have been widely observed for animals. In this paper, we propose a deterministic walk model to understand the scaling mobility patterns, where walkers take the least-action walks on a lattice landscape and prey. Scaling laws in the displacement distribution emerge when the amount of prey resource approaches the critical point. Around the critical point, our model generates ordered collective movements of walkers with a quasiperiodic synchronization of walkers’ directions. These results indicate that the coevolution of walkers’ least-action behavior and the landscape could be a potential origin of not only the individual scaling mobility patterns but also the flocks of animals. Our findings provide a bridge to connect the individual scaling mobility patterns and the ordered collective movements.
YALINA analytical benchmark analyses using the deterministic ERANOS code system.
Gohar, Y.; Aliberti, G.; Nuclear Engineering Division
2009-08-31
The growing stockpile of nuclear waste constitutes a severe challenge for mankind for more than a hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long-Lived Fission Products (LLFP). This report presents the analytical benchmark analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of the USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator driven systems (ADS). This activity is conducted as part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.
Deterministic Squeezed States with Collective Measurements and Feedback.
Cox, Kevin C; Greve, Graham P; Weiner, Joshua M; Thompson, James K
2016-03-01
We demonstrate the creation of entangled, spin-squeezed states using a collective, or joint, measurement and real-time feedback. The pseudospin state of an ensemble of N = 5×10⁴ laser-cooled ⁸⁷Rb atoms is deterministically driven to a specified population state with angular resolution that is a factor of 5.5(8) [7.4(6) dB] in variance below the standard quantum limit for unentangled atoms, comparable to the best enhancements using only unitary evolution. Without feedback, conditioning on the outcome of the joint premeasurement, we directly observe up to 59(8) times [17.7(6) dB] improvement in quantum phase variance relative to the standard quantum limit for N = 4×10⁵ atoms. This is one of the largest reported entanglement enhancements to date in any system. PMID:26991175
Classification and unification of the microscopic deterministic traffic models
NASA Astrophysics Data System (ADS)
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
Deterministic simulation of thermal neutron radiography and tomography
NASA Astrophysics Data System (ADS)
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as one of the nondestructive testing methods. However, the application of thermal neutron radiography and tomography is hindered by their technical complexity, radiation shielding, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve the neutron imaging facility's ability. In this paper, a new deterministic simulation approach has been proposed and demonstrated to simulate neutron radiographs numerically using a ray tracing algorithm. This approach has made the simulation of neutron radiographs much faster than by previously used stochastic methods (i.e., Monte Carlo methods). The major problem with neutron radiography and tomography simulation is finding a suitable scatter model. In this paper, an analytic scatter model has been proposed that is validated by a Monte Carlo simulation.
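A minimal sketch of the ray-tracing step such a deterministic simulation performs, assuming a parallel beam, a voxelized attenuation map, and no scatter (the phantom geometry and attenuation coefficients below are invented for illustration, not taken from the paper):

```python
import numpy as np

def simulate_radiograph(mu, dz):
    """Trace a parallel neutron beam along the z-axis of a voxel grid of
    linear attenuation coefficients `mu` (shape ny, nx, nz).  Each pixel's
    transmission follows the Beer-Lambert law I/I0 = exp(-integral mu dz);
    scattering is deliberately ignored in this minimal sketch."""
    optical_depth = mu.sum(axis=-1) * dz   # discrete line integral per ray
    return np.exp(-optical_depth)          # transmitted fraction per pixel

# Hypothetical phantom: weakly attenuating slab with a denser inclusion
mu = np.full((8, 8, 10), 0.05)             # background, cm^-1 (assumed)
mu[2:5, 2:5, 3:7] = 0.5                    # embedded inclusion (assumed)
image = simulate_radiograph(mu, dz=0.1)    # voxel depth in cm (assumed)
```

Pixels behind the denser inclusion transmit less than the background, which is the contrast a radiograph records; the analytic scatter model the abstract proposes would be added on top of this direct-beam term.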
A Deterministic Approximation Algorithm for Maximum 2-Path Packing
NASA Astrophysics Data System (ADS)
Tanahashi, Ruka; Chen, Zhi-Zhong
This paper deals with the maximum-weight 2-path packing problem (M2PP), which is the problem of computing a set of vertex-disjoint paths of length 2 in a given edge-weighted complete graph so that the total weight of edges in the paths is maximized. Previously, Hassin and Rubinstein gave a randomized cubic-time approximation algorithm for M2PP which achieves an expected ratio of 35/67 - ε ≈ 0.5223 - ε for any constant ε > 0. We refine their algorithm and derandomize it to obtain a deterministic cubic-time approximation algorithm for the problem which achieves a better ratio (namely, 0.5265 - ε for any constant ε > 0).
Location deterministic biosensing from quantum-dot-nanowire assemblies.
Liu, Chao; Kim, Kwanoh; Fan, D L
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices. PMID:25316926
Capillary-mediated interface perturbations: Deterministic pattern formation
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort to validate a deterministic theory for the random-vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz), where Statistical Energy Analysis (SEA) methods are not suitable. Measurements of launch-induced acoustic loads and the subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used to develop the structural excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem and account for variations in launch trajectory and inclination.
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-08-15
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.
Conservative deterministic spectral Boltzmann solver near the grazing collisions limit
NASA Astrophysics Data System (ADS)
Haack, Jeffrey R.; Gamba, Irene M.
2012-11-01
We present new results building on the conservative deterministic spectral method for the space homogeneous Boltzmann equation developed by Gamba and Tharkabhushaman. This approach is a two-step process that acts on the weak form of the Boltzmann equation and uses the machinery of the Fourier transform to reformulate the collisional integral into a weighted convolution in Fourier space. A constrained optimization problem is solved to preserve the mass, momentum, and energy of the resulting distribution. Within this framework we have extended the formulation to the more general case of collision operators with anisotropic scattering mechanisms, which requires a new formulation of the convolution weights. We also derive the grazing collisions limit for the method, and show that it is consistent with the Fokker-Planck-Landau equations as the grazing collisions parameter goes to zero.
Nonadiabatic exchange dynamics during adiabatic frequency sweeps
NASA Astrophysics Data System (ADS)
Barbara, Thomas M.
2016-04-01
A Bloch equation analysis that includes relaxation and exchange effects during an adiabatic frequency-swept pulse is presented. For a large class of sweeps, relaxation can be incorporated using simple first-order perturbation theory. For anisochronous exchange, new expressions are derived for exchange-augmented rotating frame relaxation. For isochronous exchange between sites with distinct relaxation rate constants outside the extreme narrowing limit, simple criteria for adiabatic exchange are derived and demonstrate that frequency sweeps commonly in use may not be adiabatic with regard to exchange unless the exchange rates are much larger than the relaxation rates. Otherwise, accurate assessment of the sensitivity to exchange dynamics will require numerical integration of the rate equations. Examples of this situation are given for experimentally relevant parameters believed to hold for in vivo tissue. These results are of significance in the study of exchange-induced contrast in magnetic resonance imaging.
Simple deterministically constructed cycle reservoirs with regular jumps.
Rodan, Ali; Tiňo, Peter
2012-07-01
A new class of state-space models, reservoir models, with a fixed state transition structure (the "reservoir") and an adaptable readout from the state space, has recently emerged as a way for time series processing and modeling. Echo state network (ESN) is one of the simplest, yet powerful, reservoir models. ESN models are generally constructed in a randomized manner. In our previous study (Rodan & Tiňo, 2011), we showed that a very simple, cyclic, deterministically generated reservoir can yield performance competitive with standard ESN. In this contribution, we extend our previous study in three aspects. First, we introduce a novel simple deterministic reservoir model, cycle reservoir with jumps (CRJ), with highly constrained weight values, that has superior performance to standard ESN on a variety of temporal tasks of different origin and characteristics. Second, we elaborate on the possible link between reservoir characterizations, such as eigenvalue distribution of the reservoir matrix or pseudo-Lyapunov exponent of the input-driven reservoir dynamics, and the model performance. It has been suggested that a uniform coverage of the unit disk by such eigenvalues can lead to superior model performance. We show that despite highly constrained eigenvalue distribution, CRJ consistently outperforms ESN (which has much more uniform eigenvalue coverage of the unit disk). Also, unlike in the case of ESN, pseudo-Lyapunov exponents of the selected optimal CRJ models are consistently negative. Third, we present a new framework for determining the short-term memory capacity of linear reservoir models to a high degree of precision. Using the framework, we study the effect of shortcut connections in the CRJ reservoir topology on its memory capacity. PMID:22428595
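The deterministic construction behind such a reservoir is simple enough to sketch. The fragment below builds a cycle-with-jumps weight matrix from two fixed weights; the sizes and weight values are assumptions for illustration, not the paper's tuned settings.

```python
import numpy as np

def crj_reservoir(n, r_c, r_j, jump):
    """Cycle Reservoir with Jumps: n units on a unidirectional ring sharing
    a single cycle weight r_c, plus bidirectional shortcut ('jump') links
    of weight r_j every `jump` units.  No randomness is involved anywhere,
    in contrast to the usual randomized ESN construction."""
    W = np.zeros((n, n))
    for i in range(n):
        W[(i + 1) % n, i] = r_c        # ring connection i -> i+1
    for i in range(0, n, jump):
        j = (i + jump) % n
        W[i, j] = r_j                  # bidirectional jump i <-> j
        W[j, i] = r_j
    return W

W = crj_reservoir(n=100, r_c=0.7, r_j=0.4, jump=10)   # assumed parameters
rho = max(abs(np.linalg.eigvals(W)))                   # spectral radius
```

Because every nonzero entry takes one of only two values, the eigenvalue distribution is highly constrained, which is exactly the property the abstract contrasts with the uniform unit-disk coverage of randomized ESN reservoirs.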
Deterministic and Stochastic Receiver Clock Modeling in Precise Point Positioning
NASA Astrophysics Data System (ADS)
Orliac, E.; Dach, R.; Wang, K.; Rothacher, M.; Voithenleitner, D.; Hugentobler, U.; Heinze, M.; Svehla, D.
2012-04-01
The traditional GNSS (Global Navigation Satellite System) data analysis assumes an independent set of clock corrections for each epoch. This introduces a huge number of parameters that are highly correlated with station height and troposphere parameters. If the number of clock parameters can be reduced, the GNSS processing procedure may be stabilized. Experiments with kinematic solutions for stations equipped with H-maser clocks have confirmed this. On the other hand, static coordinates do not significantly benefit from changing the strategy for handling the clock parameters. In the current GNSS constellation only the GIOVE-B and GPS Block IIF satellite clocks seem to be good enough to be modeled, instead of freely estimated for each epoch, without losing accuracy at the level of phase measurements. With the Galileo constellation this will change in the future. In this context, ESA (European Space Agency) funded a project on "Satellite and Station Clock Modelling for GNSS". Within this project, various deterministic and stochastic clock models have been evaluated, implemented and assessed for both station and satellite clocks. In this paper we focus on the impact of modeling the receiver clock in the processing of GNSS data in static and kinematic precise point positioning (PPP) modes. Initial results show that for stations connected to an H-maser clock the stability of the vertical position for kinematic PPP could be improved by up to 60%. The impact of clock modeling on the estimation of troposphere parameters is also investigated, along with the role of the tropospheric modeling itself, by testing various sampling rates and relative constraints for the troposphere parameters. Finally, we investigate the convergence time of PPP when deterministic or stochastic clock modeling is applied to the receiver clock.
Testing for deterministic trends in global sea surface temperature
NASA Astrophysics Data System (ADS)
Barbosa, Susana
2010-05-01
The identification and estimation of trends is a frequent and fundamental task in the analysis of hydrometeorological records. The task is challenging because even time series generated by purely random processes can exhibit visually appealing trends that can be misleadingly taken as evidence of non-stationary behavior. Hydrometeorological time series exhibiting long range dependence can also exhibit trend-like features that can be mistakenly interpreted as a trend, leading to erroneous forecasts and interpretations of the variability structure of the series, particularly in terms of statistical uncertainty. In practice the overwhelming majority of trends in hydro-climatic records are reported as the slope from a linear regression model. It is therefore important to assess when a linear regression model is a reasonable description for a time series. One could think that if a derived slope is statistically significant, particularly if inference is performed carefully, the linear regression model would be appropriate. However, stochastic features, such as long-range dependence can produce statistically significant linear trends. Therefore, the plausibility of the linear regression model needs to be tested itself, in addition to testing if the trend slope is statistically significant. In this work parametric statistical tests are applied in order to evaluate the trend-stationary assumption in global sea surface temperature for the period from January 1900 to December 2008. The fit of a linear deterministic model to the spatially-averaged global mean SST series yields a statistically significant positive slope, suggesting an increasing linear trend. However, statistical testing rejects the hypothesis of a deterministic linear trend with a stationary stochastic noise. This is supported by the form of the temporal structure of the detrended series, which exhibits large positive values up to lags of 5 years, indicating temporal persistence.
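The caution raised here, that a statistically significant OLS slope can arise from persistence alone, is easy to demonstrate on synthetic data. In the sketch below (invented data, not the SST record) strongly persistent AR(1) noise with no true trend is fitted with a line, and the residual lag-1 autocorrelation remains large; this is the kind of diagnostic that argues against a trend-stationary model.

```python
import numpy as np

def fit_trend(t, y):
    """OLS fit y = a + b*t; return the slope and the detrended residuals."""
    b, a = np.polyfit(t, y, 1)
    return b, y - (a + b * t)

def lag1_autocorr(r):
    """Lag-1 autocorrelation of the residuals.  Strong persistence here is
    a warning that an apparently significant linear trend may be a
    stochastic artefact rather than a deterministic signal."""
    r = r - r.mean()
    return float((r[:-1] * r[1:]).sum() / (r * r).sum())

# Synthetic series: persistent AR(1) noise, no deterministic trend at all
rng = np.random.default_rng(0)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.95 * y[k - 1] + rng.normal()
t = np.arange(500.0)
slope, resid = fit_trend(t, y)
rho1 = lag1_autocorr(resid)   # stays large despite the detrending
```

A formal version of this check is what the parametric trend-stationarity tests in the abstract provide; the toy diagnostic only illustrates why testing the model itself, and not just the slope's significance, is necessary.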
Deterministic Diffusion Fiber Tracking Improved by Quantitative Anisotropy
Yeh, Fang-Cheng; Verstynen, Timothy D.; Wang, Yibao; Fernández-Miranda, Juan C.; Tseng, Wen-Yih Isaac
2013-01-01
Diffusion MRI tractography has emerged as a useful and popular tool for mapping connections between brain regions. In this study, we examined the performance of quantitative anisotropy (QA) in facilitating deterministic fiber tracking. Two phantom studies were conducted. The first phantom study examined the susceptibility of fractional anisotropy (FA), generalized fractional anisotropy (GFA), and QA to various partial volume effects. The second phantom study examined the spatial resolution of the FA-aided, GFA-aided, and QA-aided tractographies. An in vivo study was conducted to track the arcuate fasciculus, and two neurosurgeons blind to the acquisition and analysis settings were invited to identify false tracks. The performance of QA in assisting fiber tracking was compared with FA, GFA, and anatomical information from T1-weighted images. Our first phantom study showed that QA is less sensitive to the partial volume effects of crossing fibers and free water, suggesting that it is a robust index. The second phantom study showed that the QA-aided tractography has better resolution than the FA-aided and GFA-aided tractography. Our in vivo study further showed that the QA-aided tractography outperforms the FA-aided, GFA-aided, and anatomy-aided tractographies. In the shell scheme (HARDI), the FA-aided, GFA-aided, and anatomy-aided tractographies have 30.7%, 32.6%, and 24.45% of the false tracks, respectively, while the QA-aided tractography has 16.2%. In the grid scheme (DSI), the FA-aided, GFA-aided, and anatomy-aided tractographies have 12.3%, 9.0%, and 10.93% of the false tracks, respectively, while the QA-aided tractography has 4.43%. The QA-aided deterministic fiber tracking may assist fiber tracking studies and facilitate the advancement of human connectomics. PMID:24348913
Standard fluctuation-dissipation process from a deterministic mapping
NASA Astrophysics Data System (ADS)
Bianucci, Marco; Mannella, Riccardo; Fan, Ximing; Grigolini, Paolo; West, Bruce J.
1993-03-01
We illustrate a derivation of a standard fluctuation-dissipation process from a discrete deterministic dynamical model. This model is a three-dimensional mapping, driving the motion of three variables, w, ξ, and π. We show that for suitable values of the parameters of this mapping, the motion of the variable w is indistinguishable from that of a stochastic variable described by a Fokker-Planck equation with well-defined friction γ and diffusion D. This result can be explained as follows. The bidimensional system of the two variables ξ and π is a nonlinear, deterministic, and chaotic system, with the key property of resulting in a finite correlation time for the variable ξ and in a linear response of ξ to an external perturbation. Both properties are traced back to the fully chaotic nature of this system. When this subsystem is coupled to the variable w, via a very weak coupling guaranteeing a large-time-scale separation between the two systems, the variable w is proven to be driven by a standard fluctuation-dissipation process. We call the subsystem a booster whose chaotic nature triggers the standard fluctuation-dissipation process exhibited by the variable w. The diffusion process is a trivial consequence of the central-limit theorem, whose validity is assured by the finite time scale of the correlation function of ξ. The dissipation affecting the variable w is traced back to the linear response of the booster, which is evaluated adopting a geometrical procedure based on the properties of chaos rather than the conventional perturbation approach.
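Only the diffusion half of this mechanism, the central-limit accumulation of bounded chaotic kicks, is sketched below; the dissipation requires the paper's full back-coupling and is omitted. The booster used here is a generic standard-map-like chaotic system standing in for the (ξ, π) subsystem, and all parameter values are illustrative assumptions.

```python
import numpy as np

def booster_step(xi, pi, K=6.0):
    """One step of a standard-map-like chaotic 'booster'.  Any fully
    chaotic two-variable map with rapidly decaying correlations plays
    this role; this is not the paper's specific mapping."""
    pi = (pi + K * np.sin(xi)) % (2.0 * np.pi)
    xi = (xi + pi) % (2.0 * np.pi)
    return xi, pi

def drive_slow_variable(steps=20000, eps=0.01):
    """Weakly couple a slow variable w to the booster output: w accumulates
    bounded chaotic kicks, so over long times it spreads diffusively, the
    central-limit-theorem part of the fluctuation-dissipation picture."""
    xi, pi, w = 0.5, 0.3, 0.0
    traj = np.empty(steps)
    for k in range(steps):
        xi, pi = booster_step(xi, pi)
        w += eps * np.sin(xi)          # weak deterministic 'kick'
        traj[k] = w
    return traj

w_traj = drive_slow_variable()
```

The weak coupling eps enforces the large time-scale separation the abstract requires; the friction term that makes w a full Fokker-Planck process comes from the booster's linear response, which this sketch does not model.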
Pu Anion Exchange Process Intensification
Taylor-Pashow, K.
2015-10-08
This project seeks to improve the efficiency of the plutonium anion-exchange process for purifying Pu through the development of alternate ion-exchange media. The objective of the project in FY15 was to develop and test a porous foam monolith material that could serve as a replacement for the current anion-exchange resin, Reillex® HPQ, used at the Savannah River Site (SRS) for purifying Pu. The new material provides advantages in efficiency over the current resin by the elimination of diffusive mass transport through large granular resin beads. By replacing the large resin beads with a porous foam there is much more efficient contact between the Pu solution and the anion-exchange sites present on the material. Several samples of a polystyrene based foam grafted with poly(4-vinylpyridine) were prepared and the Pu sorption was tested in batch contact tests.
A deterministic methodology for prediction of fracture distribution in basaltic multiflows
NASA Astrophysics Data System (ADS)
Lore, Jason; Aydin, Atilla; Goodson, Kenneth
2001-04-01
The fracture distribution in basalt flows is a direct result of thermal processes. Thus basalt flows present a unique opportunity to characterize a nearly perfect deterministic system with its fundamental physical parameters. Fracture distribution data collected on cliff exposures of basalt flows near the Idaho National Engineering and Environmental Laboratory (INEEL) are combined with calculations of cooling rate and temperature distribution from a finite element model to construct a predictive methodology for fracture spacing. The methodology is based on an empirical power law relationship between inverse cooling rate and fracture spacing. The methodology may be applied to unexposed basalt flows of approximately elliptical cross section whose thickness and width are constrained only by geophysical or borehole data if sufficient fracture data on nearby exposed flows are available. The methodology aids waste remediation efforts at sites involving contaminant transport through fractured basalt, such as the INEEL and the Hanford site in Washington, as well as involving transport and fluid flow through volcanic or intrusive rocks where thermal processes are responsible for fracturing.
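The calibration step behind such an empirical power law, spacing = c · (inverse cooling rate)^m, reduces to a straight-line fit in log-log space. The numbers below are invented for illustration and are not the INEEL measurements.

```python
import numpy as np

def fit_power_law(x, y):
    """Fit y = c * x**m by ordinary least squares in log-log coordinates,
    the standard way to calibrate an empirical power-law relationship."""
    m, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return float(np.exp(log_c)), float(m)

# Illustrative (assumed) data: spacing grows as the 0.5 power of the
# inverse cooling rate; units are arbitrary.
inv_rate = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
spacing = 0.3 * inv_rate ** 0.5
c, m = fit_power_law(inv_rate, spacing)
```

Once c and m are calibrated on exposed flows, the same relation can be evaluated at the cooling rates computed by the finite element model for unexposed flows, which is the predictive use the abstract describes.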
NASA Astrophysics Data System (ADS)
Mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity and the statistical treatment of the seismicity catalogue of the Constantine region between 1357 and 2014, comprising 7007 seismic events. Our research is a contribution to improving seismic risk management by evaluating the seismic hazard in north-east Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach by using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground-motion parameters for a characteristic earthquake. The method is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes; (ii) assessment of their geological, geophysical and geometric characteristics; (iii) identification of the attenuation pattern of seismic motion; (iv) calculation of the hazard at a site; and finally (v) hazard mapping for a region. In this study, the procedure for earthquake hazard evaluation recently developed by Kijko and Sellevoll (1992) is used to estimate the seismic hazard parameters in the northern part of Algeria.
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
Roseboom, Winfried; De Lacey, Antonio L; Fernandez, Victor M; Hatchikian, E Claude; Albracht, Simon P J
2006-01-01
In [FeFe]-hydrogenases, the H cluster (hydrogen-activating cluster) contains a di-iron centre ([2Fe]H subcluster, a (L)(CO)(CN)Fe(μ-RS2)(μ-CO)Fe(CysS)(CO)(CN) group) covalently attached to a cubane iron-sulphur cluster ([4Fe-4S]H subcluster). The Cys-thiol functions as the link between one iron (called Fe1) of the [2Fe]H subcluster and one iron of the cubane subcluster. The other iron in the [2Fe]H subcluster is called Fe2. The light sensitivity of the Desulfovibrio desulfuricans enzyme in a variety of states has been studied with infrared (IR) spectroscopy. The aerobic inactive enzyme (Hinact state) and the CO-inhibited active form (Hox-CO state) were stable in light. Illumination of the Hox state led to a kind of cannibalization; in some enzyme molecules the H cluster was destroyed and the released CO was captured by the H clusters in other molecules to form the light-stable Hox-CO state. Illumination of active enzyme under 13CO resulted in the complete exchange of the two intrinsic COs bound to Fe2. At cryogenic temperatures, light induced the photodissociation of the extrinsic CO and the bridging CO of the enzyme in the Hox-CO state. Electrochemical redox titrations showed that the enzyme in the Hinact state converts to the transition state (Htrans) in a reversible one-electron redox step (Em, pH 7 = -75 mV). IR spectra demonstrate that the added redox equivalent not only affects the [4Fe-4S]H subcluster, but also the di-iron centre. Enzyme in the Htrans state reacts with extrinsic CO, which binds to Fe2. The Htrans state converts irreversibly into the Hox state in a redox-dependent reaction most likely involving two electrons (Em, pH 7 = -261 mV). These electrons do not end up on any of the six Fe atoms of the H cluster; the possible destiny of the two redox equivalents is discussed. An additional reversible one-electron redox reaction leads to the Hred state (Em, pH 7 = -354 mV), where both Fe atoms of the [2Fe]H subcluster
Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal
NASA Astrophysics Data System (ADS)
Wronna, M.; Omira, R.; Baptista, M. A.
2015-11-01
In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60 % of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km2.
Paloncýová, Markéta; Navrátilová, Veronika; Berka, Karel; Laio, Alessandro; Otyepka, Michal
2016-04-12
Although the majority of enzymes have buried active sites, very little is known about the energetics and mechanisms associated with substrate and product channeling in and out. Gaining direct information about these processes is a challenging task both for experimental and theoretical techniques. Here, we present a methodology that enables following of a ligand during its passage to the active site of cytochrome P450 (CYP) 3A4 and mapping of the free energy associated with this process. The technique is based on a combination of a bioinformatics tool for identifying access channels and bias-exchange metadynamics and provides converged free energies in good agreement with experimental data. In addition, it identifies the energetically preferred escape routes, limiting steps, and amino acids residues lining the channel. The approach was applied to mapping of a complex channel network in a complex environment, i.e., CYP3A4 attached to a lipid bilayer mimicking an endoplasmic reticulum membrane. The results provided direct information about the energetics and conformational changes associated with the ligand channeling. The methodology can easily be adapted to study channeling through other flexible biomacromolecular channels. PMID:26967371
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... Chicago Stock Exchange, Incorporated (``Exchange'' or ``CHX'') filed with the Securities and Exchange... Change The Exchange proposes to amend CHX Article 20, Rule 4 which governs orders that are eligible for... Exchange's Web site at ( http://www.chx.com ), at the Exchange's Office of the Secretary, and in...
Brackenbury, Phillip J.
1986-04-01
A heat exchanger comprising a shell attached at its open end to one side of a tube sheet and a detachable head connected to the other side of said tube sheet. The head is divided into a first and second chamber in fluid communication with a nozzle inlet and nozzle outlet, respectively, formed in said tube sheet. A tube bundle is mounted within said shell and is provided with inlets and outlets formed in said tube sheet in communication with said first and second chambers, respectively.
Mesoscopic quantum emitters from deterministic aggregates of conjugated polymers
Stangl, Thomas; Wilhelm, Philipp; Remmerssen, Klaas; Höger, Sigurd; Vogelsang, Jan; Lupton, John M.
2015-01-01
An appealing definition of the term “molecule” arises from consideration of the nature of fluorescence, with discrete molecular entities emitting a stream of single photons. We address the question of how large a molecular object may become by growing deterministic aggregates from single conjugated polymer chains. Even particles containing dozens of individual chains still behave as single quantum emitters due to efficient excitation energy transfer, whereas the brightness is raised due to the increased absorption cross-section of the suprastructure. Excitation energy can delocalize between individual polymer chromophores in these aggregates by both coherent and incoherent coupling, which are differentiated by their distinct spectroscopic fingerprints. Coherent coupling is identified by a 10-fold increase in excited-state lifetime and a corresponding spectral red shift. Exciton quenching due to incoherent FRET becomes more significant as aggregate size increases, resulting in single-aggregate emission characterized by strong blinking. This mesoscale approach allows us to identify intermolecular interactions which do not exist in isolated chains and are inaccessible in bulk films where they are present but masked by disorder. PMID:26417079
Epileptic spike recognition in electroencephalogram using deterministic finite automata.
Keshri, Anup Kumar; Sinha, Rakesh Kumar; Hatwal, Rajesh; Das, Barda Nand
2009-06-01
This paper presents an automated method for epileptic spike detection in the electroencephalogram (EEG) using Deterministic Finite Automata (DFA). It takes a prerecorded single-channel EEG data file as input and finds the occurrences of epileptic spikes in it. The EEG signal was recorded at 256 Hz in two-minute separate data files using the Visual Lab-M software (ADLink Technology Inc., Taiwan). It was preprocessed for removal of baseline shift and band-pass filtered using an infinite impulse response (IIR) Butterworth filter. A system whose functionality was modeled with a DFA was designed and tested with 10 EEG signal data files. The average recognition rate for epileptic spikes was 95.68%. The system requires no human intervention, nor does it need any sort of training. The results show that DFA can be useful for detecting different characteristics present in EEG signals, and the approach could be extended to a continuous data processing system. PMID:19408450
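The core mechanism, a transition table driven by a symbolized EEG stream, can be sketched as follows. This is a minimal illustration, not the automaton from the paper: the slope-based symbolization, the state names, and the threshold are all assumptions.

```python
# Toy DFA-based spike detection sketch. EEG samples are symbolized by
# slope sign; the DFA then flags any "sharp rise followed by sharp fall"
# as a candidate spike. Thresholds and states are illustrative only.

RISE, FALL, FLAT = "R", "F", "0"

def symbolize(signal, threshold=5.0):
    """Map successive sample differences to slope symbols."""
    symbols = []
    for a, b in zip(signal, signal[1:]):
        d = b - a
        symbols.append(RISE if d > threshold else FALL if d < -threshold else FLAT)
    return symbols

# DFA transition table: state -> {symbol -> next state}
TRANSITIONS = {
    "idle":   {RISE: "rising", FALL: "idle", FLAT: "idle"},
    "rising": {RISE: "rising", FALL: "spike", FLAT: "idle"},
    "spike":  {RISE: "rising", FALL: "spike", FLAT: "idle"},
}

def count_spikes(signal, threshold=5.0):
    """Run the DFA over the symbol stream; count entries into 'spike'."""
    state, spikes = "idle", 0
    for sym in symbolize(signal, threshold):
        prev, state = state, TRANSITIONS[state][sym]
        if state == "spike" and prev == "rising":
            spikes += 1
    return spikes
```

Because the recognizer is a fixed table lookup per sample, it runs in linear time with no training, which is the property the abstract emphasizes.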
Is there a sharp phase transition for deterministic cellular automata
Wootters, W. K. (Los Alamos National Lab., NM; Williams Coll., Williamstown, MA, Dept. of Physics); Langton, C. G.
1990-01-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider. 18 refs., 6 figs.
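The mean-field approximation highlighted above can be illustrated for a binary CA: treating cells as independent with density p, the next-step density is the total probability of all neighborhoods whose rule entry is 1. The sketch below, with a Langton-style λ as the fraction of non-quiescent rule entries, is an illustrative binary special case, not the authors' many-symbol setup.

```python
from itertools import product

def lam(rule, n):
    """Langton-style lambda: fraction of rule-table entries mapping to a
    non-quiescent state (quiescent state taken to be 0)."""
    return sum(rule[nbhd] for nbhd in product((0, 1), repeat=n)) / 2 ** n

def mean_field_map(rule, n, p):
    """One mean-field step for a binary CA with neighborhood size n: the
    probability a cell is 1 next step, assuming independent cells at
    density p."""
    return sum(p ** sum(nbhd) * (1 - p) ** (n - sum(nbhd))
               for nbhd in product((0, 1), repeat=n) if rule[nbhd] == 1)

def mean_field_fixed_point(rule, n, p0=0.5, steps=200):
    """Iterate the mean-field map to an approximate fixed point."""
    p = p0
    for _ in range(steps):
        p = mean_field_map(rule, n, p)
    return p
```

For the 3-cell majority rule this gives the map p' = 3p²−2p³, whose stable fixed points 0 and 1 (with an unstable point at 1/2) are the kind of statistical prediction mean field theory supplies.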
Deterministic point inclusion methods for computational applications with complex geometry
Khamayseh, Ahmed; Kuprat, Andrew P.
2008-11-21
A fundamental problem in computation is finding practical and efficient algorithms for determining if a query point is contained within a model of a three-dimensional solid. The solid is modeled using a general boundary representation that can contain polygonal elements and/or parametric patches. We have developed two such algorithms: the first is based on a global closest feature query, and the second is based on a local intersection query. Both algorithms work for two- and three-dimensional objects. This paper presents both algorithms, as well as the spatial data structures and queries required for efficient implementation of the algorithms. Applications for these algorithms include computational geometry, mesh generation, particle simulation, multiphysics coupling, and computer graphics. These methods are deterministic in that they do not involve random perturbations of diagnostic rays cast from the query point in order to avoid ‘unclean’ or ‘singular’ intersections of the rays with the geometry. Avoiding the necessity of such random perturbations will become increasingly important as geometries become more convoluted and complex.
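The flavor of a deterministic inclusion query, with no randomly perturbed rays, can be shown in two dimensions by a crossing-number test that applies a half-open rule at vertices. This is only a 2-D analogue for illustration; the paper's algorithms handle 3-D boundary representations with polygonal elements and parametric patches.

```python
def point_in_polygon(q, poly):
    """Crossing-number test with a half-open edge rule: a horizontal ray
    from q counts an edge crossing iff one endpoint is strictly above the
    ray height and the other is on or below it. Rays through vertices are
    thus handled consistently, with no random perturbation of the ray.
    `poly` is a list of (x, y) vertices in order."""
    qx, qy = q
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > qy) != (y2 > qy):  # edge straddles the ray (half-open)
            # x-coordinate where the edge crosses the ray's height
            x_cross = x1 + (qy - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > qx:
                inside = not inside
    return inside
```

The half-open rule is the deterministic ingredient: every vertex is counted by exactly one of its two incident edges, so "singular" ray-vertex intersections never need to be dodged by jittering.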
SIR: Deterministic protein inference from peptides assigned to MS data.
Matthiesen, Rune; Prieto, Gorka; Amorim, Antonio; Aloria, Kerman; Fullaondo, Asier; Carvalho, Ana S; Arizmendi, Jesus M
2012-07-16
Currently the bottom up approach is the most popular for characterizing protein samples by mass spectrometry. This is mainly attributed to the fact that the bottom up approach has been successfully optimized for high throughput studies. However, the bottom up approach is associated with a number of challenges such as loss of linkage information between peptides. Previous publications have addressed some of these problems which are commonly referred to as protein inference. Nevertheless, all previous publications on the subject are oversimplified and do not represent the full complexity of the proteins identified. To this end we present here SIR (spectra based isoform resolver) that uses a novel transparent and systematic approach for organizing and presenting identified proteins based on peptide spectra assignments. The algorithm groups peptides and proteins into five evidence groups and calculates sixteen parameters for each identified protein that are useful for cases where deterministic protein inference is the goal. The novel approach has been incorporated into SIR which is a user-friendly tool only concerned with protein inference based on imports of Mascot search results. SIR has in addition two visualization tools that facilitate further exploration of the protein inference problem. PMID:22626983
Deterministic separation of suspended particles in a reconfigurable obstacle array
NASA Astrophysics Data System (ADS)
Du, Siqi; Drazer, German
2015-11-01
We use a macromodel of a flow-driven deterministic lateral displacement microfluidic system to investigate conditions leading to size-separation of suspended particles. This model system can be easily reconfigured to establish an arbitrary forcing angle, i.e. the orientation between the average flow field and the square array of cylindrical posts that constitutes the stationary phase. We also consider posts of different diameters, while maintaining a constant gap between them, to investigate the effect of obstacle size on particle separation. In all cases, we observe the presence of a locked mode at small forcing angles, in which particles move along a principal direction in the lattice. A locked-to-zigzag mode transition takes place when the orientation of the driving force reaches a critical angle. We show that the transition occurs at increasing angles for larger particles, thus enabling particle separation. Moreover, we observe a linear relation between the critical angle and the size of the particles, which allows us to estimate size-resolution in these systems. The presence of such a linear relation would guide the selection of the forcing angle in microfluidic systems, in which the direction of the flow field with respect to the array of obstacles is fixed. Finally, we present a simple model based on the presence of irreversible interactions between the suspended particles and the obstacles, which describes the observed dependence of the migration angle on the orientation of the average flow.
Deterministic methods for multi-control fuel loading optimization
NASA Astrophysics Data System (ADS)
Rahman, Fariz B. Abdul
We have developed a multi-control fuel loading optimization code for pressurized water reactors based on deterministic methods. The objective is to flatten the fuel burnup profile, which maximizes overall energy production. The optimal control problem is formulated using the method of Lagrange multipliers and the direct adjoining approach for treatment of the inequality power peaking constraint. The optimality conditions are derived for a multi-dimensional multi-group optimal control problem via calculus of variations. Due to the Hamiltonian having a linear control, our optimal control problem is solved using the gradient method to minimize the Hamiltonian and a Newton step formulation to obtain the optimal control. We are able to satisfy the power peaking constraint during depletion with the control at beginning of cycle (BOC) by building the proper burnup path forward in time and utilizing the adjoint burnup to propagate the information back to the BOC. Our test results show that we are able to achieve our objective and satisfy the power peaking constraint during depletion using either the fissile enrichment or burnable poison as the control. Our fuel loading designs show an increase of 7.8 equivalent full power days (EFPDs) in cycle length compared with 517.4 EFPDs for the AP600 first cycle.
Automated optimum design of wing structures. Deterministic and probabilistic approaches
NASA Technical Reports Server (NTRS)
Rao, S. S.
1982-01-01
The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is considered the objective function. The maximum stress, wing tip deflection, root angle of attack, and flutter velocity during the pull up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge. The airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements. The thicknesses of the skin and the web and the cross sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.
Deterministic ripple-spreading model for complex networks.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Hines, Evor L; Di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory-efficient than a traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has very good potential for both extensions and applications. PMID:21599256
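The central idea, a topology generated deterministically by ripples spreading out from a few initial events, with all stochasticity confined to the initial parameters, can be caricatured in a few lines. The single-generation spreading, constant speed, and radius cutoff below are simplifications of this sketch, not the model's actual update rules.

```python
import math

def ripple_network(points, sources, speed=1.0, radius=2.5):
    """Toy deterministic ripple-spreading sketch: a ripple expands from
    each source node at constant speed; when the front reaches another
    node within the maximum radius, an edge is created from the ripple's
    origin to that node. With fixed inputs the topology is unique."""
    edges = set()
    for s in sources:
        # nodes ordered by arrival time of the ripple front (distance/speed)
        reached = sorted(
            (math.dist(points[s], points[j]) / speed, j)
            for j in range(len(points)) if j != s
        )
        for t, j in reached:
            if t * speed <= radius:
                edges.add((min(s, j), max(s, j)))
    return edges
```

Rerunning with the same points, sources, speed, and radius always yields the same edge set; randomness, if desired, enters only through how those inputs are initialized, mirroring point (ii) of the abstract.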
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels. PMID:21814540
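The core mechanism, identical entrepreneurs whose wealth compounds under independent random returns, can be sketched in a few lines. The parameter values and the Gaussian return distribution are illustrative assumptions, not those of the paper.

```python
import random

def simulate_wealth(n_entrepreneurs=1000, years=200, mu=0.05, sigma=0.3, seed=7):
    """Chance plus compounding: every entrepreneur starts with equal
    wealth and each year draws an independent random return that
    compounds (floored at total loss). Returns the share of all wealth
    held by the top 1% at the end."""
    rng = random.Random(seed)
    wealth = [1.0] * n_entrepreneurs
    for _ in range(years):
        wealth = [w * max(0.0, 1.0 + rng.gauss(mu, sigma)) for w in wealth]
    wealth.sort(reverse=True)
    top_1pct = sum(wealth[: n_entrepreneurs // 100])
    return top_1pct / sum(wealth)
```

Even though every entrepreneur faces the same return distribution, the multiplicative compounding makes log-wealth spread without bound, so the top share drifts far above the 1% an equal distribution would give.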
Bioinspired, mechanical, deterministic fractal model for hierarchical suture joints
NASA Astrophysics Data System (ADS)
Li, Yaning; Ortiz, Christine; Boyce, Mary C.
2012-03-01
Many biological systems possess hierarchical and fractal-like interfaces and joint structures that bear and transmit loads, absorb energy, and accommodate growth, respiration, and/or locomotion. In this paper, an elastic deterministic fractal composite mechanical model was formulated to quantitatively investigate the role of structural hierarchy on the stiffness, strength, and failure of suture joints. From this model, it was revealed that the number of hierarchies (N) can be used to tailor and to amplify mechanical properties nonlinearly and with high sensitivity over a wide range of values (orders of magnitude) for a given volume and weight. Additionally, increasing hierarchy was found to result in mechanical interlocking of higher-order teeth, which creates additional load resistance capability, thereby preventing catastrophic failure in major teeth and providing flaw tolerance. Hence, this paper shows that the diversity of hierarchical and fractal-like interfaces and joints found in nature have definitive functional consequences and is an effective geometric-structural strategy to achieve different properties with limited material options in nature when other structural geometries and parameters are biologically challenging or inaccessible. This paper also indicates the use of hierarchy as a design strategy to increase design space and provides predictive capabilities to guide the mechanical design of synthetic flaw-tolerant bioinspired interfaces and joints.
Rock fracture characterization with GPR by means of deterministic deconvolution
NASA Astrophysics Data System (ADS)
Arosio, Diego
2016-03-01
In this work I address GPR characterization of rock fracture parameters, namely thickness and filling material. Rock fractures can generally be considered as thin beds, i.e., two interfaces whose separation is smaller than the resolution limit dictated by the Rayleigh criterion. Analysis of the amplitude of the thin bed response in the time domain might permit estimation of fracture features for arbitrarily thin beds, but it is difficult to achieve and applicable only to favorable cases (i.e., when all factors affecting amplitude are identified and corrected for). Here I explore the possibility of estimating fracture thickness and filling in the frequency domain by means of GPR. After introducing some theoretical aspects of the thin bed response, I simulate GPR data on sandstone blocks with air- and water-filled fractures of known thickness. On the basis of some simplifying assumptions, I propose a 4-step procedure in which deterministic deconvolution is used to retrieve the magnitude and phase of the thin bed response in the selected frequency band. After the deconvolved curves are obtained, fracture thickness and filling are estimated by means of a fitting process, which presents higher sensitivity to fracture thickness. Results are encouraging and suggest that GPR could be a fast and effective tool to determine fracture parameters in a non-destructive manner. Further GPR experiments in the lab are needed to test the proposed processing sequence and to validate the results obtained so far.
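Deterministic deconvolution in the frequency domain amounts to dividing the recorded spectrum by the known source-wavelet spectrum, stabilized against near-zero amplitudes. The toy below is a 1-D illustration of that single step, not the paper's full 4-step procedure; the naive DFT and the eps value are assumptions.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def deterministic_deconvolution(trace, wavelet, eps=1e-3):
    """Spectral-division deconvolution with a known source wavelet:
    T(f) * conj(W(f)) / (|W(f)|^2 + eps), where eps guards against
    division by near-zero wavelet amplitudes. Returns the deconvolved
    spectrum, whose magnitude and phase carry the thin-bed response."""
    T, W = dft(trace), dft(wavelet)
    return [Tk * Wk.conjugate() / (abs(Wk) ** 2 + eps) for Tk, Wk in zip(T, W)]
```

When the trace is exactly the wavelet, the deconvolved spectrum is close to 1 at every frequency where the wavelet has energy, which is the sanity check the test below performs.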
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (that hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and also demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that there is a production of sufficient number of differentiated cells (RBCs, WBCs, etc.). We prove that our model of bone marrow is biologically consistent and it overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed agent-based simulation of the model of bone marrow system proposed in this paper. We have also included parameter details and the results obtained from the simulation. The program of the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402
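A deterministic caricature of the division-limit mechanism described above: each undifferentiated cell divides into one self-renewing daughter with one fewer division remaining and one differentiated daughter, and undergoes apoptosis once its limit is reached. The asymmetric-division rule and the counts are illustrative assumptions, not the paper's bone marrow model.

```python
def simulate(steps, max_divisions):
    """Track undifferentiated cells by divisions remaining. Each step,
    every undifferentiated cell with divisions left yields one
    self-renewing daughter (one fewer division left) and one
    differentiated daughter; cells with none left die (apoptosis).
    Differentiated cells persist. Returns
    (undifferentiated_count, differentiated_count)."""
    undiff = {max_divisions: 1}   # a single founding HSC seeds the marrow
    differentiated = 0
    for _ in range(steps):
        nxt = {}
        for d, n in undiff.items():
            if d == 0:
                continue                          # apoptosis
            nxt[d - 1] = nxt.get(d - 1, 0) + n    # self-renewing daughter
            differentiated += n                   # differentiated daughter
        undiff = nxt
    return sum(undiff.values()), differentiated
```

The sketch shows the two qualitative behaviors the abstract lists: a single cell generates a supply of differentiated cells, and the division limit bounds the undifferentiated pool rather than letting it grow without control.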
Experimental evidence for deterministic chaos in thermal pulse combustion
Daw, C.S.; Thomas, J.F.; Richards, G.A.; Narayanaswami, L.L.
1994-12-31
Given the existence of chaotic oscillations in reacting chemical systems, it is reasonable to ask whether or not similar phenomena can occur in combustion. In this paper, the authors present experimental evidence that kinetically driven chaos occurs in a highly simplified thermal pulse combustor. The combustor is a well-stirred reactor with a tailpipe extending from one end. Fuel and air are injected into the combustion chamber through orifices in the end opposite the tailpipe. Propane was the fuel used in all cases. From the experimental data analyses, it is clear that deterministic chaos is an important factor in thermal pulse combustor dynamics. While the authors have only observed such behavior in this particular type of combustor to date, they infer from their understanding of the origins of the chaos that it is likely to exist in other pulse combustors and even in nonpulsing combustion. They speculate that realization of the importance of chaos in affecting flame stability could lead to significant changes in combustor design and control.
Deterministic propagation model for RFID using site-specific and FDTD
NASA Astrophysics Data System (ADS)
Cunha de Azambuja, Marcelo; Passuelo Hessel, Fabiano; Luís Berz, Everton; Bauermann Porfírio, Leandro; Ruhnke Valério, Paula; De Pieri Baladei, Suely; Jung, Carlos Fernando
2015-06-01
The conduction of experiments to evaluate a tag orientation and its readability in a laboratory offers great potential for reducing time and costs for users. This article presents a novel methodology for developing simulation models for RFID (radio-frequency identification) environments. The main challenges in adopting this model are: (1) to find out how the properties of each of the materials on which the tag is applied influence the read range, and to determine the power necessary for tag reading; and (2) to find out the power of the backscattered signal received by the tag when energised by the RF wave transmitted by the reader. The validation tests, performed in four different kinds of environments, with tags applied to six different kinds of materials, at six different distances, and with a reader configured with three different powers, achieved an average accuracy of 95.3% in the best scenario and 87.0% in the worst scenario. The methodology can be easily duplicated to generate simulation models for other RFID environments.
Stochastic model of tumor-induced angiogenesis: Ensemble averages and deterministic equations
NASA Astrophysics Data System (ADS)
Terragni, F.; Carretero, M.; Capasso, V.; Bonilla, L. L.
2016-02-01
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allowing one to extract a deterministic integro-partial-differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014), 10.1103/PhysRevE.90.062716]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.
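The correspondence between ensemble averages and a deterministic rate equation can be demonstrated on a toy birth process standing in for vessel-tip branching: averaging many stochastic realizations recovers n(t) = n0·exp(b·t). The discretization and parameters are illustrative and unrelated to the angiogenesis model's actual equations.

```python
import math
import random

def stochastic_growth(n0, rate, dt, steps, rng):
    """One realization: each of the n current 'tips' branches with
    probability rate*dt per time step (a crude Euler discretization
    of a pure birth process)."""
    n = n0
    for _ in range(steps):
        n += sum(1 for _ in range(n) if rng.random() < rate * dt)
    return n

def ensemble_average(realizations=2000, n0=10, rate=0.5, dt=0.01, steps=100, seed=3):
    """Average the final counts over many realizations; this should
    approach the deterministic solution n0 * exp(rate * t) at t = steps*dt."""
    rng = random.Random(seed)
    return sum(stochastic_growth(n0, rate, dt, steps, rng)
               for _ in range(realizations)) / realizations

# deterministic counterpart of the averaged dynamics: dn/dt = rate * n
deterministic_n = 10 * math.exp(0.5 * 1.0)
```

Individual realizations fluctuate, but the ensemble mean tracks the deterministic curve, the same relationship the paper establishes between its stochastic model and the integro-partial-differential tip-density description.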
Bulgakov, N G; Maksimov, V N
2005-01-01
Specific application of deterministic analysis to investigate the contingencies of various components of natural biocenosis was illustrated by the example of fish production and biomass of phyto- and zooplankton. Deterministic analysis confirms the theoretic assumptions on food preferences of herbivorous fish: both silver and bighead carps avoided feeding on cyanobacteria. Being a facultative phytoplankton feeder, silver carp preferred microalgae to zooplankton. Deterministic analysis allowed us to demonstrate the contingency of the mean biomass of phyto- and zooplankton during both the whole fish production cycle and the individual periods. PMID:16004266
Hybrid Monte Carlo-Deterministic Methods for Nuclear Reactor-Related Criticality Calculations
Edward W. Larson
2004-02-17
The overall goal of this project is to develop, implement, and test new Hybrid Monte Carlo-deterministic (or simply Hybrid) methods for the more efficient and more accurate calculation of nuclear engineering criticality problems. These new methods will make use of two (philosophically and practically) very different techniques - the Monte Carlo technique, and the deterministic technique - which have been developed completely independently during the past 50 years. The concept of this proposal is to merge these two approaches and develop fundamentally new computational techniques that enhance the strengths of the individual Monte Carlo and deterministic approaches, while minimizing their weaknesses.
Comparison of deterministic and probabilistic calculation of ecological soil screening levels.
Regan, Helen M; Sample, Brad E; Ferson, Scott
2002-04-01
The U.S. Environmental Protection Agency (U.S. EPA) is sponsoring development of ecological soil screening levels (Eco-SSLs) for terrestrial wildlife. These are intended to be used to identify chemicals of potential ecological concern at Superfund sites. Ecological soil screening levels represent concentrations of contaminants in soils that are believed to be protective of ecological receptors. An exposure model, based on soil- and food-ingestion rates and the relationship between the concentrations of contaminants in soil and food, has been developed for estimation of wildlife Eco-SSLs. It is important to understand how conservative and protective these values are, how parameterization of the model influences the resulting Eco-SSL, and how the treatment of uncertainty impacts results. The Eco-SSLs were calculated for meadow voles (Microtus pennsylvanicus) and northern short-tailed shrews (Blarina brevicauda) for lead and DDT using deterministic and probabilistic methods. Conclusions obtained include that use of central-tendency point estimates may result in hazard quotients much larger than one; that a Monte Carlo approach also leads to hazard quotients that can be substantially larger than one; that, if no hazard quotients larger than one are allowed, any probabilistic approach is identical to a worst-case approach; and that an improvement in the quality and amount of data is necessary to increase confidence that Eco-SSLs are protective at their intended levels of conservatism. PMID:11951965
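The gap between point-estimate and probabilistic screening can be sketched with a toy exposure model (all parameter values, the soil-to-food transfer factor, and the uniform ranges below are hypothetical illustrations, not the actual Eco-SSL parameterization):

```python
import random

def hazard_quotient(c_soil, ir_soil, ir_food, bcf, trv):
    """HQ = dose / toxicity reference value; food concentration is taken as
    bcf * c_soil (a hypothetical soil-to-food transfer factor)."""
    dose = c_soil * ir_soil + (bcf * c_soil) * ir_food
    return dose / trv

# Deterministic: central-tendency point estimates (illustrative values).
hq_det = hazard_quotient(c_soil=50.0, ir_soil=0.02, ir_food=0.15, bcf=0.3, trv=5.0)

# Probabilistic: sample the uncertain parameters and inspect the upper tail.
rng = random.Random(0)
hqs = sorted(
    hazard_quotient(
        c_soil=50.0,
        ir_soil=rng.uniform(0.01, 0.04),
        ir_food=rng.uniform(0.10, 0.25),
        bcf=rng.uniform(0.1, 0.6),
        trv=5.0,
    )
    for _ in range(10000)
)
hq_p95 = hqs[int(0.95 * len(hqs))]
print(f"deterministic HQ = {hq_det:.2f}, Monte Carlo 95th percentile = {hq_p95:.2f}")
```

With these illustrative numbers the central-tendency hazard quotient stays below one while the Monte Carlo upper percentile can approach or exceed it, mirroring the qualitative contrast discussed in the abstract.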
Pagowski, M O; Grell, G A; Devenyi, D; Peckham, S E; McKeen, S A; Gong, W; Monache, L D; McHenry, J N; McQueen, J; Lee, P
2006-02-02
Forecasts from seven air quality models and surface ozone data collected over the eastern USA and southern Canada during July and August 2004 provide a unique opportunity to assess benefits of ensemble-based ozone forecasting and devise methods to improve ozone forecasts. In this investigation, past forecasts from the ensemble of models and hourly surface ozone measurements at over 350 sites are used to issue deterministic 24-h forecasts using a method based on dynamic linear regression. Forecasts of hourly ozone concentrations as well as maximum daily 8-h and 1-h averaged concentrations are considered. It is shown that the forecasts issued with the application of this method have reduced bias and root mean square error and better overall performance scores than any of the ensemble members and the ensemble average. Performance of the method is similar to another method based on linear regression described previously by Pagowski et al., but unlike the latter, the current method does not require measurements from multiple monitors since it operates on individual time series. Improvement in the forecasts can be easily implemented and requires minimal computational cost.
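The idea of correcting a raw forecast from a single site's own past forecast/observation pairs can be sketched with ordinary least squares (a static stand-in for the dynamic linear regression actually used; the ozone numbers are hypothetical):

```python
def fit_linear(x, y):
    """Ordinary least-squares fit y ~ a + b*x (a static stand-in for the
    paper's dynamic linear regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

def corrected_forecast(past_fc, past_obs, new_fc):
    """Correct a raw (e.g. ensemble-mean) forecast using the site's own
    past forecast/observation pairs -- no other monitors needed."""
    a, b = fit_linear(past_fc, past_obs)
    return a + b * new_fc

if __name__ == "__main__":
    # Hypothetical ozone values (ppb): the raw model is biased high.
    past_fc = [60.0, 75.0, 50.0, 80.0, 65.0]
    past_obs = [50.0, 66.0, 39.0, 70.0, 56.0]
    print(f"corrected: {corrected_forecast(past_fc, past_obs, 70.0):.1f} ppb")
```

Because the correction operates on an individual time series, it needs no data from neighboring monitors, which is the practical advantage the abstract highlights.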
NASA Astrophysics Data System (ADS)
Boyer, D.; Miramontes, O.; Larralde, H.
2009-10-01
Many studies on animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random walk models have been proposed to explain observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker, visiting sites randomly distributed on the plane and with varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ~ k^(-β), in some range of the exponent β, the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
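One plausible reading of the walker's rule, minimizing distance-over-weight among unvisited targets, can be sketched as follows (the cost function and all parameter values are illustrative assumptions, not necessarily the paper's exact choices):

```python
import math
import random

def walk(points, weights, start, n_steps):
    """Deterministic walker: from the current position, always move to the
    unvisited target minimizing distance/weight (cost over gain)."""
    pos, visited, path = start, set(), [start]
    for _ in range(n_steps):
        best, best_score = None, math.inf
        for idx, (p, w) in enumerate(zip(points, weights)):
            if idx in visited:
                continue
            score = math.dist(pos, p) / w
            if score < best_score:
                best, best_score = idx, score
        if best is None:
            break
        visited.add(best)
        pos = points[best]
        path.append(pos)
    return path

if __name__ == "__main__":
    rng = random.Random(1)
    pts = [(rng.random(), rng.random()) for _ in range(200)]
    # Power-law-ish weights p(k) ~ k^-beta via inverse-transform sampling.
    beta = 2.0
    ws = [rng.random() ** (-1.0 / (beta - 1.0)) for _ in range(200)]
    path = walk(pts, ws, start=(0.5, 0.5), n_steps=50)
    print("sites visited:", len(path) - 1)
```

Note the dynamics are fully deterministic once the random medium (points and weights) is fixed, which is exactly the separation the abstract draws between the disorder of the medium and the walker's rule.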
Self-aligned deterministic coupling of single quantum emitter to nanofocused plasmonic modes
Gong, Su-Hyun; Kim, Je-Hyung; Ko, Young-Ho; Rodriguez, Christophe; Shin, Jonghwa; Lee, Yong-Hee; Dang, Le Si; Zhang, Xiang; Cho, Yong-Hoon
2015-01-01
The field of quantum plasmonics has grown rapidly, encompassing the study of single emitter-light coupling in plasmonic systems and scalable quantum plasmonic circuits, and offering opportunities for the quantum control of light with a compact device footprint. However, coupling a single emitter to a highly localized plasmonic mode with nanoscale precision remains an important challenge: to date, the spatial overlap between a metallic structure and a single emitter has relied mostly either on chance or on advanced nanopositioning control. Here, we demonstrate deterministic coupling between three-dimensionally nanofocused plasmonic modes and single quantum dots (QDs) without any individual QD positioning. By depositing a thin silver layer on a site-controlled pyramid QD wafer, three-dimensional plasmonic nanofocusing onto each QD at the pyramid apex is achieved geometrically through the silver-coated pyramid facets. Enhancement of the QD spontaneous emission rate as high as 22 ± 16 is measured for all processed QDs emitting over a ∼150-meV spectral range. This approach could enable high-fabrication-yield on-chip devices for a wide range of applications, e.g., high-efficiency light-emitting devices and quantum information processing. PMID:25870303
Self-limiting trajectories of a particle moving deterministically in a random medium
NASA Astrophysics Data System (ADS)
Webb, B. Z.; Cohen, E. G. D.
2015-12-01
We study the motion of a particle moving on a two-dimensional honeycomb lattice, whose sites are randomly occupied by either right or left rotators. These rotators deterministically scatter the particle to the right or left, additionally changing orientation from left to right or from right to left after scattering the particle. In the model we consider, the scatterers are each initially oriented to the right with probability p ∈ [0, 1]. The initial configuration of scatterers, which forms the medium through which the particle moves, is set up so that the scatterers' orientations are independent and identically distributed. For p ∈ (0, 1), we show that as the particle moves through the lattice, it creates a number of reflecting structures. These structures ultimately limit the particle's motion, causing it to have a periodic trajectory. As p approaches either 0 or 1, and the medium becomes increasingly homogeneous, the particle's dynamics undergoes a discontinuous transition from this self-limiting, periodic motion to a self-avoiding motion, in which the particle's trajectory, away from its initial position, is a self-avoiding walk. In a generalization of this model, we also show that the same periodic behavior exists if the model's initial scatterer orientations are independently but not identically distributed. However, if these orientations are not chosen independently, we demonstrate that the particle's motion can change drastically, becoming nonperiodic.
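The flavor of such a model can be reproduced on a square lattice (a simplifying assumption made here for brevity; the paper's lattice is honeycomb): each site holds a rotator that turns the particle 90° right or left and then flips its own orientation.

```python
import random

def rotator_walk(p, n_steps, seed=0):
    """Square-lattice analogue of a flipping-rotator medium: each site is
    lazily assigned a right rotator (+1) with probability p, else a left
    rotator (-1); the rotator turns the particle and then flips."""
    rng = random.Random(seed)
    rotators = {}            # site -> +1 (right) or -1 (left)
    pos, d = (0, 0), (1, 0)  # position and direction of motion
    trajectory = [pos]
    for _ in range(n_steps):
        if pos not in rotators:
            rotators[pos] = 1 if rng.random() < p else -1
        dx, dy = d
        d = (dy, -dx) if rotators[pos] == 1 else (-dy, dx)  # turn right / left
        rotators[pos] = -rotators[pos]                       # rotator flips
        pos = (pos[0] + d[0], pos[1] + d[1])
        trajectory.append(pos)
    return trajectory

if __name__ == "__main__":
    traj = rotator_walk(p=0.5, n_steps=2000, seed=3)
    print("distinct sites visited:", len(set(traj)))
```

As in the paper's setting, the medium is random but the dynamics, given the medium, are fully deterministic; varying p interpolates between the disordered and homogeneous regimes.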
Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.
2009-12-01
We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method of long-period (LP) and tremor using their amplitudes. The deterministic approach of the waveform inversion is useful to constrain the source mechanism and location, but is basically only applicable to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism better explained the VLP signals than an isotropic mechanism. These results indicated that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua when using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may be categorized as a stochastic approach based on the nature of scattering waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis
Development of Deterministic Ethernet Building Blocks for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Jakovljevic, Mirko
2015-09-01
The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reductions in overall mission cost, shortened integration and test (I&T) schedules, increased operations flexibility, and hardware and software upgradeability/scalability as developments continue in the commercial world. The deterministic Ethernet technology TTEthernet [1], deployed on the NASA Orion spacecraft, demonstrated its use in a safety-critical human spaceflight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission, but this did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled to different applications, not only high-end missions: smaller building blocks reduce size, weight, and power, enabling use in smaller applications. TTTech is currently developing a full space-product offering for its TTEthernet technology to allow its use in space applications beyond launchers and human spaceflight. A broad space-market assessment and the current ESA TRP7594 activity led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development towards a space-qualified network component that will allow future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.
"Eztrack": A single-vehicle deterministic tracking algorithm
Carrano, C J
2007-12-20
A variety of surveillance operations require the ability to track vehicles over a long period of time using sequences of images taken from a camera mounted on an airborne or similar platform. In order to be able to see and track a vehicle for any length of time, one needs either a persistent surveillance imager that can image wide fields of view over a long time span or a highly maneuverable, smaller field-of-view imager that can follow the vehicle of interest. The algorithm described here was designed for the persistent surveillance case. It turns out that most vehicle tracking algorithms described in the literature [1,2,3,4] are designed for higher frame rates (>5 FPS) and relatively short ground sampling distances (GSDs) and resolutions (≈ a few cm to a couple of tens of cm). Our datasets, however, are restricted to lower resolutions and GSDs (≥0.5 m) and limited frame rates (≤2.0 Hz). As a consequence, we designed our own simple approach in IDL: a deterministic, motion-guided object tracker. The object tracking relies on both object features and path dynamics. The algorithm certainly has room for future improvement, but we have found it to be a useful tool in evaluating the effects of frame rate, resolution/GSD, and spectral content (e.g., grayscale vs. color imaging). A block diagram of the tracking approach is given in Figure 1. We describe each of the blocks of the diagram in the upcoming sections.
Deterministic precision finishing of domes and conformal optics
NASA Astrophysics Data System (ADS)
Shorey, Aric; Kordonski, William; Tricard, Marc
2005-05-01
In order to enhance missile performance, future window and dome designs will incorporate shapes with improved aerodynamic performance compared with the more traditional flats and spheres. Due to their constantly changing curvature and steep slopes, these shapes are incompatible with most conventional polishing and metrology solutions. Two variants of a novel polishing technology, Magnetorheological Finishing (MRF®) and Magnetorheological (MR) Jet, could enable cost-effective manufacturing of free-form optical surfaces. MRF, a deterministic sub-aperture magnetically assisted polishing method, has been developed to overcome many of the fundamental limitations of traditional finishing. MRF has demonstrated the ability to produce complex optical surfaces with accuracies better than 30 nm peak-to-valley (PV) and surface micro-roughness less than 1 nm rms on a wide variety of optical glasses, single crystals, and glass-ceramics. The polishing tool in MRF conforms perfectly to the optical surface, making it well suited for finishing this class of optics. A newly developed magnetically assisted finishing method, MR Jet™, addresses the challenge of finishing the inside of steep concave domes and other irregular shapes. An applied magnetic field coupled with the properties of the MR fluid allows for a stable removal rate at stand-off distances of tens of centimeters. Surface figure and roughness values similar to those of traditional MRF have been demonstrated. Combining these technologies with metrology techniques such as the Sub-aperture Stitching Interferometer (SSI®) and the Asphere Stitching Interferometer (ASI®) enables higher-precision finishing of windows and domes today, as well as the finishing of future conformal designs.
Deterministic folding: The role of entropic forces and steric specificities
NASA Astrophysics Data System (ADS)
da Silva, Roosevelt A.; da Silva, M. A. A.; Caliri, A.
2001-03-01
The inverse folding problem of proteinlike macromolecules is studied by using a lattice Monte Carlo (MC) model in which steric specificities (nearest-neighbor constraints) are included and the hydrophobic effect is treated explicitly by considering interactions between the chain and solvent molecules. Chemical attributes and steric peculiarities of the residues are encoded in a 10-letter alphabet, and a corresponding "syntax" is provided in order to write suitable sequences for the specified target structures; twenty-four target configurations, chosen in order to cover all possible values of the average contact order χ (0.2381⩽χ⩽0.4947 for this system), were encoded and analyzed. The results, obtained by MC simulations, are strongly influenced by geometrical properties of the native configuration, namely χ and the relative number φ of crankshaft-type structures: For χ<0.35 the folding is deterministic, that is, the syntax is able to encode successful sequences: The system presents larger encodability, minimum sequence-target degeneracies, and smaller characteristic folding time τf. For χ⩾0.35 the above results no longer hold: The folding success is severely reduced, showing strong correlation with φ. Additionally, the existence of distinct characteristic folding times suggests that different mechanisms are acting at the same time in the folding process. The results (all obtained from the same single model, under the same "physiological conditions") resemble some general features of the folding problem, supporting the premise that the steric specificities, in association with the entropic forces (hydrophobic effect), are basic ingredients in the protein folding process.
Deterministic and Stochastic Descriptions of Gene Expression Dynamics
NASA Astrophysics Data System (ADS)
Marathe, Rahul; Bierbaum, Veronika; Gomez, David; Klumpp, Stefan
2012-09-01
A key goal of systems biology is the predictive mathematical description of gene regulatory circuits. Different approaches are used, such as deterministic and stochastic models, models that describe cell growth and division explicitly or implicitly, and so on. Here we consider simple systems of unregulated (constitutive) gene expression and compare different mathematical descriptions systematically to obtain insight into the errors that are introduced by various common approximations, such as describing cell growth and division by an effective protein degradation term. In particular, we show that the population average of the protein content of a cell exhibits a subtle dependence on the dynamics of growth and division, the specific model for volume growth, and the age structure of the population. Nevertheless, the error made by models with implicit cell growth and division is quite small. Furthermore, we compare various models that are partially stochastic to investigate the impact of different sources of (intrinsic) noise. This comparison indicates that different sources of noise (protein synthesis, partitioning in cell division) contribute comparable amounts of noise if protein synthesis is not or only weakly bursty. If protein synthesis is very bursty, the burstiness is the dominant noise source, independent of other details of the model. Finally, we discuss two sources of extrinsic noise: cell-to-cell variations in protein content due to cells being at different stages in the division cycle, which we show to be small (for the protein concentration and, surprisingly, also for the protein copy number per cell), and fluctuations in the growth rate, which can have a significant impact.
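For the simplest case of unregulated expression with an effective degradation/dilution term, the deterministic steady state and a stochastic birth-death simulation can be compared directly (a minimal sketch with illustrative rate constants, ignoring bursts and explicit cell division):

```python
import random

K_SYN, G_DEG = 10.0, 1.0   # synthesis rate, effective degradation/dilution rate

def deterministic_mean():
    """Steady state of dp/dt = k - g*p."""
    return K_SYN / G_DEG

def gillespie_protein(t_end, seed):
    """Stochastic birth-death process: synthesis at constant rate k,
    degradation/dilution at rate g*p; returns the copy number at t_end."""
    rng = random.Random(seed)
    p, t = 0, 0.0
    while True:
        r_tot = K_SYN + G_DEG * p
        dt = rng.expovariate(r_tot)
        if t + dt > t_end:
            return p
        t += dt
        p += 1 if rng.random() * r_tot < K_SYN else -1

if __name__ == "__main__":
    samples = [gillespie_protein(t_end=20.0, seed=s) for s in range(400)]
    mean_p = sum(samples) / len(samples)
    print(f"stochastic mean ~ {mean_p:.1f}, deterministic steady state = {deterministic_mean():.1f}")
```

For this non-bursty birth-death model the stationary copy-number distribution is Poisson, so the stochastic mean agrees with the deterministic steady state while the fluctuations scale as its square root.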
Accurate deterministic solutions for the classic Boltzmann shock profile
NASA Astrophysics Data System (ADS)
Yue, Yubei
The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.
Graphics development of DCOR: Deterministic combat model of Oak Ridge
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution into a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuous (and also pixelated) battlefield from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified, and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of a given force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area while taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time-interpolated solutions must be generated to produce a sufficient number of frames for a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
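The first distribution step described above can be sketched as a greedy allocation (the gridpoint densities are hypothetical, and "largest calculated number of units" is read here as the largest density not yet represented by an icon):

```python
def distribute_units(density, n_units):
    """Greedy sketch of the first step: hand out n_units unit icons one at
    a time, each to the gridpoint with the largest remaining density."""
    per_unit = sum(density) / n_units   # density represented by one icon
    remaining = list(density)
    counts = [0] * len(density)
    for _ in range(n_units):
        i = max(range(len(remaining)), key=lambda j: remaining[j])
        counts[i] += 1
        remaining[i] -= per_unit
    return counts

# Hypothetical densities of one force type at four gridpoints.
print(distribute_units([0.4, 3.3, 1.2, 5.1], 10))  # → [1, 3, 1, 5]
```

The allocation always sums to the total unit count, and larger densities receive proportionally more icons, which is the property the animation step needs.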
Merging deterministic and probabilistic approaches to forecast volcanic scenarios
NASA Astrophysics Data System (ADS)
Peruzzo, E.; Bisconti, L.; Barsanti, M.; Flandoli, F.; Papale, P.
2009-04-01
Volcanoes are extremely complex systems, largely inaccessible to direct observation. As a consequence, many quantities that are relevant in determining the physical and chemical processes occurring at volcanoes are largely uncertain. On the other hand, the demand for eruption scenario forecasts at many hazardous volcanoes in the world is pressing, and is reflected in the development and use of increasingly complex physical models and numerical codes. Such codes are capable of accounting for the extremely complex, non-linear behaviour of volcanic processes, and for the roles of several quantities in determining volcanic scenarios and hazards. However, they often require enormous computer resources and imply long (order of days to weeks) CPU times even on the most advanced parallel computation systems available to date. As a consequence, they can hardly be used to reasonably cover the spectrum of possible conditions expected at a given volcano. For this purpose, we have started the development of a mixed deterministic-probabilistic approach with the aim of substantially reducing (from order 10,000 to 10) the number of simulations needed to adequately represent possible scenarios and their probability of occurrence, corresponding to a given set of probability distributions for the initial/boundary conditions characterizing the system. The core of the problem is to find a "best" discretization of the continuous density function describing the random variables input to the model. This is done through stochastic quantization theory (Graf and Luschgy, 2000). The application of this theory to volcanic scenario forecasting has been tested with both an oversimplified analytical model and a more complex numerical model for magma flow in volcanic conduits, the latter still running in short enough times to allow comparison with Monte Carlo simulations. The final aim is to define proper strategies and paradigms for application to more complex, time-demanding codes.
Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann
2010-12-14
A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; Falcao Salles, Joana
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about the mechanisms shifting their relative importance. To better understand the underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale-dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
A deterministic and statistical energy analysis of tyre cavity resonance noise
NASA Astrophysics Data System (ADS)
Mohamed, Zamri; Wang, Xu
2016-03-01
Tyre cavity resonance was studied using a combination of deterministic analysis and statistical energy analysis, where the deterministic part was implemented using the impedance compact mobility matrix method and the statistical part was carried out using the statistical energy analysis method. While the impedance compact mobility matrix method can offer a deterministic solution for the cavity pressure response and the compliant wall vibration velocity response in the low frequency range, the statistical energy analysis method can offer a statistical solution for the responses in the high frequency range. In the mid frequency range, a combination of the statistical energy analysis and deterministic analysis methods can identify system coupling characteristics. Both methods have been compared with results from commercial software in order to validate them. The combined analysis result has been verified against measurements from a physical tyre-cavity model. The analysis method developed in this study can be applied to other similar toroidal-shape structural-acoustic systems.
Individual-based vs deterministic models for macroparasites: host cycles and extinction.
Rosà, Roberto; Pugliese, Andrea; Villani, Alessandro; Rizzoli, Annapaola
2003-06-01
Our understanding of the qualitative dynamics of host-macroparasite systems is mainly based on deterministic models. We study here an individual-based stochastic model that incorporates the same assumptions as the classical deterministic model. Stochastic simulations, using parameter values based on some case studies, preserve many features of the deterministic model, like the average value of the variables and the approximate length of the cycles. An important difference is that, even when deterministic models yield damped oscillations, stochastic simulations yield apparently sustained oscillations. The amplitude of such oscillations may be so large as to threaten parasites' persistence. With density-dependence in parasite demographic traits, persistence increases somewhat. Allowing instead for infections from an external parasite reservoir, we found that host extinction may easily occur. However, the extinction probability is almost independent of the level of external infection over a wide intermediate parameter region. PMID:12742175
Analysis of the deterministic and stochastic SIRS epidemic models with nonlinear incidence
NASA Astrophysics Data System (ADS)
Liu, Qun; Chen, Qingmei
2015-06-01
In this paper, the deterministic and stochastic SIRS epidemic models with nonlinear incidence are introduced and investigated. For the deterministic system, the basic reproductive number R0 is obtained. Furthermore, if R0 ≤ 1, the disease-free equilibrium is globally asymptotically stable, and if R0 > 1, there is a unique endemic equilibrium which is globally asymptotically stable. For the stochastic system, we first verify that there is a unique global positive solution starting from any positive initial value. Then, when R0 > 1, we prove that stochastic perturbations may drive the disease to extinction in scenarios where the deterministic system is persistent. When R0 ≤ 1, a result on the fluctuation of the solution around the disease-free equilibrium of the deterministic model is obtained under appropriate conditions. Finally, if the intensity of the white noise is sufficiently small and R0 > 1, there is a unique stationary distribution for the stochastic system.
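A minimal sketch of the deterministic side (bilinear incidence substituted here for the paper's nonlinear incidence, and all rates illustrative) shows R0 = β/γ governing the endemic equilibrium:

```python
def sirs_step(s, i, r, beta, gamma, delta, n, dt):
    """One Euler step of a deterministic SIRS model with bilinear incidence
    (a simplification; the paper treats nonlinear incidence)."""
    new_inf = beta * s * i / n          # new infections per unit time
    ds = (-new_inf + delta * r) * dt    # susceptibles: -infection, +waning immunity
    di = (new_inf - gamma * i) * dt     # infectives: +infection, -recovery
    dr = (gamma * i - delta * r) * dt   # recovered: +recovery, -waning immunity
    return s + ds, i + di, r + dr

def run_sirs(beta=0.5, gamma=0.2, delta=0.1, n=1000.0, t_end=1000.0, dt=0.05):
    s, i, r = n - 10.0, 10.0, 0.0
    for _ in range(int(t_end / dt)):
        s, i, r = sirs_step(s, i, r, beta, gamma, delta, n, dt)
    return s, i, r

if __name__ == "__main__":
    beta, gamma = 0.5, 0.2
    print("R0 =", beta / gamma)                 # R0 = beta/gamma for this variant
    s, i, r = run_sirs(beta=beta, gamma=gamma)
    print("endemic infectives:", round(i, 1))   # analytic I* = 200 for these rates
```

With R0 = 2.5 > 1 the trajectory spirals into the endemic equilibrium (S* = N/R0), illustrating the globally stable endemic state that the stochastic perturbations in the paper can destabilize toward extinction.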
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has been previously investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider the time explicit and implicit methods for this system of ordinary differential equations and we study a Picard and Newton iteration for the solution of the implicit system. Next we solve numerically this system and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and material removal rates extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Phase conjugation with random fields and with deterministic and random scatterers
Gbur, G.; Wolf, E.
1999-01-01
The theory of distortion correction by phase conjugation, developed since the discovery of this phenomenon many years ago, applies to situations when the field that is conjugated is monochromatic and the medium with which it interacts is deterministic. In this Letter a generalization of the theory is presented that applies to phase conjugation of partially coherent waves interacting with either deterministic or random weakly scattering nonabsorbing media. © 1999 Optical Society of America.
Yildirim, Necmettin; Kazanci, Caner
2011-01-01
A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered with examples from enzyme kinetics, coupled reaction networks with oscillatory dynamics and bistability. The Yildirim-Mackey model for lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. PMID:21187231
Deterministic methods in radiation transport. A compilation of papers presented February 4--5, 1992
Rice, A.F.; Roussin, R.W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Development of site-specific earthquake response spectra for eastern US sites
Beavers, J.E.; Brock, W.R.; Hunt, R.J.; Shaffer, K.E.
1993-08-01
Site-specific earthquake, uniform-hazard response spectra have been defined for the Department of Energy Oak Ridge, Tennessee, and Portsmouth, Ohio, sites for use in evaluating existing facilities and designing new facilities. The site-specific response spectra were defined from probabilistic and deterministic seismic hazard studies following the requirements in DOE-STD-1024-92, "Guidelines for Probabilistic Seismic Hazard Curves at DOE Sites." For these two sites, the results show that site-specific uniform-hazard response spectra are slightly higher in the high-frequency range and considerably lower in the low-frequency range compared with response spectra defined for these sites in the past.
Hunt, G.; Azmy, Y.Y.
1992-10-01
DCOR is a user-friendly computer implementation of a deterministic combat model developed at ORNL. To make the interpretation of the results more intuitive, a conversion of the numerical solution to a graphic animation sequence of battle evolution is desirable. DCOR uses a coarse computational spatial mesh superimposed on the battlefield. This research is aimed at developing robust methods for computing the position of the combative units over the continuum (and also pixeled) battlefield, from DCOR's discrete-variable solution representing the density of each force type evaluated at gridpoints. Three main problems have been identified and solutions have been devised and implemented in a new visualization module of DCOR. First, there is the problem of distributing the total number of objects, each representing a combative unit of each force type, among the gridpoints at each time level of the animation. This problem is solved by distributing, for each force type, the total number of combative units, one by one, to the gridpoint with the largest calculated number of units. Second, there is the problem of distributing the number of units assigned to each computational gridpoint over the battlefield area attributed to that point. This problem is solved by distributing the units within that area by taking into account the influence of surrounding gridpoints using linear interpolation. Finally, time interpolated solutions must be generated to produce a sufficient number of frames to create a smooth animation sequence. Currently, enough frames may be generated either by direct computation via the PDE solver or by using linear programming techniques to linearly interpolate intermediate frames between calculated frames.
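The first visualization step, distributing each force type's units one by one to the gridpoint with the largest calculated count, can be sketched as follows (function and variable names are ours, not DCOR's):

```python
def distribute_units(densities, n_units):
    """Assign n_units indivisible units to gridpoints, one at a time, always
    to the gridpoint with the largest remaining (fractional) unit count."""
    remaining = list(densities)
    counts = [0] * len(densities)
    for _ in range(n_units):
        i = max(range(len(remaining)), key=lambda j: remaining[j])
        counts[i] += 1
        remaining[i] -= 1.0          # that gridpoint now claims one fewer unit
    return counts

# Three gridpoints with fractional force densities, five units to place
counts = distribute_units([3.6, 1.2, 0.2], 5)
```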
47 CFR 22.973 - Information exchange.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Information exchange. 22.973 Section 22.973... Cellular Radiotelephone Service § 22.973 Information exchange. (a) Prior notification. Public safety/CII... information to the public safety/CII licensee at least 10 business days before a new cell site is activated...
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
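As a minimal illustration of the deterministic side, the sketch below sweeps a diamond-difference discrete-ordinates solution through a purely absorbing slab along a single ordinate and checks it against the analytic attenuation; the framework in the paper discretizes space, angle, and energy in full, so this is only a one-ordinate toy:

```python
import math

def transmission_dd(sigma_t, length, n_cells, mu=1.0):
    """Uncollided transmission through a purely absorbing slab, computed by a
    diamond-difference discrete-ordinates sweep along one ordinate mu."""
    dx = length / n_cells
    tau = sigma_t * dx / (2.0 * mu)
    psi = 1.0                        # unit incoming angular flux
    for _ in range(n_cells):
        psi *= (1.0 - tau) / (1.0 + tau)   # diamond-difference cell relation
    return psi

t_num = transmission_dd(sigma_t=1.0, length=1.0, n_cells=1000)
t_exact = math.exp(-1.0)             # analytic attenuation exp(-sigma_t * L)
```

The diamond-difference relation is second-order accurate, so with 1000 cells the sweep reproduces the exponential to well below 1e-6.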
Inorganic ion exchangers for nuclear waste remediation
Clearfield, A.; Bortun, A.; Bortun, L.; Behrens, E.
1997-10-01
The objective of this work is to provide a broad spectrum of inorganic ion exchangers that can be used for a range of applications and separations involving remediation of groundwater and tank wastes. The authors intend to scale-up the most promising exchangers, through partnership with AlliedSignal Inc., to provide samples for testing at various DOE sites. While much of the focus is on exchangers for removal of Cs{sup +} and Sr{sup 2+} from highly alkaline tank wastes, especially at Hanford, the authors have also synthesized exchangers for acid wastes, alkaline wastes, groundwater, and mercury, cobalt, and chromium removal. These exchangers are now available for use at DOE sites. Many of the ion exchangers described here are new, and others are improved versions of previously known exchangers. They are generally one of three types: (1) layered compounds, (2) framework or tunnel compounds, and (3) amorphous exchangers in which a gel exchanger is used to bind a fine powder into a bead for column use. Most of these exchangers can be regenerated and used again.
Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects
NASA Astrophysics Data System (ADS)
Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume
The analysis of the correlations between characteristics of the acceleration environment and the quality of the crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations or g-jitters. To quantify these disturbances, deterministic translational plane polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach which models g-jitters in terms of a stochastic process in the form of the so-called narrow-band noise, which is designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters, the longitudinal and the radial segregation coefficients. The first one averages transversally the dopant distribution, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant at the solid-liquid interface at each instant of growth. In order to complete the description, and because the heat flux fluctuations at the interface have a direct impact on the crystal growth quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. Also, the Nusselt number is not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability of the Next Generation Nuclear Power (NGNP) project. In order to examine INL’s current prismatic reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19 column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn). A fine group cross section library based on the SHEM 281 energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal z full core solver used in this study and is based on the Green’s Function solution of the transverse integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and the nodal diffusion solver codes. The results from this study show a consistent bias of 2–3% for the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B VII graphite and U235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Application of tabu search to deterministic and stochastic optimization problems
NASA Astrophysics Data System (ADS)
Gurtuna, Ozgur
During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well-established and significant progress has been made in the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both of these two fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is a deterministic one: finding the optimal servicing tours that minimize energy and/or duration of missions for servicing satellites around Earth's orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, Tabu Search Monte Carlo (TSMC) method, is
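The core mechanics of tabu search (neighborhood scan, tabu tenure, aspiration by best-so-far) can be shown on a toy objective; the thesis applies the same heuristic to time-dependent satellite-servicing tours. The bitstring objective and all parameters below are illustrative:

```python
import random

def tabu_min(f, x0, n_iter=50, tenure=3):
    """Tabu search over bitstrings with a single-bit-flip neighborhood.

    Minimal sketch: recently flipped bits are tabu for `tenure` iterations,
    but a move that improves on the best-so-far is always allowed (aspiration).
    """
    x = list(x0)
    best, best_cost = x[:], f(x)
    tabu = {}                               # bit index -> iteration until tabu
    for it in range(n_iter):
        moves = []
        for i in range(len(x)):
            y = x[:]
            y[i] = 1 - y[i]
            c = f(y)
            if tabu.get(i, -1) < it or c < best_cost:   # aspiration criterion
                moves.append((c, i, y))
        if not moves:
            continue
        c, i, x = min(moves)                # best admissible move
        tabu[i] = it + tenure
        if c < best_cost:
            best, best_cost = x[:], c
    return best, best_cost

# Toy objective: squared distance of the number of set bits from a target
target = 6
f = lambda bits: (sum(bits) - target) ** 2
rng = random.Random(1)
x0 = [rng.randint(0, 1) for _ in range(10)]
best, best_cost = tabu_min(f, x0)
```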
A Deterministic Approach to Active Debris Removal Target Selection
NASA Astrophysics Data System (ADS)
Lidtke, A.; Lewis, H.; Armellin, R.
2014-09-01
purpose of ADR are also drawn and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
Educator Exchange Resource Guide.
ERIC Educational Resources Information Center
Garza, Cris; Rodriguez, Victor
This resource guide was developed for teachers and administrators interested in participating in intercultural and international exchange programs or starting an exchange program. An analysis of an exchange program's critical elements discusses exchange activities; orientation sessions; duration of exchange; criteria for participation; travel,…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
... Exchange, Inc. (``CHX'' or ``Exchange'') filed with the Securities and Exchange Commission (``Commission...'s Statement of the Terms of Substance of the Proposed Rule Change CHX proposes to amend Exchange... proposed rule change is available on the Exchange's Web site at...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-03
..., 2006), 71 FR 42693 (July 27, 2006) (SR-CHX-2005-34). The Exchange also proposes to require that any... Exchange, Incorporated (``Exchange'' or ``CHX'') filed with the Securities and Exchange Commission (the... proposed rule change is available on the Exchange's Web site at ( http://www.chx.com ), at the...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
..., 2013, the Chicago Stock Exchange, Inc. (``CHX'' or ``Exchange'') filed with the Securities and Exchange... Substance of the Proposed Rule Change CHX proposes to amend Exchange Rules and its Schedule of Participant..., 2013. The text of this proposed rule change is available on the Exchange's Web site at...
17 CFR 240.6a-3 - Supplemental material to be filed by exchanges.
Code of Federal Regulations, 2010 CFR
2010-04-01
... continuously on an Internet web site controlled by an exchange, in lieu of filing such information with the Commission, such exchange may: (i) Indicate the location of the Internet web site where such information...
17 CFR 240.6a-3 - Supplemental material to be filed by exchanges.
Code of Federal Regulations, 2011 CFR
2011-04-01
... continuously on an Internet web site controlled by an exchange, in lieu of filing such information with the Commission, such exchange may: (i) Indicate the location of the Internet web site where such information...
NASA Astrophysics Data System (ADS)
Szymanowski, Mariusz; Kryza, Maciej
2015-11-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. The multivariable algorithms are the most frequently applied for spatialization of air temperature, and their results in many studies are proved to be better in comparison to those obtained by various one-dimensional techniques. In most of the previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better was the quality of spatial interpolation. The main goal of the paper was to examine both above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated on different levels: from daily means to 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models: multiple linear (MLR) and geographically weighted (GWR) method, as well as their extensions to the regression-kriging form, MLRK and GWRK, respectively, were examined. Stepwise regression was used to select variables for the individual models and the cross-validation method was used to validate the results with a special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
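A minimal sketch of the validation machinery described above: a linear air-temperature model with one auxiliary variable (elevation), scored by leave-one-out cross-validated mean absolute error. The station values are fabricated from a standard lapse rate and are not the Polish data set:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for y = a + b*x (one auxiliary variable)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def loo_mae(xs, ys):
    """Leave-one-out cross-validated MAE of the linear model."""
    errs = []
    for k in range(len(xs)):
        xt = [x for i, x in enumerate(xs) if i != k]
        yt = [y for i, y in enumerate(ys) if i != k]
        a, b = fit_linear(xt, yt)
        errs.append(abs(ys[k] - (a + b * xs[k])))
    return sum(errs) / len(errs)

# Hypothetical stations: temperature falling with elevation at a standard
# lapse rate of -6.5 K per 1000 m (illustrative numbers only)
elev = [100, 250, 400, 650, 900, 1200, 1500]
temp = [20.0 - 0.0065 * h for h in elev]
mae = loo_mae(elev, temp)
```

Because the synthetic data are exactly linear, every held-out prediction is exact and the MAE vanishes; with real stations the MAE criterion decides whether an extra predictor yields a statistically significant improvement.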
Corrosive resistant heat exchanger
Richlen, Scott L.
1989-01-01
A corrosion- and erosion-resistant heat exchanger which recovers heat from a contaminated heat stream. The heat exchanger utilizes a boundary layer of innocuous gas, which is continuously replenished, to protect the heat exchanger surface from the hot contaminated gas. The innocuous gas is conveyed through ducts or perforations in the heat exchanger wall. Heat from the heat stream is transferred by radiation to the heat exchanger wall. Heat is removed from the outer heat exchanger wall by a heat recovery medium.
A new methodology for deterministic landslide risk assessment at the local scale
NASA Astrophysics Data System (ADS)
Cotecchia, F.; Santaloia, F.; Lollino, P.; Vitone, C.; Mitaritonna, G.; Parise, M.
2009-04-01
The present paper discusses the formulation of a methodology that is being developed for regional landslide risk assessment within geologically complex areas and some preliminary results of its application at the intermediate scale (i.e. between the regional and the slope scale). In particular, the methodology is the subject of an on-going multidisciplinary research project, which aims at the assessment of the landslide hazard, of the corresponding vulnerability of structures and of their exposition, involving different expertises. As such, both the landslide hazard and the structure vulnerability assessments are meant to be based upon the knowledge of the failure mechanisms and to benefit from scientific knowledge in the fields of both geotechnical engineering and structural mechanics. At the same time, the exposure of the elements at risk is to be investigated according to analyses of the socio-economical context where the risk is being evaluated. In the present paper only the work relating to landslide hazard is presented. This work aims at the further development of Quantitative Landslide Hazard Assessment, QHA, following a deterministic approach. As such, it is aimed at exporting the geo-mechanical interpretation of slope stability and landslide mechanisms from the slope scale (site-specific) to the regional scale. The results of such a methodology will be implemented in a GIS system and reported in guidelines. As concerns the landslide hazard assessment, the proposed methodology involves two interconnected working phases, the first one at regional scale and the second one at town scale. During the first phase, an analytical database of all the factors affecting the slope equilibrium is created and a geo-hydro-mechanical classification of the soil masses is defined together with the definition of the main landslide typologies present in the region. Thereafter, the connections existing among the sets of internal factors of landslides, which characterise the geo
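As a slope-scale illustration of the kind of deterministic stability evaluation such a methodology builds on, the classic infinite-slope factor of safety can be computed; the formula and parameter values are textbook assumptions, not the project's models:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, gamma_w, z, beta_deg, m):
    """Factor of safety of an infinite slope with the water table at relative
    height m (0 to 1) of the failure depth z.

    c: effective cohesion [kPa], phi: friction angle [deg],
    gamma/gamma_w: soil/water unit weights [kN/m^3], z: depth [m],
    beta: slope angle [deg].
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c + (gamma - m * gamma_w) * z * math.cos(beta) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

fs_dry = infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, gamma_w=9.81,
                           z=2.0, beta_deg=25.0, m=0.0)
fs_wet = infinite_slope_fs(c=5.0, phi_deg=30.0, gamma=19.0, gamma_w=9.81,
                           z=2.0, beta_deg=25.0, m=1.0)
```

Raising the pore-pressure ratio m lowers the factor of safety, which is the basic mechanism linking hydrological forcing to landslide hazard in deterministic assessments.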
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal) which provides a detailed description of the shape and the size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of the different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this led us to identify the most relevant variables conditioning the presence of wildfire and allowed us to generate a map of fire susceptibility based on the resulting variable importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. Results obtained by applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
Assessment of stochastic and deterministic models of 6304 quasar lightcurves from SDSS Stripe 82
NASA Astrophysics Data System (ADS)
Andrae, R.; Kim, D.-W.; Bailer-Jones, C. A. L.
2013-06-01
The optical lightcurves of many quasars show variations of tenths of a magnitude or more on timescales of months to years. This variation often cannot be described well by a simple deterministic model. We perform a Bayesian comparison of over 20 deterministic and stochastic models on 6304 quasi-stellar object (QSO) lightcurves in SDSS Stripe 82. We include the damped random walk (or Ornstein-Uhlenbeck [OU] process), a particular type of stochastic model, which recent studies have focused on. Further models we consider are single and double sinusoids, multiple OU processes, higher order continuous autoregressive processes, and composite models. We find that only 29 out of 6304 QSO lightcurves are described significantly better by a deterministic model than a stochastic one. The OU process is an adequate description of the vast majority of cases (6023). Indeed, the OU process is the best single model for 3462 lightcurves, with the composite OU process/sinusoid model being the best in 1706 cases. The latter model is the dominant one for brighter/bluer QSOs. Furthermore, a non-negligible fraction of QSO lightcurves show evidence that not only the mean is stochastic but the variance is stochastic, too. Our results confirm earlier work that QSO lightcurves can be described with a stochastic model, but place this on a firmer footing, and further show that the OU process is preferred over several other stochastic and deterministic models. Of course, there may well exist yet better (deterministic or stochastic) models, which have not been considered here.
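A damped random walk (OU process) lightcurve can be generated with the exact discretization below; the relaxation time and amplitude are illustrative choices, not Stripe 82 fits:

```python
import math
import random

def simulate_ou(n, dt, tau, sd, seed=0):
    """Exact discretization of a damped random walk (OU process) with
    relaxation time tau and asymptotic standard deviation sd."""
    rng = random.Random(seed)
    a = math.exp(-dt / tau)              # one-step autocorrelation
    x = [0.0]
    for _ in range(n - 1):
        x.append(x[-1] * a + sd * math.sqrt(1.0 - a * a) * rng.gauss(0.0, 1.0))
    return x

# Illustrative quasar-like parameters: tau ~ 100 d, amplitude ~ 0.2 mag
x = simulate_ou(n=100_000, dt=1.0, tau=100.0, sd=0.2)
mean = sum(x) / len(x)
var = sum((v - mean) ** 2 for v in x) / len(x)
```

Over many relaxation times the sample standard deviation settles near the asymptotic value sd, which is the property that lets the OU parameters be estimated from observed lightcurves.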
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. By using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease will prevail: the infective persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, then the infective disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, so that the deterministic model is extended to a system of stochastic ordinary differential equations. In the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the stochastic model. Moreover, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
Piscitella, Roger R.
1987-05-05
In a woven ceramic heat exchanger using the basic tube-in-shell design, each heat exchanger consisting of tube sheets and tube, is woven separately. Individual heat exchangers are assembled in cross-flow configuration. Each heat exchanger is woven from high temperature ceramic fiber, the warp is continuous from tube to tube sheet providing a smooth transition and unitized construction.
Deterministic seismic design and evaluation criteria to meet probabilistic performance goals
Short, S.A.; Murray, R.C.; Nelson, T.A.; Hill, J.R. (Office of Safety Appraisals)
1990-12-01
For DOE facilities across the United States, seismic design and evaluation criteria are based on probabilistic performance goals. In addition, other programs such as Advanced Light Water Reactors, New Production Reactors, and IPEEE for commercial nuclear power plants utilize design and evaluation criteria based on probabilistic performance goals. The use of probabilistic performance goals is a departure from design practice for commercial nuclear power plants which have traditionally been designed utilizing a deterministic specification of earthquake loading combined with deterministic response evaluation methods and permissible behavior limits. Approaches which utilize probabilistic seismic hazard curves for specification of earthquake loading and deterministic response evaluation methods and permissible behavior limits are discussed in this paper. Through the use of such design/evaluation approaches, it may be demonstrated that there is high likelihood that probabilistic performance goals can be achieved. 12 refs., 2 figs., 9 tabs.
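The basic step of tying a deterministic design load to a probabilistic performance goal is to invert the seismic hazard curve at the target annual exceedance probability. A sketch with a hypothetical power-law hazard curve (real DOE evaluations use site-specific curves, not this closed form):

```python
def design_motion(k0, k, target_prob):
    """Invert a power-law seismic hazard curve H(a) = k0 * a**(-k) for the
    ground motion a whose annual exceedance probability equals target_prob.

    The power-law form and constants are illustrative assumptions.
    """
    return (k0 / target_prob) ** (1.0 / k)

k0, k = 2e-3, 2.5                      # hypothetical hazard-curve constants (a in g)
a_design = design_motion(k0, k, target_prob=1e-4)   # 1e-4/yr performance goal
```

The round trip H(a_design) = target_prob confirms the inversion; stricter performance goals (smaller probabilities) map to larger deterministic design motions.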
A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration
NASA Technical Reports Server (NTRS)
Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce
2008-01-01
Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
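The contrast between a deterministic plan and a probabilistic simulation with a simple contingency rule can be sketched as follows; the success probability and contingency logic are illustrative, not the paper's models:

```python
import random

def launches_needed(n_missions, p_success, rng):
    """Count launches until n_missions succeed; contingency rule: a failed
    mission is simply reflown at the next opportunity."""
    launches = 0
    successes = 0
    while successes < n_missions:
        launches += 1
        if rng.random() < p_success:
            successes += 1
    return launches

# Deterministic plan: exactly n_missions launches. Probabilistic simulation:
n_missions, p = 5, 0.9
rng = random.Random(42)
trials = [launches_needed(n_missions, p, rng) for _ in range(10_000)]
expected = sum(trials) / len(trials)   # theory: n_missions / p ~ 5.56
```

The simulated mean always exceeds the deterministic plan, which is the kind of robustness gap the probabilistic strategic analysis is designed to expose.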
NASA Astrophysics Data System (ADS)
Chen, LiBing; Lu, Hong
2015-03-01
We show how a remote positive operator-valued measurement (POVM) can be implemented deterministically using partially entangled states. First, we present a theoretical scheme for deterministically implementing a remote and controlled POVM on any one of N qubits via a partially entangled (N + 1)-qubit Greenberger-Horne-Zeilinger (GHZ) state, in which (N - 1) administrators are included. We then design another scheme for deterministically implementing a POVM on N remote qubits via N partially entangled qubit pairs. Our schemes are designed to achieve the optimal success probabilities, i.e., those of ordinary local POVMs. In these schemes, the POVM dictates the amount of entanglement needed. Notably, such an overall treatment can save quantum resources.
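For readers unfamiliar with POVMs, the sketch below builds the standard unambiguous-discrimination POVM for two non-orthogonal qubit states and checks completeness and the optimal conclusive probability 1 - |⟨ψ1|ψ2⟩|. This is a local illustration of what a POVM is, not the paper's remote-implementation scheme:

```python
import math

def outer(v):
    """Rank-one operator |v><v| for a real 2-vector."""
    return [[v[i] * v[j] for j in range(2)] for i in range(2)]

def mat_add(a, b, s=1.0):
    """Entrywise a + s*b for 2x2 matrices."""
    return [[a[i][j] + s * b[i][j] for j in range(2)] for i in range(2)]

def expval(v, m):
    """Expectation <v| M |v> for a real vector and matrix."""
    return sum(v[i] * m[i][j] * v[j] for i in range(2) for j in range(2))

# Two non-orthogonal single-qubit states (illustrative choice)
t = math.pi / 6
psi1 = [1.0, 0.0]
psi2 = [math.cos(t), math.sin(t)]
c = math.cos(t)                      # overlap <psi1|psi2>

perp1 = [-psi1[1], psi1[0]]          # state orthogonal to psi1
perp2 = [-psi2[1], psi2[0]]          # state orthogonal to psi2

scale = 1.0 / (1.0 + c)
E1 = [[scale * x for x in row] for row in outer(perp2)]   # outcome "it was psi1"
E2 = [[scale * x for x in row] for row in outer(perp1)]   # outcome "it was psi2"
E0 = mat_add(mat_add([[1.0, 0.0], [0.0, 1.0]], E1, -1.0), E2, -1.0)  # inconclusive

p_success = expval(psi1, E1)          # conclusive rate given psi1 (= 1 - c)
```

The three elements sum to the identity (completeness), outcome E1 never fires on psi2 (unambiguity), and the conclusive probability equals the known optimum 1 - c.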
NASA Astrophysics Data System (ADS)
Daly, Peter M.; Hebenstreit, Gerald T.
2003-04-01
Deterministic source localization using matched-field processing (MFP) has yielded good results in propagation scenarios where the nonrandom model parameter input assumption is valid. In many shallow water environments, inputs to acoustic propagation models may be better represented using random distributions rather than fixed quantities. One can estimate the negative effect of random source inputs on deterministic MFP by (1) obtaining a realistic statistical representation of a signal model parameter, then (2) using the mean of the parameter as input to the MFP signal model (the so-called "replica vector"), (3) synthesizing a source signal using multiple realizations of the random parameter, and (4) estimating the source localization error by correlating the synthesized signal vector with the replica vector over a three-dimensional space. This approach allows one to quantify deterministic localization error introduced by random model parameters, including sound velocity profile, hydrophone locations, and sediment thickness and speed. [Work supported by DARPA Advanced Technology Office.]
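Steps (1)-(4) can be sketched with a phase-only "replica vector" for a vertical array and random sound-speed realizations; the toy signal model and all numbers are our assumptions, standing in for a full propagation model:

```python
import cmath
import math
import random

def replica(freq, c, depths):
    """Toy phase-only replica vector for a vertical hydrophone array
    (a stand-in for the full propagation-model replica used in MFP)."""
    n = len(depths)
    return [cmath.exp(2j * math.pi * freq * d / c) / math.sqrt(n) for d in depths]

def bartlett(r, s):
    """Normalized Bartlett correlation |r^H s|^2 of two unit vectors."""
    return abs(sum(ri.conjugate() * si for ri, si in zip(r, s))) ** 2

freq = 100.0                              # Hz (illustrative)
depths = [float(k) for k in range(20)]    # 1-m spaced hydrophones
c0 = 1500.0                               # mean sound speed -> replica vector
r = replica(freq, c0, depths)

# Steps (3)-(4): draw random sound-speed realizations, synthesize the signal,
# and correlate against the mean-parameter replica.
rng = random.Random(0)
corrs = [bartlett(r, replica(freq, rng.gauss(c0, 15.0), depths))
         for _ in range(200)]
mean_corr = sum(corrs) / len(corrs)
```

The average correlation falls below the perfect-match value of 1, and that degradation is the quantity the four-step procedure uses to bound the localization error from parameter randomness.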
Experimental demonstration on the deterministic quantum key distribution based on entangled photons.
Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu
2016-01-01
As an important resource, entanglement light source has been used in developing quantum information technologies, such as quantum key distribution (QKD). There are few experiments implementing entanglement-based deterministic QKD protocols since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified "Ping-Pong" (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entanglement light sources in real-life fiber-based quantum communications. PMID:26860582
NASA Astrophysics Data System (ADS)
Samson, E. C.; Wilson, K. E.; Newman, Z. L.; Anderson, B. P.
2016-02-01
We experimentally and numerically demonstrate deterministic creation and manipulation of a pair of oppositely charged singly quantized vortices in a highly oblate Bose-Einstein condensate (BEC). Two identical blue-detuned, focused Gaussian laser beams that pierce the BEC serve as repulsive obstacles for the superfluid atomic gas; by controlling the positions of the beams within the plane of the BEC, superfluid flow is deterministically established around each beam such that two vortices of opposite circulation are generated by the motion of the beams, with each vortex pinned to the in situ position of a laser beam. We study the vortex creation process, and show that the vortices can be moved about within the BEC by translating the positions of the laser beams. This technique can serve as a building block in future experimental techniques to create, on-demand, deterministic arrangements of few or many vortices within a BEC for precise studies of vortex dynamics and vortex interactions.
Kutkov, V; Buglova, E; McKenna, T
2011-06-01
Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency, inter alia, require that arrangements shall be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and all reasonable scenarios, to include those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material. PMID:21617296
Traffic-light boundary in the deterministic Nagel-Schreckenberg model
NASA Astrophysics Data System (ADS)
Jia, Ning; Ma, Shoufeng
2011-06-01
The characteristics of the deterministic Nagel-Schreckenberg model with traffic-light boundary conditions are investigated and elucidated in a mostly theoretical way. First, precise analytical results for the outflow are obtained for cases in which the duration of the red phase is longer than one step. Then, further results are derived and studied for cases in which the red phase equals one step. The main findings include the following. The maximum outflow is “road-length related” if the inflow is saturated; otherwise, if the inbound cars are generated stochastically, multiple theoretical outflow volumes may exist. The findings indicate that although the traffic-light boundary can be implemented in a simple and deterministic manner, the deterministic Nagel-Schreckenberg model with such a boundary has some unique and interesting behaviors.
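The deterministic Nagel-Schreckenberg update (acceleration capped by the maximum speed and the gap to the car ahead, with the randomization step removed) and a traffic-light boundary can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code; the road length, light cycle, and saturated-inflow rule are assumptions.

```python
# Deterministic Nagel-Schreckenberg model (no randomization step) with a
# traffic light at the downstream boundary. road[i] is the velocity of the
# car in cell i, or -1 if the cell is empty.
def step(road, vmax, green):
    L = len(road)
    new = [-1] * L
    out = 0
    for i in range(L):
        if road[i] < 0:
            continue
        # gap to the car ahead; a red light acts as a wall past the last cell
        gap, j = 0, i + 1
        while j < L and road[j] < 0:
            gap, j = gap + 1, j + 1
        if j == L and green:
            gap = vmax               # road ahead is clear and the light is green
        v = min(road[i] + 1, vmax, gap)  # accelerate, capped by vmax and gap
        if i + v >= L:
            out += 1                 # car crosses the stop line and leaves
        else:
            new[i + v] = v
    return new, out

vmax, L, G, R, T = 5, 30, 10, 10, 200
road, total_out = [-1] * L, 0
for t in range(T):
    green = (t % (G + R)) < G        # light cycle: G green steps, then R red steps
    road, out = step(road, vmax, green)
    total_out += out
    if road[0] < 0:                  # saturated inflow: keep the entrance occupied
        road[0] = 0
print("mean outflow per step:", total_out / T)
```

With a saturated inflow, the long-run outflow per step settles to a value determined by the light cycle and the queue-discharge dynamics at the stop line.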
Deterministic LOCC transformation of three-qubit pure states and entanglement transfer
Tajima, Hiroyasu
2013-02-15
A necessary and sufficient condition for the possibility of a deterministic local operations and classical communication (LOCC) transformation of three-qubit pure states is given. The condition shows that the three-qubit pure states are a partially ordered set parametrized by five well-known entanglement parameters and a novel parameter; the five are the concurrences C_AB, C_AC, C_BC, the tangle τ_ABC, and the fifth parameter J_5 of Acin et al. (2000) Ref. [19], while the new one is the entanglement charge Q_e. The order of the partially ordered set is defined by the possibility of a deterministic LOCC transformation from one state to another. In this sense, the present condition is an extension of Nielsen's work (Nielsen (1999) [14]) to three-qubit pure states. We also clarify the rules of transfer and dissipation of the entanglement caused by deterministic LOCC transformations. Moreover, the minimum number of measurements needed to reproduce an arbitrary deterministic LOCC transformation between three-qubit pure states is given. - Highlights: • We obtained a necessary and sufficient condition for deterministic LOCC of 3 qubits. • We clarified rules of entanglement flow caused by measurements. • We found a new parameter which is interpreted as a 'charge of entanglement'. • We gave a set of entanglements which determines whether two states are LU-equivalent or not. • Our approach to deterministic LOCC of 3 qubits may be applicable to N qubits.
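As background for the parameters named above (a standard definition, not taken from this abstract): the tangle is the residual three-way entanglement of Coffman, Kundu, and Wootters, obtained from the concurrences via

```latex
% Residual entanglement (tangle) of Coffman, Kundu and Wootters, where
% C_{A(BC)} is the concurrence between qubit A and the pair BC:
\tau_{ABC} \;=\; C_{A(BC)}^{2} \;-\; C_{AB}^{2} \;-\; C_{AC}^{2}
```

so τ_ABC measures the entanglement of A with the pair BC that is not accounted for by the pairwise concurrences.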
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
NASA Astrophysics Data System (ADS)
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modelling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Prospective testing of neo-deterministic seismic hazard scenarios for the Italian territory
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Vaccari, Franco; Kossobokov, Vladimir; Panza, Giuliano F.
2013-04-01
for the space-time identification of strong earthquakes, with algorithms for the realistic modeling of ground motion. Accordingly, a set of deterministic scenarios of ground motion at bedrock, which refers to the time interval when a strong event is likely to occur within the alerted area, can be defined by means of full waveform modeling, both at regional and local scale. CN and M8S predictions, as well as the related time-dependent ground motion scenarios associated with the alarmed areas, are regularly updated every two months since 2006. The routine application of the time-dependent NDSHA approach provides information that can be useful in assigning priorities for timely mitigation actions and, at the same time, allows for a rigorous prospective testing and validation of the proposed methodology. As an example, for sites where ground shaking values greater than 0.2 g are estimated at bedrock, further investigations can be performed taking into account the local soil conditions, to assess the performances of relevant structures, such as historical and strategic buildings. The issues related with prospective testing and validation of the time-dependent NDSHA scenarios will be discussed, illustrating the results obtained for the recent strong earthquakes in Italy, including the May 20, 2012 Emilia earthquake.
NASA Astrophysics Data System (ADS)
Mualchin, Lalliana
2011-03-01
Modern earthquake ground motion hazard mapping in California began following the 1971 San Fernando earthquake in the Los Angeles metropolitan area of southern California. Earthquake hazard assessment followed a traditional approach, later called Deterministic Seismic Hazard Analysis (DSHA) in order to distinguish it from the newer Probabilistic Seismic Hazard Analysis (PSHA). In DSHA, seismic hazard is assessed for the Maximum Credible Earthquake (MCE) magnitude of each of the known seismogenic faults within and near the state. The likely occurrence of the MCE has been assumed qualitatively by using late Quaternary and younger faults that are presumed to be seismogenic, but not when or within what time intervals the MCE may occur. The MCE is the largest or upper-bound potential earthquake in moment magnitude, and it supersedes and automatically considers all other possible earthquakes on that fault. That moment magnitude is used for estimating ground motions by applying it to empirical attenuation relationships, and for calculating ground motions as in neo-DSHA (Zuccolo et al., 2008). The first deterministic California earthquake hazard map was published in 1974 by the California Division of Mines and Geology (CDMG), which has been called the California Geological Survey (CGS) since 2002, using the best available fault information and ground motion attenuation relationships at that time. The California Department of Transportation (Caltrans) later assumed responsibility for printing the refined and updated peak acceleration contour maps, which were heavily utilized by geologists, seismologists, and engineers for many years. Some engineers involved in the siting process of large important projects, for example, dams and nuclear power plants, continued to challenge the map(s). The second edition map was completed in 1985, incorporating more faults, improving the MCE estimation method, and using new ground motion attenuation relationships from the latest published
Electrically Switched Cesium Ion Exchange
JPH Sukamto; ML Lilga; RK Orth
1998-10-23
This report discusses the results of work to develop Electrically Switched Ion Exchange (ESIX) for separations of ions from waste streams relevant to DOE site clean-up. ESIX combines ion exchange and electrochemistry to provide a selective, reversible method for radionuclide separation that lowers costs and minimizes secondary waste generation typically associated with conventional ion exchange. In the ESIX process, an electroactive ion exchange film is deposited onto a high surface area electrode, and ion uptake and elution are controlled directly by modulating the potential of the film. As a result, the production of secondary waste is minimized, since the large volumes of solution associated with elution, wash, and regeneration cycles typical of standard ion exchange are not needed for the ESIX process. The document is presented in two parts: Part I, the Summary Report, discusses the objectives of the project, describes the ESIX concept and the approach taken, and summarizes the major results; Part II, the Technology Description, provides a technical description of the experimental procedures and in-depth discussions on modeling, case studies, and cost comparisons between ESIX and currently used technologies.
AL HARMFUL ALGAL BLOOM (HAB) INFORMATION EXCHANGE
This project proposes to implement an integrated web site that will serve as an Alabama Harmful Algal Bloom (HAB) Information Exchange Network. This network will be a stand-alone site where HAB data from all agencies and research efforts in the State of Alabama will be integrate...
Ion exchange polymers for anion separations
Jarvinen, Gordon D.; Marsh, S. Fredric; Bartsch, Richard A.
1997-01-01
Anion exchange resins including at least two positively charged sites and a well-defined spacing between the positive sites are provided together with a process of removing anions or anionic metal complexes from aqueous solutions by use of such resins. The resins can be substituted poly(vinylpyridine) and substituted polystyrene.
Ion exchange polymers for anion separations
Jarvinen, G.D.; Marsh, S.F.; Bartsch, R.A.
1997-09-23
Anion exchange resins including at least two positively charged sites and a well-defined spacing between the positive sites are provided together with a process of removing anions or anionic metal complexes from aqueous solutions by use of such resins. The resins can be substituted poly(vinylpyridine) and substituted polystyrene.
NASA Astrophysics Data System (ADS)
Pellerin, L.; McPhee, D. K.
2009-05-01
We present a three-step deterministic approach to understand the relationship between borehole data and surface geophysics using audiomagnetotelluric (AMT) data. Traditionally, geoscientists have used borehole data as ground truth, but it is not clear how representative a point measurement is for laterally extensive or regional areas. Furthermore, it is unclear when and where it is valid to compare borehole and surface data. Geophysics is used to site wells, but the borehole data are often used to calibrate the geophysics, and it is the relationship between the two data sets that we quantitatively investigate here. As part of a hydrological study of the Basin and Range province, an arid, mountainous, sparsely populated region of the western United States, many AMT surveys were conducted. AMT soundings were typically collected along profiles at stations spaced roughly 200-400 m. The resulting two-dimensional resistivity models successfully imaged subsurface faults and structures down to roughly 500 m depth. These faults are a primary structural control on the hydrogeology of many valleys in this region. Borehole data, including both lithological and geophysical logs, were available from several water monitoring and testing wells close to our AMT stations. Wells were located between 10m and 1.6 km from our AMT profiles, and extended down to 600 m below the surface. Although borehole data, whether lithological or geophysical logs, have excellent vertical resolution they are essentially point source data, and there are many reasons that the borehole data may not faithfully represent the survey area. The borehole can be unfortunately sited so that it is located in an anomalous area, or problems with instrumentation can cause inaccuracies with the logs. In addition, there is a great deal of borehole data that has been poorly archived and may be difficult to decipher or use. Our approach to quantitatively compare the AMT and borehole data involves three steps: 1) One
Piscitella, R.R.
1984-07-16
This invention relates to a heat exchanger for waste heat recovery from high temperature industrial exhaust streams. In a woven ceramic heat exchanger using the basic tube-in-shell design, each heat exchanger, consisting of tube sheets and tubes, is woven separately. Individual heat exchangers are assembled in cross-flow configuration. Each heat exchanger is woven from high temperature ceramic fiber; the warp is continuous from tube to tube sheet, providing a smooth transition and unitized construction.
NASA Astrophysics Data System (ADS)
Shi, Ronghua; Liu, Shaorong; Wang, Shuo; Guo, Ying
2015-02-01
We present two deterministic entanglement purification protocols for χ-type entangled states, resorting to multiple degrees of freedom. One protocol is implemented with the spatial entanglement to distill the maximally entangled states from the mixed states, resorting to some linear optical elements. The other is implemented with the frequency entanglement for the purification. All the parties can jointly distill the maximally entangled states from the mixed states affected by the environmental noise during transmission. Both of the protocols can work in a deterministic way with a success probability of 100%, in principle. These features may make the protocols useful in practical long-distance quantum communication.
Palmer, Tim N.; O’Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173
NASA Astrophysics Data System (ADS)
Park, Junbo; Ralph, D. C.; Buhrman, R. A.
2013-12-01
We model 100 ps pulse switching dynamics of orthogonal spin transfer (OST) devices that employ an out-of-plane polarizer and an in-plane polarizer. Simulation results indicate that increasing the spin polarization ratio, C_P = P_IPP/P_OPP, results in deterministic switching of the free layer without over-rotation (360° rotation). By using spin torque asymmetry to realize an enhanced effective P_IPP, we experimentally demonstrate this behavior in OST devices in parallel to anti-parallel switching. Modeling predicts that decreasing the effective demagnetization field can substantially reduce the minimum C_P required to attain deterministic switching, while retaining a low critical switching current, I_p ≈ 500 μA.
NASA Astrophysics Data System (ADS)
Schwartz, I.; Cogan, D.; Schmidgall, E. R.; Gantz, L.; Don, Y.; Zieliński, M.; Gershoni, D.
2015-11-01
We use one single, few-picosecond-long, variably polarized laser pulse to deterministically write any selected spin state of a quantum dot confined dark exciton whose life and coherence time are six and five orders of magnitude longer than the laser pulse duration, respectively. The pulse is tuned to an absorption resonance of an excited dark exciton state, which acquires nonnegligible oscillator strength due to residual mixing with bright exciton states. We obtain a high-fidelity one-to-one mapping from any point on the Poincaré sphere of the pulse polarization to a corresponding point on the Bloch sphere of the spin of the deterministically photogenerated dark exciton.
Hybrid method of deterministic and probabilistic approaches for multigroup neutron transport problem
Lee, D.
2012-07-01
A hybrid method of deterministic and probabilistic methods is proposed to solve the Boltzmann transport equation. The new method uses a deterministic method, the Method of Characteristics (MOC), for the fast and thermal neutron energy ranges and a probabilistic method, Monte Carlo (MC), for the intermediate resonance energy range. The hybrid method, in the case of a continuous-energy problem, will be able to take advantage of the fast MOC calculation and the accurate resonance self-shielding treatment of the MC method. As a proof of principle, this paper presents the hybrid methodology applied to a multigroup form of the Boltzmann transport equation and confirms that the hybrid method can produce results consistent with the MC and MOC methods. (authors)
Deterministic single-atom excitation via adiabatic passage and Rydberg blockade
Beterov, I. I.; Tretyakov, D. B.; Entin, V. M.; Yakshina, E. A.; Ryabtsev, I. I.; MacCormick, C.; Bergamini, S.
2011-08-15
We propose to use adiabatic rapid passage with a chirped laser pulse in the strong dipole blockade regime to deterministically excite only one Rydberg atom from randomly loaded optical dipole traps or optical lattices. The chirped laser excitation is shown to be insensitive to the random number N of the atoms in the traps. Our method overcomes the problem of the √N dependence of the collective Rabi frequency, which was the main obstacle for deterministic single-atom excitation in ensembles with unknown N, and can be applied for single-atom loading of dipole traps and optical lattices.
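The √N obstacle mentioned here can be stated compactly (a standard collective-enhancement result, not derived in the abstract): under full blockade, the N atoms share at most one Rydberg excitation, and the ground state couples to the symmetric singly excited state with a collectively enhanced Rabi frequency

```latex
% Collective Rabi frequency of N blockaded atoms coupling to the
% symmetric single-excitation state |W> = N^{-1/2} \sum_i |g \dots r_i \dots g>:
\Omega_N \;=\; \sqrt{N}\,\Omega_0
```

so a fixed-area resonant π pulse is correct only for one particular N, whereas a chirped adiabatic passage transfers the population for any N.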
Deterministic Polynomial Time Equivalence between Factoring and Key-Recovery Attack on Takagi's RSA
NASA Astrophysics Data System (ADS)
Kunihiro, Noboru; Kurosawa, Kaoru
For RSA, May showed a deterministic polynomial time equivalence of computing d to factoring N(=pq). On the other hand, Takagi showed a variant of RSA such that the decryption algorithm is faster than the standard RSA, where N=p^r q while ed=1 mod (p-1)(q-1). In this paper, we show that a deterministic polynomial time equivalence also holds in this variant. The coefficient matrix T to which the LLL algorithm is applied is no longer lower triangular, and hence we develop a new technique to overcome this problem.
Electrically controlled cesium ion exchange
Lilga, M.
1996-10-01
Several sites within the DOE complex (Savannah River, Idaho, Oak Ridge and Hanford) have underground storage tanks containing high-level waste resulting from nuclear engineering activities. To facilitate final disposal of the tank waste, it is advantageous to separate and concentrate the radionuclides for final immobilization in a vitrified glass matrix. This task proposes a new approach for radionuclide separation by combining ion exchange (IX) and electrochemistry to provide a selective and economic separation method.
BOREAS TF-11 SSA-Fen Leaf Gas Exchange Data
NASA Technical Reports Server (NTRS)
Arkebauer, Timothy J.; Hall, Forrest G. (Editor); Knapp, David E. (Editor)
2000-01-01
The BOREAS TF-11 team gathered a variety of data to complement its tower flux measurements collected at the SSA-Fen site. This data set contains single-leaf gas exchange data from the SSA-Fen site during 1994 and 1995. These leaf gas exchange properties were measured for the dominant vascular plants using portable gas exchange systems. The data are stored in tabular ASCII files.
NASA Astrophysics Data System (ADS)
Li, S.
2002-05-01
Taking advantage of recent developments in groundwater modeling research and in computer, image, and graphics processing and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. The software system has the following distinct features and capabilities:
- Interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses.
- Interactive, visual, and real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow.
- Interactive statistical inference, scattered data interpolation, regression, ordinary and universal kriging, and conditional and unconditional simulation.
- Real-time, visual, and parallel conditional flow and transport simulations.
- Interactive water and contaminant mass balance analysis and visual, real-time flux updates.
- Interactive, visual, and real-time monitoring of head and flux hydrographs and concentration breakthroughs.
- Real-time modeling and visualization of aquifer transition from confined to unconfined to partially de-saturated or completely dry, and rewetting.
- Simultaneous and embedded subscale models, with automatic and real-time regional-to-local data extraction; multiple subscale flow and transport models.
- Real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily-shaped cross-sections, with simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area
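The random-walk plume-modeling capability mentioned in this record is commonly implemented as particle tracking with a Gaussian dispersion step. A minimal one-dimensional sketch follows (illustrative parameter values, not tied to the software described here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random-walk particle tracking for a 1-D advection-dispersion problem:
# each particle advects with velocity v and takes a Gaussian step whose
# variance matches the dispersion coefficient D (illustrative values).
v, D, dt, n_steps = 1.0, 0.1, 0.5, 100   # m/day, m^2/day, days, steps
x = np.zeros(5000)                        # all particles released at x = 0

for _ in range(n_steps):
    x = x + v * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(x.size)

# After t = 50 days the plume mean should be near v*t = 50 m and the
# variance near 2*D*t = 10 m^2, matching the analytical solution of the
# advection-dispersion equation.
print(f"mean {x.mean():.2f} m, variance {x.var():.2f} m^2")
```

The particle cloud's histogram approximates the concentration profile, which is why this scheme pairs naturally with real-time visualization of plumes.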
Deterministic linear-optics quantum computing based on a hybrid approach
Lee, Seung-Woo; Jeong, Hyunseok
2014-12-04
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.
RISK ESTIMATES FOR DETERMINISTIC HEALTH EFFECTS OF INHALED WEAPONS GRADE PLUTONIUM
Risk estimates for deterministic effects of inhaled weapons-grade plutonium (WG Pu) are needed to evaluate potential serious harm to: (1) U. S. Department of Energy nuclear workers from accidental or other work-place releases of WG Pu; and (2) the public from terrorist actions re...
Deterministic Chaos in Open Well-stirred Bray-Liebhafsky Reaction System
NASA Astrophysics Data System (ADS)
Kolar-Anić, Ljiljana; Vukojević, Vladana; Pejić, Nataša; Grozdić, Tomislav; Anić, Slobodan
2004-12-01
Dynamics of the Bray-Liebhafsky (BL) oscillatory reaction are analyzed in a continuously fed well-stirred tank reactor (CSTR). Deterministic chaos is found under different conditions, when temperature and acidity are chosen as control parameters. Dynamic patterns observed in real experiments are also numerically simulated.
Use of a Deterministic Macroeconomic Computer Model as a Teaching Aid in Economic Statistics.
ERIC Educational Resources Information Center
Tedford, John R.
A simple deterministic macroeconomic computer model was tested in a junior-senior level economic statistics course to demonstrate how and why some common errors arise when statistical estimation techniques are applied to economic relationships in empirical problem situations. The computer model was treated as the true universe or real-world…
Tag-mediated cooperation with non-deterministic genotype-phenotype mapping
NASA Astrophysics Data System (ADS)
Zhang, Hong; Chen, Shu
2016-01-01
Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying the classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
A small-world network derived from the deterministic uniform recursive tree by line graph operation
NASA Astrophysics Data System (ADS)
Hou, Pengfeng; Zhao, Haixing; Mao, Yaping; Wang, Zhao
2016-03-01
The deterministic uniform recursive tree (DURT) is one of the deterministic versions of the uniform recursive tree (URT). Zhang et al (2008 Eur. Phys. J. B 63 507-13) studied the properties of DURT, including its topological characteristics and spectral properties. Although DURT shows a logarithmic scaling with the size of the network, DURT is not a small-world network since its clustering coefficient is zero. Lu et al (2012 Physica A 391 87-92) proposed a deterministic small-world network by adding edges with a simple rule in each DURT iteration. In this paper, we introduce a method for constructing a new deterministic small-world network by applying the line graph operation in each DURT iteration. The line graph operation creates cliques at each node of the previous graph, and the resulting line graph possesses larger clustering coefficients. On the other hand, this operation decreases the diameter by almost one, allowing analytic solutions for several topological characteristics of the proposed model. Supported by The Ministry of Science and Technology 973 project (No. 2010CB334708); National Science Foundation of China (Nos. 61164005, 11161037, 11101232, 11461054, 11551001); The Ministry of Education scholars and innovation team support plan of Yangtze River (No. IRT1068); Qinghai Province Nature Science Foundation Project (Nos. 2012-Z-943, 2014-ZJ-907).
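The clique-creating effect of the line graph operation can be seen on a tiny example. A minimal sketch (using networkx on a star graph, not the paper's DURT construction): all edges of a star meet at the hub, so its line graph is a clique, turning a clustering coefficient of zero into one.

```python
import networkx as nx

# Line graph L(G): one node per edge of G; two nodes of L(G) are adjacent
# when the corresponding edges of G share an endpoint. Edges meeting at a
# common vertex of G therefore form a clique in L(G), which is why the
# operation raises the clustering coefficient.
G = nx.star_graph(4)             # a tree: average clustering is 0
L = nx.line_graph(G)             # the 4 edges all share the hub -> K4

print(nx.average_clustering(G))  # 0.0
print(nx.average_clustering(L))  # 1.0 (K4 is a clique)
```

The same mechanism applies at every node of a DURT iterate: edge bundles incident on a node become cliques, while shortest paths in the line graph shrink by at most one hop.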
Vernekar, R; Krüger, T
2015-09-01
We investigate the effect of particle volume fraction on the efficiency of deterministic lateral displacement (DLD) devices. DLD is a popular passive sorting technique for microfluidic applications. Yet, it has been designed for treating dilute suspensions, and its efficiency for denser samples is not well known. We perform 3D simulations based on the immersed-boundary, lattice-Boltzmann and finite-element methods to model the flow of red blood cells (RBCs) in different DLD devices. We quantify the DLD efficiency in terms of appropriate "failure" probabilities and RBC counts in designated device outlets. Our main result is that the displacement mode breaks down upon an increase of RBC volume fraction, while the zigzag mode remains relatively robust. This suggests that the separation of larger particles (such as white blood cells) from a dense RBC background is simpler than separating smaller particles (such as platelets) from the same background. The observed breakdown stems from non-deterministic particle collisions interfering with the designed deterministic nature of DLD devices. Therefore, we postulate that dense suspension effects generally hamper efficient particle separation in devices based on deterministic principles. PMID:26143149
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories: probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solving the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from Monte Carlo based codes, namely MCNPX and FLUKA.
Controlling influenza disease: Comparison between discrete time Markov chain and deterministic model
NASA Astrophysics Data System (ADS)
Novkaniza, F.; Ivana, Aldila, D.
2016-04-01
Mathematical models of respiratory disease spread using a Discrete Time Markov Chain (DTMC) and a deterministic approach, for a constant total population size, are analyzed and compared in this article. Intervention by medical treatment and the use of medical masks are included in the model as constant parameters to control the spread of influenza. Equilibrium points and the basic reproductive ratio, as the endemic criterion, together with its level sets as functions of the model parameters, are given analytically and numerically as results of the deterministic model analysis. Assuming the total human population is constant, as in the deterministic model, the number of infected people is also analyzed with the DTMC model. Since Δt → 0, we may assume that the total number of infected people changes only from i to i + 1, i - 1, or i. An approximation of the probability of an outbreak via the gambler's ruin problem is presented. We find that regardless of the value of the basic reproductive ratio ℛ0, whether larger or smaller than one, the number of infections always tends to 0 as t → ∞. Some numerical simulations comparing the deterministic and DTMC approaches are given to provide a better interpretation and understanding of the models' results.
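The deterministic/DTMC comparison can be sketched with a generic SIS-type model (hypothetical rates, not the paper's influenza model with treatment and mask parameters): the deterministic side is an ODE integrated with Euler steps, while in the DTMC the infected count i moves only to i + 1, i - 1, or i per small time step, as in the abstract.

```python
import random

# Hypothetical SIS sketch: infection rate beta, recovery rate gamma,
# constant population N. Not the paper's exact model.
def deterministic(i0, N, beta, gamma, dt, steps):
    i = float(i0)
    for _ in range(steps):
        i += (beta * i * (N - i) / N - gamma * i) * dt
    return i

def dtmc(i0, N, beta, gamma, dt, steps, rng):
    i = i0
    for _ in range(steps):
        p_up = beta * i * (N - i) / N * dt    # i -> i + 1 (new infection)
        p_down = gamma * i * dt               # i -> i - 1 (recovery)
        u = rng.random()
        if u < p_up:
            i += 1
        elif u < p_up + p_down:
            i -= 1
    return i

rng = random.Random(0)
# with beta/gamma = 3, the deterministic model settles near N * (1 - gamma/beta)
print(deterministic(5, 100, 0.3, 0.1, 0.01, 10000))
runs = [dtmc(5, 100, 0.3, 0.1, 0.01, 10000, rng) for _ in range(50)]
print(sum(runs) / len(runs))
```

Over long horizons the DTMC eventually absorbs at i = 0 regardless of ℛ0, which is the qualitative difference from the deterministic endemic equilibrium highlighted in the abstract.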
NASA Astrophysics Data System (ADS)
Tang, Zhili
2016-06-01
This paper addresses aerodynamic drag reduction of a transport wing-fuselage configuration in the transonic regime using a parallel Nash evolutionary/deterministic hybrid optimization algorithm. Two sets of parameters are used, namely global and local. It is shown that optimizing the local and global parameters separately using Nash algorithms is far more efficient than considering these variables as a whole.
Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers
Saha, Sourabh K.; Culpepper, Martin L.
2016-04-25
Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.
Deterministic methods for sensitivity and uncertainty analysis in large-scale computer models
Worley, B.A.; Oblow, E.M.; Pin, F.G.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.; Lucius, J.L.
1987-01-01
The fields of sensitivity and uncertainty analysis are dominated by statistical techniques when large-scale modeling codes are being analyzed. This paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The paper demonstrates the deterministic approach to sensitivity and uncertainty analysis as applied to a sample problem that models the flow of water through a borehole. The sample problem is used as a basis to compare the cumulative distribution function of the flow rate as calculated by the standard statistical methods and the DUA method. The DUA method gives a more accurate result based upon only two model executions compared to fifty executions in the statistical case.
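The DUA idea of propagating parameter distributions through derivative information can be sketched on a toy function (an illustrative stand-in, not the GRESS/ADGEN code or the actual borehole flow model): first-order sensitivities yield the output variance analytically, with plain Monte Carlo sampling as the statistical baseline.

```python
import random

# Illustrative sketch only: a toy stand-in model. The DUA idea is
# Var(y) ~= sum_i (dy/dx_i)^2 * Var(x_i), with derivatives evaluated at
# the parameter means, needing only a couple of model evaluations instead
# of the many runs a statistical method requires.
def model(x1, x2):
    return x1 ** 2 + 3.0 * x2

def dua_variance(mu, sigma):
    d1 = 2.0 * mu[0]   # d(model)/dx1 = 2 * x1 at the mean
    d2 = 3.0           # d(model)/dx2
    return (d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2

mu, sigma = (1.0, 2.0), (0.1, 0.2)
rng = random.Random(1)
samples = [model(rng.gauss(mu[0], sigma[0]), rng.gauss(mu[1], sigma[1]))
           for _ in range(200000)]
mean = sum(samples) / len(samples)
mc_var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
print(dua_variance(mu, sigma))  # first-order analytic variance, 0.40
print(mc_var)                   # Monte Carlo baseline, close for small sigma
```

For small input variances the two estimates agree closely; the first-order result omits only the higher-order term contributed by the nonlinearity in x1.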
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System
ERIC Educational Resources Information Center
Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.
2015-01-01
Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…
Giant exchange interaction in mixed lanthanides
Vieru, Veacheslav; Iwahara, Naoya; Ungur, Liviu; Chibotaru, Liviu F.
2016-01-01
Combining strong magnetic anisotropy with strong exchange interaction is a long-standing goal in the design of quantum magnets. Lanthanide complexes, while exhibiting very strong ionic anisotropy, usually display weak exchange coupling, amounting to only a few wavenumbers. Recently, an isostructural series of mixed-lanthanide complexes (Ln = Gd, Tb, Dy, Ho, Er) has been reported in which the exchange splitting is estimated to reach hundreds of wavenumbers. The microscopic mechanism governing the unusual exchange interaction in these compounds is revealed here by combining detailed modeling with density-functional theory and ab initio calculations. We find it to be basically kinetic and highly complex, involving non-negligible contributions up to the seventh power of the total angular momentum of each lanthanide site. The performed analysis also elucidates the origin of magnetization blocking in these compounds. Contrary to general expectations, the latter is not always favored by strong exchange interaction. PMID:27087470
Showing particles their place: deterministic colloid immobilization by gold nanomeshes.
Stelling, Christian; Mark, Andreas; Papastavrou, Georg; Retsch, Markus
2016-08-14
The defined immobilization of colloidal particles on a non-close packed lattice on solid substrates is a challenging task in the field of directed colloidal self-assembly. In this contribution the controlled self-assembly of polystyrene beads into chemically modified nanomeshes with a high particle surface coverage is demonstrated. For this, solely electrostatic interaction forces were exploited by the use of topographically shallow gold nanomeshes. Employing orthogonal functionalization, an electrostatic contrast between the glass surface and the gold nanomesh was introduced on a sub-micron scale. This surface charge contrast promotes a highly site-selective trapping of the negatively charged polystyrene particles from the liquid phase. AFM force spectroscopy with a polystyrene colloidal probe was used to rationalize this electrostatic focusing effect. It provides quantitative access to the occurring interaction forces between the particle and substrate surface and clarifies the role of the pH during the immobilization process. Furthermore, the structure of the non-close packed colloidal monolayers can be finely tuned by varying the ionic strength and geometric parameters between colloidal particles and nanomesh. Therefore one is able to specifically and selectively adsorb one or several particles into one individual nanohole. PMID:27416921
NASA Astrophysics Data System (ADS)
Paskaleva, Ivanka; Kouteva-Guentcheva, Mihaela; Vaccari, Franco; Panza, Giuliano F.
2011-03-01
This paper describes the outcome of the advanced seismic hazard and seismic risk estimates recently performed for the city of Sofia, based on the state of the art of knowledge for this site. Some major results of the neo-deterministic, scenario-based seismic hazard assessment approach (NDSHA) applied to the earthquake hazard assessment for the city of Sofia are considered. Further validations of the recently constructed synthetic strong-motion database, containing site- and seismic-source-specific ground motion time histories, are performed and discussed. Displacement and acceleration response spectra are considered. The elastic displacement response spectra and displacement demand are discussed with regard to earthquake magnitude, seismic source-to-site distance, seismic source mechanism, and local geological site conditions. The elastic response design spectrum in the standard pseudo-acceleration versus natural period (T_n) format, converted to a capacity diagram in S_a - S_d format, is discussed from the perspective of the Eurocode 8 provisions. A brief overview of the engineering applications of the seismic demand obtained using the NDSHA is supplied, and some applications of the outcome of the NDSHA procedure for engineering purposes are shown. The obtained database of ground shaking waveforms and time histories computed for the city of Sofia is used to: (1) extract maximum particle velocities; (2) calculate the space distribution of the horizontal strain factor Log10 ɛ; (3) estimate liquefaction susceptibility in terms of standard penetration test N values and initial overburden stress; (4) estimate the damage index distribution; and (5) map the distribution of the expected pipe breaks and red-tagged buildings for given scenario earthquakes, etc. The theoretically obtained database, based on the simultaneous treatment of data from many disciplines, contains data fully suitable for practical use. The proper use of this database can lead to a significant seismic
NASA Astrophysics Data System (ADS)
Doyen, G.; Drakova, D.
2015-08-01
We construct a world model consisting of a matter field living in 4 dimensional spacetime and a gravitational field living in 11 dimensional spacetime. The seven hidden dimensions are compactified within a radius estimated by reproducing the particle-wave characteristics of diffraction experiments. In the presence of matter fields the gravitational field develops localized modes with elementary excitations called gravonons which are induced by the sources (massive particles). The final world model treated here contains only gravonons and a scalar matter field. The gravonons are localized in the environment of the massive particles which generate them. The solution of the Schrödinger equation for the world model yields matter fields which are localized in the 4 dimensional subspace. The localization has the following properties: (i) There is a chooser mechanism for the selection of the localization site. (ii) The chooser selects one site on the basis of minor energy differences and differences in the gravonon structure between the sites, which at present cannot be controlled experimentally and therefore let the choice appear statistical. (iii) The changes from one localization site to a neighbouring one take place in a telegraph-signal like manner. (iv) The times at which telegraph like jumps occur depend on subtleties of the gravonon structure which at present cannot be controlled experimentally and therefore let the telegraph-like jumps appear statistical. (v) The fact that the dynamical law acts in the configuration space of fields living in 11 dimensional spacetime lets the events observed in 4 dimensional spacetime appear non-local. In this way the phenomenology of CQM is obtained without the need of introducing the process of collapse and a probabilistic interpretation of the wave function. Operators defining observables need not be introduced. All experimental findings are explained in a deterministic way as a consequence of the time development of the wave
Indiana Health Information Exchange
The Indiana Health Information Exchange is comprised of various Indiana health care institutions, established to help improve patient safety and is recognized as a best practice for health information exchange.
Deterministic Earthquake Scenarios for the City of Sofia
NASA Astrophysics Data System (ADS)
Slavov, S.; Paskaleva, I.; Kouteva, M.; Vaccari, F.; Panza, G. F.
The city of Sofia is exposed to a high seismic risk. Macroseismic intensities in the range of VIII-X (MSK) can be expected in the city. The earthquakes that can influence the hazard in Sofia either originate beneath the city or are caused by seismic sources located within a radius of 40 km. The city of Sofia is also exposed to the remote Vrancea seismic zone in Romania, to which the long-period elements of the built environment are particularly vulnerable. The high seismic risk and the lack of instrumental recordings of the regional seismicity make it necessary to use appropriate, credible earthquake scenarios and ground-motion modelling approaches for defining the seismic input for the city of Sofia. Complete synthetic seismic signals, due to several earthquake scenarios, were computed along chosen geological profiles crossing the city, applying a hybrid technique which combines the modal summation technique and finite differences. The modelling simultaneously takes into account the geotechnical properties of the site, the position and geometry of the seismic source, and the mechanical properties of the propagation medium. Acceleration, velocity and displacement time histories and related quantities of earthquake engineering interest (e.g., response spectra, ground-motion amplification along the profiles) have been supplied. The approach applied in this study allows us to obtain the definition of the seismic input at low cost, exploiting large quantities of existing data (e.g. geotechnical, geological, seismological). It may be efficiently used to estimate the ground motion for the purposes of microzonation, urban planning, retrofitting or insurance of the built environment, etc.
Anderson, Oscar A.
1978-01-01
An improved charge exchange system for substantially reducing pumping requirements of excess gas in a controlled thermonuclear reactor high energy neutral beam injector. The charge exchange system utilizes a jet-type blanket which acts simultaneously as the charge exchange medium and as a shield for reflecting excess gas.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-26
... (``Exchange'' or ``CHX'') filed with the Securities and Exchange Commission (the ``Commission'') the proposed... change is available on the Exchange's Web site at ( http://www.chx.com ), at the Exchange's Office of the... Purpose of, and Statutory Basis for, the Proposed Rule Change In its filing with the Commission, the...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-18
... the proposed rule change is available on the Exchange's Web site at http://www.directedge.com , on the Commission's Web site at http://www.sec.gov , at the principal office of the Exchange, and at the Commission... and Exchange Release No. 61698 (March 12, 2010), 75 FR 13151 (March 18, 2010) (approving File No....
Lipid exchange between membranes.
Jähnig, F
1984-01-01
The exchange of lipid molecules between vesicle bilayers in water and a monolayer forming at the water surface was investigated theoretically within the framework of thermodynamics. The total number of exchanged molecules was found to depend on the bilayer curvature as expressed by the vesicle radius and on the boundary condition for exchange, i.e., whether during exchange the radius or the packing density of the vesicles remains constant. The boundary condition is determined by the rate of flip-flop within the bilayer relative to the rate of exchange between bi- and monolayer. If flip-flop is fast, exchange is independent of the vesicle radius; if flip-flop is slow, exchange increases with the vesicle radius. Available experimental results agree with the detailed form of this dependence. When the theory was extended to exchange between two bilayers of different curvature, the direction of exchange was also determined by the curvatures and the boundary conditions for exchange. Due to the dependence of the boundary conditions on flip-flop and, consequently, on membrane fluidity, exchange between membranes may partially be regulated by membrane fluidity. PMID:6518251
Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.
2014-05-01
The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analysis, however, showed that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, the standard PSHA estimates are by far unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely the NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility to efficiently compute synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of scenarios of ground motion can be defined, both at the national and at the local scale, the latter considering the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer. At the scenario scale, quick parametric studies can be easily
Near-field evanescent waves scattered from a spatially deterministic and anisotropic medium.
Li, Jia; Chang, Liping; Wu, Zhefu
2015-06-15
The scattering of light from an anisotropic medium, which may present either spatially random or deterministic statistics, has attracted substantial interest where the measurement of structural properties of scatterers is concerned. To date, however, no study has addressed near-zone evanescent waves scattered from a spatially deterministic and anisotropic medium. In this Letter, integral expressions are derived to represent the electric fields of evanescent waves in the near-zone scattered field. In addition, the dependence of the spectral density of the scattered field on the propagation distance of the evanescent waves and on the effective radius of the scattering potential (ERSP) is shown in numerical graphs. Potential applications of our study include near-field optical microscopy and biomedical sensing. PMID:26076235
Scaling of weighted spectral distribution in deterministic scale-free networks
NASA Astrophysics Data System (ADS)
Jiao, Bo; Nie, Yuan-ping; Shi, Jian-mai; Huang, Cheng-dong; Zhou, Ying; Du, Jing; Guo, Rong-hua; Tao, Ye-rong
2016-06-01
Scale-free networks are abundant in the real world. In this paper, we investigate the scaling properties of the weighted spectral distribution in several deterministic and stochastic models of evolving scale-free networks. First, we construct a new deterministic scale-free model whose node degrees have a unified format. Using graph structure features, we derive a precise formula for the spectral metric in this model. This formula verifies that the spectral metric grows sublinearly as network size (i.e., the number of nodes) grows. Additionally, the mathematical reasoning of the precise formula theoretically provides detailed explanations for this scaling property. Finally, we validate the scaling properties of the spectral metric using some stochastic models. The experimental results show that this scaling property can be retained regardless of local world, node deleting and assortativity adjustment.
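The spectral metric can be sketched numerically under an assumption about its form (the weighted spectral distribution is taken here in its common definition, the sum of (1 - λ)^n over eigenvalues λ of the normalized Laplacian with n = 4, related to weighted 4-cycle counts; the paper's exact formula may differ in detail):

```python
import numpy as np

# Normalized Laplacian L = I - D^(-1/2) A D^(-1/2) for adjacency matrix A.
def normalized_laplacian(A):
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Weighted spectral distribution metric: sum of (1 - lambda)^n.
def wsd(A, n=4):
    lam = np.linalg.eigvalsh(normalized_laplacian(A))
    return float(np.sum((1.0 - lam) ** n))

# Two small test graphs: a 6-cycle and the complete graph K6.
ring = np.zeros((6, 6))
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1.0
K6 = np.ones((6, 6)) - np.eye(6)
print(wsd(ring))  # ~2.25: sum of cos^4(2*pi*k/6) over k = 0..5
print(wsd(K6))    # ~1.008: eigenvalues are 0 and 6/5 (multiplicity 5)
```

Evaluating this metric on growing graphs, as the paper does for its deterministic and stochastic scale-free models, exhibits the sublinear growth in network size discussed in the abstract.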
Charged quantum dot micropillar system for deterministic light-matter interactions
NASA Astrophysics Data System (ADS)
Androvitsaneas, P.; Young, A. B.; Schneider, C.; Maier, S.; Kamp, M.; Höfling, S.; Knauer, S.; Harbord, E.; Hu, C. Y.; Rarity, J. G.; Oulton, R.
2016-06-01
Quantum dots (QDs) are semiconductor nanostructures in which a three-dimensional potential trap produces an electronic quantum confinement, thus mimicking the behavior of single atomic dipole-like transitions. However, unlike atoms, QDs can be incorporated into solid-state photonic devices such as cavities or waveguides that enhance the light-matter interaction. A near unit efficiency light-matter interaction is essential for deterministic, scalable quantum-information (QI) devices. In this limit, a single photon input into the device will undergo a large rotation of the polarization of the light field due to the strong interaction with the QD. In this paper we measure a macroscopic (~6°) phase shift of light as a result of the interaction with a negatively charged QD coupled to a low-quality-factor (Q ~ 290) pillar microcavity. This unexpectedly large rotation angle demonstrates that this simple low-Q-factor design would enable near-deterministic light-matter interactions.
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely-used radiation analysis codes and powerful, more recent, and widely used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes and Monte Carlo surface meshing. The developed kit is considered a powerful and cost-effective tool in the computer-aided design process for radiation transport code users of the nuclear world, and in particular in the fields of core design and radiation analysis. (authors)
Deterministic amplification of Schrödinger cat states in circuit quantum electrodynamics
NASA Astrophysics Data System (ADS)
Joo, Jaewoo; Elliott, Matthew; Oi, Daniel K. L.; Ginossar, Eran; Spiller, Timothy P.
2016-02-01
Perfect deterministic amplification of arbitrary quantum states is prohibited by quantum mechanics, but determinism can be achieved by compromising between fidelity and amplification power. We propose a dynamical scheme for deterministically amplifying photonic Schrödinger cat states, which show great promise as a tool for quantum information processing. Our protocol is designed for strongly coupled circuit quantum electrodynamics and utilizes artificial atomic states and external microwave controls to engineer a set of optimal state transfers and achieve high fidelity amplification. We compare analytical results with full simulations of the open, driven Jaynes-Cummings model, using realistic device parameters for state-of-the-art superconducting circuits. Amplification with a fidelity of 0.9 can be achieved for sizable cat states in the presence of cavity and atomic-level decoherence. This tool could be applied to practical continuous-variable information processing for the purification and stabilization of cat states in the presence of photon losses.
NASA Astrophysics Data System (ADS)
Li, Jia; Wu, Pinghui; Chang, Liping
2016-02-01
It is commonly known that the far-zone spectrum of a scattered field can be utilized to measure the scattering potential of the medium. However, the properties of evanescent fields scattered from a medium whose dielectric susceptibility is a deterministic function have, to the best of our knowledge, not yet been investigated. Assuming that the scattering potential of a spatially deterministic medium follows a Gaussian profile, integral expressions are derived for the near-zone evanescent field generated by the scattering of light from the medium. It is noticed that the spectral density of the scattered field decays exponentially as either the propagation distance of the scattered waves or the effective radius of the scattering potential (ERSP) increases. These results are applicable to near-field biomedical imaging, where the tiny particles and molecules under consideration solely scatter evanescent waves in near-zone regions.
Lee, Sylvanus Y.; Amsden, Jason J.; Boriskina, Svetlana V.; Gopinath, Ashwin; Mitropolous, Alexander; Kaplan, David L.; Omenetto, Fiorenzo G.; Negro, Luca Dal
2010-01-01
Light scattering phenomena in periodic systems have been investigated for decades in optics and photonics. Their classical description relies on Bragg scattering, which gives rise to constructive interference at specific wavelengths along well defined propagation directions, depending on illumination conditions, structural periodicity, and the refractive index of the surrounding medium. In this paper, by engineering multifrequency colorimetric responses in deterministic aperiodic arrays of nanoparticles, we demonstrate significantly enhanced sensitivity to the presence of a single protein monolayer. These structures, which can be readily fabricated by conventional Electron Beam Lithography, sustain highly complex structural resonances that enable a unique optical sensing approach beyond the traditional Bragg scattering with periodic structures. By combining conventional dark-field scattering micro-spectroscopy and simple image correlation analysis, we experimentally demonstrate that deterministic aperiodic surfaces with engineered structural color are capable of detecting, in the visible spectral range, protein layers with thickness of a few tens of Angstroms. PMID:20566892
Di Maio, Francesco; Zio, Enrico; Smith, Curtis; Rychkov, Valentin
2015-07-06
The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPPs design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).
Scan power-aware deterministic test scheme using a low-transition linear decompressor
NASA Astrophysics Data System (ADS)
Wang, Weizheng; Shuo, Cai; Xiang, Lingyun
2015-04-01
Growing test data volume and excessive testing power are both serious challenges in the testing of very large-scale integrated circuits. This article presents a scan power-aware deterministic test method based on a new linear decompressor which is composed of a traditional linear decompressor, k-input AND gates and T flip-flops. This decompression architecture can generate the low-transition deterministic test set for a circuit under test. When applying the test patterns generated by the linear decompressor, only a few transitions occur in the scan chains, and hence the switching activity during testing decreases significantly. The entire test flow, compatible with the design, is also presented. Experimental results on several large International Symposium on Circuits and Systems'89 and International Test Conference'99 benchmark circuits demonstrate that the proposed methodology can reduce test power significantly while providing a high compression ratio with limited hardware overhead.
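The low-transition mechanism rests on a simple property: a T flip-flop toggles its output only when its input bit is 1, so a sparse stream of 1s (as produced by the AND-gated decompressor output) yields a scan sequence with few transitions. A minimal sketch (the bit stream is illustrative, not from the paper):

```python
def t_flip_flop(bits, init=0):
    """Pass a bit stream through a T flip-flop: the state toggles only
    when the input bit is 1, so sparse inputs give few output transitions."""
    out, state = [], init
    for b in bits:
        state ^= b
        out.append(state)
    return out

def count_transitions(bits):
    """Number of adjacent bit pairs that differ (scan-chain switching)."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

# A stream with two 1s: the shaped output has exactly two transitions,
# while the raw stream has four.
stream = [0, 0, 1, 0, 0, 0, 1, 0, 0, 0]
shaped = t_flip_flop(stream)
```

The number of output transitions equals the number of 1s at the flip-flop input, which is why biasing the decompressor output toward 0 directly reduces scan switching activity.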
Transmission Microscopy with Nanometer Resolution Using a Deterministic Single Ion Source
NASA Astrophysics Data System (ADS)
Jacob, Georg; Groot-Berning, Karin; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T.; Poschinger, Ulrich G.; Schmidt-Kaler, Ferdinand; Singer, Kilian
2016-07-01
We realize a single particle microscope by using deterministically extracted laser-cooled 40Ca+ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions to a spot size of 5.8 ± 1.0 nm and a minimum two-sample deviation of the beam position of 1.5 nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to a fivefold increase in the signal-to-noise ratio as compared with conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1 μm circular hole structure to a precision of 2.7 nm using only 579 probe particles.
One-step deterministic polarization-entanglement purification using spatial entanglement
Sheng Yubo; Deng Fuguo
2010-10-15
We present a one-step deterministic entanglement purification protocol with linear optics and postselection. Compared with the Simon-Pan protocol [C. Simon and J. W. Pan, Phys. Rev. Lett. 89, 257901 (2002)], this one-step protocol has some advantages. First, it can obtain a maximally entangled pair from each photon pair with only one step, instead of improving the fidelity of less-entangled photon pairs by performing the entanglement purification process repeatedly as in other protocols. Second, it works in a deterministic way, not a probabilistic one, which greatly reduces the number of entanglement resources needed. Third, it does not require the polarization state to be entangled; only spatial entanglement is needed. Moreover, it is feasible with current techniques [J. W. Pan, S. Gasparoni, R. Ursin, G. Weihs, and A. Zeilinger, Nature (London) 423, 417 (2003)]. All these advantages make this one-step protocol more convenient than others in quantum-communication applications.
Sheng Yubo; Deng Fuguo
2010-03-15
Entanglement purification is a very important element for long-distance quantum communication. Different from all the existing entanglement purification protocols (EPPs) in which two parties can only obtain some quantum systems in a mixed entangled state with a higher fidelity probabilistically by consuming quantum resources exponentially, here we present a deterministic EPP with hyperentanglement. Using this protocol, the two parties can, in principle, obtain deterministically maximally entangled pure states in polarization without destroying any less-entangled photon pair, which will improve the efficiency of long-distance quantum communication exponentially. Meanwhile, it will be shown that this EPP can be used to complete nonlocal Bell-state analysis perfectly. We also discuss this EPP in a practical transmission.
Simplified Scheme for Deterministic Synthesis of Chiral-Nematic Glassy Liquid Crystals
Wallace, J.U.; Chen, S.H.
2006-07-13
Potentially useful for the fabrication of nonabsorbing polarizers, optical notch filters and reflectors, and polarized light sources, chiral-nematic glassy liquid crystals can be synthesized by a statistical or deterministic approach. A deterministic approach is characterized by the relative ease of product separation and purification and hence is more amenable to process scale-up. Motivated by the need to minimize the effort involved in the protection and deprotection of functional groups, the present work has demonstrated the feasibility of reducing the number of synthesis steps relative to a previous synthesis scheme. The new methodology is widely applicable to the synthesis of a variety of right- and left-handed chiral-nematic glassy liquid crystals with desired phase transition temperatures.
Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity
Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L.; Bracher, David O.; Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania; Andrich, Paolo; Awschalom, David; Aharonovich, Igor
2014-12-29
The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47(λ/n)³ as well as resonant enhancement of the ZPL of an NV ensemble with Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.
Price-Dynamics of Shares and Bohmian Mechanics: Deterministic or Stochastic Model?
NASA Astrophysics Data System (ADS)
Choustova, Olga
2007-02-01
We apply the mathematical formalism of Bohmian mechanics to describe the dynamics of shares. The main distinguishing feature of the financial Bohmian model is the possibility to take into account market psychology by describing the expectations of traders by the pilot wave. We also discuss some objections (coming from the conventional financial mathematics of stochastic processes) against the deterministic Bohmian model. In particular, we address the objection that such a model contradicts the efficient market hypothesis, which is the cornerstone of modern market ideology. Another objection is of a purely mathematical nature: it is related to the quadratic variation of price trajectories. One possibility to reply to this critique is to consider the stochastic Bohm-Vigier model instead of the deterministic one. We do this in the present note.
NASA Astrophysics Data System (ADS)
Park, Min-Chul; Leportier, Thibault; Kim, Wooshik; Song, Jindong
2016-06-01
In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system providing only the pattern diffracted by the mask is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of the defect. Deterministic methods have been proposed to characterize the defect accurately, but they require a reference pattern. We propose to apply successively a phase retrieval algorithm to recover the general shape of the mask and then a deterministic approach to characterize the detected defects precisely.
Deterministic joint remote state preparation of arbitrary single- and two-qubit states
NASA Astrophysics Data System (ADS)
Chen, Na; Quan, Dong-Xiao; Xu, Fu-Fang; Yang, Hong; Pei, Chang-Xing
2015-10-01
In this paper, two novel schemes for deterministic joint remote state preparation (JRSP) of arbitrary single- and two-qubit states are proposed. A set of ingenious four-particle partially entangled states are constructed to serve as the quantum channels. In our schemes, two senders and one receiver are involved. Participants collaborate with each other and perform projective measurements on their own particles under an elaborate measurement basis. Based on their measurement results, the receiver can reestablish the target state by means of appropriate local unitary operations deterministically. Unit success probability can be achieved independent of the channel’s entanglement degree. Project supported by the National Natural Science Foundation of China (Grant Nos. 61372076 and 61301171), the 111 Project (Grant No. B08038), and the Fundamental Research Funds for the Central Universities, China (Grant No. K5051201021).
Park, Junbo; Buhrman, R. A.; Ralph, D. C.
2013-12-16
We model 100 ps pulse switching dynamics of orthogonal spin transfer (OST) devices that employ an out-of-plane polarizer and an in-plane polarizer. Simulation results indicate that increasing the spin polarization ratio, C_P = P_IPP/P_OPP, results in deterministic switching of the free layer without over-rotation (360° rotation). By using spin torque asymmetry to realize an enhanced effective P_IPP, we experimentally demonstrate this behavior in OST devices for parallel-to-antiparallel switching. Modeling predicts that decreasing the effective demagnetization field can substantially reduce the minimum C_P required to attain deterministic switching, while retaining a low critical switching current, I_p ∼ 500 μA.
Russell, A C; Hsieh, W L; Chen, K C; Heikenfeld, J
2015-01-13
Dielectrowetting effects of surface wrinkling, isotropic vs anisotropic spreading, electrode geometry, and deterministic dewetting are presented both experimentally and by 3D numerical modeling. The numerical results are generated by COMSOL in conjunction with the phase-field and electrohydrodynamic methods, including comparisons to experimental data. The dynamic behavior of the two-phase system has been accurately characterized on both the macro- and microscopic level. This work provides a deeper theoretical insight into the operating physics of dielectrowetting superspreading devices. PMID:25483348
Deterministic Bak Sneppen model: Lyapunov spectrum and avalanches as return times
NASA Astrophysics Data System (ADS)
Mendes, R. Vilela
2006-02-01
A deterministic version of the Bak-Sneppen model is studied. The role of the Lyapunov spectrum in the onset of scale-free behavior is established, as well as the measure-theoretic nature of the Bak-Sneppen self-organized state. Avalanches are interpreted as return times to a small measure set and the problem of accurate determination of the scaling exponents near the critical barrier is addressed.
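In the stochastic Bak-Sneppen model, the least-fit site and its neighbors receive fresh random fitness values each step; in a deterministic version, a chaotic map supplies the new values instead. A minimal sketch, assuming a logistic map as the deterministic source and a simple incommensurate initial condition (both are illustrative assumptions, not necessarily the exact scheme of the paper):

```python
def deterministic_bak_sneppen(n=16, steps=200):
    """Bak-Sneppen-type dynamics in which new fitness values come from a
    deterministic chaotic (logistic) map rather than a random generator."""
    logistic = lambda x: 4.0 * x * (1.0 - x)
    # Deterministic, incommensurate initial condition
    fitness = [(i * 0.618033) % 1.0 for i in range(n)]
    min_history = []
    for _ in range(steps):
        i = min(range(n), key=fitness.__getitem__)   # least-fit site
        min_history.append(fitness[i])
        # Update the minimum and its two neighbors (periodic boundary)
        for j in (i - 1, i, (i + 1) % n):
            fitness[j] = logistic(fitness[j])
    return fitness, min_history

fitness, mins = deterministic_bak_sneppen()
```

Because the update is fully deterministic, repeated runs reproduce the same sequence of minima exactly, which is the setting in which avalanches can be interpreted as return times to a small measure set.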
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems. PMID:25733885
Deterministic and stochastic control of chimera states in delayed feedback oscillator
NASA Astrophysics Data System (ADS)
Semenov, V.; Zakharova, A.; Maistrenko, Y.; Schöll, E.
2016-06-01
Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is shown.
Accuracy of probabilistic and deterministic record linkage: the case of tuberculosis
de Oliveira, Gisele Pinto; Bierrenbach, Ana Luiza de Souza; de Camargo, Kenneth Rochel; Coeli, Cláudia Medina; Pinheiro, Rejane Sobrino
2016-01-01
ABSTRACT OBJECTIVE To analyze the accuracy of deterministic and probabilistic record linkage to identify TB duplicate records, as well as the characteristics of discordant pairs. METHODS The study analyzed all TB records from 2009 to 2011 in the state of Rio de Janeiro. A deterministic record linkage algorithm was developed using a set of 70 rules, based on the combination of fragments of the key variables with or without modification (Soundex or substring). Each rule was formed by three or more fragments. The probabilistic approach required a cutoff point for the score, above which the links would be automatically classified as belonging to the same individual. The cutoff point was obtained by linkage of the Notifiable Diseases Information System – Tuberculosis database with itself, subsequent manual review, and ROC and precision-recall curves. Sensitivity and specificity were calculated for the accuracy analysis. RESULTS Accuracy ranged from 87.2% to 95.2% for sensitivity and from 99.8% to 99.9% for specificity for probabilistic and deterministic record linkage, respectively. The occurrence of missing values for the key variables and the low similarity measure for name and date of birth were mainly responsible for the failure to identify records of the same individual with the techniques used. CONCLUSIONS The two techniques showed a high level of correlation for pair classification. Although deterministic linkage identified more duplicate records than probabilistic linkage, the latter retrieved records not identified by the former. User need and experience should be considered when choosing the best technique to be used. PMID:27556963
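Deterministic rules of the kind described above combine exact and phonetically encoded fragments of key variables. A minimal sketch of one such rule in Python (the Soundex implementation is the classic algorithm; the record fields and the single rule are illustrative, not the paper's 70-rule set):

```python
def soundex(name):
    """Classic Soundex code (letter + 3 digits), which matches names
    despite spelling variations. Assumes a non-empty alphabetic name."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    result, last = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != last:
            result += code
        if ch not in "HW":          # H and W do not reset the previous code
            last = code
    return (result + "000")[:4]

def deterministic_match(rec_a, rec_b):
    """One illustrative linkage rule: records link if the names agree
    phonetically AND the dates of birth are identical."""
    return (soundex(rec_a["name"]) == soundex(rec_b["name"])
            and rec_a["dob"] == rec_b["dob"])

a = {"name": "Robert", "dob": "1980-05-17"}
b = {"name": "Rupert", "dob": "1980-05-17"}   # spelling variant, same DOB
linked = deterministic_match(a, b)
```

"Robert" and "Rupert" both encode to R163, so the rule links them; a probabilistic linker would instead weigh such agreements into a score and compare it against a cutoff.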
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano
2009-01-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345
Andreoli, Daria; Volpe, Giorgio; Popoff, Sébastien; Katz, Ori; Grésillon, Samuel; Gigan, Sylvain
2015-01-01
We present a method to measure the spectrally-resolved transmission matrix of a multiply scattering medium, thus allowing for the deterministic spatiospectral control of a broadband light source by means of wavefront shaping. As a demonstration, we show how the medium can be used to selectively focus one or many spectral components of a femtosecond pulse, and how it can be turned into a controllable dispersive optical element to spatially separate different spectral components to arbitrary positions. PMID:25965944
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
Zhu, T; Finlay, J; Mesina, C; Liu, H
2014-06-01
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV include the percentage depth dose (PDD) measured at SSD = 90 cm and the output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. The off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data include dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. The relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of a deterministic algorithm is better than that of a convolution algorithm in heterogeneous medium.
Experimental demonstration of deterministic one-way quantum computation on a NMR quantum computer
Ju, Chenyong; Zhu Jing; Peng Xinhua; Chong Bo; Zhou Xianyi; Du Jiangfeng
2010-01-15
One-way quantum computing is an important and novel approach to quantum computation. By exploiting the existing particle-particle interactions, we report an experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements, and feed-forward corrections. The findings in our experiment may shed light on future scalable one-way quantum computation.
NASA Astrophysics Data System (ADS)
Muthalif, Asan G. A.; Wahid, Azni N.; Nor, Khairul A. M.
2014-02-01
Engineering systems such as aircraft, ships and automobiles are considered built-up structures. Dynamically, they are thought of as being fabricated from many components that are classified as 'deterministic subsystems' (DS) and 'non-deterministic subsystems' (Non-DS). The response of the DS is deterministic in nature and is analysed using deterministic modelling methods such as the finite element (FE) method. The response of the Non-DS is statistical in nature and is estimated using statistical modelling techniques such as statistical energy analysis (SEA). The SEA method uses a power balance equation, in which any external input to a subsystem must be represented in terms of power. Often, the input force is taken as a point force, and the ensemble-average power delivered by a point force is already well established. However, the external input can also be applied in the form of moments exerted by a piezoelectric (PZT) patch actuator. In order to apply the SEA method to input moments, a mathematical representation of the moment generated by a PZT patch in the form of average power is needed, which is attempted in this paper. A simply supported plate with an attached PZT patch is taken as a benchmark model. An analytical solution to estimate the average power is derived using a mobility approach. The ensemble average of the power given by the PZT patch actuator to the benchmark model when subjected to structural uncertainties is also simulated using the Lagrangian method and FEA software. The analytical estimation is compared with the Lagrangian model and the FE method for validation. The effects of the size and location of the PZT actuators on the power delivered to the plate are later investigated.
Development of a hybrid deterministic/stochastic method for 1D nuclear reactor kinetics
NASA Astrophysics Data System (ADS)
Terlizzi, Stefano; Rahnema, Farzad; Zhang, Dingkang; Dulla, Sandra; Ravetto, Piero
2015-12-01
A new method has been implemented for solving the time-dependent neutron transport equation efficiently and accurately. This is accomplished by coupling the hybrid stochastic-deterministic steady-state coarse-mesh radiation transport (COMET) method [1,2] with the new predictor-corrector quasi-static method (PCQM) developed at Politecnico di Torino [3]. In this paper, the coupled method is implemented and tested in 1D slab geometry.
Visualization of a Deterministic Radiation Transport Model Using Standard Visualization Tools
James A. Galbraith; L. Eric Greenwade
2004-05-01
Output from a deterministic radiation transport code running on a CRAY SV1 is imported into a standard distributed, parallel visualization tool for analysis. Standard output files, consisting of tetrahedral meshes, are imported into the visualization tool through the creation of an application-specific plug-in module. Visualization samples are included, providing visualization of steady-state results. Different plot types and operators are utilized to enhance the analysis and assist in reporting the results of the analysis.
Finney, Charles E.; Kaul, Brian C.; Daw, C. Stuart; Wagner, Robert M.; Edwards, K. Dean; Green, Johney B.
2015-02-18
Here we review developments in the understanding of cycle-to-cycle variability in internal combustion engines, with a focus on spark-ignited and premixed combustion conditions. Much of the research on cyclic variability has focused on stochastic aspects, that is, features that can be modeled as inherently random with no short-term predictability. In some cases, models of this type appear to work very well at describing experimental observations, but the lack of predictability limits control options. Also, even when the statistical properties of the stochastic variations are known, it can be very difficult to discern their underlying physical causes and thus mitigate them. Some recent studies have demonstrated that under some conditions, cyclic combustion variations can have a relatively high degree of low-dimensional deterministic structure, which implies some degree of predictability and potential for real-time control. These deterministic effects are typically more pronounced near critical stability limits (e.g., near tipping points associated with ignition or flame propagation), such as during highly dilute fueling or near the onset of homogeneous charge compression ignition. We review recent progress in experimental and analytical characterization of cyclic variability where low-dimensional, deterministic effects have been observed. We describe some theories about the sources of these dynamical features and discuss prospects for interactive control and improved engine designs. In conclusion, taken as a whole, the research summarized here implies that the deterministic component of cyclic variability will become a pivotal issue (and potential opportunity) as engine manufacturers strive to meet aggressive emissions and fuel economy regulations in the coming decades.
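A simple first-pass diagnostic for the deterministic structure discussed above is the lag-1 autocorrelation of a cycle-resolved combustion metric: a purely stochastic (iid) model predicts a value near zero, while prior-cycle coupling produces a clearly nonzero value. The sketch below uses synthetic surrogate data, not engine measurements; the noisy logistic-type map stands in, hypothetically, for prior-cycle residual-fuel feedback. (Note that lag-1 autocorrelation alone cannot distinguish nonlinear determinism from a linear stochastic process; studies in this area use richer tools such as return maps and symbol-sequence statistics.)

```python
import random
import statistics

def lag1_autocorr(series):
    """Lag-1 autocorrelation; nonzero values indicate cycle-to-cycle
    structure that a purely stochastic (iid) model cannot capture."""
    x, y = series[:-1], series[1:]
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

rng = random.Random(0)

# Purely stochastic surrogate: iid fluctuations around a mean value
stochastic = [1.0 + rng.gauss(0, 0.1) for _ in range(2000)]

# Deterministic surrogate: each cycle feeds back into the next via a
# noisy logistic-type map (hypothetical prior-cycle coupling)
deterministic = [0.5]
for _ in range(1999):
    x = deterministic[-1]
    deterministic.append(3.6 * x * (1 - x) * (1 + rng.gauss(0, 0.01)))
```

Here `lag1_autocorr(stochastic)` is near zero, while the deterministic surrogate shows a strong (negative) lag-1 correlation because successive cycles alternate between high- and low-output bands, which is qualitatively similar to the alternating partial-burn patterns reported near dilute stability limits.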
Mesh generation and energy group condensation studies for the Jaguar deterministic transport code
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
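The energy condensation studied above is conventionally done by flux weighting: each coarse-group cross section is the fine-group cross sections averaged with the scalar flux as weight, Sigma_G = sum_g(phi_g * sigma_g) / sum_g(phi_g), which preserves the reaction rate within each coarse group. The sketch below shows this standard collapse on made-up numbers; it is not Jaguar's implementation, and the group structure and values are hypothetical.

```python
def condense(sigma, flux, coarse_groups):
    """Flux-weighted condensation of fine-group cross sections.

    sigma, flux: fine-group cross sections and scalar fluxes (same length).
    coarse_groups: (start, stop) index pairs defining each coarse group
    as a slice over the fine-group structure.

    Returns Sigma_G = sum_g(phi_g * sigma_g) / sum_g(phi_g) for each
    coarse group, preserving the group-wise reaction rate.
    """
    collapsed = []
    for start, stop in coarse_groups:
        phi = sum(flux[start:stop])
        rate = sum(f * s for f, s in zip(flux[start:stop], sigma[start:stop]))
        collapsed.append(rate / phi)
    return collapsed

# Hypothetical 4-group structure collapsed to 2 coarse groups
sigma4 = [2.0, 1.5, 0.8, 0.5]   # fine-group cross sections (barns)
flux4 = [1.0, 3.0, 2.0, 4.0]    # fine-group scalar fluxes
two_group = condense(sigma4, flux4, [(0, 2), (2, 4)])
```

With these numbers the first coarse group collapses to (1.0*2.0 + 3.0*1.5)/4.0 = 1.625 and the second to approximately 0.6; because the weighting flux comes from an approximate fine-group solution, the choice of weighting spectrum is itself part of the methodology studies the abstract describes.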
Applications of the 3-D Deterministic Transport Code Attila for Core Safety Analysis
D. S. Lucas
2004-10-01
An LDRD (Laboratory Directed Research and Development) project is ongoing at the Idaho National Engineering and Environmental Laboratory (INEEL) for applying the three-dimensional multi-group deterministic neutron transport code (Attila®) to criticality, flux, and depletion calculations of the Advanced Test Reactor (ATR). This paper discusses the model development, capabilities of Attila, generation of the cross-section libraries, comparisons to an ATR MCNP model, and future work.