Sorensen, S A; Low, J O
1998-12-01
Siting criteria are established by regulatory authorities to evaluate potential accident scenarios associated with proposed nuclear facilities. The 0.25 Sv (25 rem) siting criterion adopted in the United States has historically been based on the prevention of deterministic effects from acute, whole-body exposures. The Department of Energy has extended the applicability of this criterion to radionuclides that deliver chronic, organ-specific irradiation through the specification of a 0.25 Sv (25 rem) committed effective dose equivalent siting criterion. A methodology is developed to determine siting criteria based on the prevention of deterministic effects from inhalation intakes of radionuclides that deliver chronic, organ-specific irradiation. Revised siting criteria, expressed in terms of committed effective dose equivalent, are proposed for nuclear facilities that handle primarily plutonium compounds. The analysis determined that a siting criterion of 1.2 Sv (120 rem) committed effective dose equivalent for inhalation exposures to weapons-grade plutonium meets the historical goal of preventing deterministic effects during a facility accident scenario. The criterion also meets the Nuclear Regulatory Commission and Department of Energy Nuclear Safety Goals, provided that the frequency of the accident is sufficiently low.
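The committed effective dose equivalent referenced above is, in its simplest form, the product of an activity intake and a nuclide-specific dose coefficient. A minimal sketch follows; the dose coefficient below is a placeholder for illustration only, not an authoritative value for weapons-grade plutonium:

```python
# Committed effective dose equivalent (CEDE) from an inhalation intake:
#   CEDE [Sv] = intake [Bq] * dose_coefficient [Sv/Bq]
# The coefficient here is a PLACEHOLDER; real assessments use nuclide- and
# solubility-class-specific values from the applicable dosimetry standards.

def cede_sv(intake_bq: float, dose_coeff_sv_per_bq: float) -> float:
    """Committed effective dose equivalent for a single inhalation intake."""
    return intake_bq * dose_coeff_sv_per_bq

def exceeds_criterion(cede: float, criterion_sv: float = 1.2) -> bool:
    """Compare a computed CEDE against the proposed 1.2 Sv siting criterion."""
    return cede > criterion_sv

# Hypothetical 1e4 Bq intake with a placeholder 5e-5 Sv/Bq coefficient
dose = cede_sv(1e4, 5e-5)          # 0.5 Sv
print(dose, exceeds_criterion(dose))
```

The comparison function simply encodes the proposed criterion; frequency screening of the accident scenario, mentioned in the abstract, is a separate step.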
Grace, Matthew; Lowry, Thomas Stephen; Arnold, Bill Walter; James, Scott Carlton; Gray, Genetha Anne; Ahlmann, Michael
2008-08-01
Uncertainty in site characterization arises from a lack of data and knowledge about a site and includes uncertainty in the boundary conditions; uncertainty in the characteristics, location, and behavior of major features within an investigation area (e.g., major faults as barriers or conduits); uncertainty in the geologic structure; as well as differences in numerical implementation (e.g., 2-D versus 3-D, finite difference versus finite element, grid resolution, deterministic versus stochastic, etc.). Since the true condition at a site can never be known, selection of the best conceptual model is very difficult. In addition, limiting the understanding to a single conceptualization too early in the process, or before data can support that conceptualization, may lead to unwarranted confidence in a characterization as well as to data collection efforts and field investigations that are misdirected and/or redundant. Using a series of numerical modeling experiments, this project examined the application and use of information criteria within the site characterization process. The numerical experiments are based on models of varying complexity that were developed to represent one of two synthetically developed groundwater sites: (1) a fully hypothetical site that represented a complex, multi-layer, multi-faulted site, and (2) a site that was based on the Horonobe site in northern Japan. Each of the synthetic sites was modeled in detail to provide increasingly informative 'field' data over successive iterations to the representing numerical models. The representing numerical models were calibrated to the synthetic site data and then ranked and compared using several different information criteria approaches. Results show that, for the early phases of site characterization, low-parameterized models ranked highest while more complex models generally ranked lowest. In addition, predictive capabilities were also better with the low-parameterized models. For the
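Information criteria of the kind examined here penalize goodness of fit by parameter count, which is why sparsely-supported complex models rank poorly. A minimal sketch of ranking candidate models by AIC and BIC, assuming Gaussian residuals so the log-likelihood reduces to a function of the residual sum of squares (the model names and numbers below are hypothetical):

```python
import math

def aic(n, rss, k):
    """Akaike information criterion for least-squares fits with Gaussian
    errors: AIC = n*ln(RSS/n) + 2k, where k counts adjustable parameters."""
    return n * math.log(rss / n) + 2 * k

def bic(n, rss, k):
    """Bayesian information criterion: BIC = n*ln(RSS/n) + k*ln(n)."""
    return n * math.log(rss / n) + k * math.log(n)

# Hypothetical calibration results: (name, residual sum of squares, n_params)
models = [("low-parameterized", 12.0, 3),
          ("intermediate", 10.5, 12),
          ("complex", 10.0, 40)]
n_obs = 50
ranked = sorted(models, key=lambda m: aic(n_obs, m[1], m[2]))
print([m[0] for m in ranked])
```

With only 50 observations, the small improvement in fit from 40 parameters cannot pay for the 2k penalty, so the low-parameterized model ranks first, mirroring the early-phase result reported above.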
NASA Astrophysics Data System (ADS)
Agrinier, Pierre; Javoy, Marc
2016-09-01
Two methods are available to evaluate equilibrium isotope fractionation factors between exchange sites or phases from partial isotope exchange experiments. The first, developed by Northrop and Clayton (1966), is designed for isotope exchange between two exchange sites (hereafter, the N&C method); the second, from Zheng et al. (1994), is a refinement of the first that accounts for a third isotope-exchanging site (hereafter, the Z method). In this paper, we use a simple model of kinetic isotope exchange for a 3-exchange-site system (such as hydroxysilicates where oxygen occurs as OH and non-OH groups, as in muscovite, chlorite, or serpentine, exchanging with water or calcite) to explore the behavior of the N&C and Z methods. We show that these two methods lead to significant biases that cannot be detected with the usual graphical tests proposed by the authors. Our model shows that the biases originate because isotopes are fractionated between all of the exchanging sites. In fact, we point out that the variable mobility (or exchangeability) of isotopes in and between the exchange sites only controls the amplitude of the bias but is not essential to its production, as previously suggested. Setting two of the three exchange sites at isotopic equilibrium a priori removes the bias and thus is required for future partial exchange experiments to produce accurate and unbiased extrapolated equilibrium fractionation factors. Our modeling, applied to published partial oxygen isotope exchange experiments for 3-exchange-site systems (muscovite-calcite (Chacko et al., 1996), chlorite-water (Cole and Ripley, 1998) and serpentine-water (Saccocia et al., 2009)), shows that the equilibrium fractionation factors (reported as 1000 ln(α)) extrapolated using either the N&C or the Z method can be biased by several per mil in a few cases. These problematic cases may arise because experiments were conducted at low temperature and did not reach high
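For reference, the two-site extrapolation at issue can be stated compactly. This is a standard form of the Northrop-Clayton relation given here as background (with Δ = 1000 ln α and F the fractional approach to equilibrium), not a result of the paper:

```latex
% Partial exchange with fractional approach F relates the initial,
% final, and equilibrium fractionations:
\Delta_f - \Delta_i \;=\; F\,(\Delta_e - \Delta_i)
\qquad\Longrightarrow\qquad
\Delta_e \;=\; \Delta_i + \frac{\Delta_f - \Delta_i}{F}
% Across companion experiments with different initial compositions,
% plotting (\Delta_f - \Delta_i) against \Delta_i gives a straight line
% of slope -F and intercept F\,\Delta_e.
```

The paper's point is that in a three-site system the straight-line test can look satisfied while the extrapolated Δe is still biased.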
NASA Astrophysics Data System (ADS)
Mattie, P. D.; Knowlton, R. G.; Arnold, B. W.; Tien, N.; Kuo, M.
2006-12-01
Sandia National Laboratories (Sandia), a U.S. Department of Energy national laboratory, has over 30 years' experience in radioactive waste disposal and is providing assistance internationally in a number of areas relevant to the safety assessment of radioactive waste disposal systems. International technology transfer efforts are often hampered by small budgets, time schedule constraints, and a lack of experienced personnel in countries with small radioactive waste disposal programs. In an effort to surmount these difficulties, Sandia has developed a system for probabilistic safety assessment modeling that utilizes a combination of commercially available codes and existing legacy codes, facilitating technology transfer and making the most of limited available funding. Numerous codes developed and endorsed by the United States Nuclear Regulatory Commission, and codes developed and maintained by the United States Department of Energy, are generally available to foreign countries after import/export control and copyright requirements are addressed. From a programmatic view, it is easier to utilize existing codes than to develop new codes. From an economic perspective, it is not possible for most countries with small radioactive waste disposal programs to maintain complex software that meets the rigors of both domestic regulatory requirements and international peer review. Therefore, revitalization of deterministic legacy codes, as well as adaptation of contemporary deterministic codes, provides a credible and solid computational platform for constructing probabilistic safety assessment models. External model linkage capabilities in GoldSim and the techniques applied to facilitate this process will be presented using example applications, including Breach, Leach, and Transport-Multiple Species (BLT-MS), a U.S. NRC-sponsored code simulating release and transport of contaminants from a subsurface low-level waste disposal facility used in a cooperative technology transfer
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-16
... Notice of Proposed Information Collection: Exchange Programs Alumni Web Site Registration ACTION: Notice... . SUPPLEMENTARY INFORMATION: Title of Information Collection: Exchange Programs Alumni Web site Registration. OMB... available for public review. Abstract of proposed collection: The International Exchange Alumni Web...
Mansoor, K; Maley, M; Demir, Z; Hoffman, F
2001-08-08
Lawrence Livermore National Laboratory (LLNL) is a large Superfund site in California that is implementing an extensive groundwater remediation program. The site is underlain by a thick sequence of heterogeneous alluvial sediments. Defining groundwater flow pathways in this complex geologic setting is difficult. To better evaluate these pathways, a deterministic approach was applied to define hydrostratigraphic units (HSUs) on the basis of identifiable hydraulic behavior and contaminant migration trends. The conceptual model based on this approach indicates that groundwater flow and contaminant transport occur within packages of sediments bounded by thin, low-permeability confining layers. To aid in the development of the remediation program, a three-dimensional finite-element model was developed for two of the HSUs at LLNL. The primary objectives of this model are to test the conceptual model numerically and to provide well field management support for the large groundwater remediation system. The model was successfully calibrated to 12 years of groundwater flow and contaminant transport data. These results confirm that the thin, low-permeability confining layers within the heterogeneous alluvial sediments are the dominant hydraulic control on flow and transport. The calibrated model is currently being applied to better manage the large site-wide groundwater extraction system by optimizing the location of new extraction wells, managing pumping rates for extraction wells, and providing performance estimates for long-term planning and budgeting.
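The dominance of thin, low-permeability confining layers noted above follows directly from harmonic averaging of hydraulic conductivity for flow perpendicular to layering. A minimal sketch with hypothetical layer values:

```python
def effective_vertical_k(thicknesses, conductivities):
    """Effective hydraulic conductivity for flow perpendicular to layers:
    the thickness-weighted harmonic mean, which a single thin low-K layer
    can dominate regardless of how permeable the rest of the column is."""
    total = sum(thicknesses)
    return total / sum(b / k for b, k in zip(thicknesses, conductivities))

# 10 m of permeable alluvium (K = 10 m/d) split by a 0.5 m clay layer
# (K = 1e-4 m/d); all values hypothetical
k_eff = effective_vertical_k([5.0, 0.5, 5.0], [10.0, 1e-4, 10.0])
print(k_eff)   # ~0.002 m/d: the thin clay layer controls vertical flow
```

Because the 0.5 m clay contributes almost the entire flow resistance, the effective vertical conductivity is roughly four orders of magnitude below that of the surrounding alluvium, which is the behavior the calibrated model confirmed.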
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-28
... Notice of Proposed Information Collection: Exchange Programs Alumni Web Site Registration, DS-7006 ACTION... burden on those who are to respond. Abstract of Proposed Collection The Exchange Programs Alumni Web site requires information to process users' voluntary requests for participation in the Web site. Other...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-06
... Notice of Proposed Information Collection: Exchange Programs Alumni Web Site Registration, DS-7006 ACTION... Collection: Exchange Programs Alumni Web site Registration. OMB Control Number: None. Type of Request... collection: The State Alumni Web site requires information to process users' voluntary request...
NASA Astrophysics Data System (ADS)
Ritzi, Robert W.; Soltanian, Mohamad Reza
2015-12-01
In the method of deterministic geostatistics (sensu Isaaks and Srivastava, 1988), highly resolved data sets are used to compute sample spatial-bivariate statistics within a deterministic framework. The general goal is to observe what real, highly resolved sample spatial-bivariate correlation looks like when it is well quantified in naturally occurring sedimentary aquifers, and to understand how this correlation structure (i.e., shape and correlation range) is related to independent and physically quantifiable attributes of the sedimentary architecture. The approach has evolved through the work of Rubin (1995, 2003), Barrash and Clemo (2002), Ritzi et al. (2004, 2007, 2013), Dai et al. (2005), and Ramanathan et al. (2010). In this evolution, equations for sample statistics have been developed that allow tracking the facies types at the heads and tails of lag vectors. The goal is to observe, and thereby understand, how aspects of the sedimentary architecture affect the well-supported sample statistics. The approach has been used to study heterogeneity at a number of sites, representing a variety of depositional environments, with highly resolved data sets. What have we learned? We offer and support the opinion that the single most important insight derived from these studies is that the structure of spatial-bivariate correlation is essentially the cross-transition probability structure, determined by the sedimentary architecture. More than one scale of hierarchical sedimentary architecture has been represented in these studies, and a hierarchy of cross-transition probability structures was found to define the correlation structure in all cases. This insight allows decomposing contributions from different scales of the sedimentary architecture, and has led to a more fundamental understanding of mass transport processes, including mechanical dispersion of solutes within aquifers and the time-dependent retardation of reactive solutes. These processes can now be
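The cross-transition probabilities described above can be estimated directly from indicator data along a line. A minimal sketch computing the sample transition probability t_jk(h) from a one-dimensional facies sequence (the sequence is synthetic):

```python
def transition_probability(sequence, lag, j, k):
    """Sample probability that facies k occurs at the tail of a lag-h vector
    whose head lies in facies j: t_jk(h) = N_jk(h) / N_j(h), where N_j(h)
    counts heads in facies j and N_jk(h) counts j-to-k pairs at that lag."""
    pairs = [(sequence[i], sequence[i + lag]) for i in range(len(sequence) - lag)]
    n_j = sum(1 for a, _ in pairs if a == j)
    n_jk = sum(1 for a, b in pairs if a == j and b == k)
    return n_jk / n_j if n_j else 0.0

# Synthetic facies log: sand (S) bodies separated by mud (M)
log = list("SSSSMMSSSMMMSSSSMM")
print(transition_probability(log, 1, "S", "M"))  # chance of S->M at unit lag
```

Sweeping the lag and tabulating t_jk(h) for all facies pairs yields the cross-transition probability structure that, per the studies cited above, essentially is the spatial-bivariate correlation structure.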
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-08
... Notice of Proposed Information Collection: Exchange Programs Alumni Web Site Registration ACTION: Notice.... ADDRESSES: You may submit comments by any of the following methods: Web: Persons with access to the Internet... of Information Collection: Exchange Programs Alumni Web site Registration OMB Control Number:...
Deterministic Walks with Choice
Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.
2014-01-10
This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
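A standard example of such deterministic movement is the rotor-router walk, in which each node forwards the token along its outgoing directions in a fixed cyclic order. A minimal sketch on an n x n toroidal grid; the specific update rule here is illustrative, not the paper's own:

```python
def rotor_walk(n, steps, start=(0, 0)):
    """Deterministic rotor-router walk on an n x n torus: each node cycles
    through the directions N, E, S, W in fixed order, so the trajectory is
    fully determined by the start node and rotor configuration."""
    directions = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # N, E, S, W
    rotor = {}              # node -> index of the next direction to use
    pos = start
    visited = {start}
    for _ in range(steps):
        i = rotor.get(pos, 0)
        rotor[pos] = (i + 1) % 4          # advance this node's rotor
        dr, dc = directions[i]
        pos = ((pos[0] + dr) % n, (pos[1] + dc) % n)
        visited.add(pos)
    return pos, len(visited)

pos, n_visited = rotor_walk(4, 32)
print(pos, n_visited)
```

Running the walk twice from the same state reproduces the trajectory exactly, which is the property that distinguishes these deterministic analogues from random walks; "choice" in the paper's sense would replace the fixed rotor order with a locally informed selection rule.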
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-25
... AGENCY Settlement Agreement for Recovery of Past Response Costs Colorado Bumper Exchange Site, Pueblo... United States has at this Site for Past Response Costs, as those terms are defined in the Settlement...,000.00 to EPA in settlement of its liability for Past Response Costs incurred at the Site. In...
Processes Impacting Atmosphere-Surface Exchanges at Arctic Terrestrial Sites
NASA Astrophysics Data System (ADS)
Persson, Ola; Grachev, Andrey; Konopleva, Elena; Cox, Chris; Stone, Robert; Crepinsek, Sara; Shupe, Matthew; Uttal, Taneil
2015-04-01
Surface energy fluxes are key to the annual cycle of near-surface and soil temperature and biologic activity in the Arctic. While these energy fluxes are undoubtedly changing to produce the changes observed in the Arctic ecosystem over the last few decades, measurements have generally not been available to quantify which processes regulate these fluxes and what determines the characteristics of these annual cycles. The U.S. National Oceanic and Atmospheric Administration has established, or contributed to the establishment of, several terrestrial "supersites" around the perimeter of the Arctic Ocean at which detailed measurements of atmospheric structure, surface fluxes, and soil thermal properties are being made. These sites include Barrow, Alaska; Eureka and Alert, Canada; and Tiksi, Russia. Atmospheric structure measurements vary, but include radiosoundings at all sites and remote sensing of clouds at two sites. Additionally, fluxes of sensible heat and momentum are measured at all of the sites, while fluxes of moisture and CO2 are measured at two of the sites. Soil temperatures are also measured in the upper 120 cm at all sites, which is deep enough to define the soil active layer. The sites have been operating for between 3 years (Tiksi) and 24 years (Barrow). While all sites are located north of 71° N, the summer vegetation ranges from lush tundra grasses to rocky soils with little vegetation. This presentation will illustrate some of the atmospheric processes that are key to determining the annual energy and temperature cycles at these sites, and some of the key characteristics that lead to differences in, for instance, the length of the summer soil active layer between the sites. Atmospheric features and processes such as cloud characteristics, snowfall, downslope wind events, and sea breezes have impacts on the annual energy cycle. The presence of a "zero curtain" period, when autumn surface temperature remains approximately constant at the freezing point
31 CFR 330.8 - Payment or redemption-exchange by a TRS Site.
Code of Federal Regulations, 2012 CFR
2012-07-01
... with the instructions set forth in 31 CFR part 321. The transmittals must be accompanied by appropriate... TRS Site. 330.8 Section 330.8 Money and Finance: Treasury Regulations Relating to Money and Finance... SHARES) § 330.8 Payment or redemption-exchange by a TRS Site. Specially endorsed securities that an...
31 CFR 330.8 - Payment or redemption-exchange by a TRS Site.
Code of Federal Regulations, 2014 CFR
2014-07-01
... with the instructions set forth in 31 CFR part 321. The transmittals must be accompanied by appropriate... TRS Site. 330.8 Section 330.8 Money and Finance: Treasury Regulations Relating to Money and Finance... SHARES) § 330.8 Payment or redemption-exchange by a TRS Site. Specially endorsed securities that an...
31 CFR 330.8 - Payment or redemption-exchange by a TRS Site.
Code of Federal Regulations, 2013 CFR
2013-07-01
... with the instructions set forth in 31 CFR part 321. The transmittals must be accompanied by appropriate... TRS Site. 330.8 Section 330.8 Money and Finance: Treasury Regulations Relating to Money and Finance... SHARES) § 330.8 Payment or redemption-exchange by a TRS Site. Specially endorsed securities that an...
Spires, Renee; Punch, Timothy; McCabe, Daniel
2009-02-11
The Department of Energy (DOE) has developed, modeled, and tested several different ion exchange media and column designs for cesium removal. One elutable resin and one non-elutable resin were considered for this salt processing application. Deployment of non-elutable Crystalline Silicotitanate and elutable Resorcinol Formaldehyde in several different column configurations was assessed in a formal Systems Engineering Evaluation (SEE). Salt solutions were selected that would allow a grouping of non-compliant tanks to be closed. Tests were run with the elutable resin to determine compatibility with the resin configuration required for an in-tank ion exchange system. Models were run to estimate the ion exchange cycles required with the two resins in several column configurations. Material balance calculations were performed to estimate the impact on the High Level Waste (HLW) system at the Savannah River Site (SRS). Conceptual process diagrams were used to support the hazard analysis. Data from the hazard analysis were used to determine the relative impact on safety. This report discusses the technical inputs, SEE methods, results, and the path forward to complete the technical maturation of ion exchange.
Deterministic uncertainty analysis
Worley, B.A.
1987-01-01
Uncertainties of computer results are of primary interest in applications such as high-level waste (HLW) repository performance assessment in which experimental validation is not possible or practical. This work presents an alternate deterministic approach for calculating uncertainties that has the potential to significantly reduce the number of computer runs required for conventional statistical analysis. 7 refs., 1 fig.
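A common deterministic alternative to statistical sampling is first-order sensitivity propagation: differentiate the response with respect to each input and combine the sensitivities with the input standard deviations, at the cost of one model run per input rather than thousands of samples. A minimal sketch using finite-difference sensitivities; the response function below is a stand-in for a real performance-assessment code, not the method of the cited report:

```python
def first_order_uncertainty(f, x, sigmas, h=1e-6):
    """Propagate input standard deviations through f via first-order
    sensitivities: var(y) ~= sum_i (df/dx_i)^2 * sigma_i^2.
    Requires one extra model evaluation per input (forward differences)."""
    y0 = f(x)
    var = 0.0
    for i, s in enumerate(sigmas):
        xp = list(x)
        xp[i] += h
        dfdx = (f(xp) - y0) / h        # forward-difference sensitivity
        var += (dfdx * s) ** 2
    return var ** 0.5

# Stand-in response: y = 2*a + 3*b, with sigma_a = 0.1, sigma_b = 0.2
sigma_y = first_order_uncertainty(lambda v: 2 * v[0] + 3 * v[1],
                                  [1.0, 1.0], [0.1, 0.2])
print(sigma_y)   # ~sqrt((2*0.1)**2 + (3*0.2)**2) ~= 0.632
```

For a linear response the first-order result is exact; for nonlinear codes it is an approximation valid near the nominal point, which is the usual caveat for derivative-based uncertainty analysis.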
ERIC Educational Resources Information Center
Mital, Monika; Israel, D.; Agarwal, Shailja
2010-01-01
Purpose: The purpose of this paper is to examine the mediating effect of trust on the relationship between the type of information exchange (IE) and information disclosure (ID) on social networking web sites (SNWs). Design/methodology/approach: Constructs were developed for type of IE and trust. To understand the mediating role of trust a…
"Actually, I Wanted to Learn": Study-Related Knowledge Exchange on Social Networking Sites
ERIC Educational Resources Information Center
Wodzicki, Katrin; Schwammlein, Eva; Moskaliuk, Johannes
2012-01-01
Social media open up multiple options to add a new dimension to learning and knowledge processes. Particularly, social networking sites allow students to connect formal and informal learning settings. Students can find like-minded people and organize informal knowledge exchange for educational purposes. However, little is known about in which way…
Deterministic hierarchical networks
NASA Astrophysics Data System (ADS)
Barrière, L.; Comellas, F.; Dalfó, C.; Fiol, M. A.
2016-06-01
It has been shown that many networks associated with complex systems are small-world (they have both a large local clustering coefficient and a small diameter) and also scale-free (the degrees are distributed according to a power law). Moreover, these networks are very often hierarchical, as they describe the modularity of the systems that are modeled. Most of the studies for complex networks are based on stochastic methods. However, a deterministic method, with an exact determination of the main relevant parameters of the networks, has proven useful. Indeed, this approach complements and enhances the probabilistic and simulation techniques and, therefore, it provides a better understanding of the modeled systems. In this paper we find the radius, diameter, clustering coefficient and degree distribution of a generic family of deterministic hierarchical small-world scale-free networks that has been considered for modeling real-life complex systems.
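A deterministic hierarchical construction in the spirit of the networks analyzed above can be sketched as follows. This particular replication rule follows the well-known Ravasz-Barabási scheme and is used here only as an illustration, not as the generic family studied in the paper: start from a complete graph, make copies, and wire the peripheral nodes of each copy to the original hub.

```python
def hierarchical_network(levels, base=5):
    """Deterministic hierarchical graph (Ravasz-Barabasi-style): at each
    level, make (base - 1) copies of the current graph and connect every
    peripheral node of the copies to the root hub (node 0).
    Returns an adjacency dict of sets."""
    adj = {i: set(range(base)) - {i} for i in range(base)}   # complete graph
    periphery = list(range(1, base))
    for _ in range(levels):
        n = len(adj)
        snapshot = {u: set(vs) for u, vs in adj.items()}
        new_periphery = []
        for c in range(1, base):              # replicate the current graph
            off = c * n
            for u, vs in snapshot.items():
                adj[u + off] = {v + off for v in vs}
            new_periphery.extend(p + off for p in periphery)
        for p in new_periphery:               # wire copy peripheries to hub
            adj[0].add(p)
            adj[p].add(0)
        periphery = new_periphery
    return adj

net = hierarchical_network(1)                 # one replication step
hub_degree = len(net[0])
print(len(net), hub_degree)                   # 25 20
```

Because the construction is deterministic, quantities such as the exact degree sequence, clustering coefficient, and diameter can be computed in closed form at every level, which is precisely the advantage over stochastic models that the abstract emphasizes.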
NASA Astrophysics Data System (ADS)
Beyer, C.; Höper, H.
2015-04-01
During the last decades an increasing area of drained peatlands has been rewetted. Especially in Germany, rewetting is the principal treatment on cutover sites when peat extraction is finished. The objectives are bog restoration and the reduction of greenhouse gas (GHG) emissions. The first sites were rewetted in the 1980s, so there is a good opportunity to study long-term effects of rewetting on greenhouse gas exchange, which has not been done so far on temperate cutover peatlands. Moreover, Sphagnum cultivation may become a new way to use cutover and agriculturally used peatlands, as it permits the economical use of bogs under wet conditions. The climate impact of such measures has not been studied yet. We conducted a field study on the exchange of carbon dioxide, methane and nitrous oxide at three rewetted sites along a gradient from dry to wet conditions and at a Sphagnum cultivation site in NW Germany over the course of more than 2 years. Gas fluxes were measured using transparent and opaque closed chambers. Ecosystem respiration (CO2) and net ecosystem exchange (CO2) were modelled at high temporal resolution, and measured and modelled values agree well. Annually cumulated gas flux rates, net ecosystem carbon balances (NECB) and global warming potential (GWP) balances were determined. The annual net ecosystem exchange (CO2) varied strongly at the rewetted sites (from -201.7 ± 126.8 to 29.7 ± 112.7 g CO2-C m-2 a-1) due to differing weather conditions, water levels and vegetation. The Sphagnum cultivation site was a sink of CO2 (-118.8 ± 48.1 and -78.6 ± 39.8 g CO2-C m-2 a-1). The annual CH4 balances ranged between 16.2 ± 2.2 and 24.2 ± 5.0 g CH4-C m-2 a-1 at two inundated sites, while one rewetted site with a comparatively low water level and the Sphagnum farming site showed CH4 fluxes close to 0. The net N2O fluxes were low and not significantly different between the four sites. The annual NECB was between -185.5 ± 126.9 and 49
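GWP balances of the kind reported above combine the three gas fluxes in CO2 equivalents. A minimal sketch; the GWP100 factors below are typical IPCC-style values quoted as assumptions, and the mass conversions from element-based fluxes (C, N) to full molecules are included explicitly:

```python
def gwp_balance(co2_c, ch4_c, n2o_n, gwp_ch4=28.0, gwp_n2o=265.0):
    """Annual GWP balance in g CO2-eq m-2 a-1 from element-based fluxes
    (g CO2-C, g CH4-C, g N2O-N per m2 per year). The GWP100 factors are
    assumed AR5-style values; a real study would cite its own."""
    co2 = co2_c * 44.0 / 12.0          # C  -> CO2
    ch4 = ch4_c * 16.0 / 12.0          # C  -> CH4
    n2o = n2o_n * 44.0 / 28.0          # N  -> N2O
    return co2 + gwp_ch4 * ch4 + gwp_n2o * n2o

# A rewetted site: CO2 sink but substantial CH4 source (values hypothetical)
print(gwp_balance(-100.0, 20.0, 0.0))  # positive: CH4 outweighs the CO2 sink
```

The example illustrates the central tension in rewetting: a site can be a net carbon sink (negative NECB) yet still have a warming GWP balance because of methane's much larger per-gram radiative effect.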
Boltz, J.C.
1992-09-01
EXCHANGE is published monthly by the Idaho National Engineering Laboratory (INEL), a multidisciplinary facility operated for the US Department of Energy (DOE). The purpose of EXCHANGE is to inform computer users about recent changes and innovations in both the mainframe and personal computer environments and how these changes can affect work being performed at DOE facilities.
Membrane Contact Sites: Complex Zones for Membrane Association and Lipid Exchange
Quon, Evan; Beh, Christopher T.
2015-01-01
Lipid transport between membranes within cells involves vesicle and protein carriers, but as agents of nonvesicular lipid transfer, the role of membrane contact sites has received increasing attention. As zones for lipid metabolism and exchange, various membrane contact sites mediate direct associations between different organelles. In particular, membrane contact sites linking the plasma membrane (PM) and the endoplasmic reticulum (ER) represent important regulators of lipid and ion transfer. In yeast, cortical ER is stapled to the PM through membrane-tethering proteins, which establish a direct connection between the membranes. In this review, we consider passive and facilitated models for lipid transfer at PM–ER contact sites. Besides the tethering proteins, we examine the roles of an additional repertoire of lipid and protein regulators that prime and propagate PM–ER membrane association. We conclude that instead of being simple mediators of membrane association, regulatory components of membrane contact sites have complex and multilayered functions. PMID:26949334
Chengjiang Mao
1996-12-31
In typical AI systems, we employ so-called non-deterministic reasoning (NDR), which resorts to systematic search with backtracking in the search spaces defined by knowledge bases (KBs). A notable property of NDR is that it facilitates programming, especially for hard AI problems such as natural language processing, where it is difficult to find algorithms that tell computers what to do at every step. However, the poor efficiency of NDR is still an open problem. Our work aims at overcoming this efficiency problem.
The Deterministic Information Bottleneck
NASA Astrophysics Data System (ADS)
Strouse, D. J.; Schwab, David
2015-03-01
A fundamental and ubiquitous task that all organisms face is prediction of the future based on past sensory experience. Since an individual's memory resources are limited and costly, however, there is a tradeoff between memory cost and predictive payoff. The information bottleneck (IB) method (Tishby, Pereira, & Bialek 2000) formulates this tradeoff as a mathematical optimization problem using an information theoretic cost function. IB encourages storing as few bits of past sensory input as possible while selectively preserving the bits that are most predictive of the future. Here we introduce an alternative formulation of the IB method, which we call the deterministic information bottleneck (DIB). First, we argue for an alternative cost function, which better represents the biologically-motivated goal of minimizing required memory resources. Then, we show that this seemingly minor change has the dramatic effect of converting the optimal memory encoder from stochastic to deterministic. Next, we propose an iterative algorithm for solving the DIB problem. Additionally, we compare the IB and DIB methods on a variety of synthetic datasets, and examine the performance of retinal ganglion cell populations relative to the optimal encoding strategy for each problem.
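The deterministic encoder that emerges from the DIB objective can be sketched as an iterative hard-assignment scheme: each input x is assigned to the cluster t maximizing log q(t) − β·KL(p(y|x) ‖ q(y|t)), after which q(t) and q(y|t) are refit. The toy implementation below follows that stated update rule; the data-driven initialization and the small example distribution are assumptions of this sketch, not part of the paper:

```python
import math

def dib_cluster(p_y_given_x, p_x, n_clusters, beta=10.0, iters=20):
    """Deterministic information bottleneck, hard-assignment form:
    assign each x to argmax_t [log q(t) - beta * KL(p(y|x) || q(y|t))],
    then refit q(t) and q(y|t) from the assignments."""
    eps = 1e-12
    n_y = len(p_y_given_x[0])
    # Data-driven initialization (a heuristic chosen for this sketch)
    assign = [max(range(n_y), key=lambda y: row[y]) % n_clusters
              for row in p_y_given_x]
    for _ in range(iters):
        q_t = [sum(p_x[i] for i in range(len(p_x)) if assign[i] == t)
               for t in range(n_clusters)]
        q_y_t = []
        for t in range(n_clusters):
            row = [sum(p_x[i] * p_y_given_x[i][y]
                       for i in range(len(p_x)) if assign[i] == t)
                   for y in range(n_y)]
            s = sum(row) or 1.0
            q_y_t.append([v / s for v in row])
        def score(i, t):
            kl = sum(p_y_given_x[i][y] * (math.log(p_y_given_x[i][y] + eps)
                                          - math.log(q_y_t[t][y] + eps))
                     for y in range(n_y))
            return math.log(q_t[t] + eps) - beta * kl
        assign = [max(range(n_clusters), key=lambda t: score(i, t))
                  for i in range(len(p_x))]
    return assign

# Two groups of inputs with distinct predictive distributions over y
p = [[0.9, 0.1], [0.85, 0.15], [0.1, 0.9], [0.15, 0.85]]
labels = dib_cluster(p, [0.25] * 4, n_clusters=2)
print(labels)  # [0, 0, 1, 1]
```

The hard argmax is exactly what distinguishes DIB from the original IB, whose optimal encoder is a soft (stochastic) assignment.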
Wang, Kai; Yang, Yanzhi; Chodera, John D.; Shirts, Michael R.
2014-01-01
We present a method to identify small molecule ligand binding sites and orientations in a given protein crystal structure using GPU-accelerated Hamiltonian replica exchange molecular dynamics simulations. The Hamiltonians used vary from the physical end state, in which the protein interacts with the ligand, to an unphysical end state, in which the ligand does not interact with the protein. As replicas explore the space of Hamiltonians interpolating between these states, the ligand can rapidly escape local minima and explore potential binding sites. Geometric restraints keep the ligands within the protein volume, and a potential energy pathway designed to increase phase space overlap between intermediates ensures good mixing. Because of the rigorous statistical mechanical nature of the Hamiltonian exchange framework, we can also extract binding free energy estimates at all putative binding sites, which agree well with free energies computed from occupation probabilities. We present results of this methodology on the T4 lysozyme L99A model system with four ligands, including one non-binder as a control. We find that our methodology identifies the crystallographic binding sites consistently and accurately for the small number of ligands considered here and gives free energies consistent with experiment. We are also able to analyze the contribution of individual binding sites to the overall binding affinity. Our methodology points to near-term potential applications in early-stage drug discovery. PMID:24297454
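The exchange step at the heart of this framework is a Metropolis criterion on the energies of the swapped configurations. A toy sketch with a one-dimensional coordinate and a ladder of softened potentials; the double-well potential, the λ ladder, and the move sizes are all illustrative choices, not the paper's simulation protocol:

```python
import math
import random

def swap_accept(u_ii, u_jj, u_ij, u_ji, beta=1.0):
    """Metropolis acceptance probability for exchanging configurations
    between Hamiltonians i and j:
    min(1, exp(-beta * [U_i(x_j) + U_j(x_i) - U_i(x_i) - U_j(x_j)]))."""
    delta = u_ij + u_ji - u_ii - u_jj
    return 1.0 if delta <= 0 else math.exp(-beta * delta)

def hamiltonian(lam, x):
    """Interpolated potential: lam = 1 is the full double well, lam = 0 is
    a flat (non-interacting) end state that crosses barriers freely."""
    return lam * (x ** 2 - 1.0) ** 2

random.seed(1)
lams = [1.0, 0.5, 0.0]        # ladder from physical to non-interacting
xs = [1.0, 0.0, -1.0]         # one configuration per replica
for step in range(1000):
    for i in range(len(xs)):  # one Metropolis move per replica
        trial = xs[i] + random.uniform(-0.5, 0.5)
        du = hamiltonian(lams[i], trial) - hamiltonian(lams[i], xs[i])
        if du <= 0 or random.random() < math.exp(-du):
            xs[i] = trial
    i = step % (len(xs) - 1)  # alternate neighbor-pair swap attempts
    a = swap_accept(hamiltonian(lams[i], xs[i]),
                    hamiltonian(lams[i + 1], xs[i + 1]),
                    hamiltonian(lams[i], xs[i + 1]),
                    hamiltonian(lams[i + 1], xs[i]))
    if random.random() < a:
        xs[i], xs[i + 1] = xs[i + 1], xs[i]
print(xs)
```

Because the swap satisfies detailed balance, each replica still samples its own Boltzmann distribution, which is why rigorous free energy estimates can be extracted at every λ, as the abstract notes.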
NASA Astrophysics Data System (ADS)
Mahan, H. R.; Wagle, P.; Bajgain, R.; Zhou, Y.; Basara, J. B.; Xiao, X.; Duckles, J. M.; Steiner, J. L.; Starks, P. J.; Northup, B. K.
2014-12-01
Quantifying methane (CH4), carbon dioxide (CO2), and water vapor fluxes between the land surface and the boundary layer using the eddy covariance method has many applications across several disciplines. Three eddy flux towers have been established over no-till winter wheat (Triticum aestivum L.) and over native and improved pastures at the USDA ARS Grazinglands Research Laboratory, El Reno, OK. An additional tower will be established in fall 2014 over tilled winter wheat. Each flux site is equipped with an eddy covariance system, PhenoCam, COSMOS, and in-situ observations of soil and atmospheric state variables. The objective of this research is to measure, compare, and model the land-atmosphere exchange of CO2, water vapor, and CH4 under different land cover types and management practices (till vs. no-till, grazing vs. no grazing, native vs. improved pasture). Models that focus on net ecosystem CO2 exchange (NEE), gross primary production (GPP), evapotranspiration (ET), and CH4 fluxes can be improved by cross-verification against these measurements. Another application will be to link the in-situ measurements with satellite remote sensing in order to scale up flux measurements from small spatial scales to local and regional scales. Preliminary analysis from the native grassland site revealed that the CH4 flux was negligible (~0) and increased significantly when cattle were introduced to the site. In summer 2014, the daily ET magnitude was about 4-5 mm day-1 and the NEE magnitude was 4-5 g C day-1 at the native grassland site. Further analysis of data from all the sites over longer temporal periods will enhance understanding of the biotic and abiotic factors that govern carbon, water, and energy exchanges between the land surface and atmosphere under different land cover and management systems. The research findings will help predict the responses of these ecosystems to management practices and global environmental change in the future.
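The eddy covariance method referenced above estimates a flux as the time-averaged covariance of vertical wind speed and scalar fluctuations. A minimal sketch with a synthetic series; real processing adds coordinate rotation, despiking, and density (WPL) corrections, all omitted here:

```python
def eddy_flux(w, c):
    """Eddy covariance flux F = mean(w'c'), with primes denoting deviations
    from the averaging-period mean of vertical wind w and scalar c."""
    n = len(w)
    w_bar = sum(w) / n
    c_bar = sum(c) / n
    return sum((wi - w_bar) * (ci - c_bar) for wi, ci in zip(w, c)) / n

# Synthetic half-hour: updrafts (w > 0) carry higher CO2 away from surface
w = [0.3, -0.2, 0.4, -0.5, 0.1, -0.1]          # m/s
c = [401.0, 399.5, 401.5, 399.0, 400.2, 399.8]  # ppm
print(eddy_flux(w, c))  # positive covariance: net upward (emission) flux
```

The sign convention follows directly from the covariance: updrafts correlated with above-mean concentration give a positive (surface-to-atmosphere) flux, which is how the cattle-driven CH4 increase noted above appears in the data.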
Vascular Patterns in Iguanas and Other Squamates: Blood Vessels and Sites of Thermal Exchange
Porter, William Ruger; Witmer, Lawrence M.
2015-01-01
Squamates use the circulatory system to regulate body and head temperatures during both heating and cooling. The flexibility of this system, which possibly exceeds that of endotherms, offers a number of physiological mechanisms to gain or retain heat (e.g., increase peripheral blood flow and heart rate, cooling the head to prolong basking time for the body) as well as to shed heat (modulate peripheral blood flow, expose sites of thermal exchange). Squamates also have the ability to establish and maintain the same head-to-body temperature differential that birds, crocodilians, and mammals demonstrate, but without a discrete rete or other vascular physiological device. Squamates offer important anatomical and phylogenetic evidence for the inference of the blood vessels of dinosaurs and other extinct archosaurs in that they shed light on the basal diapsid condition. Given this basal positioning, squamates likewise inform and constrain the range of physiological thermoregulatory mechanisms that may have been found in Dinosauria. Unfortunately, the literature on squamate vascular anatomy is limited. Cephalic vascular anatomy of green iguanas (Iguana iguana) was investigated using a differential-contrast, dual-vascular injection (DCDVI) technique and high-resolution X-ray microcomputed tomography (μCT). Blood vessels were digitally segmented to create a surface representation of vascular pathways. Known sites of thermal exchange, consisting of the oral, nasal, and orbital regions, were given special attention due to their role in brain and cephalic thermoregulation. Blood vessels to and from sites of thermal exchange were investigated to detect conserved vascular patterns and to assess their ability to deliver cooled blood to the dural venous sinuses. Arteries within sites of thermal exchange were found to deliver blood directly and through collateral pathways. The venous drainage was found to have multiple pathways that could influence neurosensory tissue temperature
Vascular Patterns in Iguanas and Other Squamates: Blood Vessels and Sites of Thermal Exchange.
Porter, William Ruger; Witmer, Lawrence M
2015-01-01
Squamates use the circulatory system to regulate body and head temperatures during both heating and cooling. The flexibility of this system, which possibly exceeds that of endotherms, offers a number of physiological mechanisms to gain or retain heat (e.g., increase peripheral blood flow and heart rate, cooling the head to prolong basking time for the body) as well as to shed heat (modulate peripheral blood flow, expose sites of thermal exchange). Squamates also have the ability to establish and maintain the same head-to-body temperature differential that birds, crocodilians, and mammals demonstrate, but without a discrete rete or other vascular physiological device. Squamates offer important anatomical and phylogenetic evidence for the inference of the blood vessels of dinosaurs and other extinct archosaurs in that they shed light on the basal diapsid condition. Given this basal positioning, squamates likewise inform and constrain the range of physiological thermoregulatory mechanisms that may have been found in Dinosauria. Unfortunately, the literature on squamate vascular anatomy is limited. Cephalic vascular anatomy of green iguanas (Iguana iguana) was investigated using a differential-contrast, dual-vascular injection (DCDVI) technique and high-resolution X-ray microcomputed tomography (μCT). Blood vessels were digitally segmented to create a surface representation of vascular pathways. Known sites of thermal exchange, consisting of the oral, nasal, and orbital regions, were given special attention due to their role in brain and cephalic thermoregulation. Blood vessels to and from sites of thermal exchange were investigated to detect conserved vascular patterns and to assess their ability to deliver cooled blood to the dural venous sinuses. Arteries within sites of thermal exchange were found to deliver blood directly and through collateral pathways. The venous drainage was found to have multiple pathways that could influence neurosensory tissue temperature
NASA Astrophysics Data System (ADS)
Zhang, Jingjing; Kitova, Elena N.; Li, Jun; Eugenio, Luiz; Ng, Kenneth; Klassen, John S.
2016-01-01
The application of hydrogen/deuterium exchange mass spectrometry (HDX-MS) to localize ligand binding sites in carbohydrate-binding proteins is described. Proteins from three bacterial toxins, the B subunit homopentamers of Cholera toxin and Shiga toxin type 1 and a fragment of Clostridium difficile toxin A, and their interactions with native carbohydrate receptors, GM1 pentasaccharides (β-Gal-(1→3)-β-GalNAc-(1→4)[α-Neu5Ac-(2→3)]-β-Gal-(1→4)-Glc), Pk trisaccharide (α-Gal-(1→4)-β-Gal-(1→4)-Glc) and CD-grease (α-Gal-(1→3)-β-Gal-(1→4)-β-GlcNAcO(CH2)8CO2CH3), respectively, served as model systems for this study. Comparison of the differences in deuterium uptake for peptic peptides produced in the absence and presence of ligand revealed regions of the proteins that are protected against deuterium exchange upon ligand binding. Notably, protected regions generally coincide with the carbohydrate binding sites identified by X-ray crystallography. However, ligand binding can also result in increased deuterium exchange in other parts of the protein, presumably through allosteric effects. Overall, the results of this study suggest that HDX-MS can serve as a useful tool for localizing the ligand binding sites in carbohydrate-binding proteins. However, a detailed interpretation of the changes in deuterium exchange upon ligand binding can be challenging because of the presence of ligand-induced changes in protein structure and dynamics.
Alignment of Surface-Atmosphere Exchange Sensors at Sloped Sites: An Integrated Strategy
NASA Astrophysics Data System (ADS)
Metzger, S.; Ayres, E.; Clement, R.; Durden, D.; Foken, T.; Kowalski, A. S.; Luo, H.; McCaughey, J. H.; Durden, N. P.; Serrano-Ortiz, P.; Sun, J.
2015-12-01
Closure of the energy balance on the earth's surface is regarded as an indicator that the measurements of radiative, turbulent, diffusive and storage fluxes fulfill fundamental methodological assumptions. However, for sloped measurement sites the different terms contributing to the energy balance are not aligned along the axes of a single reference coordinate system. Consequently, a measurement and data processing strategy is needed that enables each contributing term to be quantified consistently. The National Ecological Observatory Network (NEON) is currently deploying surface-atmosphere exchange sensors at several dozen measurement sites, 50% of which are located on slopes up to 20 degrees. To enable unbiased observations across sites, the incident angle between a flux and its measurement should be minimized, and instrument limitations and spatial representativeness need to be considered. Here, we present a strategy that combines site-adaptive instrument alignment with real-time attitude tracking and a set of trigonometric, radiative and source area conversions. This respects the physical limitations underlying radiation, turbulence, profile and soil heat flux sensors while providing observations in a consistent frame of reference. The strategy is evaluated against initial findings from the first months of surface-atmosphere exchange sensor deployments at NEON.
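The trigonometric part of such a strategy amounts to rotating vectors measured in instrument coordinates into a slope-aligned frame using attitude angles. The sketch below is illustrative only: the function name, the pitch-then-roll rotation order, and the sign conventions are assumptions for this example, not NEON's actual processing code.

```python
import numpy as np

def rotate_to_slope_frame(v, pitch_deg, roll_deg):
    """Rotate a 3-D vector measured in instrument coordinates into a
    slope-aligned frame, given pitch and roll from an attitude sensor.
    Illustrative sketch: rotation order and sign conventions vary by setup."""
    p = np.radians(pitch_deg)
    r = np.radians(roll_deg)
    # rotation about the y-axis (pitch) ...
    Ry = np.array([[np.cos(p), 0.0, np.sin(p)],
                   [0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    # ... followed by rotation about the x-axis (roll)
    Rx = np.array([[1.0, 0.0,        0.0       ],
                   [0.0, np.cos(r), -np.sin(r)],
                   [0.0, np.sin(r),  np.cos(r)]])
    return Rx @ Ry @ v

# A unit vector tilted 20 degrees above the x-axis in the x-z plane,
# e.g. the mean wind seen by a sensor on a 20-degree slope:
v = np.array([np.cos(np.radians(20)), 0.0, np.sin(np.radians(20))])
aligned = rotate_to_slope_frame(v, 20.0, 0.0)  # rotates onto the x-axis
```

Applying the measured tilt angle brings the vector onto the along-slope axis, after which the usual flux computations can proceed in a consistent frame.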
O2 activation by binuclear Cu sites: Noncoupled versus exchange coupled reaction mechanisms
NASA Astrophysics Data System (ADS)
Chen, Peng; Solomon, Edward I.
2004-09-01
Binuclear Cu proteins play vital roles in O2 binding and activation in biology and can be classified into coupled and noncoupled binuclear sites based on the magnetic interaction between the two Cu centers. Coupled binuclear Cu proteins include hemocyanin, tyrosinase, and catechol oxidase. These proteins have two Cu centers strongly magnetically coupled through direct bridging ligands that provide a mechanism for the 2-electron reduction of O2 to a μ-η2:η2 side-on peroxide-bridged species. This side-on bridged peroxo-Cu(II)2 species is activated for electrophilic attack on the phenolic ring of substrates. Noncoupled binuclear Cu proteins include peptidylglycine α-hydroxylating monooxygenase and dopamine β-monooxygenase. These proteins have binuclear Cu active sites that are distant, that exhibit no exchange interaction, and that activate O2 at a single Cu center to generate a reactive Cu(II)/O2 species for H-atom abstraction from the C-H bond of substrates. O2 intermediates in the coupled binuclear Cu enzymes can be trapped and studied spectroscopically. Possible intermediates in noncoupled binuclear Cu proteins can be defined through correlation to mononuclear Cu(II)/O2 model complexes. The different intermediates in these two classes of binuclear Cu proteins exhibit different reactivities that correlate with their different electronic structures and exchange coupling interactions between the binuclear Cu centers. These studies provide insight into the role of exchange coupling between the Cu centers in their reaction mechanisms.
Sulfate-chloride exchange by lobster hepatopancreas is regulated by pH-sensitive modifier sites
Cattey, M.A.; Ahearn, G.A.; Gerencser, G.A. (Univ. of Florida, Gainesville)
1991-03-15
(35)SO4(2-) uptake by Atlantic lobster (Homarus americanus) hepatopancreatic epithelial brush-border membrane vesicles (BBMV) was stimulated by internal Cl(-), but not by internal HCO3(-) or external Na(+). Sulfate-chloride exchange was stimulated by inside-positive, and inhibited by inside-negative, trans-membrane K(+) diffusion potentials. (35)SO4(2-)-Cl(-) exchange was strongly inhibited by 4,4'-diisothiocyanostilbene-2,2'-disulfonic acid (DIDS), 4-acetamido-4'-isothiocyanostilbene-2,2'-disulfonic acid (SITS), and thiosulfate. Chloride, bicarbonate, furosemide, and bumetanide slightly, yet significantly, cis-inhibited (35)SO4(2-)-Cl(-) exchange. Altering bilateral pH from 8.0 to 5.4 stimulated (35)SO4(2-)-Cl(-) exchange when vesicles were loaded with Cl(-), but reduced bilateral pH alone or the presence of pH gradients did not affect (35)SO4(2-) transport in the absence of internal Cl(-). (36)Cl uptake into SO4(2-)-loaded BBMV was stimulated by an internal negative membrane potential and inhibited when the interior was electrically positive. A model is proposed in which SO4(2-)-Cl(-) exchange is regulated by internal and external pH-sensitive modifier sites on the anion antiporter and by coupling to the electrogenic 2 Na(+)/1 H(+) antiporter on the same membrane.
Young, Calvin J; Siemann, Stefan
2016-10-11
Metal exchange is a common strategy to replace the zinc ion of many zinc proteins with other transition metals amenable to spectroscopic investigations. We here demonstrate that in anthrax lethal factor (and likely other zinc proteases), metal exchange is a fast process, and involves the occupation of an inhibitory metal site by the incoming ion prior to the release of zinc. PMID:27517100
Deterministic Switching in Bismuth Ferrite Nanoislands.
Morelli, Alessio; Johann, Florian; Burns, Stuart R; Douglas, Alan; Gregg, J Marty
2016-08-10
We report deterministic selection of polarization variant in bismuth ferrite (BiFeO3) nanoislands via a two-step scanning probe microscopy procedure. The polarization orientation in a nanoisland is toggled to the desired variant after a reset operation by scanning a conductive atomic force probe in contact over the surface while a bias is applied. The final polarization variant is determined by the direction of the inhomogeneous in-plane trailing field associated with the moving probe tip. This work provides the framework for better control of switching in rhombohedral ferroelectrics and for a deeper understanding of exchange coupling in multiferroic nanoscale heterostructures toward the realization of magnetoelectric devices. PMID:27454612
Sturm, H.F. Jr.; Hottel, R.E.; Christoper, N.
1994-06-01
The Savannah River Site conducted its first Supplier Information Exchange in September 1993. The intent of the conference was to inform potential suppliers of the Savannah River Site's mission and research and development program objectives in the areas of environmental restoration and waste management, and to solicit proposals for innovative research in those areas. Major areas addressed were Solid Waste, Environmental Restoration, Environmental Monitoring, Transition/Decontamination and Decommissioning, and the Savannah River Technology Center. A total of 1062 proposals were received addressing the 89 abstracts presented. This paper describes the forum, the process for solicitation, and the process for proposal review and selection, and reviews the overall results and benefits to Savannah River.
Say, Ridvan
2006-10-01
This manuscript describes a method for the selective binding of paraoxon and parathion compounds on surface-imprinted polymers, which were prepared using both charge-transfer (CT) (methacryloyl-antipyrine, MAAP) and ligand-exchange (LE) (methacryloyl-antipyrine-gadolinium, MAAP-Gd) monomers. These polymers were prepared in the presence of azobisisobutyronitrile (AIBN) as an initiator with EDMA as the crosslinker, and were imprinted with organophosphate esters. The influence of CT and LE imprinting on the creation of recognition sites toward paraoxon and parathion was determined by applying adsorption isotherms. The effects of the initial concentration of paraoxon and parathion, adsorption time, and imprinting efficiency on the adsorption selectivity of MIP-CT and MIP-LE were investigated. The association constant (K(ass)), number of accessible sites (Q(max)), relative selectivity coefficient (k'), and binding ability were also evaluated.
Site-specific transformation of Drosophila via phiC31 integrase-mediated cassette exchange.
Bateman, Jack R; Lee, Anne M; Wu, C-ting
2006-06-01
Position effects can complicate transgene analyses. This is especially true when comparing transgenes that have inserted randomly into different genomic positions and are therefore subject to varying position effects. Here, we introduce a method for the precise targeting of transgenic constructs to predetermined genomic sites in Drosophila using the phiC31 integrase system in conjunction with recombinase-mediated cassette exchange (RMCE). We demonstrate the feasibility of this system using two donor cassettes, one carrying the yellow gene and the other carrying GFP. At all four genomic sites tested, we observed exchange of donor cassettes with an integrated target cassette carrying the mini-white gene. Furthermore, because RMCE-mediated integration of the donor cassette is necessarily accompanied by loss of the target cassette, we were able to identify integrants simply by the loss of mini-white eye color. Importantly, this feature of the technology will permit integration of unmarked constructs into Drosophila, even those lacking functional genes. Thus, phiC31 integrase-mediated RMCE should greatly facilitate transgene analysis as well as permit new experimental designs. PMID:16547094
Controlling factors of biosphere-atmosphere ammonia exchange at a semi-natural peatland site
NASA Astrophysics Data System (ADS)
Brummer, C.; Richter, U.; Smith, J. J.; Delorme, J. P.; Kutsch, W. L.
2014-12-01
Recent advancements in laser spectrometry offer new opportunities to investigate net biosphere-atmosphere exchange of ammonia. During a three-month field campaign from February to May 2014, we tested the performance of a quantum cascade laser within an eddy-covariance setup. The laser was operated at a semi-natural peatland site that is surrounded by highly fertilized agricultural land and intensive livestock production (~1 km distance). Ammonia concentrations were highly variable between 2 and almost 100 ppb with an average value of 15 ppb. Different concentration patterns could be identified. The variability was closely linked to the timing of management practices and the prevailing local climate, particularly wind direction, temperature and surface wetness, with the latter indicating higher non-stomatal uptake under wet conditions leading to decreased concentrations. Average ammonia fluxes were around -15 ng N m-2 s-1 at the beginning of the campaign in February and shifted towards a neutral average exchange regime of -1 to 0 ng N m-2 s-1 in April and May. Intriguingly, during the time of decreasing ammonia uptake, concentrations were rising considerably, which clearly indicated N saturation in the predominant vegetation such as bog heather, purple moor-grass, and cotton grass. The cumulative net uptake for the period of investigation was ~300 g N ha-1. This stresses the importance of a thorough method inter-comparison, e.g. with denuder systems in combination with dry deposition modeling. As previous results from the latter methods showed an annual uptake of ~9 kg N ha-1 for the same site, the implementation of adequate ammonia compensation-point parameterizations becomes crucial in surface-atmosphere exchange schemes for bog vegetation. Through their high temporal resolution, robustness and continuous measurement mode, quantum cascade lasers will help assess the effects of atmospheric N loads on vulnerable N-limited ecosystems such as peatlands.
THE NON-STEADY STATE MEMBRANE POTENTIAL OF ION EXCHANGERS WITH FIXED SITES.
CONTI, F; EISENMAN, G
1965-03-01
A system of equations, based upon the assumption that the only force acting on each ionic species is due to the gradient of its electrochemical potential, is used to deduce, in the non-steady state for zero net current, the expression of the difference of electric potential between two solutions separated by an ion exchange membrane with fixed monovalent sites. The membrane is assumed to be solely permeable to cations or anions, depending on whether the charge of the sites is -1 or +1, and not to permit any flow of solvent. Under the assumptions that the difference of standard chemical potentials of any pair of permeant monovalent species and the ratio of their mobilities are constant throughout the membrane, even when the spacing of sites is variable, explicit expressions are derived for the diffusion potential and total membrane potential as functions of time and of solution activities. The expressions are valid for any number of permeant monovalent species having ideal behavior and for two permeant monovalent species having "n-type" non-ideal behavior. The results show that for a step change in solution composition the observable potential across a membrane having fixed, but not necessarily uniformly spaced, sites becomes independent of time once equilibria are established at the boundaries of the membrane and attains its steady-state value even while the ionic concentration profiles and the electric potential profile within the membrane are changing with time.
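For orientation, the steady-state value that the abstract says the non-steady-state potential converges to has, for two permeant monovalent cations with ideal behavior, the familiar fixed-site (Nicolsky-Eisenman) form. The notation below is generic, a sketch rather than the paper's exact symbols: a' and a'' are ion activities in the two bathing solutions, u2/u1 the mobility ratio, and K12 the ion-exchange equilibrium constant.

```latex
V_m \;=\; \frac{RT}{F}\,
\ln\!\left[
\frac{a_1' \;+\; \frac{u_2}{u_1}\,K_{12}\, a_2'}
     {a_1'' \;+\; \frac{u_2}{u_1}\,K_{12}\, a_2''}
\right]
```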
Self-stabilizing Deterministic Gathering
NASA Astrophysics Data System (ADS)
Dieudonné, Yoann; Petit, Franck
In this paper, we investigate the possibility of deterministically solving the gathering problem (GP) with weak robots (anonymous, autonomous, disoriented, oblivious, deaf, and dumb). We introduce strong multiplicity detection as the ability for the robots to detect the exact number of robots located at a given position. We show that with strong multiplicity detection, there exists a deterministic self-stabilizing algorithm solving GP for n robots if, and only if, n is odd.
Haghighat-Khah, Roya Elaine; Scaife, Sarah; Martins, Sara; St John, Oliver; Matzen, Kelly Jean; Morrison, Neil; Alphey, Luke
2015-01-01
Genetically engineered insects are being evaluated as potential tools to decrease the economic and public health burden of mosquitoes and agricultural pest insects. Here we describe a new tool for the reliable and targeted genome manipulation of pest insects for research and field release using recombinase mediated cassette exchange (RMCE) mechanisms. We successfully demonstrated the established ΦC31-RMCE method in the yellow fever mosquito, Aedes aegypti, which is the first report of RMCE in mosquitoes. A new variant of this RMCE system, called iRMCE, combines the ΦC31-att integration system and Cre or FLP-mediated excision to remove extraneous sequences introduced as part of the site-specific integration process. Complete iRMCE was achieved in two important insect pests, Aedes aegypti and the diamondback moth, Plutella xylostella, demonstrating the transferability of the system across a wide phylogenetic range of insect pests. PMID:25830287
Shape-Controlled Deterministic Assembly of Nanowires.
Zhao, Yunlong; Yao, Jun; Xu, Lin; Mankin, Max N; Zhu, Yinbo; Wu, Hengan; Mai, Liqiang; Zhang, Qingjie; Lieber, Charles M
2016-04-13
Large-scale, deterministic assembly of nanowires and nanotubes with rationally controlled geometries could expand the potential applications of one-dimensional nanomaterials in bottom-up integrated nanodevice arrays and circuits. Control of the positions of straight nanowires and nanotubes has been achieved using several assembly methods, although simultaneous control of position and geometry has not been realized. Here, we demonstrate a new concept combining simultaneous assembly and guided shaping to achieve large-scale, high-precision shape controlled deterministic assembly of nanowires. We lithographically pattern U-shaped trenches and then shear transfer nanowires to the patterned substrate wafers, where the trenches serve to define the positions and shapes of transferred nanowires. Studies using semicircular trenches defined by electron-beam lithography yielded U-shaped nanowires with radii of curvature defined by inner surface of the trenches. Wafer-scale deterministic assembly produced U-shaped nanowires for >430,000 sites with a yield of ∼90%. In addition, mechanistic studies and simulations demonstrate that shaping results in primarily elastic deformation of the nanowires and show clearly the diameter-dependent limits achievable for accessible forces. Last, this approach was used to assemble U-shaped three-dimensional nanowire field-effect transistor bioprobe arrays containing 200 individually addressable nanodevices. By combining the strengths of wafer-scale top-down fabrication with diverse and tunable properties of one-dimensional building blocks in novel structural configurations, shape-controlled deterministic nanowire assembly is expected to enable new applications in many areas including nanobioelectronics and nanophotonics. PMID:26999059
Deterministic teleportation of electrons in a quantum dot nanostructure.
de Visser, R L; Blaauboer, M
2006-06-23
We present a proposal for deterministic quantum teleportation of electrons in a semiconductor nanostructure consisting of a single and a double quantum dot. The central issue addressed in this Letter is how to design and implement the most efficient--in terms of the required number of single and two-qubit operations--deterministic teleportation protocol for this system. Using a group-theoretical analysis, we show that deterministic teleportation requires a minimum of three single-qubit rotations and two entangling (square root SWAP) operations. These can be implemented for spin qubits in quantum dots using electron-spin resonance (for single-spin rotations) and exchange interaction (for square root SWAP operations).
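The square-root-of-SWAP gate mentioned above, which the exchange interaction between spin qubits implements natively, can be checked numerically: applying it twice yields a full SWAP. This is a minimal sketch in NumPy of that standard matrix identity, not the authors' protocol itself.

```python
import numpy as np

# sqrt(SWAP): the entangling two-qubit gate generated by a timed
# exchange interaction between two spin qubits.
s = 0.5 * (1 + 1j)
sqrt_swap = np.array([[1, 0,             0,             0],
                      [0, s,             s.conjugate(), 0],
                      [0, s.conjugate(), s,             0],
                      [0, 0,             0,             1]], dtype=complex)

# Applying sqrt(SWAP) twice gives the full SWAP gate.
swap = sqrt_swap @ sqrt_swap
expected_swap = np.array([[1, 0, 0, 0],
                          [0, 0, 1, 0],
                          [0, 1, 0, 0],
                          [0, 0, 0, 1]], dtype=complex)
assert np.allclose(swap, expected_swap)

# sqrt(SWAP) is unitary, as any quantum gate must be.
assert np.allclose(sqrt_swap @ sqrt_swap.conj().T, np.eye(4))
```

The teleportation circuit in the Letter composes two such gates with three single-qubit rotations; the check here only verifies the gate's defining property.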
Deterministic multidimensional nonuniform gap sampling.
Worley, Bradley; Powers, Robert
2015-12-01
Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities.
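The core idea, replacing Poisson-distributed random gaps by their deterministically computed average, can be illustrated with a short sketch. This is a toy analog under assumed conventions (quarter-sine weighting, rounded mean gaps), not the authors' exact gap equation; the function name and parameters are invented for the example.

```python
import math

def deterministic_gap_schedule(n_grid, n_samples):
    """Illustrative deterministic analog of Poisson-gap sampling:
    instead of drawing each gap from a Poisson distribution with a
    sinusoidally varying mean, use the (rounded) mean gap directly."""
    # average extra gap needed to spread n_samples points over n_grid cells
    lam = n_grid / n_samples - 1.0
    points, pos = [], 0
    while pos < n_grid and len(points) < n_samples:
        points.append(pos)
        # quarter-sine weight: small gaps near the start of the grid,
        # larger gaps toward the end, as in common NUS weighting schemes
        w = math.sin((pos + 0.5) / n_grid * math.pi / 2)
        pos += 1 + round(2 * lam * w)
    return points

schedule = deterministic_gap_schedule(128, 48)
```

Because no random deviates are drawn, the schedule is fully reproducible: repeated calls with the same arguments return the identical point set, which is the practical benefit the abstract highlights.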
Deterministic models for traffic jams
NASA Astrophysics Data System (ADS)
Nagel, Kai; Herrmann, Hans J.
1993-10-01
We study several deterministic one-dimensional traffic models. For integer positions and velocities we find the typical high and low density phases separated by a simple transition. If positions and velocities are continuous variables the model shows self-organized criticality driven by the slowest car.
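A deterministic integer-valued model of this kind can be sketched in a few lines. The rules below are the standard Nagel-Schreckenberg update with the random-braking step removed, used here as an assumed stand-in for the class of models the abstract describes rather than the paper's exact definitions.

```python
def step(positions, velocities, road_length, v_max=5):
    """One update of a deterministic 1-D traffic cellular automaton on a
    ring: accelerate toward v_max, brake to the gap ahead, then move."""
    n = len(positions)
    order = sorted(range(n), key=lambda i: positions[i])
    new_v = [0] * n
    for k in range(n):
        i = order[k]
        ahead = order[(k + 1) % n]          # next car around the ring
        gap = (positions[ahead] - positions[i] - 1) % road_length
        new_v[i] = min(velocities[i] + 1, v_max, gap)  # accelerate, then brake
    new_p = [(positions[i] + new_v[i]) % road_length for i in range(n)]
    return new_p, new_v

# three cars, initially at rest, on a ring of 30 cells (low density)
p, v = [0, 10, 20], [0, 0, 0]
for _ in range(10):
    p, v = step(p, v, 30)
```

At this low density all cars reach free flow at v_max, illustrating the low-density phase; packing more cars onto the ring forces the gap term to dominate and produces the jammed high-density phase.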
Lee, Hong Jo; Lee, Hyung Chul; Kim, Young Min; Hwang, Young Sun; Park, Young Hyun; Park, Tae Sub; Han, Jae Yong
2016-02-01
Targeted genome recombination has been applied in diverse research fields and has a wide range of possible applications. In particular, the discovery of specific loci in the genome that support robust and ubiquitous expression of integrated genes and the development of genome-editing technology have facilitated rapid advances in various scientific areas. In this study, we produced transgenic (TG) chickens that can induce recombinase-mediated gene cassette exchange (RMCE), one of the site-specific recombination technologies, and confirmed RMCE in TG chicken-derived cells. As a result, we established TG chicken lines that have flippase (Flp) recognition target (FRT) pairs in the chicken genome, mediated by piggyBac transposition. The transgene integration patterns were diverse in each TG chicken line, and this integration diversity resulted in diverse levels of expression of exogenous genes in each tissue of the TG chickens. In addition, the replaced gene cassette was expressed successfully and maintained by RMCE in the FRT predominant loci of TG chicken-derived cells. These results indicate that targeted genome recombination technology with RMCE could be adaptable to TG chicken models and that the technology would be applicable to specific gene regulation by cis-element insertion and customized expression of functional proteins at predicted levels without epigenetic influence.
Leclerc, Monique Y.
2014-11-17
This final report presents the main activities and results of the project "A Carbon Flux Super Site: New Insights and Innovative Atmosphere-Terrestrial Carbon Exchange Measurements and Modeling" from 10/1/2006 to 9/30/2014. It describes the new AmeriFlux tower site (Aiken) at the Savannah River Site (SC) and its instrumentation; long-term eddy-covariance, sodar, microbarograph, soil and other measurements at the site; intensive field campaigns of tracer experiments at the Carbon Flux Super Site, SC, in 2009 and at the ARM-CF site, Lamont, OK; and experiments in Plains, GA. The main results on the tracer experiments and modeling, on low-level jet characteristics and their impact on fluxes, on gravity waves and their influence on eddy fluxes, and other results are briefly described in the report.
Evidence for histidyl and carboxy groups at the active site of the human placental Na+-H+ exchanger.
Ganapathy, V; Balkovetz, D F; Ganapathy, M E; Mahesh, V B; Devoe, L D; Leibach, F H
1987-01-01
The Na+-H+ exchanger of the human placental brush-border membrane was inhibited by pretreatment of the membrane vesicles with a histidyl-group-specific reagent, diethyl pyrocarbonate and with a carboxy-group-specific reagent, N-ethoxycarbonyl-2-ethoxy-1,2-dihydroquinoline. In both cases the inhibition was irreversible and non-competitive in nature. But, if the membrane vesicles were treated with these reagents in the presence of amiloride, cimetidine or clonidine, there was no inhibition. Since amiloride, cimetidine and clonidine all interact with the active site of the exchanger in a mutually exclusive manner, the findings provide evidence for the presence of essential histidyl and carboxy groups at or near the active site of the human placental Na+-H+ exchanger. This conclusion was further substantiated by the findings that Rose Bengal-catalysed photo-oxidation of histidine residues as well as covalent modification of carboxy residues with NN'-dicyclohexylcarbodi-imide irreversibly inhibited the Na+-H+ exchanger and that amiloride protected the exchanger from inhibition caused by NN'-dicyclohexylcarbodi-imide. PMID:2822022
DiPolo, Reinaldo; Beaugé, Luis
2011-09-01
The Na(+)/Ca(2+) exchanger, a major mechanism by which cells extrude calcium, is involved in several physiological and physiopathological interactions. In this work we have used the dialyzed squid giant axon to study the effects of two oxidants, SIN-1-buffered peroxynitrite and hydrogen peroxide (H(2)O(2)), on the Na(+)/Ca(2+) exchanger in the absence and presence of MgATP upregulation. The results show that oxidative stress induced by peroxynitrite and hydrogen peroxide inhibits the Na(+)/Ca(2+) exchanger by impairing the intracellular Ca(2+) (Ca(i)(2+))-regulatory sites, leaving unharmed the intracellular Na(+)- and Ca(2+)-transporting sites. This effect is efficiently counteracted by the presence of MgATP and by intracellular alkalinization, conditions that also protect H(i)(+) and (H(i)(+) + Na(i)(+)) inhibition of Ca(i)(2+)-regulatory sites. In addition, 1 mM intracellular EGTA reduces oxidant inhibition. However, once the effects of oxidants are installed they cannot be reversed by either MgATP or EGTA. These results have significant implications regarding the role of the Na(+)/Ca(2+) exchanger in response to pathological conditions leading to tissue ischemia-reperfusion and anoxia/reoxygenation; they concur with a marked reduction in ATP concentration, an increase in oxidant production, and a rise in intracellular Ca(2+) concentration that seems to be the main factor responsible for cell damage.
Duignan, M.; Nash, C.
2010-03-31
A principal goal at the Savannah River Site (SRS) is to safely dispose of the large volume of liquid nuclear waste held in many storage tanks. In-tank ion exchange (IX) columns are being considered for cesium removal. The spherical form of resorcinol formaldehyde ion exchange resin (sRF) is being evaluated for decontamination of dissolved saltcake waste at SRS, which is generally lower in potassium and organic components than Hanford waste. The sRF performance with SRS waste was evaluated in two phases: resin batch contacts and IX column testing with both simulated and actual dissolved salt waste. The tests, equipment, and results are discussed.
Deterministic relativistic quantum bit commitment
NASA Astrophysics Data System (ADS)
Adlam, Emily; Kent, Adrian
2015-06-01
We describe new unconditionally secure bit commitment schemes whose security is based on Minkowski causality and the monogamy of quantum entanglement. We first describe an ideal scheme that is purely deterministic, in the sense that neither party needs to generate any secret randomness at any stage. We also describe a variant that allows the committer to proceed deterministically, requires only local randomness generation from the receiver, and allows the commitment to be verified in the neighborhood of the unveiling point. We show that these schemes still offer near-perfect security in the presence of losses and errors, which can be made perfect if the committer uses an extra single random secret bit. We discuss scenarios where these advantages are significant.
Trafficking of Na+/Ca2+ exchanger to the site of persistent inflammation in nociceptive afferents.
Scheff, Nicole N; Gold, Michael S
2015-06-01
Persistent inflammation results in an increase in the amplitude and duration of depolarization-evoked Ca(2+) transients in putative nociceptive afferents. Previous data indicated that these changes were the result of neither increased neuronal excitability nor an increase in the amplitude of depolarization. Subsequent data also ruled out an increase in voltage-gated Ca(2+) currents and recruitment of Ca(2+)-induced Ca(2+) release. Parametric studies indicated that the inflammation-induced increase in the duration of the evoked Ca(2+) transient required a relatively large and long-lasting increase in the concentration of intracellular Ca(2+) implicating the Na(+)/Ca(2+) exchanger (NCX), a major Ca(2+) extrusion mechanism activated with high intracellular Ca(2+) loads. The contribution of NCX to the inflammation-induced increase in the evoked Ca(2+) transient in rat sensory neurons was tested using fura-2 AM imaging and electrophysiological recordings. Changes in NCX expression and protein were assessed with real-time PCR and Western blot analysis, respectively. An inflammation-induced decrease in NCX activity was observed in a subpopulation of putative nociceptive neurons innervating the site of inflammation. The time course of the decrease in NCX activity paralleled that of the inflammation-induced changes in nociceptive behavior. The change in NCX3 in the cell body was associated with a decrease in NCX3 protein in the ganglia, an increase in the peripheral nerve (sciatic) yet no change in the central root. This single response to inflammation is associated with changes in at least three different segments of the primary afferent, all of which are likely to contribute to the dynamic response to persistent inflammation. PMID:26041911
Biasing Potential Replica Exchange Multi-Site λ-Dynamics for Efficient Free Energy Calculations
Armacost, Kira A.; Goh, Garrett B.; Brooks, Charles L.
2016-01-01
Traditional free energy calculation methods are well known for their drawbacks in scalability and speed in converging results particularly for calculations with large perturbations. In the present work, we report on the development of biasing potential replica exchange multi-site λ-dynamics (BP-REX MSλD), which is a free energy method that is capable of performing simultaneous alchemical free energy transformations, including perturbations between flexible moieties. BP-REX MSλD and the original MSλD are applied to a series of symmetrical 2,5-benzoquinone derivatives covering a diverse chemical space and range of conformational flexibility. Improved λ-space sampling is observed for the BP-REX MSλD simulations, yielding a 2–5-fold increase in the number of transitions between substituents compared to traditional MSλD. We also demonstrate the efficacy of varying the value of c, the parameter that controls the ruggedness of the landscape mediating the sampling of λ-states, based on the flexibility of the fragment. Finally, we developed a protocol for maximizing the transition frequency between fragments. This protocol reduces the “kinetic barrier” for alchemically transforming fragments by grouping and ordering based on volume. These findings are applied to a challenging test set involving a series of geldanamycin-based inhibitors of heat shock protein 90 (Hsp90). Even though the perturbations span volume changes by as large as 60 Å3, the values for the free energy change achieve an average unsigned error (AUE) of 1.5 kcal/mol relative to experimental Kd measurements with a reasonable correlation (R = 0.56). Our results suggest that the BP-REX MSλD algorithm is a highly efficient and scalable free energy method, which when utilized will enable routine calculations on the order of hundreds of compounds using only a few simulations. PMID:26579773
Analysis of FBC deterministic chaos
Daw, C.S.
1996-06-01
It has recently been discovered that the performance of a number of fossil energy conversion devices, such as fluidized beds, pulsed combustors, steady combustors, and internal combustion engines, is affected by deterministic chaos. It is now recognized that understanding and controlling the chaotic elements of these devices can lead to significantly improved energy efficiency and reduced emissions. Application of these techniques to key fossil energy processes is expected to provide important competitive advantages for U.S. industry.
Interannual variability and decadal trends in carbon exchange at the Harvard Forest EMS site
NASA Astrophysics Data System (ADS)
Munger, J. W.; Wofsy, S. C.; Moorcroft, P. R.; Medvigy, D.
2009-04-01
The Harvard Forest EMS site in a mixed deciduous forest in central Massachusetts has been measuring carbon, water, and energy fluxes since 1992. Above-ground biomass, litter input, and tree mortality have been measured since 1995. The forest at this site has consistently been a net sink for carbon over the measurement period, with annual uptake rates of 1.0 to >5 Mg-C ha-1 y-1. Carbon uptake rates show a significant increasing trend, despite the forest being 75-110 years old. There were parallel increases in midsummer photosynthetic capacity at high light level (21.5-31.5 µmol m-2 s-1), woody biomass (101-115 Mg-C ha-1 from 1993-2005, mostly due to growth of one species, red oak), and peak leaf area index (4.5-5.5 m2 m-2 from 1998-2005). These long-term trends were interrupted in 1998 by sharp declines in photosynthetic capacity, net ecosystem exchange (NEE) of CO2, and other parameters, followed by recovery over the next 3 years. The dip in 1998 could not be directly attributed to any one cause, though leaf expansion in the spring appeared to stall during a period of unfavorable weather and did not recover later in the summer. The annual increment of above-ground woody biomass has followed the trend in NEE with a 1-year offset, implying that spring wood growth is supplied by carbon fixed in the previous year. An empirical model of carbon fluxes based on mean temperature and light response functions and observed phenology represents the hourly to seasonal patterns in carbon fluxes but cannot adequately account for interannual variability or the long-term trends in carbon uptake. A structured ecosystem model (ED2) that represented both canopy-scale physiology and the long-term dynamics of tree growth, mortality, and species composition was able to simulate interannual variability over decadal intervals better than the empirical model based on mean responses could. These results imply that direct effects of climate variability only partially account for interannual variability in
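The empirical light-response component mentioned in the abstract is commonly taken as a rectangular hyperbola. The sketch below is an illustration of that standard form, not the fitted EMS-site model; the parameter values (a_max, alpha) are assumptions, with a_max set to the midsummer photosynthetic capacity quoted above.

```python
def gee_light_response(par, a_max, alpha):
    """Rectangular-hyperbola (Michaelis-Menten) light response for gross
    ecosystem exchange: initial slope alpha (apparent quantum yield),
    saturating at a_max as PAR grows."""
    return a_max * alpha * par / (a_max + alpha * par)

# Illustrative values: a_max = 31.5 (umol m-2 s-1, the upper midsummer
# photosynthetic capacity quoted above), alpha = 0.04 (assumed).
for par in (100, 500, 2000):
    print(par, round(gee_light_response(par, 31.5, 0.04), 1))
```

At high light the modeled flux approaches a_max, mirroring the "photosynthetic capacity at high light level" measured at the site.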
Kisley, Lydia; Chen, Jixin; Mansur, Andrea P.; Dominguez-Medina, Sergio; Kulla, Eliona; Kang, Marci; Shuang, Bo; Kourentzi, Katerina; Poongavanam, Mohan-Vivekanandan; Dhamane, Sagar; Willson, Richard C.; Landes, Christy F.
2014-01-01
The retention and elution of proteins in ion-exchange chromatography is routinely controlled by adjusting the mobile phase salt concentration. It has repeatedly been observed, as judged from adsorption isotherms, that the apparent heterogeneity of adsorption is lower at more-eluting, higher ionic strength. Here, we present an investigation into the mechanism of this phenomenon using a single-molecule, super-resolution imaging technique called motion-blur Points Accumulation for Imaging in Nanoscale Topography (mbPAINT). We observed that the number of functional adsorption sites was smaller at high ionic strength and that these sites had reduced desorption kinetic heterogeneity, and thus narrower predicted elution profiles, for the anion-exchange adsorption of α-lactalbumin on an agarose-supported, clustered-charge ligand stationary phase. Explanations for the narrowing of the functional population such as inter-protein interactions and protein or support structural changes were investigated through kinetic analysis, circular dichroism spectroscopy, and microscopy of agarose microbeads, respectively. The results suggest the reduction of heterogeneity is due to both electrostatic screening between the protein and ligand and tuning the steric availability within the agarose support. Overall, we have shown that single molecule spectroscopy can aid in understanding the influence of ionic strength on the population of functional adsorbent sites participating in the ion-exchange chromatographic separation of proteins. PMID:24751557
Probabilistic versus deterministic hazard assessment in liquefaction susceptible zones
NASA Astrophysics Data System (ADS)
Daminelli, Rosastella; Gerosa, Daniele; Marcellini, Alberto; Tento, Alberto
2015-04-01
Probabilistic seismic hazard assessment (PSHA), usually adopted in the framework of seismic code redaction, is based on a Poissonian description of temporal occurrence, a negative exponential distribution of magnitude, and an attenuation relationship with a log-normal distribution of PGA or response spectrum. The main positive aspect of this approach stems from the fact that it is presently a standard for the majority of countries, but there are weak points, in particular regarding the physical description of the earthquake phenomenon. Factors that could significantly influence the expected motion at the site, such as site effects and source characteristics like the duration of strong motion and directivity, are not taken into account by PSHA. Deterministic models can better evaluate the ground motion at a site from a physical point of view, but their prediction reliability depends on the degree of knowledge of the source, wave propagation, and soil parameters. We compare these two approaches at selected sites affected by the May 2012 Emilia-Romagna and Lombardia earthquake, which caused widespread liquefaction phenomena, unusual for magnitudes less than 6. We focus on sites that are liquefiable because of their soil mechanical parameters and water table level. Our analysis shows that the choice between deterministic and probabilistic hazard analysis is strongly dependent on site conditions. The looser the soil and the higher the liquefaction potential, the more suitable is the deterministic approach. Source characteristics, in particular the duration of strong ground motion, have long been recognized as relevant to inducing liquefaction; unfortunately a quantitative prediction of these parameters appears very unlikely, dramatically reducing the possibility of their adoption in hazard assessment. Last but not least, economic factors are relevant in the choice of the approach. The case history of the 2012 Emilia-Romagna and Lombardia earthquake, with an officially estimated cost of 6 billions
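The Poissonian temporal-occurrence assumption underlying PSHA admits a compact numerical illustration. The sketch below is generic (it is not part of the study above): it converts an annual exceedance rate into the probability of exceedance over an exposure window, and back.

```python
import math

def exceedance_probability(annual_rate, years):
    """Probability of at least one exceedance in `years`, assuming
    Poissonian temporal occurrence as in standard PSHA."""
    return 1.0 - math.exp(-annual_rate * years)

def return_period(prob, years):
    """Return period implied by an exceedance probability over `years`."""
    return -years / math.log(1.0 - prob)

# The common design benchmark of 10% probability of exceedance in
# 50 years corresponds to a return period of about 475 years.
print(round(return_period(0.10, 50)))  # -> 475
```

This is the probabilistic bookkeeping only; the deterministic approach discussed above replaces such rate statements with physics-based ground-motion simulation at the site.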
Deterministic implementation of weak quantum cubic nonlinearity
Marek, Petr; Filip, Radim; Furusawa, Akira
2011-11-15
We propose a deterministic implementation of weak cubic nonlinearity, which is a basic building block of a full-scale continuous-variable quantum computation. Our proposal relies on preparation of a specific ancillary state and transferring its nonlinear properties onto the desired target by means of deterministic Gaussian operations and feed forward. We show that, despite the imperfections arising from the deterministic nature of the operation, the weak quantum nonlinearity can be implemented and verified with the current level of technology.
NASA Astrophysics Data System (ADS)
Biederman, J. A.; Scott, R. L.; Goulden, M.
2014-12-01
Climate change is predicted to increase the frequency and severity of water limitation, altering terrestrial ecosystems and their carbon exchange with the atmosphere. Here we compare site-level temporal sensitivity of annual carbon fluxes to interannual variations in water availability against cross-site spatial patterns over a network of 19 eddy covariance flux sites. This network represents one order of magnitude in mean annual productivity and includes western North American desert shrublands and grasslands, savannahs, woodlands, and forests with continuous records of 4 to 12 years. Our analysis reveals site-specific patterns not identifiable in prior syntheses that pooled sites. We interpret temporal variability as an indicator of ecosystem response to annual water availability due to fast-changing factors such as leaf stomatal response and microbial activity, while cross-site spatial patterns are used to infer ecosystem adjustment to climatic water availability through slow-changing factors such as plant community and organic carbon pools. Using variance decomposition, we directly quantify how terrestrial carbon balance depends on slow- and fast-changing components of gross ecosystem production (GEP) and total ecosystem respiration (TER). Slow factors explain the majority of variance in annual net ecosystem production (NEP) across the dataset, and their relative importance is greater at wetter, forest sites than desert ecosystems. Site-specific offsets from spatial patterns of GEP and TER explain one third of NEP variance, likely due to slow-changing factors not directly linked to water, such as disturbance. TER and GEP are correlated across sites as previously shown, but our site-level analysis reveals surprisingly consistent linear relationships between these fluxes in deserts and savannahs, indicating fast coupling of TER and GEP in more arid ecosystems. Based on the uncertainty associated with slow and fast factors, we suggest a framework for improved
Kagome Approximation for 3He on Husimi Lattice with Two- and Three-Site Exchange Interactions
NASA Astrophysics Data System (ADS)
Ananikian, N. S.; Hovhannisyan, V. V.; Lazaryan, H. A.
The Ising approximation of the Heisenberg model in a strong magnetic field, with two- and three-spin exchange interactions, is studied on a Husimi lattice. This model can be considered as an approximation of the third layer of 3He absorbed on the surface of graphite (kagome lattice). Using a dynamical approach, we have found an exact recursion relation for the partition function. For different values of the exchange parameters and temperature, magnetization diagrams are plotted, showing that the magnetization properties of the model vary from ferromagnetic to antiferromagnetic depending on the values of the model parameters. In the antiferromagnetic case a magnetization plateau at 1/3 of saturation is obtained. The Lyapunov exponent for the recursion relation is considered, showing the absence of bifurcation points in the thermodynamic limit. The Yang-Lee zeros are analyzed in terms of neutral fixed points, showing that the zeros of the model are located on arcs of the circle with radius R = 1.
Survivability of Deterministic Dynamical Systems
NASA Astrophysics Data System (ADS)
Hellmann, Frank; Schultz, Paul; Grabow, Carsten; Heitzig, Jobst; Kurths, Jürgen
2016-07-01
The notion of a part of phase space containing desired (or allowed) states of a dynamical system is important in a wide range of complex systems research. It has been called the safe operating space, the viability kernel or the sunny region. In this paper we define the notion of survivability: Given a random initial condition, what is the likelihood that the transient behaviour of a deterministic system does not leave a region of desirable states. We demonstrate the utility of this novel stability measure by considering models from climate science, neuronal networks and power grids. We also show that a semi-analytic lower bound for the survivability of linear systems allows a numerically very efficient survivability analysis in realistic models of power grids. Our numerical and semi-analytic work underlines that the type of stability measured by survivability is not captured by common asymptotic stability measures.
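The survivability measure defined above lends itself to a brute-force Monte Carlo estimate: sample random initial conditions, integrate the transient, and count trajectories that never leave the desirable region. The sketch below uses a toy damped oscillator and a simple box region, not the climate, neuronal, or power-grid models of the paper.

```python
import random

def survivability(flow, in_desirable, sample_state, n_samples=200,
                  t_max=10.0, dt=0.01):
    """Monte Carlo estimate of survivability: the fraction of random
    initial conditions whose transient stays inside the desirable region
    for the whole observation window [0, t_max]."""
    survived = 0
    for _ in range(n_samples):
        x = sample_state()
        ok = True
        t = 0.0
        while t < t_max:
            # Explicit Euler step of the deterministic flow.
            x = [xi + dt * fi for xi, fi in zip(x, flow(x))]
            if not in_desirable(x):
                ok = False
                break
            t += dt
        survived += ok
    return survived / n_samples

# Toy example (assumed, not from the paper): damped oscillator
# x'' = -x - 0.2 x', desirable region |x| < 2, initial conditions
# sampled uniformly from [-1, 1]^2.
flow = lambda s: [s[1], -s[0] - 0.2 * s[1]]
inside = lambda s: abs(s[0]) < 2.0
sample = lambda: [random.uniform(-1, 1), random.uniform(-1, 1)]
print(survivability(flow, inside, sample))  # -> 1.0 (all transients stay inside)
```

For this toy system the energy bound guarantees every sampled transient survives; in the paper's setting the interesting cases are precisely those where the estimate falls below one, and the semi-analytic lower bound replaces the expensive sampling for linear systems.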
Comment on: Supervisory Asymmetric Deterministic Secure Quantum Communication
NASA Astrophysics Data System (ADS)
Kao, Shih-Hung; Tsai, Chia-Wei; Hwang, Tzonelih
2012-12-01
In 2010, Xiu et al. (Optics Communications 284:2065-2069, 2011) proposed several applications based on a new secure four-site distribution scheme using χ-type entangled states. This paper points out that one of these applications, namely, supervisory asymmetric deterministic secure quantum communication, is subject to an information leakage problem, in which the receiver can extract two bits of a three-bit secret message without the supervisor's permission. An enhanced protocol is proposed to resolve this problem.
Deterministic weak localization in periodic structures.
Tian, C; Larkin, A
2005-12-01
In some perfect periodic structures classical motion exhibits deterministic diffusion. For such systems we present the weak localization theory. As a manifestation, a universal power-law decay of the velocity autocorrelation function is predicted to appear at four Ehrenfest times. This deterministic weak localization is robust against weak quenched disorder, which may be confirmed by coherent backscattering measurements of periodic photonic crystals.
Cassidy, R.M.; Elchuk, S.
1985-03-01
Reversed phases coated with a permanently sorbed ion exchanger and indirect UV detection have been investigated for the determination of simple and multifunctional carboxylic acids in chemical cleaning solutions. The advantages of being able to vary both the ion-exchange capacity and the hydrophobic interactions on these types of ion exchangers for the optimization of resolution and detection are illustrated, and the selection of optimum separation conditions is discussed. Dissolved iron interferes with the analysis due to photochemical, redox, and kinetic effects, but good recoveries can be obtained after reduction of the iron with hydroxylamine and complexation with 1,2-diaminocyclohexanetetraacetic acid. Detection limits (3 x baseline noise) for oxalate, citrate, ethylenediaminetetraacetate, and hydroxyethylenediaminetriacetate are 0.6-20 µg mL-1 for a 20-µL sample, and relative standard deviations are 3 to % in the 75-350 µg mL-1 range. Analysis results for reactor decontamination solutions containing up to 250 µg mL-1 of iron agree with results obtained by other techniques, and it is shown that this technique should also be useful for the determination of metal ions in the samples. A determination of the above reagents in the presence of Fe(II) and Ni(II) takes 7 to 12 min after a 5 to 10 min reduction step. Cr(III) forms nonlabile complexes with ethylenediaminetetraacetic acid, and its presence will cause low results for this acid. 17 references, 4 figures, 6 tables.
NASA Astrophysics Data System (ADS)
Khakinejad, Mahdiar; Ghassabi Kondalaji, Samaneh; Donohoe, Gregory C.; Valentine, Stephen J.
2016-03-01
Ion mobility spectrometry (IMS) coupled with gas-phase hydrogen deuterium exchange (HDX)-mass spectrometry (MS) and molecular dynamic simulations (MDS) has been used for structural investigation of anions produced by electrospraying a sample containing a synthetic peptide having the sequence KKDDDDDIIKIIK. In these experiments the potential of the analytical method for locating charge sites on ions as well as for utilizing collision-induced dissociation (CID) to reveal the degree of deuterium uptake within specific amino acid residues has been assessed. For diffuse (i.e., more elongated) [M - 2H]2- ions, decreased deuterium content along with MDS data suggest that the D4 and D6 residues are charge sites, whereas for the more diffuse [M - 3H]3- ions, the data suggest that the D4, D7, and the C-terminus are deprotonated. Fragmentation of mobility-selected, diffuse [M - 2H]2- ions to determine deuterium uptake at individual amino acid residues reveals a degree of deuterium retention at incorporation sites. Although the diffuse [M - 3H]3- ions may show more HD scrambling, it is not possible to clearly distinguish HD scrambling from the expected deuterium uptake based on a hydrogen accessibility model. The capability of the IMS-HDX-MS/MS approach to provide relevant details about ion structure is discussed. Additionally, the ability to extend the approach for locating protonation sites on positively-charged ions is presented.
Momentum, water vapor, and carbon dioxide exchange at a centrally located prairie site during FIFE
NASA Technical Reports Server (NTRS)
Verma, Shashi B.; Kim, Joon; Clement, Robert J.
1992-01-01
Eddy correlation measurements of momentum, water vapor, sensible heat, and CO2 fluxes were made at a centrally located plateau site in the FIFE study area from May to October 1987. Approximately 82 percent of the vegetation at the site was composed of several C4 grass species, with the remainder being C3 grasses, forbs, sedges, and woody plants. Precipitation was about normal during the study period, except for a three-week dry period in late July to early August that caused moisture stress conditions.
International data- and information exchange for off-site emergency management--where to go?
Höbler, Christian; Hable, Kathrin; Baig, Sandra; Zähringer, Matthias
2004-01-01
The communication and exchange of views and opinions between decision makers and their advisers is crucial for coherent crisis management. As a vision, the concept of a 'Virtual Round Table' is proposed. It stands for the dedication of all accountable authorities to cross border cooperation and should provide a comprehensive technical infrastructure enabling decision makers and experts to communicate with their colleagues in neighbouring countries as if they were sitting at the same table. Organisational arrangements must be in place for coordinating decisions. The technical infrastructure should comprise modern communication technologies such as video conferencing and a 'Common Information Board', which presents all relevant information and documents in a structured way, manages alerting, decision journaling, data distribution and access control. In order to achieve this goal in an evolutionary approach, existing procedures of good practice are analysed and helpful features are identified. PMID:15238657
Deterministic quantum teleportation with atoms.
Riebe, M; Häffner, H; Roos, C F; Hänsel, W; Benhelm, J; Lancaster, G P T; Körber, T W; Becher, C; Schmidt-Kaler, F; James, D F V; Blatt, R
2004-06-17
Teleportation of a quantum state encompasses the complete transfer of information from one particle to another. The complete specification of the quantum state of a system generally requires an infinite amount of information, even for simple two-level systems (qubits). Moreover, the principles of quantum mechanics dictate that any measurement on a system immediately alters its state, while yielding at most one bit of information. The transfer of a state from one system to another (by performing measurements on the first and operations on the second) might therefore appear impossible. However, it has been shown that the entangling properties of quantum mechanics, in combination with classical communication, allow quantum-state teleportation to be performed. Teleportation using pairs of entangled photons has been demonstrated, but such techniques are probabilistic, requiring post-selection of measured photons. Here, we report deterministic quantum-state teleportation between a pair of trapped calcium ions. Following closely the original proposal, we create a highly entangled pair of ions and perform a complete Bell-state measurement involving one ion from this pair and a third source ion. State reconstruction conditioned on this measurement is then performed on the other half of the entangled pair. The measured fidelity is 75%, demonstrating unequivocally the quantum nature of the process.
Connecting deterministic and stochastic metapopulation models.
Barbour, A D; McVinish, R; Pollett, P K
2015-12-01
In this paper, we study the relationship between certain stochastic and deterministic versions of Hanski's incidence function model and the spatially realistic Levins model. We show that the stochastic version can be well approximated in a certain sense by the deterministic version when the number of habitat patches is large, provided that the presence or absence of individuals in a given patch is influenced by a large number of other patches. Explicit bounds on the deviation between the stochastic and deterministic models are given. PMID:25735440
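The deterministic side of this comparison can be sketched as a mean-field, spatially realistic Levins-type dynamic in which each patch's occupancy probability evolves under distance-weighted colonization pressure from the other patches. All parameter values below (alpha, c, e, the patch layout) are illustrative assumptions, not those fitted in the paper.

```python
import math

def dist(a, b):
    """Euclidean distance between two patch coordinates."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def levins_step(p, coords, alpha=1.0, c=0.5, e=0.2, dt=0.1):
    """One Euler step of a deterministic, spatially realistic Levins model:
    dp_i/dt = C_i(p) * (1 - p_i) - e * p_i, where the colonization
    pressure C_i sums contributions from all other patches, decaying
    exponentially with distance."""
    new_p = []
    for i, pi in enumerate(p):
        coli = sum(c * math.exp(-alpha * dist(coords[i], coords[j])) * pj
                   for j, pj in enumerate(p) if j != i)
        new_p.append(pi + dt * (coli * (1 - pi) - e * pi))
    return new_p

# Four patches on a unit square, each initially half occupied; iterate
# to (near) equilibrium. By symmetry all patches converge together.
coords = [(0, 0), (0, 1), (1, 0), (1, 1)]
p = [0.5] * 4
for _ in range(500):
    p = levins_step(p, coords)
print([round(x, 2) for x in p])  # -> [0.59, 0.59, 0.59, 0.59]
```

The stochastic counterpart would replace each occupancy probability with a Bernoulli presence/absence state; the paper's result is that the deterministic limit above approximates it well when many patches influence each one.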
Site-specific bacterial chromosome engineering: ΦC31 integrase mediated cassette exchange (IMCE).
Heil, John R; Cheng, Jiujun; Charles, Trevor C
2012-03-16
The bacterial chromosome may be used to stably maintain foreign DNA in the mega-base range. Integration into the chromosome circumvents issues such as plasmid replication, plasmid stability, plasmid incompatibility, and plasmid copy number variance. This method uses the site-specific integrase from the Streptomyces phage (Φ) C31. The ΦC31 integrase catalyzes a direct recombination between two specific DNA sites: attB and attP (34 and 39 bp, respectively). This recombination is stable and does not revert. A "landing pad" (LP) sequence consisting of a spectinomycin-resistance gene, aadA (SpR), and the E. coli β-glucuronidase gene (uidA) flanked by attP sites has been integrated into the chromosomes of Sinorhizobium meliloti, Ochrobactrum anthropi, and Agrobacterium tumefaciens in an intergenic region, the ampC locus, and the tetA locus, respectively. S. meliloti is used in this protocol. Mobilizable donor vectors containing attB sites flanking a stuffer red fluorescent protein (rfp) gene and an antibiotic resistance gene have also been constructed. In this example the gentamicin resistant plasmid pJH110 is used. The rfp gene may be replaced with a desired construct using SphI and PstI. Alternatively a synthetic construct flanked by attB sites may be sub-cloned into a mobilizable vector such as pK19mob. The expression of the ΦC31 integrase gene (cloned from pHS62) is driven by the lac promoter, on a mobilizable broad host range plasmid pRK7813. A tetraparental mating protocol is used to transfer the donor cassette into the LP strain thereby replacing the markers in the LP sequence with the donor cassette. These cells are trans-integrants. Trans-integrants are formed with a typical efficiency of 0.5%. Trans-integrants are typically found within the first 500-1,000 colonies screened by antibiotic sensitivity or blue-white screening using 5-bromo-4-chloro-3-indolyl-beta-D-glucuronic acid (X-gluc). This protocol contains the mating and selection procedures for
On the Question of Site-Selective Ligand Exchange in Carboxylate-Substituted Metal Oxo Clusters
Kreutzer, Johannes; Czakler, Matthias; Puchberger, Michael; Pittenauer, Ernst; Schubert, Ulrich
2015-01-01
Reaction of [Ti4Zr4O6(OBu)4(OMc)16] (OMc = methacrylate) with acetylacetone (acacH) resulted in dissection of the cluster and formation of [Ti(OBu)2(acac)2] and the smaller cluster [Ti2Zr4O4(OMc)16]. In contrast, the same reaction with [Zr6O4(OH)4(OOCR)12]2·6RCOOH (R = Et, CH2CH=CH2) led to site-selective substitution of two carboxylate ligands and formation of isostructural [Zr6O4(OH)4(OOCR)12–x(acac)x]2·6RCOOH (x ≤ 1). PMID:26300687
Li, Xin; Nichols, Valerie M; Zhou, Dapeng; Lim, Cynthia; Pau, George Shu Heng; Bardeen, Christopher J; Tang, Ming L
2014-06-11
We study ligand exchange between the carboxylic acid group and 5.0 nm oleic-acid-capped CdS nanocrystals (NCs) using fluorescence resonance energy transfer (FRET). This is the first measurement of the initial binding events between cadmium chalcogenide NCs and carboxylic acid groups. The binding behavior can be described as an interaction between a ligand with a single binding group and a substrate with multiple, identical binding sites. Assuming Poissonian binding statistics, our model fits both steady-state and time-resolved photoluminescence (SSPL and TRPL, respectively) data well. A modified Langmuir isotherm reveals that a CdS nanoparticle has an average of 3.0 new carboxylic acid ligands and a binding constant, Ka, of 3.4 × 10⁵ M⁻¹.
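The Langmuir-type binding estimate summarized above reduces to a few lines of arithmetic. The isotherm form, site count, and function names below are illustrative assumptions; only the binding constant Ka = 3.4 × 10⁵ M⁻¹ comes from the abstract.

```python
# Sketch of a simple Langmuir-type binding estimate. Ka is the value
# reported in the abstract; the model form and site count are illustrative
# assumptions, not the authors' exact analysis.

def fraction_sites_occupied(ligand_conc_M, ka=3.4e5):
    """Langmuir isotherm: theta = Ka*[L] / (1 + Ka*[L])."""
    return ka * ligand_conc_M / (1.0 + ka * ligand_conc_M)

def average_bound_ligands(ligand_conc_M, n_sites, ka=3.4e5):
    """Mean number of bound ligands for n_sites identical, independent sites."""
    return n_sites * fraction_sites_occupied(ligand_conc_M, ka)

theta = fraction_sites_occupied(1e-5)  # occupancy at 10 uM free ligand
```

With Ka of this magnitude, micromolar free-ligand concentrations already give substantial site occupancy, consistent with the abstract's report of binding at low added-acid concentrations.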
Sharp, David W.
1980-01-01
In a coal gasification operation or similar conversion process carried out in the presence of an alkali metal-containing catalyst wherein solid particles containing alkali metal residues are produced, alkali metal constituents are recovered from the particles by contacting or washing them with an aqueous solution containing calcium or magnesium ions in an alkali metal recovery zone at a low temperature, preferably below about 249 °F. During the washing or leaching process, the calcium or magnesium ions displace alkali metal ions held by ion exchange sites in the particles, thereby liberating the ions and producing an aqueous effluent containing alkali metal constituents. The aqueous effluent from the alkali metal recovery zone is then recycled to the conversion process, where the alkali metal constituents serve as at least a portion of the alkali metal constituents which comprise the alkali metal-containing catalyst.
Two-site fluctuations and multipolar intersite exchange interactions in strongly correlated systems
NASA Astrophysics Data System (ADS)
Pourovskii, L. V.
2016-09-01
An approach is proposed for evaluating dipolar and multipolar intersite interactions in strongly correlated materials. This approach is based on the single-site dynamical mean-field theory (DMFT) in conjunction with the atomic approximation for the local self-energy. Starting from the local-moment paramagnetic state described by DMFT, we derive intersite interactions by considering the response of the DMFT grand potential to small fluctuations of atomic configurations on two neighboring sites. The present method is validated by applying it to one-band and two-band e_g Hubbard models on the three-dimensional simple-cubic lattice. It is also applied to study the spin-orbital order in the parent cubic structure of the ternary chromium fluoride KCrF3. We obtain the onset of a G-type antiferro-orbital order at a significantly lower temperature compared to that in real distorted KCrF3. In contrast, its layered A-type antiferromagnetic order and Néel temperature are rather well reproduced. The calculated full Kugel-Khomskii Hamiltonian contains spin-orbital coupling terms inducing a misalignment in the antiferro-orbital order upon the onset of antiferromagnetism.
From deterministic dynamics to probabilistic descriptions
Misra, B.; Prigogine, I.; Courbage, M.
1979-01-01
The present work is devoted to the following question: What is the relationship between the deterministic laws of dynamics and probabilistic description of physical processes? It is generally accepted that probabilistic processes can arise from deterministic dynamics only through a process of “coarse graining” or “contraction of description” that inevitably involves a loss of information. In this work we present an alternative point of view toward the relationship between deterministic dynamics and probabilistic descriptions. Speaking in general terms, we demonstrate the possibility of obtaining (stochastic) Markov processes from deterministic dynamics simply through a “change of representation” that involves no loss of information provided the dynamical system under consideration has a suitably high degree of instability of motion. The fundamental implications of this finding for statistical mechanics and other areas of physics are discussed. From a mathematical point of view, the theory we present is a theory of invertible, positivity-preserving, and necessarily nonunitary similarity transformations that convert the unitary groups associated with deterministic dynamics to contraction semigroups associated with stochastic Markov processes. We explicitly construct such similarity transformations for the so-called Bernoulli systems. This construction illustrates also the construction of the so-called Lyapounov variables and the operator of “internal time,” which play an important role in our approach to the problem of irreversibility. The theory we present can also be viewed as a theory of entropy-increasing evolutions and their relationship to deterministic dynamics. PMID:16592691
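The Bernoulli systems for which the similarity transformations are constructed are exemplified by the doubling map x → 2x mod 1; the following small sketch (illustrative, not from the paper) shows how such a deterministic rule emits coin-toss-like symbol sequences without any randomness in the dynamics.

```python
# The doubling map x -> 2x mod 1 is a Bernoulli system: fully deterministic,
# yet the coarse-grained symbol sequence (left half vs right half of [0,1))
# is statistically like a fair coin for typical initial conditions.
# Note: floating point collapses to 0 after ~50 iterates, so keep n small.

def doubling_orbit_symbols(x0, n):
    """Iterate x -> 2x mod 1 and record 0 if x < 1/2, else 1, at each step."""
    x, symbols = x0, []
    for _ in range(n):
        symbols.append(0 if x < 0.5 else 1)
        x = (2.0 * x) % 1.0
    return symbols

syms = doubling_orbit_symbols(0.123456789, 40)
```

The emitted symbols are exactly the binary digits of the initial condition, which is why the "loss of information" question the paper addresses is subtle: the full trajectory retains everything, while each individual symbol looks random.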
Köhler, S; Jungkunst, H F; Gutzler, C; Herrera, R; Gerold, G
2012-09-01
In the light of global change, the necessity to monitor atmospheric depositions that have relevant effects on ecosystems is ever increasing, particularly for tropical sites. For this study, atmospheric ionic depositions were measured in tropical Central Sulawesi at remote sites with both a conventional bulk water collector system (BWS collector) and a passive ion exchange resin collector system (IER collector). The IER collector's principle of fixing all ionic depositions, i.e. anions and cations, has certain advantages with respect to (1) post-deposition transformation processes, (2) low ionic concentrations and (3) low rainfall and associated particulate inputs, e.g. dust or sand. The ionic concentrations to be measured for BWS collectors may easily fall below detection limits under low-deposition conditions, which are common for tropical sites of low land-use intensity. Additionally, BWS collections are not as independent of the amount of rainfall as IER collections are. For this study, the significant differences between the two collectors found for nearly all measured elements were partly correlated to the rainfall pattern, i.e. for calcium, magnesium, potassium and sodium. However, the significant differences were, in most cases, not highly relevant. More relevant differences between the systems were found for aluminium and nitrate (434-484 %). The almost five times higher values for nitrate clarify the advantage of the IER system, particularly for the low deposition rates that are a particularity of atmospheric ionic deposition at tropical sites of extensive land use. The monthly resolution of the IER data offers new insights into the temporal distribution of annual ionic depositions. Here, it did not follow the tropical rain pattern of a drier season within generally wet conditions.
Vo, Uybach; Vajpai, Navratna; Flavell, Liz; Bobby, Romel; Breeze, Alexander L.; Embrey, Kevin J.; Golovanov, Alexander P.
2016-01-01
The activity of Ras is controlled by the interconversion between GTP- and GDP-bound forms partly regulated by the binding of the guanine nucleotide exchange factor Son of Sevenless (Sos). The details of Sos binding, leading to nucleotide exchange and subsequent dissociation of the complex, are not completely understood. Here, we used uniformly 15N-labeled Ras as well as [13C]methyl-Met,Ile-labeled Sos for observing site-specific details of Ras-Sos interactions in solution. Binding of various forms of Ras (loaded with GDP and mimics of GTP or nucleotide-free) at the allosteric and catalytic sites of Sos was comprehensively characterized by monitoring signal perturbations in the NMR spectra. The overall affinity of binding between these protein variants as well as their selected functional mutants was also investigated using intrinsic fluorescence. The data support a positive feedback activation of Sos by Ras·GTP with Ras·GTP binding as a substrate for the catalytic site of activated Sos more weakly than Ras·GDP, suggesting that Sos should actively promote unidirectional GDP → GTP exchange on Ras in preference of passive homonucleotide exchange. Ras·GDP weakly binds to the catalytic but not to the allosteric site of Sos. This confirms that Ras·GDP cannot properly activate Sos at the allosteric site. The novel site-specific assay described may be useful for design of drugs aimed at perturbing Ras-Sos interactions. PMID:26565026
Recent Achievements of the Neo-Deterministic Seismic Hazard Assessment in the CEI Region
Panza, G. F.; Kouteva, M.; Vaccari, F.; Peresan, A.; Romanelli, F.; Cioflan, C. O.; Radulian, M.; Marmureanu, G.; Paskaleva, I.; Gribovszki, K.; Varga, P.; Herak, M.; Zaichenco, A.; Zivcic, M.
2008-07-08
A review of the recent achievements of the innovative neo-deterministic approach for seismic hazard assessment through realistic earthquake scenarios has been performed. The procedure provides strong ground motion parameters for the purpose of earthquake engineering, based on deterministic seismic wave propagation modelling at different scales: regional, national and metropolitan. The main advantage of this neo-deterministic procedure is the simultaneous treatment of the contributions of the earthquake source and of the seismic wave propagation media to the strong motion at the target site/region, as required by basic physical principles. The neo-deterministic seismic microzonation procedure has been successfully applied to numerous metropolitan areas all over the world in the framework of several international projects. In this study, some examples focused on the CEI region, concerning both regional seismic hazard assessment and seismic microzonation of selected metropolitan areas, are shown.
Comito, Robert J; Fritzsching, Keith J; Sundell, Benjamin J; Schmidt-Rohr, Klaus; Dincă, Mircea
2016-08-17
The manufacture of advanced polyolefins has been critically enabled by the development of single-site heterogeneous catalysts. Metal-organic frameworks (MOFs) show great potential as heterogeneous catalysts that may be designed and tuned on the molecular level. In this work, exchange of zinc ions in Zn5Cl4(BTDD)3, H2BTDD = bis(1H-1,2,3-triazolo[4,5-b],[4',5'-i])dibenzo[1,4]dioxin) (MFU-4l) with reactive metals serves to establish a general platform for selective olefin polymerization in a high surface area solid promising for industrial catalysis. Characterization of polyethylene produced by these materials demonstrates both molecular and morphological control. Notably, reactivity approaches single-site catalysis, as evidenced by low polydispersity indices, and good molecular weight control. We further show that these new catalysts copolymerize ethylene and propylene. Uniform growth of the polymer around the catalyst particles provides a mechanism for controlling the polymer morphology, a relevant metric for continuous flow processes.
Momentum, water vapor, and carbon dioxide exchange at a centrally located prairie site during FIFE
NASA Astrophysics Data System (ADS)
Verma, Shashi B.; Kim, Joon; Clement, Robert J.
1992-11-01
Eddy correlation measurements were made of fluxes of momentum, sensible heat, water vapor, and carbon dioxide at a centrally located plateau site in the FIFE study area during the period from May to October 1987. About 82% of the vegetation at the site was comprised of several C4 grass species (big bluestem, Indian grass, switchgrass, tall dropseed, little bluestem, and blue grama), with the remainder being C3 grasses, sedges, forbs, and woody plants. The prairie was burned in mid-April and was not grazed. Precipitation during the study period was about normal, except for a 3-week dry period in late July to early August, which caused moisture stress conditions. The drag coefficient (Cd = u*²/ū², where u* is the friction velocity and ū is the mean wind speed at 2.25 m above the ground) of the prairie vegetation ranged from 0.0087 to 0.0099. The average d/zc and z0/zc (where d is the zero plane displacement, z0 is the roughness parameter, and zc is the canopy height) were estimated to be about 0.71 and 0.028, respectively. Information was developed on the aerodynamic conductance (ga) in terms of mean wind speed (measured at a reference height) for different periods in the growing season. During the early and peak growth stages, with favorable soil moisture, the daily evapotranspiration (ET) rates ranged from 3.9 to 6.6 mm d⁻¹. The ET rate during the dry period was between 2.9 and 3.8 mm d⁻¹. The value of the Priestley-Taylor coefficient (α), calculated as the ratio of the measured ET to the equilibrium ET, averaged around 1.26 when the canopy stomatal resistance (rc) was less than 100 s m⁻¹. When rc increased above 100 s m⁻¹, α decreased rapidly. The atmospheric CO2 flux data (eddy correlation) were used, in conjunction with estimated soil CO2 flux, to evaluate canopy photosynthesis (Pc). The dependence of Pc on photosynthetically active radiation (KPAR), vapor pressure deficit, and soil moisture was examined. Under nonlimiting soil moisture conditions, Pc was
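The two bulk quantities defined in the abstract, the drag coefficient Cd = u*²/ū² and the Priestley-Taylor coefficient α = measured ET / equilibrium ET, can be sketched directly. The numeric inputs below are illustrative, not the study's data; only the formulas follow the text.

```python
# Minimal sketch of the bulk quantities defined in the abstract. The input
# numbers are illustrative assumptions chosen to fall in the reported ranges.

def drag_coefficient(u_star, u_mean):
    """Cd = u*^2 / ubar^2, with ubar the mean wind speed at the reference height."""
    return (u_star / u_mean) ** 2

def priestley_taylor_alpha(et_measured, et_equilibrium):
    """alpha = measured evapotranspiration divided by equilibrium ET."""
    return et_measured / et_equilibrium

cd = drag_coefficient(u_star=0.35, u_mean=3.6)  # falls within 0.0087-0.0099
alpha = priestley_taylor_alpha(5.0, 4.0)        # alpha = 1.25
```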
Ligand binding and proton exchange dynamics in site-specific mutants of human myoglobin
Lambright, D.G.
1992-01-01
Site-specific mutagenesis was used to make substitutions of four residues in the distal heme pocket of human myoglobin: Val68, His64, Lys45, and Asp60. Strongly diffracting crystals of the conservative mutation K45R in the met-aquo form were grown in the trigonal space group P3₂21 and the X-ray crystal structure determined at 1.6 Å resolution. The overall structure is similar to that of sperm whale met-aquo myoglobin. Several of the mutant proteins were characterized by 2-D NMR spectroscopy. The NMR data suggest the structural changes are localized to the region of the mutation. The dynamics of ligand binding to myoglobin mutants were studied by transient absorption spectroscopy following photolysis of the CO complexes. Transient absorption kinetics and spectra on the ns to ms timescale were measured in aqueous solution from 280 K to 310 K and in 75% glycerol:water from 250 K to 310 K. Two significant basis spectra were obtained from singular value decomposition of the matrix of time-dependent spectra. The information was used to obtain approximations for the extent of ligand rebinding and the kinetics of conformational relaxation. Except for K45R, substitutions at Lys45 or Asp60 produce changes in the kinetics of ligand rebinding. Replacement of Lys45 with Arg increases the rate of ligand rebinding from the protein matrix by a factor of 2, but does not alter the rates of ligand escape or entry into the protein or the dynamics of the conformational relaxation. Substitutions at His64 and Val68 influence the kinetics of ligand rebinding and the dynamics of conformational relaxation. The results do not support the hypothesis that ligand migration between the heme pocket and solvent is determined solely by fluctuations of Arg45 and His64 between open and closed conformations of the heme pocket, but can be rationalized if ligand diffusion through the protein matrix involves multiple competing pathways.
NASA Astrophysics Data System (ADS)
Bhatia, G.; Bubier, J. L.
2001-05-01
Peatlands play a significant role in the global carbon cycle, sequestering approximately one-third of the global pool of soil carbon. An increased understanding of the carbon cycle in these critical ecosystems is imperative to further our comprehension of the role they play in future global warming. Net ecosystem exchange (NEE) of carbon dioxide was measured at Mer Bleue Bog in Ottawa, Ontario, Canada from May through August 2000. Dominant species at Mer Bleue included Ledum groenlandicum, Chamaedaphne calyculata, Eriophorum vaginatum, Carex oligosperma and Sphagnum species. In order to understand the controls and variability of NEE, a range of sites were considered, including a beaver pond, a bog and a poor fen. This study aimed at comparing overall seasonal patterns and ranges of NEE, photosynthesis and respiration, and at understanding the relationships with photosynthetically active radiation (PAR), water table, temperature, species composition and plant biomass. A clear Lexan and Teflon film climate-controlled chamber was used to measure the rate of respiration and photosynthesis on a bi-weekly basis at all sites. The chamber was attached to a LI-COR 6200 portable photosynthesis system, which included a LI-6250 infrared gas analyzer, quantum sensor and data logger. Shrouds of different mesh sizes were used to regulate the amount of light entering the chamber in order to measure NEE at a wide range of PAR. An opaque shroud was used to measure ecosystem respiration. Photosynthesis was calculated as the difference between NEE and respiration. Seasonal patterns showed a peak season from June 23rd through July 15th, when higher PAR and temperature levels led to increased photosynthesis and respiration measurements. Although NEE rates at the sites varied, during peak season NEE ranged in increasing order: bog hummock and hollow (6 to -6.5 μmol CO₂ m⁻² s⁻¹) < beaver pond (6 to -7 μmol CO₂ m⁻² s⁻¹) < poor fen (10 to -8 μmol CO₂ m⁻² s⁻¹).
Abrams, M D; Mostoller, S A
1995-06-01
Seasonal ecophysiology, leaf structure and nitrogen were measured in saplings of early (Populus grandidentata Michx. and Prunus serotina J.F. Ehrh.), middle (Fraxinus americana L. and Carya tomentosa Nutt.) and late (Acer rubrum L. and Cornus florida L.) successional tree species during severe drought on adjacent open and understory sites in central Pennsylvania, USA. Area-based net photosynthesis (A) and leaf conductance to water vapor diffusion (g_wv) varied by site and species and were highest in open-growing plants and early successional species at both the open and understory sites. In response to the period of maximum drought, both sunfleck and sun leaves of the early successional species exhibited smaller decreases in A than leaves of the other species. Shaded understory leaves of all species were more susceptible to drought than sun leaves and had negative midday A values during the middle and later growing season. Shaded understory leaves also displayed a reduced photosynthetic light response during the peak drought period. Sun leaves were thicker and had a greater mass per area (LMA) and nitrogen (N) content than shaded leaves, and early and middle successional species had higher N contents and concentrations than late successional species. In both sunfleck and sun leaves, seasonal A was positively related to predawn leaf Ψ, g_wv, LMA and N, and was negatively related to vapor pressure deficit, midday leaf Ψ and internal CO₂. Although a significant amount of plasticity occurred in all species for most gas exchange and leaf structural parameters, middle successional species exhibited the largest degree of phenotypic plasticity between open and understory plants. PMID:14965944
Improving ground-penetrating radar data in sedimentary rocks using deterministic deconvolution
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.; Byrnes, A.P.
2003-01-01
Resolution is key to confidently identifying unique geologic features using ground-penetrating radar (GPR) data. Source wavelet "ringing" (related to bandwidth) in a GPR section limits resolution because of wavelet interference, and can smear reflections in time and/or space. The resultant potential for misinterpretation limits the usefulness of GPR. Deconvolution offers the ability to compress the source wavelet and improve temporal resolution. Unlike statistical deconvolution, deterministic deconvolution is mathematically simple and stable while providing the highest possible resolution because it uses the source wavelet unique to the specific radar equipment. Source wavelets generated in, transmitted through and acquired from air allow successful application of deterministic approaches to wavelet suppression. We demonstrate the validity of using a source wavelet acquired in air as the operator for deterministic deconvolution in a field application using "400-MHz" antennas at a quarry site characterized by interbedded carbonates with shale partings. We collected GPR data on a bench adjacent to cleanly exposed quarry faces in which we placed conductive rods to provide conclusive groundtruth for this approach to deconvolution. The best deconvolution results, which are confirmed by the conductive rods for the 400-MHz antenna tests, were observed for wavelets acquired when the transmitter and receiver were separated by 0.3 m. Applying deterministic deconvolution to GPR data collected in sedimentary strata at our study site resulted in an improvement in resolution (50%) and improved spatial location (0.10-0.15 m) of geologic features compared to the same data processed without deterministic deconvolution. The effectiveness of deterministic deconvolution for increased resolution and spatial accuracy of specific geologic features is further demonstrated by comparing results of deconvolved data with nondeconvolved data acquired along a 30-m transect immediately adjacent
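The deterministic deconvolution described above amounts to dividing the trace spectrum by the known source-wavelet spectrum. A hedged sketch follows: the synthetic wavelet, the water-level stabilization value, and the function name are assumptions standing in for the air-acquired wavelet and the authors' processing; the abstract confirms only the general approach.

```python
import numpy as np

# Sketch of deterministic (known-source-wavelet) deconvolution in the
# frequency domain with water-level stabilization. The wavelet here is a
# synthetic stand-in for a wavelet recorded in air; parameters are illustrative.

def deterministic_deconvolve(trace, wavelet, water_level=1e-2):
    """Divide the trace spectrum by the wavelet spectrum, stabilized at a water level."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    # Clamp small wavelet spectral amplitudes to avoid division blow-up.
    wl = water_level * np.max(np.abs(W))
    W_stab = np.where(np.abs(W) < wl, wl * np.exp(1j * np.angle(W)), W)
    return np.fft.irfft(T / W_stab, n)

# Synthetic check: a trace built by convolving a spike series with the
# wavelet should deconvolve back to (approximately) the spike series.
wavelet = np.exp(-0.5 * ((np.arange(32) - 8) / 2.0) ** 2) * np.cos(np.arange(32))
spikes = np.zeros(256)
spikes[[40, 120]] = [1.0, -0.7]
trace = np.convolve(spikes, wavelet)[:256]
recovered = deterministic_deconvolve(trace, wavelet)
```

The water level trades resolution for stability: a smaller value sharpens the output but amplifies noise at frequencies where the wavelet carries little energy.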
Effect of Uncertainty on Deterministic Runway Scheduling
NASA Technical Reports Server (NTRS)
Gupta, Gautam; Malik, Waqar; Jung, Yoon C.
2012-01-01
Active runway scheduling involves scheduling departures for takeoff and arrivals for runway crossing subject to numerous constraints. This paper evaluates the effect of uncertainty on a deterministic runway scheduler. The evaluation is done against a first-come-first-serve (FCFS) scheme. In particular, the sequence from the deterministic scheduler is frozen and the times adjusted to satisfy all separation criteria; this approach is tested against FCFS. The comparison is done for both system performance (throughput and system delay) and predictability, and varying levels of congestion are considered. Uncertainty is modeled in two ways: as equal uncertainty in runway availability for all aircraft, and as increasing uncertainty for later aircraft. Results indicate that the deterministic approach consistently performs better than first-come-first-serve in both system performance and predictability.
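The FCFS baseline against which the deterministic scheduler is compared can be sketched as a toy: serve aircraft in order of ready time, pushing each later as needed to honor a required separation. The constant separation value and function name are illustrative assumptions, not the paper's model.

```python
# Toy first-come-first-serve runway scheduler: aircraft are served in order
# of ready time, delayed as needed to honor a constant pairwise separation.
# The separation value is an illustrative assumption.

def fcfs_schedule(ready_times, separation):
    """Return a dict {aircraft index: runway time} in FCFS order."""
    order = sorted(range(len(ready_times)), key=lambda i: ready_times[i])
    scheduled = {}
    prev = None
    for i in order:
        t = ready_times[i]
        if prev is not None:
            t = max(t, scheduled[prev] + separation)  # enforce separation
        scheduled[i] = t
        prev = i
    return scheduled

times = fcfs_schedule([0, 10, 12, 100], separation=60)
```

A sequence-optimizing scheduler can beat this baseline by reordering aircraft, which is exactly the advantage the paper stress-tests under uncertainty in ready times.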
Stochastic search with Poisson and deterministic resetting
NASA Astrophysics Data System (ADS)
Bhat, Uttam; De Bacco, Caterina; Redner, S.
2016-08-01
We investigate a stochastic search process in one, two, and three dimensions in which N diffusing searchers all start at x₀ and seek a target at the origin. Each of the searchers is also reset to its starting point, either with rate r, or deterministically, with a reset time T. In one dimension and for a small number of searchers, the search time and the search cost are minimized at a non-zero optimal reset rate (or time), while for sufficiently large N, resetting always hinders the search. In general, a single searcher leads to the minimum search cost in one, two, and three dimensions. When the resetting is deterministic, several unexpected features arise for N searchers, including the search time being independent of T as 1/T → 0 and the search cost being independent of N over a suitable range of N. Moreover, deterministic resetting typically leads to a lower search cost than Poisson resetting.
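The single-searcher, one-dimensional case can be explored with a small Monte Carlo sketch: a random walker starting at x₀ with an absorbing target at the origin, reset to x₀ with probability r per step (the discrete-time analogue of Poisson resetting). The discretization and parameter values are illustrative assumptions, not the paper's setup.

```python
import random

# Monte Carlo sketch of a single 1-D diffusive searcher with resetting:
# unbiased random walk from x0 toward an absorbing target at the origin,
# reset to x0 with probability r per step. Discretization is an assumption.

def mean_search_time(x0=5, r=0.02, trials=2000, max_steps=200000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        x, t = x0, 0
        while x != 0 and t < max_steps:
            if rng.random() < r:
                x = x0                    # reset (geometric waiting time)
            else:
                x += rng.choice((-1, 1))  # unbiased diffusion step
            t += 1
        total += t
    return total / trials
```

With r > 0 the mean first-passage time is finite; with r = 0 it diverges in one dimension, so unreset runs frequently exhaust max_steps. Sweeping r reveals the non-zero optimal reset rate the abstract describes.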
Deterministic dense coding with partially entangled states
Mozes, Shay; Reznik, Benni; Oppenheim, Jonathan
2005-01-01
The utilization of a d-level partially entangled state, shared by two parties wishing to communicate classical information without errors over a noiseless quantum channel, is discussed. We analytically construct deterministic dense coding schemes for certain classes of nonmaximally entangled states, and numerically obtain schemes in the general case. We study the dependency of the maximal alphabet size of such schemes on the partially entangled state shared by the two parties. Surprisingly, for d > 2 it is possible to have deterministic dense coding with less than one ebit. In this case the number of alphabet letters that can be communicated by a single particle is between d and 2d. In general, we numerically find that the maximal alphabet size is any integer in the range [d, d²] with the possible exception of d² − 1. We also find that states with less entanglement can have a greater deterministic communication capacity than other, more entangled states.
Optimal partial deterministic quantum teleportation of qubits
Mista, Ladislav Jr.; Filip, Radim
2005-02-01
We propose a protocol implementing optimal partial deterministic quantum teleportation for qubits. This is a teleportation scheme deterministically realizing an optimal 1 → 2 asymmetric universal cloning in which one imperfect copy of the input state emerges at the sender's station while the other copy emerges at the receiver's possibly distant station. Optimality means that the fidelities of the copies saturate the asymmetric cloning inequality. The performance of the protocol relies on a partial deterministic nondemolition Bell measurement that allows us to continuously control the flow of information among the outgoing qubits. We also demonstrate that this measurement is an optimal two-qubit operation in the sense of the trade-off between state disturbance and information gain.
Kamboj, Sunita; Cheng, Jing-Jy; Yu, Charley
2005-05-01
The dose assessments for sites containing residual radioactivity usually involve the use of computer models that employ input parameters describing the physical conditions of the contaminated and surrounding media and the living and consumption patterns of the receptors in analyzing potential doses to the receptors. The precision of the dose results depends on the precision of the input parameter values. The identification of sensitive parameters that have great influence on the dose results would help set priorities in research and information gathering for parameter values so that a more precise dose assessment can be conducted. Two methods of identifying site-specific sensitive parameters, deterministic and probabilistic, were compared by applying them to the RESRAD computer code for analyzing radiation exposure for a residential farmer scenario. The deterministic method has difficulty in evaluating the effect of simultaneous changes in a large number of input parameters on the model output results. The probabilistic method easily identified the most sensitive parameters, but the sensitivity measure of other parameters was obscured. The choice of sensitivity analysis method would depend on the availability of site-specific data. Generally speaking, the deterministic method would identify the same set of sensitive parameters as the probabilistic method when 1) the baseline values used in the deterministic method were selected near the mean or median value of each parameter and 2) the selected range of parameter values used in the deterministic method was wide enough to cover the 5th to 95th percentile values from the distribution of that parameter.
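The contrast drawn above can be illustrated on a toy dose model: a deterministic one-at-a-time sweep versus a probabilistic all-at-once sampling ranked by correlation. The model, parameter ranges, and names below are illustrative assumptions, not RESRAD's.

```python
import random

# Toy contrast of the two sensitivity-analysis styles described above.
# dose(), the ranges, and all names are illustrative assumptions.

def dose(x, y):
    return 2.0 * x + 0.5 * y ** 2

def oat_sensitivity(baseline, ranges):
    """Deterministic one-at-a-time: swing each input over its range, others at baseline."""
    out = {}
    for name, (lo, hi) in ranges.items():
        args_lo = dict(baseline); args_lo[name] = lo
        args_hi = dict(baseline); args_hi[name] = hi
        out[name] = abs(dose(**args_hi) - dose(**args_lo))
    return out

def sampled_sensitivity(ranges, n=2000, seed=0):
    """Probabilistic: vary all inputs at once, rank by |corr(input, dose)|."""
    rng = random.Random(seed)
    samples = {k: [rng.uniform(lo, hi) for _ in range(n)] for k, (lo, hi) in ranges.items()}
    doses = [dose(**{k: samples[k][i] for k in samples}) for i in range(n)]
    def corr(xs, ys):
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in ys)
        return cov / (vx * vy) ** 0.5
    return {k: abs(corr(samples[k], doses)) for k in samples}
```

The one-at-a-time sweep cannot expose interactions between inputs, while the sampled correlation captures simultaneous variation, mirroring the trade-off the abstract reports.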
Nine challenges for deterministic epidemic models.
Roberts, Mick; Andreasen, Viggo; Lloyd, Alun; Pellis, Lorenzo
2015-03-01
Deterministic models have a long history of being applied to the study of infectious disease epidemiology. We highlight and discuss nine challenges in this area. The first two concern the endemic equilibrium and its stability. We indicate the need for models that describe multi-strain infections, infections with time-varying infectivity, and those where superinfection is possible. We then consider the need for advances in spatial epidemic models, and draw attention to the lack of models that explore the relationship between communicable and non-communicable diseases. The final two challenges concern the uses and limitations of deterministic models as approximations to stochastic systems.
Deterministic Quantization by Dynamical Boundary Conditions
Dolce, Donatello
2010-06-15
We propose an unexplored quantization method. It is based on the assumption of dynamical space-time intrinsic periodicities for relativistic fields, which in turn can be regarded as dual to extra-dimensional fields. As a consequence we obtain a unified and consistent interpretation of Special Relativity and Quantum Mechanics in terms of Deterministic Geometrodynamics.
A deterministic discrete ordinates transport proxy application
2014-06-03
Kripke is a simple 3D deterministic discrete ordinates (Sn) particle transport code that maintains the computational load and communications pattern of a real transport code. It is intended to be a research tool to explore different data layouts, new programming paradigms and computer architectures.
Linear Deterministic Accumulator Models of Simple Choice
Heathcote, Andrew; Love, Jonathon
2012-01-01
We examine theories of simple choice as a race among evidence accumulation processes. We focus on the class of deterministic race models, which assume that the effects of fluctuations in the parameters of the accumulation processes between choice trials (between-choice noise) dominate the effects of fluctuations occurring while making a choice (within-choice noise) in behavioral data (i.e., response times and choices). This deterministic approximation, when combined with the assumption that accumulation is linear, leads to a class of models that can be readily applied to simple-choice behavior because they are computationally tractable. We develop a new and mathematically simple exemplar within the class of linear deterministic models, the Lognormal race (LNR). We then examine how the LNR, and another widely applied linear deterministic model, Brown and Heathcote's (2008) LBA, account for a range of benchmark simple-choice effects in lexical-decision task data reported by Wagenmakers et al. (2008). Our results indicate that the LNR provides an accurate description of these data. Although the LBA model provides a slightly better account, both models support similar psychological conclusions. PMID:22936920
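A Lognormal race can be sketched in a few lines: each accumulator's finishing time is drawn from a lognormal, the response is the fastest accumulator, and the response time is its finishing time. The parameter values below are illustrative assumptions, not fits from the paper.

```python
import random

# Minimal sketch of a Lognormal race (LNR): lognormal finishing times per
# accumulator; the winner determines the choice and the response time.
# Parameter values are illustrative assumptions.

def lnr_trial(mus, sigmas, rng):
    times = [rng.lognormvariate(m, s) for m, s in zip(mus, sigmas)]
    choice = min(range(len(times)), key=times.__getitem__)
    return choice, times[choice]

def simulate_lnr(mus, sigmas, n=10000, seed=0):
    rng = random.Random(seed)
    trials = [lnr_trial(mus, sigmas, rng) for _ in range(n)]
    p_first = sum(1 for c, _ in trials if c == 0) / n
    mean_rt = sum(t for _, t in trials) / n
    return p_first, mean_rt

# An accumulator with a lower mu (faster average evidence) wins more often.
p, rt = simulate_lnr(mus=[-0.5, 0.0], sigmas=[0.4, 0.4], n=5000)
```

Because only between-trial parameter variability enters, each trial is deterministic given its sampled parameters, which is what makes the model's likelihood tractable.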
McKinley, James P.; Zachara, John M.; Smith, Steven C.; Liu, Chongxuan
2007-01-15
Nuclear waste that bore 90Sr2+ was accidentally leaked into the vadose zone at the Hanford site, and was immobilized at relatively shallow depths in sediments containing little apparent clay or silt-sized components. Sr2+, 90Sr2+, Mg2+, and Ca2+ were desorbed and the total inorganic carbon concentration was monitored during the equilibration of this sediment with varying concentrations of Na+ and Ca2+. A cation exchange model previously developed for similar sediments was applied to these results as a predictor of final solution compositions. The model included binary exchange reactions for the four operant cations and an equilibrium dissolution/precipitation reaction for calcite. The model successfully predicted the desorption data. The contaminated sediment was also examined using digital autoradiography, a sensitive tool for imaging the distribution of radioactivity. The exchanger phase containing 90Sr was found to consist of smectite formed from weathering of mesostasis glass in basaltic lithic fragments. These clasts are a significant component of Hanford formation sands. The relatively small but significant cation exchange capacity of these sediments was thus a consequence of reaction with physically sequestered clays in sediment that contained essentially no fine-grained material. The nature of this exchange component explained the relatively slow (scale of days) evolution of desorption solutions. The experimental and model results indicated that there is little risk of migration of 90Sr2+ to the water table.
Deterministic dynamics in the minority game
NASA Astrophysics Data System (ADS)
Jefferies, P.; Hart, M. L.; Johnson, N. F.
2002-01-01
The minority game (MG) behaves as a stochastically disturbed deterministic system due to the coin toss invoked to resolve tied strategies. Averaging over this stochasticity yields a description of the MG's deterministic dynamics via mapping equations for the strategy score and global information. The strategy-score map contains both restoring-force and bias terms, whose magnitudes depend on the game's quenched disorder. Approximate analytical expressions are obtained and the effect of ``market impact'' is discussed. The global-information map represents a trajectory on a de Bruijn graph. For small quenched disorder, an Eulerian trail represents a stable attractor. It is shown analytically how antipersistence arises. The response to perturbations and different initial conditions is also discussed.
Bayesian Uncertainty Analyses Via Deterministic Model
NASA Astrophysics Data System (ADS)
Krzysztofowicz, R.
2001-05-01
Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
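In the simplest linear-Gaussian setting, the BPO reduces to a conjugate Bayesian update of the prior by the likelihood of the deterministic model's output. The sketch below illustrates that reduced case only; the actual BPO framework is more general (meta-Gaussian), and all names and numbers here are illustrative:

```python
def bpo_gaussian(x, prior_mean, prior_var, a, b, noise_var):
    """Gaussian-linear Bayesian Processor of Output (sketch):
       prior       W ~ N(prior_mean, prior_var)
       likelihood  X | W=w ~ N(a*w + b, noise_var)
    Returns the posterior mean and variance of the predictand W
    conditional on the deterministic model output x."""
    # standard conjugate update for a linear-Gaussian likelihood
    post_var = 1.0 / (1.0 / prior_var + a * a / noise_var)
    post_mean = post_var * (prior_mean / prior_var + a * (x - b) / noise_var)
    return post_mean, post_var

# a diffuse prior sharpened by one model output of 3.0
m, v = bpo_gaussian(x=3.0, prior_mean=0.0, prior_var=4.0,
                    a=1.0, b=0.0, noise_var=1.0)
```

The posterior variance quantifies the total uncertainty remaining after the model output is taken into account; a perfect model (noise_var → 0) would collapse it to zero, while an uninformative model (a → 0) returns the prior.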
Deterministic signal associated with a random field.
Kim, Taewoo; Zhu, Ruoyu; Nguyen, Tan H; Zhou, Renjie; Edwards, Chris; Goddard, Lynford L; Popescu, Gabriel
2013-09-01
Stochastic fields do not generally possess a Fourier transform. This makes the second-order statistics calculation very difficult, as it requires solving a fourth-order stochastic wave equation. This problem was alleviated by Wolf who introduced the coherent mode decomposition and, as a result, space-frequency statistics propagation of wide-sense stationary fields. In this paper we show that if, in addition to wide-sense stationarity, the fields are also wide-sense statistically homogeneous, then monochromatic plane waves can be used as an eigenfunction basis for the cross spectral density. Furthermore, the eigenvalue associated with a plane wave, exp[i(k · r-ωt)], is given by the spatiotemporal power spectrum evaluated at the frequency (k, ω). We show that the second-order statistics of these fields is fully described by the spatiotemporal power spectrum, a real, positive function. Thus, the second-order statistics can be efficiently propagated in the wavevector-frequency representation using a new framework of deterministic signals associated with random fields. Analogous to the complex analytic signal representation of a field, the deterministic signal is a mathematical construct meant to simplify calculations. Specifically, the deterministic signal associated with a random field is defined such that it has the identical autocorrelation as the actual random field. Calculations for propagating spatial and temporal correlations are simplified greatly because one only needs to solve a deterministic wave equation of second order. We illustrate the power of the wavevector-frequency representation with calculations of spatial coherence in the far zone of an incoherent source, as well as coherence effects induced by biological tissues.
Ada programming guidelines for deterministic storage management
NASA Technical Reports Server (NTRS)
Auty, David
1988-01-01
Previous reports have established that a program can be written in the Ada language such that the program's storage management requirements are determinable prior to its execution. Specific guidelines for ensuring such deterministic usage of Ada dynamic storage requirements are described. Because requirements may vary from one application to another, guidelines are presented in a most-restrictive to least-restrictive fashion to allow the reader to match appropriate restrictions to the particular application area under investigation.
Schwalm, Christopher R.; Williams, Christopher A.; Schaefer, Kevin; Anderson, Ryan; Arain, A.; Baker, Ian; Lokupitiya, Erandathie; Barr, Alan; Black, T. A.; Gu, Lianhong; Riciutto, Dan M.
2010-12-01
Our current understanding of terrestrial carbon processes is represented in various models used to integrate and scale measurements of CO2 exchange from remote sensing and other spatiotemporal data. Yet assessments are rarely conducted to determine how well models simulate carbon processes across vegetation types and environmental conditions. Using standardized data from the North American Carbon Program we compare observed and simulated monthly CO2 exchange from 44 eddy covariance flux towers in North America and 22 terrestrial biosphere models. The analysis period spans 220 site-years, 10 biomes, and includes two large-scale drought events, providing a natural experiment to evaluate model skill as a function of drought and seasonality. We evaluate models' ability to simulate the seasonal cycle of CO2 exchange using multiple model skill metrics and analyze links between model characteristics, site history, and model skill. Overall model performance was poor; the difference between observations and simulations was 10 times observational uncertainty, with forested ecosystems better predicted than nonforested. Model-data agreement was highest in summer and in temperate evergreen forests. In contrast, model performance declined in spring and fall, especially in ecosystems with large deciduous components, and in dry periods during the growing season. Models used across multiple biomes and sites, the mean model ensemble, and a model using assimilated parameter values showed high consistency with observations. Models with the highest skill across all biomes all used prescribed canopy phenology, calculated NEE as the difference between GPP and ecosystem respiration, and did not use a daily time step.
Deterministic Mean-Field Ensemble Kalman Filtering
Law, Kody J. H.; Tembine, Hamidou; Tempone, Raul
2016-05-03
The proof of convergence of the standard ensemble Kalman filter (EnKF) from Le Gland, Monbet, and Tran [Large sample asymptotics for the ensemble Kalman filter, in The Oxford Handbook of Nonlinear Filtering, Oxford University Press, Oxford, UK, 2011, pp. 598--631] is extended to non-Gaussian state-space models. In this paper, a density-based deterministic approximation of the mean-field limit EnKF (DMFEnKF) is proposed, consisting of a PDE solver and a quadrature rule. Given a certain minimal order of convergence κ between the two, this extends to the deterministic filter approximation, which is therefore asymptotically superior to standard EnKF for dimension d
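For reference, the standard stochastic (perturbed-observation) EnKF analysis step that the DMFEnKF's PDE-plus-quadrature construction replaces can be sketched as follows; the names and numbers are illustrative, not from the paper:

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Stochastic (perturbed-observation) EnKF analysis step.
    ensemble: (N, d) state ensemble; y: (m,) observation;
    H: (m, d) linear observation operator; R: (m, m) obs-noise covariance."""
    N = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)        # state anomalies, (N, d)
    Y = X @ H.T                                 # predicted-observation anomalies
    C_yy = Y.T @ Y / (N - 1) + R                # innovation covariance
    C_xy = X.T @ Y / (N - 1)                    # state-observation covariance
    K = C_xy @ np.linalg.inv(C_yy)              # Kalman gain, (d, m)
    perturbed = y + rng.multivariate_normal(np.zeros(len(y)), R, size=N)
    return ensemble + (perturbed - ensemble @ H.T) @ K.T

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 2.0, size=(500, 1))     # scalar state, N(0, 4) prior
H = np.array([[1.0]])
R = np.array([[1.0]])
post = enkf_update(prior, np.array([2.0]), H, R, rng)   # posterior mean near 1.6
```

In the Gaussian-linear case the sample posterior approaches the exact Kalman posterior as N grows; the deterministic mean-field approximation studied in the paper removes the sampling noise of the perturbed observations entirely.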
Ma, Chien-Hui; Liu, Yen-Ting; Savva, Christos G; Rowley, Paul A; Cannon, Brian; Fan, Hsiu-Fang; Russell, Rick; Holzenburg, Andreas; Jayaram, Makkuni
2014-02-20
Flp site-specific recombination between two target sites (FRTs) harboring non-homology within the strand exchange region does not yield stable recombinant products. In negatively supercoiled plasmids containing head-to-tail sites, the reaction produces a series of knots with odd-numbered crossings. When the sites are in head-to-head orientation, the knot products contain even-numbered crossings. Both types of knots retain parental DNA configuration. By carrying out Flp recombination after first assembling the topologically well defined Tn3 resolvase synapse, it is possible to determine whether these knots arise by a processive or a dissociative mechanism. The nearly exclusive products from head-to-head and head-to-tail oriented "non-homologous" FRT partners are a 4-noded knot and a 5-noded knot, respectively. The corresponding products from a pair of native (homologous) FRT sites are a 3-noded knot and a 4-noded catenane, respectively. These results are consistent with two rounds of dissociative recombination by Flp induced by non-homology, the first to generate reciprocal recombinants containing non-complementary base pairs and the second to produce parental molecules with restored base pairing. Single molecule fluorescence resonance energy transfer (smFRET) analysis of geometrically restricted FRTs, together with single molecule tethered particle motion (smTPM) assays of unconstrained FRTs, suggests that the sites are preferentially synapsed in an anti-parallel fashion. This selectivity in synapse geometry occurs prior to the chemical steps of recombination, signifying early commitment to a productive reaction path. The cumulative topological, smFRET and smTPM results have implications for the relative orientation of DNA partners and the directionality of strand exchange during recombination mediated by tyrosine site-specific recombinases.
Paczosa-Bator, Beata; Stepien, Milena; Maj-Zurawska, Magdalena; Lewenstam, Andrzej
2009-03-01
Competitive divalent (magnesium and calcium) or monovalent (potassium, lithium and sodium) ion exchange and its influence on membrane potential formation was studied at biological ligands (BL) such as adenosine triphosphate (ATP), asparagine (Asn) and glutamine (Gln) sites. The sites are dispersed electrochemically in membranes made of the conducting polymers (CPs)--poly(N-methylpyrrole) (PMPy) and poly(pyrrole) (PPy). The membranes are made sensitive to calcium and magnesium or to potassium, sodium and lithium by optimized electrodeposition and soaking procedures supported by the study of membrane topography and morphology. Distinctively different electrochemical responses, i.e. electrical potential transients or currents, are observed in the case of "antagonistic" calcium and magnesium or potassium and sodium/lithium ion pairs. Dissimilarity in the responses is ascribed to a difference between on-site vs. bulk concentrations of ions, and is dictated by the different transport properties of the ions, as shown using the Nernst-Planck-Poisson (NPP) model and the diffusion-layer model (DLM). The method described allows inspecting potential-dependent competitive ion-exchange processes at the biologically active sites. It is suggested that this approach could be used as an auxiliary tool in the study of potential-dependent block in realistic membrane channels, such as the Mg block in the N-methyl-D-aspartate (NMDA) receptor channel.
Deterministic processes vary during community assembly for ecologically dissimilar taxa
Powell, Jeff R.; Karunaratne, Senani; Campbell, Colin D.; Yao, Huaiying; Robinson, Lucinda; Singh, Brajesh K.
2015-01-01
The continuum hypothesis states that both deterministic and stochastic processes contribute to the assembly of ecological communities. However, the contextual dependency of these processes remains an open question that imposes strong limitations on predictions of community responses to environmental change. Here we measure community and habitat turnover across multiple vertical soil horizons at 183 sites across Scotland for bacteria and fungi, both dominant and functionally vital components of all soils but which differ substantially in their growth habit and dispersal capability. We find that habitat turnover is the primary driver of bacterial community turnover in general, although its importance decreases with increasing isolation and disturbance. Fungal communities, however, exhibit a highly stochastic assembly process, both neutral and non-neutral in nature, largely independent of disturbance. These findings suggest that increased focus on dispersal limitation and biotic interactions are necessary to manage and conserve the key ecosystem services provided by these assemblages. PMID:26436640
Li, Lin Z; Kadlececk, Stephen; Xu, He N; Daye, Dania; Pullinger, Benjamin; Profka, Harrilla; Chodosh, Lewis; Rizi, Rahim
2013-10-01
Conventional methods for the analysis of in vivo hyperpolarized 13C NMR data from the lactate dehydrogenase (LDH) reaction usually make assumptions on the stability of rate constants and/or the validity of the two-site exchange model. In this study, we developed a framework to test the validity of the assumption of stable reaction rate constants and of the two-site exchange model in vivo via ratiometric fitting of the time courses of the signal ratio L(t)/P(t). Our analysis provided evidence that the LDH enzymatic kinetics observed by hyperpolarized NMR are in near-equilibrium and satisfy the two-site exchange model for only a specific time window. In addition, we quantified both the forward and reverse exchange rate constants of the LDH reaction for the transgenic and mouse xenograft models of breast cancer using the ratio fitting method developed, which includes only two modeling parameters and is less sensitive to the influence of instrument settings/protocols, such as flip angles, degree of polarization and tracer dosage. We further compared the ratio fitting method with a conventional two-site exchange modeling method, i.e. the differential equation fitting method, using both experimental and simulated hyperpolarized NMR data. The ratio fitting method appeared to fit better than the differential equation fitting method for the reverse rate constant on the mouse tumor data, with smaller relative errors on average, whereas the differential equation fitting method also yielded a negative reverse rate constant for one tumor. The simulation results indicated that the accuracy of both methods depends on the width of the transport function, the noise level and the rate constant ratio; one method may be more accurate than the other under the experimental/biological conditions aforementioned. We were able to categorize our tumor models into specific conditions of the computer simulation and to estimate the errors of rate quantification. We also discussed possible
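The insensitivity of the L(t)/P(t) ratio to signal-decay settings can be illustrated with a toy two-site exchange simulation: when both pools decay at the same effective rate, the decay factor cancels out of the ratio. A sketch with illustrative rate constants, not the paper's fitted values:

```python
import math

def simulate_ratio(kp, kl, r1, dt, n_steps, p0=1.0, l0=0.0):
    """Toy two-site exchange P <-> L with a common decay rate r1 (sketch).
    The exchange term is integrated with an Euler step; decay is applied
    exactly via exp(-r1*dt), so the two effects are operator-split."""
    p, l, ratios = p0, l0, []
    for _ in range(n_steps):
        dp = dt * (-kp * p + kl * l)       # forward/reverse exchange only
        dl = dt * ( kp * p - kl * l)
        decay = math.exp(-r1 * dt)         # common relaxation of both pools
        p, l = (p + dp) * decay, (l + dl) * decay
        ratios.append(l / p)
    return ratios

# the L/P ratio evolves identically for different decay rates, which is
# why a ratio fit is insensitive to settings that scale both signals
r_fast = simulate_ratio(kp=0.05, kl=0.02, r1=1 / 20, dt=0.1, n_steps=600)
r_slow = simulate_ratio(kp=0.05, kl=0.02, r1=1 / 60, dt=0.1, n_steps=600)
```

The ratio rises monotonically toward the equilibrium value kp/kl, so a fit of the ratio time course constrains both rate constants with only two modeling parameters.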
Atmospheric N deposition and feedbacks on net ecosystem CO2 exchange at a semi-natural peatland site
NASA Astrophysics Data System (ADS)
Hurkuck, Miriam; Brümmer, Christian; Spott, Oliver; Flessa, Heinz; Kutsch, Werner L.
2013-04-01
Large areas of Northern Germany have been converted from natural peat bogs to arable land and were subjected to draining and peat cutting in the past. The few protected peatland areas remaining are affected by high nitrogen (N) deposition. This is the case at our study site - a semi-natural raised bog - which, although located in a natural park, is surrounded by highly fertilized agricultural land and highly emitting animal husbandry farms. In this study, we use a combined approach of two independent methods to quantify atmospheric N deposition. We further investigate possible feedbacks of seasonal variation in N deposition on net ecosystem CO2 exchange (NEE). Fluxes of ammonia (NH3) and its atmospheric reactants are measured by a KAPS-denuder system. Additionally, total N input from the atmosphere into a soil-plant model ecosystem is investigated by a 15N dilution method called 'Integrated Total Nitrogen Input' (ITNI). With this approach, we allocate atmospheric N after its uptake by the ecosystem into its different fractions and investigate both plant-species effects (Lolium multiflorum, Eriophorum vaginatum) and influences of the plant biomass production induced by different amounts of fertilizer addition. Continuous eddy-covariance measurements are carried out to measure NEE. Maximum NH3 depositions of 0.41 ± 0.04 kg ha-1 week-1 were found in spring 2012. The proportion of fluxes of other N compounds such as HNO3, aerosol NH4 and NO3 was usually around 20 % of total dry N measured by KAPS denuders. In total, dry N deposition was 11.2 ± 0.9 kg N ha-1 yr-1 over the first year of experiments. Complemented with wet N measurements using bulk samplers, total N depositions of about 25.0 kg ha-1 yr-1 were found. The mean atmospheric N uptake determined with the ITNI system was 3.99 ± 0.82 mg N g-1 dry weight from July to October 2011. About two thirds of total deposited airborne N was allocated in above-ground plant biomass and roots. Upscaling of data based on pot
NASA Astrophysics Data System (ADS)
Lüers, J.; Westermann, S.; Piel, K.; Boike, J.
2014-01-01
The annual variability of CO2 exchange in most ecosystems is primarily driven by the activities of plants and soil microorganisms. However, little is known about the carbon balance and its controlling factors outside the growing season in Arctic regions dominated by soil freeze/thaw processes, long-lasting snow cover, and several months of darkness. This study presents a complete annual cycle of the CO2 net ecosystem exchange (NEE) dynamics for a High Arctic tundra area on the west coast of Svalbard based on eddy-covariance flux measurements. The annual cumulative CO2 budget is close to zero grams carbon per square meter per year, but shows a very strong seasonal variability. Four major CO2 exchange seasons have been identified. (1) During summer (snow-free ground), the CO2 exchange occurs mainly as a result of biological activity, with a predominance of strong CO2 assimilation by the ecosystem. (2) The autumn (snow-free or partly snow-covered ground) is dominated by CO2 respiration as a result of biological activity. (3) In winter and spring (snow-covered ground), low but persistent CO2 release occurs, overlain by considerable CO2 exchange events in both directions associated with changes of air masses and atmospheric air pressure. (4) During the snow melt season (a pattern of snow-free and snow-covered areas), both meteorological and biological forcing result in a visible carbon uptake by the High Arctic ecosystem. Data related to this article are archived under: http://doi.pangaea.de/10.1594/PANGAEA.809507.
NASA Astrophysics Data System (ADS)
Lüers, J.; Westermann, S.; Piel, K.; Boike, J.
2014-11-01
The annual variability of CO2 exchange in most ecosystems is primarily driven by the activities of plants and soil microorganisms. However, little is known about the carbon balance and its controlling factors outside the growing season in Arctic regions dominated by soil freeze/thaw processes, long-lasting snow cover, and several months of darkness. This study presents a complete annual cycle of the CO2 net ecosystem exchange (NEE) dynamics for a high Arctic tundra area at the west coast of Svalbard based on eddy covariance flux measurements. The annual cumulative CO2 budget is close to 0 g C m-2 yr-1, but displays a strong seasonal variability. Four major CO2 exchange seasons have been identified. (1) During summer (snow-free ground), the CO2 exchange occurs mainly as a result of biological activity, with a dominance of strong CO2 assimilation by the ecosystem. (2) The autumn (snow-free ground or partly snow-covered) is dominated by CO2 respiration as a result of biological activity. (3) In winter and spring (snow-covered ground), low but persistent CO2 release occurs, overlain by considerable CO2 exchange events in both directions associated with high wind speed and changes of air masses and atmospheric air pressure. (4) The snow melt season (pattern of snow-free and snow-covered areas) is associated with both meteorological and biological forcing, resulting in a carbon uptake by the high Arctic ecosystem. Data related to this article are archived at http://doi.pangaea.de/10.1594/PANGAEA.809507.
Deterministic photon bias in speckle imaging
NASA Technical Reports Server (NTRS)
Beletic, James W.
1989-01-01
A method for determining photon bias terms in speckle imaging is presented, and photon bias is shown to be a deterministic quantity that can be calculated without the use of the expectation operator. The quantities obtained are found to be identical to previous results. The present results have extended photon bias calculations to the important case of the bispectrum where photon events are assigned different weights, in which regime the bias is a frequency dependent complex quantity that must be calculated for each frame.
Minimal Deterministic Physicality Applied to Cosmology
NASA Astrophysics Data System (ADS)
Valentine, John S.
This report summarizes ongoing research and development since our 2012 foundation paper, including the emergent effects of a deterministic mechanism for fermion interactions: (1) the coherence of black holes and particles using a quantum chaotic model; (2) wide-scale (anti)matter prevalence from exclusion and weak interaction during the fermion reconstitution process; and (3) red-shift due to variations of vacuum energy density. We provide a context for Standard Model fields, and show how gravitation can be accountably unified in the same mechanism, but not as a unified field.
Deterministic quantum computation with one photonic qubit
NASA Astrophysics Data System (ADS)
Hor-Meyll, M.; Tasca, D. S.; Walborn, S. P.; Ribeiro, P. H. Souto; Santos, M. M.; Duzzioni, E. I.
2015-07-01
We show that deterministic quantum computing with one qubit (DQC1) can be experimentally implemented with a spatial light modulator, using the polarization and the transverse spatial degrees of freedom of light. The scheme allows the computation of the trace of a high-dimension matrix, limited only by the resolution of the modulator panel and by technical imperfections. In order to illustrate the method, we compute the normalized trace of unitary matrices and implement the Deutsch-Jozsa algorithm. The largest matrix that can be manipulated with our setup is 1080 × 1920, which is able to represent a system with approximately 21 qubits.
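The trace-estimation step at the heart of DQC1 can be checked with a small density-matrix simulation: a pure ancilla in |+>, a maximally mixed register, a controlled-U, and Pauli expectation values on the ancilla recover Tr(U)/2^n. This is a generic sketch of the DQC1 circuit, not the optical implementation described in the paper:

```python
import numpy as np

def dqc1_normalized_trace(U):
    """Density-matrix simulation of the DQC1 circuit: ancilla in |+>,
    maximally mixed d-dimensional register, controlled-U, then Pauli X/Y
    expectation values on the ancilla, which combine to Tr(U) / d."""
    d = U.shape[0]
    plus = np.full((2, 2), 0.5)                    # |+><+| ancilla state
    rho = np.kron(plus, np.eye(d) / d)             # ancilla (x) mixed register
    CU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])         # apply U iff ancilla is |1>
    rho = CU @ rho @ CU.conj().T
    # reduced ancilla state: partial trace over the register
    anc = np.array([[np.trace(rho[:d, :d]), np.trace(rho[:d, d:])],
                    [np.trace(rho[d:, :d]), np.trace(rho[d:, d:])]])
    X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]])
    exp_x = np.trace(anc @ X).real                 # = Re Tr(U) / d
    exp_y = np.trace(anc @ Y).real                 # = Im Tr(U) / d
    return exp_x + 1j * exp_y                      # = Tr(U) / d

theta = 0.7
U = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])  # Tr(U) = 2*cos(theta)
est = dqc1_normalized_trace(U)
```

The key point is that the register is never purified or measured: the normalized trace appears entirely in the ancilla's coherence, which is why a single (pseudo-)pure qubit suffices.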
Knauf, P.A.; Law, F.Y.; Tarshis, T.; Furuya, W.
1984-05-01
External N-(4-azido-2-nitrophenyl)taurine (NAP-taurine) inhibits human red cell chloride exchange by binding to a site that is distinct from the chloride transport site. Increases in the intracellular chloride concentration (at constant external chloride) cause an increase in the inhibitory potency of external NAP-taurine. This effect is not due to the changes in pH or membrane potential that usually accompany a chloride gradient, since even when these changes are reversed or eliminated the inhibitory potency remains high. According to the ping-pong model for anion exchange, such transmembrane effects of intracellular chloride on external NAP-taurine can be explained if NAP-taurine only binds to its site when the transport site is in the outward-facing (Eo or EClo) form. Since NAP-taurine prevents the conformational change from EClo to ECli, it must lock the system in the outward-facing form. NAP-taurine can therefore be used just like the competitive inhibitor H2DIDS (4,4'-diisothiocyano-1,2-diphenylethane-2,2'-disulfonic acid) to monitor the fraction of transport sites that face outward. A quantitative analysis of the effects of chloride gradients on the inhibitory potency of NAP-taurine and H2DIDS reveals that the transport system is intrinsically asymmetric, such that when Cli = Clo, most of the unloaded transport sites face the cytoplasmic side of the membrane. 30 references, 7 figures, 3 tables.
Ngo, Sam; Chiang, Vicky; Guo, Zhefeng
2012-11-01
Amyloid formation is associated with a range of debilitating human disorders including Alzheimer's and prion diseases. The amyloid structure is essential for understanding the role of amyloids in these diseases. Amyloid formation of Ure2 protein underlies the yeast prion [URE3]. Here we use site-directed spin labeling and electron paramagnetic resonance (EPR) spectroscopy to investigate the structure of amyloid fibrils formed by the Ure2 prion domain. The Ure2 prion domain under study contains a Sup35M domain at the C-terminus as a solubilization element. We introduced spin labels at every residue from positions 2-15, and at every 5th residue from positions 20-80, in the Ure2 prion domain. EPR spectra at most labeling sites show strong spin exchange interactions, suggesting a parallel in-register β structure. With quantitative analysis of spin exchange interactions, we show that residues 8-12 form the first β strand, followed by a turn at residues 13-14, and then the second β strand from residue 15 to at least residue 20. Comparison of the spin exchange frequency for fibrils formed under quiescent and agitated conditions also revealed differences in the fibril structures. Currently there is a lack of techniques for in-depth structural studies of amyloid fibrils. Detailed structural information is obtained almost exclusively from solid-state NMR. The identification of β-strand and turn regions in this work suggests that quantitative analysis of spin exchange interactions in spin-labeled amyloid fibrils is a powerful approach for identifying β-strand and turn/loop residues and for studying structural differences of different fibril polymorphs.
Discrete Deterministic and Stochastic Petri Nets
NASA Technical Reports Server (NTRS)
Zijal, Robert; Ciardo, Gianfranco
1996-01-01
Petri nets augmented with timing specifications have gained wide acceptance in the area of performance and reliability evaluation of complex systems exhibiting concurrency, synchronization, and conflicts. The state space of time-extended Petri nets is mapped onto its basic underlying stochastic process, which can be shown to be Markovian under the assumption of exponentially distributed firing times. The integration of exponentially and non-exponentially distributed timing is still one of the major problems for the analysis and was first attacked for continuous-time Petri nets at the cost of structural or analytical restrictions. We propose a discrete deterministic and stochastic Petri net (DDSPN) formalism with no imposed structural or analytical restrictions, where transitions can fire either in zero time or according to arbitrary firing times that can be represented as the time to absorption in a finite absorbing discrete time Markov chain (DTMC). Exponentially distributed firing times are then approximated arbitrarily well by geometric distributions. Deterministic firing times are a special case of the geometric distribution. The underlying stochastic process of a DDSPN is then also a DTMC, from which the transient and stationary solution can be obtained by standard techniques. A comprehensive algorithm and some state space reduction techniques for the analysis of DDSPNs are presented, comprising the automatic detection of conflicts and confusions, which removes a major obstacle for the analysis of discrete time models.
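The geometric approximation of exponential firing times mentioned above is straightforward: with a tick of length δ, a transition with rate λ fires in each tick with probability 1 − exp(−λδ), so its firing time is geometric and converges to the exponential distribution as δ → 0. A small sketch with illustrative numbers:

```python
import math
import random

def geometric_firing_time(lam, delta, rng):
    """Discrete-time approximation of an exponential firing time: in each
    tick of length delta the transition fires with probability
    p = 1 - exp(-lam * delta), i.e. the tick count is geometric."""
    p = 1.0 - math.exp(-lam * delta)
    ticks = 1
    while rng.random() >= p:
        ticks += 1
    return ticks * delta

rng = random.Random(42)
lam, delta = 2.0, 0.01
samples = [geometric_firing_time(lam, delta, rng) for _ in range(20_000)]
mean_time = sum(samples) / len(samples)   # approaches 1/lam as delta -> 0
```

The exact discrete mean is delta / (1 - exp(-lam*delta)), which exceeds 1/lam by roughly delta/2; shrinking the tick length trades state-space size for approximation accuracy, which is the core tension in DDSPN analysis.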
Deterministic magnetorheological finishing of optical aspheric mirrors
NASA Astrophysics Data System (ADS)
Song, Ci; Dai, Yifan; Peng, Xiaoqiang; Li, Shengyi; Shi, Feng
2009-05-01
A new method, magnetorheological finishing (MRF), is applied to the deterministic finishing of optical aspheric mirrors to overcome several disadvantages of conventional polishing, including low finishing efficiency, long iteration times, and unstable convergence. Following an introduction to the basic principle of MRF, the key techniques required to implement deterministic MRF are discussed. To demonstrate the method, a 200 mm diameter K9 glass concave asphere with a vertex radius of 640 mm was figured on an MRF polishing tool of our own development. After a single process of about two hours, the surface accuracy peak-to-valley (PV) was improved from an initial 0.216λ to a final 0.179λ, and the root-mean-square (RMS) was improved from 0.027λ to 0.017λ (λ = 0.6328 µm). This high-precision, high-efficiency convergence of the aspheric surface error shows that MRF is an advanced optical manufacturing method with a high convergence ratio of surface figure, high optical surfacing precision, and a stable, controllable finishing process. Deterministic MRF of optical aspheric mirrors is therefore credible and stable, and its advantages also apply to finishing other optical element types, such as plane and spherical mirrors.
Deterministic forward scatter from surface gravity waves.
Deane, Grant B; Preisig, James C; Tindle, Chris T; Lavery, Andone; Stokes, M Dale
2012-12-01
Deterministic structures in sound reflected by gravity waves, such as focused arrivals and Doppler shifts, have implications for underwater acoustics and sonar, and the performance of underwater acoustic communications systems. A stationary phase analysis of the Helmholtz-Kirchhoff scattering integral yields the trajectory of focused arrivals and their relationship to the curvature of the surface wave field. Deterministic effects along paths up to 70 water depths long are observed in shallow water measurements of surface-scattered sound at the Martha's Vineyard Coastal Observatory. The arrival time and amplitude of surface-scattered pulses are reconciled with model calculations using measurements of surface waves made with an upward-looking sonar mounted mid-way along the propagation path. The root mean square difference between the modeled and observed pulse arrival amplitude and delay, respectively, normalized by the maximum range of amplitudes and delays, is found to be 0.2 or less for the observation periods analyzed. Cross-correlation coefficients for modeled and observed pulse arrival delays varied from 0.83 to 0.16 depending on surface conditions. Cross-correlation coefficients for normalized pulse energy for the same conditions were small and varied from 0.16 to 0.06. In contrast, the modeled and observed pulse arrival delay and amplitude statistics were in good agreement.
Deterministic prediction of surface wind speed variations
NASA Astrophysics Data System (ADS)
Drisya, G. V.; Kiplangat, D. C.; Asokan, K.; Satheesh Kumar, K.
2014-11-01
Accurate prediction of wind speed is an important aspect of various tasks related to wind energy management such as wind turbine predictive control and wind power scheduling. The most typical characteristic of wind speed data is its persistent temporal variations. Most of the techniques reported in the literature for prediction of wind speed and power are based on statistical methods or probabilistic distribution of wind speed data. In this paper we demonstrate that deterministic forecasting methods can make accurate short-term predictions of wind speed using past data, at locations where the wind dynamics exhibit chaotic behaviour. The predictions are remarkably accurate up to 1 h with a normalised RMSE (root mean square error) of less than 0.02 and reasonably accurate up to 3 h with an error of less than 0.06. Repeated application of these methods at 234 different geographical locations for predicting wind speeds at 30-day intervals for 3 years reveals that the accuracy of prediction is more or less the same across all locations and time periods. Comparison of the results with f-ARIMA model predictions shows that the deterministic models with suitable parameters are capable of returning improved prediction accuracy and capturing the dynamical variations of the actual time series more faithfully. These methods are simple and computationally efficient and require only records of past data for making short-term wind speed forecasts within practically tolerable margin of errors.
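Deterministic forecasting of this kind is typically implemented as local prediction in a reconstructed phase space: delay-embed the scalar series, find the nearest neighbours of the current delay vector, and average their successors. A sketch of that generic scheme, using a logistic-map surrogate in place of wind data; embedding and neighbourhood parameters are illustrative:

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Time-delay embedding of a scalar series into dim-dimensional vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def nn_predict(train, query, dim=3, tau=1, k=4):
    """Local deterministic forecast: average the successors of the k nearest
    neighbours of the current delay vector in the training series."""
    emb = delay_embed(train[:-1], dim, tau)   # each vector keeps a successor
    succ = train[(dim - 1) * tau + 1 :]       # value following each vector
    dist = np.linalg.norm(emb - query, axis=1)
    return succ[np.argsort(dist)[:k]].mean()

# chaotic surrogate series (logistic map) standing in for wind-speed data
x = np.empty(4200)
x[0] = 0.3
for t in range(len(x) - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

dim = 3
train, held_out = x[:4000], x[4000:]
errors = []
for t in range(dim - 1, len(held_out) - 1):
    query = held_out[t - dim + 1 : t + 1]     # most recent delay vector
    errors.append(abs(nn_predict(train, query, dim) - held_out[t + 1]))
mae = sum(errors) / len(errors)               # one-step mean absolute error
```

On chaotic data the one-step error stays small while multi-step error grows with the Lyapunov exponent, which matches the paper's observation that accuracy degrades from the 1 h to the 3 h horizon.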
Deterministic Creation of Macroscopic Cat States
Lombardo, Daniel; Twamley, Jason
2015-01-01
Despite current technological advances, observing quantum mechanical effects outside of the nanoscopic realm is extremely challenging. For this reason, the observation of such effects on larger scale systems is currently one of the most attractive goals in quantum science. Many experimental protocols have been proposed for both the creation and observation of quantum states on macroscopic scales, in particular, in the field of optomechanics. The majority of these proposals, however, rely on performing measurements, making them probabilistic. In this work we develop a completely deterministic method of macroscopic quantum state creation. We study the prototypical optomechanical Membrane In The Middle model and show that by controlling the membrane’s opacity, and through careful choice of the optical cavity initial state, we can deterministically create and grow the spatial extent of the membrane’s position into a large cat state. It is found that by using a Bose-Einstein condensate as a membrane high fidelity cat states with spatial separations of up to ∼300 nm can be achieved. PMID:26345157
Schwalm, C.R.; Williams, C.A.; Schaefer, K.; Anderson, R.; Arain, M.A.; Baker, I.; Black, T.A.; Chen, G.; Ciais, P.; Davis, K. J.; Desai, A. R.; Dietze, M.; Dragoni, D.; Fischer, M.L.; Flanagan, L.B.; Grant, R.F.; Gu, L.; Hollinger, D.; Izaurralde, R.C.; Kucharik, C.; Lafleur, P.M.; Law, B.E.; Li, L.; Li, Z.; Liu, S.; Lokupitiya, E.; Luo, Y.; Ma, S.; Margolis, H.; Matamala, R.; McCaughey, H.; Monson, R. K.; Oechel, W. C.; Peng, C.; Poulter, B.; Price, D.T.; Riciutto, D.M.; Riley, W.J.; Sahoo, A.K.; Sprintsin, M.; Sun, J.; Tian, H.; Tonitto, C.; Verbeeck, H.; Verma, S.B.
2011-06-01
Our current understanding of terrestrial carbon processes is represented in various models used to integrate and scale measurements of CO2 exchange from remote sensing and other spatiotemporal data. Yet assessments are rarely conducted to determine how well models simulate carbon processes across vegetation types and environmental conditions. Using standardized data from the North American Carbon Program we compare observed and simulated monthly CO2 exchange from 44 eddy covariance flux towers in North America and 22 terrestrial biosphere models. The analysis period spans ~220 site-years, 10 biomes, and includes two large-scale drought events, providing a natural experiment to evaluate model skill as a function of drought and seasonality. We evaluate models' ability to simulate the seasonal cycle of CO2 exchange using multiple model skill metrics and analyze links between model characteristics, site history, and model skill. Overall model performance was poor; the difference between observations and simulations was ~10 times observational uncertainty, with forested ecosystems better predicted than nonforested. Model-data agreement was highest in summer and in temperate evergreen forests. In contrast, model performance declined in spring and fall, especially in ecosystems with large deciduous components, and in dry periods during the growing season. Models used across multiple biomes and sites, the mean model ensemble, and a model using assimilated parameter values showed high consistency with observations. Models with the highest skill across all biomes all used prescribed canopy phenology, calculated NEE as the difference between GPP and ecosystem respiration, and did not use a daily time step.
NASA Astrophysics Data System (ADS)
Kazazić, Saša; Bertoša, Branimir; Luić, Marija; Mikleušević, Goran; Tarnowski, Krzysztof; Dadlez, Michal; Narczyk, Marta; Bzowska, Agnieszka
2016-01-01
The biologically active form of purine nucleoside phosphorylase (PNP) from Escherichia coli (EC 2.4.2.1) is a homohexamer unit, assembled as a trimer of dimers. Upon binding of phosphate, neighboring monomers adopt different active site conformations, described as open and closed. To get insight into the functions of the two distinctive active site conformations, the virtually inactive Arg24Ala mutant was complexed with phosphate; all active sites were found to be in the open conformation. To understand how the sites of neighboring monomers communicate with each other, we have combined H/D exchange (H/DX) experiments with molecular dynamics (MD) simulations. Both methods point to the mobility of the enzyme, associated with a few flexible regions situated at the surface and within the dimer interface. Although H/DX provides an average extent of deuterium uptake for all six hexamer active sites, it was able to indicate the dynamic mechanism of cross-talk between monomers (allostery). Using this technique, it was found that phosphate binding to the wild type (WT) causes arrest of the molecular motion in backbone fragments that are flexible in a ligand-free state. This was not the case for the Arg24Ala mutant. Upon nucleoside substrate/inhibitor binding, some release of the phosphate-induced arrest is observed for the WT, whereas the opposite effects occur for the Arg24Ala mutant. MD simulations confirmed that phosphate is bound tightly in the closed active sites of the WT; conversely, in the open conformation of the active site of the WT, phosphate is bound loosely, moving towards the exit of the active site. In the Arg24Ala mutant binary complex, Pi is bound loosely, too.
Turning Indium Oxide into a Superior Electrocatalyst: Deterministic Heteroatoms
Zhang, Bo; Zhang, Nan Nan; Chen, Jian Fu; Hou, Yu; Yang, Shuang; Guo, Jian Wei; Yang, Xiao Hua; Zhong, Ju Hua; Wang, Hai Feng; Hu, P.; Zhao, Hui Jun; Yang, Hua Gui
2013-01-01
Efficient electrocatalysts for many heterogeneous catalytic processes in energy conversion and storage systems must possess the necessary surface active sites. Here we identify, from X-ray photoelectron spectroscopy and density functional theory calculations, that controlling charge density redistribution via the atomic-scale incorporation of heteroatoms is paramount to introducing surface active sites. We engineer deterministic nitrogen atoms inserted into the bulk material to preferentially expose active sites, turning the inactive material into an efficient electrocatalyst. The excellent electrocatalytic activity of N-In2O3 nanocrystals leads to higher performance of dye-sensitized solar cells (DSCs) than the DSCs fabricated with Pt. This successful strategy provides a rational design route for transforming abundant materials into highly efficient electrocatalysts. More importantly, the exciting discovery of turning the commonly used transparent conductive oxide (TCO) in DSCs into a counter electrode material means that, besides decreasing the cost, the device structure and processing techniques of DSCs can be simplified in the future. PMID:24173503
Statistical properties of deterministic Bernoulli flows
Radunskaya, A.E.
1992-12-31
This thesis presents several new theorems about the stability and the statistical properties of deterministic chaotic flows. Many concrete systems known to exhibit deterministic chaos have so far been shown to be of a class known as Bernoulli Flows. This class of flows is characterized by the Finitely Determined property, which can be checked in specific cases. The first theorem says that these flows can be modeled arbitrarily well for all time by continuous-time finite state Markov processes. In other words it is theoretically possible to model the flow arbitrarily well by a computer equipped with a roulette wheel. There follows a stability result, which says that one can distort the measurements made on the processes without affecting the approximation. These results are then applied to the problem of distinguishing deterministic chaos from stochastic processes in the analysis of time series. The second part of the thesis deals with a specific set of examples. Although it has been possible to analyze specific systems to determine whether they lie in the class of Bernoulli systems, the standard techniques rely on the construction of expanding and contracting fibers in the phase space of the system. These fibers are then used to coordinatize the phase space and to prove the existence of a hyperbolic structure. Unfortunately such methods may fail in the general case, where smoothness conditions and a small singular set cannot be assumed. For example, suppose the standard billiard flow on a square table with a perfectly round obstacle, which is known to be Bernoulli, is replaced by a similar flow on a table with a bumpy fractal-like obstacle: a model perhaps closer to nature. It is shown that these fibers no longer exist and hence cannot be used in the standard manner to prove Bernoulliness or ergodicity. But, one can use the fact that the class of Bernoulli flows is closed in the d-bar metric to show that this billiard flow with a bumpy obstacle is in fact Bernoulli.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Gilmer, J; Rubenchik, A; Shirk, M
2004-12-08
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-D features and with 100-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels, as well as materials such as diamond and vanadium. The motivation for this project was to investigate the physics and chemistry that control the interactions of solid surfaces with laser beams and ion beams, with a view towards their applicability to the desired deterministic fabrication processes. As part of this LDRD project, one of our goals was to advance the state of the art for experimental work, but, in order to create ultimately a deterministic capability for such precision micromachining, another goal was to form a new modeling/simulation capability that could also extend the state of the art in this field. We have achieved both goals. In this project, we have, for the first time, combined a 1-D hydrocode (''HYADES'') with a 3-D molecular dynamics simulator (''MDCASK'') in our modeling studies. In FY02 and FY03, we investigated the ablation/surface-modification processes that occur on copper, gold, and nickel substrates with the use of sub-ps laser pulses. In FY04, we investigated laser ablation of carbon, including laser-enhanced chemical reaction on the carbon surface for both vitreous carbon and carbon aerogels. Both experimental and modeling results will be presented in the report that follows. The immediate impact of our investigation was a much better understanding of the chemical and physical processes that ensue when solid materials are exposed to femtosecond laser pulses. More broadly, we have better positioned LLNL to design a cluster tool for fabricating mesoscale objects utilizing laser pulses and ion-beams as well as more traditional machining/manufacturing techniques for applications such as components in NIF targets, remote sensors, including
Beitia, Anton Oscar; Kuperman, Gilad; Delman, Bradley N; Shapiro, Jason S
2013-01-01
We evaluated the performance of LOINC® and RadLex standard terminologies for covering CT test names from three sites in a health information exchange (HIE) with the eventual goal of building an HIE-based clinical decision support system to alert providers of prior duplicate CTs. Given the goal, the most important parameter to assess was coverage for high frequency exams that were most likely to be repeated. We showed that both LOINC® and RadLex provided sufficient coverage for our use case through calculations of (a) high coverage of 90% and 94%, respectively for the subset of CTs accounting for 99% of exams performed and (b) high concept token coverage (total percentage of exams performed that map to terminologies) of 92% and 95%, respectively. With trends toward greater interoperability, this work may provide a framework for those wishing to map radiology site codes to a standard nomenclature for purposes of tracking resource utilization.
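The two coverage figures described above can be computed as in this sketch; the exam names and counts are hypothetical stand-ins, not the study's actual HIE data.

```python
from collections import Counter

# Hypothetical site CT exam codes with usage counts, and the subset that
# mapped to a standard terminology (names and counts are illustrative only).
exam_counts = Counter({"CT HEAD": 500, "CT CHEST": 300, "CT ABD/PELVIS": 150,
                       "CT SINUS": 40, "CT RARE PROTOCOL": 10})
mapped = {"CT HEAD", "CT CHEST", "CT ABD/PELVIS", "CT SINUS"}

# Concept coverage: fraction of distinct exam names that map to the terminology.
concept_coverage = len(mapped) / len(exam_counts)

# Concept token coverage: fraction of performed exams whose name maps,
# weighting each exam name by how often it was actually ordered.
total = sum(exam_counts.values())
token_coverage = sum(n for name, n in exam_counts.items() if name in mapped) / total

print(concept_coverage, token_coverage)
```

The weighting explains why token coverage (95%-ish in the study) can exceed concept coverage: the high-volume exams are exactly the ones most likely to be mapped.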
King, A.W.; DeAngelis, D.L.; Post, W.M.
1987-12-01
Ecological models of the seasonal exchange of carbon dioxide (CO2) between the atmosphere and the terrestrial biosphere are needed in the study of changes in atmospheric CO2 concentration. In response to this need, a set of site-specific models of seasonal terrestrial carbon dynamics was assembled from open-literature sources. The collection was chosen as a base for the development of biome-level models for each of the earth's principal terrestrial biomes or vegetation complexes. The primary disadvantage of this approach is the problem of extrapolating the site-specific models across large regions having considerable biotic, climatic, and edaphic heterogeneity. Two methods of extrapolation were tested. 142 refs., 59 figs., 47 tabs
Schwalm, Christopher R; Williams, Christopher A; Schaefer, Kevin; Anderson, Ryan; Arain, M A; Baker, Ian; Barr, Alan; Black, T Andrew; Chen, Guangsheng; Chen, Jing Ming; Ciais, Philippe; Davis, Kenneth J; Desai, Ankur R; Dietze, Michael; Dragoni, Danilo; Fischer, Marc; Flanagan, Lawrence; Grant, Robert; Gu, Lianghong; Hollinger, D; Izaurralde, Roberto C; Kucharik, Chris; Lafleur, Peter; Law, Beverly E; Li, Longhui; Li, Zhengpeng; Liu, Shuguang; Lokupitiya, Erandathie; Luo, Yiqi; Ma, Siyan; Margolis, Hank; Matamala, R; McCaughey, Harry; Monson, Russell K; Oechel, Walter C; Peng, Changhui; Poulter, Benjamin; Price, David T; Riciutto, Dan M; Riley, William; Sahoo, Alok Kumar; Sprintsin, Michael; Sun, Jianfeng; Tian, Hanqin; Tonitto, Christine; Verbeeck, Hans; Verma, Shashi B
2010-12-09
There is a continued need for models to improve consistency and agreement with observations [Friedlingstein et al., 2006], both overall and under more frequent extreme climatic events related to global environmental change such as drought [Trenberth et al., 2007]. Past validation studies of terrestrial biosphere models have focused on only a few models and sites, typically in close proximity and primarily in forested biomes [e.g., Amthor et al., 2001; Delpierre et al., 2009; Grant et al., 2005; Hanson et al., 2004; Granier et al., 2007; Ichii et al., 2009; Ito, 2008; Siqueira et al., 2006; Zhou et al., 2008]. Furthermore, assessing model-data agreement relative to drought requires, in addition to high-quality observed CO2 exchange data, a reliable drought metric as well as a natural experiment across sites and drought conditions.
Deterministic approaches to coherent diffractive imaging
NASA Astrophysics Data System (ADS)
Allen, L. J.; D'Alfonso, A. J.; Martin, A. V.; Morgan, A. J.; Quiney, H. M.
2016-01-01
In this review we will consider the retrieval of the wave at the exit surface of an object illuminated by a coherent probe from one or more measured diffraction patterns. These patterns may be taken in the near-field (often referred to as images) or in the far field (the Fraunhofer diffraction pattern, where the wave is the Fourier transform of that at the exit surface). The retrieval of the exit surface wave from such data is an inverse scattering problem. This inverse problem has historically been solved using nonlinear iterative methods, which suffer from convergence and uniqueness issues. Here we review deterministic approaches to obtaining the exit surface wave which ameliorate those problems.
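The far-field relationship described above, in which the measured diffraction pattern is the squared modulus of the Fourier transform of the exit-surface wave, can be illustrated numerically. The circular-aperture exit wave below is a toy assumption; the point of the sketch is that only the intensity survives measurement, which is what makes retrieving the exit wave an inverse problem.

```python
import numpy as np

# Exit-surface wave of a simple object: a circular aperture with a phase ramp
# (purely illustrative; real exit waves come from the scattering experiment).
n = 256
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
aperture = (x ** 2 + y ** 2 < 40 ** 2).astype(complex)
exit_wave = aperture * np.exp(1j * 0.05 * x)

# Far-field (Fraunhofer) diffraction pattern: intensity of the Fourier transform.
far_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(exit_wave)))
pattern = np.abs(far_field) ** 2

# Only |FT|^2 is recorded; the phase of far_field is lost at the detector,
# so exit_wave cannot be recovered from pattern by a simple inverse FFT.
print(pattern.shape)
```

Iterative phase-retrieval algorithms attempt to recover the lost phase from constraints; the deterministic approaches the review surveys sidestep that iteration and its convergence and uniqueness issues.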
Deterministic polishing from theory to practice
NASA Astrophysics Data System (ADS)
Hooper, Abigail R.; Hoffmann, Nathan N.; Sarkas, Harry W.; Escolas, John; Hobbs, Zachary
2015-10-01
Improving predictability in optical fabrication can go a long way towards increasing profit margins and maintaining a competitive edge in an economic environment where pressure is mounting for optical manufacturers to cut costs. A major source of hidden cost is rework: the share of production that does not meet specification in the first pass through the polishing equipment. Rework substantially adds to the part's processing and labor costs, as well as creating bottlenecks in production lines and frustration for managers, operators and customers. The polishing process consists of several interacting variables, including glass type, polishing pads, machine type, RPM, downforce, slurry type, Baumé level and even the operators themselves. Adjusting the process to get every variable under control while operating in a robust space can not only provide a deterministic polishing process that improves profitability but also produce a higher quality optic.
Pollock, E.O. Jr.
1987-10-15
The active solar Domestic Hot Water (DHW) system at the HQ Army-Air Force Exchange Service (AAFES) Building was designed and constructed as part of the Solar in Federal Buildings Programs (SFBP). This retrofitted system is one of eight of the systems in the SFBP selected for quality monitoring. The purpose of this monitoring effort is to document the performance of quality state-of-the-art solar systems in large federal building applications. The six-story HQ AAFES Building houses a cafeteria, officer's mess and club and office space for 2400 employees. The siphon-return drainback system uses 1147 ft² of Aircraftsman flat-plate collectors to collect solar energy which is used to preheat domestic hot water. Solar energy is stored in a 1329-gallon tank and transferred to the hot water load through a heat exchanger located in the 356-gallon DHW preheat tank. Auxiliary energy is supplied by two gas fired boilers which boost the temperature to 130 °F before it is distributed to the load. Highlights of the performance of the HQ AAFES Building solar system during the monitoring period from August 1984 through May 1985 are presented in this report.
Hans Peter Schmid; Craig Wayson
2009-05-05
The primary objective of this project was to evaluate carbon exchange dynamics across a region of North America between the Great Plains and the East Coast. This region contains about 40 active carbon cycle research (AmeriFlux) sites in a variety of climatic and landuse settings, from upland forest to urban development. The core research involved a scaling strategy that uses measured fluxes of CO2, energy, water, and other biophysical and biometric parameters to train and calibrate surface-vegetation-atmosphere models, in conjunction with satellite (MODIS) derived drivers. To achieve matching of measured and modeled fluxes, the ecosystem parameters of the models will be adjusted to the dynamically variable flux-tower footprints following Schmid (1997). High-resolution vegetation index variations around the flux sites have been derived from Landsat data for this purpose. The calibrated models are being used in conjunction with MODIS data, atmospheric re-analysis data, and digital land-cover databases to derive ecosystem exchange fluxes over the study domain.
Brozek, Carl K.; Cozzolino, Anthony F.; Teat, Simon J.; Chen, Yu-Sheng; Dincă, Mircea
2013-09-23
We employed multiwavelength anomalous X-ray dispersion to determine the relative cation occupation at two crystallographically distinct metal sites in Fe^{2+}-, Cu^{2+}-, and Zn^{2+}-exchanged versions of the microporous metal–organic framework (MOF) known as MnMnBTT (BTT = 1,3,5-benzenetristetrazolate). By exploiting the dispersive differences between Mn, Fe, Cu, and Zn, the extent and location of cation exchange were determined from single crystal X-ray diffraction data sets collected near the K edges of Mn^{2+} and of the substituting metal, and at a wavelength remote from either edge as a reference. Comparing the anomalous dispersion between these measurements indicated that the extent of Mn^{2+} replacement depends on the identity of the substituting metal. We contrasted two unique methods to analyze this data with a conventional approach and evaluated their limitations with emphasis on the general application of this method to other heterometallic MOFs, where site-specific metal identification is fundamental to tuning catalytic and physical properties.
Deterministic-random separation in nonstationary regime
NASA Astrophysics Data System (ADS)
Abboud, D.; Antoni, J.; Sieg-Zieba, S.; Eltabach, M.
2016-02-01
In rotating machinery vibration analysis, the synchronous average is perhaps the most widely used technique for extracting periodic components. Periodic components are typically related to gear vibrations, misalignments, unbalances, blade rotations, reciprocating forces, etc. Their separation from other random components is essential in vibration-based diagnosis in order to discriminate useful information from masking noise. However, synchronous averaging theoretically requires the machine to operate under a stationary regime (i.e. the related vibration signals are cyclostationary) and is otherwise jeopardized by the presence of amplitude and phase modulations. A first object of this paper is to investigate the nature of the nonstationarity induced by the response of a linear time-invariant system subjected to speed varying excitation. For this purpose, the concept of a cyclo-non-stationary signal is introduced, which extends the class of cyclostationary signals to speed-varying regimes. Next, a "generalized synchronous average" (GSA) is designed to extract the deterministic part of a cyclo-non-stationary vibration signal, i.e. the analog of the periodic part of a cyclostationary signal. Two estimators of the GSA have been proposed. The first one returns the synchronous average of the signal at predefined discrete operating speeds. A brief statistical study of it is performed, aiming to provide the user with confidence intervals that reflect the "quality" of the estimator according to the SNR and the estimated speed. The second estimator returns a smoothed version of the former by enforcing continuity over the speed axis. It helps to reconstruct the deterministic component by tracking a specific trajectory dictated by the speed profile (assumed to be known a priori). The proposed method is validated first on synthetic signals and then on actual industrial signals. The usefulness of the approach is demonstrated on envelope-based diagnosis of bearings in variable
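A minimal sketch of the classical synchronous averaging that the abstract starts from, in the stationary case; the signal model (two harmonics plus Gaussian noise, an integer number of samples per revolution) is an illustrative assumption, not the paper's cyclo-non-stationary estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a vibration signal: a periodic (deterministic) component plus noise,
# sampled at an integer number of samples per shaft revolution (stationary case).
samples_per_rev, n_revs = 128, 200
t = np.arange(samples_per_rev * n_revs)
periodic = (np.sin(2 * np.pi * t / samples_per_rev)
            + 0.5 * np.sin(4 * np.pi * t / samples_per_rev))
signal = periodic + rng.normal(scale=1.0, size=t.size)

# Classical synchronous average: fold the signal into revolutions and average.
# Random components shrink as 1/sqrt(n_revs); the periodic part survives intact.
sync_avg = signal.reshape(n_revs, samples_per_rev).mean(axis=0)

residual = np.sqrt(np.mean((sync_avg - periodic[:samples_per_rev]) ** 2))
print(residual)
```

Under speed variation the folding step breaks down, because a "revolution" no longer maps to a fixed number of samples; that is the gap the paper's generalized synchronous average is designed to close.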
Not Available
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.
NASA Astrophysics Data System (ADS)
Morawski, Markus; Reinert, Tilo; Meyer-Klaucke, Wolfram; Wagner, Friedrich E.; Tröger, Wolfgang; Reinert, Anja; Jäger, Carsten; Brückner, Gert; Arendt, Thomas
2015-12-01
Perineuronal nets (PNs) are a specialized form of brain extracellular matrix, consisting of negatively charged glycosaminoglycans, glycoproteins and proteoglycans in the direct microenvironment of neurons. Still, locally immobilized charges in the tissue have not been accessible so far to direct observations and quantifications. Here, we present a new approach to visualize and quantify fixed charge-densities on brain slices using a focused proton-beam microprobe in combination with ionic metallic probes. For the first time, we can provide quantitative data on the distribution and net amount of pericellularly fixed charge-densities, which, determined at 0.4-0.5 M, is much higher than previously assumed. PNs, thus, represent an immobilized ion exchanger with ion sorting properties high enough to partition mobile ions in accord with Donnan-equilibrium. We propose that fixed charge-densities in the brain are involved in regulating ion mobility, the volume fraction of extracellular space and the viscosity of matrix components.
ERIC Educational Resources Information Center
Hamilton, Kendra
2004-01-01
Luann Wright, founder and president of NoIndoctrination.org, a Web site devoted to policing professors accused of harassing conservative students in their classrooms, firmly believes that what she's doing is a public service. "The university should be a market place of ideas, a safe place to explore a variety of perspectives," she says. "But I…
Benedetti-Cecchi, Lisandro; Canepa, Antonio; Fuentes, Veronica; Tamburello, Laura; Purcell, Jennifer E.; Piraino, Stefano; Roberts, Jason; Boero, Ferdinando; Halpin, Patrick
2015-01-01
Jellyfish outbreaks are increasingly viewed as a deterministic response to escalating levels of environmental degradation and climate extremes. However, a comprehensive understanding of the influence of deterministic drivers and stochastic environmental variations favouring population renewal processes has remained elusive. This study quantifies the deterministic and stochastic components of environmental change that lead to outbreaks of the jellyfish Pelagia noctiluca in the Mediterranean Sea. Using data on jellyfish abundance collected at 241 sites along the Catalan coast from 2007 to 2010 we: (1) tested hypotheses about the influence of time-varying and spatial predictors of jellyfish outbreaks; (2) evaluated the relative importance of stochastic vs. deterministic forcing of outbreaks through the environmental bootstrap method; and (3) quantified return times of extreme events. Outbreaks were common in May and June and less likely in other summer months, which resulted in a negative relationship between outbreaks and SST. Cross- and along-shore advection by geostrophic flow were important concentrating forces of jellyfish, but most outbreaks occurred in the proximity of two canyons in the northern part of the study area. This result supported the recent hypothesis that canyons can funnel P. noctiluca blooms towards shore during upwelling. This can be a general, yet unappreciated mechanism leading to outbreaks of holoplanktonic jellyfish species. The environmental bootstrap indicated that stochastic environmental fluctuations have negligible effects on return times of outbreaks. Our analysis emphasized the importance of deterministic processes leading to jellyfish outbreaks compared to the stochastic component of environmental variation. A better understanding of how environmental drivers affect demographic and population processes in jellyfish species will increase the ability to anticipate jellyfish outbreaks in the future. PMID:26485278
Human gait recognition via deterministic learning.
Zeng, Wei; Wang, Cong
2012-11-01
Recognition of temporal/dynamical patterns is among the most difficult pattern recognition tasks. Human gait recognition is a typical difficulty in the area of dynamical pattern recognition. It classifies and identifies individuals by their time-varying gait signature data. Recently, a new dynamical pattern recognition method based on deterministic learning theory was presented, in which a time-varying dynamical pattern can be effectively represented in a time-invariant manner and can be rapidly recognized. In this paper, we present a new model-based approach for human gait recognition via the aforementioned method, specifically for recognizing people by gait. The approach consists of two phases: a training (learning) phase and a test (recognition) phase. In the training phase, side silhouette lower limb joint angles and angular velocities are selected as gait features. A five-link biped model for human gait locomotion is employed to demonstrate that functions containing joint angle and angular velocity state vectors characterize the gait system dynamics. Due to the quasi-periodic and symmetrical characteristics of human gait, the gait system dynamics can be simplified to be described by functions of joint angles and angular velocities of one side of the human body, thus the feature dimension is effectively reduced. Locally-accurate identification of the gait system dynamics is achieved by using radial basis function (RBF) neural networks (NNs) through deterministic learning. The obtained knowledge of the approximated gait system dynamics is stored in constant RBF networks. A gait signature is then derived from the extracted gait system dynamics along the phase portrait of joint angles versus angular velocities. A bank of estimators is constructed using constant RBF networks to represent the training gait patterns. In the test phase, by comparing the set of estimators with the test gait pattern, a set of recognition errors is generated, and the average L1 norms
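The recognition step sketched above, comparing a test pattern against a bank of stored estimators and picking the smallest average L1 norm, can be illustrated minimally; the random feature trajectories below stand in for the paper's RBF-network representations and are purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-in for the bank of estimators: each training gait pattern
# is summarized by a fixed-length feature trajectory. Recognition picks the
# stored pattern with the smallest average L1 norm of the recognition error.
train_patterns = {name: rng.normal(size=100) for name in ("subject_a", "subject_b")}
test_pattern = train_patterns["subject_b"] + rng.normal(scale=0.1, size=100)

errors = {name: np.mean(np.abs(test_pattern - stored))
          for name, stored in train_patterns.items()}
recognized = min(errors, key=errors.get)
print(recognized)
```

The L1 norm of the residual acts as a similarity score: the estimator trained on the matching subject's dynamics produces a much smaller recognition error than the others, which is the decision rule the paper relies on.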
Amor, J.; Swails, J.; Zhu, X.; Roy, C.; Nagai, H.; Ingmundson, A.; Cheng, X.; Kahn, R.
2005-01-01
The Legionella pneumophila protein RalF is secreted into host cytosol via the Dot/Icm type IV transporter where it acts to recruit ADP-ribosylation factor (Arf) to pathogen-containing phagosomes in the establishment of a replicative organelle. The presence in RalF of the Sec7 domain, present in all Arf guanine nucleotide exchange factors, has suggested that recruitment of Arf is an early step in pathogenesis. We have determined the crystal structure of RalF and of the isolated Sec7 domain and found that RalF is made up of two domains. The Sec7 domain is homologous to mammalian Sec7 domains. The C-terminal domain forms a cap over the active site in the Sec7 domain and contains a conserved folding motif, previously observed in adaptor subunits of vesicle coat complexes. The importance of the capping domain and of the glutamate in the 'glutamic finger,' conserved in all Sec7 domains, to RalF functions was examined using three different assays. These data highlight the functional importance of domains other than Sec7 in Arf guanine nucleotide exchange factors to biological activities and suggest novel mechanisms of regulation of those activities.
NASA Astrophysics Data System (ADS)
Tieman, Catherine; Rousseau, Valery
Highly frustrated quantum systems on lattices can exhibit a wide variety of phases. In addition to the usual Mott insulating and superfluid phases, these systems can also produce so-called ``exotic phases,'' such as supersolid and valence-bond-solid phases. An example of a particularly frustrated lattice is the pyrochlore structure, which is formed by corner-sharing tetrahedra. Many real materials adopt this structure, for instance the crystal Cd2Re2O7, which exhibits superconducting properties. However, the complex structure of these materials, combined with the complexity of the dominant interactions that describe them, makes their analytical study difficult. Moreover, approximate methods, such as mean-field theory, fail to give a correct description of these systems. In this work, we report on the first exact quantum Monte Carlo study of a model of hard-core bosons on a pyrochlore lattice with six-site ring-exchange interactions, using the Stochastic Green Function (SGF) algorithm. We analyze the superfluid density and the structure factor as functions of the filling and the ring-exchange interaction strength, and we map out the ground-state phase diagram.
Curved paths in raptor flight: Deterministic models.
Lorimer, John W
2006-10-21
Two deterministic models for the flight of Peregrine Falcons, and possibly other raptors, as they approach their prey are examined mathematically. Both models make two assumptions. The first, applicable to both models, is that the angle of sight between falcon and prey is constant, consistent with observations that the falcon keeps its head straight during flight and keeps on course by use of the deep foveal region of its eye, which allows maximum acuity at an angle of sight of about 45 degrees. The second assumption for the first model (conical spiral) is that the initial direction of flight determines the overall path. For the second model (flight constrained to a tilted plane), a parameter that fixes the orientation of the plane is required. A variational calculation also shows that the tilted-plane flight path is a shortest total path and, consequently, that the conical spiral is another shortest total path. Numerical calculations indicate that the flight paths for the two models are very similar for the experimental conditions under which observations have been made. However, the angles of flight and bank differ significantly. More observations are needed to investigate the applicability of the two models.
Quality control in a deterministic manufacturing environment
Barkman, W.E.; Babelay, E.F.; De Mint, P.D.; Lewis, J.C.; Woodard, L.M.
1985-01-24
An approach is discussed for establishing quality control in processes that exhibit undesired continual or intermittent excursions in key process parameters. The method, called deterministic manufacturing, employs automatic monitoring of the key process variables for process certification but uses only sample certification of the process output to verify the validity of the measurement process. The system uses a local minicomputer to sample the appropriate process parameters that describe the condition of the machine tool, the cutting process, and the computer numerical control system. Sampled data are pre-processed by the minicomputer and then sent to a host computer that maintains a permanent database describing the manufacturing conditions for each workpiece. Parts are accepted if the various parameters remain within the required limits during the machining cycle; the need for additional actions is flagged if limits are exceeded. With this system it is possible to retrospectively examine the process status just prior to the occurrence of a problem.
Deterministic particle transport in a ratchet flow
NASA Astrophysics Data System (ADS)
Beltrame, Philippe; Makhoul, Mounia; Joelson, Maminirina
2016-01-01
This study is motivated by the issue of pumping particles through a periodically modulated channel. We focus on a simplified deterministic model of small-inertia particles within the Stokes flow framework that we call the "ratchet flow." A path-following method is employed in the parameter space in order to retrace the scenario by which bounded periodic solutions lead to particle transport. Depending on whether the magnitude of the particle drag is moderate or large, two main transport mechanisms are identified, in which the role of the parity symmetry of the flow differs. For large drag, transport is induced by flow asymmetry, while for moderate drag, since the full transport-solution bifurcation structure already exists in the symmetric setting, flow asymmetry only makes the transport effective. We analyze the scenarios of current reversal for each mechanism as well as the role of synchronization. In particular, we show that, for large drag, the particle drift is similar to phase slip in a synchronization problem.
Zhang, Hong; Zou, Sheng; Chen, Xiyuan; Ding, Ming; Shan, Guangcun; Hu, Zhaohui; Quan, Wei
2016-07-25
We present a method for monitoring the atomic number density in situ based on atomic spin-exchange relaxation. When the spin polarization P ≪ 1, the atomic number density can be estimated by measuring the magnetic resonance linewidth in an applied DC magnetic field using an all-optical atomic magnetometer. The density measurements showed that the experimental results and the theoretical predictions were consistent in the investigated temperature range from 413 K to 463 K, although the experimental results were approximately 1.5 to 2 times lower than the theoretical predictions estimated from the saturated vapor pressure curve. These deviations were mainly induced by the radiative heat transfer efficiency, which inevitably led to a lower temperature in the cell than the set temperature. PMID:27464172
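The linewidth-to-density relation underlying this kind of measurement can be sketched as follows. Assuming the resonance line is dominated by spin-exchange broadening at P ≪ 1, the FWHM linewidth is Δν ≈ 1/(πT2) with 1/T2 = n·σ_SE·v̄, so n ≈ πΔν/(σ_SE·v̄). This is a simplified textbook form, and all numerical values below (cross section, linewidth, species) are illustrative assumptions, not parameters from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def mean_relative_speed(temp_k, reduced_mass_kg):
    """Mean relative thermal speed of a colliding atom pair."""
    return math.sqrt(8 * K_B * temp_k / (math.pi * reduced_mass_kg))

def density_from_linewidth(fwhm_hz, sigma_se_m2, temp_k, reduced_mass_kg):
    """Atomic number density (m^-3) from a spin-exchange-broadened
    resonance line at P << 1, using the simplified relations
    Delta_nu = 1/(pi*T2) and 1/T2 = n * sigma_SE * vbar,
    hence n = pi * Delta_nu / (sigma_SE * vbar)."""
    vbar = mean_relative_speed(temp_k, reduced_mass_kg)
    return math.pi * fwhm_hz / (sigma_se_m2 * vbar)

# illustrative potassium-like numbers (assumed): sigma_SE ~ 1.8e-18 m^2,
# reduced mass of a 39K-39K pair ~ 3.24e-26 kg, T = 433 K, FWHM = 1 kHz
n = density_from_linewidth(1000.0, 1.8e-18, 433.0, 3.24e-26)
```

With these assumed inputs the inferred density is of order 1e18 m^-3, a plausible magnitude for an alkali vapor cell in this temperature range.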
Traffic chaotic dynamics modeling and analysis of deterministic network
NASA Astrophysics Data System (ADS)
Wu, Weiqiang; Huang, Ning; Wu, Zhitao
2016-07-01
Network traffic is an important and direct factor in network reliability and performance. To understand the behavior of network traffic, chaotic dynamics models have been proposed and have helped greatly in analyzing nondeterministic networks. Previous research held that chaotic behavior was caused by random factors, and that deterministic networks would not exhibit chaotic dynamics because they lack such random factors. In this paper, we first adopt chaos theory to analyze traffic data collected from an avionics full-duplex switched Ethernet (AFDX) testbed, a typical deterministic network, and find that chaotic behavior also exists in deterministic networks. Then, in order to explore the chaos-generating mechanism, we apply mean-field theory to construct a traffic dynamics equation (TDE) for deterministic network traffic modeling, without any random network factors. Through study of the derived TDE, we propose that chaotic dynamics is one of the intrinsic properties of network traffic and can be viewed as the effect of the TDE control parameters. A network simulation was performed, and the results verified that network congestion produces the chaotic dynamics of a deterministic network, consistent with the expectation from the TDE. Our research should be helpful in analyzing the complicated dynamical behavior of traffic in deterministic networks and contribute to network reliability design and analysis.
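The diagnostic at the heart of such an analysis, a positive largest Lyapunov exponent in a fully deterministic system, can be demonstrated on a toy map. The sketch below uses the logistic map as a stand-in for the traffic dynamics equation (the TDE itself is not given in the abstract); for r = 4 the exponent is known analytically to be ln 2:

```python
import math

def lyapunov(r, x0=0.3, n=100_000, burn=1_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    estimated as the long-run average of ln|f'(x)| = ln|r*(1-2x)|.
    A positive value means nearby trajectories diverge exponentially,
    i.e. deterministic chaos with no random factors involved."""
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return acc / n

lam_chaotic = lyapunov(4.0)    # theory: ln 2 ~ 0.6931 (chaotic regime)
lam_periodic = lyapunov(3.2)   # stable 2-cycle: negative exponent
```

The sign of the exponent separates the chaotic regime from the periodic one, which is the same criterion one would apply to the measured AFDX traffic series.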
Stochastic and Deterministic Assembly Processes in Subsurface Microbial Communities
Stegen, James C.; Lin, Xueju; Konopka, Allan; Fredrickson, Jim K.
2012-03-29
A major goal of microbial community ecology is to understand the forces that structure community composition. Deterministic selection by specific environmental factors is sometimes important, but in other cases stochastic or ecologically neutral processes dominate. What is lacking is a unified conceptual framework for understanding why deterministic processes dominate in some contexts but not others. Here we work towards such a framework. By testing predictions derived from general ecological theory, we aim to uncover factors that govern the relative influences of deterministic and stochastic processes. We couple spatiotemporal data on subsurface microbial communities and environmental parameters with metrics and null models of within- and between-community phylogenetic composition. Testing for phylogenetic signal in organismal niches showed that more closely related taxa have more similar habitat associations. Community phylogenetic analyses further showed that ecologically similar taxa coexist to a greater degree than expected by chance. Environmental filtering thus deterministically governs subsurface microbial community composition. More importantly, the influence of deterministic environmental filtering relative to stochastic factors was maximized at both ends of an environmental variation gradient. A stronger role of stochastic factors was, however, supported through analyses of phylogenetic temporal turnover. While phylogenetic turnover was on average faster than expected, most pairwise comparisons were not themselves significantly non-random. The relative influence of deterministic environmental filtering over community dynamics was elevated, however, in the most temporally and spatially variable environments. Our results point to general rules governing the relative influences of stochastic and deterministic processes across micro- and macro-organisms.
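The null-model logic described above can be sketched in a few lines. The standardized effect size of mean pairwise distance (SES-MPD) compares an observed community against random draws of equal richness from the species pool; the toy distance matrix below is invented for illustration and stands in for a phylogenetic distance matrix:

```python
import random
import statistics

def mean_pairwise_distance(taxa, dist):
    """Average distance over all unordered pairs in a community."""
    pairs = [(a, b) for i, a in enumerate(taxa) for b in taxa[i + 1:]]
    return statistics.mean(dist[a][b] for a, b in pairs)

def ses_mpd(community, pool, dist, n_null=999, seed=0):
    """Standardized effect size: (observed - mean(null)) / sd(null),
    where the null distribution comes from random draws of equal
    richness from the pool. Strongly negative values indicate
    clustering of similar taxa, the signature of deterministic
    environmental filtering; values near zero are consistent with
    stochastic assembly."""
    rng = random.Random(seed)
    obs = mean_pairwise_distance(community, dist)
    null = [mean_pairwise_distance(rng.sample(pool, len(community)), dist)
            for _ in range(n_null)]
    return (obs - statistics.mean(null)) / statistics.stdev(null)

# toy example: two clusters of mutually similar taxa; the observed
# community holds one full cluster, so it is maximally clustered
group = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
dist = {x: {y: (1 if group[x] == group[y] else 10)
            for y in group if y != x} for x in group}
ses = ses_mpd(["a", "b", "c"], list(group), dist)
```

Here the observed community attains the minimum possible MPD, so the effect size is negative, i.e. the null model rejects stochastic assembly for this toy case.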
King, W.D.
2000-08-23
As part of the Hanford River Protection Project waste treatment facility design contracted to BNFL, Inc., a sample of Savannah River Site (SRS) Tank 44F waste solution was treated for the removal of technetium (as pertechnetate ion). Interest in treating the SRS sample for Tc removal resulted from the similarity between the Tank 44F supernate composition and Hanford Envelope A supernate solutions. The Tank 44F sample was available as a by-product of tests already conducted at the Savannah River Technology Center (SRTC) as part of the Alternative Salt Disposition Program for treatment of SRS wastes. Testing of the SRS sample resulted in considerable cost savings, since it was not necessary to ship a sample of Hanford supernate to SRS.
Surface plasmon field enhancements in deterministic aperiodic structures.
Shugayev, Roman
2010-11-22
In this paper we analyze the optical properties and plasmonic field enhancements of large aperiodic nanostructures. We introduce an extension of the generalized Ohm's law approach to estimate the electromagnetic properties of Fibonacci, Rudin-Shapiro, cluster-cluster aggregate, and random deterministic clusters. Our results suggest that deterministic aperiodic structures produce field enhancements comparable to random morphologies while offering a better understanding of field localization and improved substrate design controllability. Generalized Ohm's law results for deterministic aperiodic structures are in good agreement with simulations obtained using the discrete dipole method.
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-05-17
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. A method includes depositing a catalyst particle on a surface of a substrate to define a deterministically located position; growing an aligned elongated nanostructure on the substrate, an end of the aligned elongated nanostructure coupled to the substrate at the deterministically located position; coating the aligned elongated nanostructure with a conduit material; removing a portion of the conduit material to expose the catalyst particle; removing the catalyst particle; and removing the elongated nanostructure to define a nanoconduit.
NASA Astrophysics Data System (ADS)
Diebel, F.; Boguslawski, M.; Lučić, Nemanja M.; Jović Savić, Dragana M.; Denz, C.
2015-03-01
Light propagation in structured photonic media covers many fascinating wave phenomena resulting from the band structure of the underlying lattice. Recently, the focus turned towards deterministic aperiodic structures exhibiting distinctive band gap properties. To experimentally study these effects, optical induction of photonic refractive index landscapes turned out to be the method of choice to fabricate these structures. In this contribution, we present a paradigm change of photonic lattice design by introducing a holographic optical induction method based on pixel-like spatially multiplexed single-site nondiffracting Bessel beams. This technique allows realizing a huge class of two-dimensional photonic structures, including deterministic aperiodic golden-angle Vogel spirals, as well as Fibonacci lattices.
Deterministic, Nanoscale Fabrication of Mesoscale Objects
Jr., R M; Shirk, M; Gilmer, G; Rubenchik, A
2004-09-24
Neither LLNL nor any other organization has the capability to perform deterministic fabrication of mm-sized objects with arbitrary, µm-sized, 3-dimensional features with 20-nm-scale accuracy and smoothness. This is particularly true for materials such as high explosives and low-density aerogels. For deterministic fabrication of high-energy-density physics (HEDP) targets, it will be necessary both to fabricate features in a wide variety of materials and to understand and simulate the fabrication process. We continue to investigate, both in experiment and in modeling, the ablation/surface-modification processes that occur with the use of laser pulses near the ablation threshold fluence. During the first two years, we studied ablation of metals, and we used sub-ps laser pulses, because pulses shorter than the electron-phonon relaxation time offer the most precise control of the energy that can be deposited into a metal surface. The use of sub-ps laser pulses also allowed a decoupling of the energy-deposition process from the ensuing movement/ablation of the atoms from the solid, which simplified the modeling. We investigated the ablation of material from copper, gold, and nickel substrates. We combined the power of the 1-D hydrocode ''HYADES'' with state-of-the-art 3-D molecular dynamics simulations (''MDCASK'') in our studies. For FY04, we have stretched ourselves to investigate laser ablation of carbon, including chemically assisted processes. We undertook this research because the energy deposition required for direct sublimation of carbon is much higher than that needed to stimulate the reaction 2C + O2 => 2CO. Thus, extremely fragile carbon aerogels might survive the chemically assisted process more readily than ablation via direct laser sublimation. We had planned to start by studying vitreous carbon and move on to carbon aerogels. We were able to obtain flat, high-quality vitreous carbon, which was easy to work on
Reproducible and deterministic production of aspheres
NASA Astrophysics Data System (ADS)
Leitz, Ernst Michael; Stroh, Carsten; Schwalb, Fabian
2015-10-01
Aspheric lenses are ground in a single-point cutting mode. Subsequently, different iterative polishing methods are applied, followed by aberration measurements on external metrology instruments. For economical production, metrology and correction steps need to be reduced; more deterministic grinding and polishing is mandatory. Single-point grinding is a path-controlled process. The quality of a ground asphere is mainly influenced by the accuracy of the machine. Machine improvements must focus on path accuracy and thermal expansion. Optimized design, materials, and thermal management reduce thermal expansion. The path accuracy can be improved using ISO 230-2 standardized measurements: repeated interferometric measurements over the total travel of all CNC axes in both directions are recorded, and position deviations evaluated in correction tables improve the path accuracy and hence that of the ground surface. Aspheric polishing using a sub-aperture flexible polishing tool is a dwell-time-controlled process. For plano and spherical polishing, the amount of material removed is proportional to pressure, relative velocity, and time (Preston). For the use of flexible tools on aspheres or freeform surfaces, additional nonlinear components are necessary. Satisloh ADAPT calculates a predicted removal function from the lens geometry, tool geometry, and process parameters with FEM. Additionally, the tool's local removal characteristics are determined in a simple test: by oscillating the tool on a plano or spherical sample of the same lens material, a trench is created, and its 3-D profile is measured to calibrate the removal simulation. Remaining aberrations of the desired lens shape can be predicted, reducing iteration and metrology steps.
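The Preston relation mentioned above (removal proportional to pressure, relative velocity, and dwell time) is what makes dwell-time control possible: inverting it yields the dwell-time map that drives the tool. A minimal sketch for the plano/spherical case, with assumed illustrative values (the Preston coefficient and process parameters below are not Satisloh's):

```python
def preston_removal(k_p, pressure, velocity, dwell_time):
    """Preston's law: removal depth = k_p * p * v * t.
    Valid for plano/spherical polishing; flexible tools on aspheres or
    freeforms need additional non-linear terms, as noted above."""
    return k_p * pressure * velocity * dwell_time

def dwell_time_for(target_removal, k_p, pressure, velocity):
    """Invert Preston's law to obtain the dwell time that removes the
    requested depth of material at one point of the surface."""
    return target_removal / (k_p * pressure * velocity)

# assumed values: k_p = 1e-13 m^2/N, 10 kPa contact pressure, 1 m/s
# relative speed, target removal of 0.5 um
t = dwell_time_for(0.5e-6, 1e-13, 1.0e4, 1.0)
```

With these assumed numbers the removal rate is 1 nm/s, so 0.5 µm of material requires a 500 s dwell; repeating this inversion over the whole surface produces the dwell-time map.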
Understanding Vertical Jump Potentiation: A Deterministic Model.
Suchomel, Timothy J; Lamont, Hugh S; Moir, Gavin L
2016-06-01
This review article discusses previous postactivation potentiation (PAP) literature and provides a deterministic model for vertical jump (i.e., squat jump, countermovement jump, and drop/depth jump) potentiation. There are a number of factors that must be considered when designing an effective strength-power potentiation complex (SPPC) focused on vertical jump potentiation. Sport scientists and practitioners must consider the characteristics of the subject being tested and the design of the SPPC itself. Subject characteristics that must be considered when designing an SPPC focused on vertical jump potentiation include the individual's relative strength, sex, muscle characteristics, neuromuscular characteristics, current fatigue state, and training background. Aspects of the SPPC that must be considered for vertical jump potentiation include the potentiating exercise, level and rate of muscle activation, volume load completed, the ballistic or non-ballistic nature of the potentiating exercise, and the rest interval(s) used following the potentiating exercise. Sport scientists and practitioners should design and seek SPPCs that are practical in nature regarding the equipment needed and the rest interval required for a potentiated performance. If practitioners would like to incorporate PAP as a training tool, they must take the athlete training time restrictions into account as a number of previous SPPCs have been shown to require long rest periods before potentiation can be realized. Thus, practitioners should seek SPPCs that may be effectively implemented in training and that do not require excessive rest intervals that may take away from valuable training time. Practitioners may decrease the necessary time needed to realize potentiation by improving their subject's relative strength. PMID:26712510
Deterministic phase retrieval employing spherical illumination
NASA Astrophysics Data System (ADS)
Martínez-Carranza, J.; Falaggis, K.; Kozacki, T.
2015-05-01
Deterministic Phase Retrieval techniques (DPRTs) employ a series of paraxial beam intensities in order to recover the phase of a complex field. These paraxial intensities are usually generated in systems that employ plane-wave illumination, which allows direct processing of the captured intensities with DPRTs for recovering the phase. It has been shown that intensities for DPRTs can also be acquired from systems that use spherical illumination. However, this type of illumination presents a major setback for DPRTs: the captured intensities change size with each position of the detector along the propagation axis. In order to apply the DPRTs, rescaling of the captured intensities has to be applied; this can increase the error sensitivity of the final phase result if it is not carried out properly. In this work, we introduce a novel system based on a Phase Light Modulator (PLM) for capturing the intensities when employing spherical illumination. The proposed optical system enables us to capture the diffraction patterns of under-focus, in-focus, and over-focus intensities. The PLM allows capturing the corresponding intensities without displacing the detector. Moreover, with the proposed optical system we can accurately control the magnification of the captured intensities. Thus, the stack of captured intensities can be used in DPRTs, overcoming the problems related to resizing of the images. To support these claims, numerical experiments show that the phases retrieved with spherical illumination are accurate and comparable with those obtained using plane-wave illumination. We demonstrate that, with the employment of the PLM, the proposed optical system has several advantages: it is compact, the beam size on the detector plane is controlled accurately, and errors arising from mechanical motion can be easily suppressed.
Fukui, Y.; Doskey, P. V.; Environmental Research
1998-06-20
Emissions of nonmethane organic compounds (NMOCs) were measured by a static enclosure technique at a grassland site in the Midwestern United States during the growing seasons over a 2-year period. A mixture of nonmethane hydrocarbons (NMHCs) and oxygenated hydrocarbons (OxHCs) was emitted from the surface at rates exhibiting large seasonal and year-to-year variations. The average emission rate (and standard error) of the total NMOCs around noontime on sunny days during the growing seasons for the 2-year period was 1,300 ± 170 µg m-2 h-1 (mass of the total NMOCs per area of enclosed soil surface per hour) or 5.5 ± 0.9 µg g-1 h-1 (mass of the total NMOCs per mass of dry plant biomass in an enclosure per hour), with about 10% and 70% of the emissions being composed of tentatively identified NMHCs and OxHCs, respectively. Methanol was apparently derived from both the soil and vegetation and exhibited an average emission rate of 460 ± 73 µg m-2 h-1 (1.4 ± 0.2 µg g-1 h-1), which was the largest emission among the NMOCs. The year-to-year variation in the precipitation pattern greatly affected the NMOC emission rates. Emission rates normalized to biomass density exhibited a linear decrease as the growing season progressed. The emission rates of some NMOCs, particularly the OxHCs, from vegetation subjected to hypoxia, frost, and physical stresses were significantly greater than the average values observed at the site. Emissions of monoterpenes (α- and β-pinene, limonene, and myrcene) and cis-3-hexen-1-ol were accelerated during the flowering of the plants and were much greater than those predicted by algorithms that correlated emission rates with temperature. Herbaceous vegetation is estimated to contribute about 40% and 50% of the total NMOC and monoterpene emissions, respectively, in grasslands; the remaining contributions are from woody species within grasslands. Contributions of isoprene emissions from herbaceous vegetation in
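The two reporting bases used above (per enclosed soil area and per dry plant biomass) are linked by the dry biomass density inside the enclosure. A minimal sketch of the conversion; the inferred density is only what the two reported averages jointly imply, not a separately measured value:

```python
def per_biomass_rate(per_area_rate, biomass_density):
    """Convert an areal emission rate (ug m-2 h-1) to a
    biomass-specific rate (ug g-1 h-1), given the dry plant biomass
    density in the enclosure (g m-2)."""
    return per_area_rate / biomass_density

# the reported pair, 1300 ug m-2 h-1 and 5.5 ug g-1 h-1, implies a
# dry biomass density of roughly 1300 / 5.5 g m-2
implied_biomass = 1300 / 5.5
```

The implied density is about 236 g of dry biomass per square meter of enclosed surface, averaged over the measurement campaigns.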
Deredge, Daniel; Li, Jiawen; Johnson, Kenneth A; Wintrode, Patrick L
2016-05-01
New nonnucleoside analogs are being developed as part of a multi-drug regimen to treat hepatitis C viral infections. Particularly promising are inhibitors that bind to the surface of the thumb domain of the viral RNA-dependent RNA polymerase (NS5B). Numerous crystal structures have been solved showing small molecule non-nucleoside inhibitors bound to the hepatitis C viral polymerase, but these structures alone do not define the mechanism of inhibition. Our prior kinetic analysis showed that nonnucleoside inhibitors binding to thumb site-2 (NNI2) do not block initiation or elongation of RNA synthesis; rather, they block the transition from the initiation to elongation, which is thought to proceed with significant structural rearrangement of the enzyme-RNA complex. Here we have mapped the effect of three NNI2 inhibitors on the conformational dynamics of the enzyme using hydrogen/deuterium exchange kinetics. All three inhibitors rigidify an extensive allosteric network extending >40 Å from the binding site, thus providing a structural rationale for the observed disruption of the transition from distributive initiation to processive elongation. The two more potent inhibitors also suppress slow cooperative unfolding in the fingers extension-thumb interface and primer grip, which may contribute their stronger inhibition. These results establish that NNI2 inhibitors act through long range allosteric effects, reveal important conformational changes underlying normal polymerase function, and point the way to the design of more effective allosteric inhibitors that exploit this new information. PMID:27006396
On a class of quantum Turing machine halting deterministically
NASA Astrophysics Data System (ADS)
Liang, Min; Yang, Li
2013-05-01
We consider a subclass of quantum Turing machines (QTMs), named stationary rotational quantum Turing machines (SR-QTMs), which halt deterministically and have deterministic tape head position. A quantum state transition diagram (QSTD) is proposed to describe SR-QTMs. With QSTD, we construct an SR-QTM that is universal for all near-trivial transformations. This indicates that there exists a QTM which is universal for the above subclass. Finally, we show that SR-QTMs are computationally equivalent to ordinary QTMs in the bounded-error setting. Since SR-QTMs have deterministic tape head position and halt deterministically, the halting scheme problem does not arise for this class of QTMs.
Nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2011-08-23
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoreplicant structure coupled to a surface of the substrate.
NASA Astrophysics Data System (ADS)
Herbst, M.; Friborg, T.; Schelde, K.; Jensen, R.; Ringgaard, R.; Vasquez, V.; Thomsen, A. G.; Soegaard, H.
2013-01-01
The atmospheric greenhouse gas (GHG) budget of a restored wetland in western Denmark was established for the years 2009-2011 from eddy covariance measurements of carbon dioxide (CO2) and methane (CH4) fluxes. The water table in the wetland, which was restored in 2002, was unregulated, and the vegetation height was limited through occasional grazing by cattle and grass cutting. The annual net CO2 uptake varied between 195 and 983 g m-2 and the annual net CH4 release varied between 11 and 17 g m-2. In all three years the wetland was a carbon sink and removed between 42 and 259 g C m-2 from the atmosphere. However, in terms of the full annual GHG budget (assuming that 1 g CH4 is equivalent to 25 g CO2 with respect to the greenhouse effect over a time horizon of 100 years) the wetland was a sink in 2009, a source in 2010 and neutral in 2011. Complementary observations of meteorological factors and management activities were used to explain the large inter-annual variations in the full atmospheric GHG budget of the wetland. The largest impact on the annual GHG fluxes, eventually defining their sign, came from site management through changes in grazing duration and animal stocking density. These changes accounted for half of the observed variability in the CO2 fluxes and about two thirds of the variability in CH4 fluxes. An unusually long period of snow cover in 2010 had the second largest effect on the annual CO2 flux, whose interannual variability was larger than that of the CH4 flux. Since integrated CO2 and CH4 flux data from restored wetlands are still very rare, it is concluded that more long-term flux measurements are needed to quantify the effects of ecosystem disturbance, in terms of management activities and exceptional weather patterns, on the atmospheric GHG budget more accurately.
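The sink-or-source bookkeeping described above reduces to a single CO2-equivalent sum: CO2 uptake enters negatively and CH4 release is weighted by its 100-year GWP of 25, as in the study. The year pairings below are illustrative extremes taken from the reported ranges, not the actual annual pairs:

```python
GWP_CH4 = 25  # 100-yr global warming potential used in the study

def ghg_budget_co2eq(co2_uptake_g, ch4_release_g):
    """Net greenhouse-gas budget in g CO2-equivalent m-2 yr-1.
    Negative = net sink, positive = net source: CO2 uptake enters
    with a minus sign, CH4 release is scaled by its GWP."""
    return -co2_uptake_g + GWP_CH4 * ch4_release_g

# bracketing the reported ranges (195-983 g CO2, 11-17 g CH4):
best = ghg_budget_co2eq(983, 11)   # strong CO2 sink, low CH4 year
worst = ghg_budget_co2eq(195, 17)  # weak CO2 sink, high CH4 year
```

Pairing the extremes reproduces the sign pattern reported above: one combination is a clear CO2-equivalent sink, the other a source, and intermediate pairings can cancel to near neutrality.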
Structural deterministic safety factors selection criteria and verification
NASA Technical Reports Server (NTRS)
Verderaime, V.
1992-01-01
Though current deterministic safety factors are arbitrarily and unaccountably specified, their ratios are rooted in the probability distributions of resistive and applied stresses. This study approached the deterministic method from a probabilistic concept, leading to a more systematic and coherent philosophy and criterion for designing more uniform and reliable high-performance structures. The deterministic method was noted to consist of three safety factors: a standard deviation multiplier for the applied stress distribution; a K-factor for the A- or B-basis material ultimate stress; and the conventional safety factor to ensure that the applied stress does not operate in the inelastic zone of metallic materials. The conventional safety factor is specifically defined as the ratio of ultimate to yield stresses. A deterministic safety index of the combined safety factors was derived, from which the corresponding reliability proved that the deterministic method is not reliability sensitive. The bases for selecting safety factors are presented and verification requirements are discussed. The suggested deterministic approach is applicable to all NASA, DOD, and commercial high-performance structures under static stresses.
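The probabilistic reading of deterministic factors can be sketched with the standard first-order reliability index: for normally distributed applied stress S and resistance R, β = (μ_R − μ_S)/√(σ_R² + σ_S²) and P(failure) = Φ(−β). This is the textbook stress-strength formulation, not the report's specific derivation, and the stress numbers below are purely illustrative:

```python
import math

def reliability_index(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability index beta for normally distributed
    resistance R and applied stress S; failure occurs when S > R."""
    return (mu_r - mu_s) / math.sqrt(sigma_r ** 2 + sigma_s ** 2)

def failure_probability(beta):
    """P(failure) = Phi(-beta), evaluated via the complementary
    error function."""
    return 0.5 * math.erfc(beta / math.sqrt(2))

# illustrative numbers (arbitrary stress units, not from the report):
# resistance 100 +/- 8, applied stress 60 +/- 6
beta = reliability_index(100, 8, 60, 6)
pf = failure_probability(beta)
```

For these numbers β = 4, a failure probability of a few in a hundred thousand; varying the deterministic factors while tracking β is what exposes whether a design method is reliability sensitive.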
Single Ion Implantation and Deterministic Doping
Schenkel, Thomas
2010-06-11
The presence of single atoms, e.g. dopant atoms, in sub-100 nm scale electronic devices can affect the device characteristics, such as the threshold voltage of transistors, or the sub-threshold currents. Fluctuations in the number of dopant atoms thus pose a complication for transistor scaling. In a complementary view, new opportunities emerge when novel functionality can be implemented in devices deterministically doped with single atoms. The grand prize of the latter might be a large-scale quantum computer, where quantum bits (qubits) are encoded e.g. in the spin states of electrons and nuclei of single dopant atoms in silicon, or in color centers in diamond. Both the possible detrimental effects of dopant fluctuations and single-atom device ideas motivate the development of reliable single-atom doping techniques, which are the subject of this chapter. Single-atom doping can be approached with top-down and bottom-up techniques. Top-down refers to the placement of dopant atoms into a more or less structured matrix environment, like a transistor in silicon. Bottom-up refers to approaches that introduce single dopant atoms during the growth of the host matrix, e.g. by directed self-assembly and scanning-probe-assisted lithography. Bottom-up approaches are discussed in Chapter XYZ. Since the late 1960's, ion implantation has been a widely used technique to introduce dopant atoms into silicon and other materials in order to modify their electronic properties. It works particularly well in silicon, since the damage to the crystal lattice induced by ion implantation can be repaired by thermal annealing. In addition, the introduced dopant atoms can be incorporated with high efficiency into lattice positions in the silicon host crystal, which makes them electrically active. This is not the case for e.g. diamond, which makes ion implantation doping to engineer the electrical properties of diamond, especially for n-type doping, much harder than for silicon. Ion
Xia, J.; Franseen, E.K.; Miller, R.D.; Weis, T.V.
2004-01-01
We successfully applied deterministic deconvolution to real ground-penetrating radar (GPR) data by using the source wavelet that was generated in and transmitted through air as the operator. The GPR data were collected with 400-MHz antennas on a bench adjacent to a cleanly exposed quarry face. The quarry site is characterized by horizontally bedded carbonate strata with shale partings. In order to provide ground truth for this deconvolution approach, 23 conductive rods were drilled into the quarry face at key locations. The steel rods provided critical information for: (1) correlation between reflections on GPR data and geologic features exposed in the quarry face, (2) GPR resolution limits, (3) accuracy of velocities calculated from common midpoint data, and (4) identifying any multiples. Comparing the results of deconvolved data with non-deconvolved data demonstrates the effectiveness of deterministic deconvolution in low dielectric-loss media for increased accuracy of velocity models (improved at least 10-15% in our study after deterministic deconvolution), increased vertical and horizontal resolution of specific geologic features, and more accurate representation of geologic features as confirmed from detailed study of the adjacent quarry wall. © 2004 Elsevier B.V. All rights reserved.
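The core operation described here, removing the air-wave source wavelet from each trace, can be sketched as a frequency-domain spectral division with water-level stabilization. This is an illustrative implementation only, not the authors' exact processing flow; the function name and the water-level parameter are assumptions:

```python
import numpy as np

def deterministic_deconv(trace, wavelet, water_level=1e-2):
    """Frequency-domain deterministic deconvolution: divide the trace
    spectrum by the (air-wave) source-wavelet spectrum, stabilized with
    a water level so notches in the wavelet spectrum do not blow up."""
    n = len(trace)
    T = np.fft.rfft(trace, n)
    W = np.fft.rfft(wavelet, n)
    # Clamp the wavelet amplitude spectrum from below (water level).
    Wmag = np.maximum(np.abs(W), water_level * np.abs(W).max())
    return np.fft.irfft(T * np.conj(W) / Wmag**2, n)
```

Deconvolving a trace that contains only the wavelet itself should collapse it toward a band-limited spike at zero lag, which is the sanity check usually run before applying the operator to field data.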
NASA Astrophysics Data System (ADS)
Mamadou, Ossenatou; Gourlez de la Motte, Louis; De Ligne, Anne; Heinesch, Bernard; Aubinet, Marc
2016-04-01
Although widely used to measure CO2 and other gas fluxes, the eddy covariance technique still needs methodological improvements. This research focuses on high-frequency loss corrections, which are especially important when using a closed-path infrared gas analyzer. We compared three approaches to implementing these corrections for CO2 fluxes and evaluated their impact on the carbon balance at the Dorinne Terrestrial Observatory (DTO), an intensively grazed grassland site in Belgium. The carbon balance at DTO is also the object of a separate analysis (Gourlez de la Motte et al., Geophysical Research Abstracts, Vol. 18, EGU2016-6813-1, 2016). In the first approach, the computation of correction factors was based on the measured sensible heat cospectra ('local' cospectra), whereas the other two were based on theoretical models (Kaimal et al., 1972). The correction approaches were validated by comparing the nighttime eddy covariance CO2 fluxes corrected with each approach against in situ soil respiration measurements. We found that the local cospectra differed from the Kaimal theoretical shape, appearing less peaked in the inertial subrange, even though the site could not be considered 'difficult' (i.e., it is fairly flat and homogeneous, with low vegetation and sufficient measurement height). This difference greatly affected the correction factor, especially for night fluxes. Night fluxes measured by eddy covariance were found to be in good agreement with in situ soil respiration measurements when corrected with local cospectra, and to be overestimated when corrected with Kaimal cospectra. As the difference between correction factors was larger in stable than in unstable conditions, this acts as a selective systematic error and has an important impact on annual fluxes. On the basis of a 4-year average at DTO, the errors reach 71-150 g C m-2 y-1 for net ecosystem exchange (NEE), 280-562 g C m-2 y-1 for total ecosystem respiration (TER) and 209-412 g C m-2 y-1 for gross primary productivity (GPP
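The correction factor being compared is, in essence, the ratio of the unattenuated cospectral integral to the integral filtered by the measurement system's transfer function. A minimal sketch follows; the first-order low-pass transfer function and its time constant are assumptions for illustration, whereas the study compares measured ('local') versus Kaimal model cospectra:

```python
import numpy as np

def correction_factor(freq, cospec, tau=0.1):
    """High-frequency loss correction factor for eddy covariance:
    F = (integral of Co df) / (integral of H(f) * Co df),
    with H(f) = 1 / (1 + (2*pi*f*tau)^2) an assumed first-order
    low-pass response of time constant tau (s)."""
    H = 1.0 / (1.0 + (2.0 * np.pi * freq * tau) ** 2)
    trapezoid = lambda y: np.sum((y[1:] + y[:-1]) * np.diff(freq)) / 2.0
    return trapezoid(cospec) / trapezoid(H * cospec)

# Example with a Kaimal-like model cospectrum shape (illustrative).
f = np.linspace(0.001, 10.0, 2000)
co = f / (1.0 + 5.3 * f) ** (7.0 / 4.0)
F = correction_factor(f, co)  # F > 1: measured fluxes are scaled up by F
```

Because a cospectrum that is less peaked in the inertial subrange carries more weight at high frequencies, it yields a larger F, which is why the local versus Kaimal choice matters most for stable, nighttime conditions.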
Deterministic assembly processes govern bacterial community structure in the Fynbos, South Africa.
Moroenyane, I; Chimphango, S B M; Wang, J; Kim, H-K; Adams, Jonathan Miles
2016-08-01
The Mediterranean Fynbos vegetation of South Africa is well known for its high levels of diversity, endemism, and the existence of very distinct plant communities on different soil types. Studies have documented the broad taxonomic classification and diversity patterns of soil microbial diversity, but none has focused on the community assembly processes. We hypothesised that bacterial phylogenetic community structure in the Fynbos is largely governed by deterministic processes. We sampled soils in four Fynbos vegetation types and examined bacterial communities on the Illumina HiSeq platform with the 16S rRNA gene marker. UniFrac analysis showed that the community clustered strongly by vegetation type, suggesting a history of evolutionary specialisation in relation to habitats or plant communities. The standardised beta mean nearest taxon distance (ses.βNTD) index showed no association with vegetation type. However, the overall phylogenetic signal indicates that distantly related OTUs do tend to co-occur. Both NTI (nearest taxon index) and ses.βNTD deviated significantly from null models, indicating that deterministic processes were important in the assembly of bacterial communities. Furthermore, ses.βNTD was significantly higher than null expectations, indicating that co-occurrence of related bacterial lineages (over-dispersion in phylogenetic beta diversity) is determined by the differences in environmental conditions among the sites, even though the co-occurrence pattern did not correlate with any measured environmental parameter, except for a weak correlation with soil texture. We suggest that in the Fynbos there are frequent shifts of niches by bacterial lineages, which then become constrained and evolutionarily conserved in their new environments. Overall, this study sheds light on the relative roles of both deterministic and neutral processes in governing bacterial communities in the Fynbos. It seems that deterministic processes play a major
Kalita, Anamika; Hussain, Sameer; Malik, Akhtar Hussain; Barman, Ujjwol; Goswami, Namami; Iyer, Parameswar Krishnan
2016-09-28
A new derivative of naphthalene diimide (NDMI) was synthesized that displayed optical, electrical, and visual changes exclusively for the most widespread nitroexplosive and highly water-soluble toxicant picric acid (PA) due to strong π-π interactions, dipole-charge interaction, and a favorable ground state electron transfer process facilitated by Coulombic attraction. The sensing mechanism and interaction between NDMI and PA are demonstrated via X-ray diffraction analysis, (1)H NMR studies, cyclic voltammetry, UV-visible/fluorescence spectroscopy, and lifetime measurements. The single crystal X-ray structure of NDMI revealed the formation of a self-assembled crystalline network assisted by noncovalent C-H···I interactions that is disrupted upon introducing PA as a result of anion exchange and strong π-π stacking between NDMI and PA. Morphological studies of NDMI displayed large numbers of single crystalline microrods along with some three-dimensional (3D) daisy-like structures, which were fabricated on an Al-coated glass substrate to construct a low-cost two-terminal sensor device for realizing vapor mode detection of PA at room temperature and under ambient conditions. Furthermore, an economical and portable electronic prototype was developed for visual and on-site detection of PA vapors under exceptionally realistic conditions.
Prat, Irene; Company, Anna; Postils, Verònica; Ribas, Xavi; Que, Lawrence; Luis, Josep M; Costas, Miquel
2013-05-17
A detailed mechanistic study of the hydroxylation of alkane C-H bonds using H2O2 by a family of mononuclear nonheme iron catalysts with the formula [Fe(II)(CF3SO3)2(L)] is described, in which L is a tetradentate ligand containing a triazacyclononane tripod and a pyridine ring bearing different substituents at the α and γ positions, which tune the electronic or steric properties of the corresponding iron complexes. Two inequivalent cis-labile exchangeable sites, occupied by triflate ions, complete the octahedral iron coordination sphere. The C-H hydroxylation mediated by this family of complexes takes place with retention of configuration. Oxygen atoms from water are incorporated into hydroxylated products, and the extent of this incorporation depends in a systematic manner on the nature of the catalyst and the substrate. Mechanistic probes and isotopic analyses, in combination with detailed density functional theory (DFT) calculations, provide strong evidence that C-H hydroxylation is performed by highly electrophilic [Fe(V)(O)(OH)L] species through a concerted asynchronous mechanism, involving homolytic breakage of the C-H bond, followed by rebound of the hydroxyl ligand. The [Fe(V)(O)(OH)L] species can exist in two tautomeric forms, differing in the position of the oxo and hydroxide ligands. Isotopic-labeling analysis shows that the relative reactivities of the two tautomeric forms are sensitively affected by the α substituent of the pyridine, and this reactivity behavior is rationalized by computational methods.
Ergodicity of Truncated Stochastic Navier Stokes with Deterministic Forcing and Dispersion
NASA Astrophysics Data System (ADS)
Majda, Andrew J.; Tong, Xin T.
2016-10-01
Turbulence in idealized geophysical flows is a very rich and important topic. The anisotropic effects of explicit deterministic forcing, dispersive effects from rotation due to the β-plane and F-plane, and topography together with random forcing all combine to produce a remarkable number of realistic phenomena. These effects have been studied through careful numerical experiments in truncated geophysical models. Important results include transitions between coherent jets and vortices, and direct and inverse turbulence cascades as parameters are varied; it is a contemporary challenge to explain these diverse statistical predictions. Here we contribute to these issues by proving with full mathematical rigor that for any values of the deterministic forcing, the β- and F-plane effects and topography, with minimal stochastic forcing, there is geometric ergodicity for any finite Galerkin truncation. This means that there is a unique smooth invariant measure which attracts all statistical initial data at an exponential rate. In particular, this rigorous statistical theory guarantees that there are no bifurcations to multiple stable and unstable statistical steady states as geophysical parameters are varied, in contrast to claims in the applied literature. The proof utilizes a new statistical Lyapunov function to account for enstrophy exchanges between the statistical mean and the variance fluctuations due to the deterministic forcing. It also requires careful proofs of hypoellipticity with geophysical effects and uses geometric control theory to establish reachability. To illustrate the necessity of these conditions, a two-dimensional example is developed which has the square of the Euclidean norm as the Lyapunov function and is hypoelliptic with nonzero noise forcing, yet fails to be reachable or ergodic.
Elliptical quantum dots as on-demand single photons sources with deterministic polarization states
Teng, Chu-Hsiang; Demory, Brandon; Ku, Pei-Cheng; Zhang, Lei; Hill, Tyler A.; Deng, Hui
2015-11-09
In quantum information, control of the single photon's polarization is essential. Here, we demonstrate single photon generation in a pre-programmed and deterministic polarization state, on a chip-scale platform, utilizing site-controlled elliptical quantum dots (QDs) synthesized by a top-down approach. The polarization from the QD emission is found to be linear with a high degree of linear polarization and parallel to the long axis of the ellipse. Single photon emission with orthogonal polarizations is achieved, and the dependence of the degree of linear polarization on the QD geometry is analyzed.
Estimating the epidemic threshold on networks by deterministic connections
Li, Kezan; Zhu, Guanghu; Fu, Xinchu; Small, Michael
2014-12-15
For many epidemic networks some connections between nodes are treated as deterministic, while the remainder are random and have different connection probabilities. By applying spectral analysis to several constructed models, we find that one can estimate the epidemic thresholds of these networks by investigating information from only the deterministic connections. In these models, generic nonuniform stochastic connections and heterogeneous community structure are also taken into account. The estimation of epidemic thresholds is achieved via inequalities with upper and lower bounds, which are found to be in very good agreement with numerical simulations. Since deterministic connections are easier to detect than stochastic ones, this work provides a feasible and effective method to estimate the epidemic thresholds in real epidemic networks.
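The spectral quantity underlying such bounds is the spectral radius of the (deterministic) adjacency matrix: in the classical mean-field SIS model, the epidemic threshold is its reciprocal. A minimal sketch, assuming the standard tau_c = 1/ρ(A) estimate rather than the paper's specific inequalities:

```python
import numpy as np

def epidemic_threshold(adj):
    """Mean-field SIS epidemic threshold estimate tau_c = 1 / rho(A),
    where rho(A) is the spectral radius of the adjacency matrix.
    Here A would be built from the deterministic connections only."""
    eigvals = np.linalg.eigvals(np.asarray(adj, dtype=float))
    return 1.0 / float(np.max(np.abs(eigvals)))

# Deterministic backbone: a star graph on 5 nodes (hub plus 4 leaves),
# whose spectral radius is sqrt(4) = 2, giving tau_c = 0.5.
A = np.zeros((5, 5))
A[0, 1:] = A[1:, 0] = 1.0
tau_c = epidemic_threshold(A)
```

Bounding ρ(A) from above and below, as the paper does with its inequalities, then brackets tau_c between the corresponding reciprocals.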
DETERMINISTIC TRANSPORT METHODS AND CODES AT LOS ALAMOS
J. E. MOREL
1999-06-01
The purposes of this paper are to: Present a brief history of deterministic transport methods development at Los Alamos National Laboratory from the 1950s to the present; Discuss the current status and capabilities of deterministic transport codes at Los Alamos; and Discuss future transport needs and possible future research directions. Our discussion of methods research necessarily includes only a small fraction of the total research actually done. The works that have been included represent a very subjective choice on the part of the author that was strongly influenced by his personal knowledge and experience. The remainder of this paper is organized in four sections: the first relates to deterministic methods research performed at Los Alamos, the second relates to production codes developed at Los Alamos, the third relates to the current status of transport codes at Los Alamos, and the fourth relates to future research directions at Los Alamos.
Deterministic sensing matrices in compressive sensing: a survey.
Nguyen, Thu L N; Shin, Yoan
2013-01-01
Compressive sensing is a sampling method which provides a new approach to efficient signal compression and recovery by exploiting the fact that a sparse signal can be suitably reconstructed from very few measurements. One of the main concerns in compressive sensing is the construction of the sensing matrices. While random sensing matrices have been widely studied, only a few deterministic sensing matrices have been considered. These matrices are highly desirable because their structure allows fast implementation with reduced storage requirements. In this paper, a survey of deterministic sensing matrices for compressive sensing is presented. We introduce a basic problem in compressive sensing and some disadvantages of random sensing matrices. Some recent results on the construction of deterministic sensing matrices are discussed.
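The basic problem is to recover a k-sparse x from m ≪ n measurements y = Φx. A sketch follows using Orthogonal Matching Pursuit as a generic recovery routine and a few rows of the DCT as a stand-in structured sensing matrix; both the matrix choice and OMP are illustrative, not constructions from the survey:

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily recover a sparse x from
    y = Phi @ x by picking the column most correlated with the residual
    and re-solving least squares on the growing support."""
    residual, support = y.astype(float).copy(), []
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x

# A structured, deterministically chosen sensing matrix: every 4th row
# of an n x n DCT-like matrix, columns normalized (illustrative only).
n, m, k = 64, 16, 3
C = np.cos(np.pi * np.outer(np.arange(n) + 0.5, np.arange(n)) / n).T
Phi = C[::n // m] / np.linalg.norm(C[::n // m], axis=0)
x_true = np.zeros(n); x_true[[3, 20, 41]] = [1.0, -2.0, 0.5]
x_hat = omp(Phi, Phi @ x_true, k)  # sparse estimate from 16 measurements
```

A deterministic Φ lets both sides regenerate the matrix from a rule instead of storing or transmitting mn random entries, which is the storage/implementation advantage the survey highlights.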
Inherent Conservatism in Deterministic Quasi-Static Structural Analysis
NASA Technical Reports Server (NTRS)
Verderaime, V.
1997-01-01
The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.
Deterministic and efficient quantum cryptography based on Bell's theorem
Chen Zengbing; Pan Jianwei; Zhang Qiang; Bao Xiaohui; Schmiedmayer, Joerg
2006-05-15
We propose a double-entanglement-based quantum cryptography protocol that is both efficient and deterministic. The proposal uses photon pairs entangled both in polarization and in time degrees of freedom; each measurement in which both of the two communicating parties register a photon establishes one and only one perfect correlation, and thus deterministically creates a key bit. Eavesdropping can be detected by violation of local realism. A variation of the protocol offers higher security under individual attacks, similar to the six-state protocol. Our scheme allows a robust implementation under current technology.
Cassidy, Michael R.; Roberts, J. Scott; Bird, Thomas D.; Steinbart, Ellen J.; Cupples, L. Adrienne; Chen, Clara A.; Linnenbringer, Erin; Green, Robert C.
2008-01-01
Background Genetic risk for Alzheimer’s disease (AD) may be conferred by the susceptibility polymorphism apolipoprotein E (APOE), where the ε4 allele increases the risk of developing late-onset Alzheimer’s disease but is not a definitive predictor of the disease, or by autosomal dominant mutations (e.g., the presenilins), which almost inevitably result in early-onset familial Alzheimer’s disease. The purpose of this study was to compare the psychological impact of using these two different types of genetic information to disclose genetic risk for AD to family members of affected patients. Methods Data were compared from two separate protocols. The Risk Evaluation and Education for Alzheimer’s Disease (REVEAL) Study is a randomized, multi-site clinical trial that evaluated the impact of susceptibility testing for Alzheimer’s disease with APOE in 101 adult children of Alzheimer’s disease patients. A separate study, conducted at the University of Washington, assessed the impact of deterministic genetic testing by disclosing presenilin-1, presenilin-2, or TAU genotype to 22 individuals at risk for familial Alzheimer’s disease or frontotemporal dementia. In both protocols, participants received genetic counseling and completed the Impact of Event Scale (IES), a measure of test-specific distress. Scores were analyzed at the time point closest to one year post-disclosure at which IES data were available. The role of genetic test result (positive vs. negative) and type of genetic testing (deterministic vs. susceptibility) in predicting log-transformed IES scores was assessed with linear regression, controlling for age, gender, and time from disclosure. Results Subjects from the REVEAL Study who learned that they were positive for the susceptibility gene APOE ε4+ experienced similar, low levels of test-specific distress compared to those who received positive results of deterministic testing in the University of Washington study (p= 0.78). APOE ε4
Cummins, David J; Espada, Alfonso; Novick, Scott J; Molina-Martin, Manuel; Stites, Ryan E; Espinosa, Juan Felix; Broughton, Howard; Goswami, Devrishi; Pascal, Bruce D; Dodge, Jeffrey A; Chalmers, Michael J; Griffin, Patrick R
2016-06-21
Hydrogen/deuterium exchange coupled with mass spectrometry (HDX-MS) is an information-rich biophysical method for the characterization of protein dynamics. Successful applications of differential HDX-MS include the characterization of protein-ligand binding. A single differential HDX-MS data set (protein ± ligand) is often comprised of more than 40 individual HDX-MS experiments. To eliminate laborious manual processing of samples, and to minimize random and gross errors, automated systems for HDX-MS analysis have become routine in many laboratories. However, an automated system, while less prone to random errors introduced by human operators, may have systematic errors that go unnoticed without proper detection. Although the application of automated (and manual) HDX-MS has become common, there are only a handful of studies reporting the systematic evaluation of the performance of HDX-MS experiments, and no reports have been published describing a cross-site comparison of HDX-MS experiments. Here, we describe an automated HDX-MS platform that operates with a parallel, two-trap, two-column configuration that has been installed in two remote laboratories. To understand the performance of the system both within and between laboratories, we have designed and completed a test-retest repeatability study for differential HDX-MS experiments implemented at each of two laboratories, one in Florida and the other in Spain. This study provided sufficient data to do both within and between laboratory variability assessments. Initial results revealed a systematic run-order effect within one of the two systems. Therefore, the study was repeated, and this time the conclusion was that the experimental conditions were successfully replicated with minimal systematic error. PMID:27224086
Rivera-Velasquez, Maria Fernanda; Fallico, Carmine; Guerra, Ignazio; Straface, Salvatore
2013-12-01
In this article we consider the methods of deterministic and probabilistic risk analysis regarding the presence of chemical contaminants in soil, water and air, with a broader meaning than usual for the latter, as we extended the probabilistic treatment to the parameters that influence the transport to a greater extent, in particular hydraulic conductivity and partition coefficient. These parameters, to which only one value is assigned, are considered here as random variables. The objective of the study reported herein was to demonstrate that application of the probabilistic method of risk assessment is preferable to the use of the deterministic method. Both methods yield contaminant removal levels that will reduce adverse effects on human health and the environment, but results from the deterministic method are typically more conservative than necessary, and are thus more costly to achieve. In addition, we found it essential to consider the importance of random variables (the parameters influencing the flow and the transport), such as the hydraulic conductivity and the partition coefficient, when assessing health risks. Both methodologies of health risk analysis, deterministic and probabilistic, were applied to a site in southern Italy, contaminated by heavy metals. The results obtained confirm the purposes of this study. PMID:24293229
NASA Astrophysics Data System (ADS)
Carey, S. K.; Drewitt, G. B.
2013-12-01
The oil sands mining industry in Canada has made a commitment to restore disturbed areas to an equivalent capability to that which existed prior to mining. Certification requires successful reclamation, which can in part be evaluated through long-term ecosystem studies. A reclamation site, informally named South Bison Hill (SBH) has had growing season water, energy and carbon fluxes measured via the eddy covariance method for 10 years since establishment. SBH was capped with a 0.2 m peat-glacial till mixture overlying 0.8 m of reworked glacial till soil. The site was seeded to barley cultivar (Hordeum spp.) in the summer of 2002 and later planted to white spruce (Picea glauca) and aspen (Populus spp.) in the summer/fall of 2004. Since 2007, the major species atop SBH has been aspen, and by 2012 was on average ~ 4 m in height. Climatically, mean growing temperature did not vary greatly, yet there was considerable difference in rainfall among years, with 2012 having the greatest rainfall at 321 mm, whereas 2011 and 2007 were notably dry at 180 and 178 mm, respectively. The partitioning of energy varied among years, but the fraction of latent heat as a portion of net radiation increased with the establishment of aspen, along with concomitant increases in LAI and growing season net ecosystem exchange (NEE). Peat growing season ET was smallest in 2004 at 2.3 mm/d and greatest in 2010 at ~3.9 mm/d. ET rates showed a marked increase in 2008 corresponding with the increase in LAI attributed to the aspen cover. Since the establishment of a surface cover and vegetation in 2003, SBH has been a growing season sink for carbon dioxide. Values of NEE follow similar patterns to those of ET, with values gradually becoming more negative (greater carbon uptake) as the aspen forest established. Comparison with other disturbed and undisturbed boreal aspen stands show that SBH exhibits similar water, energy and carbon flux patterns during the growing season.
Deterministic entanglement distillation for secure double-server blind quantum computation
Sheng, Yu-Bo; Zhou, Lan
2015-01-01
Blind quantum computation (BQC) provides an efficient method for the client who does not have enough sophisticated technology and knowledge to perform universal quantum computation. The single-server BQC protocol requires the client to have some minimum quantum ability, while the double-server BQC protocol makes the client's device completely classical, resorting to the pure and clean Bell state shared by two servers. Here, we provide a deterministic entanglement distillation protocol in a practical noisy environment for the double-server BQC protocol. This protocol can obtain the pure maximally entangled Bell state, and the success probability can reach 100% in principle. The distilled maximally entangled states can be retained to perform the BQC protocol subsequently. The parties who perform the distillation protocol do not need to exchange classical information, and they learn nothing from the client. This makes the protocol unconditionally secure and suitable for future BQC protocols. PMID:25588565
Not Available
1991-03-01
This report summarizes the results of a deterministic assessment of earthquake ground motions at the Savannah River Site (SRS). The purpose of this study is to assist the Environmental Sciences Section of the Savannah River Laboratory in reevaluating the design basis earthquake (DBE) ground motion at SRS using approaches defined in Appendix A to 10 CFR Part 100. This work is in support of the Seismic Engineering Section's Seismic Qualification Program for reactor restart.
From deterministic cellular automata to coupled map lattices
NASA Astrophysics Data System (ADS)
García-Morales, Vladimir
2016-07-01
A general mathematical method is presented for the systematic construction of coupled map lattices (CMLs) out of deterministic cellular automata (CAs). The entire CA rule space is addressed by means of a universal map for CAs that we have recently derived and that is not dependent on any freely adjustable parameters. The CMLs thus constructed are termed real-valued deterministic cellular automata (RDCA) and encompass all deterministic CAs in rule space in the asymptotic limit κ → 0 of a continuous parameter κ. Thus, RDCAs generalize CAs in such a way that they constitute CMLs when κ is finite and nonvanishing. In the limit κ → ∞ all RDCAs are shown to exhibit a global homogeneous fixed point that attracts all initial conditions. A new bifurcation is discovered for RDCAs and its location is exactly determined from the linear stability analysis of the global quiescent state. In this bifurcation, fuzziness gradually begins to intrude in a purely deterministic CA-like dynamics. The mathematical method presented makes it possible to gain insight into some highly nontrivial behavior found after the bifurcation.
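For readers unfamiliar with CMLs, the object being constructed is a lattice of real-valued cells updated synchronously by a local map plus nearest-neighbor coupling. The sketch below uses the textbook diffusively coupled logistic lattice, not the paper's universal CA map, so the local map f and the coupling strength are assumptions:

```python
import numpy as np

def cml_step(x, eps=0.3, r=3.9):
    """One synchronous update of a diffusively coupled logistic lattice:
    x'(i) = (1-eps)*f(x(i)) + (eps/2)*(f(x(i-1)) + f(x(i+1))),
    with periodic boundaries and local map f(u) = r*u*(1-u)."""
    f = r * x * (1.0 - x)
    return (1.0 - eps) * f + 0.5 * eps * (np.roll(f, 1) + np.roll(f, -1))

# Iterate a 100-cell lattice from random initial conditions.
rng = np.random.default_rng(0)
x = rng.random(100)
for _ in range(500):
    x = cml_step(x)
```

A CA is recovered conceptually when the cell states are forced onto a discrete alphabet; the paper's κ parameter interpolates between that discrete limit and the continuous dynamics above.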
Deterministic retrieval of complex Green's functions using hard X rays.
Vine, D J; Paganin, D M; Pavlov, K M; Uesugi, K; Takeuchi, A; Suzuki, Y; Yagi, N; Kämpfe, T; Kley, E-B; Förster, E
2009-01-30
A massively parallel deterministic method is described for reconstructing shift-invariant complex Green's functions. As a first experimental implementation, we use a single phase contrast x-ray image to reconstruct the complex Green's function associated with Bragg reflection from a thick perfect crystal. The reconstruction is in excellent agreement with a classic prediction of dynamical diffraction theory. PMID:19257417
A Unit on Deterministic Chaos for Student Teachers
ERIC Educational Resources Information Center
Stavrou, D.; Assimopoulos, S.; Skordoulis, C.
2013-01-01
A unit aiming to introduce pre-service teachers of primary education to the limited predictability of deterministic chaotic systems is presented. The unit is based on a commercial chaotic pendulum system connected with a data acquisition interface. The capabilities and difficulties in understanding the notion of limited predictability of 18…
Risk-based versus deterministic explosives safety criteria
Wright, R.E.
1996-12-01
The Department of Defense Explosives Safety Board (DDESB) is actively considering ways to apply risk-based approaches in its decision-making processes. As such, an understanding of the impact of converting to risk-based criteria is required. The objectives of this project are to examine the benefits and drawbacks of risk-based criteria and to define the impact of converting from deterministic to risk-based criteria. Conclusions will be couched in terms that allow meaningful comparisons of deterministic and risk-based approaches. To this end, direct comparisons of the consequences and impacts of both deterministic and risk-based criteria at selected military installations are made. Deterministic criteria used in this report are those in DoD 6055.9-STD, 'DoD Ammunition and Explosives Safety Standard.' Risk-based criteria selected for comparison are those used by the government of Switzerland, 'Technical Requirements for the Storage of Ammunition (TLM 75).' The risk-based criteria used in Switzerland were selected because they have been successfully applied for over twenty-five years.
Deterministic dense coding and faithful teleportation with multipartite graph states
Huang, C.-Y.; Yu, I-C.; Lin, F.-L.; Hsu, L.-Y.
2009-05-15
We propose schemes to perform the deterministic dense coding and faithful teleportation with multipartite graph states. We also find the sufficient and necessary condition of a viable graph state for the proposed schemes. That is, for the associated graph, the reduced adjacency matrix of the Tanner-type subgraph between senders and receivers should be invertible.
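The viability condition is an invertibility test on a binary matrix. Assuming the natural setting for graph states, arithmetic over GF(2), the check is Gaussian elimination mod 2; this is an illustrative implementation of that test, not code from the paper:

```python
import numpy as np

def gf2_invertible(A):
    """Return True if the binary matrix A is invertible over GF(2),
    via Gaussian elimination mod 2. For the proposed schemes, A would
    be the reduced adjacency matrix of the Tanner-type subgraph
    between senders and receivers."""
    A = np.array(A, dtype=np.uint8) % 2
    n = A.shape[0]
    if A.shape[1] != n:
        return False  # non-square: cannot be invertible
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r, col]), None)
        if pivot is None:
            return False  # no pivot in this column: singular over GF(2)
        A[[col, pivot]] = A[[pivot, col]]          # swap pivot row up
        for r in range(n):
            if r != col and A[r, col]:
                A[r] ^= A[col]                      # eliminate mod 2
    return True
```

For example, the 2x2 all-ones matrix is singular over GF(2) (its rows coincide), while any permutation matrix passes the test.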
Ivanka, Paskaleva; Mihaela, Kouteva; Franco, Vaccari; Panza, Giuliano F.
2008-07-08
The earthquake record and the Code for design and construction in seismic regions in Bulgaria have shown that the territory of the Republic of Bulgaria is exposed to a high seismic risk due to local shallow and regional strong intermediate-depth seismic sources. The available strong-motion database is quite limited, and therefore not at all representative of the real hazard. The application of the neo-deterministic seismic hazard assessment procedure to two main Bulgarian cities has supplied a significant database of synthetic strong motions for the target sites, applicable for earthquake engineering purposes. The main advantage of the applied deterministic procedure is the possibility of taking into account, simultaneously and consistently, the contributions to the earthquake ground motion at the target sites of the seismic source and of the seismic wave propagation in the traversed media. We discuss in this study the results of some recent applications of the neo-deterministic seismic microzonation procedure to the cities of Sofia and Russe. The validation of the theoretically modeled seismic input against Eurocode 8 and the few records available at these sites is discussed.
Exchange fluctuation theorem for correlated quantum systems.
Jevtic, Sania; Rudolph, Terry; Jennings, David; Hirono, Yuji; Nakayama, Shojun; Murao, Mio
2015-10-01
We extend the exchange fluctuation theorem for energy exchange between thermal quantum systems beyond the assumption of molecular chaos, and describe the nonequilibrium exchange dynamics of correlated quantum states. The relation quantifies how the tendency for systems to equilibrate is modified in high-correlation environments. In addition, a more abstract approach leads us to a "correlation fluctuation theorem". Our results elucidate the role of measurement disturbance for such scenarios. We show a simple application by finding a semiclassical maximum work theorem in the presence of correlations. We also present a toy example of qubit-qudit heat exchange, and find that non-classical behaviours such as deterministic energy transfer and anomalous heat flow are reflected in our exchange fluctuation theorem. PMID:26565174
Estimation of seismic ground motions using deterministic approach for major cities of Gujarat
NASA Astrophysics Data System (ADS)
Shukla, J.; Choudhury, D.
2012-06-01
A deterministic seismic hazard analysis has been carried out for various sites in the major cities (Ahmedabad, Surat, Bhuj, Jamnagar, and Junagadh) of the Gujarat region of India to compute the seismic hazard in terms of peak ground acceleration (PGA) and to estimate the maximum possible PGA at each site at bedrock level. The seismic sources in Gujarat are very uncertain, and the recurrence intervals of regional large earthquakes are not well defined. Because instrumental records for India, and for the Gujarat region in particular, are far from satisfactory for modeling the seismic hazard with a probabilistic approach, an attempt has been made in this study to accomplish it through the deterministic approach. To this end, all small and large faults of the Gujarat region were evaluated to identify the major fault systems. Empirical relations suggested by earlier researchers for estimating the maximum earthquake magnitude from fault properties such as length, surface area, and slip rate were applied to these faults. For the analysis, seven different ground-motion attenuation relations (GMARs) were utilized to calculate the maximum horizontal ground accelerations for each major city of Gujarat. Epistemic uncertainties in the hazard computations are accounted for within a logic-tree framework by considering controlling parameters such as the b-value, maximum magnitude, and the GMARs. The corresponding deterministic spectra have been prepared for each major city for the 50th and 84th percentiles of ground-motion occurrence. These deterministic spectra are further compared with the specified spectra of Indian design code IS:1893-Part I (2002) to validate them for practical use. Close examination of the developed spectra reveals that the expected ground motion values become high for the Kachchh region, i.e. Bhuj
Keenan, Daniel M.; Alexander, Susan; Irvine, Clifford H. G.; Clarke, Iain; Scott, Chris; Turner, Anne; Tilbrook, A. J.; Canny, B. J.; Veldhuis, Johannes D.
2004-01-01
Homeostasis in the intact organism is achieved implicitly by repeated incremental feedback (inhibitory) and feedforward (stimulatory) adjustments enforced via intermittent signal exchange. In separated systems, neurohormone signals act deterministically on target cells via quantifiable effector-response functions. On the other hand, in vivo interglandular signaling dynamics have not been estimable to date. Indeed, experimentally isolating components of an interactive network definitionally disrupts time-sensitive linkages. We implement and validate analytical reconstruction of endogenous effector-response properties via a composite model comprising (i) a deterministic basic feedback and feedforward ensemble structure; (ii) judicious statistical allowance for possible stochastic variability in individual biologically interpretable dose–response properties; and (iii) the sole data requirement of serially observed concentrations of a paired signal (input) and response (output). Application of this analytical strategy to a prototypical neuroendocrine axis in the conscious uninjected horse, sheep, and human (i) illustrates probabilistic estimation of endogenous effector dose–response properties; and (ii) unmasks statistically vivid (2- to 5-fold) random fluctuations in inferred target-gland responsivity within any given pulse train. In conclusion, balanced mathematical formalism allows one to (i) reconstruct deterministic properties of interglandular signaling in the intact mammal and (ii) quantify apparent signal-response variability over short time scales in vivo. The present proof-of-principle experiments introduce a previously undescribed means to estimate time-evolving signal-response relationships without isotope infusion or pathway disruption. PMID:15090645
Deterministic remote two-qubit state preparation in dissipative environments
NASA Astrophysics Data System (ADS)
Li, Jin-Fang; Liu, Jin-Ming; Feng, Xun-Li; Oh, C. H.
2016-05-01
We propose a new scheme for efficient remote preparation of an arbitrary two-qubit state, introducing two auxiliary qubits and using two Einstein-Podolsky-Rosen (EPR) states as the quantum channel in a non-recursive way. At variance with all existing schemes, our scheme accomplishes deterministic remote state preparation (RSP) with only one sender and the simplest entangled resource (say, EPR pairs). We construct the corresponding quantum logic circuit using a unitary matrix decomposition procedure and analytically obtain the average fidelity of the deterministic RSP process for dissipative environments. Our studies show that, while the average fidelity gradually decreases to a stable value without any revival in the Markovian regime, it decreases to the same stable value with a dampened revival amplitude in the non-Markovian regime. We also find that the average fidelity's approximate maximal value can be preserved for a long time if the non-Markovian and the detuning conditions are satisfied simultaneously.
Deterministic synthesis of mechanical NOON states in ultrastrong optomechanics
NASA Astrophysics Data System (ADS)
Macrí, V.; Garziano, L.; Ridolfo, A.; Di Stefano, O.; Savasta, S.
2016-07-01
We propose a protocol for the deterministic preparation of entangled NOON mechanical states. The system is constituted by two identical, optically coupled optomechanical systems. The protocol consists of two steps. In the first, one of the two optical resonators is excited by a resonant external π-like Gaussian optical pulse. When the optical excitation coherently partly transfers to the second cavity, the second step starts. It consists of sending simultaneously two additional π-like Gaussian optical pulses, one at each optical resonator, with specific frequencies. In the optomechanical ultrastrong coupling regime, when the coupling strength becomes a significant fraction of the mechanical frequency, we show that NOON mechanical states with quite high Fock states can be deterministically obtained. The operating range of this protocol is carefully analyzed. Calculations have been carried out taking into account the presence of decoherence, thermal noise, and imperfect cooling.
Deterministic generation of multiparticle entanglement by quantum Zeno dynamics.
Barontini, Giovanni; Hohmann, Leander; Haas, Florian; Estève, Jérôme; Reichel, Jakob
2015-09-18
Multiparticle entangled quantum states, a key resource in quantum-enhanced metrology and computing, are usually generated by coherent operations exclusively. However, unusual forms of quantum dynamics can be obtained when environment coupling is used as part of the state generation. In this work, we used quantum Zeno dynamics (QZD), based on nondestructive measurement with an optical microcavity, to deterministically generate different multiparticle entangled states in an ensemble of 36 qubit atoms in less than 5 microseconds. We characterized the resulting states by performing quantum tomography, yielding a time-resolved account of the entanglement generation. In addition, we studied the dependence of quantum states on measurement strength and quantified the depth of entanglement. Our results show that QZD is a versatile tool for fast and deterministic entanglement generation in quantum engineering applications.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement.
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-01-01
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication. PMID:26861681
On the secure obfuscation of deterministic finite automata.
Anderson, William Erik
2008-06-01
In this paper, we show how to construct secure obfuscation for Deterministic Finite Automata, assuming non-uniformly strong one-way functions exist. We revisit the software protection approaches originally proposed by [5, 10, 12, 17] and revise them to the current obfuscation setting of Barak et al. [2]. Under this model, we introduce an efficient oracle that retains some 'small' secret about the original program. Using this secret, we can construct an obfuscator and two-party protocol that securely obfuscates Deterministic Finite Automata against malicious adversaries. The security of this model retains the strong 'virtual black box' property originally proposed in [2] while incorporating the stronger condition of dependent auxiliary inputs in [15]. Additionally, we show that our techniques remain secure under concurrent self-composition with adaptive inputs and that Turing machines are obfuscatable under this model.
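The object being obfuscated above is a deterministic finite automaton. A minimal evaluator (an illustrative sketch with a hypothetical toy automaton, not the paper's construction) shows the functionality any obfuscator must preserve while hiding the transition table:

```python
def run_dfa(delta, start, accept, word):
    """Evaluate a deterministic finite automaton on an input word.
    delta maps (state, symbol) pairs to the next state."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accept

# Toy DFA accepting binary strings with an even number of 1s (illustrative only).
delta = {(0, '0'): 0, (0, '1'): 1, (1, '0'): 1, (1, '1'): 0}
print(run_dfa(delta, 0, {0}, '1101'))  # False: three 1s
print(run_dfa(delta, 0, {0}, '1001'))  # True: two 1s
```

A secure obfuscation would expose only the input-output behavior of run_dfa while revealing nothing else about delta beyond what black-box queries give away.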
Deterministic algorithm with agglomerative heuristic for location problems
NASA Astrophysics Data System (ADS)
Kazakovtsev, L.; Stupina, A.
2015-10-01
The authors consider the clustering problem solved with the k-means method and the p-median problem with various distance metrics. The p-median problem, and the k-means problem as its special case, are among the most popular models in location theory. They are used to solve clustering problems and many practically important logistics problems, such as the optimal location of factories, warehouses, oil or gas wells, offshore drilling sites, and steam generators in heavy-oil fields. The authors propose a new deterministic heuristic algorithm based on ideas from Information Bottleneck Clustering and genetic algorithms with a greedy heuristic. In this paper, results of running the new algorithm on various data sets are compared with known deterministic and stochastic methods. The new algorithm is shown to be significantly faster than the Information Bottleneck Clustering method while achieving comparable precision.
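To make the p-median objective concrete, here is a minimal greedy sketch (our own illustration, not the authors' algorithm), assuming Euclidean distances and facilities restricted to the demand points themselves:

```python
import numpy as np

def greedy_p_median(points, p):
    """Greedy heuristic for the p-median problem: repeatedly open the
    facility (chosen among the demand points) that most reduces the
    total distance from every point to its nearest open facility."""
    points = np.asarray(points, dtype=float)
    # pairwise Euclidean distance matrix
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    medians = []
    for _ in range(p):
        best, best_cost = None, np.inf
        for j in range(len(points)):
            if j in medians:
                continue
            cost = dist[:, medians + [j]].min(axis=1).sum()
            if cost < best_cost:
                best, best_cost = j, cost
        medians.append(best)
    return medians, best_cost

pts = [(0, 0), (0, 1), (10, 0), (10, 1), (20, 0)]
medians, cost = greedy_p_median(pts, 2)
print(sorted(pts[i] for i in medians))  # → [(0, 0), (10, 0)]
```

The k-means special case replaces the fixed candidate sites with free centroids and the distance with its square; greedy construction like this is typically used to seed a subsequent local-search or genetic refinement phase.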
Approaches to implementing deterministic models in a probabilistic framework
Talbott, D.V.
1995-04-01
The increasing use of results from probabilistic risk assessments in the decision-making process makes it ever more important to eliminate simplifications in probabilistic models that might lead to conservative results. One area in which conservative simplifications are often made is modeling the physical interactions that occur during the progression of an accident sequence. This paper demonstrates and compares different approaches for incorporating deterministic models of physical parameters into probabilistic models: parameter range binning, response curves, and integral deterministic models. An example that combines all three approaches in a probabilistic model for the handling of an energetic material (i.e. high explosive, rocket propellant,...) is then presented using a directed graph model.
Deterministic error correction for nonlocal spatial-polarization hyperentanglement
NASA Astrophysics Data System (ADS)
Li, Tao; Wang, Guan-Yu; Deng, Fu-Guo; Long, Gui-Lu
2016-02-01
Hyperentanglement is an effective quantum source for quantum communication networks due to its high capacity, low loss rate, and its ability to teleport a quantum particle completely. Here we present a deterministic error-correction scheme for nonlocal spatial-polarization hyperentangled photon pairs over collective-noise channels. In our scheme, the spatial-polarization hyperentanglement is first encoded into a spatially defined time-bin entanglement with identical polarization before it is transmitted over collective-noise channels, which leads to the rejection of errors in the spatial entanglement during transmission. The polarization noise affecting the polarization entanglement can be corrected with a proper one-step decoding procedure. The two parties in quantum communication can, in principle, obtain a nonlocal maximally entangled spatial-polarization hyperentanglement in a deterministic way, which makes our protocol more convenient than others for long-distance quantum communication.
Deterministic entanglement of two neutral atoms via Rydberg blockade
Zhang, X. L.; Isenhower, L.; Gill, A. T.; Walker, T. G.; Saffman, M.
2010-09-15
We demonstrate the deterministic entanglement of two individually addressed neutral atoms using a Rydberg blockade mediated controlled-not gate. Parity oscillation measurements reveal a Bell state fidelity of F=0.58{+-}0.04, which is above the entanglement threshold of F=0.5, without any correction for atom loss, and F=0.71{+-}0.05 after correcting for background collisional losses. The fidelity results are shown to be in good agreement with a detailed error model.
The deterministic SIS epidemic model in a Markovian random environment.
Economou, Antonis; Lopez-Herrero, Maria Jesus
2016-07-01
We consider the classical deterministic susceptible-infective-susceptible epidemic model, where the infection and recovery rates depend on a background environmental process that is modeled by a continuous time Markov chain. This framework is able to capture several important characteristics that appear in the evolution of real epidemics in large populations, such as seasonality effects and environmental influences. We propose computational approaches for the determination of various distributions that quantify the evolution of the number of infectives in the population. PMID:26515172
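The model class described above couples deterministic SIS dynamics to a randomly switching environment. A minimal sketch (our own illustrative parameters and a simple Euler discretization, not the authors' computational approach):

```python
import random

def sis_markov_env(beta, gamma, q01, q10, i0=0.01, t_end=50.0, dt=1e-3, seed=1):
    """Deterministic SIS dynamics di/dt = beta_e*i*(1 - i) - gamma_e*i,
    where the environment e in {0, 1} flips as a two-state continuous-time
    Markov chain with switching rates q01 and q10."""
    random.seed(seed)
    i, env, t = i0, 0, 0.0
    while t < t_end:
        # chance of an environment switch in this small time step
        rate = q01 if env == 0 else q10
        if random.random() < rate * dt:
            env = 1 - env
        i += dt * (beta[env] * i * (1 - i) - gamma[env] * i)
        t += dt
    return i

# Environment 0 favors spread (beta > gamma); environment 1 suppresses it.
print(sis_markov_env(beta=(0.8, 0.2), gamma=(0.4, 0.4), q01=0.5, q10=0.5))
```

With the switching rates set to zero the model reduces to the classical SIS equation and converges to its endemic equilibrium 1 - gamma/beta; with switching on, the infective fraction tracks a randomly moving equilibrium, mimicking seasonality.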
Nano transfer and nanoreplication using deterministically grown sacrificial nanotemplates
Melechko, Anatoli V.; McKnight, Timothy E.; Guillorn, Michael A.; Ilic, Bojan; Merkulov, Vladimir I.; Doktycz, Mitchel J.; Lowndes, Douglas H.; Simpson, Michael L.
2012-03-27
Methods, manufactures, machines and compositions are described for nanotransfer and nanoreplication using deterministically grown sacrificial nanotemplates. An apparatus, includes a substrate and a nanoconduit material coupled to a surface of the substrate. The substrate defines an aperture and the nanoconduit material defines a nanoconduit that is i) contiguous with the aperture and ii) aligned substantially non-parallel to a plane defined by the surface of the substrate.
A deterministic algorithm for constrained enumeration of transmembrane protein folds.
Brown, William Michael; Young, Malin M.; Sale, Kenneth L.; Faulon, Jean-Loup Michel; Schoeniger, Joseph S.
2004-07-01
A deterministic algorithm for enumeration of transmembrane protein folds is presented. Using a set of sparse pairwise atomic distance constraints (such as those obtained from chemical cross-linking, FRET, or dipolar EPR experiments), the algorithm performs an exhaustive search of secondary structure element packing conformations distributed throughout the entire conformational space. The end result is a set of distinct protein conformations, which can be scored and refined as part of a process designed for computational elucidation of transmembrane protein structures.
Beyond Dispersity: Deterministic Control of Polymer Molecular Weight Distribution.
Gentekos, Dillon T; Dupuis, Lauren N; Fors, Brett P
2016-02-17
The breadth of the molecular weight distributions (MWD) of polymers influences their physical properties; however, no synthetic methods allow precise control of the exact shape and composition of a distribution. We report a modular strategy that enables deterministic control over polymer MWD through temporal regulation of initiation in nitroxide-mediated polymerization reactions. This approach is applicable to any controlled polymerization that uses a discrete initiator, and it allows the use of MWD composition as a parameter to tune material properties.
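The idea of shaping an MWD by temporal regulation of initiation can be caricatured with an idealized living-polymerization model in which each initiator aliquot seeds a Poisson chain-length distribution whose mean is set by the remaining growth time. The model, the parameters, and the function name below are our own illustrative assumptions, not the authors' chemistry:

```python
import math

def staged_mwd(feed, total_time, rate):
    """Number-fraction chain-length distribution for an idealized living
    polymerization. Initiator aliquots feed = [(t_i, w_i), ...] are added
    at times t_i with relative amounts w_i; chains started at t_i grow to
    a Poisson distribution with mean rate*(total_time - t_i), so the feed
    profile directly shapes the overall MWD."""
    lengths = list(range(1, 400))
    wsum = sum(w for _, w in feed)
    dist = [0.0] * len(lengths)
    for t_i, w in feed:
        mean = rate * (total_time - t_i)  # average growth since addition
        for k, n in enumerate(lengths):
            # Poisson probability of length n, computed in log space for stability
            dist[k] += (w / wsum) * math.exp((n - 1) * math.log(mean)
                                             - mean - math.lgamma(n))
    return lengths, dist

# Two equal initiator pulses give a bimodal distribution peaking near 200 and 50.
lengths, dist = staged_mwd([(0.0, 1.0), (7.5, 1.0)], total_time=10.0, rate=20.0)
```

Varying the pulse times and weights continuously deforms the distribution between narrow, broad, skewed, and multimodal shapes, which is the sense in which initiation timing becomes a design parameter for the MWD.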
Deterministic polarization-entanglement purification using spatial entanglement
Li Xihan
2010-10-15
We present an efficient entanglement purification protocol with hyperentanglement in which additional spatial entanglement is utilized to purify the two-particle polarization-entangled state. Bit-flip and phase-flip errors can be corrected and eliminated in one step. Two remote parties can obtain maximally entangled polarization states deterministically, and only passive linear optics is employed. We also discuss the protocol with a practical quantum source and a noisy channel.
Demographic noise can reverse the direction of deterministic selection.
Constable, George W A; Rogers, Tim; McKane, Alan J; Tarnita, Corina E
2016-08-01
Deterministic evolutionary theory robustly predicts that populations displaying altruistic behaviors will be driven to extinction by mutant cheats that absorb common benefits but do not themselves contribute. Here we show that when demographic stochasticity is accounted for, selection can in fact act in the reverse direction to that predicted deterministically, instead favoring cooperative behaviors that appreciably increase the carrying capacity of the population. Populations that exist in larger numbers experience a selective advantage by being more stochastically robust to invasions than smaller populations, and this advantage can persist even in the presence of reproductive costs. We investigate this general effect in the specific context of public goods production and find conditions for stochastic selection reversal leading to the success of public good producers. This insight, developed here analytically, is missed by the deterministic analysis as well as by standard game theoretic models that enforce a fixed population size. The effect is found to be amplified by space; in this scenario we find that selection reversal occurs within biologically reasonable parameter regimes for microbial populations. Beyond the public good problem, we formulate a general mathematical framework for models that may exhibit stochastic selection reversal. In this context, we describe a stochastic analog to r-K theory, by which small populations can evolve to higher densities in the absence of disturbance. PMID:27450085
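The contrast between a deterministic carrying capacity and demographic noise can be illustrated with a standard Gillespie simulation of a logistic birth-death process (our own illustrative rates, not the authors' public-goods model):

```python
import random

def gillespie_logistic(b, d, K, n0, t_end, seed):
    """Gillespie simulation of a logistic birth-death process with
    birth rate b*n and death rate n*(d + (b - d)*n/K), whose
    deterministic limit has carrying capacity K. Demographic noise lets
    finite populations fluctuate and even go extinct, unlike the ODE."""
    random.seed(seed)
    n, t = n0, 0.0
    while t < t_end and n > 0:
        birth = b * n
        death = n * (d + (b - d) * n / K)
        total = birth + death
        t += random.expovariate(total)          # time to the next event
        n += 1 if random.random() < birth / total else -1
    return n

# The deterministic prediction is that the population sits exactly at K = 50.
print(gillespie_logistic(b=2.0, d=1.0, K=50, n0=50, t_end=100.0, seed=3))
```

Repeated runs with different seeds scatter around K rather than sitting on it, and the scatter grows as K shrinks; it is exactly this finite-size fluctuation that the paper shows can reverse the direction of selection.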
Deterministic generation of remote entanglement with active quantum feedback
NASA Astrophysics Data System (ADS)
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-01
We consider the task of deterministically entangling two remote qubits using joint measurement and feedback, but no directly entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Finally, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Deterministic form correction of extreme freeform optical surfaces
NASA Astrophysics Data System (ADS)
Lynch, Timothy P.; Myer, Brian W.; Medicus, Kate; DeGroote Nelson, Jessica
2015-10-01
The blistering pace of recent technological advances has led lens designers to rely increasingly on freeform optical components as crucial pieces of their designs. As these freeform components increase in geometrical complexity and continue to deviate further from traditional optical designs, the optical manufacturing community must rethink their fabrication processes in order to keep pace. To meet these new demands, Optimax has developed a variety of new deterministic freeform manufacturing processes. Combining traditional optical fabrication techniques with cutting edge technological innovations has yielded a multifaceted manufacturing approach that can successfully handle even the most extreme freeform optical surfaces. In particular, Optimax has placed emphasis on refining the deterministic form correction process. By developing many of these procedures in house, changes can be implemented quickly and efficiently in order to rapidly converge on an optimal manufacturing method. Advances in metrology techniques allow for rapid identification and quantification of irregularities in freeform surfaces, while deterministic correction algorithms precisely target features on the part and drastically reduce overall correction time. Together, these improvements have yielded significant advances in the realm of freeform manufacturing. With further refinements to these and other aspects of the freeform manufacturing process, the production of increasingly radical freeform optical components is quickly becoming a reality.
Probabilistic vs deterministic views in facing natural hazards
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Coviello, Velio
2015-04-01
Natural hazards can be mitigated through active or passive measures. Among the latter countermeasures, Early Warning Systems (EWSs) are playing an increasingly significant role. In particular, a growing number of studies investigate the reliability of landslide EWSs, their comparability to alternative protection measures, and their cost-effectiveness. EWSs, however, inevitably and intrinsically imply the concept of probability of occurrence and/or probability of error. Science has long since accepted and integrated the probabilistic nature of reality and its phenomena. The same cannot be said of other fields of knowledge, such as law or politics, with which scientists sometimes have to interact. These disciplines are still linked to more deterministic views of life. The same is true of public opinion, which often demands, or even presumes, a deterministic type of answer to its needs. So, for example, it is easy for people to feel completely safe because an EWS has been installed. It is also easy for an administrator or a politician to contribute to spreading this false sense of security, together with the idea of having dealt with the problem and done something definitive to face it. Can geoethics play a role in creating a link between the probabilistic world of nature and science and society's tendency toward a more deterministic view of things? Answering this question could help scientists feel more confident in planning and performing their research activities.
Convergence studies of deterministic methods for LWR explicit reflector methodology
Canepa, S.; Hursin, M.; Ferroukhi, H.; Pautz, A.
2013-07-01
The standard approach in modern 3-D core simulators, employed either for steady-state or transient simulations, is to use albedo coefficients or explicit reflectors at the core axial and radial boundaries. In the latter approach, few-group homogenized nuclear data are produced a priori with lattice transport codes using 2-D reflector models. Recently, the explicit reflector methodology of the deterministic CASMO-4/SIMULATE-3 code system was identified as potentially one of the main sources of error in core analyses of the Swiss operating LWRs, all of which are of GII design. Considering that some of the new GIII designs will rely on very different reflector concepts, a review and assessment of the reflector methodology for various LWR designs appeared relevant. Therefore, the purpose of this paper is first to recall the concepts of the explicit reflector modelling approach as employed by CASMO/SIMULATE. Then, for selected reflector configurations representative of both GII and GIII designs, a benchmarking of the few-group nuclear data produced with the deterministic lattice code CASMO-4 and its successor CASMO-5 is conducted. On this basis, a convergence study of the geometrical requirements when using deterministic methods with 2-D homogeneous models is conducted, and the effect on the accuracy of the downstream 3-D core analysis is evaluated for a typical GII reflector design, in order to assess the results against available plant measurements. (authors)
Deterministic generation of remote entanglement with active quantum feedback
Martin, Leigh; Motzoi, Felix; Li, Hanhan; Sarovar, Mohan; Whaley, K. Birgitta
2015-12-10
We develop and study protocols for deterministic remote entanglement generation using quantum feedback, without relying on an entangling Hamiltonian. In order to formulate the most effective experimentally feasible protocol, we introduce the notion of average-sense locally optimal feedback protocols, which do not require real-time quantum state estimation, a difficult component of real-time quantum feedback control. We use this notion of optimality to construct two protocols that can deterministically create maximal entanglement: a semiclassical feedback protocol for low-efficiency measurements and a quantum feedback protocol for high-efficiency measurements. The latter reduces to direct feedback in the continuous-time limit, whose dynamics can be modeled by a Wiseman-Milburn feedback master equation, which yields an analytic solution in the limit of unit measurement efficiency. Our formalism can smoothly interpolate between continuous-time and discrete-time descriptions of feedback dynamics and we exploit this feature to derive a superior hybrid protocol for arbitrary nonunit measurement efficiency that switches between quantum and semiclassical protocols. Lastly, we show using simulations incorporating experimental imperfections that deterministic entanglement of remote superconducting qubits may be achieved with current technology using the continuous-time feedback protocol alone.
Iterative acceleration methods for Monte Carlo and deterministic criticality calculations
Urbatsch, T.J.
1995-11-01
If you have ever given up on a nuclear criticality calculation and terminated it because it took so long to converge, you might find this thesis of interest. The author develops three methods for improving the fission source convergence in nuclear criticality calculations for physical systems with high dominance ratios for which convergence is slow. The Fission Matrix Acceleration Method and the Fission Diffusion Synthetic Acceleration (FDSA) Method are acceleration methods that speed fission source convergence for both Monte Carlo and deterministic methods. The third method is a hybrid Monte Carlo method that also converges for difficult problems where the unaccelerated Monte Carlo method fails. The author tested the feasibility of all three methods in a test bed consisting of idealized problems. He has successfully accelerated fission source convergence in both deterministic and Monte Carlo criticality calculations. By filtering statistical noise, he has incorporated deterministic attributes into the Monte Carlo calculations in order to speed their source convergence. He has used both the fission matrix and a diffusion approximation to perform unbiased accelerations. The Fission Matrix Acceleration method has been implemented in the production code MCNP and successfully applied to a real problem. When the unaccelerated calculations are unable to converge to the correct solution, they cannot be accelerated in an unbiased fashion. A Hybrid Monte Carlo method weds Monte Carlo and a modified diffusion calculation to overcome these deficiencies. The Hybrid method additionally possesses reduced statistical errors.
Qu, Yanyan; Xia, Simin; Yuan, Huiming; Wu, Qi; Li, Man; Zou, Lijuan; Zhang, Lihua; Liang, Zhen; Zhang, Yukui
2011-10-01
An integrated sample pretreatment system, composed of a click maltose hydrophilic interaction chromatography (HILIC) column, a strong cation exchange (SCX) precolumn, and a PNGase F immobilized enzymatic reactor (IMER), was established for simultaneous glycopeptide enrichment, sample buffer exchange, and online deglycosylation, allowing the sample pretreatment for the glycoproteome to be performed online and automatically, which improves the efficiency and sensitivity of N-linked glycosylation site identification. With this system, deglycosylated glycopeptides from an avidin digest could be selectively detected in the presence of a 50-fold (mass ratio) excess of BSA, and a detection limit as low as 5 fmol was achieved. Moreover, the sample pretreatment time was significantly shortened to ~1 h. The system was further successfully applied to analyze the digest of the soluble fraction extracted from rat brain. A total of 120 unique glycoprotein groups and 196 N-linked glycosylation sites were identified by nano-reversed-phase liquid chromatography-electrospray ionization-tandem mass spectrometry (nanoRPLC-ESI-MS/MS) from 6 μg of injected digest. All these results demonstrate that the integrated system holds great promise for N-linked glycosylation site profiling and could be further coupled online with nanoHPLC-ESI-MS/MS to achieve high-throughput glycoproteome analysis.
Mn(2+)-Ion Site Distribution of Zeolite Y (FAU, Si/Al = 1.56) Depending on the Ion-Exchange Ratio.
Seo, Sung Man; Moon, Dae Jun; Suh, Jeong Min; Zhu, John; Lim, Woo Taik
2016-05-01
To investigate the tendency of Mn(2+)-ion exchange into zeolite Y, four single crystals of fully dehydrated Mn(2+),Na(+)-exchanged zeolite Y (Si/Al = 1.56) were prepared by exchange of Na75-Y (|Na75|[Si117Al75O384]-FAU) with aqueous solutions of Mn(2+) and Na(+) (total concentration 0.05 M) at Mn(2+)-to-Na(+) molar ratios of 1:1 (crystal 1), 1:25 (crystal 2), 1:50 (crystal 3), and 1:100 (crystal 4), followed by vacuum dehydration at 400 degrees C. Their single-crystal structures were determined by synchrotron X-ray diffraction in the cubic space group Fd-3m and refined to the final error indices R1/wR2 = 0.0440/0.1545, 0.0369/0.1153, 0.0373/0.1091, and 0.0506/0.1667, respectively. Their unit-cell formulas are approximately |Mn33.5Na8|[Si117Al75O384]-FAU, |Mn20.5Na34|[Si117Al75O384]-FAU, |Mn20.5Na34|[Si117Al75O384]-FAU, and |Mn16.5Na42|[Si117Al75O384]-FAU, respectively. The degree of Mn(2+)-ion exchange increases from 44.3% to 89.1% with increasing initial Mn(2+) concentration, while the Na(+) content and the unit-cell constant of the zeolite framework decrease.
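The reported unit-cell formulas can be sanity-checked against the framework charge: 75 Al atoms per unit cell require 75 monovalent-cation equivalents, and the Mn2+-exchange degree follows from the same count (each Mn2+ balances two framework charges). A quick check with the compositions quoted above; the small differences from the quoted 44.3%/89.1% reflect rounding of the refined occupancies:

```python
AL_PER_CELL = 75  # framework Al atoms per unit cell => 75+ of cation charge needed

def exchange_degree(n_mn, n_na, al=AL_PER_CELL):
    """Fraction of framework charge balanced by Mn2+ (each Mn2+ counts twice)."""
    assert abs(2 * n_mn + n_na - al) < 0.6, "charge balance violated"
    return 2 * n_mn / al

# (n_Mn, n_Na) per unit cell from the reported formulas
crystals = {1: (33.5, 8), 2: (20.5, 34), 3: (20.5, 34), 4: (16.5, 42)}
degrees = {c: exchange_degree(*mn_na) for c, mn_na in crystals.items()}
```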
Espinosa-Asuar, Laura; Escalante, Ana Elena; Gasca-Pineda, Jaime; Blaz, Jazmín; Peña, Lorena; Eguiarte, Luis E; Souza, Valeria
2015-06-01
The aim of this study was to determine the contributions of stochastic vs. deterministic processes in the distribution of microbial diversity in four ponds (Pozas Azules) within a temporally stable aquatic system in the Cuatro Cienegas Basin, State of Coahuila, Mexico. A sampling strategy for sites that were geographically delimited and had low environmental variation was applied to avoid obscuring distance effects. Aquatic bacterial diversity was characterized following a culture-independent approach (16S sequencing of clone libraries). The results showed a correlation between bacterial beta diversity (1-Sorensen) and geographic distance (distance decay of similarity), which indicated the influence of stochastic processes related to dispersion in the assembly of the ponds' bacterial communities. Our findings are the first to show the influence of dispersal limitation in the prokaryotic diversity distribution of Cuatro Cienegas Basin. PMID:26496618
NASA Astrophysics Data System (ADS)
Ervens, Barbara; Feingold, Graham
2013-06-01
Ice particle number concentrations are often described deterministically, i.e., ice nucleation is singular and occurs on active sites unambiguously at a given temperature. Other approaches are based on classical nucleation theory (CNT) that describes ice nucleation stochastically as a function of time and nucleation rate. Sensitivity studies of CNT for immersion freezing performed here show that ice nucleation has by far the lowest sensitivity to time as compared to temperature, ice nucleus (IN) diameter, and contact angle. Sensitivities generally decrease with decreasing temperature. Our study helps to reconcile the apparent differences in stochastic and singular freezing behavior, and suggests that over a wide range of temperatures and IN parameters, time-independent CNT-based expressions for immersion freezing may be derived for use in large-scale models.
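The stochastic-versus-singular question can be illustrated with the Poisson freezing probability that CNT implies, f(T, t) = 1 − exp(−J(T)·t): because the nucleation rate J depends exponentially on temperature, doubling the exposure time moves f far less than one extra degree of cooling. The rate parameterization below is purely illustrative, not the paper's fitted CNT constants:

```python
import math

def nucleation_rate(T_c, J0=1e-20, b=2.0):
    """Assumed per-particle rate [1/s]; grows by ~e^b per degree of cooling."""
    return J0 * math.exp(-b * T_c)   # T_c in deg C, negative when supercooled

def frozen_fraction(T_c, t):
    """Stochastic (time-dependent) freezing probability after time t [s]."""
    return 1.0 - math.exp(-nucleation_rate(T_c) * t)

base = frozen_fraction(-20.0, 1.0)
gain_from_time = frozen_fraction(-20.0, 2.0) - base   # double the time
gain_from_temp = frozen_fraction(-21.0, 1.0) - base   # cool by 1 K
```

With any exponent b appreciably above ln 2, the temperature sensitivity dominates the time sensitivity, which is the qualitative point of the sensitivity study: time-independent (singular-like) expressions can then be an adequate approximation.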
Deterministic and Nondeterministic Behavior of Earthquakes and Hazard Mitigation Strategy
NASA Astrophysics Data System (ADS)
Kanamori, H.
2014-12-01
Earthquakes exhibit both deterministic and nondeterministic behavior. Deterministic behavior is controlled by length and time scales such as the dimension of seismogenic zones and plate-motion speed. Nondeterministic behavior is controlled by the interaction of many elements, such as asperities, in the system. Some subduction zones have strong deterministic elements which allow forecasts of future seismicity. For example, the forecasts of the 2010 Mw=8.8 Maule, Chile, earthquake and the 2012 Mw=7.6 Costa Rica earthquake are good examples in which useful forecasts were made within a solid scientific framework using GPS. However, even in these cases, because of the nondeterministic elements, uncertainties are difficult to quantify. In some subduction zones, nondeterministic behavior dominates because of complex plate-boundary structures and defies useful forecasts. The 2011 Mw=9.0 Tohoku-Oki earthquake may be an example in which the physical framework was reasonably well understood, but complex interactions of asperities and insufficient knowledge about the subduction-zone structures led to the unexpected tragic consequence. Despite these difficulties, broadband seismology, GPS, and rapid data-processing and telemetry technology can contribute to effective hazard mitigation through a scenario-earthquake approach and real-time warning. A scale-independent relation between the seismic moment M0 and the source duration t can be used for the design of average scenario earthquakes. However, outliers caused by variations in stress drop, radiation efficiency, and the aspect ratio of the rupture plane are often the most hazardous and need to be included in scenario earthquakes. Recent developments in real-time technology will help seismologists cope with, and prepare for, devastating tsunamis and earthquakes. Combining a better understanding of earthquake diversity with modern technology is the key to effective and comprehensive hazard mitigation practices.
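The scale-independent moment-duration relation invoked above is the usual cube-root scaling for roughly constant stress drop, t ∝ M0^(1/3); combined with the standard moment-magnitude definition it fixes the relative durations of scenario events. The prefactor c below is an arbitrary illustrative constant, not a calibrated value:

```python
def moment_from_mw(mw):
    """Seismic moment M0 in N*m from moment magnitude: log10 M0 = 1.5*Mw + 9.1."""
    return 10.0 ** (1.5 * mw + 9.1)

def source_duration(m0, c=2.0e-6):
    """t ~ c * M0**(1/3); c subsumes stress drop and rupture velocity (assumed)."""
    return c * m0 ** (1.0 / 3.0)

# One magnitude unit = factor 10^1.5 in moment = factor 10^0.5 in duration,
# so two magnitude units (Mw 7 -> Mw 9) stretch the duration tenfold.
ratio = source_duration(moment_from_mw(9.0)) / source_duration(moment_from_mw(7.0))
```

The abstract's point about outliers is exactly that real events scatter around this average scaling because stress drop, radiation efficiency, and rupture aspect ratio vary.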
Spatial continuity measures for probabilistic and deterministic geostatistics
Isaaks, E.H.; Srivastava, R.M.
1988-05-01
Geostatistics has traditionally used a probabilistic framework, one in which expected values or ensemble averages are of primary importance. The less familiar deterministic framework views geostatistical problems in terms of spatial integrals. This paper outlines the two frameworks and examines the issue of which spatial continuity measure, the covariance C(h) or the variogram γ(h), is appropriate for each framework. Although C(h) and γ(h) were defined originally in terms of spatial integrals, the convenience of probabilistic notation made the expected value definitions more common. These now classical expected value definitions entail a linear relationship between C(h) and γ(h); the spatial integral definitions do not. In a probabilistic framework, where available sample information is extrapolated to domains other than the one which was sampled, the expected value definitions are appropriate; furthermore, within a probabilistic framework, reasons exist for preferring the variogram to the covariance function. In a deterministic framework, where available sample information is interpolated within the same domain, the spatial integral definitions are appropriate and no reasons are known for preferring the variogram. A case study on a Wiener-Levy process demonstrates differences between the two frameworks and shows that, for most estimation problems, the deterministic viewpoint is more appropriate. Several case studies on real data sets reveal that the sample covariance function reflects the character of spatial continuity better than the sample variogram. From both theoretical and practical considerations, clearly for most geostatistical problems, direct estimation of the covariance is better than the traditional variogram approach.
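Under the classical expected-value definitions, the linear relationship referred to above is: variogram(h) = C(0) − C(h) for a second-order stationary field. Sample estimates satisfy it only approximately, which is part of why the two frameworks lead to different practice. A small numerical check on synthetic data (white noise standing in for a stationary field; not the paper's case studies):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=4000)  # uncorrelated stand-in for a stationary field

def sample_covariance(z, h):
    """Sample covariance at lag h (mean-corrected head and tail separately)."""
    a, b = (z[:-h], z[h:]) if h > 0 else (z, z)
    return float(np.mean((a - a.mean()) * (b - b.mean())))

def sample_semivariogram(z, h):
    """Classical semivariogram estimator: half the mean squared increment."""
    return 0.5 * float(np.mean((z[h:] - z[:-h]) ** 2))

h = 5
lhs = sample_semivariogram(z, h)                           # gamma(h)
rhs = sample_covariance(z, 0) - sample_covariance(z, h)    # C(0) - C(h)
```

For white noise both sides are near the variance (here 1), but the two estimators use slightly different data windows and mean corrections, so they agree only up to sampling error.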
Deterministic side-branching during thermal dendritic growth
NASA Astrophysics Data System (ADS)
Mullis, Andrew M.
2015-06-01
The accepted view on dendritic side-branching is that side-branches grow as the result of selective amplification of thermal noise and that in the absence of such noise dendrites would grow without the development of side-arms. However, recently there has been renewed speculation about dendrites displaying deterministic side-branching [see e.g. ME Glicksman, Metall. Mater. Trans A 43 (2012) 391]. Generally, numerical models of dendritic growth, such as phase-field simulation, have tended to display behaviour which is commensurate with the former view, in that simulated dendrites do not develop side-branches unless noise is introduced into the simulation. However, here we present simulations at high undercooling that show that under certain conditions deterministic side-branching may occur. We use a model formulated in the thin interface limit and a range of advanced numerical techniques to minimise the numerical noise introduced into the solution, including a multigrid solver. Not only are multigrid solvers one of the most efficient means of inverting the large, but sparse, system of equations that results from implicit time-stepping, they are also very effective at smoothing noise at all wavelengths. This is in contrast to most Jacobi or Gauss-Seidel iterative schemes which are effective at removing noise with wavelengths comparable to the mesh size but tend to leave noise at longer wavelengths largely undamped. From an analysis of the tangential thermal gradients on the solid-liquid interface the mechanism for side-branching appears to be consistent with the deterministic model proposed by Glicksman.
Deterministic Single-Phonon Source Triggered by a Single Photon.
Söllner, Immo; Midolo, Leonardo; Lodahl, Peter
2016-06-10
We propose a scheme that enables the deterministic generation of single phonons at gigahertz frequencies triggered by single photons in the near infrared. This process is mediated by a quantum dot embedded on chip in an optomechanical circuit, which allows for the simultaneous control of the relevant photonic and phononic frequencies. We devise new optomechanical circuit elements that constitute the necessary building blocks for the proposed scheme and are readily implementable within the current state-of-the-art of nanofabrication. This will open new avenues for implementing quantum functionalities based on phonons as an on-chip quantum bus.
Ideal state reconstructor for deterministic digital control systems
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1989-01-01
A state reconstructor for deterministic digital systems is presented which is ideal in the following sense: if the plant parameters are known exactly, the output of the state reconstructor will exactly equal the true state of the plant, not just approximate it. Furthermore, this ideal state reconstructor adds no additional states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects only the measurement equation. While there are countless ways of choosing the ideal state reconstructor parameters, two distinct methods are described here. An example is presented which illustrates the procedures to completely design the ideal state reconstructor using both methods.
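The flavor of such an exact (non-asymptotic) reconstruction can be shown for a small discrete LTI plant: with exactly known parameters, the current state follows algebraically from the last few outputs and inputs, with no observer dynamics and no extra eigenvalues. This is a toy 2-state example of that general idea, not the paper's specific construction:

```python
import numpy as np

# Assumed toy plant: x_{k+1} = A x_k + B u_k,  y_k = C x_k
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])

def reconstruct_state(y_k, y_km1, u_km1):
    """Exact x_k from the last two outputs and one input:
    y_k     = C x_k
    y_{k-1} = C A^-1 x_k - C A^-1 B u_{k-1}
    """
    Ainv = np.linalg.inv(A)
    O = np.vstack([C, C @ Ainv])        # 2x2 and invertible for this plant
    rhs = np.array([y_k, y_km1 + (C @ Ainv @ B).item() * u_km1])
    return np.linalg.solve(O, rhs)

# Simulate one step from a known state, then recover it from measurements alone
x0 = np.array([1.0, -0.5])
u0 = 0.3
x1 = A @ x0 + B.flatten() * u0
y0 = (C @ x0).item()
y1 = (C @ x1).item()
x1_hat = reconstruct_state(y1, y0, u0)
```

If the plant parameters are exact, x1_hat equals x1 to machine precision, mirroring the "ideal" property claimed in the abstract; modeling error would enter only through the measurement equation.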
A deterministic global optimization using smooth diagonal auxiliary functions
NASA Astrophysics Data System (ADS)
Sergeyev, Yaroslav D.; Kvasov, Dmitri E.
2015-04-01
In many practical decision-making problems it happens that functions involved in the optimization process are black-box, with unknown analytical representations, and hard to evaluate. In this paper, a global optimization problem is considered where both the goal function f(x) and its gradient f′(x) are black-box functions. It is supposed that f′(x) satisfies the Lipschitz condition over the search hyperinterval with an unknown Lipschitz constant K. A new deterministic 'Divide-the-Best' algorithm based on efficient diagonal partitions and smooth auxiliary functions is proposed in its basic version, its convergence conditions are studied, and numerical experiments executed on eight hundred test functions are presented.
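The basic mechanics of deterministic Lipschitz global optimization can be seen in the classical one-dimensional Piyavskii-Shubert scheme, which diagonal algorithms of the 'Divide-the-Best' type generalize to hyperintervals. This minimal 1-D sketch uses only function values and an overestimate of the Lipschitz constant, not the authors' diagonal, derivative-based auxiliary functions:

```python
import bisect
import math

def piyavskii_minimize(f, a, b, K, n_iter=300, tol=1e-6):
    """Saw-tooth lower-envelope minimization on [a, b].
    K must overestimate the Lipschitz constant of f for the bound to be valid."""
    xs, fs = [a, b], [f(a), f(b)]
    best_val = min(fs)
    for _ in range(n_iter):
        # lower-envelope minimum over each interval between sampled points
        lb_best, xc_best = math.inf, None
        for x0, f0, x1, f1 in zip(xs, fs, xs[1:], fs[1:]):
            lb = 0.5 * (f0 + f1) - 0.5 * K * (x1 - x0)
            if lb < lb_best:
                lb_best = lb
                xc_best = 0.5 * (x0 + x1) + (f0 - f1) / (2.0 * K)
        if best_val - lb_best < tol:     # certified optimality gap reached
            break
        i = bisect.bisect(xs, xc_best)   # keep the sample list sorted
        xs.insert(i, xc_best)
        fs.insert(i, f(xc_best))
        best_val = min(best_val, fs[i])
    return best_val

# Classic multiextremal test function; |f'| <= 1 + 10/3, so K = 4.5 is safe
f = lambda x: math.sin(x) + math.sin(10.0 * x / 3.0)
fmin = piyavskii_minimize(f, 2.7, 7.5, K=4.5)
```

The global minimum of this test function on [2.7, 7.5] is about −1.8996; the saw-tooth bound both locates it and certifies how far the incumbent can be from optimal, which is the deterministic guarantee such methods trade evaluations for.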
Deterministic Superreplication of One-Parameter Unitary Transformations
NASA Astrophysics Data System (ADS)
Dür, W.; Sekatski, P.; Skotiniotis, M.
2015-03-01
We show that one can deterministically generate, out of N copies of an unknown unitary operation, up to N² almost perfect copies. The result holds for all operations generated by a Hamiltonian with an unknown interaction strength. This generalizes a similar result in the context of phase-covariant cloning where, however, superreplication comes at the price of an exponentially reduced probability of success. We also show that multiple copies of unitary operations can be emulated by operations acting on a much smaller space, e.g., a magnetic field acting on a single n-level system allows one to emulate the action of the field on n² qubits.
Deterministic versus stochastic aspects of superexponential population growth models
NASA Astrophysics Data System (ADS)
Grosjean, Nicolas; Huillet, Thierry
2016-08-01
Deterministic population growth models with power-law rates can exhibit a large variety of growth behaviors, ranging from algebraic and exponential to hyperexponential (finite-time explosion). In this setup, self-similarity considerations play a key role, together with two time substitutions. Two stochastic versions of such models are investigated, showing a much richer variety of behaviors. One is the Lamperti construction of self-similar positive stochastic processes based on the exponentiation of spectrally positive processes, followed by an appropriate time change. The other is based on stable continuous-state branching processes, given by another Lamperti time substitution applied to stable spectrally positive processes.
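The hyperexponential (finite-time explosion) regime already appears in the simplest power-law rate model, ẋ = x^a with a > 1, which blows up at t* = x0^(1−a)/(a−1). A minimal sketch of this deterministic skeleton only (the paper's stochastic versions build on it):

```python
def blowup_time(x0, a):
    """dx/dt = x**a with a > 1: the solution escapes to infinity at finite t*."""
    assert a > 1.0
    return x0 ** (1.0 - a) / (a - 1.0)

def x_exact(t, x0, a):
    """Closed-form solution x(t) = (x0**(1-a) - (a-1) t)**(1/(1-a)),
    valid for t < blowup_time(x0, a)."""
    return (x0 ** (1.0 - a) - (a - 1.0) * t) ** (1.0 / (1.0 - a))
```

For a = 2 and x0 = 1 this reduces to x(t) = 1/(1 − t), exploding at t* = 1; as a → 1 the blow-up time diverges and ordinary exponential growth is recovered, which is the algebraic/exponential/hyperexponential trichotomy mentioned above.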
A Deterministic Transport Code for Space Environment Electrons
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamczyk, Anne M.
2010-01-01
A deterministic computational procedure has been developed to describe transport of space environment electrons in various shield media. This code is an upgrade and extension of an earlier electron code. Whereas the former code was formulated on the basis of parametric functions derived from limited laboratory data, the present code utilizes well established theoretical representations to describe the relevant interactions and transport processes. The shield material specification has been made more general, as have the pertinent cross sections. A combined mean free path and average trajectory approach has been used in the transport formalism. Comparisons with Monte Carlo calculations are presented.
The deterministic optical alignment of the HERMES spectrograph
NASA Astrophysics Data System (ADS)
Gers, Luke; Staszak, Nicholas
2014-07-01
The High Efficiency and Resolution Multi Element Spectrograph (HERMES) is a four channel, VPH-grating spectrograph fed by two 400 fiber slit assemblies whose construction and commissioning has now been completed at the Anglo Australian Telescope (AAT). The size, weight, complexity, and scheduling constraints of the system necessitated that a fully integrated, deterministic, opto-mechanical alignment system be designed into the spectrograph before it was manufactured. This paper presents the principles about which the system was assembled and aligned, including the equipment and the metrology methods employed to complete the spectrograph integration.
Deterministic Smoluchowski-Feynman ratchets driven by chaotic noise.
Chew, Lock Yue
2012-01-01
We have elucidated the effect of statistical asymmetry on the directed current in Smoluchowski-Feynman ratchets driven by chaotic noise. Based on the inhomogeneous Smoluchowski equation and its generalized version, we arrive at analytical expressions of the directed current that includes a source term. The source term indicates that statistical asymmetry can drive the system further away from thermodynamic equilibrium, as exemplified by the constant flashing, the state-dependent, and the tilted deterministic Smoluchowski-Feynman ratchets, with the consequence of an enhancement in the directed current.
Non-deterministic analysis of ocean environment loads
Fang Huacan; Xu Fayan; Gao Guohua; Xu Xingping
1995-12-31
Ocean environment loads consist of the wind force, the sea wave force, etc. The sea wave force has not only randomness but also fuzziness. Hence, a non-deterministic description of the wave environment must be used when designing an offshore structure or evaluating the safety of offshore structure members in service. To account for the randomness of sea waves, a single-parameter (wind-speed) sea wave spectrum is proposed in this paper, together with a new fuzzy grading statistical method for treating the fuzziness of the sea wave height H and period T. Finally, the principle and procedure for calculating the fuzzy random sea wave spectrum are presented.
CALTRANS: A parallel, deterministic, 3D neutronics code
Carson, L.; Ferguson, J.; Rogers, J.
1994-04-01
Our efforts to parallelize the deterministic solution of the neutron transport equation has culminated in a new neutronics code CALTRANS, which has full 3D capability. In this article, we describe the layout and algorithms of CALTRANS and present performance measurements of the code on a variety of platforms. Explicit implementation of the parallel algorithms of CALTRANS using both the function calls of the Parallel Virtual Machine software package (PVM 3.2) and the Meiko CS-2 tagged message passing library (based on the Intel NX/2 interface) are provided in appendices.
Jörnvall, H; Hempel, J; Vallee, B L; Bosron, W F; Li, T K
1984-01-01
The homodimeric Oriental beta 2 beta 2 isozyme of human liver alcohol dehydrogenase, corresponding to an allelic variant at the ADH2 gene locus, was studied in order to define the amino acid exchange in relation to the beta 1 beta 1 isozyme, the predominant allelic form among Caucasians. Sequence analysis reveals that the amino acid substitution occurs at position 7 of the largest CNBr fragment, corresponding to position 47 of the whole protein chain. Here, the beta 2 form has a histidine residue, while, in common with other characterized mammalian liver alcohol dehydrogenases, the beta 1 form has an arginine residue. This exchange does not affect the adjacent cysteine-46 residue, which is a protein ligand to the active-site zinc atom, thus clarifying previously inconsistent results. The histidine/arginine-47 mutational replacement corresponds to a position that binds the pyrophosphate group of the coenzyme NAD(H); this explains the functional differences between the beta 1 beta 1 and beta 2 beta 2 isozymes, including both a lower pH optimum and higher turnover number of beta 2 beta 2, which is likely to be the mutant form. The exchange demonstrates the existence of parallel but separate mutations in the evolution of alcohol dehydrogenases because these mammalian enzymes differ at exactly the same position by the same type of substitution as is found between a mutant and the wild-type constitutive forms of the corresponding yeast enzyme. PMID:6374651
Stochastic and deterministic causes of streamer branching in liquid dielectrics
Jadidian, Jouya; Zahn, Markus; Lavesson, Nils; Widlund, Ola; Borg, Karl
2013-08-14
Streamer branching in liquid dielectrics is driven by stochastic and deterministic factors. The presence of stochastic causes of streamer branching such as inhomogeneities inherited from noisy initial states, impurities, or charge carrier density fluctuations is inevitable in any dielectric. A fully three-dimensional streamer model presented in this paper indicates that deterministic origins of branching are intrinsic attributes of streamers, which in some cases make the branching inevitable depending on shape and velocity of the volume charge at the streamer frontier. Specifically, any given inhomogeneous perturbation can result in streamer branching if the volume charge layer at the original streamer head is relatively thin and slow enough. Furthermore, discrete nature of electrons at the leading edge of an ionization front always guarantees the existence of a non-zero inhomogeneous perturbation ahead of the streamer head propagating even in perfectly homogeneous dielectric. Based on the modeling results for streamers propagating in a liquid dielectric, a gauge on the streamer head geometry is introduced that determines whether the branching occurs under particular inhomogeneous circumstances. Estimated number, diameter, and velocity of the born branches agree qualitatively with experimental images of the streamer branching.
Forced Translocation of Polymer through Nanopore: Deterministic Model and Simulations
NASA Astrophysics Data System (ADS)
Wang, Yanqian; Panyukov, Sergey; Liao, Qi; Rubinstein, Michael
2012-02-01
We propose a new theoretical model of forced translocation of a polymer chain through a nanopore. We assume that DNA translocation at high fields proceeds too fast for the chain to relax, and thus the chain unravels loop by loop in an almost deterministic way. The distribution of translocation times of a given monomer is therefore controlled by the initial conformation of the chain (the distribution of its loops). Our model predicts the translocation time of each monomer as an explicit function of the initial polymer conformation; we refer to this concept as "fingerprinting". The width of the translocation time distribution is determined by the loop distribution in the initial conformation as well as by the thermal fluctuations of the polymer chain during the translocation process. We show that the conformational broadening of the translocation time of the m-th monomer, δt ∝ m^1.5, is stronger than the thermal broadening, δt ∝ m^1.25. The predictions of our deterministic model were verified by extensive molecular dynamics simulations.
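The two quoted scalings imply that the conformational contribution dominates for long chains, since their ratio grows as m^0.25. A trivial numerical restatement; the prefactors are set to 1 for illustration and are not from the paper:

```python
def conformational_broadening(m, a=1.0):
    """Loop-distribution (initial-conformation) broadening, ~ m^1.5."""
    return a * m ** 1.5

def thermal_broadening(m, b=1.0):
    """Thermal-fluctuation broadening, ~ m^1.25."""
    return b * m ** 1.25

# With unit prefactors, the ratio at m = 16 is 16**0.25 = 2
ratio = conformational_broadening(16) / thermal_broadening(16)
```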
On the deterministic and stochastic use of hydrologic models
NASA Astrophysics Data System (ADS)
Farmer, William H.; Vogel, Richard M.
2016-07-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
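The deterministic-versus-stochastic distinction drawn here can be demonstrated in a few lines: taking calibrated simulations at face value understates the variability of the observed responses, while resampling the calibration residuals back onto the simulations moves the simulated distribution toward the observed one. A synthetic illustration, not the paper's watershed models:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "observed" responses and a calibrated model output that is
# biased toward the mean, as calibrated models typically are
obs = rng.gamma(shape=2.0, scale=5.0, size=5000)
sim = 0.7 * obs + 3.0 + rng.normal(0.0, 1.0, size=obs.size)

resid = obs - sim  # residuals left after calibration

# Deterministic use: trust sim as-is.
# Stochastic use: reintroduce resampled residuals into the simulated responses.
sim_stoch = sim + rng.choice(resid, size=sim.size, replace=True)

var_gap_det = abs(np.var(sim) - np.var(obs))
var_gap_stoch = abs(np.var(sim_stoch) - np.var(obs))
```

Independent resampling recovers only part of the observed variance (it ignores any dependence of residuals on the simulated value), which is why the authors stress understanding the distributional behavior of the residuals rather than applying a one-size-fits-all correction.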
Non-Deterministic Dynamic Instability of Composite Shells
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Abumeri, Galib H.
2004-01-01
A computationally effective method is described to evaluate the non-deterministic dynamic instability (probabilistic dynamic buckling) of thin composite shells. The method is a judicious combination of available computer codes for finite element, composite mechanics, and probabilistic structural analysis. The solution method is incrementally updated Lagrangian. It is illustrated by applying it to a thin composite cylindrical shell subjected to dynamic loads. Both deterministic and probabilistic buckling loads are evaluated to demonstrate the effectiveness of the method. A universal plot is obtained for the specific shell that can be used to approximate buckling loads for different load rates and different probability levels. Results from this plot show that the faster the rate, the higher the buckling load and the shorter the time. The lower the probability, the lower the buckling load for a specific time. Probabilistic sensitivity results show that the ply thickness, the fiber volume ratio, the fiber longitudinal modulus, the dynamic load, and the loading rate are the dominant uncertainties, in that order.
A DETERMINISTIC METHOD FOR TRANSIENT, THREE-DIMENSIONAL NEUTRON TRANSPORT
Goluoglu, S.; Bentley, C.; Demeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H. L.
1998-01-14
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position-, energy-, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multidimensional neutronic systems.
A deterministic method for transient, three-dimensional neutron transport
Goluoglu, S.; Bentley, C.; DeMeglio, R.; Dunn, M.; Norton, K.; Pevey, R.; Suslov, I.; Dodds, H.L.
1998-05-01
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable of the neutron flux is known as the improved quasi-static (IQS) method. The position, energy, and angle-dependent neutron flux is computed deterministically by using the three-dimensional discrete ordinates code TORT. This paper briefly describes the methodology and selected results. The code developed at the University of Tennessee based on this methodology is called TDTORT. TDTORT can be used to model transients involving voided and/or strongly absorbing regions that require transport theory for accuracy. This code can also be used to model either small high-leakage systems, such as space reactors, or asymmetric control rod movements. TDTORT can model step, ramp, step followed by another step, and step followed by ramp type perturbations. It can also model columnwise rod movement. A special case of columnwise rod movement in a three-dimensional model of a boiling water reactor (BWR) with simple adiabatic feedback is also included. TDTORT is verified through several transient one-dimensional, two-dimensional, and three-dimensional benchmark problems. The results show that the transport methodology and corresponding code developed in this work have sufficient accuracy and speed for computing the dynamic behavior of complex multi-dimensional neutronic systems.
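In the IQS factorization the flux is split into a slowly varying shape (recomputed by the expensive TORT transport solves) and a rapidly varying amplitude governed by point-kinetics equations with delayed neutrons. For one delayed group, the amplitude part looks like the sketch below, with textbook-style parameter values; this is an illustration of the amplitude equations only, not TDTORT's implementation:

```python
def point_kinetics_step(n, c, rho, beta=0.0065, lam=0.08, Lambda=1e-4, dt=1e-5):
    """One explicit-Euler step of one-delayed-group point kinetics:
    dn/dt = ((rho - beta)/Lambda) n + lam c
    dc/dt = (beta/Lambda) n - lam c
    """
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    return n + dt * dn, c + dt * dc

def run(rho, steps=1000, beta=0.0065, lam=0.08, Lambda=1e-4):
    n = 1.0
    c = beta * n / (Lambda * lam)   # delayed-precursor equilibrium for rho = 0
    for _ in range(steps):
        n, c = point_kinetics_step(n, c, rho, beta, lam, Lambda)
    return n
```

At zero reactivity the equilibrium is exact and the amplitude stays at 1; a positive reactivity below prompt critical (rho < beta) produces the prompt jump followed by slow delayed growth. The IQS method exploits exactly this separation of time scales: many cheap amplitude steps per expensive shape recalculation.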
Integrability of a deterministic cellular automaton driven by stochastic boundaries
NASA Astrophysics Data System (ADS)
Prosen, Tomaž; Mejía-Monasterio, Carlos
2016-05-01
We propose an interacting many-body space–time-discrete Markov chain model, which is composed of an integrable deterministic and reversible cellular automaton (rule 54 of Bobenko et al 1993 Commun. Math. Phys. 158 127) on a finite one-dimensional lattice (ℤ₂)^{×n}, and local stochastic Markov chains at the two lattice boundaries which provide chemical baths for absorbing or emitting the solitons. Ergodicity and mixing of this many-body Markov chain is proven for generic values of bath parameters, implying the existence of a unique nonequilibrium steady state. The latter is constructed exactly and explicitly in terms of a particularly simple form of matrix product ansatz which is termed a patch ansatz. This gives rise to an explicit computation of observables and k-point correlations in the steady state as well as the construction of a nontrivial set of local conservation laws. The feasibility of an exact solution for the full spectrum and eigenvectors (decay modes) of the Markov matrix is suggested as well. We conjecture that our ideas can pave the road towards a theory of integrability of boundary driven classical deterministic lattice systems.
Deterministic direct reprogramming of somatic cells to pluripotency.
Rais, Yoach; Zviran, Asaf; Geula, Shay; Gafni, Ohad; Chomsky, Elad; Viukov, Sergey; Mansour, Abed AlFatah; Caspi, Inbal; Krupalnik, Vladislav; Zerbib, Mirie; Maza, Itay; Mor, Nofar; Baran, Dror; Weinberger, Leehee; Jaitin, Diego A; Lara-Astiaso, David; Blecher-Gonen, Ronnie; Shipony, Zohar; Mukamel, Zohar; Hagai, Tzachi; Gilad, Shlomit; Amann-Zalcenstein, Daniela; Tanay, Amos; Amit, Ido; Novershtern, Noa; Hanna, Jacob H
2013-10-01
Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells to successfully and synchronously reprogram remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, result in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency at unprecedented flexibility and resolution.
A survey of deterministic solvers for rarefied flows (Invited)
NASA Astrophysics Data System (ADS)
Mieussens, Luc
2014-12-01
Numerical simulations of rarefied gas flows are generally made with DSMC methods. Until recently, deterministic numerical methods based on a discretization of the Boltzmann equation were restricted to simple problems (1D, linearized flows, or simple geometries, for instance). In the last decade, several deterministic solvers have been developed by different teams to tackle more complex problems like 2D and 3D flows. Some of them are based on the full Boltzmann equation. Solving this equation numerically is still very challenging, and 3D solvers are still restricted to monatomic gases, even though recent work has shown it is possible to simulate simple flows of polyatomic gases. Other solvers are based on simpler BGK-like models: they allow much more intensive simulations of 3D flows in realistic geometries, but treating complex gases requires extended BGK models that are still under development. In this paper, we discuss the main features of these existing solvers, focusing on their strengths and inefficiencies. We also review some recent results that show how these solvers can be improved: higher accuracy (higher-order finite volume methods, discontinuous Galerkin approaches); lower memory and CPU costs with special velocity discretizations (adaptive grids, spectral methods); multi-scale simulations using hybrid and asymptotic-preserving schemes; and efficient implementation on high-performance computers (parallel computing, hybrid parallelization). Finally, we propose some perspectives to make these solvers more efficient and more popular.
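The BGK-type models mentioned above replace the Boltzmann collision integral with relaxation toward a local Maxwellian built from the conserved moments of the distribution. A minimal space-homogeneous sketch on a 1-D velocity grid, with invented grid and parameters:

```python
import numpy as np

v = np.linspace(-5.0, 5.0, 64)   # discrete velocity grid (illustrative)
dv = v[1] - v[0]

def maxwellian(v, rho, u, T):
    """Maxwellian with density rho, bulk velocity u, temperature T."""
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

def bgk_step(f, tau, dt):
    """One explicit BGK relaxation step: f -> f + dt/tau * (M[f] - f)."""
    rho = np.sum(f) * dv                       # conserved moments of f
    u = np.sum(f * v) * dv / rho
    T = np.sum(f * (v - u) ** 2) * dv / rho
    return f + dt / tau * (maxwellian(v, rho, u, T) - f)

# Relax a perturbed distribution toward equilibrium.
f = maxwellian(v, 1.0, 0.0, 1.0) + 0.05 * np.exp(-(v - 2.0) ** 2)
for _ in range(200):
    f = bgk_step(f, tau=0.5, dt=0.01)
```

Because the target Maxwellian is built from the moments of f itself, mass, momentum, and energy are conserved (up to quadrature error on the velocity grid), which is the property that makes BGK solvers so much cheaper than full collision-integral evaluations.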
An advanced deterministic method for spent fuel criticality safety analysis
DeHart, M.D.
1998-01-01
Over the past two decades, criticality safety analysts have come to rely to a large extent on Monte Carlo methods for criticality calculations. Monte Carlo has become popular because of its capability to model complex, non-orthogonal configurations of fissile materials, typical of real-world problems. Over the last few years, however, interest in deterministic transport methods has been revived, due to shortcomings in the stochastic nature of Monte Carlo approaches for certain types of analyses. Specifically, deterministic methods are superior to stochastic methods for calculations requiring accurate neutron density distributions or differential fluxes. Although Monte Carlo methods are well suited for eigenvalue calculations, they lack the localized detail necessary to assess uncertainties and sensitivities important in determining a range of applicability. Monte Carlo methods are also inefficient as a transport solution for multiple-pin depletion methods. Discrete ordinates methods have long been recognized as one of the most rigorous and accurate approximations used to solve the transport equation. However, until recently, geometric constraints in finite differencing schemes have made discrete ordinates methods impractical for non-orthogonal configurations such as reactor fuel assemblies. The development of an extended step characteristic (ESC) technique removes the grid structure limitations of traditional discrete ordinates methods. The NEWT computer code, a discrete ordinates code built upon the ESC formalism, is being developed as part of the SCALE code system. This paper will demonstrate the power, versatility, and applicability of NEWT as a state-of-the-art solution for current computational needs.
Strongly Deterministic Population Dynamics in Closed Microbial Communities
NASA Astrophysics Data System (ADS)
Frentz, Zak; Kuehn, Seppe; Leibler, Stanislas
2015-10-01
Biological systems are influenced by random processes at all scales, including molecular, demographic, and behavioral fluctuations, as well as by their interactions with a fluctuating environment. We previously established microbial closed ecosystems (CES) as model systems for studying the role of random events and the emergent statistical laws governing population dynamics. Here, we present long-term measurements of population dynamics using replicate digital holographic microscopes that maintain CES under precisely controlled external conditions while automatically measuring abundances of three microbial species via single-cell imaging. With this system, we measure spatiotemporal population dynamics in more than 60 replicate CES over periods of months. In contrast to previous studies, we observe strongly deterministic population dynamics in replicate systems. Furthermore, we show that previously discovered statistical structure in abundance fluctuations across replicate CES is driven by variation in external conditions, such as illumination. In particular, we confirm the existence of stable ecomodes governing the correlations in population abundances of three species. The observation of strongly deterministic dynamics, together with stable structure of correlations in response to external perturbations, points towards a possibility of simple macroscopic laws governing microbial systems despite numerous stochastic events present on microscopic levels.
Shock-induced explosive chemistry in a deterministic sample configuration.
Stuecker, John Nicholas; Castaneda, Jaime N.; Cesarano, Joseph, III; Trott, Wayne Merle; Baer, Melvin R.; Tappan, Alexander Smith
2005-10-01
Explosive initiation and energy release have been studied in two sample geometries designed to minimize stochastic behavior in shock-loading experiments. These sample concepts include a design with explosive material occupying the hole locations of a close-packed bed of inert spheres and a design that utilizes infiltration of a liquid explosive into a well-defined inert matrix. Wave profiles transmitted by these samples in gas-gun impact experiments have been characterized by both velocity interferometry diagnostics and three-dimensional numerical simulations. Highly organized wave structures associated with the characteristic length scales of the deterministic samples have been observed. Initiation and reaction growth in an inert matrix filled with sensitized nitromethane (a homogeneous explosive material) result in wave profiles similar to those observed with heterogeneous explosives. Comparison of experimental and numerical results indicates that energetic material studies in deterministic sample geometries can provide an important new tool for validation of models of energy release in numerical simulations of explosive initiation and performance.
Deterministic Stress Modeling of Hot Gas Segregation in a Turbine
NASA Technical Reports Server (NTRS)
Busby, Judy; Sondak, Doug; Staubach, Brent; Davis, Roger
1998-01-01
Simulation of unsteady viscous turbomachinery flowfields is presently impractical as a design tool due to the long run times required. Designers rely predominantly on steady-state simulations, but these simulations do not account for some of the important unsteady flow physics. Unsteady flow effects can be modeled as source terms in the steady flow equations. These source terms, referred to as Lumped Deterministic Stresses (LDS), can be used to drive steady flow solution procedures to reproduce the time-average of an unsteady flow solution. The goal of this work is to investigate the feasibility of using inviscid lumped deterministic stresses to model unsteady combustion hot streak migration effects on the turbine blade tip and outer air seal heat loads using a steady computational approach. The LDS model is obtained from an unsteady inviscid calculation. The LDS model is then used with a steady viscous computation to simulate the time-averaged viscous solution. Both two-dimensional and three-dimensional applications are examined. The inviscid LDS model produces good results for the two-dimensional case and requires less than 10% of the CPU time of the unsteady viscous run. For the three-dimensional case, the LDS model does a good job of reproducing the time-averaged viscous temperature migration and separation as well as heat load on the outer air seal at a CPU cost that is 25% of that of an unsteady viscous computation.
On the deterministic and stochastic use of hydrologic models
Farmer, William H.; Vogel, Richard M.
2016-01-01
Environmental simulation models, such as precipitation-runoff watershed models, are increasingly used in a deterministic manner for environmental and water resources design, planning, and management. In operational hydrology, simulated responses are now routinely used to plan, design, and manage a very wide class of water resource systems. However, all such models are calibrated to existing data sets and retain some residual error. This residual, typically unknown in practice, is often ignored, implicitly trusting simulated responses as if they are deterministic quantities. In general, ignoring the residuals will result in simulated responses with distributional properties that do not mimic those of the observed responses. This discrepancy has major implications for the operational use of environmental simulation models as is shown here. Both a simple linear model and a distributed-parameter precipitation-runoff model are used to document the expected bias in the distributional properties of simulated responses when the residuals are ignored. The systematic reintroduction of residuals into simulated responses in a manner that produces stochastic output is shown to improve the distributional properties of the simulated responses. Every effort should be made to understand the distributional behavior of simulation residuals and to use environmental simulation models in a stochastic manner.
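The residual-reintroduction idea above can be sketched as follows; the "observations", the deterministic model output, and the log-space error structure are invented stand-ins, not the paper's watershed models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "observed" flows and a biased deterministic model output.
obs = rng.lognormal(mean=1.0, sigma=0.5, size=200)
sim = np.exp(0.8 * np.log(obs) + 0.2)        # stand-in calibrated model
resid = np.log(obs) - np.log(sim)            # log-space calibration residuals

def stochastic_output(sim, resid, n_reps=500, rng=rng):
    """Resample residuals with replacement and add them back in log space,
    producing an ensemble whose spread reflects residual model error."""
    draws = rng.choice(resid, size=(n_reps, sim.size), replace=True)
    return np.exp(np.log(sim)[None, :] + draws)

ens = stochastic_output(sim, resid)
```

The ensemble's distributional spread exceeds that of the raw deterministic output, illustrating the paper's point that ignoring residuals yields simulated responses that understate observed variability.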
Predictability of normal heart rhythms and deterministic chaos
NASA Astrophysics Data System (ADS)
Lefebvre, J. H.; Goodings, D. A.; Kamath, M. V.; Fallen, E. L.
1993-04-01
The evidence for deterministic chaos in normal heart rhythms is examined. Electrocardiograms were recorded of 29 subjects falling into four groups—a young healthy group, an older healthy group, and two groups of patients who had recently suffered an acute myocardial infarction. From the measured R-R intervals, a time series of 1000 first differences was constructed for each subject. The correlation integral of Grassberger and Procaccia was calculated for several subjects using these relatively short time series. No evidence was found for the existence of an attractor having a dimension less than about 4. However, a prediction method recently proposed by Sugihara and May and an autoregressive linear predictor both show that there is a measure of short-term predictability in the differenced R-R intervals. Further analysis revealed that the short-term predictability calculated by the Sugihara-May method is not consistent with the null hypothesis of a Gaussian random process. The evidence for a small amount of nonlinear dynamical behavior together with the short-term predictability suggest that there is an element of deterministic chaos in normal heart rhythms, although it is not strong or persistent. Finally, two useful parameters of the predictability curves are identified, namely, the `first step predictability' and the `predictability decay rate,' neither of which appears to be significantly correlated with the standard deviation of the R-R intervals.
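The Grassberger–Procaccia correlation integral used in the analysis counts the fraction of delay-embedded point pairs lying within a distance r of each other; its scaling with r estimates the attractor dimension. A minimal sketch on a synthetic series (not real R-R interval data):

```python
import numpy as np

def correlation_integral(x, dim, r, tau=1):
    """Fraction of pairs of delay-embedded points closer than r.

    x: scalar time series; dim: embedding dimension; tau: delay in samples.
    """
    n = len(x) - (dim - 1) * tau
    # Delay embedding: rows are (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)         # distinct pairs only
    return np.mean(d[iu] < r)

rng = np.random.default_rng(1)
series = rng.normal(size=500)            # synthetic stand-in for R-R data
c = correlation_integral(series, dim=3, r=1.0)
```

For a low-dimensional attractor, log C(r) versus log r has a slope that saturates as the embedding dimension grows; the study found no such saturation below dimension about 4.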
Deterministic doping and the exploration of spin qubits
Schenkel, T.; Weis, C. D.; Persaud, A.; Lo, C. C.; Chakarov, I.; Schneider, D. H.; Bokor, J.
2015-01-09
Deterministic doping by single ion implantation, the precise placement of individual dopant atoms into devices, is a path for the realization of quantum computer test structures where quantum bits (qubits) are based on electron and nuclear spins of donors or color centers. We present a donor - quantum dot type qubit architecture and discuss the use of medium and highly charged ions extracted from an Electron Beam Ion Trap/Source (EBIT/S) for deterministic doping. EBIT/S are attractive for the formation of qubit test structures due to the relatively low emittance of ion beams from an EBIT/S and due to the potential energy associated with the ions' charge state, which can aid single ion impact detection. Following ion implantation, dopant specific diffusion mechanisms during device processing affect the placement accuracy and coherence properties of donor spin qubits. For bismuth, range straggling is minimal but its relatively low solubility in silicon limits thermal budgets for the formation of qubit test structures.
Baldocchi, Dennis
2015-03-24
Eddy covariance fluxes of carbon dioxide, water vapor and heat were measured continuously at an oak savanna and an annual grassland in California over a 4-year period. These systems serve as representative sites for biomes in Mediterranean climates and experience much seasonal and inter-annual variability in temperature and precipitation. These sites hence serve as natural laboratories for how whole ecosystems will respond to warmer and drier conditions. The savanna proved to be a moderate sink of carbon, taking up about 150 g C m-2 y-1, whereas the annual grassland tended to be carbon neutral and was often a source during drier years. But this carbon sink by the savanna came at a cost: this ecosystem used about 100 mm more water per year than the grassland, and because the savanna was darker and aerodynamically rougher, its air temperature was about 0.5 °C warmer. In addition to our flux measurements, we collected vast amounts of ancillary data to interpret the sites and fluxes, making this a key site for model validation and parameterization. Datasets consist of terrestrial and airborne lidar for determining canopy structure, ground-penetrating radar data on root distribution, phenology cameras monitoring leaf area index and its seasonality, predawn water potential, soil moisture, stem diameter, and physiological capacity of photosynthesis.
Fox, T.H. III; Richey, T. Jr.; Winders, G.R.
1962-10-23
A heat exchanger is designed for use in the transfer of heat between a radioactive fluid and a non-radioactive fluid. The exchanger employs a removable section containing the non-hazardous fluid extending into the section designed to contain the radioactive fluid. The removable section is provided with a construction to cancel out thermal stresses. The stationary section is pressurized to prevent leakage of the radioactive fluid and to maintain a safe, desirable level for this fluid. (AEC)
Groskinsky Link, B. L.; Cary, L.E.
1988-01-01
Stations were selected to monitor water discharge and water quality of streams in eastern Montana. This report describes the stations and indicates the availability of hydrologic data through 1985. Included are stations that are operated by organizations that do not belong to the National Water Data Exchange (NAWDEX) program operated by the U.S. Geological Survey. Each station description contains a narration of the station's history, including location, drainage area, elevation, operator, period of record, type of equipment and instruments used at the station, and data availability. The data collected at each station have been identified according to type: water discharge, chemical quality, and suspended sediment. Descriptions are provided for 113 stations. These data have potential uses in characterizing small hydrologic basins, as well as other applications. A map of eastern Montana shows the location of the stations selected. (USGS)
Central-site monitors do not account for factors such as outdoor-to-indoor transport and human activity patterns that influence personal exposures to ambient fine-particulate matter (PM_{2.5}). We describe and compare different ambient PM_{2.5} exposure estimation...
Hollinger, David Y.; Davidson, Eric A.; Richardson, Andrew D.; Dail, D. B.; Scott, N.
2013-03-25
Summary of research carried out under Interagency Agreement DE-AI02-07ER64355 with the USDA Forest Service at the Howland Forest AmeriFlux site in central Maine. Includes a list of publications resulting in part or whole from this support.
Capillary-mediated interface perturbations: Deterministic pattern formation
NASA Astrophysics Data System (ADS)
Glicksman, Martin E.
2016-09-01
Leibniz-Reynolds analysis identifies a 4th-order capillary-mediated energy field that is responsible for shape changes observed during melting, and for interface speed perturbations during crystal growth. Field-theoretic principles also show that capillary-mediated energy distributions cancel over large length scales, but modulate the interface shape on smaller mesoscopic scales. Speed perturbations reverse direction at specific locations where they initiate inflection and branching on unstable interfaces, thereby enhancing pattern complexity. Simulations of pattern formation by several independent groups of investigators using a variety of numerical techniques confirm that shape changes during both melting and growth initiate at locations predicted from interface field theory. Finally, limit cycles occur as an interface and its capillary energy field co-evolve, leading to synchronized branching. Synchronous perturbations produce classical dendritic structures, whereas asynchronous perturbations observed in isotropic and weakly anisotropic systems lead to chaotic-looking patterns that remain nevertheless deterministic.
Deterministic Impulsive Vacuum Foundations for Quantum-Mechanical Wavefunctions
NASA Astrophysics Data System (ADS)
Valentine, John S.
2013-09-01
By assuming that a fermion de-constitutes immediately at source, that its constituents, as bosons, propagate uniformly as scalar vacuum terms with phase (radial) symmetry, and that fermions are unique solutions for specific phase conditions, we find a model that self-quantizes matter from continuous waves, unifying boson and fermion ontologies in a single basis, in a constitution-invariant process. Vacuum energy has a wavefunction context, as a mass-energy term that enables wave collapse and increases its amplitude, with the gravitational field as the gradient of the flux density. Gravitational and charge-based force effects emerge as statistics without special treatment. Confinement, entanglement, vacuum statistics, forces, and wavefunction terms emerge from the model's deterministic foundations.
Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations
Leininger, L D
2004-10-26
This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on the massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability of damage curves are computed and represented that account for uncertainty within the sample and enable the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
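The bounding-sample idea above can be sketched as an empirical exceedance curve over the deterministic runs; the damage metric and its values below are invented for illustration:

```python
import numpy as np

# Hypothetical damage metric (e.g., peak structural strain) from a handful
# of bounding deterministic runs spanning the intelligence uncertainty.
peak_strain = np.array([0.8, 1.1, 1.4, 1.9, 2.3, 3.0])

def p_damage(threshold, sample):
    """Empirical probability that the damage metric meets or exceeds
    a given threshold, treating the bounding runs as a statistical sample."""
    return np.mean(sample >= threshold)

# Probability-of-damage curve over a range of damage thresholds.
curve = [(t, p_damage(t, peak_strain)) for t in (1.0, 2.0, 3.0)]
```

The resulting curve is monotone non-increasing in the threshold, giving the planner a bounded-uncertainty damage probability rather than a single deterministic prediction.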
Robust Audio Watermarking Scheme Based on Deterministic Plus Stochastic Model
NASA Astrophysics Data System (ADS)
Dhar, Pranab Kumar; Kim, Cheol Hong; Kim, Jong-Myon
Digital watermarking has been widely used for protecting digital contents from unauthorized duplication. This paper proposes a new watermarking scheme based on spectral modeling synthesis (SMS) for copyright protection of digital contents. SMS defines a sound as a combination of deterministic events plus a stochastic component that makes it possible for a synthesized sound to attain all of the perceptual characteristics of the original sound. In our proposed scheme, watermarks are embedded into the highest prominent peak of the magnitude spectrum of each non-overlapping frame in peak trajectories. Simulation results indicate that the proposed watermarking scheme is highly robust against various kinds of attacks such as noise addition, cropping, re-sampling, re-quantization, and MP3 compression and achieves similarity values ranging from 17 to 22. In addition, our proposed scheme achieves signal-to-noise ratio (SNR) values ranging from 29 dB to 30 dB.
More on exact state reconstruction in deterministic digital control systems
NASA Technical Reports Server (NTRS)
Polites, Michael E.
1988-01-01
Presented is a special form of the Ideal State Reconstructor for deterministic digital control systems which is simpler to implement than the most general form. The Ideal State Reconstructor is so named because, if the plant parameters are known exactly, its output will exactly equal, not just approximate, the true state of the plant and accomplish this without any knowledge of the plant's initial state. Besides this, it adds no new states or eigenvalues to the system. Nor does it affect the plant equation for the system in any way; it affects the measurement equation only. It is characterized by the fact that discrete measurements are generated every T/N seconds and input into a multi-input/multi-output moving-average (MA) process. The output of this process is sampled every T seconds and utilized in reconstructing the state of the system.
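The reconstruction principle, stacking N sub-interval measurements and inverting the resulting observability relation, can be sketched as follows; the plant and measurement matrices are invented examples, not those of the paper:

```python
import numpy as np

# Hypothetical discrete plant over one T/N sub-interval and a
# position-only measurement matrix.
A_s = np.array([[1.0, 0.1],
                [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
N = 4                                    # measurements per period T

x0 = np.array([2.0, -1.0])               # true state (unknown to the observer)

# Measurements y_i = C A_s^i x0 taken every T/N seconds.
ys = [C @ np.linalg.matrix_power(A_s, i) @ x0 for i in range(N)]

# Stacked observability relation O x0 = y; full column rank gives the
# exact state, not an asymptotic estimate, with no added dynamics.
O = np.vstack([C @ np.linalg.matrix_power(A_s, i) for i in range(N)])
x_hat, *_ = np.linalg.lstsq(O, np.concatenate(ys), rcond=None)
```

With exact plant parameters the recovered state equals the true state regardless of the initial condition, mirroring the Ideal State Reconstructor's defining property.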
Additivity principle in high-dimensional deterministic systems.
Saito, Keiji; Dhar, Abhishek
2011-12-16
The additivity principle (AP), conjectured by Bodineau and Derrida [Phys. Rev. Lett. 92, 180601 (2004)], is discussed for the case of heat conduction in three-dimensional disordered harmonic lattices to consider the effects of deterministic dynamics, higher dimensionality, and different transport regimes, i.e., ballistic, diffusive, and anomalous transport. The cumulant generating function (CGF) for heat transfer is accurately calculated and compared with the one given by the AP. In the diffusive regime, we find a clear agreement with the conjecture even if the system is high dimensional. Surprisingly, even in the anomalous regime the CGF is also well fitted by the AP. Lower-dimensional systems are also studied and the importance of three dimensionality for the validity is stressed. PMID:22243060
Connection between stochastic and deterministic modelling of microbial growth.
Kutalik, Zoltán; Razaz, Moe; Baranyi, József
2005-01-21
We present in this paper various links between individual and population cell growth. Deterministic models of the lag and subsequent growth of a bacterial population and their connection with stochastic models for the lag and subsequent generation times of individual cells are analysed. We derived the individual lag time distribution inherent in population growth models, which shows that the Baranyi model allows a wide range of shapes for individual lag time distribution. We demonstrate that individual cell lag time distributions cannot be retrieved from population growth data. We also present the results of our investigation on the effect of the mean and variance of the individual lag time and the initial cell number on the mean and variance of the population lag time. These relationships are analysed theoretically, and their consequence for predictive microbiology research is discussed.
Location deterministic biosensing from quantum-dot-nanowire assemblies.
Liu, Chao; Kim, Kwanoh; Fan, D L
2014-08-25
Semiconductor quantum dots (QDs) with high fluorescent brightness, stability, and tunable sizes, have received considerable interest for imaging, sensing, and delivery of biomolecules. In this research, we demonstrate location deterministic biochemical detection from arrays of QD-nanowire hybrid assemblies. QDs with diameters less than 10 nm are manipulated and precisely positioned on the tips of the assembled Gold (Au) nanowires. The manipulation mechanisms are quantitatively understood as the synergetic effects of dielectrophoretic (DEP) and alternating current electroosmosis (ACEO) due to AC electric fields. The QD-nanowire hybrid sensors operate uniquely by concentrating bioanalytes to QDs on the tips of nanowires before detection, offering much enhanced efficiency and sensitivity, in addition to the position-predictable rationality. This research could result in advances in QD-based biomedical detection and inspires an innovative approach for fabricating various QD-based nanodevices. PMID:25316926
Deterministic secure communications using two-mode squeezed states
Marino, Alberto M.; Stroud, C. R. Jr.
2006-08-15
We propose a scheme for quantum cryptography that uses the squeezing phase of a two-mode squeezed state to transmit information securely between two parties. The basic principle behind this scheme is the fact that each mode of the squeezed field by itself does not contain any information regarding the squeezing phase. The squeezing phase can only be obtained through a joint measurement of the two modes. This, combined with the fact that it is possible to perform remote squeezing measurements, makes it possible to implement a secure quantum communication scheme in which a deterministic signal can be transmitted directly between two parties while the encryption is done automatically by the quantum correlations present in the two-mode squeezed state.
Fast deterministic ptychographic imaging using X-rays.
Yan, Ada W C; D'Alfonso, Adrian J; Morgan, Andrew J; Putkunz, Corey T; Allen, Leslie J
2014-08-01
We present a deterministic approach to the ptychographic retrieval of the wave at the exit surface of a specimen of condensed matter illuminated by X-rays. The method is based on the solution of an overdetermined set of linear equations, and is robust to measurement noise. The set of linear equations is efficiently solved using the conjugate gradient least-squares method implemented using fast Fourier transforms. The method is demonstrated using a data set obtained from a gold-chromium nanostructured test object. It is shown that the transmission function retrieved by this linear method is quantitatively comparable with established methods of ptychography, with a large decrease in computational time, and is thus a good candidate for real-time reconstruction.
Reinforcement learning output feedback NN control using deterministic learning technique.
Xu, Bin; Yang, Chenguang; Shi, Zhongke
2014-03-01
In this brief, a novel adaptive-critic-based neural network (NN) controller is investigated for nonlinear pure-feedback systems. The controller design is based on the transformed predictor form, and the actor-critic NN control architecture includes two NNs, whereas the critic NN is used to approximate the strategic utility function, and the action NN is employed to minimize both the strategic utility function and the tracking error. A deterministic learning technique has been employed to guarantee that the partial persistent excitation condition of internal states is satisfied during tracking control to a periodic reference orbit. The uniformly ultimate boundedness of closed-loop signals is shown via Lyapunov stability analysis. Simulation results are presented to demonstrate the effectiveness of the proposed control. PMID:24807456
YALINA analytical benchmark analyses using the deterministic ERANOS code system.
Gohar, Y.; Aliberti, G.; Nuclear Engineering Division
2009-08-31
The growing stockpile of nuclear waste constitutes a severe challenge for mankind that will persist for more than a hundred thousand years. To reduce the radiotoxicity of the nuclear waste, the Accelerator Driven System (ADS) has been proposed. One of the most important issues of ADS technology is the choice of the appropriate neutron spectrum for the transmutation of Minor Actinides (MA) and Long Lived Fission Products (LLFP). This report presents the analytical analyses obtained with the deterministic ERANOS code system for the YALINA facility within: (a) the collaboration between Argonne National Laboratory (ANL) of the USA and the Joint Institute for Power and Nuclear Research (JIPNR) Sosny of Belarus; and (b) the IAEA coordinated research projects for accelerator driven systems (ADS). This activity is conducted as a part of the Russian Research Reactor Fuel Return (RRRFR) Program and the Global Threat Reduction Initiative (GTRI) of DOE/NNSA.
Sensitivity analysis in a Lassa fever deterministic mathematical model
NASA Astrophysics Data System (ADS)
Abdullahi, Mohammed Baba; Doko, Umar Chado; Mamuda, Mamman
2015-05-01
Lassa virus, the cause of Lassa fever, is on the list of potential bio-weapons agents. It was recently imported into Germany, the Netherlands, the United Kingdom and the United States as a consequence of the rapid growth of international traffic. A model with five mutually exclusive compartments related to Lassa fever is presented and the basic reproduction number analyzed. A sensitivity analysis of the deterministic model is performed in order to determine the relative importance of the model parameters to disease transmission. The result of the sensitivity analysis shows that the most sensitive parameter is human immigration, followed by the human recovery rate, and then person-to-person contact. This suggests that control strategies should target human immigration, effective drugs for treatment, and education to reduce person-to-person contact.
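Parameter rankings of this kind are usually obtained from normalized forward sensitivity indices of the basic reproduction number. The sketch below computes them by central finite differences for a hypothetical SIR-type R0, not the paper's five-compartment Lassa model; all parameter names and values are invented for illustration.

```python
def r0(beta, gamma, mu):
    # Hypothetical basic reproduction number of an SIR-type model; the
    # paper's five-compartment Lassa model has a more involved expression.
    return beta / (gamma + mu)

def sensitivity_index(f, params, name, h=1e-6):
    """Normalized forward sensitivity index (p / f) * df/dp, estimated
    by a central finite difference with step h."""
    p = params[name]
    up = dict(params, **{name: p + h})
    dn = dict(params, **{name: p - h})
    df = (f(**up) - f(**dn)) / (2.0 * h)
    return df * p / f(**params)

params = {"beta": 0.4, "gamma": 0.1, "mu": 0.02}
for name in params:
    print(name, round(sensitivity_index(r0, params, name), 3))
```

An index of +1 means a 1% increase in the parameter raises R0 by 1%; negative indices (here for gamma and mu) identify parameters whose increase suppresses transmission.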
Validation of a Deterministic Vibroacoustic Response Prediction Model
NASA Technical Reports Server (NTRS)
Caimi, Raoul E.; Margasahayam, Ravi
1997-01-01
This report documents the recently completed effort involving validation of a deterministic theory for the random vibration problem of predicting the response of launch pad structures in the low-frequency range (0 to 50 hertz). Use of the Statistical Energy Analysis (SEA) methods is not suitable in this range. Measurements of launch-induced acoustic loads and subsequent structural response were made on a cantilever beam structure placed in close proximity (200 feet) to the launch pad. Innovative ways of characterizing random, nonstationary, non-Gaussian acoustics are used for the development of a structure's excitation model. Extremely good correlation was obtained between analytically computed responses and those measured on the cantilever beam. Additional tests are recommended to bound the problem to account for variations in launch trajectory and inclination.
Deterministic spin-wave interferometer based on the Rydberg blockade
Wei Ran; Deng Youjin; Pan Jianwei; Zhao Bo; Chen Yuao
2011-06-15
The spin-wave (SW) N-particle path-entangled |N,0>+|0,N> (NOON) state is an N-particle Fock state with two atomic spin-wave modes maximally entangled. Attributed to the property that the phase is sensitive to collective atomic motion, the SW NOON state can be utilized as an atomic interferometer and has promising application in quantum enhanced measurement. In this paper we propose an efficient protocol to deterministically produce the atomic SW NOON state by employing the Rydberg blockade. Possible errors in practical manipulations are analyzed. A feasible experimental scheme is suggested. Our scheme is far more efficient than the recent experimentally demonstrated one, which only creates a heralded second-order SW NOON state.
Scattering of electromagnetic light waves from a deterministic anisotropic medium
NASA Astrophysics Data System (ADS)
Li, Jia; Chang, Liping; Wu, Pinghui
2015-11-01
Based on the weak scattering theory of electromagnetic waves, analytical expressions are derived for the spectral densities and degrees of polarization of an electromagnetic plane wave scattered from a deterministic anisotropic medium. It is shown that the normalized spectral densities of the scattered field depend strongly on the scattering angle and on the degrees of polarization of the incident plane waves; the degrees of polarization of the scattered field are likewise subject to variations of these parameters. In addition, the anisotropic effective radii of the dielectric susceptibility exert a substantial influence on both the spectral densities and the degrees of polarization of the scattered field. The obtained results may be applicable to determining the anisotropic parameters of a medium by quantitatively measuring the statistics of its far-zone scattered field.
A deterministic global approach for mixed-discrete structural optimization
NASA Astrophysics Data System (ADS)
Lin, Ming-Hua; Tsai, Jung-Fa
2014-07-01
This study proposes a novel approach for finding the exact global optimum of a mixed-discrete structural optimization problem. Although many approaches have been developed to solve the mixed-discrete structural optimization problem, they cannot guarantee finding a global solution or they adopt too many extra binary variables and constraints in reformulating the problem. The proposed deterministic method uses convexification strategies and linearization techniques to convert a structural optimization problem into a convex mixed-integer nonlinear programming problem solvable to obtain a global optimum. To enhance the computational efficiency in treating complicated problems, the range reduction technique is also applied to tighten variable bounds. Several numerical experiments drawn from practical structural design problems are presented to demonstrate the effectiveness of the proposed method.
Classification and unification of the microscopic deterministic traffic models
NASA Astrophysics Data System (ADS)
Yang, Bo; Monterola, Christopher
2015-10-01
We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
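The relaxation of a platoon toward a uniform-flow ground state can be sketched with a minimal Bando-type optimal velocity model on a ring road. The OV function form and every parameter value below are illustrative placeholders, not quantities calibrated in the paper.

```python
import numpy as np

def ov_function(h, v0=30.0, hc=25.0, w=10.0):
    # Bando-type optimal velocity function of headway h; all parameter
    # values here are illustrative, not taken from the paper.
    return v0 * (np.tanh((h - hc) / w) + np.tanh(hc / w)) / (1 + np.tanh(hc / w))

def step(x, v, a=1.0, dt=0.05, road=1000.0):
    # Headway to the car ahead on a ring road of circumference `road`.
    h = np.roll(x, -1) - x
    h[-1] += road
    dv = a * (ov_function(h) - v)      # relax toward the optimal velocity
    return x + v * dt, v + dv * dt

n, road = 20, 1000.0
x = np.linspace(0.0, road, n, endpoint=False)   # equally spaced platoon
v = np.zeros(n)
for _ in range(4000):
    x, v = step(x, v, road=road)
print(v[0])
```

Starting from equal spacing and zero speed, every car converges to the same speed, ov_function(road / n): the uniform-flow ground state about which the master-model expansion is taken.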
Deterministic nonclassicality for quantum-mechanical oscillators in thermal states
NASA Astrophysics Data System (ADS)
Marek, Petr; Lachman, Lukáš; Slodička, Lukáš; Filip, Radim
2016-07-01
Quantum nonclassicality is the basic building block for the vast majority of quantum information applications, and methods for its generation are at the forefront of research. One of the obstacles any method needs to clear is the looming presence of decoherence and noise, which act against nonclassicality and often erase it completely. In this paper we show that nonclassical states of a quantum harmonic oscillator initially in a thermal equilibrium state can be deterministically created by coupling it to a single two-level system. This can be achieved even in the absorption regime, in which the two-level system is initially in the ground state. The method is resilient to noise and may actually benefit from it, as witnessed by systems with higher thermal energy producing more nonclassical states.
A Deterministic Computational Procedure for Space Environment Electron Transport
NASA Technical Reports Server (NTRS)
Nealy, John E.; Chang, C. K.; Norman, Ryan B.; Blattnig, Steve R.; Badavi, Francis F.; Adamcyk, Anne M.
2010-01-01
A deterministic computational procedure for describing the transport of electrons in condensed media is formulated to simulate the effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The primary purpose for developing the procedure is to provide a means of rapidly performing the numerous repetitive transport calculations essential for electron radiation exposure assessments of complex space structures. The present code utilizes well-established theoretical representations to describe the relevant interactions and transport processes. A combined mean-free-path and average-trajectory approach is used in the transport formalism. For typical space environment spectra, several favorable comparisons with Monte Carlo calculations are made, indicating that accuracy is not sacrificed for computational speed.
Deterministic simulation of thermal neutron radiography and tomography
NASA Astrophysics Data System (ADS)
Pal Chowdhury, Rajarshi; Liu, Xin
2016-05-01
In recent years, thermal neutron radiography and tomography have gained much attention as nondestructive testing methods. However, their application is hindered by technical complexity, radiation shielding, and time-consuming data collection processes. Monte Carlo simulations have been developed in the past to improve the capabilities of neutron imaging facilities. In this paper, a new deterministic simulation approach is proposed and demonstrated that numerically simulates neutron radiographs using a ray tracing algorithm. This approach makes the simulation of neutron radiographs much faster than the previously used stochastic (i.e., Monte Carlo) methods. The major difficulty in simulating neutron radiography and tomography is finding a suitable scatter model. In this paper, an analytic scatter model is proposed and validated against a Monte Carlo simulation.
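The attenuation-only core of such a ray tracing simulation reduces to Beer-Lambert line integrals through the object. The minimal parallel-beam sketch below uses an invented toy object and deliberately omits any scatter model, which is the part the paper adds analytically.

```python
import numpy as np

def radiograph(sigma_t, dx=0.1):
    """Parallel-beam transmission image of a 2-D object by ray tracing.

    sigma_t : macroscopic total cross section on a grid (cm^-1); rays run
    along axis 1, and scattering is neglected (the paper layers an
    analytic scatter model on top of this attenuation-only picture).
    """
    optical_path = sigma_t.sum(axis=1) * dx   # line integral along each ray
    return np.exp(-optical_path)              # Beer-Lambert attenuation

# Toy object: weakly absorbing slab with a strongly absorbing insert.
obj = np.full((5, 50), 0.1)
obj[2, 20:30] = 2.0
img = radiograph(obj)
print(img.round(3))
```

The detector row behind the absorbing insert shows a much lower transmission than the background rows, which is the contrast a radiograph records.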
NASA Astrophysics Data System (ADS)
Thaker, T. P.; Rathod, Ganesh W.; Rao, K. S.; Gupta, K. K.
2012-01-01
Surat, the financial capital of Gujarat, India, is a mega city with a population exceeding five million. The city falls under Zone III of the Seismic Zoning Map of India. After the devastating 2001 Bhuj earthquake of Mw 7.7, much attention has been paid to seismic microzonation activity in the state of Gujarat. In this work, an attempt has been made to evaluate the seismic hazard for Surat City (21.17° N, 72.83° E) using both probabilistic and deterministic seismic hazard analysis. After compiling a catalogue of historical earthquakes within a 350 km radius of the city and analyzing the database statistically, a deterministic analysis was carried out considering known tectonic sources, and a recurrence relationship for the control region was derived. Probabilistic seismic hazard analyses were then carried out for the Surat region considering five seismotectonic sources selected from the deterministic approach. The final results of the present investigation are presented in the form of peak ground acceleration and response spectra at bedrock level, taking the local site conditions into account. Rock-level Peak Ground Acceleration (PGA) and spectral acceleration values at 0.01 s and 1.0 s corresponding to 10% and 2% probability of exceedance in 50 years have been calculated. Furthermore, Uniform Hazard Response Spectra (UHRS) at rock level for 5% damping, and for 10% and 2% probability of exceedance in 50 years, were developed for the city considering all site classes. These results can be used directly by engineers as basic inputs in the earthquake-resistant design of structures in and around the city.
Deterministic Earthquake Hazard Assessment by Public Agencies in California
NASA Astrophysics Data System (ADS)
Mualchin, L.
2005-12-01
Even in its short recorded history, California has experienced a number of damaging earthquakes that have resulted in new codes and other legislation for public safety. In particular, the 1971 San Fernando earthquake produced some of the most lasting results, such as the Hospital Safety Act, the Strong Motion Instrumentation Program, the Alquist-Priolo Special Studies Zone Act, and the California Department of Transportation's (Caltrans) fault-based deterministic seismic hazard (DSH) map. The latter product provides values for earthquake ground motions based on Maximum Credible Earthquakes (MCEs), defined as the largest earthquakes that can reasonably be expected on faults in the current tectonic regime. For surface fault rupture displacement hazards, detailed study of the same faults applies. Originally, hospitals, dams, and other critical facilities used seismic design criteria based on deterministic seismic hazard analyses (DSHA). However, probabilistic methods grew and took hold by introducing earthquake design criteria based on time factors and by quantifying "uncertainties" through procedures such as logic trees. These probabilistic seismic hazard analyses (PSHA) ignored the DSH approach, and some agencies were influenced to adopt only the PSHA method. However, deficiencies in the PSHA method are becoming recognized, and its use is now the focus of strong debate. Caltrans is in the process of producing the fourth edition of its DSH map. Caltrans prefers the DSH method because it believes it is more realistic than the probabilistic method for assessing earthquake hazards that may affect critical facilities, and is the best available method for ensuring public safety. Its time-invariant values help to produce robust design criteria that are soundly based on physical evidence, and it is the method that leaves the least opportunity for unwelcome surprises.
Denysenko, Dmytro; Jelic, Jelena; Reuter, Karsten; Volkmer, Dirk
2015-05-26
The isomorphous partial substitution of Zn(2+) ions in the secondary building unit (SBU) of MFU-4l leads to frameworks with the general formula [M(x)Zn(5-x)Cl4(BTDD)3], in which x≈2, M = Mn(II), Fe(II), Co(II), Ni(II), or Cu(II), and BTDD = bis(1,2,3-triazolato-[4,5-b],[4',5'-i])dibenzo-[1,4]-dioxin. Subsequent exchange of chloride ligands by nitrite, nitrate, triflate, azide, isocyanate, formate, acetate, or fluoride leads to a variety of MFU-4l derivatives, which have been characterized by using XRPD, EDX, IR, UV/Vis-NIR, TGA, and gas sorption measurements. Several MFU-4l derivatives show high catalytic activity in a liquid-phase oxidation of ethylbenzene to acetophenone with air under mild conditions, among which Co- and Cu derivatives with chloride side-ligands are the most active catalysts. Upon thermal treatment, several side-ligands can be transformed selectively into reactive intermediates without destroying the framework. Thus, at 300 °C, Co(II)-azide units in the SBU of Co-MFU-4l are converted into Co(II)-isocyanate under continuous CO gas flow, involving the formation of a nitrene intermediate. The reaction of Cu(II)-fluoride units with H2 at 240 °C leads to Cu(I) and proceeds through the heterolytic cleavage of the H2 molecule.
Drury, C.R.
1988-02-02
A heat exchanger having primary and secondary conduits in a heat-exchanging relationship is described, comprising: at least one serpentine tube having parallel sections connected by reverse bends, the serpentine tube constituting one of the conduits; a group of open-ended tubes disposed adjacent to the parallel sections, the open-ended tubes constituting the other of the conduits and forming a continuous mass of contacting tubes extending between and surrounding the serpentine tube sections; and means securing the mass of tubes together to form a predetermined cross-section of the entirety of the mass of open-ended tubes and tube sections.
NASA Astrophysics Data System (ADS)
Nicolay, S.; Brodie of Brodie, E. B.; Touchon, M.; d'Aubenton-Carafa, Y.; Thermes, C.; Arneodo, A.
2004-10-01
We use the continuous wavelet transform to perform a space-scale analysis of the AT and GC skews (strand asymmetries) in human genomic sequences, which have been shown to correlate with gene transcription. This study reveals the existence of a characteristic scale ℓc ≃ 25±10 kb that separates a monofractal long-range correlated noisy regime at small scales (ℓ<ℓc) from relaxational oscillatory behavior at large scales (ℓ>ℓc). We show that these large-scale nonlinear oscillations highlight an organization of the human genome into adjacent domains (≈400 kb) with preferential gene orientation. Using classical techniques from dynamical systems theory, we demonstrate that these relaxational oscillations display all the characteristic properties of the chaotic strange-attractor behavior observed near homoclinic orbits of Shil'nikov type. We discuss the possibility that replication and gene regulation processes are governed by a low-dimensional dynamical system that displays deterministic chaos.
Oyeyemi, Olayinka A; Sours, Kevin M; Lee, Thomas; Kohen, Amnon; Resing, Katheryn A; Ahn, Natalie G; Klinman, Judith P
2011-09-27
The technique of hydrogen-deuterium exchange coupled to mass spectrometry (HDX-MS) has been applied to a mesophilic (E. coli) dihydrofolate reductase under conditions that allow direct comparison to a thermophilic (B. stearothermophilus) ortholog, Ec-DHFR and Bs-DHFR, respectively. The analysis of hydrogen-deuterium exchange patterns within proteolytically derived peptides allows spatial resolution, while requiring a series of controls to compare orthologous proteins with only ca. 40% sequence identity. These controls include the determination of primary structure effects on intrinsic rate constants for HDX as well as the use of existing 3-dimensional structures to evaluate the distance of each backbone amide hydrogen to the protein surface. Only a single peptide from the Ec-DHFR is found to be substantially more flexible than the Bs-DHFR at 25 °C in a region located within the protein interior at the intersection of the cofactor and substrate-binding sites. The surrounding regions of the enzyme are either unchanged or more flexible in the thermophilic DHFR from B. stearothermophilus. The region with increased flexibility in Ec-DHFR corresponds to one of two regions previously proposed to control the enthalpic barrier for hydride transfer in Bs-DHFR [Oyeyemi et al. (2010) Proc. Natl. Acad. Sci. U.S.A. 107, 10074]. PMID:21859100
Hellen, Christopher U. T.; de Breyne, Sylvain
2007-01-01
The 5′ untranslated regions (UTRs) of the RNA genomes of Flaviviridae of the Hepacivirus and Pestivirus genera contain internal ribosomal entry sites (IRESs) that are unrelated to the two principal classes of IRESs of Picornaviridae. The mechanism of translation initiation on hepacivirus/pestivirus (HP) IRESs, which involves factor-independent binding to ribosomal 40S subunits, also differs fundamentally from initiation on these picornavirus IRESs. Ribosomal binding to HP IRESs requires conserved sequences that form a pseudoknot and the adjacent IIId and IIIe domains; analogous elements do not occur in the two principal groups of picornavirus IRESs. Here, comparative sequence analysis was used to identify a subset of picornaviruses from multiple genera that contain 5′ UTR sequences with significant similarities to HP IRESs. They are avian encephalomyelitis virus, duck hepatitis virus 1, duck picornavirus, porcine teschovirus, porcine enterovirus 8, Seneca Valley virus, and simian picornavirus. Their 5′ UTRs are predicted to form several structures, in some of which the peripheral elements differ from the corresponding HP IRES elements but in which the core pseudoknot, domain IIId, and domain IIIe elements are all closely related. These findings suggest that HP-like IRESs have been exchanged between unrelated virus families by recombination and support the hypothesis that RNA viruses consist of modular coding and noncoding elements that can exchange and evolve independently. PMID:17392358
Deterministic and Stochastic Analysis of a Prey-Dependent Predator-Prey System
ERIC Educational Resources Information Center
Maiti, Alakes; Samanta, G. P.
2005-01-01
This paper reports on studies of the deterministic and stochastic behaviours of a predator-prey system with prey-dependent response function. The first part of the paper deals with the deterministic analysis of uniform boundedness, permanence, stability and bifurcation. In the second part the reproductive and mortality factors of the prey and…
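Deterministic analyses of this kind can be illustrated with a prey-dependent (Holling type II) predator-prey system integrated numerically. The equations and every parameter value below are a generic sketch of such a model, not the paper's exact system.

```python
from scipy.integrate import solve_ivp

def prey_dependent_model(t, y, r=1.0, K=2.0, a=1.0, h=0.5, e=0.6, d=0.25):
    # Prey-dependent Holling type II functional response; the parameter
    # values are illustrative, not taken from the paper.
    x, p = y
    feeding = a * x / (1.0 + a * h * x)            # per-predator intake rate
    return [r * x * (1.0 - x / K) - feeding * p,   # prey growth - predation
            e * feeding * p - d * p]               # conversion - mortality

sol = solve_ivp(prey_dependent_model, (0.0, 200.0), [1.0, 0.5], rtol=1e-8)
x_end, p_end = sol.y[:, -1]
print(x_end, p_end)
```

With these parameters the predator nullcline lies below the carrying capacity, so both populations settle at a positive coexistence equilibrium, a numerical counterpart of the permanence property the paper establishes analytically.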
NASA Astrophysics Data System (ADS)
mouloud, Hamidatou
2016-04-01
The objective of this paper is to analyze the seismic activity of the Constantine region and the statistical treatment of its seismicity catalogue, which covers the period 1357-2014 and contains 7007 seismic events. Our research contributes to improving seismic risk management by evaluating the seismic hazard in north-east Algeria. In the present study, earthquake hazard maps for the Constantine region are calculated. Probabilistic seismic hazard analysis (PSHA) is classically performed through the Cornell approach, using a uniform earthquake distribution over the source area and a given magnitude range. This study aims at extending the PSHA approach to the case of a characteristic earthquake scenario associated with an active fault. The approach integrates PSHA with a high-frequency deterministic technique for the prediction of peak and spectral ground motion parameters for a characteristic earthquake, and is based on the site-dependent evaluation of the probability of exceedance for the chosen strong-motion parameter. We propose five seismotectonic zones. Five steps are necessary: (i) identification of potential sources of future earthquakes, (ii) assessment of their geological, geophysical and geometric characteristics, (iii) identification of the attenuation pattern of seismic motion, (iv) calculation of the hazard at a site, and finally (v) hazard mapping for the region. In this study, the procedure for earthquake hazard evaluation developed by Kijko and Sellevoll (1992) is used to estimate seismic hazard parameters in the northern part of Algeria.
ERIC Educational Resources Information Center
Coss, Maurice
Planning ideas and follow-up activities are described for a reciprocal exchange program between groups of 5th and 6th grade students in Manitoba who are "twinned" with another school in the province. Emphasis is on providing learning experiences which help students become familiar with the economic activity in the area, with the local government…
Wolowodiuk, Walter
1976-01-06
A heat exchanger of the straight tube type in which different rates of thermal expansion between the straight tubes and the supply pipes furnishing fluid to those tubes do not result in tube failures. The supply pipes each contain a section which is of helical configuration.
Daman, Ernest L.; McCallister, Robert A.
1979-01-01
A heat exchanger is provided having first and second fluid chambers for passing primary and secondary fluids. The chambers are spaced apart and have heat pipes extending from inside one chamber to inside the other chamber. A third chamber is provided for passing a purge fluid, and the heat pipe portion between the first and second chambers lies within the third chamber.
Nonadiabatic exchange dynamics during adiabatic frequency sweeps
NASA Astrophysics Data System (ADS)
Barbara, Thomas M.
2016-04-01
A Bloch equation analysis that includes relaxation and exchange effects during an adiabatic frequency swept pulse is presented. For a large class of sweeps, relaxation can be incorporated using simple first order perturbation theory. For anisochronous exchange, new expressions are derived for exchange augmented rotating frame relaxation. For isochronous exchange between sites with distinct relaxation rate constants outside the extreme narrowing limit, simple criteria for adiabatic exchange are derived and demonstrate that frequency sweeps commonly in use may not be adiabatic with regard to exchange unless the exchange rates are much larger than the relaxation rates. Otherwise, accurate assessment of the sensitivity to exchange dynamics will require numerical integration of the rate equations. Examples of this situation are given for experimentally relevant parameters believed to hold for in-vivo tissue. These results are of significance in the study of exchange induced contrast in magnetic resonance imaging.
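A minimal numerical counterpart of such rate equations is the longitudinal two-site exchange system below, propagated by matrix exponentiation. It omits the RF sweep entirely and uses invented rate constants, so it is only a toy version of the full swept-pulse analysis in the paper.

```python
import numpy as np
from scipy.linalg import expm

# Longitudinal two-site exchange: relaxation plus exchange, no RF term
# (a far simpler setting than the adiabatic-sweep case treated in the
# paper; all rate values below are illustrative).
R1a, R1b = 1.0, 3.0     # site relaxation rate constants (s^-1)
kab, kba = 10.0, 10.0   # exchange rate constants a->b and b->a (s^-1)

L = np.array([[-R1a - kab,  kba],
              [ kab,       -R1b - kba]])
M0 = np.array([1.0, 0.0])       # deviation from equilibrium, all in site a
M = expm(L * 0.1) @ M0          # evolve for 100 ms
print(M)
```

After 100 ms some magnetization has transferred to site b while the total deviation has decayed; when the exchange rates are much larger than the relaxation rates, as here, the two sites decay at a shared, population-averaged effective rate, which is the regime the paper identifies as safely adiabatic with respect to exchange.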
Deterministic approach for multiple-source tsunami hazard assessment for Sines, Portugal
NASA Astrophysics Data System (ADS)
Wronna, M.; Omira, R.; Baptista, M. A.
2015-11-01
In this paper, we present a deterministic approach to tsunami hazard assessment for the city and harbour of Sines, Portugal, one of the test sites of project ASTARTE (Assessment, STrategy And Risk Reduction for Tsunamis in Europe). Sines has one of the most important deep-water ports, which has oil-bearing, petrochemical, liquid-bulk, coal, and container terminals. The port and its industrial infrastructures face the ocean southwest towards the main seismogenic sources. This work considers two different seismic zones: the Southwest Iberian Margin and the Gloria Fault. Within these two regions, we selected a total of six scenarios to assess the tsunami impact at the test site. The tsunami simulations are computed using NSWING, a Non-linear Shallow Water model wIth Nested Grids. In this study, the static effect of tides is analysed for three different tidal stages: MLLW (mean lower low water), MSL (mean sea level), and MHHW (mean higher high water). For each scenario, the tsunami hazard is described by maximum values of wave height, flow depth, drawback, maximum inundation area and run-up. Synthetic waveforms are computed at virtual tide gauges at specific locations outside and inside the harbour. The final results describe the impact at the Sines test site considering the single scenarios at mean sea level, the aggregate scenario, and the influence of the tide on the aggregate scenario. The results confirm the composite source of Horseshoe and Marques de Pombal faults as the worst-case scenario, with wave heights of over 10 m, which reach the coast approximately 22 min after the rupture. It dominates the aggregate scenario by about 60 % of the impact area at the test site, considering maximum wave height and maximum flow depth. The HSMPF scenario inundates a total area of 3.5 km2.
Francisco, M S P; Cardoso, W S; Gushikem, Y; Landers, R; Kholin, Y V
2004-09-28
In this work, the structural and textural properties of the SiO2/Nb2O5 system prepared by the sol-gel method and then modified with phosphoric acid were studied. Materials with three different Nb2O5 contents (2.5, 5.0, and 7.5 mol %) were prepared and calcined in the temperature range of 423-1273 K. BET specific surface area determinations, scanning electron microscopy coupled to an X-ray emission analyzer, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy (XPS) were used for the investigation. For the lowest calcination temperature (423 K), the mesopores and micropores of the modified material were blocked, resulting in a decrease of the specific surface area compared to the SBET values obtained for the SiNb matrix. Under intermediate calcination temperatures (423-873 K), the modified material acquired textural stability. By XPS analysis, the presence of the dihydrogenphosphate species was identified, the P/Nb atomic ratios being independent of the thermal treatment. 31P magic angle spinning NMR confirmed the XPS data and also showed that the chemical shift of the (H2PO4)- ions strongly depended on the degree of crystallization of the Nb2O5. Structural thermal stability was also shown by the presence of Brønsted acid sites in the modified material calcined at high temperature (1273 K). The thermal stability is directly associated with the same K+ exchange capacity (0.74 mmol g(-1), average value) being obtained for the modified materials calcined at 423 and 1273 K. The chemical analyses of phosphorus in the modified materials were made using inductively coupled plasma analysis; the value was 0.36 mmol g(-1), corroborating the presence of (H2PO4)- ions. The ion exchange isotherms presented an S-shaped form characteristic of energetically heterogeneous ion exchangers, permitting application of a model of fixed polydentate centers in which ion exchange takes place. PMID:15379496
Roseboom, Winfried; De Lacey, Antonio L; Fernandez, Victor M; Hatchikian, E Claude; Albracht, Simon P J
2006-01-01
In [FeFe]-hydrogenases, the H cluster (hydrogen-activating cluster) contains a di-iron centre ([2Fe]H subcluster, a (L)(CO)(CN)Fe(mu-RS2)(mu-CO)Fe(CysS)(CO)(CN) group) covalently attached to a cubane iron-sulphur cluster ([4Fe-4S]H subcluster). The Cys-thiol functions as the link between one iron (called Fe1) of the [2Fe]H subcluster and one iron of the cubane subcluster. The other iron in the [2Fe]H subcluster is called Fe2. The light sensitivity of the Desulfovibrio desulfuricans enzyme in a variety of states has been studied with infrared (IR) spectroscopy. The aerobic inactive enzyme (H(inact) state) and the CO-inhibited active form (H(ox)-CO state) were stable in light. Illumination of the H(ox) state led to a kind of cannibalization; in some enzyme molecules the H cluster was destroyed and the released CO was captured by the H clusters in other molecules to form the light-stable H(ox)-CO state. Illumination of active enzyme under 13CO resulted in the complete exchange of the two intrinsic COs bound to Fe2. At cryogenic temperatures, light induced the photodissociation of the extrinsic CO and the bridging CO of the enzyme in the H(ox)-CO state. Electrochemical redox titrations showed that the enzyme in the H(inact) state converts to the transition state (H(trans)) in a reversible one-electron redox step (E (m, pH 7) = -75 mV). IR spectra demonstrate that the added redox equivalent not only affects the [4Fe-4S]H subcluster, but also the di-iron centre. Enzyme in the H(trans) state reacts with extrinsic CO, which binds to Fe2. The H(trans) state converts irreversibly into the H(ox) state in a redox-dependent reaction most likely involving two electrons (E (m, pH 7) = -261 mV). These electrons do not end up on any of the six Fe atoms of the H cluster; the possible destiny of the two redox equivalents is discussed. An additional reversible one-electron redox reaction leads to the H(red) state (E (m, pH 7) = -354 mV), where both Fe atoms of the [2Fe]H subcluster
Ismail, I M; Basahi, J M; Hassan, I A
2014-11-01
Egyptian pea cultivars (Pisum sativum L. cultivars Little Marvel, Perfection and Victory) grown in open-top chambers were exposed to either charcoal-filtered (FA) or non-filtered air (NF) for five consecutive years (2009-2013) at a rural site in northern Egypt. Net photosynthetic rates (PN), stomatal conductance (gs), intercellular CO2 (Ci) and chlorophyll fluorescence were measured. Ozone (O3) was found to be the most prevalent pollutant at the rural site and is suspected to be involved in the alteration of the physiological parameters measured in the present investigation. The PN of the different cultivars responded similarly; decreases of 23, 29 and 39% were observed in the cultivars Perfection, Little Marvel and Victory, respectively (averaged over the five years) due to ambient O3. The maximum impairment in PN was recorded in the cultivar Victory (46%) in 2013, when the highest O3 levels were recorded (90 nL L(-1)). The average stomatal conductance decreased by 20 and 18% in the cultivars Little Marvel and Perfection, respectively, while it increased on average by 27% in the cultivar Victory. A significant correlation was found between PN and Ci, indicating the importance of non-stomatal limitations of photosynthesis, especially in the cultivar Victory. The PN vs. Ci curves were fitted to a non-rectangular hyperbolic model. The actual quantum yield (ΦPSII) and photochemical quenching coefficient (qP) were significantly decreased in the leaves of plants exposed to NF air. Non-photochemical quenching (NPQ) was increased in all cultivars. Exposure to NF air caused reductions in chlorophyll (Chl a) of 19, 16 and 30% in the Little Marvel, Perfection and Victory cultivars, respectively.
Pu Anion Exchange Process Intensification
Taylor-Pashow, K.
2015-10-08
This project seeks to improve the efficiency of the plutonium anion-exchange process for purifying Pu through the development of alternate ion-exchange media. The objective of the project in FY15 was to develop and test a porous foam monolith material that could serve as a replacement for the current anion-exchange resin, Reillex® HPQ, used at the Savannah River Site (SRS) for purifying Pu. The new material provides advantages in efficiency over the current resin by the elimination of diffusive mass transport through large granular resin beads. By replacing the large resin beads with a porous foam there is much more efficient contact between the Pu solution and the anion-exchange sites present on the material. Several samples of a polystyrene-based foam grafted with poly(4-vinylpyridine) were prepared and Pu sorption was tested in batch contact tests.
Electromagnetic field enhancement and light localization in deterministic aperiodic nanostructures
NASA Astrophysics Data System (ADS)
Gopinath, Ashwin
The control of light-matter interaction in periodic and random media has been investigated in depth during the last few decades, yet structures with a controlled degree of disorder, such as Deterministic Aperiodic Nano Structures (DANS), have been relatively unexplored. DANS are characterized by non-periodic yet long-range correlated (deterministic) morphologies and can be generated by the mathematical rules of symbolic dynamics and number theory. In this thesis, I have experimentally investigated the unique light transport and localization properties in planar dielectric and metal (plasmonic) DANS. In particular, I have focused on the design, nanofabrication and optical characterization of DANS formed by arranging metal/dielectric nanoparticles in an aperiodic lattice. This effort is directed towards the development of on-chip nanophotonic applications with emphasis on label-free bio-sensing and enhanced light emission. The DANS designed as Surface Enhanced Raman Scattering (SERS) substrates are composed of multi-scale aperiodic nanoparticle arrays fabricated by e-beam lithography and are capable of reproducibly demonstrating enhancement factors as high as ~10^7. Further improvement of SERS efficiency is achieved by combining DANS formed by a top-down approach with bottom-up reduction of gold nanoparticles to fabricate novel nanostructures called plasmonic "nano-galaxies", which increase the SERS enhancement factors by 2-3 orders of magnitude while preserving the reproducibility. In this thesis, along with presenting details of fabrication and SERS characterization of these "rationally designed" SERS substrates, I will also present results on using these substrates for detection of DNA nucleobases, as well as reproducible label-free detection of pathogenic bacteria with species specificity. In addition to biochemical detection, the combination of broadband light scattering behavior and the ability for the generation of reproducible high fields in DANS make these
Chemical exchange program analysis.
Waffelaert, Pascale
2007-09-01
As part of its EMS, Sandia performs an annual environmental aspects/impacts analysis. The purpose of this analysis is to identify the environmental aspects associated with Sandia's activities, products, and services and the potential environmental impacts associated with those aspects. Division and environmental programs established objectives and targets based on the environmental aspects associated with their operations. In 2007 the most significant aspect identified was Hazardous Materials (Use and Storage). The objective for Hazardous Materials (Use and Storage) was to improve chemical handling, storage, and on-site movement of hazardous materials. One of the targets supporting this objective was to develop an effective chemical exchange program, making a business case for it in FY07, and fully implementing a comprehensive chemical exchange program in FY08. A Chemical Exchange Program (CEP) team was formed to implement this target. The team consists of representatives from the Chemical Information System (CIS), Pollution Prevention (P2), the HWMF, Procurement and the Environmental Management System (EMS). The CEP Team performed benchmarking and conducted a life-cycle analysis of the current management of chemicals at SNL/NM and compared it to Chemical Exchange alternatives. Those alternatives are as follows: (1) Revive the 'Virtual' Chemical Exchange Program; (2) Re-implement a 'Physical' Chemical Exchange Program using a Chemical Information System; and (3) Transition to a Chemical Management Services System. The analysis and benchmarking study shows that the present management of chemicals at SNL/NM is significantly disjointed and a life-cycle or 'Cradle-to-Grave' approach to chemical management is needed. This approach must consider the purchasing and maintenance costs as well as the cost of ultimate disposal of the chemicals and materials. A chemical exchange is needed as a mechanism to re-apply chemicals on site. This will not only reduce the quantity of
Non-deterministic modelling of food-web dynamics.
Planque, Benjamin; Lindstrøm, Ulf; Subbey, Sam
2014-01-01
A novel approach to model food-web dynamics, based on a combination of chance (randomness) and necessity (system constraints), was presented by Mullon et al. in 2009. Based on simulations for the Benguela ecosystem, they concluded that observed patterns of ecosystem variability may simply result from basic structural constraints within which the ecosystem functions. To date, and despite the importance of these conclusions, this work has received little attention. The objective of the present paper is to replicate this original model and evaluate the conclusions that were derived from its simulations. For this purpose, we revisit the equations and input parameters that form the structure of the original model and implement a comparable simulation model. We restate the model principles and provide a detailed account of the model structure, equations, and parameters. Our model can reproduce several ecosystem dynamic patterns: pseudo-cycles, variation and volatility, diet, stock-recruitment relationships, and correlations between species biomass series. The original conclusions are supported to a large extent by the current replication of the model. Model parameterisation and computational aspects remain difficult and these need to be investigated further. Hopefully, the present contribution will make this approach available to a larger research community and will promote the use of non-deterministic-network-dynamics models as 'null models of food-webs' as originally advocated. PMID:25299245
Automated optimum design of wing structures. Deterministic and probabilistic approaches
NASA Technical Reports Server (NTRS)
Rao, S. S.
1982-01-01
The automated optimum design of airplane wing structures subjected to multiple behavior constraints is described. The structural mass of the wing is considered the objective function. The maximum stress, wing tip deflection, root angle of attack, and flutter velocity during the pull up maneuver (static load), the natural frequencies of the wing structure, and the stresses induced in the wing structure due to landing and gust loads are suitably constrained. Both deterministic and probabilistic approaches are used for finding the stresses induced in the airplane wing structure due to landing and gust loads. A wing design is represented by a uniform beam with a cross section in the form of a hollow symmetric double wedge. The airfoil thickness and chord length are the design variables, and a graphical procedure is used to find the optimum solutions. A supersonic wing design is represented by finite elements. The thicknesses of the skin and the web and the cross sectional areas of the flanges are the design variables, and nonlinear programming techniques are used to find the optimum solution.
Agent-Based Deterministic Modeling of the Bone Marrow Homeostasis.
Kurhekar, Manish; Deshpande, Umesh
2016-01-01
Modeling of stem cells not only describes but also predicts how a stem cell's environment can control its fate. The first stem cell populations discovered were hematopoietic stem cells (HSCs). In this paper, we present a deterministic model of bone marrow (which hosts HSCs) that is consistent with several of the qualitative biological observations. This model incorporates stem cell death (apoptosis) after a certain number of cell divisions and demonstrates that a single HSC can potentially populate the entire bone marrow. It also demonstrates that a sufficient number of differentiated cells (RBCs, WBCs, etc.) is produced. We prove that our model of bone marrow is biologically consistent and overcomes the biological feasibility limitations of previously reported models. The major contribution of our model is the flexibility it allows in choosing model parameters, which permits several different simulations to be carried out in silico without affecting the homeostatic properties of the model. We have also performed an agent-based simulation of the bone marrow model proposed in this paper, and we include parameter details and the results obtained from the simulation. The program for the agent-based simulation of the proposed model is made available on a publicly accessible website. PMID:27340402
A deterministic method for transient, three-dimensional neutron transport
NASA Astrophysics Data System (ADS)
Goluoglu, Sedat
A deterministic method for solving the time-dependent, three-dimensional Boltzmann transport equation with explicit representation of delayed neutrons has been developed and evaluated. The methodology used in this study for the time variable is the improved quasi-static (IQS) method. The position, energy, and angle variables of the neutron flux are computed using the three-dimensional (3-D) discrete ordinates code TORT. The resulting time-dependent, 3-D code is called TDTORT. The flux shape calculated by TORT is used to compute the point kinetics parameters (e.g., reactivity, generation time, etc.). The amplitude function is calculated by solving the point kinetics equations using LSODE (Livermore Solver for Ordinary Differential Equations). Several transient 1-D, 2-D, and 3-D benchmark problems are used to verify TDTORT. The results show that the methodology and code developed in this work have sufficient accuracy and speed to serve as a benchmarking tool for other less accurate models and codes. More importantly, a new computational tool based on transport theory now exists for analyzing the dynamic behavior of complex neutronic systems.
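The point-kinetics step at the heart of quasi-static schemes can be sketched in a few lines. This is a minimal one-delayed-group illustration with invented parameter values, not TDTORT's actual solver (which applies LSODE to the full system):

```python
def point_kinetics(n0, c0, rho, beta, lam, Lambda, dt, steps):
    """Explicit-Euler integration of one-group point kinetics:
       dn/dt = ((rho - beta)/Lambda) * n + lam * c
       dc/dt = (beta/Lambda) * n - lam * c
    """
    n, c = n0, c0
    for _ in range(steps):
        dn = ((rho - beta) / Lambda) * n + lam * c
        dc = (beta / Lambda) * n - lam * c
        n += dt * dn
        c += dt * dc
    return n, c

# Hypothetical parameters: delayed fraction, precursor decay constant,
# prompt generation time. At zero reactivity with equilibrium
# precursors, the amplitude should stay constant.
beta, lam, Lambda = 0.0065, 0.08, 1.0e-4
n_eq, c_eq = 1.0, beta / (Lambda * lam)
n, c = point_kinetics(n_eq, c_eq, rho=0.0, beta=beta, lam=lam,
                      Lambda=Lambda, dt=1.0e-5, steps=1000)
```

In an IQS scheme the reactivity and generation time fed to such a step would be recomputed from the transport-calculated flux shape at each macro time step.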
Mesoscopic quantum emitters from deterministic aggregates of conjugated polymers
Stangl, Thomas; Wilhelm, Philipp; Remmerssen, Klaas; Höger, Sigurd; Vogelsang, Jan; Lupton, John M.
2015-01-01
An appealing definition of the term “molecule” arises from consideration of the nature of fluorescence, with discrete molecular entities emitting a stream of single photons. We address the question of how large a molecular object may become by growing deterministic aggregates from single conjugated polymer chains. Even particles containing dozens of individual chains still behave as single quantum emitters due to efficient excitation energy transfer, whereas the brightness is raised due to the increased absorption cross-section of the suprastructure. Excitation energy can delocalize between individual polymer chromophores in these aggregates by both coherent and incoherent coupling, which are differentiated by their distinct spectroscopic fingerprints. Coherent coupling is identified by a 10-fold increase in excited-state lifetime and a corresponding spectral red shift. Exciton quenching due to incoherent FRET becomes more significant as aggregate size increases, resulting in single-aggregate emission characterized by strong blinking. This mesoscale approach allows us to identify intermolecular interactions which do not exist in isolated chains and are inaccessible in bulk films where they are present but masked by disorder. PMID:26417079
Is there a sharp phase transition for deterministic cellular automata?
NASA Astrophysics Data System (ADS)
Wootters, William K.; Langton, Chris G.
1990-09-01
Previous work has suggested that there is a kind of phase transition between deterministic automata exhibiting periodic behavior and those exhibiting chaotic behavior. However, unlike the usual phase transitions of physics, this transition takes place over a range of values of the parameter rather than at a specific value. The present paper asks whether the transition can be made sharp, either by taking the limit of an infinitely large rule table, or by changing the parameter in terms of which the space of automata is explored. We find strong evidence that, for the class of automata we consider, the transition does become sharp in the limit of an infinite number of symbols, the size of the neighborhood being held fixed. Our work also suggests an alternative parameter in terms of which it is likely that the transition will become fairly sharp even if one does not increase the number of symbols. In the course of our analysis, we find that mean field theory, which is our main tool, gives surprisingly good predictions of the statistical properties of the class of automata we consider.
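A minimal sketch of the kind of parameter used to explore such rule spaces is Langton's lambda: the fraction of rule-table entries that do not map to the quiescent state. The symbol count, neighborhood size, and seed below are illustrative assumptions:

```python
import random

def random_rule_table(k, neighborhood, lam, quiescent=0, seed=0):
    """Fill a k-symbol rule table so that roughly a fraction `lam`
    of its entries map to a non-quiescent state."""
    rng = random.Random(seed)
    table = {}
    for idx in range(k ** neighborhood):
        if rng.random() < lam:
            table[idx] = rng.randrange(1, k)  # non-quiescent output
        else:
            table[idx] = quiescent
    return table

def measured_lambda(table, quiescent=0):
    """Fraction of entries not mapping to the quiescent state."""
    return sum(v != quiescent for v in table.values()) / len(table)

# 4 symbols, 3-cell neighborhood -> 4**3 = 64 table entries.
table = random_rule_table(k=4, neighborhood=3, lam=0.5)
lam_hat = measured_lambda(table)
```

Sweeping `lam` from 0 to 1 (and increasing `k` while holding the neighborhood fixed) is the kind of exploration under which the periodic-to-chaotic transition is examined.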
Deterministic phase encoding encryption in single shot digital holography
NASA Astrophysics Data System (ADS)
Chen, G.-L.; Yang, W.-K.; Wang, J. C.; Chang, C.-C.
2008-11-01
We demonstrate a deterministic phase-encoded encryption system based on the digital holography and adopted a lenticular lens array (LLA) sheet as a phase modulator. In the proposed scheme the holographic patterns of encrypted images are captured digitally by a digital CCD. This work also adopt a novel, simple and effective technique that is used to suppress numerically the major blurring caused by the zero-order image in the numerical reconstruction. The decryption key is acquired as a digital hologram, called the key hologram. Therefore, the retrieval of the original information can be achieved by multiplying the encrypted hologram with a numerical generated phase-encoded wave. The storage and transmission of all holograms can be carried out by all-digital means. Simulation and experimental results demonstrate that the proposed approach can be operated in single procedure only and represent the satisfactory decrypted image. Finally, rotating and shifting the LLA is applied to investigate the tolerance of decryption to demonstrate the feasibility in the holographic encryption, as well as can also be used to provide the higher security.
Entrepreneurs, chance, and the deterministic concentration of wealth.
Fargione, Joseph E; Lehman, Clarence; Polasky, Stephen
2011-01-01
In many economies, wealth is strikingly concentrated. Entrepreneurs--individuals with ownership in for-profit enterprises--comprise a large portion of the wealthiest individuals, and their behavior may help explain patterns in the national distribution of wealth. Entrepreneurs are less diversified and more heavily invested in their own companies than is commonly assumed in economic models. We present an intentionally simplified individual-based model of wealth generation among entrepreneurs to assess the role of chance and determinism in the distribution of wealth. We demonstrate that chance alone, combined with the deterministic effects of compounding returns, can lead to unlimited concentration of wealth, such that the percentage of all wealth owned by a few entrepreneurs eventually approaches 100%. Specifically, concentration of wealth results when the rate of return on investment varies by entrepreneur and by time. This result is robust to inclusion of realities such as differing skill among entrepreneurs. The most likely overall growth rate of the economy decreases as businesses become less diverse, suggesting that high concentrations of wealth may adversely affect a country's economic growth. We show that a tax on large inherited fortunes, applied to a small portion of the most fortunate in the population, can efficiently arrest the concentration of wealth at intermediate levels.
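The chance-plus-compounding mechanism can be sketched in a few lines; the return values, population size, and horizon below are invented for illustration and are not the paper's calibration:

```python
import random

def simulate_wealth(n=500, years=200, seed=1):
    """Each entrepreneur's wealth compounds by a return that varies
    by individual and by year (pure chance, identical odds for all)."""
    rng = random.Random(seed)
    wealth = [1.0] * n
    for _ in range(years):
        wealth = [w * rng.choice([0.8, 1.3]) for w in wealth]
    return wealth

def top_share(wealth, frac=0.01):
    """Share of total wealth held by the richest `frac` of individuals."""
    ranked = sorted(wealth, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

early = top_share(simulate_wealth(years=10))
late = top_share(simulate_wealth(years=200))
# Concentration grows over time as compounding amplifies lucky streaks,
# even though every individual faces the same return distribution.
```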
Particle separation using virtual deterministic lateral displacement (vDLD).
Collins, David J; Alan, Tuncay; Neild, Adrian
2014-05-01
We present a method for sensitive and tunable particle sorting that we term virtual deterministic lateral displacement (vDLD). The vDLD system is composed of a set of interdigital transducers (IDTs) within a microfluidic chamber that produce a force field at an angle to the flow direction. Particles above a critical diameter, a function of the force induced by viscous drag and the force field, are displaced laterally along the minimum force potential lines, while smaller particles continue in the direction of the fluid flow without substantial perturbations. We demonstrate the effective separation of particles in a continuous-flow system with size sensitivity comparable to or better than other previously reported microfluidic separation techniques. Separation of 5.0 μm from 6.6 μm, 6.6 μm from 7.0 μm and 300 nm from 500 nm particles is achieved using the same device architecture. With the high sensitivity and flexibility vDLD affords, we expect it to find application in a wide variety of microfluidic platforms. PMID:24638896
Deterministic ripple-spreading model for complex networks
NASA Astrophysics Data System (ADS)
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S.; Hines, Evor L.; di Paolo, Ezequiel
2011-04-01
This paper proposes a deterministic complex network model, which is inspired by the natural ripple-spreading phenomenon. The motivations and main advantages of the model are the following: (i) The establishment of many real-world networks is a dynamic process, where it is often observed that the influence of a few local events spreads out through nodes, and then largely determines the final network topology. Obviously, this dynamic process involves many spatial and temporal factors. By simulating the natural ripple-spreading process, this paper reports a very natural way to set up a spatial and temporal model for such complex networks. (ii) Existing relevant network models are all stochastic models, i.e., with a given input, they cannot output a unique topology. In contrast, the proposed ripple-spreading model can uniquely determine the final network topology, and at the same time, the stochastic feature of complex networks is captured by randomly initializing ripple-spreading related parameters. (iii) The proposed model can use an easily manageable number of ripple-spreading related parameters to precisely describe a network topology, which is more memory efficient when compared with traditional adjacency matrix or similar memory-expensive data structures. (iv) The ripple-spreading model has a very good potential for both extensions and applications.
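A toy version of a ripple-spreading generator illustrates the determinism: given fixed node coordinates and ripple parameters, the topology is uniquely determined, with randomness entering only through parameter initialization. The decay law and threshold here are illustrative assumptions, not the paper's equations:

```python
import math

def ripple_network(coords, energy=1.0, decay=0.5, threshold=0.3):
    """Connect nodes i and j if a ripple started at i still carries
    energy above `threshold` when it reaches j; energy decays with
    radius as energy / (1 + decay * r). The output is unique for a
    given set of inputs -- no randomness inside the generator."""
    edges = set()
    for i, (xi, yi) in enumerate(coords):
        for j, (xj, yj) in enumerate(coords):
            if i < j:
                r = math.hypot(xi - xj, yi - yj)
                if energy / (1.0 + decay * r) >= threshold:
                    edges.add((i, j))
    return edges

# Three nearby nodes link up; the distant node stays isolated.
coords = [(0, 0), (1, 0), (0, 1), (4, 4)]
edges = ripple_network(coords)
```

Note that the whole topology is reproducible from the handful of ripple parameters, which is the memory-efficiency argument made in point (iii).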
Kim, Y.; Shim, H. J.; Noh, T.
2006-07-01
To resolve the double-heterogeneity (DH) problem resulting from the TRISO fuel of high-temperature gas-cooled reactors (HTGRs), a synergistic combination of a deterministic method and the Monte Carlo method has been proposed. As the deterministic approach, the RPT (Reactivity-equivalent Physical Transformation) method is adopted. In the combined methodology, a reference k-infinity value is obtained by the Monte Carlo method for an initial state of a problem and is used by the RPT method to transform the original DH problem into a conventional single-heterogeneity one; the transformed problem is then analyzed by conventional deterministic methods. The combined methodology has been applied to the depletion analysis of typical HTGR fuels, including both prismatic block and pebble. The reference solution is obtained using the Monte Carlo code MCCARD, and the accuracy of the deterministic-only and combined methods is evaluated. For the deterministic solution, the DRAGON and HELIOS codes were used. It has been shown that the combined method provides an accurate solution although the deterministic-only solution shows noticeable errors. For the pebble, the two deterministic codes cannot handle the DH problem. Nevertheless, we have shown that the solution of the DRAGON-MCCARD combined approach agrees well with the reference. (authors)
Deterministic propagation model for RFID using site-specific and FDTD
NASA Astrophysics Data System (ADS)
Cunha de Azambuja, Marcelo; Passuelo Hessel, Fabiano; Luís Berz, Everton; Bauermann Porfírio, Leandro; Ruhnke Valério, Paula; De Pieri Baladei, Suely; Jung, Carlos Fernando
2015-06-01
The conduction of experiments to evaluate a tag orientation and its readability in a laboratory offers great potential for reducing time and costs for users. This article presents a novel methodology for developing simulation models for RFID (radio-frequency identification) environments. The main challenges in adopting this model are: (1) to find out how the properties of each of the materials on which the tag is applied influence the read range, and to determine the necessary power for tag reading; and (2) to find out the power of the backscattered signal received by the tag when energised by the RF wave transmitted by the reader. The validation tests, performed in four different kinds of environments, with tags applied to six different kinds of materials, at six different distances, and with a reader configured with three different powers, achieved an average accuracy of 95.3% in the best scenario and 87.0% in the worst scenario. The methodology can be easily duplicated to generate simulation models for other RFID environments.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-09
... Chicago Stock Exchange, Incorporated (``Exchange'' or ``CHX'') filed with the Securities and Exchange... Change The Exchange proposes to amend CHX Article 20, Rule 4 which governs orders that are eligible for... Exchange's Web site at ( http://www.chx.com ), at the Exchange's Office of the Secretary, and in...
Stochastic model of tumor-induced angiogenesis: Ensemble averages and deterministic equations
NASA Astrophysics Data System (ADS)
Terragni, F.; Carretero, M.; Capasso, V.; Bonilla, L. L.
2016-02-01
A recent conceptual model of tumor-driven angiogenesis including branching, elongation, and anastomosis of blood vessels captures some of the intrinsic multiscale structures of this complex system, yet allowing one to extract a deterministic integro-partial-differential description of the vessel tip density [Phys. Rev. E 90, 062716 (2014), 10.1103/PhysRevE.90.062716]. Here we solve the stochastic model, show that ensemble averages over many realizations correspond to the deterministic equations, and fit the anastomosis rate coefficient so that the total number of vessel tips evolves similarly in the deterministic and ensemble-averaged stochastic descriptions.
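The principle that ensemble averages over many stochastic realizations reproduce a deterministic description can be illustrated with a toy drift-diffusion process. This is not the paper's integro-partial-differential tip-density model; the drift, noise level, and step counts are invented:

```python
import random

def final_positions(a=0.5, sigma=1.0, dt=0.01, steps=100,
                    n_paths=2000, seed=0):
    """Simulate n_paths realizations of dX = a*dt + sigma*dW
    (Euler-Maruyama) and return the positions at T = steps*dt."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_paths):
        x = 0.0
        for _ in range(steps):
            x += a * dt + sigma * rng.gauss(0.0, dt ** 0.5)
        finals.append(x)
    return finals

finals = final_positions()
ensemble_mean = sum(finals) / len(finals)
deterministic_mean = 0.5 * 100 * 0.01  # solution of dx/dt = a at T = 1
```

Any single realization wanders far from the deterministic curve; only the average over many realizations tracks it, which is the sense in which the stochastic and deterministic descriptions in the paper correspond.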
Bulgakov, N G; Maksimov, V N
2005-01-01
Specific application of deterministic analysis to investigate the contingencies of various components of natural biocenosis was illustrated by the example of fish production and biomass of phyto- and zooplankton. Deterministic analysis confirms the theoretic assumptions on food preferences of herbivorous fish: both silver and bighead carps avoided feeding on cyanobacteria. Being a facultative phytoplankton feeder, silver carp preferred microalgae to zooplankton. Deterministic analysis allowed us to demonstrate the contingency of the mean biomass of phyto- and zooplankton during both the whole fish production cycle and the individual periods. PMID:16004266
Condition for generating the same scattered spectral density by random and deterministic media.
Wang, Tao; Ding, Yi; Ji, Xiaoling; Zhao, Daomu
2015-02-01
We present a condition for generating the same scattered spectral density by random and deterministic media. Examples of light waves on scattering from a Gaussian-centered deterministic medium and a Gaussian-correlated quasi-homogeneous random medium are discussed. It is shown that the normalized far-zone scattered spectral density produced by a Gaussian-centered deterministic medium and by a Gaussian-correlated quasi-homogeneous random medium will be identical provided that the square of the effective width of the normalized correlation coefficient of the quasi-homogeneous random medium is twice the square of the effective width of the scattering potential of the deterministic medium.
Pagowski, M O; Grell, G A; Devenyi, D; Peckham, S E; McKeen, S A; Gong, W; Monache, L D; McHenry, J N; McQueen, J; Lee, P
2006-02-02
Forecasts from seven air quality models and surface ozone data collected over the eastern USA and southern Canada during July and August 2004 provide a unique opportunity to assess benefits of ensemble-based ozone forecasting and devise methods to improve ozone forecasts. In this investigation, past forecasts from the ensemble of models and hourly surface ozone measurements at over 350 sites are used to issue deterministic 24-h forecasts using a method based on dynamic linear regression. Forecasts of hourly ozone concentrations as well as maximum daily 8-h and 1-h averaged concentrations are considered. It is shown that the forecasts issued with the application of this method have reduced bias and root mean square error and better overall performance scores than any of the ensemble members and the ensemble average. Performance of the method is similar to another method based on linear regression described previously by Pagowski et al., but unlike the latter, the current method does not require measurements from multiple monitors since it operates on individual time series. Improvement in the forecasts can be easily implemented and requires minimal computational cost.
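A schematic of the recursive, single-series regression idea (not the authors' exact dynamic linear regression formulation): the intercept and slope of a bias-correcting fit are updated as each observation/forecast pair arrives, so only one monitor's time series is required. The synthetic data and normalized-LMS update below are illustrative assumptions:

```python
def dlr_correct(forecasts, observations, eta=0.2):
    """Issue a corrected forecast a + b*f at each step, then update
    (a, b) with a normalized-LMS step toward the new observation."""
    a, b = 0.0, 1.0            # start by trusting the raw forecast
    corrected = []
    for f, y in zip(forecasts, observations):
        pred = a + b * f
        corrected.append(pred)
        step = eta * (y - pred) / (1.0 + f * f)  # normalized for stability
        a += step
        b += step * f
    return corrected

def rmse(pred, truth):
    return (sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred)) ** 0.5

# Synthetic hourly series in which the raw model is biased 10 units high.
obs = [50.0 + (i % 7) for i in range(200)]
raw = [o + 10.0 for o in obs]
corr = dlr_correct(raw, obs)
```

After a spin-up period the corrected series has substantially lower RMSE than the raw forecast, mirroring the bias and error reductions reported for the ensemble members.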
Self-aligned deterministic coupling of single quantum emitter to nanofocused plasmonic modes
Gong, Su-Hyun; Kim, Je-Hyung; Ko, Young-Ho; Rodriguez, Christophe; Shin, Jonghwa; Lee, Yong-Hee; Dang, Le Si; Zhang, Xiang; Cho, Yong-Hoon
2015-01-01
The field of quantum plasmonics has emerged and grown rapidly, encompassing the study of single emitter-light coupling using plasmonic systems and scalable quantum plasmonic circuits. This offers an opportunity for the quantum control of light with a compact device footprint. However, coupling of a single emitter to a highly localized plasmonic mode with nanoscale precision remains an important challenge. Today, the spatial overlap between metallic structure and single emitter mostly relies either on chance or on advanced nanopositioning control. Here, we demonstrate deterministic coupling between three-dimensionally nanofocused plasmonic modes and single quantum dots (QDs) without any positioning of the QDs. By depositing a thin silver layer on a site-controlled pyramid QD wafer, three-dimensional plasmonic nanofocusing on each QD at the pyramid apex is geometrically achieved through the silver-coated pyramid facets. Enhancement of the QD spontaneous emission rate as high as 22 ± 16 is measured for all processed QDs emitting over a ∼150-meV spectral range. This approach could enable high-fabrication-yield on-chip devices for a wide range of applications, e.g., high-efficiency light-emitting devices and quantum information processing. PMID:25870303
NASA Astrophysics Data System (ADS)
Baker, K. M.; Eschenbach, E. A.; Madej, M.
2004-12-01
Extensive timber harvesting and the accompanying road construction in the Pacific Northwest have degraded the quality of fish-bearing streams. Decommissioning abandoned forest roads improves stream quality by reducing erosion and downstream sedimentation. Road removal treatments have been performed in many locations; however, the management of these treatments has generally been site-specific, with little investigation of how the treatments affect the watershed as a whole. Land managers need to design a watershed-wide management policy that reduces sedimentation while keeping overall costs within a reasonable limit. The trade-offs between the costs of different treatment policies and the associated net reduction in sediment can be quantified. This work further develops optimization approaches for managing road decommissioning projects. Previous work using deterministic dynamic programming and genetic algorithms did not incorporate the uncertainty in the effectiveness of the road treatments. Here, stochastic dynamic programming is used to determine the road treatment policy that maximizes the expected sediment saved. The approach is applied to the Lost Man Creek Watershed in Northern California, which contains 691 road segments and road crossings. The model determines the optimal treatment level for each road segment and crossing while respecting a budgetary constraint.
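The optimization the abstract describes can be illustrated with a stripped-down version: if each segment's sediment saving is realized only with some probability, a dynamic program over the budget maximizes expected sediment saved. All segment data below are hypothetical, not from the study:

```python
# Sketch: choose road-segment treatments under a budget to maximize
# expected sediment saved, when each treatment succeeds only with some
# probability. Segment data are hypothetical illustrations.

def expected_sediment_policy(segments, budget):
    """segments: list of (cost, sediment_saved, success_prob); budget: int.
    Returns (best expected sediment saved, chosen segment indices)."""
    n = len(segments)
    value = [0.0] * (budget + 1)            # value[b]: best EV with budget b
    choice = [[False] * n for _ in range(budget + 1)]
    for i, (cost, saved, prob) in enumerate(segments):
        ev = prob * saved                   # expected sediment if treated
        for b in range(budget, cost - 1, -1):   # iterate budget downward
            if value[b - cost] + ev > value[b]:
                value[b] = value[b - cost] + ev
                choice[b] = choice[b - cost][:]
                choice[b][i] = True
    picked = [i for i, c in enumerate(choice[budget]) if c]
    return value[budget], picked

# hypothetical (cost, sediment saved if effective, probability effective)
segs = [(3, 120.0, 0.9), (2, 80.0, 0.6), (4, 200.0, 0.5), (1, 30.0, 0.95)]
ev, picked = expected_sediment_policy(segs, budget=6)
```

The full stochastic dynamic program in the study additionally models stagewise uncertainty; the sketch only captures the expected-value budget trade-off.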
Comparison of deterministic and probabilistic calculation of ecological soil screening levels.
Regan, Helen M; Sample, Brad E; Ferson, Scott
2002-04-01
The U.S. Environmental Protection Agency (U.S. EPA) is sponsoring development of ecological soil screening levels (Eco-SSLs) for terrestrial wildlife. These are intended to be used to identify chemicals of potential ecological concern at Superfund sites. Ecological soil screening levels represent concentrations of contaminants in soils that are believed to be protective of ecological receptors. An exposure model, based on soil- and food-ingestion rates and the relationship between the concentrations of contaminants in soil and food, has been developed for estimation of wildlife Eco-SSLs. It is important to understand how conservative and protective these values are, how parameterization of the model influences the resulting Eco-SSL, and how the treatment of uncertainty impacts results. The Eco-SSLs were calculated for meadow voles (Microtus pennsylvanicus) and northern short-tailed shrews (Blarina brevicauda) for lead and DDT using deterministic and probabilistic methods. Conclusions obtained include that use of central-tendency point estimates may result in hazard quotients much larger than one; that a Monte Carlo approach also leads to hazard quotients that can be substantially larger than one; that, if no hazard quotients larger than one are allowed, any probabilistic approach is identical to a worst-case approach; and that an improvement in the quality and amount of data is necessary to increase confidence that Eco-SSLs are protective at their intended levels of conservatism. PMID:11951965
NASA Astrophysics Data System (ADS)
Boyer, D.; Miramontes, O.; Larralde, H.
2009-10-01
Many studies of animal and human movement patterns report the existence of scaling laws and power-law distributions. Whereas a number of random-walk models have been proposed to explain such observations, in many situations individuals actually rely on mental maps to explore strongly heterogeneous environments. In this work, we study a model of a deterministic walker visiting sites randomly distributed on the plane, each with a varying weight or attractiveness. At each step, the walker minimizes a function that depends on the distance to the next unvisited target (cost) and on the weight of that target (gain). If the target weight distribution is a power law, p(k) ∼ k^(−β), then in some range of the exponent β the foraging medium induces movements that are similar to Lévy flights and are characterized by non-trivial exponents. We explore variations of the choice rule in order to test the robustness of the model and argue that the addition of noise has a limited impact on the dynamics in strongly disordered media.
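A minimal simulation of such a walker is easy to set up; the specific cost function (distance divided by weight) below is one plausible instance of the cost-gain rule, chosen for illustration and not necessarily the paper's exact form:

```python
import math
import random

# Sketch of the deterministic walker: sites scattered uniformly on the
# unit square carry power-law weights p(k) ~ k^(-beta), and at each step
# the walker moves to the unvisited site minimizing distance/weight.

def power_law_weight(beta, k_min=1.0):
    # inverse-transform sampling from p(k) ~ k^(-beta), k >= k_min, beta > 1
    u = random.random()
    return k_min * (1.0 - u) ** (-1.0 / (beta - 1.0))

def walk(n_sites=500, beta=2.5, steps=50, seed=0):
    random.seed(seed)
    sites = [(random.random(), random.random(), power_law_weight(beta))
             for _ in range(n_sites)]
    pos, visited, path = (0.5, 0.5), set(), []
    for _ in range(steps):
        best, best_cost = None, math.inf
        for i, (x, y, k) in enumerate(sites):
            if i not in visited:
                cost = math.hypot(x - pos[0], y - pos[1]) / k  # cost/gain rule
                if cost < best_cost:
                    best, best_cost = i, cost
        visited.add(best)
        pos = sites[best][:2]
        path.append(pos)
    return path

path = walk()
flights = [math.hypot(b[0] - a[0], b[1] - a[1])
           for a, b in zip(path, path[1:])]
```

A histogram of `flights` over many realizations and β values would show whether the step-length distribution is heavy-tailed, as in the Lévy-like regime the abstract reports.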
Broadband seismic monitoring of active volcanoes using deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Kumagai, H.; Nakano, M.; Maeda, T.; Yepes, H.; Palacios, P.; Ruiz, M. C.; Arrais, S.; Vaca, M.; Molina, I.; Yamashina, T.
2009-12-01
We systematically used two approaches to analyze broadband seismic signals observed at active volcanoes: one is waveform inversion of very-long-period (VLP) signals in the frequency domain assuming possible source mechanisms; the other is a source location method for long-period (LP) signals and tremor based on their amplitudes. The deterministic approach of waveform inversion is useful to constrain the source mechanism and location, but is basically applicable only to VLP signals with periods longer than a few seconds. The source location method uses seismic amplitudes corrected for site amplifications and assumes isotropic radiation of S waves. This assumption of isotropic radiation is apparently inconsistent with the hypothesis of crack geometry at the LP source. Using the source location method, we estimated the best-fit source location of a VLP/LP event at Cotopaxi using a frequency band of 7-12 Hz and Q = 60. This location was close to the best-fit source location determined by waveform inversion of the VLP/LP event using a VLP band of 5-12.5 s. The waveform inversion indicated that a crack mechanism explained the VLP signals better than an isotropic mechanism. These results indicate that isotropic radiation is not inherent to the source and only appears at high frequencies. We also obtained a best-fit location of an explosion event at Tungurahua using a frequency band of 5-10 Hz and Q = 60. This frequency band and Q value also yielded reasonable locations for the sources of tremor signals associated with lahars and pyroclastic flows at Tungurahua. The isotropic radiation assumption may be valid in a high frequency range, in which the path effect caused by the scattering of seismic waves results in an isotropic radiation pattern of S waves. The source location method may thus be categorized as a stochastic approach based on the nature of scattered waves. We further applied the waveform inversion to VLP signals observed at only two stations during a volcanic crisis
Deterministic and stochastic modifications of the Stokes formula
NASA Astrophysics Data System (ADS)
Ellmann, A.
2009-04-01
features for regional geoid model. Over recent decades two distinct groups of modification approaches, deterministic and stochastic, have been proposed in the geodetic literature. The deterministic approaches principally aim at reducing only the truncation bias (caused by neglecting the remote zone), whereas the stochastic methods also attempt to incorporate the accuracy estimates of the EGM's geopotential coefficients and of the terrestrial data. Both groups employ a modified Stokes function as the integral kernel for the near-zone integration. The selection of the upper modification limit is directly related to the quality of the EGM to be used. In practice, due to restricted access to terrestrial data, the integration radius is often limited to a few hundred kilometres. This implies that a relatively high modification degree should counterbalance this limitation. On the other hand, the EGM error grows with increasing degree, which provides a rationale for choosing a compromise modification limit. Due to the poor accuracy of earlier EGMs, a rather small modification degree was favoured in the computation of many geoid models in the past. Importantly, advancements in space technology have significantly improved the accuracy of recent EGMs, which allows the user to safely increase the modification degree (up to 100 or even beyond). However, certain difficulties may be encountered when determining the modification parameters (usually from a system of linear equations). The solution may become numerically unstable when a small integration cap and/or a high modification degree is adopted for the computations. Accordingly, this contribution revisits the principles of choosing an appropriate modification method in the context of contemporary EGMs. The strategies for selecting appropriate modification limits are also revisited. Typical and optimum outcomes of the modifications are discussed.
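For reference, the modified near-zone kernel shared by both families of approaches can be written in the generic form common in the geodetic literature (notation may differ in detail from the author's):

```latex
% Stokes's function as a Legendre series
S(\psi) = \sum_{n=2}^{\infty} \frac{2n+1}{n-1}\, P_n(\cos\psi)

% Modified kernel with upper modification limit L; the coefficients s_k
% are chosen either to minimize the truncation bias alone (deterministic
% approaches) or the combined truncation, EGM, and terrestrial-data error
% (stochastic approaches)
S^{L}(\psi) = S(\psi) - \sum_{k=2}^{L} \frac{2k+1}{2}\, s_k\, P_k(\cos\psi)
```

The system of linear equations determining the s_k can become ill-conditioned for a small integration cap combined with a high L, which is the numerical instability the text refers to.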
Development of a Deterministic Ethernet Building blocks for Space Applications
NASA Astrophysics Data System (ADS)
Fidi, C.; Jakovljevic, Mirko
2015-09-01
The benefits of using commercially based networking standards and protocols have been widely discussed and are expected to include reduced overall mission cost, shortened integration and test (I&T) schedules, increased operational flexibility, and hardware and software upgradeability/scalability as commercial development continues. The deterministic Ethernet technology TTEthernet [1] deployed on the NASA Orion spacecraft demonstrated the use of TTEthernet for a safety-critical human spaceflight application during Exploration Flight Test 1 (EFT-1). The TTEthernet technology used within the NASA Orion program was matured for that mission but did not lead to broader use in space applications or to an international space standard. TTTech has therefore developed a new version that allows the technology to be scaled to different applications, not only high-end missions, decreasing the size of the building blocks and thereby reducing size, weight, and power and enabling use in smaller applications. TTTech is currently developing a full space-product offering for its TTEthernet technology to allow its use in a range of space applications not restricted to launchers and human spaceflight. A broad space-market assessment and the current ESA TRP7594 led to the development of a space-grade TTEthernet controller ASIC based on the ESA-qualified Atmel AT1C8RHA95 process [2]. In this paper we describe our current TTEthernet controller development toward a space-qualified network component, allowing future spacecraft to operate in significant radiation environments while using a single onboard network for reliable commanding and data transfer.
"Eztrack": A single-vehicle deterministic tracking algorithm
Carrano, C J
2007-12-20
A variety of surveillance operations require the ability to track vehicles over a long period of time using sequences of images taken from a camera mounted on an airborne or similar platform. To see and track a vehicle for any length of time, one needs either a persistent-surveillance imager that can cover wide fields of view over a long time span or a highly maneuverable, smaller field-of-view imager that can follow the vehicle of interest. The algorithm described here was designed for the persistent-surveillance case. It turns out that most vehicle tracking algorithms described in the literature [1,2,3,4] are designed for higher frame rates (>5 FPS), relatively short ground sampling distances (GSD), and fine resolutions (~a few cm to a couple tens of cm). But for our datasets we are restricted to lower resolutions and GSDs (≥0.5 m) and limited frame rates (≤2.0 Hz). As a consequence, we designed our own simple approach in IDL: a deterministic, motion-guided object tracker. The object tracking relies on both object features and path dynamics. The algorithm certainly has room for future improvements, but we have found it to be a useful tool in evaluating the effects of frame rate, resolution/GSD, and spectral content (e.g., grayscale vs. color imaging). A block diagram of the tracking approach is given in Figure 1. We describe each of the blocks of the diagram in the upcoming sections.
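The motion-guided matching idea can be sketched as follows; the gate size, the scalar appearance feature, and the cost weighting are hypothetical stand-ins for the actual feature and dynamics terms used in Eztrack:

```python
import math

# Sketch of a deterministic, motion-guided tracking step at low frame
# rates: predict the vehicle's next position from its recent velocity,
# then pick the gated candidate detection minimizing a combined
# position/appearance cost. Detections and features are hypothetical.

def track_step(history, candidates, gate=10.0, w_feat=0.5):
    """history: list of (x, y, feature) past states; candidates: list of
    (x, y, feature) detections in the new frame. feature is a scalar here
    (e.g., mean intensity) for simplicity. Returns best match or None."""
    x1, y1, f1 = history[-1]
    if len(history) >= 2:
        x0, y0, _ = history[-2]
        pred = (2 * x1 - x0, 2 * y1 - y0)   # constant-velocity prediction
    else:
        pred = (x1, y1)
    best, best_cost = None, math.inf
    for (x, y, f) in candidates:
        d = math.hypot(x - pred[0], y - pred[1])
        if d > gate:                        # outside the motion gate
            continue
        cost = d + w_feat * abs(f - f1)     # path dynamics + appearance
        if cost < best_cost:
            best, best_cost = (x, y, f), cost
    return best

hist = [(0.0, 0.0, 100.0), (2.0, 0.0, 101.0)]
cands = [(4.5, 0.2, 100.5), (3.0, 5.0, 60.0), (12.0, 0.0, 101.0)]
match = track_step(hist, cands)
```

Returning `None` when no candidate passes the gate models track loss, which a fuller implementation would handle by coasting the prediction for a few frames.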
Accurate deterministic solutions for the classic Boltzmann shock profile
NASA Astrophysics Data System (ADS)
Yue, Yubei
The Boltzmann equation or Boltzmann transport equation is a classical kinetic equation devised by Ludwig Boltzmann in 1872. It is regarded as a fundamental law in rarefied gas dynamics. Rather than using macroscopic quantities such as density, temperature, and pressure to describe the underlying physics, the Boltzmann equation uses a distribution function in phase space to describe the physical system, and all the macroscopic quantities are weighted averages of the distribution function. The information contained in the Boltzmann equation is surprisingly rich, and the Euler and Navier-Stokes equations of fluid dynamics can be derived from it using series expansions. Moreover, the Boltzmann equation can reach regimes far from the capabilities of fluid dynamical equations, such as the realm of rarefied gases---the topic of this thesis. Although the Boltzmann equation is very powerful, it is extremely difficult to solve in most situations. Thus the only hope is to solve it numerically. But soon one finds that even a numerical simulation of the equation is extremely difficult, due to both the complex and high-dimensional integral in the collision operator, and the hyperbolic phase-space advection terms. For this reason, until a few years ago most numerical simulations had to rely on Monte Carlo techniques. In this thesis I will present a new and robust numerical scheme to compute direct deterministic solutions of the Boltzmann equation, and I will use it to explore some classical gas-dynamical problems. In particular, I will study in detail one of the most famous and intrinsically nonlinear problems in rarefied gas dynamics, namely the accurate determination of the Boltzmann shock profile for a gas of hard spheres.
Reduced-Complexity Deterministic Annealing for Vector Quantizer Design
NASA Astrophysics Data System (ADS)
Demirciler, Kemal; Ortega, Antonio
2005-12-01
This paper presents a reduced-complexity deterministic annealing (DA) approach for vector quantizer (VQ) design that uses soft information processing with simplified assignment measures. Low-complexity distributions are designed to mimic the Gibbs distribution, the optimal distribution used in the standard DA method. These low-complexity distributions are simple enough to facilitate fast computation, yet they can closely approximate the Gibbs distribution and thus yield near-optimal performance. We have also derived the theoretical performance loss at a given system entropy due to using the simple soft measures instead of the optimal Gibbs measure. We use the derived result to obtain optimal annealing schedules for the simple soft measures that approximate the annealing schedule for the optimal Gibbs distribution. The proposed reduced-complexity DA algorithms significantly improve the quality of the final codebooks compared to the generalized Lloyd algorithm and standard stochastic relaxation techniques, both with and without the pairwise nearest neighbor (PNN) codebook initialization. The proposed algorithms are able to evade local minima, and the results show that they are not sensitive to the choice of the initial codebook. Compared to the standard DA approach, the reduced-complexity DA algorithms can operate over 100 times faster with negligible performance difference. For example, for the design of a 16-dimensional vector quantizer with a rate of 0.4375 bit/sample for a Gaussian source, the standard DA algorithm achieved 3.60 dB performance in 16,483 CPU seconds, whereas the reduced-complexity DA algorithm achieved the same performance in 136 CPU seconds. Beyond VQ design, DA techniques are applicable to problems such as classification, clustering, and resource allocation.
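A minimal sketch of the standard DA iteration that the paper's reduced-complexity measures approximate (the 1-D data, two-codevector setup, and geometric temperature schedule are illustrative choices, not the paper's experimental configuration):

```python
import math

# Sketch of deterministic annealing for codebook design: data points are
# softly assigned to codevectors via a Gibbs distribution at temperature
# T, codevectors are re-estimated as weighted means, and T is lowered on
# a geometric schedule. 1-D data for brevity.

def da_codebook(data, n_codes=2, t_start=10.0, t_stop=0.01, alpha=0.8):
    lo, hi = min(data), max(data)
    codes = [lo + (hi - lo) * (j + 0.5) / n_codes for j in range(n_codes)]
    t = t_start
    while t > t_stop:
        num = [0.0] * n_codes
        den = [0.0] * n_codes
        for x in data:
            w = [math.exp(-(x - c) ** 2 / t) for c in codes]  # Gibbs weights
            z = sum(w) or 1e-300
            for j in range(n_codes):
                num[j] += (w[j] / z) * x      # soft assignment of x to code j
                den[j] += w[j] / z
        codes = [num[j] / den[j] if den[j] > 0 else codes[j]
                 for j in range(n_codes)]
        t *= alpha                            # annealing schedule
    return sorted(codes)

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
codes = da_codebook(data)
```

At high temperature the assignments are nearly uniform and the codevectors sit near the global mean; as the temperature drops the assignments harden and the codevectors separate toward the cluster means, which is how DA evades poor local minima.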
Merging deterministic and probabilistic approaches to forecast volcanic scenarios
NASA Astrophysics Data System (ADS)
Peruzzo, E.; Bisconti, L.; Barsanti, M.; Flandoli, F.; Papale, P.
2009-04-01
Volcanoes are extremely complex systems largely inaccessible to direct observation. As a consequence, many quantities that are relevant in determining the physical and chemical processes occurring at volcanoes are largely uncertain. On the other hand, the demand for eruption scenario forecasts at many hazardous volcanoes in the world is pressing, which is reflected in the development and use of increasingly complex physical models and numerical codes. Such codes are capable of accounting for the extremely complex, non-linear behaviour of volcanic processes, and for the roles of several quantities in determining volcanic scenarios and hazards. However, they often require enormous computer resources and imply long (order of days to weeks) CPU times even on the most advanced parallel computation systems available to date. As a consequence, they can hardly be used to reasonably cover the spectrum of possible conditions expected at a given volcano. To this end, we have started the development of a mixed deterministic-probabilistic approach with the aim of substantially reducing (from order 10000 to 10) the number of simulations needed to adequately represent possible scenarios and their probability of occurrence, corresponding to a given set of probability distributions for the initial/boundary conditions characterizing the system. The core of the problem is to find a "best" discretization of the continuous density function describing the random variables input to the model. This is done through stochastic quantization theory (Graf and Luschgy, 2000). The application of this theory to volcanic scenario forecasting has been tested with both an oversimplified analytical model and a more complex numerical model of magma flow in volcanic conduits, the latter still running in relatively short times to allow comparison with Monte Carlo simulations. The final aim is to define proper strategies and paradigms for application to more complex, time-demanding codes
Deterministic Computer-Controlled Polishing Process for High-Energy X-Ray Optics
NASA Technical Reports Server (NTRS)
Khan, Gufran S.; Gubarev, Mikhail; Speegle, Chet; Ramsey, Brian
2010-01-01
A deterministic computer-controlled polishing process for large X-ray mirror mandrels is presented. Using the tool's influence function and the material removal rate extracted from polishing experiments, design considerations for polishing laps and optimized operating parameters are discussed.
Dini-Andreote, Francisco; Stegen, James C.; van Elsas, Jan D.; Falcao Salles, Joana
2015-03-17
Despite growing recognition that deterministic and stochastic factors simultaneously influence bacterial communities, little is known about mechanisms shifting their relative importance. To better understand underlying mechanisms, we developed a conceptual model linking ecosystem development during primary succession to shifts in the stochastic/deterministic balance. To evaluate the conceptual model we coupled spatiotemporal data on soil bacterial communities with environmental conditions spanning 105 years of salt marsh development. At the local scale there was a progression from stochasticity to determinism due to Na accumulation with increasing ecosystem age, supporting a main element of the conceptual model. At the regional scale, soil organic matter (SOM) governed the relative influence of stochasticity and the type of deterministic ecological selection, suggesting scale dependency in how deterministic ecological selection is imposed. Analysis of a new ecological simulation model supported these conceptual inferences. Looking forward, we propose an extended conceptual model that integrates primary and secondary succession in microbial systems.
A deterministic particle method for one-dimensional reaction-diffusion equations
NASA Technical Reports Server (NTRS)
Mascagni, Michael
1995-01-01
We derive a deterministic particle method for the solution of nonlinear reaction-diffusion equations in one spatial dimension. This deterministic method is an analog of a Monte Carlo method for the solution of these problems that has previously been investigated by the author. The deterministic method leads to the consideration of a system of ordinary differential equations for the positions of suitably defined particles. We then consider time-explicit and time-implicit methods for this system of ordinary differential equations, and we study Picard and Newton iterations for the solution of the implicit system. Next we solve this system numerically and study the discretization error both analytically and numerically. Numerical computation shows that this deterministic method is automatically adaptive to large gradients in the solution.
Yildirim, Necmettin; Kazanci, Caner
2011-01-01
A brief introduction to mathematical modeling of biochemical regulatory reaction networks is presented. Both deterministic and stochastic modeling techniques are covered with examples from enzyme kinetics, coupled reaction networks with oscillatory dynamics and bistability. The Yildirim-Mackey model for lactose operon is used as an example to discuss and show how deterministic and stochastic methods can be used to investigate various aspects of this bacterial circuit. PMID:21187231
Deterministic methods in radiation transport. A compilation of papers presented February 4-5, 1992
Rice, A. F.; Roussin, R. W.
1992-06-01
The Seminar on Deterministic Methods in Radiation Transport was held February 4--5, 1992, in Oak Ridge, Tennessee. Eleven presentations were made and the full papers are published in this report, along with three that were submitted but not given orally. These papers represent a good overview of the state of the art in the deterministic solution of radiation transport problems for a variety of applications of current interest to the Radiation Shielding Information Center user community.
Implementation of Gy-Eq for deterministic effects limitation in shield design.
Wilson, John W; Kim, Myung-Hee Y; De Angelis, Giovanni; Cucinotta, Francis A; Yoshizawa, Nobuaki; Badavi, Francis F
2002-12-01
The NCRP has recently defined RBE values and a new quantity (Gy-Eq) for use in estimation of deterministic effects in space shielding and operations. The NCRP's RBE for neutrons is left ambiguous and not fully defined. In the present report we will suggest a complete definition of neutron RBE consistent with the NCRP recommendations and evaluate attenuation properties of deterministic effects (Gy-Eq) in comparison with other dosimetric quantities. PMID:12793740
Deterministic photon-photon √SWAP gate using a Λ system
Koshino, Kazuki; Ishizaka, Satoshi; Nakamura, Yasunobu
2010-07-15
We theoretically present a method to realize a deterministic photon-photon √SWAP gate using a three-level Λ system interacting with single photons in reflection geometry. The Λ system is used completely passively as a temporary memory for a photonic qubit; the initial state of the Λ system may be arbitrary, and active control by auxiliary fields is unnecessary throughout the gate operations. These distinct merits make this entangling gate suitable for deterministic and scalable quantum computation.
Martin, Guillaume; Lambert, Amaury
2015-05-01
In large populations, the distribution of the trajectory of allele frequencies under selection and genetic drift approaches a semi-deterministic behavior: a deterministic trajectory started and ended at stochastic boundary values. This provides simple yet accurate approximations for the distribution of allelic frequencies over time (conditional on fixation), and of extinction and fixation times, for both hard and soft sweeps, and under arbitrary inbreeding and dominance.
NASA Astrophysics Data System (ADS)
Kang, S.; Kim, K.; Suk, B.; Yoo, H.
2007-12-01
A strong ground motion attenuation relationship represents the overall trend of ground shaking at sites as a function of distance from the source, geology, local soil conditions, and other factors. For reliable seismic hazard/risk assessments, an attenuation relationship must be developed with careful consideration of the characteristics of the target area. In this study, observed ground motions from the January 2007 magnitude 4.9 Odaesan earthquake and from events occurring in the Gyeongsang provinces are compared with previously proposed ground-motion attenuation relationships for the Korean Peninsula to select the most appropriate one. In addition, several strong ground motion attenuation relationships are available in HAZUS, designed for the Western United States and for the Central and Eastern United States. The relationship selected from those for the Korean Peninsula was compared with the attenuation relationships available in HAZUS, and the relationship for the Western United States proposed by Sadigh et al. (1997) for Site Class B was selected for this study. The reliability of the assessment is improved by using an appropriate attenuation relationship. It was used for earthquake loss estimation for the Gyeongju area in southeast Korea with the deterministic method in HAZUS and a scenario earthquake (M=6.7). Our preliminary estimates show 15.6% damage to houses, shelter needs for about three thousand residents, and 75 lives lost in the study area for a scenario event occurring at 2 A.M. Approximately 96% of hospitals would be in normal operation within 24 hours of the event. Losses related to houses would exceed 114 million US dollars. Application of this improved loss-estimation methodology in Korea will help decision makers plan disaster response and hazard mitigation.
Smith, Leon E.; Gesh, Christopher J.; Pagh, Richard T.; Miller, Erin A.; Shaver, Mark W.; Ashbaker, Eric D.; Batdorf, Michael T.; Ellis, J. E.; Kaye, William R.; McConn, Ronald J.; Meriwether, George H.; Ressler, Jennifer J.; Valsan, Andrei B.; Wareing, Todd A.
2008-10-31
Radiation transport modeling methods used in the radiation detection community fall into one of two broad categories: stochastic (Monte Carlo) and deterministic. Monte Carlo methods are typically the tool of choice for simulating gamma-ray spectrometers operating in homeland and national security settings (e.g. portal monitoring of vehicles or isotope identification using handheld devices), but deterministic codes that discretize the linear Boltzmann transport equation in space, angle, and energy offer potential advantages in computational efficiency for many complex radiation detection problems. This paper describes the development of a scenario simulation framework based on deterministic algorithms. Key challenges include: formulating methods to automatically define an energy group structure that can support modeling of gamma-ray spectrometers ranging from low to high resolution; combining deterministic transport algorithms (e.g. ray-tracing and discrete ordinates) to mitigate ray effects for a wide range of problem types; and developing efficient and accurate methods to calculate gamma-ray spectrometer response functions from the deterministic angular flux solutions. The software framework aimed at addressing these challenges is described and results from test problems that compare coupled deterministic-Monte Carlo methods and purely Monte Carlo approaches are provided.
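As an illustration of the deterministic side of the comparison, a one-group discrete-ordinates sweep in a 1-D slab can be written compactly; the geometry, cross sections, and simple angle set below are illustrative assumptions, not taken from the framework described:

```python
import math

# Sketch of a one-group discrete-ordinates (Sn) calculation in a 1-D slab
# with isotropic scattering: the deterministic counterpart of a Monte
# Carlo transport run. Source iteration with diamond differencing.

def sn_slab(width=4.0, n_cells=100, sigma_t=1.0, sigma_s=0.5,
            q=1.0, n_angles=8, tol=1e-8, max_iter=1000):
    dx = width / n_cells
    # simple symmetric midpoint angle set on (-1, 1); weights sum to 2
    mus = [-1.0 + (i + 0.5) * 2.0 / n_angles for i in range(n_angles)]
    wts = [2.0 / n_angles] * n_angles
    phi = [0.0] * n_cells                       # scalar flux
    for _ in range(max_iter):
        src = [(sigma_s * phi[i] + q) / 2.0 for i in range(n_cells)]
        phi_new = [0.0] * n_cells
        for mu, w in zip(mus, wts):
            order = range(n_cells) if mu > 0 else range(n_cells - 1, -1, -1)
            psi_in = 0.0                        # vacuum boundary condition
            a = abs(mu) / dx
            for i in order:
                # diamond difference: cell-average angular flux
                psi_avg = (src[i] + 2.0 * a * psi_in) / (sigma_t + 2.0 * a)
                psi_in = 2.0 * psi_avg - psi_in     # outgoing edge flux
                phi_new[i] += w * psi_avg
        converged = max(abs(p - r) for p, r in zip(phi_new, phi)) < tol
        phi = phi_new
        if converged:
            break
    return phi

phi = sn_slab()   # center flux approaches q/(sigma_t - sigma_s) from below
```

Source iteration converges slowly for highly scattering media, and in multiple dimensions a finite angle set produces the ray effects the paper mitigates by combining discrete ordinates with ray tracing.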
Company, Anna; Prat, Irene; Frisch, Jonathan R; Mas-Ballesté, Ruben; Güell, Mireia; Juhász, Gergely; Ribas, Xavi; Münck, Eckard; Luis, Josep M; Que, Lawrence; Costas, Miquel
2011-02-01
The spectroscopic and chemical characterization of a new synthetic non-heme iron(IV)-oxo species [Fe(IV)(O)((Me,H) Pytacn)(S)](2+) (2, (Me,H)Pytacn=1-(2'-pyridylmethyl)-4,7-dimethyl-1,4,7-triazacyclononane, S=CH(3)CN or H(2)O) is described. Complex 2 was prepared by reaction of [Fe(II)(CF(3)SO(3))(2)((Me,H) Pytacn)] (1) with peracetic acid. Complex 2 bears a tetradentate N(4) ligand that leaves two cis sites available for binding an oxo group and a second external ligand but, unlike the related iron(IV)-oxo species with tetradentate ligands, it is remarkably stable at room temperature (t(1/2)>2 h at 288 K). Its ability to exchange the oxygen atom of the oxo ligand with water has been analyzed in detail by means of kinetic studies, and a mechanism is proposed on the basis of DFT calculations. Hydrogen-atom abstraction from C-H bonds and oxygen-atom transfer to sulfides by 2 have also been studied. Despite its thermal stability, 2 proves to be a very powerful oxidant that is capable of breaking the strong C-H bond of cyclohexane (bond dissociation energy=99.3 kcal mol(-1)).
Deterministic Modeling of the High Temperature Test Reactor
Ortensi, J.; Cogliati, J. J.; Pope, M. A.; Ferrer, R. M.; Ougouag, A. M.
2010-06-01
Idaho National Laboratory (INL) is tasked with the development of reactor physics analysis capability for the Next Generation Nuclear Plant (NGNP) project. In order to examine INL's current prismatic-reactor deterministic analysis tools, the project is conducting a benchmark exercise based on modeling the High Temperature Test Reactor (HTTR). This exercise entails the development of a model for the initial criticality, a 19-column thin annular core, and the fully loaded core critical condition with 30 columns. Special emphasis is devoted to the annular core modeling, which shares more characteristics with the NGNP base design. The DRAGON code is used in this study because it offers significant ease and versatility in modeling prismatic designs. Despite some geometric limitations, the code performs quite well compared to other lattice physics codes. DRAGON can generate transport solutions via collision probability (CP), method of characteristics (MOC), and discrete ordinates (Sn) methods. A fine-group cross-section library based on the SHEM 281-group energy structure is used in the DRAGON calculations. HEXPEDITE is the hexagonal-z full-core solver used in this study; it is based on the Green's function solution of the transverse-integrated equations. In addition, two Monte Carlo (MC) based codes, MCNP5 and PSG2/SERPENT, provide benchmarking capability for the DRAGON and nodal diffusion solver codes. The results from this study show a consistent bias of 2-3% in the core multiplication factor. This systematic error has also been observed in other HTTR benchmark efforts and is well documented in the literature. The ENDF/B-VII graphite and U-235 cross sections appear to be the main source of the error. The isothermal temperature coefficients calculated with the fully loaded core configuration agree well with other benchmark participants but are 40% higher than the experimental values. This discrepancy with the measurement stems from the fact that during the experiments the
Confined Crystal Growth in Space. Deterministic vs Stochastic Vibroconvective Effects
NASA Astrophysics Data System (ADS)
Ruiz, Xavier; Bitlloch, Pau; Ramirez-Piscina, Laureano; Casademunt, Jaume
The analysis of correlations between characteristics of the acceleration environment and the quality of crystalline materials grown in microgravity remains an open and interesting question. Acceleration disturbances in space environments usually give rise to effective gravity pulses, gravity pulse trains of finite duration, quasi-steady accelerations, or g-jitters. To quantify these disturbances, deterministic translational plane-polarized signals have largely been used in the literature [1]. In the present work, we take an alternative approach that models g-jitters as a stochastic process in the form of so-called narrow-band noise, which is designed to capture the main statistical properties of realistic g-jitters. In particular, we compare their effects to those of single-frequency disturbances. The crystalline quality has been characterized, following previous analyses, in terms of two parameters: the longitudinal and radial segregation coefficients. The first averages the dopant distribution transversally, providing continuous longitudinal information on the degree of segregation along the growth process. The radial segregation characterizes the degree of lateral non-uniformity of the dopant at the solid-liquid interface at each instant of growth. To complete the description, and because heat-flux fluctuations at the interface have a direct impact on crystal growth quality (growth striations), the time dependence of a Nusselt number associated with the growing interface has also been monitored. For realistic g-jitters acting orthogonally to the thermal gradient, the longitudinal segregation remains practically unperturbed in all simulated cases. Also, the Nusselt number is not significantly affected by the noise. On the other hand, radial segregation, despite its low magnitude, exhibits a peculiar low-frequency response in all realizations. [1] X. Ruiz, "Modelling of the influence of residual gravity on the segregation in
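The narrow-band-noise model of g-jitter described in this abstract can be sketched as a superposition of many random-phase modes whose frequencies cluster around a carrier, compared against a deterministic single-frequency disturbance. The amplitude, carrier frequency, bandwidth, and mode count below are illustrative assumptions, not values from the study:

```python
import numpy as np

def single_frequency_jitter(t, amplitude, f0):
    """Deterministic, plane-polarized single-frequency disturbance."""
    return amplitude * np.sin(2.0 * np.pi * f0 * t)

def narrow_band_jitter(t, amplitude, f0, bandwidth, n_modes=200, seed=None):
    """Stochastic narrow-band noise: many random-phase modes whose
    frequencies are concentrated in a band around f0. Normalized so the
    mean power matches that of the deterministic sine of equal amplitude."""
    rng = np.random.default_rng(seed)
    freqs = rng.normal(f0, bandwidth, n_modes)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_modes)
    modes = np.sin(2.0 * np.pi * np.outer(freqs, t) + phases[:, None])
    return amplitude * modes.sum(axis=0) / np.sqrt(n_modes)

t = np.linspace(0.0, 10.0, 4001)                  # seconds
g_det = single_frequency_jitter(t, 1e-4, f0=1.0)  # g-units, 1 Hz
g_sto = narrow_band_jitter(t, 1e-4, f0=1.0, bandwidth=0.05, seed=42)
```

Both signals carry the same mean power, so any difference in the simulated segregation response can be attributed to the stochastic bandwidth rather than to the forcing energy.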
A Deterministic Approach to Active Debris Removal Target Selection
NASA Astrophysics Data System (ADS)
Lidtke, A.; Lewis, H.; Armellin, R.
2014-09-01
purpose of ADR are also drawn and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
Baldwin, Darryl Dean; Willi, Martin Leo; Fiveland, Scott Byron; Timmons, Kristine Ann
2010-12-14
A segmented heat exchanger system for transferring heat energy from an exhaust fluid to a working fluid. The heat exchanger system may include a first heat exchanger for receiving incoming working fluid and the exhaust fluid. The working fluid and exhaust fluid may travel through at least a portion of the first heat exchanger in a parallel flow configuration. In addition, the heat exchanger system may include a second heat exchanger for receiving working fluid from the first heat exchanger and exhaust fluid from a third heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the second heat exchanger in a counter flow configuration. Furthermore, the heat exchanger system may include a third heat exchanger for receiving working fluid from the second heat exchanger and exhaust fluid from the first heat exchanger. The working fluid and exhaust fluid may travel through at least a portion of the third heat exchanger in a parallel flow configuration.
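The parallel-flow versus counter-flow distinction in the segmented heat exchanger described above can be quantified with the standard effectiveness-NTU relations; the NTU and capacity-rate-ratio values below are arbitrary illustrative inputs, not taken from the patent:

```python
import math

def effectiveness_parallel(ntu, cr):
    """Effectiveness of a parallel-flow heat exchanger (epsilon-NTU method)."""
    return (1.0 - math.exp(-ntu * (1.0 + cr))) / (1.0 + cr)

def effectiveness_counter(ntu, cr):
    """Effectiveness of a counter-flow heat exchanger (epsilon-NTU method)."""
    if abs(cr - 1.0) < 1e-12:
        return ntu / (1.0 + ntu)  # limiting form for equal capacity rates
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# For the same NTU and capacity-rate ratio, a counter-flow section
# transfers at least as much heat as a parallel-flow section.
ntu, cr = 2.0, 0.8
eff_par = effectiveness_parallel(ntu, cr)   # ≈ 0.540
eff_ctr = effectiveness_counter(ntu, cr)    # ≈ 0.711
print(eff_par, eff_ctr)
```

This is why mixing the two configurations across segments, as the patent describes, is a meaningful design choice: parallel flow moderates wall temperatures at the hot end, while counter flow maximizes heat recovery.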
Development of site-specific earthquake response spectra for eastern US sites
Beavers, J.E.; Brock, W.R.; Hunt, R.J.; Shaffer, K.E.
1993-08-01
Site-specific uniform-hazard earthquake response spectra have been defined for the Department of Energy Oak Ridge, Tennessee, and Portsmouth, Ohio, sites for use in evaluating existing facilities and designing new facilities. The site-specific response spectra were defined from probabilistic and deterministic seismic hazard studies following the requirements in DOE-STD-1024-92, "Guidelines for Probabilistic Seismic Hazard Curves at DOE Sites." For these two sites, the results show that site-specific uniform-hazard response spectra are slightly higher in the high-frequency range and considerably lower in the low-frequency range compared with response spectra defined for these sites in the past.
NASA Astrophysics Data System (ADS)
Szymanowski, Mariusz; Kryza, Maciej
2015-11-01
Our study examines the role of auxiliary variables in the process of spatial modelling and mapping of climatological elements, with air temperature in Poland used as an example. Multivariable algorithms are the most frequently applied for spatialization of air temperature, and in many studies their results prove better than those obtained by various one-dimensional techniques. In most previous studies, two main strategies were used to perform multidimensional spatial interpolation of air temperature. First, it was accepted that all variables significantly correlated with air temperature should be incorporated into the model. Second, it was assumed that the more spatial variation of air temperature was deterministically explained, the better the quality of spatial interpolation. The main goal of the paper was to examine both of the above-mentioned assumptions. The analysis was performed using data from 250 meteorological stations and for 69 air temperature cases aggregated at different levels: from daily means to a 10-year annual mean. Two cases were considered for detailed analysis. The set of potential auxiliary variables covered 11 environmental predictors of air temperature. Another purpose of the study was to compare the results of interpolation given by various multivariable methods using the same set of explanatory variables. Two regression models, multiple linear regression (MLR) and geographically weighted regression (GWR), as well as their extensions to regression-kriging form (MLRK and GWRK, respectively), were examined. Stepwise regression was used to select variables for the individual models, and cross-validation was used to validate the results, with special attention paid to statistically significant improvement of the model using the mean absolute error (MAE) criterion. The main results of this study led to rejection of both assumptions considered. Usually, including more than two or three of the most significantly
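The cross-validated MAE criterion used above to judge whether extra correlated predictors actually improve a multiple linear regression can be sketched on synthetic data. The station count matches the abstract, but the predictors and the temperature model below are invented for illustration, not the Polish dataset:

```python
import numpy as np

def cv_mae(X, y, k=5, seed=0):
    """k-fold cross-validated mean absolute error of an OLS fit."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        A = np.column_stack([np.ones(len(train)), X[train]])
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        At = np.column_stack([np.ones(len(test)), X[test]])
        errs.append(np.abs(At @ coef - y[test]).mean())
    return float(np.mean(errs))

# Synthetic "air temperature" data: 250 stations, two genuinely
# informative predictors (elevation, latitude) plus eight noise
# predictors that add nothing by construction.
rng = np.random.default_rng(1)
n = 250
elev = rng.normal(500.0, 200.0, n)
lat = rng.normal(52.0, 2.0, n)
extras = rng.normal(0.0, 1.0, (n, 8))
temp = 15.0 - 0.0065 * elev - 0.5 * (lat - 52.0) + rng.normal(0.0, 0.3, n)

mae_small = cv_mae(np.column_stack([elev, lat]), temp)
mae_big = cv_mae(np.column_stack([elev, lat, extras]), temp)
print(mae_small, mae_big)  # compare the two models' CV errors
```

On data like this, the eight extra predictors cannot lower the cross-validated MAE in expectation, which mirrors the paper's finding that piling on significantly correlated variables does not guarantee better interpolation.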
47 CFR 22.973 - Information exchange.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 2 2011-10-01 2011-10-01 false Information exchange. 22.973 Section 22.973... Cellular Radiotelephone Service § 22.973 Information exchange. (a) Prior notification. Public safety/CII... information to the public safety/CII licensee at least 10 business days before a new cell site is activated...
Inorganic ion exchangers for nuclear waste remediation
Clearfield, A.; Bortun, A.; Bortun, L.; Behrens, E.
1997-10-01
The objective of this work is to provide a broad spectrum of inorganic ion exchangers that can be used for a range of applications and separations involving remediation of groundwater and tank wastes. The authors intend to scale-up the most promising exchangers, through partnership with AlliedSignal Inc., to provide samples for testing at various DOE sites. While much of the focus is on exchangers for removal of Cs⁺ and Sr²⁺ from highly alkaline tank wastes, especially at Hanford, the authors have also synthesized exchangers for acid wastes, alkaline wastes, groundwater, and mercury, cobalt, and chromium removal. These exchangers are now available for use at DOE sites. Many of the ion exchangers described here are new, and others are improved versions of previously known exchangers. They are generally one of three types: (1) layered compounds, (2) framework or tunnel compounds, and (3) amorphous exchangers in which a gel exchanger is used to bind a fine powder into a bead for column use. Most of these exchangers can be regenerated and used again.
Educator Exchange Resource Guide.
ERIC Educational Resources Information Center
Garza, Cris; Rodriguez, Victor
This resource guide was developed for teachers and administrators interested in participating in intercultural and international exchange programs or starting an exchange program. An analysis of an exchange program's critical elements discusses exchange activities; orientation sessions; duration of exchange; criteria for participation; travel,…
Wildfire susceptibility mapping: comparing deterministic and stochastic approaches
NASA Astrophysics Data System (ADS)
Pereira, Mário; Leuenberger, Michael; Parente, Joana; Tonini, Marj
2016-04-01
Conservation of Nature and Forests (ICNF) (http://www.icnf.pt/portal), which provides a detailed description of the shape and size of the area burnt by each fire in each year of occurrence. Two methodologies for susceptibility mapping were compared. First, the deterministic approach, based on the study of Verde and Zêzere (2010), which includes the computation of favorability scores for each variable and the fire occurrence probability, as well as the validation of each model resulting from the integration of the different variables. Second, as a non-linear method we selected the Random Forest algorithm (Breiman, 2001): this allowed us to identify the most relevant variables conditioning the presence of wildfire and to generate a map of fire susceptibility based on the resulting variable-importance measures. By means of GIS techniques, we mapped the obtained predictions, which represent the susceptibility of the study area to fires. The results of applying both methodologies for wildfire susceptibility mapping, as well as wildfire hazard maps for different total annual burnt-area scenarios, were compared with the reference maps, allowing us to assess the best approach for susceptibility mapping in Portugal. References: - Breiman, L. (2001). Random forests. Machine Learning, 45, 5-32. - Verde, J. C., & Zêzere, J. L. (2010). Assessment and validation of wildfire susceptibility and hazard in Portugal. Natural Hazards and Earth System Science, 10(3), 485-497.
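The Random Forest variable-importance idea used above can be illustrated with a deliberately tiny, self-contained forest of bootstrap decision stumps on synthetic susceptibility data. The feature names and data-generating model are invented, and a real study would use a full library implementation (e.g. scikit-learn) rather than this sketch:

```python
import numpy as np

def gini(y):
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def best_stump(X, y, feat_ids):
    """Best single CART split (by Gini decrease) among candidate features."""
    best = (None, None, 0.0)
    base = gini(y)
    for f in feat_ids:
        for thr in np.quantile(X[:, f], [0.25, 0.5, 0.75]):
            left = X[:, f] <= thr
            if left.all() or not left.any():
                continue
            w = left.mean()
            dec = base - (w * gini(y[left]) + (1.0 - w) * gini(y[~left]))
            if dec > best[2]:
                best = (f, thr, dec)
    return best

def forest_importance(X, y, n_trees=200, seed=0):
    """Toy Random Forest of bootstrap stumps: accumulate mean-decrease-
    in-impurity importance per feature, normalized to sum to one."""
    rng = np.random.default_rng(seed)
    imp = np.zeros(X.shape[1])
    m = max(1, int(np.sqrt(X.shape[1])))  # features tried per tree
    for _ in range(n_trees):
        boot = rng.integers(0, len(y), len(y))
        feats = rng.choice(X.shape[1], m, replace=False)
        f, _, dec = best_stump(X[boot], y[boot], feats)
        if f is not None:
            imp[f] += dec
    return imp / imp.sum()

# Synthetic susceptibility data: fire presence driven by slope and
# dryness; "land_cover" is irrelevant by construction.
rng = np.random.default_rng(3)
n = 600
slope = rng.uniform(0.0, 40.0, n)
dryness = rng.uniform(0.0, 1.0, n)
land_cover = rng.integers(0, 5, n).astype(float)
p_fire = 1.0 / (1.0 + np.exp(-(0.1 * slope + 3.0 * dryness - 4.0)))
fire = (rng.uniform(0.0, 1.0, n) < p_fire).astype(float)

imp = forest_importance(np.column_stack([slope, dryness, land_cover]), fire)
print(dict(zip(["slope", "dryness", "land_cover"], imp.round(3))))
```

The importance vector ranks the two informative predictors well above the irrelevant one, which is exactly the property the abstract exploits to select the variables conditioning wildfire presence.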
A new methodology for deterministic landslide risk assessment at the local scale
NASA Astrophysics Data System (ADS)
Cotecchia, F.; Santaloia, F.; Lollino, P.; Vitone, C.; Mitaritonna, G.; Parise, M.
2009-04-01
The present paper discusses the formulation of a methodology being developed for regional landslide risk assessment within geologically complex areas, and some preliminary results of its application at the intermediate scale (i.e. between the regional and the slope scale). In particular, the methodology is the subject of an on-going multidisciplinary research project, which aims at the assessment of the landslide hazard, the corresponding vulnerability of structures, and their exposure, involving different areas of expertise. As such, both the landslide hazard and structure vulnerability assessments are meant to be based upon knowledge of the failure mechanisms and to benefit from scientific knowledge in the fields of both geotechnical engineering and structural mechanics. At the same time, the exposure of the elements at risk is to be investigated through analyses of the socio-economic context in which the risk is being evaluated. In the present paper only the work relating to landslide hazard is presented. This work aims at the further development of Quantitative Landslide Hazard Assessment (QHA) following a deterministic approach. As such, it is aimed at exporting the geo-mechanical interpretation of slope stability and landslide mechanisms from the slope (site-specific) scale to the regional scale. The results of this methodology will be implemented in a GIS system and reported in guidelines. Concerning the landslide hazard assessment, the proposed methodology involves two interconnected working phases, the first at regional scale and the second at town scale. During the first phase, an analytical database of all the factors affecting slope equilibrium is created, and a geo-hydro-mechanical classification of the soil masses is defined together with the main landslide typologies present in the region. Thereafter, the connections existing among the sets of internal factors of landslides, which characterise the geo
NASA Astrophysics Data System (ADS)
Chung, Ming-Chien; Tan, Chih-Hao; Chen, Mien-Min; Su, Tai-Wei
2013-04-01
Taiwan is an active mountain belt created by the oblique collision between the northern Luzon arc and the Asian continental margin. The inherent complexity of its geological nature creates numerous discontinuities through rock masses and relatively steep hillsides on the island. In recent years, the increase in the frequency and intensity of extreme natural events due to global warming or climate change has brought significant landslides. The causes of landslides on these slopes are attributed to a number of factors. As is well known, rainfall is one of the most significant triggering factors for landslide occurrence. In general, rainfall infiltration changes the suction and moisture of the soil, raises the unit weight of the soil, and reduces the shear strength of the soil in the colluvium of a landslide. The stability of a landslide is closely related to the groundwater pressure in response to rainfall infiltration, the geological and topographical conditions, and the physical and mechanical parameters. To assess the potential susceptibility to landslide, effective modeling of rainfall-induced landslides is essential. In this paper, a deterministic approach is adopted to estimate the critical rainfall threshold of rainfall-induced landslides. The critical rainfall threshold is defined as the accumulated rainfall at which the safety factor of the slope equals 1.0. First, the deterministic approach establishes the hydrogeological conceptual model of the slope based on a series of in-situ investigations, including geological drilling, surface geological investigation, geophysical investigation, and borehole explorations. The material strength and hydraulic properties of the model were given by field and laboratory tests. Second, the hydraulic and mechanical parameters of the model are calibrated against long-term monitoring data. Furthermore, a two-dimensional numerical program, GeoStudio, was employed to perform the modelling. Finally
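The definition of the critical rainfall threshold as the accumulated rainfall at which the factor of safety reaches 1.0 can be sketched with a toy infinite-slope model and a bisection search. All soil parameters and the linear rainfall-to-saturation mapping below are illustrative assumptions, not the paper's calibrated values:

```python
import math

def factor_of_safety(rain_mm, c=8.0, phi_deg=30.0, beta_deg=35.0,
                     gamma=19.0, gamma_w=9.81, z=2.0, rain_sat=150.0):
    """Infinite-slope factor of safety (kPa, kN/m^3, m). The saturated
    fraction m of the slip depth grows linearly with accumulated rainfall
    until full saturation (a deliberately simple infiltration model)."""
    m = min(1.0, rain_mm / rain_sat)
    b, p = math.radians(beta_deg), math.radians(phi_deg)
    resist = c + (gamma - m * gamma_w) * z * math.cos(b) ** 2 * math.tan(p)
    drive = gamma * z * math.sin(b) * math.cos(b)
    return resist / drive

def critical_rainfall(lo=0.0, hi=500.0, tol=1e-6):
    """Bisect for the accumulated rainfall at which FS = 1.0."""
    if factor_of_safety(hi) > 1.0:
        return None  # slope remains stable even when fully saturated
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if factor_of_safety(mid) > 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

threshold = critical_rainfall()
print(threshold)  # accumulated rainfall (mm) at which FS drops to 1.0
```

Bisection is valid here because pore-pressure build-up makes FS monotonically non-increasing with accumulated rainfall; a full analysis such as the paper's replaces this toy saturation law with a calibrated seepage model.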
Deterministic photonic cluster state generation from quantum dot molecules
NASA Astrophysics Data System (ADS)
Economou, Sophia; Gimeno-Segovia, Mercedes; Rudolph, Terry
2014-03-01
Currently, the most promising approach for photon-based quantum information processing is measurement-based, or one-way, quantum computing. In this scheme, a large entangled state of photons is prepared upfront and the computation is implemented with single-qubit measurements alone. Available approaches to generating the cluster state are probabilistic, which makes scalability challenging. We propose to generate the cluster state using a quantum dot molecule with one electron spin per quantum dot. The two spins are coupled by exchange interaction and are periodically pulsed to produce photons. We show that the entanglement created by free evolution between the spins is transferred to the emitted photons, and thus a 2D photonic ladder can be created. Our scheme only utilizes single-spin gates and measurement, and is thus fully consistent with available technology.
Towards deterministically controlled InGaAs/GaAs lateral quantum dot molecules
NASA Astrophysics Data System (ADS)
Wang, L.; Rastelli, A.; Kiravittaya, S.; Atkinson, P.; Ding, F.; Bof Bufon, C. C.; Hermannstädter, C.; Witzany, M.; Beirne, G. J.; Michler, P.; Schmidt, O. G.
2008-04-01
We report on the fabrication, detailed characterization and modeling of lateral InGaAs quantum dot molecules (QDMs) embedded in a GaAs matrix and we discuss strategies to fully control their spatial configuration and electronic properties. The three-dimensional morphology of encapsulated QDMs was revealed by selective wet chemical etching of the GaAs top capping layer and subsequent imaging by atomic force microscopy (AFM). The AFM investigation showed that different overgrowth procedures have a profound consequence on the QDM height and shape. QDMs partially capped and annealed in situ for micro-photoluminescence spectroscopy consist of shallow but well-defined quantum dots (QDs) in contrast to misleading results usually provided by surface morphology measurements when they are buried by a thin GaAs layer. This uncapping approach is crucial for determining the QDM structural parameters, which are required for modeling the system. A single-band effective-mass approximation is employed to calculate the confined electron and heavy-hole energy levels, taking the geometry and structural information extracted from the uncapping experiments as inputs. The calculated transition energy of the single QDM shows good agreement with the experimentally observed values. By decreasing the edge-to-edge distance between the two QDs within a QDM, a splitting of the electron (hole) wavefunction into symmetric and antisymmetric states is observed, indicating the presence of lateral coupling. Site control of such lateral QDMs obtained by growth on a pre-patterned substrate, combined with a technology to fabricate gate structures at well-defined positions with respect to the QDMs, could lead to deterministically controlled devices based on QDMs.
Hydrogen exchange equilibria in thiols.
Hofstetter, Dustin; Thalmann, Basil; Nauser, Thomas; Koppenol, Willem H
2012-09-17
Cysteine, cysteinyl-glycine, glutathione, phenylalanyl-cysteinyl-glycine, and histidyl-cysteinyl-glycine were dissolved in acidic and neutral D(2)O in the presence of the radical generator 2,2'-azobis(2-methylpropionamidine) dihydrochloride and radical mediator compounds (benzyl alcohol and 2-propanol). An exchange of H-atoms by D-atoms took place in these peptides due to intramolecular H-abstraction equilibria. NMR measurements allow one to follow the extent of H-D exchanges and to identify the sites where these exchanges take place. Significant exchanges occur in acidic media in GSH at positions Glu-β and Glu-γ, in Phe-Cys-Gly at positions Phe ortho, Phe-β, Cys-α, Cys-β, and Gly-α, and in His-Cys-Gly at positions His H1, His H2, His β, Cys β, and Gly α. In neutral media, exchanges occur in Cys-Gly at position Cys β and in GSH at position Cys α. Phe-Cys-Gly and His-Cys-Gly were not examined in neutral media. Sites participating in the radical exchange equilibria are highly dependent on structure and pH; the availability of electron density in the form of lone pairs appears to increase the extent of exchange. Interestingly, and unexpectedly, 2D NMR experiments show that GSH rearranges itself in acidic solution: the signals shift, but their patterns do not change. The formation of a thiolactone from Gly and Cys residues matches the changes observed.
Ibrahim, Ahmad M; Wilson, P.; Sawan, M.; Mosher, Scott W; Peplow, Douglas E.; Grove, Robert E
2013-01-01
Three mesh adaptivity algorithms were developed to facilitate and expedite the use of the CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques in accurate full-scale neutronics simulations of fusion energy systems with immense sizes and complicated geometries. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility and resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation. Additionally, because of the significant increase in the efficiency of FW-CADIS simulations, the three algorithms enabled this difficult calculation to be accurately solved on a regular computer cluster, eliminating the need for a world-class super computer.
Stability analysis of multi-group deterministic and stochastic epidemic models with vaccination rate
NASA Astrophysics Data System (ADS)
Wang, Zhi-Gang; Gao, Rui-Mei; Fan, Xiao-Ming; Han, Qi-Xing
2014-09-01
We discuss in this paper a deterministic multi-group MSIR epidemic model with a vaccination rate. The basic reproduction number ℛ0, a key parameter in epidemiology, is a threshold which determines the persistence or extinction of the disease. Using Lyapunov function techniques, we show that if ℛ0 is greater than 1 and the deterministic model obeys some conditions, then the disease prevails: the infection persists and the endemic state is asymptotically stable in a feasible region. If ℛ0 is less than or equal to 1, the infection disappears and the disease dies out. In addition, stochastic noise around the endemic equilibrium is added to the deterministic MSIR model, extending it to a system of stochastic ordinary differential equations. For the stochastic version, we carry out a detailed analysis of the asymptotic behavior of the model. Regarding the value of ℛ0, when the stochastic system obeys some conditions and ℛ0 is greater than 1, we deduce that the stochastic system is stochastically asymptotically stable. Finally, the deterministic and stochastic model dynamics are illustrated through computer simulations.
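The ℛ0 threshold behavior described above can be illustrated with a minimal single-group SIR model with newborn vaccination, a simplification of the multi-group MSIR system; all rates below are invented for illustration:

```python
def r0(beta, gamma, mu, p_vacc):
    """Basic reproduction number when a fraction p_vacc of newborns is
    vaccinated (disease-free susceptible fraction is 1 - p_vacc)."""
    return beta * (1.0 - p_vacc) / (gamma + mu)

def endemic_infected(beta, gamma, mu, p_vacc, days=2000.0, dt=0.05):
    """Deterministic SIR with births/deaths at rate mu and newborn
    vaccination; forward-Euler integration, returns final infected fraction."""
    s, i, r = 1.0 - 1e-3, 1e-3, 0.0
    for _ in range(int(days / dt)):
        ds = mu * (1.0 - p_vacc) - beta * s * i - mu * s
        di = beta * s * i - gamma * i - mu * i
        dr = mu * p_vacc + gamma * i - mu * r
        s, i, r = s + dt * ds, i + dt * di, r + dt * dr
    return i

# Same disease, two vaccination rates: one leaves R0 above 1 (the
# infection settles at an endemic level), the other pushes R0 below 1
# (the infection dies out).
i_endemic = endemic_infected(0.5, 0.2, 0.01, p_vacc=0.2)   # R0 ≈ 1.90
i_extinct = endemic_infected(0.5, 0.2, 0.01, p_vacc=0.95)  # R0 ≈ 0.12
print(i_endemic, i_extinct)
```

The paper's stochastic extension would perturb these equations with noise terms around the endemic equilibrium; the deterministic threshold at ℛ0 = 1 is what the simulation above exhibits.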
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-16
... Exchange, Inc. (``CHX'' or ``Exchange'') filed with the Securities and Exchange Commission (``Commission...'s Statement of the Terms of Substance of the Proposed Rule Change CHX proposes to amend Exchange... proposed rule change is available on the Exchange's Web site at...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-15
..., 2013, the Chicago Stock Exchange, Inc. (``CHX'' or ``Exchange'') filed with the Securities and Exchange... Substance of the Proposed Rule Change CHX proposes to amend Exchange Rules and its Schedule of Participant..., 2013. The text of this proposed rule change is available on the Exchange's Web site at...
Corrosive resistant heat exchanger
Richlen, Scott L.
1989-01-01
A corrosion- and erosion-resistant heat exchanger which recovers heat from a contaminated heat stream. The heat exchanger utilizes a boundary layer of innocuous gas, which is continuously replenished, to protect the heat exchanger surface from the hot contaminated gas. The innocuous gas is conveyed through ducts or perforations in the heat exchanger wall. Heat from the heat stream is transferred by radiation to the heat exchanger wall. Heat is removed from the outer heat exchanger wall by a heat recovery medium.
17 CFR 240.6a-3 - Supplemental material to be filed by exchanges.
Code of Federal Regulations, 2011 CFR
2011-04-01
... continuously on an Internet web site controlled by an exchange, in lieu of filing such information with the Commission, such exchange may: (i) Indicate the location of the Internet web site where such information...
NASA Astrophysics Data System (ADS)
Chen, LiBing; Lu, Hong
2015-03-01
We show how a remote positive operator-valued measurement (POVM) can be implemented deterministically using partially entangled states. First, we present a theoretical scheme for deterministically implementing a remote, controlled POVM on any one of N qubits via a partially entangled (N + 1)-qubit Greenberger-Horne-Zeilinger (GHZ) state, in which (N - 1) administrators are included. Then, we design another scheme for deterministically implementing a POVM on N remote qubits via N partially entangled qubit pairs. Our schemes are designed to achieve the optimal success probabilities, i.e. probabilities identical to those of ordinary, local POVMs. In these schemes, the POVM dictates the amount of entanglement needed. Notably, such an overall treatment can save quantum resources.
Deterministic seismic design and evaluation criteria to meet probabilistic performance goals
Short, S.A.; Murray, R.C.; Nelson, T.A.; Hill, J.R. (Office of Safety Appraisals)
1990-12-01
For DOE facilities across the United States, seismic design and evaluation criteria are based on probabilistic performance goals. In addition, other programs such as Advanced Light Water Reactors, New Production Reactors, and IPEEE for commercial nuclear power plants utilize design and evaluation criteria based on probabilistic performance goals. The use of probabilistic performance goals is a departure from design practice for commercial nuclear power plants which have traditionally been designed utilizing a deterministic specification of earthquake loading combined with deterministic response evaluation methods and permissible behavior limits. Approaches which utilize probabilistic seismic hazard curves for specification of earthquake loading and deterministic response evaluation methods and permissible behavior limits are discussed in this paper. Through the use of such design/evaluation approaches, it may be demonstrated that there is high likelihood that probabilistic performance goals can be achieved. 12 refs., 2 figs., 9 tabs.
A Comparison of Probabilistic and Deterministic Campaign Analysis for Human Space Exploration
NASA Technical Reports Server (NTRS)
Merrill, R. Gabe; Andraschko, Mark; Stromgren, Chel; Cirillo, Bill; Earle, Kevin; Goodliff, Kandyce
2008-01-01
Human space exploration is by its very nature an uncertain endeavor. Vehicle reliability, technology development risk, budgetary uncertainty, and launch uncertainty all contribute to stochasticity in an exploration scenario. However, traditional strategic analysis has been done in a deterministic manner, analyzing and optimizing the performance of a series of planned missions. History has shown that exploration scenarios rarely follow such a planned schedule. This paper describes a methodology to integrate deterministic and probabilistic analysis of scenarios in support of human space exploration. Probabilistic strategic analysis is used to simulate "possible" scenario outcomes, based upon the likelihood of occurrence of certain events and a set of pre-determined contingency rules. The results of the probabilistic analysis are compared to the nominal results from the deterministic analysis to evaluate the robustness of the scenario to adverse events and to test and optimize contingency planning.
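The contrast drawn above between a deterministic schedule and probabilistic scenario simulation with contingency rules can be sketched as a small Monte Carlo experiment. The mission count, launch-failure probability, and retry rule below are illustrative assumptions, not values from the study:

```python
import random

def campaign(n_missions=10, p_launch_fail=0.05, window_months=6,
             max_retries=2, rng=None):
    """One stochastic campaign realization: each launch uses the next
    window; a failure consumes the window and the mission is retried up
    to max_retries times (a simple pre-determined contingency rule)."""
    rng = rng or random.Random()
    months, completed = 0, 0
    for _ in range(n_missions):
        for _attempt in range(1 + max_retries):
            months += window_months
            if rng.random() >= p_launch_fail:
                completed += 1
                break
    return months, completed

def probabilistic_analysis(runs=5000, seed=7, **kw):
    """Simulate many possible campaign outcomes and average them."""
    rng = random.Random(seed)
    results = [campaign(rng=rng, **kw) for _ in range(runs)]
    mean_months = sum(m for m, _ in results) / runs
    mean_completed = sum(c for _, c in results) / runs
    return mean_months, mean_completed

deterministic_months = 10 * 6  # the nominal, failure-free planned schedule
mean_months, mean_completed = probabilistic_analysis()
print(deterministic_months, mean_months, mean_completed)
```

Comparing the simulated mean duration against the nominal schedule is the essence of the robustness check the paper describes: the probabilistic mean is always at least as long as the deterministic plan, and the gap measures the scenario's exposure to adverse events.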
A theorem allowing to derive deterministic evolution equations from stochastic evolution equations
NASA Astrophysics Data System (ADS)
Costanza, G.
2011-05-01
The deterministic evolution equations of classical as well as quantum mechanical models are derived from a set of stochastic evolution equations by averaging over realizations, using a general theorem. Examples are given showing that deterministic quantum mechanical evolution equations, obtained initially by R.P. Feynman and subsequently studied by Boghosian and Taylor IV [B.M. Boghosian, W. Taylor IV, Phys. Rev. E 57 (1998) 54. See also arXiv:quant-ph/9904035] and Meyer [D.A. Meyer, Phys. Rev. E 55 (1997) 5261], among others, are derived from a set of stochastic evolution equations. In addition, a deterministic classical evolution equation for the diffusion of monomers, similar to the second Fick law, is also obtained.
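The idea of recovering a deterministic evolution law by averaging over stochastic realizations can be illustrated with the textbook case closest to the paper's Fick-law result: an ensemble of unbiased random walks whose ensemble variance grows deterministically as 2Dt. The walk parameters below are illustrative:

```python
import numpy as np

def random_walk_ensemble(n_walkers=10000, n_steps=300, step=1.0, seed=0):
    """Positions of independent, unbiased +/-step random walks: each
    realization is stochastic, but the ensemble behaves deterministically."""
    rng = np.random.default_rng(seed)
    steps = rng.choice([-step, step], size=(n_walkers, n_steps))
    return steps.cumsum(axis=1)

paths = random_walk_ensemble()
t = np.arange(1, 301)          # time in units of the step interval
var = paths.var(axis=0)        # ensemble variance at each time

# Averaging over realizations recovers the deterministic diffusion law
# (second Fick law): the variance grows linearly as 2*D*t with
# D = step**2 / (2*dt) = 0.5 here.
D = 0.5
slope = float(np.polyfit(t, var, 1)[0])
print(slope)  # close to 2*D = 1.0
```

Each individual path is irregular, yet the ensemble-averaged moments obey a smooth deterministic equation, which is precisely the passage from stochastic to deterministic evolution equations that the theorem formalizes.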
Experimental demonstration on the deterministic quantum key distribution based on entangled photons
Chen, Hua; Zhou, Zhi-Yuan; Zangana, Alaa Jabbar Jumaah; Yin, Zhen-Qiang; Wu, Juan; Han, Yun-Guang; Wang, Shuang; Li, Hong-Wei; He, De-Yong; Tawfeeq, Shelan Khasro; Shi, Bao-Sen; Guo, Guang-Can; Chen, Wei; Han, Zheng-Fu
2016-01-01
As an important resource, entangled light sources have been used in developing quantum information technologies such as quantum key distribution (QKD). Few experiments have implemented entanglement-based deterministic QKD protocols, since the security of existing protocols may be compromised in lossy channels. In this work, we report on a loss-tolerant deterministic QKD experiment which follows a modified “Ping-Pong” (PP) protocol. The experimental results demonstrate for the first time that a secure deterministic QKD session can be fulfilled in a channel with an optical loss of 9 dB, based on a telecom-band entangled photon source. This exhibits a conceivable prospect of utilizing entangled light sources in real-life fiber-based quantum communications. PMID:26860582
Kutkov, V; Buglova, E; McKenna, T
2011-06-01
Lessons learned from responses to past events have shown that more guidance is needed for the response to radiation emergencies (in this context, a 'radiation emergency' means the same as a 'nuclear or radiological emergency') which could lead to severe deterministic effects. The International Atomic Energy Agency (IAEA) requirements for preparedness and response for a radiation emergency require, inter alia, that arrangements be made to prevent, to a practicable extent, severe deterministic effects and to provide the appropriate specialised treatment for these effects. These requirements apply to all exposure pathways, both internal and external, and to all reasonable scenarios, including those resulting from malicious acts (e.g. dirty bombs). This paper briefly describes the approach used to develop the basis for emergency response criteria for protective actions to prevent severe deterministic effects in the case of external exposure and intake of radioactive material. PMID:21617296
Deterministic LOCC transformation of three-qubit pure states and entanglement transfer
Tajima, Hiroyasu
2013-02-15
A necessary and sufficient condition for the possibility of a deterministic local operations and classical communication (LOCC) transformation of three-qubit pure states is given. The condition shows that the three-qubit pure states are a partially ordered set parametrized by five well-known entanglement parameters and a novel parameter; the five are the concurrences C_AB, C_AC, C_BC, the tangle τ_ABC and the fifth parameter J_5 of Acin et al. (2000) Ref. [19], while the new one is the entanglement charge Q_e. The order of the partially ordered set is defined by the possibility of a deterministic LOCC transformation from one state to another. In this sense, the present condition is an extension of Nielsen's work (Nielsen (1999) [14]) to three-qubit pure states. We also clarify the rules of transfer and dissipation of the entanglement caused by deterministic LOCC transformations. Moreover, the minimum number of measurements needed to reproduce an arbitrary deterministic LOCC transformation between three-qubit pure states is given. Highlights: • We obtained a necessary and sufficient condition for deterministic LOCC of 3 qubits. • We clarified rules of entanglement flow caused by measurements. • We found a new parameter which is interpreted as 'Charge of Entanglement'. • We gave a set of entanglements which determines whether two states are LU-eq. or not. • Our approach to deterministic LOCC of 3 qubits may be applicable to N qubits.
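The parameters listed in this abstract can be computed for concrete states with standard textbook formulas (not taken from the paper itself): the Wootters concurrence for the pairwise terms and the three-tangle via the Coffman-Kundu-Wootters monogamy relation τ_ABC = C²_{A(BC)} − C²_AB − C²_AC. A minimal NumPy sketch, checked against the GHZ and W states:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    flip = np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ flip @ rho.conj() @ flip)))
    lam = np.sort(lam.real)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def rho_pair(psi, traced_axis):
    """Two-qubit reduced state of a three-qubit pure state |psi> (length-8 vector)."""
    t = psi.reshape(2, 2, 2)
    rho = np.tensordot(t, t.conj(), axes=((traced_axis,), (traced_axis,)))
    return rho.reshape(4, 4)

def tangle(psi):
    """Three-tangle via the CKW monogamy relation
    tau_ABC = C_{A(BC)}^2 - C_AB^2 - C_AC^2."""
    t = psi.reshape(2, 2, 2)
    rho_a = np.tensordot(t, t.conj(), axes=((1, 2), (1, 2)))
    c_a_bc_sq = 4 * np.linalg.det(rho_a).real   # (2 sqrt(det rho_A))^2 for pure states
    c_ab = concurrence(rho_pair(psi, 2))        # trace out qubit C
    c_ac = concurrence(rho_pair(psi, 1))        # trace out qubit B
    return c_a_bc_sq - c_ab**2 - c_ac**2

ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
w = np.zeros(8); w[1] = w[2] = w[4] = 1 / np.sqrt(3)
print("GHZ: tau =", round(tangle(ghz), 6))  # GHZ: pure three-way entanglement
print("W:   tau =", round(tangle(w), 6))    # W: the three-tangle vanishes
```

The GHZ state gives τ ≈ 1 with vanishing pairwise concurrences, while the W state gives τ ≈ 0 with pairwise concurrences of 2/3, reproducing the well-known extremes.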
Ibrahim, Ahmad M.; Wilson, Paul P.H.; Sawan, Mohamed E.; Mosher, Scott W.; Peplow, Douglas E.; Wagner, John C.; Evans, Thomas M.; Grove, Robert E.
2015-06-30
The CADIS and FW-CADIS hybrid Monte Carlo/deterministic techniques dramatically increase the efficiency of neutronics modeling, but their use in the accurate design analysis of very large and geometrically complex nuclear systems has been limited by the large number of processors and memory requirements for their preliminary deterministic calculations and final Monte Carlo calculation. Three mesh adaptivity algorithms were developed to reduce the memory requirements of CADIS and FW-CADIS without sacrificing their efficiency improvement. First, a macromaterial approach enhances the fidelity of the deterministic models without changing the mesh. Second, a deterministic mesh refinement algorithm generates meshes that capture as much geometric detail as possible without exceeding a specified maximum number of mesh elements. Finally, a weight window coarsening algorithm decouples the weight window mesh and energy bins from the mesh and energy group structure of the deterministic calculations in order to remove the memory constraint of the weight window map from the deterministic mesh resolution. The three algorithms were used to enhance an FW-CADIS calculation of the prompt dose rate throughout the ITER experimental facility. Using these algorithms resulted in a 23.3% increase in the number of mesh tally elements in which the dose rates were calculated in a 10-day Monte Carlo calculation and, additionally, increased the efficiency of the Monte Carlo simulation by a factor of at least 3.4. The three algorithms enabled this difficult calculation to be accurately solved using an FW-CADIS simulation on a regular computer cluster, eliminating the need for a world-class super computer.
NASA Astrophysics Data System (ADS)
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict transfer capabilities and network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modeling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones. Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch
Prospective testing of neo-deterministic seismic hazard scenarios for the Italian territory
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Vaccari, Franco; Kossobokov, Vladimir; Panza, Giuliano F.
2013-04-01
for the space-time identification of strong earthquakes, with algorithms for the realistic modeling of ground motion. Accordingly, a set of deterministic scenarios of ground motion at bedrock, which refers to the time interval when a strong event is likely to occur within the alerted area, can be defined by means of full waveform modeling, at both regional and local scale. CN and M8S predictions, as well as the related time-dependent ground motion scenarios associated with the alarmed areas, have been regularly updated every two months since 2006. The routine application of the time-dependent NDSHA approach provides information that can be useful in assigning priorities for timely mitigation actions and, at the same time, allows for a rigorous prospective testing and validation of the proposed methodology. As an example, for sites where ground shaking values greater than 0.2 g are estimated at bedrock, further investigations can be performed taking into account the local soil conditions, to assess the performance of relevant structures, such as historical and strategic buildings. The issues related to prospective testing and validation of the time-dependent NDSHA scenarios will be discussed, illustrating the results obtained for the recent strong earthquakes in Italy, including the May 20, 2012 Emilia earthquake.
NASA Astrophysics Data System (ADS)
Park, Junbo; Ralph, D. C.; Buhrman, R. A.
2013-12-01
We model 100 ps pulse switching dynamics of orthogonal spin transfer (OST) devices that employ an out-of-plane polarizer and an in-plane polarizer. Simulation results indicate that increasing the spin polarization ratio, C_P = P_IPP/P_OPP, results in deterministic switching of the free layer without over-rotation (360° rotation). By using spin torque asymmetry to realize an enhanced effective P_IPP, we experimentally demonstrate this behavior for parallel to anti-parallel switching in OST devices. Modeling predicts that decreasing the effective demagnetization field can substantially reduce the minimum C_P required to attain deterministic switching, while retaining a low critical switching current, I_p ~ 500 μA.
NASA Astrophysics Data System (ADS)
Schwartz, I.; Cogan, D.; Schmidgall, E. R.; Gantz, L.; Don, Y.; Zieliński, M.; Gershoni, D.
2015-11-01
We use one single, few-picosecond-long, variably polarized laser pulse to deterministically write any selected spin state of a quantum dot confined dark exciton whose life and coherence time are six and five orders of magnitude longer than the laser pulse duration, respectively. The pulse is tuned to an absorption resonance of an excited dark exciton state, which acquires nonnegligible oscillator strength due to residual mixing with bright exciton states. We obtain a high-fidelity one-to-one mapping from any point on the Poincaré sphere of the pulse polarization to a corresponding point on the Bloch sphere of the spin of the deterministically photogenerated dark exciton.
Palmer, Tim N; O'Shea, Michael
2015-01-01
How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete. PMID:26528173
Hybrid method of deterministic and probabilistic approaches for multigroup neutron transport problem
Lee, D.
2012-07-01
A hybrid method of deterministic and probabilistic methods is proposed to solve the Boltzmann transport equation. The new method uses a deterministic method, the Method of Characteristics (MOC), for the fast and thermal neutron energy ranges and a probabilistic method, Monte Carlo (MC), for the intermediate resonance energy range. In the case of a continuous-energy problem, the hybrid method will be able to take advantage of the fast MOC calculation and the accurate resonance self-shielding treatment of the MC method. As a proof of principle, this paper presents the hybrid methodology applied to a multigroup form of the Boltzmann transport equation and confirms that the hybrid method can produce results consistent with the MC and MOC methods. (authors)
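The energy-range dispatch at the heart of such a hybrid scheme can be illustrated on a toy slab-transmission problem. Everything below is invented for illustration: an analytic solver stands in for the deterministic method, Monte Carlo free-flight sampling is reserved for the resonance group, and the cross-sections are made up.

```python
import math, random

def transmission_deterministic(sigma_t, thickness):
    # Uncollided transmission through a slab: exp(-sigma_t * x)
    return math.exp(-sigma_t * thickness)

def transmission_monte_carlo(sigma_t, thickness, n=200_000, seed=1):
    # Sample exponential free-flight distances; count particles that
    # cross the slab without a collision.
    rng = random.Random(seed)
    crossed = sum(1 for _ in range(n)
                  if -math.log(rng.random()) / sigma_t > thickness)
    return crossed / n

# Hypothetical group data: (name, total cross-section in 1/cm, solver)
groups = [("fast",      0.2, "deterministic"),
          ("resonance", 1.5, "monte_carlo"),   # resonance range -> MC
          ("thermal",   0.8, "deterministic")]

thickness = 2.0  # cm
for name, sigma, solver in groups:
    if solver == "deterministic":
        t = transmission_deterministic(sigma, thickness)
    else:
        t = transmission_monte_carlo(sigma, thickness)
    print(f"{name}: transmission = {t:.4f}")
```

For this uncollided-flux toy problem both solvers estimate the same quantity, so the MC result for the resonance group converges to the analytic value, mimicking the consistency check reported in the abstract.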
Deterministic amplification for cat-state engineering in circuit-QED
NASA Astrophysics Data System (ADS)
Joo, Jaewoo; Oi, Daniel; Elliott, Matthew; Ginossar, Eran; Spiller, Timothy
2015-03-01
We propose a novel implementation scheme for amplifying the size of Schrödinger cat states in superconducting circuits. While the amplification method in quantum optics is normally probabilistic, our scheme can be performed deterministically in circuit-QED. Using adiabatic methods and optimal control, we demonstrate that the amplification operation can be built deterministically in a system of a transmon qubit strongly coupled with a cavity. This amplification tool will in particular open up the potential of continuous-variable nonclassical states for practical quantum technologies, for example, stabilization of cat-type states and continuous-variable teleportation.
Piscitella, Roger R.
1987-01-01
In a woven ceramic heat exchanger using the basic tube-in-shell design, each heat exchanger, consisting of tube sheets and tubes, is woven separately. Individual heat exchangers are assembled in cross-flow configuration. Each heat exchanger is woven from high temperature ceramic fiber; the warp is continuous from tube to tube sheet, providing a smooth transition and unitized construction.
Piscitella, Roger R.
1987-05-05
In a woven ceramic heat exchanger using the basic tube-in-shell design, each heat exchanger, consisting of tube sheets and tubes, is woven separately. Individual heat exchangers are assembled in cross-flow configuration. Each heat exchanger is woven from high temperature ceramic fiber; the warp is continuous from tube to tube sheet, providing a smooth transition and unitized construction.
Sample exchange/evaluation (SEE) report - Phase III
Winters, W.I.
1996-01-01
This report describes the results from Phase III of the Sample Exchange Evaluation (SEE) program. The SEE program is used to compare analytical laboratory performance on samples from the Hanford Site's high level waste tanks.
NASA Astrophysics Data System (ADS)
Li, S.
2002-05-01
Taking advantage of recent developments in groundwater modeling research and in computer, image, and graphics processing and object-oriented programming technologies, Dr. Li and his research group have recently developed a comprehensive software system for unified deterministic and stochastic groundwater modeling. Characterized by a new real-time modeling paradigm and improved computational algorithms, the software simulates 3D unsteady flow and reactive transport in general groundwater formations subject to both systematic and "randomly" varying stresses and geological and chemical heterogeneity. The software system has the following distinct features and capabilities: interactive simulation and real-time visualization and animation of flow in response to deterministic as well as stochastic stresses; interactive, visual, and real-time particle tracking, random walk, and reactive plume modeling in both systematically and randomly fluctuating flow; interactive statistical inference, scattered data interpolation, regression, and ordinary and universal kriging, conditional and unconditional simulation; real-time, visual, and parallel conditional flow and transport simulations; interactive water and contaminant mass balance analysis and visual and real-time flux update; interactive, visual, and real-time monitoring of head and flux hydrographs and concentration breakthroughs; real-time modeling and visualization of aquifer transition from confined to unconfined to partially de-saturated or completely dry and rewetting; simultaneous and embedded subscale models, with automatic and real-time regional-to-local data extraction; multiple subscale flow and transport models; real-time modeling of steady and transient vertical flow patterns on multiple arbitrarily-shaped cross-sections and simultaneous visualization of aquifer stratigraphy, properties, hydrological features (rivers, lakes, wetlands, wells, drains, surface seeps), and dynamically adjusted surface flooding area
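The particle tracking / random walk capability mentioned above corresponds to the standard advection-dispersion random walk: each particle drifts with the velocity field and receives a Gaussian dispersion kick of variance 2·D·Δt per step. A minimal sketch with invented velocity and dispersion values:

```python
import math, random

def random_walk_step(x, y, vx, vy, D, dt, rng):
    """One advective-dispersive step of a particle: drift by the velocity
    field plus an isotropic Gaussian dispersion kick of variance 2*D*dt."""
    x += vx * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
    y += vy * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)
    return x, y

def track_particles(n=2000, steps=100, vx=1.0, vy=0.0, D=0.01, dt=0.1, seed=0):
    """Release n particles at the origin and track them for steps*dt time units."""
    rng = random.Random(seed)
    particles = [(0.0, 0.0)] * n
    for _ in range(steps):
        particles = [random_walk_step(x, y, vx, vy, D, dt, rng)
                     for x, y in particles]
    return particles

cloud = track_particles()
mean_x = sum(x for x, _ in cloud) / len(cloud)
print(f"plume centre after 10 time units: x = {mean_x:.2f}")
```

After 10 time units the plume centre sits near x = v·t = 10 and the longitudinal variance near 2·D·t = 0.2, the textbook moments of the advection-dispersion solution.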
Electrically Switched Cesium Ion Exchange
JPH Sukamto; ML Lilga; RK Orth
1998-10-23
This report discusses the results of work to develop Electrically Switched Ion Exchange (ESIX) for separations of ions from waste streams relevant to DOE site clean-up. ESIX combines ion exchange and electrochemistry to provide a selective, reversible method for radionuclide separation that lowers costs and minimizes secondary waste generation typically associated with conventional ion exchange. In the ESIX process, an electroactive ion exchange film is deposited onto a high-surface-area electrode, and ion uptake and elution are controlled directly by modulating the potential of the film. As a result, the production of secondary waste is minimized, since the large volumes of solution associated with elution, wash, and regeneration cycles typical of standard ion exchange are not needed for the ESIX process. The document is presented in two parts: Part I, the Summary Report, discusses the objectives of the project, describes the ESIX concept and the approach taken, and summarizes the major results; Part II, the Technology Description, provides a technical description of the experimental procedures and in-depth discussions on modeling, case studies, and cost comparisons between ESIX and currently used technologies.
Piscitella, R.R.
1984-07-16
This invention relates to a heat exchanger for waste heat recovery from high temperature industrial exhaust streams. In a woven ceramic heat exchanger using the basic tube-in-shell design, each heat exchanger, consisting of tube sheets and tubes, is woven separately. Individual heat exchangers are assembled in cross-flow configuration. Each heat exchanger is woven from high temperature ceramic fiber; the warp is continuous from tube to tube sheet, providing a smooth transition and unitized construction.
Ion exchange polymers for anion separations
Jarvinen, G.D.; Marsh, S.F.; Bartsch, R.A.
1997-09-23
Anion exchange resins including at least two positively charged sites and a well-defined spacing between the positive sites are provided together with a process of removing anions or anionic metal complexes from aqueous solutions by use of such resins. The resins can be substituted poly(vinylpyridine) and substituted polystyrene.
Ion exchange polymers for anion separations
Jarvinen, Gordon D.; Marsh, S. Fredric; Bartsch, Richard A.
1997-01-01
Anion exchange resins including at least two positively charged sites and a well-defined spacing between the positive sites are provided together with a process of removing anions or anionic metal complexes from aqueous solutions by use of such resins. The resins can be substituted poly(vinylpyridine) and substituted polystyrene.
Electrically controlled cesium ion exchange
Lilga, M.
1996-10-01
Several sites within the DOE complex (Savannah River, Idaho, Oak Ridge and Hanford) have underground storage tanks containing high-level waste resulting from nuclear engineering activities. To facilitate final disposal of the tank waste, it is advantageous to separate and concentrate the radionuclides for final immobilization in a vitrified glass matrix. This task proposes a new approach for radionuclide separation by combining ion exchange (IX) and electrochemistry to provide a selective and economic separation method.
In an earlier study, Puente and Obregón [Water Resour. Res. 32(1996)2825] reported on the usage of a deterministic fractal–multifractal (FM) methodology to faithfully describe an 8.3 h high-resolution rainfall time series in Boston, gathered every 15 s ...
ERIC Educational Resources Information Center
Moreland, James D., Jr
2013-01-01
This research investigates the instantiation of a Service-Oriented Architecture (SOA) within a hard real-time (stringent time constraints), deterministic (maximum predictability) combat system (CS) environment. There are numerous stakeholders across the U.S. Department of the Navy who are affected by this development, and therefore the system…
Deterministic Chaos in Open Well-stirred Bray-Liebhafsky Reaction System
NASA Astrophysics Data System (ADS)
Kolar-Anić, Ljiljana; Vukojević, Vladana; Pejić, Nataša; Grozdić, Tomislav; Anić, Slobodan
2004-12-01
Dynamics of the Bray-Liebhafsky (BL) oscillatory reaction is analyzed in a Continuously-fed well-Stirred Tank Reactor (CSTR). Deterministic chaos is found under different conditions, when temperature and acidity are chosen as control parameters. Dynamic patterns observed in real experiments are also numerically simulated.
Calculation of photon pulse height distribution using deterministic and Monte Carlo methods
NASA Astrophysics Data System (ADS)
Akhavan, Azadeh; Vosoughi, Naser
2015-12-01
Radiation transport techniques used in radiation detection systems fall into one of two categories, namely probabilistic and deterministic. While probabilistic methods are typically used in pulse height distribution simulation, recreating the behavior of each individual particle, the deterministic approach, which approximates the macroscopic behavior of particles by solution of the Boltzmann transport equation, is being developed because of its potential advantages in computational efficiency for complex radiation detection problems. In the current work, the linear transport equation is solved using two methods: a collided-components-of-the-scalar-flux algorithm, which is applied by iterating on the scattering source, and the ANISN deterministic computer code. This approach is presented in one dimension with anisotropic scattering orders up to P8 and angular quadrature orders up to S16. Also, the multi-group gamma cross-section library required for this numerical transport simulation is generated in an appropriate discrete form. Finally, photon pulse height distributions are indirectly calculated by deterministic methods and compare favorably with those from the Monte Carlo based codes MCNPX and FLUKA.
Deterministic linear-optics quantum computing based on a hybrid approach
Lee, Seung-Woo; Jeong, Hyunseok
2014-12-04
We suggest a scheme for all-optical quantum computation using hybrid qubits. It enables one to efficiently perform universal linear-optical gate operations in a simple and near-deterministic way using hybrid entanglement as off-line resources.
Deterministic switching of hierarchy during wrinkling in quasi-planar bilayers
Saha, Sourabh K.; Culpepper, Martin L.
2016-04-25
Emergence of hierarchy during compression of quasi-planar bilayers is preceded by a mode-locked state during which the quasi-planar form persists. Transition to hierarchy is determined entirely by geometrically observable parameters. This results in a universal transition phase diagram that enables one to deterministically tune hierarchy even with limited knowledge about material properties.
Controlling influenza disease: Comparison between discrete time Markov chain and deterministic model
NASA Astrophysics Data System (ADS)
Novkaniza, F.; Ivana, Aldila, D.
2016-04-01
A mathematical model of respiratory disease spread with Discrete Time Markov Chain (DTMC) and deterministic approaches for constant total population size is analyzed and compared in this article. Intervention by medical treatment and use of medical masks are included in the model as constant parameters for controlling influenza spread. Equilibrium points and the basic reproductive ratio as the endemic criterion, together with its level sets as functions of selected parameters, are given analytically and numerically as results from the deterministic model analysis. Assuming the total human population is constant, as in the deterministic model, the number of infected people is also analyzed with the Discrete Time Markov Chain (DTMC) model. Since Δt → 0, we may assume that the total number of infected people changes only from i to i + 1, i - 1, or i. An approximation of the probability of an outbreak via the gambler's ruin problem is presented. We find that no matter the value of the basic reproductive ratio ℛ0, whether larger or smaller than one, the number of infections always tends to 0 for t → ∞. Some numerical simulations comparing the deterministic and DTMC approaches are given to provide a better interpretation and understanding of the models' results.
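The deterministic-versus-DTMC comparison and the gambler's-ruin outbreak estimate can be sketched on a toy SIS caricature (not the paper's full treatment-and-mask model; all parameter values are invented). In the DTMC, each small time step moves the infected count to i+1, i-1, or leaves it unchanged, and the classical birth-death approximation puts the early-extinction probability near (γ/β)^i0:

```python
import random

def deterministic_sis(beta, gamma, i0, n, dt=0.01, t_end=200.0):
    """Forward-Euler integration of dI/dt = beta*I*(N-I)/N - gamma*I."""
    i = float(i0)
    for _ in range(int(t_end / dt)):
        i += (beta * i * (n - i) / n - gamma * i) * dt
    return i

def dtmc_infected(beta, gamma, i0, n, rng, dt=0.005, t_end=50.0):
    """DTMC: per small step, the infected count moves to i+1, i-1, or stays."""
    i = i0
    for _ in range(int(t_end / dt)):
        if i == 0 or i == n:
            break
        u = rng.random()
        p_up = beta * i * (n - i) / n * dt   # probability of one new infection
        p_down = gamma * i * dt              # probability of one recovery
        if u < p_up:
            i += 1
        elif u < p_up + p_down:
            i -= 1
    return i

# Hypothetical parameters: R0 = beta/gamma = 1.5
beta, gamma, n, i0 = 0.3, 0.2, 500, 2
rng = random.Random(42)
runs = 400
extinct = sum(dtmc_infected(beta, gamma, i0, n, rng) == 0 for _ in range(runs))
print("deterministic endemic level:", round(deterministic_sis(beta, gamma, i0, n), 1))
print("DTMC early-extinction fraction:", extinct / runs,
      "~ gambler's-ruin estimate:", round((gamma / beta) ** i0, 3))
```

The deterministic model settles at its endemic equilibrium N(1 − γ/β), while the DTMC goes extinct early in roughly a (γ/β)^i0 fraction of runs, illustrating the qualitative gap between the two approaches that the abstract discusses.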
NASA Astrophysics Data System (ADS)
Tang, Zhili
2016-06-01
This paper addresses aerodynamic drag reduction of a transport wing-fuselage configuration in the transonic regime by using a parallel Nash evolutionary/deterministic hybrid optimization algorithm. Two sets of parameters are used, namely global and local. It is shown that optimizing local and global parameters separately by using Nash algorithms is far more efficient than considering these variables as a whole.
Comparison of space radiation calculations for deterministic and Monte Carlo transport codes
NASA Astrophysics Data System (ADS)
Lin, Zi-Wei; Adams, James; Barghouty, Abdulnasser; Randeniya, Sharmalee; Tripathi, Ram; Watts, John; Yepes, Pablo
For space radiation protection of astronauts or electronic equipment, it is necessary to develop and use accurate radiation transport codes. Radiation transport codes include deterministic codes, such as HZETRN from NASA and UPROP from the Naval Research Laboratory, and Monte Carlo codes such as FLUKA, the Geant4 toolkit and HETC-HEDS. The deterministic codes and Monte Carlo codes complement each other in that deterministic codes are very fast while Monte Carlo codes are more elaborate. Therefore it is important to investigate how well the results of deterministic codes compare with those of Monte Carlo transport codes and where they differ. In this study we evaluate these different codes in their space radiation applications by comparing their output results in the same given space radiation environments, shielding geometry and material. Typical space radiation environments such as the 1977 solar minimum galactic cosmic ray environment are used as the well-defined input, and simple geometries made of aluminum, water and/or polyethylene are used to represent the shielding material. We then compare various outputs of these codes, such as the dose-depth curves and the flux spectra of different fragments and other secondary particles. These comparisons enable us to learn more about the main differences between these space radiation transport codes. At the same time, they help us to learn the qualitative and quantitative features that these transport codes have in common.
A small-world network derived from the deterministic uniform recursive tree by line graph operation
NASA Astrophysics Data System (ADS)
Hou, Pengfeng; Zhao, Haixing; Mao, Yaping; Wang, Zhao
2016-03-01
The deterministic uniform recursive tree (DURT) is one of the deterministic versions of the uniform recursive tree (URT). Zhang et al (2008 Eur. Phys. J. B 63 507-13) studied the properties of DURT, including its topological characteristics and spectral properties. Although DURT shows a logarithmic scaling with the size of the network, DURT is not a small-world network since its clustering coefficient is zero. Lu et al (2012 Physica A 391 87-92) proposed a deterministic small-world network by adding some edges with a simple rule in each DURT iteration. In this paper, we introduce a method for constructing a new deterministic small-world network by the line graph operation in each DURT iteration. The line graph operation brings about cliques at each node of the previously given graph, and the resulting line graph possesses larger clustering coefficients. On the other hand, this operation decreases the diameter by almost one, thus giving the analytic solutions to several topological characteristics of the proposed model. Supported by The Ministry of Science and Technology 973 project (No. 2010CB334708); National Science Foundation of China (Nos. 61164005, 11161037, 11101232, 11461054, 11551001); The Ministry of Education scholars and innovation team support plan of Yangtze River (No. IRT1068); Qinghai Province Nature Science Foundation Project (Nos. 2012-Z-943, 2014-ZJ-907).
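The clustering effect of the line graph operation can be checked on a small tree with a stdlib-only sketch. The DURT growth rule assumed below (each existing node acquires one new leaf per iteration) is the standard one from the cited literature; the clustering-coefficient computation is the usual local definition averaged over nodes.

```python
from itertools import combinations

def durt(iterations):
    """Deterministic uniform recursive tree: start from one node and, in each
    iteration, attach one new leaf to every node already present."""
    edges, nodes, nxt = [], [0], 1
    for _ in range(iterations):
        for v in list(nodes):          # snapshot: only pre-existing nodes grow
            edges.append((v, nxt))
            nodes.append(nxt)
            nxt += 1
    return edges

def line_graph(edges):
    """Nodes of L(G) are the edges of G; two are adjacent iff they share an endpoint."""
    return [(e, f) for e, f in combinations(edges, 2) if set(e) & set(f)]

def average_clustering(edges):
    """Average local clustering coefficient over all nodes of the edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue                   # degree-1 nodes contribute 0
        links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

tree = durt(3)         # a tree: clustering coefficient is exactly 0
lg = line_graph(tree)  # cliques appear around each original node
print(average_clustering(tree), average_clustering(lg))
```

The tree's clustering is zero, while its line graph has strictly positive clustering, which is exactly the mechanism the abstract exploits to turn DURT into a small-world network.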
Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System
ERIC Educational Resources Information Center
Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.
2015-01-01
Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…
Taking Control: Stealth Assessment of Deterministic Behaviors within a Game-Based System
ERIC Educational Resources Information Center
Snow, Erica L.; Likens, Aaron D.; Allen, Laura K.; McNamara, Danielle S.
2016-01-01
Game-based environments frequently afford students the opportunity to exert agency over their learning paths by making various choices within the environment. The combination of log data from these systems and dynamic methodologies may serve as a stealth means to assess how students behave (i.e., deterministic or random) within these learning…
Tag-mediated cooperation with non-deterministic genotype-phenotype mapping
NASA Astrophysics Data System (ADS)
Zhang, Hong; Chen, Shu
2016-01-01
Tag-mediated cooperation provides a helpful framework for resolving evolutionary social dilemmas. However, most previous studies have not taken into account the genotype-phenotype distinction in tags, which may play an important role in the process of evolution. To take this into consideration, we introduce non-deterministic genotype-phenotype mapping into a tag-based model with a spatial prisoner's dilemma. By our definition, similarity between genotypic tags does not directly imply similarity between phenotypic tags. We find that the non-deterministic mapping from genotypic tag to phenotypic tag has non-trivial effects on tag-mediated cooperation. Although we observe that high levels of cooperation can be established under a wide variety of conditions, especially when the decisiveness is moderate, the uncertainty in the determination of phenotypic tags may have a detrimental effect on the tag mechanism by disturbing the homophilic interaction structure which explains the promotion of cooperation in tag systems. Furthermore, the non-deterministic mapping may undermine the robustness of the tag mechanism with respect to various factors such as the structure of the tag space and the tag flexibility. This observation warns us about the danger of applying classical tag-based models to the analysis of empirical phenomena if the genotype-phenotype distinction is significant in the real world. Non-deterministic genotype-phenotype mapping thus provides a new perspective on the understanding of tag-mediated cooperation.
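The detrimental effect of a noisy genotype-phenotype mapping on homophilic tag matching can be sketched as follows. This is a toy model with invented noise and tolerance values, not the paper's spatial prisoner's dilemma: two agents share the same genotypic tag, each expresses a phenotypic tag that is perturbed with some probability, and they "cooperate" only if the expressed tags still fall within a recognition tolerance.

```python
import random

def phenotype(genotype, noise, rng):
    """Non-deterministic genotype-phenotype mapping: with probability `noise`,
    the expressed tag is perturbed away from the genotypic tag."""
    if rng.random() < noise:
        return genotype + rng.gauss(0, 0.3)
    return genotype

def interaction_homophily(noise, tolerance=0.1, pairs=50_000, seed=7):
    """Fraction of genotypically identical pairs whose phenotypic tags still
    fall within `tolerance` of each other, i.e. who still recognise each other."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(pairs):
        g = rng.random()                               # shared genotypic tag
        a, b = phenotype(g, noise, rng), phenotype(g, noise, rng)
        hits += abs(a - b) <= tolerance
    return hits / pairs

for noise in (0.0, 0.2, 0.5):
    print(f"noise = {noise}: recognition rate = {interaction_homophily(noise):.3f}")
```

With zero noise, genotypic kin always recognise each other; as the mapping becomes less deterministic, the recognition rate drops, which is the disturbance of the homophilic interaction structure the abstract describes.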
Vernekar, R; Krüger, T
2015-09-01
We investigate the effect of particle volume fraction on the efficiency of deterministic lateral displacement (DLD) devices. DLD is a popular passive sorting technique for microfluidic applications. Yet, it has been designed for treating dilute suspensions, and its efficiency for denser samples is not well known. We perform 3D simulations based on the immersed-boundary, lattice-Boltzmann and finite-element methods to model the flow of red blood cells (RBCs) in different DLD devices. We quantify the DLD efficiency in terms of appropriate "failure" probabilities and RBC counts in designated device outlets. Our main result is that the displacement mode breaks down upon an increase of RBC volume fraction, while the zigzag mode remains relatively robust. This suggests that the separation of larger particles (such as white blood cells) from a dense RBC background is simpler than separating smaller particles (such as platelets) from the same background. The observed breakdown stems from non-deterministic particle collisions interfering with the designed deterministic nature of DLD devices. Therefore, we postulate that dense suspension effects generally hamper efficient particle separation in devices based on deterministic principles.
Showing particles their place: deterministic colloid immobilization by gold nanomeshes.
Stelling, Christian; Mark, Andreas; Papastavrou, Georg; Retsch, Markus
2016-08-14
The defined immobilization of colloidal particles on a non-close packed lattice on solid substrates is a challenging task in the field of directed colloidal self-assembly. In this contribution the controlled self-assembly of polystyrene beads into chemically modified nanomeshes with a high particle surface coverage is demonstrated. For this, solely electrostatic interaction forces were exploited by the use of topographically shallow gold nanomeshes. Employing orthogonal functionalization, an electrostatic contrast between the glass surface and the gold nanomesh was introduced on a sub-micron scale. This surface charge contrast promotes a highly site-selective trapping of the negatively charged polystyrene particles from the liquid phase. AFM force spectroscopy with a polystyrene colloidal probe was used to rationalize this electrostatic focusing effect. It provides quantitative access to the occurring interaction forces between the particle and substrate surface and clarifies the role of the pH during the immobilization process. Furthermore, the structure of the non-close packed colloidal monolayers can be finely tuned by varying the ionic strength and geometric parameters between colloidal particles and nanomesh. Therefore one is able to specifically and selectively adsorb one or several particles into one individual nanohole. PMID:27416921
BOREAS TF-11 SSA-Fen Leaf Gas Exchange Data
NASA Technical Reports Server (NTRS)
Arkebauer, Timothy J.; Hall, Forrest G. (Editor); Knapp, David E. (Editor)
2000-01-01
The BOREAS TF-11 team gathered a variety of data to complement its tower flux measurements collected at the SSA-Fen site. This data set contains single-leaf gas exchange data from the SSA-Fen site during 1994 and 1995. These leaf gas exchange properties were measured for the dominant vascular plants using portable gas exchange systems. The data are stored in tabular ASCII files.
NASA Astrophysics Data System (ADS)
Doyen, G.; Drakova, D.
2015-08-01
We construct a world model consisting of a matter field living in 4 dimensional spacetime and a gravitational field living in 11 dimensional spacetime. The seven hidden dimensions are compactified within a radius estimated by reproducing the particle-wave characteristics of diffraction experiments. In the presence of matter fields the gravitational field develops localized modes with elementary excitations called gravonons which are induced by the sources (massive particles). The final world model treated here contains only gravonons and a scalar matter field. The gravonons are localized in the environment of the massive particles which generate them. The solution of the Schrödinger equation for the world model yields matter fields which are localized in the 4 dimensional subspace. The localization has the following properties: (i) There is a chooser mechanism for the selection of the localization site. (ii) The chooser selects one site on the basis of minor energy differences and differences in the gravonon structure between the sites, which at present cannot be controlled experimentally and therefore let the choice appear statistical. (iii) The changes from one localization site to a neighbouring one take place in a telegraph-signal like manner. (iv) The times at which telegraph like jumps occur depend on subtleties of the gravonon structure which at present cannot be controlled experimentally and therefore let the telegraph-like jumps appear statistical. (v) The fact that the dynamical law acts in the configuration space of fields living in 11 dimensional spacetime lets the events observed in 4 dimensional spacetime appear non-local. In this way the phenomenology of CQM is obtained without the need of introducing the process of collapse and a probabilistic interpretation of the wave function. Operators defining observables need not be introduced. All experimental findings are explained in a deterministic way as a consequence of the time development of the wave
Giant exchange interaction in mixed lanthanides
Vieru, Veacheslav; Iwahara, Naoya; Ungur, Liviu; Chibotaru, Liviu F.
2016-01-01
Combining strong magnetic anisotropy with strong exchange interaction is a long standing goal in the design of quantum magnets. The lanthanide complexes, while exhibiting a very strong ionic anisotropy, usually display a weak exchange coupling, amounting to only a few wavenumbers. Recently, an isostructural series of mixed (Ln = Gd, Tb, Dy, Ho, Er) has been reported, in which the exchange splitting is estimated to reach hundreds of wavenumbers. The microscopic mechanism governing the unusual exchange interaction in these compounds is revealed here by combining detailed modeling with density-functional theory and ab initio calculations. We find it to be basically kinetic and highly complex, involving non-negligible contributions up to the seventh power of the total angular momentum of each lanthanide site. The performed analysis also elucidates the origin of magnetization blocking in these compounds. Contrary to general expectations, the latter is not always favored by strong exchange interaction. PMID:27087470
Indiana Health Information Exchange
The Indiana Health Information Exchange comprises various Indiana health care institutions. It was established to help improve patient safety and is recognized as a best practice for health information exchange.
Anderson, Oscar A.
1978-01-01
An improved charge exchange system for substantially reducing pumping requirements of excess gas in a controlled thermonuclear reactor high energy neutral beam injector. The charge exchange system utilizes a jet-type blanket which acts simultaneously as the charge exchange medium and as a shield for reflecting excess gas.
Transmission Microscopy with Nanometer Resolution Using a Deterministic Single Ion Source
NASA Astrophysics Data System (ADS)
Jacob, Georg; Groot-Berning, Karin; Wolf, Sebastian; Ulm, Stefan; Couturier, Luc; Dawkins, Samuel T.; Poschinger, Ulrich G.; Schmidt-Kaler, Ferdinand; Singer, Kilian
2016-07-01
We realize a single particle microscope by using deterministically extracted laser-cooled ⁴⁰Ca⁺ ions from a Paul trap as probe particles for transmission imaging. We demonstrate focusing of the ions to a spot size of 5.8 ± 1.0 nm and a minimum two-sample deviation of the beam position of 1.5 nm in the focal plane. The deterministic source, even when used in combination with an imperfect detector, gives rise to a fivefold increase in the signal-to-noise ratio as compared with conventional Poissonian sources. Gating of the detector signal by the extraction event suppresses dark counts by 6 orders of magnitude. We implement a Bayes experimental design approach to microscopy in order to maximize the gain in spatial information. We demonstrate this method by determining the position of a 1 μm circular hole structure to a precision of 2.7 nm using only 579 probe particles.
Di Maio, Francesco; Zio, Enrico; Smith, Curtis; Rychkov, Valentin
2015-07-06
The present special issue contains an overview of the research in the field of Integrated Deterministic and Probabilistic Safety Assessment (IDPSA) of Nuclear Power Plants (NPPs). Traditionally, safety regulation for NPPs design and operation has been based on Deterministic Safety Assessment (DSA) methods to verify criteria that assure plant safety in a number of postulated Design Basis Accident (DBA) scenarios. Referring to such criteria, it is also possible to identify those plant Structures, Systems, and Components (SSCs) and activities that are most important for safety within those postulated scenarios. Then, the design, operation, and maintenance of these “safety-related” SSCs and activities are controlled through regulatory requirements and supported by Probabilistic Safety Assessment (PSA).
Identification of the FitzHugh-Nagumo Model Dynamics via Deterministic Learning
NASA Astrophysics Data System (ADS)
Dong, Xunde; Wang, Cong
In this paper, a new method is proposed for the identification of the FitzHugh-Nagumo (FHN) model dynamics via deterministic learning. The FHN model is a classic and simple model for studying spiral waves in excitable media, such as cardiac tissue and biological neural networks. Firstly, the FHN model described by partial differential equations (PDEs) is transformed into a set of ordinary differential equations (ODEs) using the finite difference method. Secondly, the dynamics of the ODEs is identified using deterministic learning theory. It is shown that, for the spiral waves generated by the FHN model, the dynamics underlying the recurrent trajectory corresponding to any spatial point can be accurately identified using the proposed approach. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
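The PDE-to-ODE reduction the abstract describes (method of lines via finite differences) can be sketched briefly. The 1D grid, no-flux boundary treatment, cubic kinetics, and all parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fhn_rhs(state, n, dx, a=0.1, eps=0.01, beta=0.5):
    """Right-hand side of the ODE system obtained by discretizing a 1D
    FitzHugh-Nagumo reaction-diffusion model on an n-point grid (method of
    lines, central-difference Laplacian). All parameters are illustrative."""
    v, w = state[:n], state[n:]
    lap = np.zeros(n)
    lap[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2  # interior points
    lap[0] = 2 * (v[1] - v[0]) / dx**2    # reflective (no-flux) boundaries
    lap[-1] = 2 * (v[-2] - v[-1]) / dx**2
    dv = lap + v * (v - a) * (1 - v) - w  # fast excitable variable
    dw = eps * (beta * v - w)             # slow recovery variable
    return np.concatenate([dv, dw])

# a single explicit Euler step of the resulting ODE system
n, dx, dt = 64, 0.5, 0.01
state = np.zeros(2 * n)
state[:4] = 1.0  # local stimulus on the v-field
state = state + dt * fhn_rhs(state, n, dx)
```

In the paper's setting the medium is 2D and the right-hand side of the resulting ODEs is then approximated by deterministic-learning networks rather than written in closed form as here.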
NASA Astrophysics Data System (ADS)
Park, Min-Chul; Leportier, Thibault; Kim, Wooshik; Song, Jindong
2016-06-01
In this paper, we present a method to characterize not only the shape but also the depth of defects in line-and-space mask patterns. Features in a mask are too fine for a conventional imaging system to resolve, so a coherent imaging system, which provides only the pattern diffracted by the mask, is used. Phase retrieval methods may then be applied, but their accuracy is too low to determine the exact shape of the defect. Deterministic methods have been proposed to characterize the defect accurately, but they require a reference pattern. We propose to apply a phase retrieval algorithm first, to recover the general shape of the mask, and then a deterministic approach to characterize the detected defects precisely.
Scaling of weighted spectral distribution in deterministic scale-free networks
NASA Astrophysics Data System (ADS)
Jiao, Bo; Nie, Yuan-ping; Shi, Jian-mai; Huang, Cheng-dong; Zhou, Ying; Du, Jing; Guo, Rong-hua; Tao, Ye-rong
2016-06-01
Scale-free networks are abundant in the real world. In this paper, we investigate the scaling properties of the weighted spectral distribution in several deterministic and stochastic models of evolving scale-free networks. First, we construct a new deterministic scale-free model whose node degrees have a unified format. Using graph structure features, we derive a precise formula for the spectral metric in this model. This formula verifies that the spectral metric grows sublinearly as the network size (i.e., the number of nodes) grows. Additionally, the mathematical reasoning behind the precise formula theoretically provides detailed explanations for this scaling property. Finally, we validate the scaling properties of the spectral metric using some stochastic models. The experimental results show that this scaling property is retained regardless of local-world effects, node deletion and assortativity adjustment.
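As background for the spectral metric discussed above: the weighted spectral distribution is computed from the eigenvalues of the normalized Laplacian. A minimal sketch follows; the exponent N = 4 is a common choice in the weighted-spectral-distribution literature and an assumption here, and the paper's exact (binned) definition is not reproduced:

```python
import numpy as np

def weighted_spectral_distribution(adj, N=4):
    """Spectral metric sum_i (1 - lambda_i)^N over the eigenvalues of the
    normalized Laplacian L = I - D^{-1/2} A D^{-1/2}. N = 4 is a common
    choice, assumed here rather than taken from the paper."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg, 1.0) ** -0.5
    d_inv_sqrt[deg == 0] = 0.0  # isolated nodes contribute nothing
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    eigvals = np.linalg.eigvalsh(lap)
    return float(np.sum((1.0 - eigvals) ** N))

# complete graph K3: normalized-Laplacian eigenvalues are 0, 1.5, 1.5,
# so the metric is 1 + 2 * 0.5**4 = 1.125
k3 = np.ones((3, 3)) - np.eye(3)
print(round(weighted_spectral_distribution(k3), 6))  # → 1.125
```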
Full 3D visualization tool-kit for Monte Carlo and deterministic transport codes
Frambati, S.; Frignani, M.
2012-07-01
We propose a package of tools capable of translating the geometric inputs and outputs of many Monte Carlo and deterministic radiation transport codes into open source file formats. These tools are aimed at bridging the gap between trusted, widely used radiation analysis codes and very powerful, more recent and commonly used visualization software, thus supporting the design process and helping with shielding optimization. Three main lines of development were followed: mesh-based analysis of Monte Carlo codes, mesh-based analysis of deterministic codes, and Monte Carlo surface meshing. The developed kit is a powerful and cost-effective computer-aided design tool for users of radiation transport codes in the nuclear field, in particular in core design and radiation analysis. (authors)
A hybrid (Monte Carlo/deterministic) approach for multi-dimensional radiation transport
Bal, Guillaume; Davis, Anthony B.; Langmore, Ian
2011-08-20
Highlights: • We introduce a variance reduction scheme for Monte Carlo (MC) transport. • The primary application is atmospheric remote sensing. • The technique first solves the adjoint problem using a deterministic solver. • Next, the adjoint solution is used as an importance function for the MC solver. • The adjoint problem is solved quickly since it ignores the volume. Abstract: A novel hybrid Monte Carlo transport scheme is demonstrated in a scene with solar illumination, scattering and absorbing 2D atmosphere, a textured reflecting mountain, and a small detector located in the sky (mounted on a satellite or an airplane). It uses a deterministic approximation of an adjoint transport solution to reduce variance, computed quickly by ignoring atmospheric interactions. This allows significant variance and computational cost reductions when the atmospheric scattering and absorption coefficients are small. When combined with an atmospheric photon-redirection scheme, significant variance reduction (equivalently, acceleration) is achieved in the presence of atmospheric interactions.
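The core variance-reduction idea in the highlights, using a cheap deterministic approximation as an importance function for the Monte Carlo sampler, can be illustrated on a 1D toy integral. The exponential integrand and the deliberately inexact importance density below are illustrative assumptions, not the paper's transport problem:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# target: I = ∫₀¹ exp(-10 x) dx, with exact value (1 - e^-10) / 10
f = lambda x: np.exp(-10 * x)
exact = (1 - np.exp(-10)) / 10

# plain Monte Carlo with uniform samples
x = rng.random(n)
plain = f(x)

# hybrid-style estimator: a cheap deterministic approximation of the
# adjoint, here exp(-8 x) (deliberately inexact), used as the importance
# density p(x) = 8 exp(-8 x) / (1 - e^-8); samples drawn by inverting its CDF
a = 8.0
u = rng.random(n)
xs = -np.log(1 - u * (1 - np.exp(-a))) / a
p = a * np.exp(-a * xs) / (1 - np.exp(-a))
weighted = f(xs) / p  # unbiased importance-sampling weights

print(plain.mean(), weighted.mean())  # both ≈ exact
print(plain.std(), weighted.std())    # importance-sampled spread is far smaller
```

The closer the deterministic approximation is to the true adjoint, the flatter the weights and the smaller the variance; a perfect adjoint would give a zero-variance estimator.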
The geodynamo as a low-dimensional deterministic system at the edge of chaos
NASA Astrophysics Data System (ADS)
Ryan, D. A.; Sarson, G. R.
2008-08-01
We perform non-linear time series analysis tests on the SINT 2000 paleomagnetic record of the Earth's virtual axial dipole moment for the past 2 Ma, and find evidence of low-dimensional deterministic chaos. We reconstruct the phase space attractor using embedded time delay vectors, and compare the result with reconstructions from time series of a turbulent mean-field dynamo model, which exhibits a similar attractor structure. Considered alongside evidence of 1/f noise and lognormality in the paleomagnetic record, this suggests an important role for multiplicative noise, which may maintain the dynamo at the edge of chaos. In contrast to characterisations of geomagnetic reversals as stochastic processes, this work supports their interpretation as the outcome of a deterministic dynamical system.
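The attractor reconstruction step mentioned above (embedded time-delay vectors) can be sketched in a few lines. The embedding dimension, delay, and toy signal are illustrative, not the values used for the SINT 2000 record:

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Time-delay embedding: map a scalar series x_t to vectors
    (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}) for phase-space attractor
    reconstruction (Takens' theorem). In practice dim and tau are chosen
    e.g. via false nearest neighbours and mutual-information minima."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

x = np.sin(np.linspace(0, 20 * np.pi, 2000))  # toy periodic "record"
emb = delay_embed(x, dim=3, tau=25)
print(emb.shape)  # → (1950, 3)
```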
Deterministic coupling of delta-doped nitrogen vacancy centers to a nanobeam photonic crystal cavity
Lee, Jonathan C.; Cui, Shanying; Zhang, Xingyu; Russell, Kasey J.; Magyar, Andrew P.; Hu, Evelyn L.; Bracher, David O.; Ohno, Kenichi; McLellan, Claire A.; Alemán, Benjamin; Bleszynski Jayich, Ania; Andrich, Paolo; Awschalom, David; Aharonovich, Igor
2014-12-29
The negatively charged nitrogen vacancy center (NV) in diamond has generated significant interest as a platform for quantum information processing and sensing in the solid state. For most applications, high quality optical cavities are required to enhance the NV zero-phonon line (ZPL) emission. An outstanding challenge in maximizing the degree of NV-cavity coupling is the deterministic placement of NVs within the cavity. Here, we report photonic crystal nanobeam cavities coupled to NVs incorporated by a delta-doping technique that allows nanometer-scale vertical positioning of the emitters. We demonstrate cavities with Q up to ∼24 000 and mode volume V ∼ 0.47 (λ/n)³ as well as resonant enhancement of the ZPL of an NV ensemble with Purcell factor of ∼20. Our fabrication technique provides a first step towards deterministic NV-cavity coupling using spatial control of the emitters.
Lee, Sylvanus Y; Amsden, Jason J; Boriskina, Svetlana V; Gopinath, Ashwin; Mitropolous, Alexander; Kaplan, David L; Omenetto, Fiorenzo G; Dal Negro, Luca
2010-07-01
Light scattering phenomena in periodic systems have been investigated for decades in optics and photonics. Their classical description relies on Bragg scattering, which gives rise to constructive interference at specific wavelengths along well defined propagation directions, depending on illumination conditions, structural periodicity, and the refractive index of the surrounding medium. In this paper, by engineering multifrequency colorimetric responses in deterministic aperiodic arrays of nanoparticles, we demonstrate significantly enhanced sensitivity to the presence of a single protein monolayer. These structures, which can be readily fabricated by conventional Electron Beam Lithography, sustain highly complex structural resonances that enable a unique optical sensing approach beyond the traditional Bragg scattering with periodic structures. By combining conventional dark-field scattering micro-spectroscopy and simple image correlation analysis, we experimentally demonstrate that deterministic aperiodic surfaces with engineered structural color are capable of detecting, in the visible spectral range, protein layers with thickness of a few tens of Angstroms. PMID:20566892
Charged quantum dot micropillar system for deterministic light-matter interactions
NASA Astrophysics Data System (ADS)
Androvitsaneas, P.; Young, A. B.; Schneider, C.; Maier, S.; Kamp, M.; Höfling, S.; Knauer, S.; Harbord, E.; Hu, C. Y.; Rarity, J. G.; Oulton, R.
2016-06-01
Quantum dots (QDs) are semiconductor nanostructures in which a three-dimensional potential trap produces an electronic quantum confinement, thus mimicking the behavior of single atomic dipole-like transitions. However, unlike atoms, QDs can be incorporated into solid-state photonic devices such as cavities or waveguides that enhance the light-matter interaction. A near unit efficiency light-matter interaction is essential for deterministic, scalable quantum-information (QI) devices. In this limit, a single photon input into the device will undergo a large rotation of the polarization of the light field due to the strong interaction with the QD. In this paper we measure a macroscopic (~6°) phase shift of light as a result of the interaction with a negatively charged QD coupled to a low-quality-factor (Q ~ 290) pillar microcavity. This unexpectedly large rotation angle demonstrates that this simple low-Q-factor design would enable near-deterministic light-matter interactions.
Sheng Yubo; Deng Fuguo
2010-03-15
Entanglement purification is a very important element for long-distance quantum communication. Unlike all existing entanglement purification protocols (EPPs), in which two parties can only probabilistically obtain quantum systems in a mixed entangled state with higher fidelity by consuming quantum resources exponentially, here we present a deterministic EPP based on hyperentanglement. Using this protocol, the two parties can, in principle, deterministically obtain maximally entangled pure states in polarization without destroying any less-entangled photon pair, which will improve the efficiency of long-distance quantum communication exponentially. Meanwhile, it will be shown that this EPP can be used to complete nonlocal Bell-state analysis perfectly. We also discuss this EPP in a practical transmission.
Deterministic Hadamard gate for microwave cat-state qubits in circuit QED
NASA Astrophysics Data System (ADS)
Nigg, Simon E.
2014-02-01
We propose the implementation of a deterministic Hadamard gate for logical photonic qubits encoded in superpositions of coherent states of a harmonic oscillator. The proposed scheme builds on a recently introduced set of conditional operations in the strong dispersive regime of circuit QED [Z. Leghtas et al., Phys. Rev. A 87, 042315 (2013), 10.1103/PhysRevA.87.042315]. We further propose an architecture for coupling two such logical qubits and provide a universal set of deterministic quantum gates. Based on parameter values taken from the current state of the art, we give estimates for the achievable gate fidelities accounting for fundamental gate imperfections and finite coherence time due to photon loss.
Deterministic amplification of Schrödinger cat states in circuit quantum electrodynamics
NASA Astrophysics Data System (ADS)
Joo, Jaewoo; Elliott, Matthew; Oi, Daniel K. L.; Ginossar, Eran; Spiller, Timothy P.
2016-02-01
Perfect deterministic amplification of arbitrary quantum states is prohibited by quantum mechanics, but determinism can be achieved by compromising between fidelity and amplification power. We propose a dynamical scheme for deterministically amplifying photonic Schrödinger cat states, which show great promise as a tool for quantum information processing. Our protocol is designed for strongly coupled circuit quantum electrodynamics and utilizes artificial atomic states and external microwave controls to engineer a set of optimal state transfers and achieve high fidelity amplification. We compare analytical results with full simulations of the open, driven Jaynes-Cummings model, using realistic device parameters for state of the art superconducting circuits. Amplification with a fidelity of 0.9 can be achieved for sizable cat states in the presence of cavity and atomic-level decoherence. This tool could be applied to practical continuous-variable information processing for the purification and stabilization of cat states in the presence of photon losses.
Neo-deterministic definition of earthquake hazard scenarios: a multiscale application to India
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Magrin, Andrea; Parvez, Imtiyaz A.; Rastogi, Bal K.; Vaccari, Franco; Cozzini, Stefano; Bisignano, Davide; Romanelli, Fabio; Panza, Giuliano F.; Ashish, Mr; Mir, Ramees R.
2014-05-01
The development of effective mitigation strategies requires scientifically consistent estimates of seismic ground motion; recent analysis, however, showed that the performance of the classical probabilistic approach to seismic hazard assessment (PSHA) is very unsatisfactory in anticipating ground shaking from future large earthquakes. Moreover, due to their basic heuristic limitations, the standard PSHA estimates are by far unsuitable when dealing with the protection of critical structures (e.g. nuclear power plants) and cultural heritage, where it is necessary to consider extremely long time intervals. Nonetheless, the persistence in resorting to PSHA is often explained by the need to deal with uncertainties related to ground shaking and earthquake recurrence. We show that current computational resources and physical knowledge of the seismic wave generation and propagation processes, along with the improving quantity and quality of geophysical data, nowadays allow for viable numerical and analytical alternatives to the use of PSHA. The advanced approach considered in this study, namely the NDSHA (neo-deterministic seismic hazard assessment), is based on the physically sound definition of a wide set of credible scenario events and accounts for uncertainties and earthquake recurrence in a substantially different way. The expected ground shaking due to a wide set of potential earthquakes is defined by means of full waveform modelling, based on the possibility to efficiently compute synthetic seismograms in complex laterally heterogeneous anelastic media. In this way a set of scenarios of ground motion can be defined, both at the national and the local scale, the latter considering the 2D and 3D heterogeneities of the medium travelled by the seismic waves. The efficiency of the NDSHA computational codes allows for the fast generation of hazard maps at the regional scale even on a modern laptop computer. At the scenario scale, quick parametric studies can be easily
Accuracy of probabilistic and deterministic record linkage: the case of tuberculosis
de Oliveira, Gisele Pinto; Bierrenbach, Ana Luiza de Souza; de Camargo, Kenneth Rochel; Coeli, Cláudia Medina; Pinheiro, Rejane Sobrino
2016-01-01
OBJECTIVE To analyze the accuracy of deterministic and probabilistic record linkage to identify TB duplicate records, as well as the characteristics of discordant pairs. METHODS The study analyzed all TB records from 2009 to 2011 in the state of Rio de Janeiro. A deterministic record linkage algorithm was developed using a set of 70 rules, based on the combination of fragments of the key variables with or without modification (Soundex or substring). Each rule was formed by three or more fragments. The probabilistic approach required a cutoff point for the score, above which the links would be automatically classified as belonging to the same individual. The cutoff point was obtained by linkage of the Notifiable Diseases Information System – Tuberculosis database with itself, followed by manual review and by ROC and precision-recall curves. Sensitivity and specificity were calculated for the accuracy analysis. RESULTS Accuracy ranged from 87.2% to 95.2% for sensitivity and 99.8% to 99.9% for specificity for probabilistic and deterministic record linkage, respectively. The occurrence of missing values for the key variables and the low percentage of similarity measure for name and date of birth were mainly responsible for the failure to identify records of the same individual with the techniques used. CONCLUSIONS The two techniques showed a high level of correlation for pair classification. Although deterministic linkage identified more duplicate records than probabilistic linkage, the latter retrieved records not identified by the former. User need and experience should be considered when choosing the best technique to be used. PMID:27556963
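A deterministic linkage rule of the kind described, combining key-variable fragments with Soundex or substring transformations, might look like the following sketch. The specific rule, field names, and records are invented for illustration and are not taken from the paper's 70-rule set:

```python
def soundex(name):
    """Minimal American Soundex encoding (illustrative; production linkage
    systems use locale-aware phonetic variants)."""
    codes = {**dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
             **dict.fromkeys("DT", "3"), "L": "4",
             **dict.fromkeys("MN", "5"), "R": "6"}
    name = name.upper()
    out, prev = name[0], codes.get(name[0], "")
    for ch in name[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            out += code
        prev = code if ch not in "HW" else prev  # H/W do not break runs
    return (out + "000")[:4]

def deterministic_match(rec_a, rec_b):
    """One hypothetical rule: link two records when the Soundex of the
    first name, a 6-character substring of the mother's name, and the
    birth date all agree."""
    return (soundex(rec_a["name"].split()[0]) == soundex(rec_b["name"].split()[0])
            and rec_a["mother"][:6] == rec_b["mother"][:6]
            and rec_a["birth"] == rec_b["birth"])

# invented example records with a spelling variation in the first name
a = {"name": "Marcos Oliveira", "mother": "Ana Maria Costa", "birth": "1980-04-02"}
b = {"name": "Markos Oliveira", "mother": "Ana Maria C. Costa", "birth": "1980-04-02"}
print(deterministic_match(a, b))  # → True
```

A full system would apply many such rules in sequence and flag a pair as a duplicate if any rule fires, which is what makes the approach deterministic rather than score-based.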
Deterministic and stochastic control of chimera states in delayed feedback oscillator
NASA Astrophysics Data System (ADS)
Semenov, V.; Zakharova, A.; Maistrenko, Y.; Schöll, E.
2016-06-01
Chimera states, characterized by the coexistence of regular and chaotic dynamics, are found in a nonlinear oscillator model with negative time-delayed feedback. The control of these chimera states by external periodic forcing is demonstrated by numerical simulations. Both deterministic and stochastic external periodic forcing are considered. It is shown that multi-cluster chimeras can be achieved by adjusting the external forcing frequency to appropriate resonance conditions. The constructive role of noise in the formation of chimera states is also shown.
Dini-Andreote, Francisco; Stegen, James C; van Elsas, Jan Dirk; Salles, Joana Falcão
2015-03-17
Ecological succession and the balance between stochastic and deterministic processes are two major themes within microbial ecology, but these conceptual domains have mostly developed independent of each other. Here we provide a framework that integrates shifts in community assembly processes with microbial primary succession to better understand mechanisms governing the stochastic/deterministic balance. Synthesizing previous work, we devised a conceptual model that links ecosystem development to alternative hypotheses related to shifts in ecological assembly processes. Conceptual model hypotheses were tested by coupling spatiotemporal data on soil bacterial communities with environmental conditions in a salt marsh chronosequence spanning 105 years of succession. Analyses within successional stages showed community composition to be initially governed by stochasticity, but as succession proceeded, there was a progressive increase in deterministic selection correlated with increasing sodium concentration. Analyses of community turnover among successional stages--which provide a larger spatiotemporal scale relative to within stage analyses--revealed that changes in the concentration of soil organic matter were the main predictor of the type and relative influence of determinism. Taken together, these results suggest scale-dependency in the mechanisms underlying selection. To better understand mechanisms governing these patterns, we developed an ecological simulation model that revealed how changes in selective environments cause shifts in the stochastic/deterministic balance. Finally, we propose an extended--and experimentally testable--conceptual model integrating ecological assembly processes with primary and secondary succession. This framework provides a priori hypotheses for future experiments, thereby facilitating a systematic approach to understand assembly and succession in microbial communities across ecosystems.
Using Reputation Systems and Non-Deterministic Routing to Secure Wireless Sensor Networks
Moya, José M.; Vallejo, Juan Carlos; Fraga, David; Araujo, Álvaro; Villanueva, Daniel; de Goyeneche, Juan-Mariano
2009-01-01
Security in wireless sensor networks is difficult to achieve because of the resource limitations of the sensor nodes. We propose a trust-based decision framework for wireless sensor networks coupled with a non-deterministic routing protocol. Both provide a mechanism to effectively detect and confine common attacks, and, unlike previous approaches, allow bad reputation feedback to the network. This approach has been extensively simulated, obtaining good results, even for unrealistically complex attack scenarios. PMID:22412345
Experimental demonstration of deterministic one-way quantum computation on a NMR quantum computer
Ju, Chenyong; Zhu Jing; Peng Xinhua; Chong Bo; Zhou Xianyi; Du Jiangfeng
2010-01-15
One-way quantum computing is an important and novel approach to quantum computation. By exploiting the existing particle-particle interactions, we report an experimental realization of the complete process of the deterministic one-way quantum Deutsch-Jozsa algorithm in NMR, including graph state preparation, single-qubit measurements, and feed-forward corrections. The findings in our experiment may shed light on future scalable one-way quantum computation.
SU-E-T-577: Commissioning of a Deterministic Algorithm for External Photon Beams
Zhu, T; Finlay, J; Mesina, C; Liu, H
2014-06-01
Purpose: We report commissioning results for a deterministic algorithm for external photon beam treatment planning. A deterministic algorithm solves the radiation transport equations directly using a finite difference method, thus improving the accuracy of dose calculation, particularly under heterogeneous conditions, with results similar to those of Monte Carlo (MC) simulation. Methods: Commissioning data for photon energies 6 – 15 MV includes the percentage depth dose (PDD) measured at SSD = 90 cm and output ratio in water (Spc), both normalized to 10 cm depth, for field sizes between 2 and 40 cm and depths between 0 and 40 cm. Off-axis ratio (OAR) for the same set of field sizes was used at 5 depths (dmax, 5, 10, 20, 30 cm). The final model was compared with the commissioning data as well as additional benchmark data. The benchmark data includes dose per MU determined for 17 points for SSD between 80 and 110 cm, depth between 5 and 20 cm, and lateral offset of up to 16.5 cm. Relative comparisons were made in a heterogeneous phantom made of cork and solid water. Results: Compared to the commissioning beam data, the agreement is generally better than 2%, with large errors (up to 13%) observed in the buildup regions of the PDD and the penumbra regions of the OAR profiles. The overall mean standard deviation is 0.04% when all data are taken into account. Compared to the benchmark data, the agreement is generally better than 2%. Relative comparison in the heterogeneous phantom is in general better than 4%. Conclusion: A commercial deterministic algorithm was commissioned for megavoltage photon beams. In a homogeneous medium, the agreement between the algorithm and measurement at the benchmark points is generally better than 2%. The dose accuracy of the deterministic algorithm is better than that of a convolution algorithm in heterogeneous media.
The Claude Bernard Lecture, 1989 - Deterministic chaos: The science and the fiction
NASA Astrophysics Data System (ADS)
Ruelle, D.
1990-02-01
A general review of the ideas of chaos is presented. Particular attention is given to the problem of finding out whether or not various time evolutions observed in nature correspond to low-dimensional deterministic dynamics. The 'dimensions' of order 6 that are obtained are found to be very close to the upper bound 2 log10(N) permitted by the Grassberger-Procaccia algorithm (1983).
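The Grassberger-Procaccia bound mentioned above can be made concrete with a short numerical check (a minimal sketch, not the author's code; the function names are illustrative): a correlation-dimension estimate D is only trustworthy when the sample size N satisfies D <= 2 log10(N).

```python
import math

def gp_dimension_bound(n_points: int) -> float:
    """Upper bound on the correlation dimension reliably estimable
    from n_points samples (Ruelle's 2*log10(N) rule)."""
    return 2.0 * math.log10(n_points)

def min_points_for_dimension(dim: float) -> int:
    """Smallest sample size N with 2*log10(N) >= dim, i.e. N >= 10**(dim/2)."""
    return math.ceil(10 ** (dim / 2.0))

# A dimension estimate of ~6 sits right at the bound for N = 1000 points.
print(gp_dimension_bound(1000))        # 6.0
print(min_points_for_dimension(6.0))   # 1000
```

This is why estimates of order 6 from modest data sets are suspect: they coincide with the largest value the algorithm could possibly return for that sample size.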
Development of a hybrid deterministic/stochastic method for 1D nuclear reactor kinetics
Terlizzi, Stefano; Dulla, Sandra; Ravetto, Piero; Rahnema, Farzad; Zhang, Dingkang
2015-12-31
A new method has been implemented for solving the time-dependent neutron transport equation efficiently and accurately. This is accomplished by coupling the hybrid stochastic-deterministic steady-state coarse-mesh radiation transport (COMET) method [1,2] with the new predictor-corrector quasi-static method (PCQM) developed at Politecnico di Torino [3]. In this paper, the coupled method is implemented and tested in 1D slab geometry.
NASA Astrophysics Data System (ADS)
Muthalif, Asan G. A.; Wahid, Azni N.; Nor, Khairul A. M.
2014-02-01
Engineering systems such as aircraft, ships and automotive vehicles are considered built-up structures. Dynamically, they are thought of as being fabricated from many components that are classified as 'deterministic subsystems' (DS) and 'non-deterministic subsystems' (Non-DS). The response of the DS is deterministic in nature and analysed using deterministic modelling methods such as the finite element (FE) method. The response of Non-DS is statistical in nature and estimated using statistical modelling techniques such as statistical energy analysis (SEA). The SEA method uses a power balance equation, in which any external input to the subsystem must be represented in terms of power. Often, the input force is taken as a point force, and the ensemble average power delivered by a point force is already well-established. However, the external input can also be applied in the form of moments exerted by a piezoelectric (PZT) patch actuator. In order to be able to apply the SEA method for input moments, a mathematical representation for the moment generated by a PZT patch in the form of average power is needed, which is attempted in this paper. A simply-supported plate with attached PZT patch is taken as a benchmark model. An analytical solution to estimate average power is derived using the mobility approach. The ensemble average of power given by the PZT patch actuator to the benchmark model when subjected to structural uncertainties is also simulated using the Lagrangian method and FEA software. The analytical estimation is compared with the Lagrangian model and the FE method for validation. The effects of size and location of the PZT actuators on the power delivered to the plate are later investigated.
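The well-established point-force result referred to above can be stated compactly: for a harmonic force F acting through a driving-point mobility Y (velocity per unit force, complex), the time-averaged injected power is P = 0.5 |F|^2 Re(Y). A minimal sketch (the mobility value below is a hypothetical placeholder, not taken from the paper):

```python
def time_avg_power(force_amplitude, mobility):
    """Time-averaged power injected into a structure by a harmonic
    point force F through a driving-point mobility Y (v = Y * F):
    P = 0.5 * |F|^2 * Re(Y). Only the real (resistive) part of the
    mobility absorbs power; the imaginary part is reactive."""
    return 0.5 * abs(force_amplitude) ** 2 * mobility.real

# Hypothetical driving-point mobility of a plate at one frequency.
Y = 2.5e-4 + 1.0e-4j          # m/s per N (illustrative value)
print(time_avg_power(10.0, Y))  # ~0.0125 W
```

An analogous expression with a rotational (moment) mobility is what the paper derives for PZT-patch excitation.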
Mesh generation and energy group condensation studies for the jaguar deterministic transport code
Kennedy, R. A.; Watson, A. M.; Iwueke, C. I.; Edwards, E. J.
2012-07-01
The deterministic transport code Jaguar is introduced, and the modeling process for Jaguar is demonstrated using a two-dimensional assembly model of the Hoogenboom-Martin Performance Benchmark Problem. This single assembly model is being used to test and analyze optimal modeling methodologies and techniques for Jaguar. This paper focuses on spatial mesh generation and energy condensation techniques. In this summary, the models and processes are defined as well as thermal flux solution comparisons with the Monte Carlo code MC21. (authors)
Visualization of a Deterministic Radiation Transport Model Using Standard Visualization Tools
James A. Galbraith; L. Eric Greenwade
2004-05-01
Output from a deterministic radiation transport code running on a CRAY SV1 is imported into a standard distributed, parallel visualization tool for analysis. Standard output files, consisting of tetrahedral meshes, are imported into the visualization tool through the creation of an application-specific plug-in module. Visualization samples are included, showing steady-state results. Different plot types and operators are utilized to enhance the analysis and assist in reporting its results.
Liu, Xiaoying; Biswas, Sushmita; Jarrett, Jeremy W; Poutrina, Ekaterina; Urbas, Augustine; Knappenberger, Kenneth L; Vaia, Richard A; Nealey, Paul F
2015-12-01
Plasmonic heterostructures are deterministically constructed in organized arrays through chemical pattern directed assembly, a combination of top-down lithography and bottom-up assembly, and by the sequential immobilization of gold nanoparticles of three different sizes onto chemically patterned surfaces using tailored interaction potentials. These spatially addressable plasmonic chain nanostructures demonstrate localization of linear and nonlinear optical fields as well as nonlinear circular dichroism.
Lipid exchange between membranes.
Jähnig, F
1984-01-01
The exchange of lipid molecules between vesicle bilayers in water and a monolayer forming at the water surface was investigated theoretically within the framework of thermodynamics. The total number of exchanged molecules was found to depend on the bilayer curvature as expressed by the vesicle radius and on the boundary condition for exchange, i.e., whether during exchange the radius or the packing density of the vesicles remains constant. The boundary condition is determined by the rate of flip-flop within the bilayer relative to the rate of exchange between bi- and monolayer. If flip-flop is fast, exchange is independent of the vesicle radius; if flip-flop is slow, exchange increases with the vesicle radius. Available experimental results agree with the detailed form of this dependence. When the theory was extended to exchange between two bilayers of different curvature, the direction of exchange was also determined by the curvatures and the boundary conditions for exchange. Due to the dependence of the boundary conditions on flip-flop and, consequently, on membrane fluidity, exchange between membranes may partially be regulated by membrane fluidity. PMID:6518251
Roy, Swapnoneel; Thakur, Ashok Kumar
2008-01-01
Genome rearrangements have been modelled by a variety of primitives such as reversals, transpositions, block moves and block interchanges. We consider one such genome rearrangement primitive, strip exchanges. A strip-exchanging move interchanges the positions of two chosen strips so that they merge with other strips. The strip exchange problem is to sort a permutation using the minimum number of strip exchanges. We present here the first non-trivial 2-approximation algorithm for this problem. We also observe that sorting by strip exchanges is fixed-parameter tractable. Lastly, we discuss the application of strip exchanges in a different area, Optical Character Recognition (OCR), with an example.
Deterministic Agent-Based Path Optimization by Mimicking the Spreading of Ripples.
Hu, Xiao-Bing; Wang, Ming; Leeson, Mark S; Di Paolo, Ezequiel A; Liu, Hao
2016-01-01
Inspirations from nature have contributed fundamentally to the development of evolutionary computation. Learning from the natural ripple-spreading phenomenon, this article proposes a novel ripple-spreading algorithm (RSA) for the path optimization problem (POP). In nature, a ripple spreads at a constant speed in all directions, and the node closest to the source is the first to be reached. This very simple principle forms the foundation of the proposed RSA. In contrast to most deterministic top-down centralized path optimization methods, such as Dijkstra's algorithm, the RSA is a bottom-up decentralized agent-based simulation model. Moreover, it is distinguished from other agent-based algorithms, such as genetic algorithms and ant colony optimization, by being a deterministic method that can always guarantee the global optimal solution with very good scalability. Here, the RSA is specifically applied to four different POPs. The comparative simulation results illustrate the advantages of the RSA in terms of effectiveness and efficiency. Thanks to the agent-based and deterministic features, the RSA opens new opportunities to attack some problems, such as calculating the exact complete Pareto front in multiobjective optimization and determining the kth shortest project time in project management, which are very difficult, if not impossible, for existing methods to resolve. The ripple-spreading optimization principle and the new distinguishing features and capacities of the RSA enrich the theoretical foundations of evolutionary computation.
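The ripple-spreading principle described above is simple enough to sketch in a few lines. The toy below (illustrative code, not the authors' implementation; the graph is made up) emulates the decentralized ripple relay with a centralized event queue: a ripple leaves the source at unit speed along every edge, and the first ripple to reach a node fixes that node's arrival time, which is why the method guarantees the optimal path.

```python
import heapq

def ripple_spreading_shortest_path(graph, source, target):
    """Event-driven emulation of a single ripple spreading from `source`.
    `graph` maps node -> list of (neighbour, edge_length) pairs.
    Processing ripple-arrival events in time order makes this
    single-source variant equivalent to Dijkstra's algorithm."""
    arrivals = {}                      # node -> (arrival time, predecessor)
    queue = [(0.0, source, None)]      # pending ripple-arrival events
    while queue:
        t, node, pred = heapq.heappop(queue)
        if node in arrivals:
            continue                   # a faster ripple got here first
        arrivals[node] = (t, pred)
        if node == target:
            break
        for nbr, length in graph.get(node, []):
            heapq.heappush(queue, (t + length, nbr, node))
    # Reconstruct the path by walking predecessors back from the target.
    path, node = [], target
    while node is not None:
        path.append(node)
        node = arrivals[node][1]
    return arrivals[target][0], path[::-1]

# Hypothetical 4-node network.
g = {'A': [('B', 1), ('C', 4)], 'B': [('C', 2), ('D', 6)], 'C': [('D', 3)]}
print(ripple_spreading_shortest_path(g, 'A', 'D'))  # (6.0, ['A', 'B', 'C', 'D'])
```

The full RSA generalizes this agent-based picture (multiple interacting ripples, relay rules) to the harder problems mentioned in the abstract, such as complete Pareto fronts and k-th shortest times.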
Are deterministic expert systems for computer-assisted structure elucidation obsolete?
Elyashberg, Mikhail E; Blinov, Kirill A; Williams, Antony J; Molodtsov, Sergey G; Martin, Gary E
2006-01-01
Expert systems for spectroscopic molecular structure elucidation have been developed since the mid-1960s. Algorithms associated with the structure generation process within these systems are deterministic; that is, they are based on graph theory and combinatorial analysis. A series of expert systems utilizing 2D NMR spectra have been described in the literature and are capable of determining the molecular structures of large organic molecules including complex natural products. Recently, an opinion was expressed in the literature that these systems would fail when elucidating structures containing more than 30 heavy atoms. A suggestion was put forward that stochastic algorithms for structure generation would be necessary to overcome this shortcoming. In this article, we describe a comprehensive investigation of the capabilities of the deterministic expert system Structure Elucidator. The results of performing the structure elucidation of 250 complex natural products with this program were studied and generalized. The conclusion is that 2D NMR deterministic expert systems are certainly capable of elucidating large structures (up to about 100 heavy atoms) and can deal with the complexities associated with both poor and contradictory spectral data.
Improved Deterministic N-To-One Joint Remote Preparation of an Arbitrary Qubit via EPR Pairs
NASA Astrophysics Data System (ADS)
Liu, Wen-Jie; Chen, Zheng-Fei; Liu, Chao; Zheng, Yu
2015-02-01
Recently, Bich et al. (Int. J. Theor. Phys. 51: 2272, 2012) proposed two deterministic joint remote state preparation (JRSP) protocols of an arbitrary single-qubit state: one is for two preparers to remotely prepare for a receiver by using two Einstein-Podolsky-Rosen (EPR) pairs; the other is its generalized form in the case of arbitrary N (N > 2) preparers via N EPR pairs. While examining these two protocols, we find that the success probability for the receiver achieving the desired state is not deterministic for N > 2 preparers in the second protocol. Through constructing two sets of adaptive projective measurement bases for both the real space and the complex space, an improved deterministic N-to-one JRSP protocol for an arbitrary single-qubit state is presented. Analysis shows our protocol can truly achieve unit success probability. What is more, the receiver can be randomly assigned even after the distribution of the qubits of the EPR pairs, so it is more flexible and applicable in the network situation.
Chang, T; Schiff, S J; Sauer, T; Gossard, J P; Burke, R E
1994-01-01
Long time series of monosynaptic Ia-afferent to alpha-motoneuron reflexes were recorded in the L7 or S1 ventral roots in the cat. Time series were collected before and after spinalization at T13 during constant amplitude stimulations of group Ia muscle afferents in the triceps surae muscle nerves. Using autocorrelation to analyze the linear correlation in the time series demonstrated oscillations in the decerebrate state (4/4) that were eliminated after spinalization (5/5). Three tests for determinism were applied to these series: 1) local flow, 2) local dispersion, and 3) nonlinear prediction. These algorithms were validated with time series generated from known deterministic equations. For each experimental and theoretical time series used, matched time-series of stochastic surrogate data were generated to serve as mathematical and statistical controls. Two of the time series collected in the decerebrate state (2/4) demonstrated evidence for deterministic structure. This structure could not be accounted for by the autocorrelation in the data, and was abolished following spinalization. None of the time series collected in the spinalized state (0/5) demonstrated evidence of determinism. Although monosynaptic reflex variability is generally stochastic in the spinalized state, this simple driven system may display deterministic behavior in the decerebrate state. PMID:7948680
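The nonlinear-prediction test (test 3 above) has a particularly compact form. The sketch below (illustrative code, not the authors' implementation; a chaotic logistic map stands in for a reflex time series) embeds the series in delay coordinates, forecasts each point from its nearest neighbour, and compares the error against a shuffled surrogate, mirroring the surrogate-data control used in the study.

```python
import random

def nn_prediction_error(series, dim=3, horizon=1):
    """Nonlinear-prediction determinism test: embed the series in `dim`
    delayed coordinates, predict each point `horizon` steps ahead using
    its nearest neighbour (temporal neighbours excluded), and return the
    mean absolute prediction error. Deterministic series give much
    smaller errors than their shuffled surrogates."""
    n = len(series)
    points = [tuple(series[i:i + dim]) for i in range(n - dim - horizon + 1)]
    err = 0.0
    for i, p in enumerate(points):
        best, best_d = None, float('inf')
        for j, q in enumerate(points):
            if abs(i - j) <= dim:
                continue  # skip self and temporally correlated neighbours
            d = sum((a - b) ** 2 for a, b in zip(p, q))
            if d < best_d:
                best, best_d = j, d
        # Predict this point's future from the neighbour's future.
        err += abs(series[i + dim - 1 + horizon] - series[best + dim - 1 + horizon])
    return err / len(points)

# Deterministic test signal: the chaotic logistic map x -> 3.9 x (1 - x).
x, logistic = 0.3, []
for _ in range(300):
    x = 3.9 * x * (1 - x)
    logistic.append(x)
surrogate = logistic[:]
random.Random(0).shuffle(surrogate)   # shuffling destroys determinism
print(nn_prediction_error(logistic) < nn_prediction_error(surrogate))  # True
```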
Improving the realism of deterministic multi-strain models: implications for modelling influenza A.
Minayev, Pavlo; Ferguson, Neil
2009-06-01
Understanding the interaction between epidemiological and evolutionary dynamics for antigenically variable pathogens remains a challenge, particularly if analytical insight is desired. In particular, while a variety of relatively complex simulation models have reproduced the evolutionary dynamics of influenza, simpler models have given less satisfying descriptions of the patterns seen in data. Here, we develop a set of relatively simple deterministic models of the transmission dynamics of multi-strain pathogens which give increased biological realism compared with past work. We allow the intensity of cross-immunity generated against one strain given exposure to a different strain to depend on the extent of genetic difference between the strains. We show that the dynamics of this model are determined by the interplay of parameters defining the cross-immune response function and can include fully symmetric equilibria, self-organized strain structures, regular periodic and chaotic regimes. We then extend the model by incorporating transient strain-transcending immunity that acts as a density-dependent mechanism to lower overall infection prevalence and thus pathogen diversity. We conclude that while some aspects of the evolution of influenza can be captured by deterministic models, overall, the description obtainable using a purely deterministic framework is unsatisfactory, implying that stochasticity of strain generation (via mutation) and extinction needs to be represented to appropriately capture influenza dynamics.
CPT-based probabilistic and deterministic assessment of in situ seismic soil liquefaction potential
Moss, R.E.S.; Seed, R.B.; Kayen, R.E.; Stewart, J.P.; Der Kiureghian, A.; Cetin, K.O.
2006-01-01
This paper presents a complete methodology for both probabilistic and deterministic assessment of seismic soil liquefaction triggering potential based on the cone penetration test (CPT). A comprehensive worldwide set of CPT-based liquefaction field case histories was compiled and back-analyzed, and the data were then used to develop probabilistic triggering correlations. Issues investigated in this study include improved normalization of CPT resistance measurements for the influence of effective overburden stress, and adjustment to CPT tip resistance for the potential influence of "thin" liquefiable layers. The effects of soil type and soil character (i.e., "fines" adjustment) for the new correlations are based on a combination of CPT tip and sleeve resistance. To quantify probability for performance-based engineering applications, Bayesian "regression" methods were used, and the uncertainties of all variables comprising both the seismic demand and the liquefaction resistance were estimated and included in the analysis. The resulting correlations were developed using a Bayesian framework and are presented in both probabilistic and deterministic formats. The results are compared to previous probabilistic and deterministic correlations. © 2006 ASCE.
Comparison of Deterministic and Stochastic Models of the lac Operon Genetic Network
Stamatakis, Michail; Mantzaris, Nikos V.
2009-01-01
The lac operon has been a paradigm for genetic regulation with positive feedback, and several modeling studies have described its dynamics at various levels of detail. However, it has not yet been analyzed how stochasticity can enrich the system's behavior, creating effects that are not observed in the deterministic case. To address this problem we use a comparative approach. We develop a reaction network for the dynamics of the lac operon genetic switch and derive corresponding deterministic and stochastic models that incorporate biological details. We then analyze the effects of key biomolecular mechanisms, such as promoter strength and binding affinities, on the behavior of the models. No assumptions or approximations are made when building the models other than those utilized in the reaction network. Thus, we are able to carry out a meaningful comparison between the predictions of the two models to demonstrate genuine effects of stochasticity. Such a comparison reveals that in the presence of stochasticity, certain biomolecular mechanisms can profoundly influence the region where the system exhibits bistability, a key characteristic of the lac operon dynamics. For these cases, the temporal asymptotic behavior of the deterministic model remains unchanged, indicating a role of stochasticity in modulating the behavior of the system. PMID:19186128
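The deterministic/stochastic comparison can be illustrated on a far simpler system than the lac operon network (this is a toy birth-death process, not the authors' reaction network): the deterministic ODE dx/dt = k - g*x settles at the fixed point x* = k/g, while a Gillespie (SSA) simulation of the same kinetics fluctuates around that value with a Poisson stationary distribution.

```python
import random

def gillespie_birth_death(k, g, x0, t_end, rng):
    """Exact stochastic (SSA) trajectory of a birth-death process:
    production at constant rate k, first-order decay at rate g*x.
    Returns the copy number at time t_end."""
    t, x = 0.0, x0
    while True:
        total = k + g * x                  # total event rate
        t += rng.expovariate(total)        # time to next reaction
        if t > t_end:
            return x
        if rng.random() < k / total:
            x += 1                         # production event
        else:
            x -= 1                         # degradation event

rng = random.Random(1)
samples = [gillespie_birth_death(k=50.0, g=1.0, x0=0, t_end=20.0, rng=rng)
           for _ in range(200)]
mean = sum(samples) / len(samples)
print(round(mean))  # close to the deterministic fixed point k/g = 50
```

For this linear system the stochastic mean agrees with the ODE; the point of the paper is that for the nonlinear, bistable lac network the two descriptions can genuinely diverge.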
Implementation speed of deterministic population passages compared to that of Rabi pulses
NASA Astrophysics Data System (ADS)
Chen, Jingwei; Wei, L. F.
2015-02-01
The fast Rabi π-pulse technique has been widely applied to various coherent quantum manipulations, although it requires precise design of the pulse areas. Relaxing the need for precise pulse design, various rapid adiabatic passage (RAP) approaches have been utilized instead to implement population passages deterministically. However, the usual RAP protocol cannot be implemented desirably fast, as the relevant adiabatic condition must be robustly satisfied during the passage. Here, we propose a modified shortcut-to-adiabaticity (STA) technique to significantly accelerate the desired deterministic quantum state population passages. This transitionless technique goes beyond the usual rotating wave approximation (RWA) made in recent STA protocols, and thus can be applied to deliver fast quantum evolutions wherein the relevant counter-rotating effects cannot be neglected. The proposal is demonstrated specifically with driven two- and three-level systems. Numerical results show that with the present STA technique beyond the RWA, the usual Stark-chirped RAPs and stimulated Raman adiabatic passages can be significantly sped up; the deterministic population passages can be implemented as fast as the widely used fast Rabi π pulses, but are insensitive to the applied pulse areas.
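The pulse-area sensitivity that motivates this work is visible in the standard resonant Rabi formula (a textbook RWA result, not the authors' beyond-RWA treatment): the excited-state population after driving for time t at Rabi frequency Ω is sin²(Ωt/2), so complete transfer requires the pulse area Ωt to equal exactly π.

```python
import math

def rabi_excited_population(omega, t):
    """Resonant two-level Rabi oscillation (within the RWA): excited-state
    population after driving for time t at Rabi frequency omega."""
    return math.sin(omega * t / 2.0) ** 2

omega = 2 * math.pi * 1e6             # hypothetical 1 MHz Rabi frequency
t_pi = math.pi / omega                # pi-pulse duration (pulse area = pi)
print(round(rabi_excited_population(omega, t_pi), 6))      # 1.0  (full transfer)
print(round(rabi_excited_population(omega, t_pi / 2), 6))  # 0.5  (area error -> partial transfer)
```

A 50% error in pulse area thus leaves half the population behind, which is exactly the fragility that adiabatic passages, and the accelerated STA variant proposed here, avoid.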
Non-deterministic analysis of a liquid polymeric-film drying process
Chen, K.S.; Cairncross, R.A.
1997-04-01
In this study the authors employed the Monte Carlo/Latin Hypercube sampling technique to generate input parameters, with prescribed uncertainty distributions, for a liquid polymeric-film drying model. The one-dimensional drying model employed in this study was that developed by Cairncross et al. They found that non-deterministic analysis with Monte Carlo/Latin Hypercube sampling provides a useful tool for characterizing the two responses of a liquid polymeric-film drying process (residual solvent volume and the maximum solvent partial vapor pressure). More precisely, the analysis not only provides estimates of statistical variations of the response variables but also yields more realistic estimates of mean values, which can differ significantly from those calculated using deterministic simulation. For input-parameter uncertainties in the range from 2 to 10% of their respective means, variations of the response variables were found to be comparable to the mean values.
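A minimal Latin Hypercube sampler illustrates the stratification such studies rely on (an illustrative sketch on the unit hypercube, not the authors' sampler; mapping the strata onto the model's physical input distributions is omitted): each dimension is cut into n equal strata, every stratum is used exactly once, and strata are paired across dimensions by independent random permutations.

```python
import random

def latin_hypercube(n_samples, n_dims, rng):
    """Minimal Latin Hypercube sampler on the unit hypercube: each
    dimension's [0,1) range is split into n_samples equal strata, each
    stratum is sampled exactly once, and strata are paired across
    dimensions via independent random permutations."""
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                       # random pairing across dims
        for i, s in enumerate(strata):
            samples[i][d] = (s + rng.random()) / n_samples  # jitter in stratum
    return samples

pts = latin_hypercube(10, 2, random.Random(0))
for d in range(2):
    strata_hit = sorted(int(p[d] * 10) for p in pts)
    print(strata_hit)  # every stratum hit exactly once: [0, 1, 2, ..., 9]
```

Compared with plain Monte Carlo, this guarantees the tails and center of every input distribution are represented even at small sample sizes, which is why far fewer model runs are needed for stable response statistics.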
Hu, Xiao-Bing; Wang, Ming; Di Paolo, Ezequiel
2013-06-01
Searching the Pareto front for multiobjective optimization problems usually involves the use of a population-based search algorithm or of a deterministic method with a set of different single aggregate objective functions. The results are, in fact, only approximations of the real Pareto front. In this paper, we propose a new deterministic approach capable of fully determining the real Pareto front for those discrete problems for which it is possible to construct optimization algorithms to find the k best solutions to each of the single-objective problems. To this end, two theoretical conditions are given to guarantee the finding of the actual Pareto front rather than its approximation. Then, a general methodology for designing a deterministic search procedure is proposed. A case study is conducted, where by following the general methodology, a ripple-spreading algorithm is designed to calculate the complete exact Pareto front for multiobjective route optimization. When compared with traditional Pareto front search methods, the obvious advantage of the proposed approach is its unique capability of finding the complete Pareto front. This is illustrated by the simulation results in terms of both solution quality and computational efficiency.
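The filtering step implied by the approach, extracting the nondominated set from the union of the k-best solution lists of each single-objective problem, can be sketched as follows (illustrative code with a made-up candidate set, not the paper's algorithm):

```python
def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors
    (all objectives minimized). A point p is dominated if some other
    point q is <= p in every objective and strictly < in at least one."""
    def dominates(q, p):
        return (all(a <= b for a, b in zip(q, p))
                and any(a < b for a, b in zip(q, p)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical candidates, e.g. pooled k-best routes under each objective.
candidates = [(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5), (9, 1)]
print(pareto_front(candidates))  # [(1, 9), (2, 7), (4, 4), (6, 3), (9, 1)]
```

The paper's theoretical conditions concern when the pooled candidate set is guaranteed to contain every point of the true front, so that this filtering yields the complete, exact Pareto front rather than an approximation.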
A deterministic, gigabit serial timing, synchronization and data link for the RHIC LLRF
Hayes, T.; Smith, K.S.; Severino, F.
2011-03-28
A critical capability of the new RHIC low level rf (LLRF) system is the ability to synchronize signals across multiple locations. The 'Update Link' provides this functionality. The 'Update Link' is a deterministic serial data link based on the Xilinx RocketIO protocol that is broadcast over fiber optic cable at 1 gigabit per second (Gbps). The link provides timing events and data packets as well as time stamp information for synchronizing diagnostic data from multiple sources. The new RHIC LLRF was designed to be a flexible, modular system. The system is constructed of numerous independent RF Controller chassis. To provide synchronization among all of these chassis, the Update Link system was designed. The Update Link system provides a low latency, deterministic data path to broadcast information to all receivers in the system. The Update Link system is based on a central hub, the Update Link Master (ULM), which generates the data stream that is distributed via fiber optic links. Downstream chassis have non-deterministic connections back to the ULM that allow any chassis to provide data that is broadcast globally.
An alternative approach to measure similarity between two deterministic transient signals
NASA Astrophysics Data System (ADS)
Shin, Kihong
2016-06-01
In many practical engineering applications, it is often required to measure the similarity of two signals to gain insight into the conditions of a system. For example, an application that monitors machinery can regularly measure the vibration signal and compare it to a healthy reference signal in order to monitor whether or not any fault symptom is developing. Also in modal analysis, a frequency response function (FRF) from a finite element model (FEM) is often compared with an FRF from experimental modal analysis. Many different similarity measures are applicable in such cases, and correlation-based similarity measures are perhaps the most frequently used among them, such as the correlation coefficient in the time domain and the frequency response assurance criterion (FRAC) in the frequency domain. Although correlation-based similarity measures may be particularly useful for random signals because they are based on probability and statistics, we frequently deal with signals that are largely deterministic and transient. Thus, it may be useful to develop another similarity measure that takes the characteristics of the deterministic transient signal properly into account. In this paper, an alternative approach to measure the similarity between two deterministic transient signals is proposed. This newly proposed similarity measure is based on the fictitious system frequency response function, and it consists of the magnitude similarity and the shape similarity. Finally, a few examples are presented to demonstrate the use of the proposed similarity measure.
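The FRAC mentioned above has a standard MAC-like form, FRAC = |H1^H H2|^2 / ((H1^H H1)(H2^H H2)), equal to 1 for FRFs that differ only by a scale factor. A minimal sketch, with a made-up single-degree-of-freedom-like FRF:

```python
import numpy as np

def frac(H1, H2):
    """Frequency response assurance criterion between two complex FRFs
    sampled at the same frequencies; insensitive to overall scaling."""
    num = abs(np.vdot(H1, H2)) ** 2          # vdot conjugates its first arg
    den = np.vdot(H1, H1).real * np.vdot(H2, H2).real
    return num / den

w = np.linspace(0.1, 10, 200)
H = 1.0 / (1 - w**2 + 0.1j * w)              # hypothetical SDOF-like FRF
print(frac(H, H))       # identical FRFs -> 1.0
print(frac(H, 2 * H))   # scaling does not change FRAC
```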
NASA Astrophysics Data System (ADS)
Bracons, Marc; Meneveau, Charles; Parlange, Marc
2008-11-01
When representing a wind turbine in LES using a drag disk (e.g. A. Jimenez et al. 2007), the periodic effects due to the turbine's rotating elements remain unresolved. The periodic effects on the mean flow can be represented in a simulation using deterministic stresses in the wake. In this work, based on the Biot-Savart law with a helical vortex street and various simplifications, we develop an analytical expression for the deterministic, periodic velocity fluctuations in the wake. Then, the deterministic stress tensor is obtained by the product of the approximated fluctuating components of velocity, and integration over a helical period. The resulting model is implemented within a Large Eddy Simulation of an array of wind turbines, using the scale-dependent Lagrangian dynamic model (Bou-Zeid et al. 2005). The importance of the deterministic stresses on the computed wake structure is examined by varying the strength of the helical vortices.
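The deterministic-stress construction described above (products of the phase-locked periodic velocity fluctuations, averaged over a helical period) can be illustrated with synthetic single-harmonic fluctuations; the amplitudes below are hypothetical stand-ins, not the Biot-Savart result derived in the paper.

```python
import numpy as np

# Phase over one helical period, uniformly sampled (endpoint excluded so the
# discrete average over the period is exact for harmonics)
theta = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
u_p = 0.8 * np.cos(theta)      # hypothetical periodic axial fluctuation
w_p = 0.3 * np.sin(theta)      # hypothetical periodic azimuthal fluctuation

# Deterministic stresses: period averages of fluctuation products
tau_uu = np.mean(u_p * u_p)    # <cos^2> = 1/2  ->  0.5 * 0.8**2 = 0.32
tau_uw = np.mean(u_p * w_p)    # cos and sin average to zero over a period
print(tau_uu, tau_uw)
```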
NASA Astrophysics Data System (ADS)
Itoh, Kosuke; Nakada, Tsutomu
2013-04-01
Deterministic nonlinear dynamical processes are ubiquitous in nature. Chaotic sounds generated by such processes may appear irregular and random in waveform, but these sounds are mathematically distinguished from random stochastic sounds in that they contain deterministic short-time predictability in their temporal fine structures. We show that the human brain distinguishes deterministic chaotic sounds from spectrally matched stochastic sounds in neural processing and perception. Deterministic chaotic sounds, even without being attended to, elicited greater cerebral cortical responses than the surrogate control sounds after about 150 ms in latency after sound onset. Listeners also clearly discriminated these sounds in perception. The results support the hypothesis that the human auditory system is sensitive to the subtle short-time predictability embedded in the temporal fine structure of sounds.
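The contrast the study relies on, deterministic short-time predictability in a signal whose spectrum matches a stochastic control, can be reproduced with a logistic-map signal and a phase-randomized surrogate. This is a standard construction assumed here for illustration, not the stimuli used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deterministic chaotic signal: logistic map in its fully chaotic regime
n = 1024
x = np.empty(n)
x[0] = 0.4
for i in range(n - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])

# Spectrally matched stochastic surrogate: keep the FFT magnitudes,
# randomize the phases (destroys the deterministic fine structure)
X = np.fft.rfft(x)
phases = rng.uniform(0, 2 * np.pi, X.size)
phases[0] = phases[-1] = 0.0           # keep DC and Nyquist bins real
s = np.fft.irfft(np.abs(X) * np.exp(1j * phases), n)

# Same power spectrum, but only the original obeys the one-step map
spec_match = np.allclose(np.abs(np.fft.rfft(s)), np.abs(X))
err_chaos = np.mean(np.abs(x[1:] - 4 * x[:-1] * (1 - x[:-1])))
err_surr = np.mean(np.abs(s[1:] - 4 * s[:-1] * (1 - s[:-1])))
print(spec_match, err_chaos, err_surr)   # True, 0, order-one
```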
The hydrogen exchange core and protein folding.
Li, R.; Woodward, C.
1999-01-01
A database of hydrogen-deuterium exchange results has been compiled for proteins for which there are published rates of out-exchange in the native state, protection against exchange during folding, and out-exchange in partially folded forms. The question of whether the slow exchange core is the folding core (Woodward C, 1993, Trends Biochem Sci 18:359-360) is reexamined in a detailed comparison of the specific amide protons (NHs) and the elements of secondary structure on which they are located. For each pulsed exchange or competition experiment, probe NHs are shown explicitly; the large number and broad distribution of probe NHs support the validity of comparing out-exchange with pulsed-exchange/competition experiments. There is a strong tendency for the same elements of secondary structure to carry NHs most protected in the native state, NHs first protected during folding, and NHs most protected in partially folded species. There is not a one-to-one correspondence of individual NHs. Proteins for which there are published data for native state out-exchange and theta values are also reviewed. The elements of secondary structure containing the slowest exchanging NHs in native proteins tend to contain side chains with high theta values or be connected to a turn/loop with high theta values. A definition for a protein core is proposed, and the implications for protein folding are discussed. Apparently, during folding and in the native state, nonlocal interactions between core sequences are favored more than other possible nonlocal interactions. Other studies of partially folded bovine pancreatic trypsin inhibitor (Barbar E, Barany G, Woodward C, 1995, Biochemistry 34:11423-11434; Barbar E, Hare M, Daragan V, Barany G, Woodward C, 1998, Biochemistry 37:7822-7833) suggest that developing cores have site-specific energy barriers between microstates, one disordered, and the other(s) more ordered. PMID:10452602
Nonsurvivable momentum exchange system
NASA Technical Reports Server (NTRS)
Roder, Russell (Inventor); Ahronovich, Eliezer (Inventor); Davis, III, Milton C. (Inventor)
2007-01-01
A demiseable momentum exchange system includes a base and a flywheel rotatably supported on the base. The flywheel includes a web portion defining a plurality of web openings and a rim portion. The momentum exchange system further includes a motor for driving the flywheel and a cover for engaging the base to substantially enclose the flywheel. The system may also include components having a melting temperature below 1500 degrees Celsius. The momentum exchange system is configured to demise on reentry.
NASA Technical Reports Server (NTRS)
Snyder, W. V.; Hanson, R. J.
1986-01-01
Text Exchange System (TES) exchanges and maintains organized textual information including source code, documentation, data, and listings. System consists of two computer programs and definition of format for information storage. Comprehensive program used to create, read, and maintain TES files. TES developed to meet three goals: First, easy and efficient exchange of programs and other textual data between similar and dissimilar computer systems via magnetic tape. Second, provide transportable management system for textual information. Third, provide common user interface, over wide variety of computing systems, for all activities associated with text exchange.
Sample Exchange Evaluation (SEE) Report - Phase II
Winters, W.I.
1994-09-28
This report describes the results from Phase II of the Sample Exchange Evaluation (SEE) Program, a joint effort to compare analytical laboratory performance on samples from the Hanford Site's high-level waste tanks. In Phase II, the program has been expanded to include inorganic constituents in addition to radionuclides. Results from Phase II that exceeded the 20% relative percent difference criterion are identified.
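The 20% criterion refers to the relative percent difference between paired laboratory results on a split sample: RPD = |a − b| / ((a + b)/2) × 100. A one-line sketch with hypothetical values:

```python
def rpd(a, b):
    """Relative percent difference between two labs' split-sample results."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Hypothetical paired results (same units for both laboratories)
print(rpd(10.0, 12.5))   # 22.2 -> exceeds the 20% acceptance criterion
print(rpd(10.0, 11.0))   # ~9.5 -> within the criterion
```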
Shchukina, O I; Zatirakha, A V; Smolenkov, A D; Nesterenko, P N; Shpigun, O A
2015-08-21
Novel polystyrene-divinylbenzene (PS-DVB) based anion exchangers differing from each other in the structure of the branched functional ion exchange layer are prepared to investigate the role of linker and functional site on ion exchange selectivity. The proposed method of synthesis includes the obtaining of aminated PS-DVB particles by means of their acylation with following reductive amination with methylamine. Further modification of the obtained secondary aminogroups is provided by the alkylation with either 1,4-butanediol diglycidyl ether (1,4-BDDGE) or resorcinol diglycidyl ether (RDGE), which form the linkers of different hydrophobicity, and amination of terminal epoxide rings with trimethylamine (TMA), dimethylethanolamine (DMEA), methyldiethanolamine (MDEA) or triethanolamine (TEA). The variation of the structure and hydrophobicity of the linker and terminal quaternary ammonium sites in the functional layer allows the alteration of selectivity and separation efficiency of the obtained adsorbents. The ion exchange selectivity and separation efficiency of the anion exchangers are evaluated using the model mixtures of anions (F(-), HCOO(-), Cl(-), NO2(-), Br(-), NO3(-), HPO4(2-) and SO4(2-)) in potassium hydroxide eluents. The adsorbents show the decrease of selectivity with increasing the hydrophilicity of the terminal functional site. The anion exchangers having more flexible and hydrophilic 1,4-BDDGE linker provide smaller separation factors for most of the analytes as compared with RDGE-containing adsorbents with the same terminal ion exchange sites, but are characterized with higher column efficiencies and better peak symmetry for polarizable anions. In case of 1,4-BDDGE-modified anion exchangers of the particle size of 3.3μm functionalized with DMEA and MDEA the calculated values of column efficiencies for polarizable NO3(-) and Br(-) are up to 49,000 and 53,000N/m, respectively, which is almost twice higher than the values obtained for the RDGE
A Hybrid Monte Carlo-Deterministic Method for Global Binary Stochastic Medium Transport Problems
Keady, K P; Brantley, P
2010-03-04
Global deep-penetration transport problems are difficult to solve using traditional Monte Carlo techniques. In these problems, the scalar flux distribution is desired at all points in the spatial domain (global nature), and the scalar flux typically drops by several orders of magnitude across the problem (deep-penetration nature). As a result, few particle histories may reach certain regions of the domain, producing a relatively large variance in tallies in those regions. Implicit capture (also known as survival biasing or absorption suppression) can be used to increase the efficiency of the Monte Carlo transport algorithm to some degree. A hybrid Monte Carlo-deterministic technique has previously been developed by Cooper and Larsen to reduce variance in global problems by distributing particles more evenly throughout the spatial domain. This hybrid method uses an approximate deterministic estimate of the forward scalar flux distribution to automatically generate weight windows for the Monte Carlo transport simulation, avoiding the necessity for the code user to specify the weight window parameters. In a binary stochastic medium, the material properties at a given spatial location are known only statistically. The most common approach to solving particle transport problems involving binary stochastic media is to use the atomic mix (AM) approximation in which the transport problem is solved using ensemble-averaged material properties. The most ubiquitous deterministic model developed specifically for solving binary stochastic media transport problems is the Levermore-Pomraning (L-P) model. Zimmerman and Adams proposed a Monte Carlo algorithm (Algorithm A) that solves the Levermore-Pomraning equations and another Monte Carlo algorithm (Algorithm B) that is more accurate as a result of improved local material realization modeling. Recent benchmark studies have shown that Algorithm B is often significantly more accurate than Algorithm A (and therefore the L-P model
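Implicit capture, as described above, replaces analog absorption with a weight reduction at each collision so that histories survive into deep regions. A minimal 1-D slab sketch with made-up cross sections; a crude weight cutoff stands in for Russian roulette, and none of this is the Cooper-Larsen hybrid scheme itself.

```python
import math
import random

random.seed(1)

def transmission(n, sigma_t, sigma_s, thickness, implicit=True):
    """Estimate transmission through a 1-D slab (histories start at x=0
    moving right). With implicit capture, absorption never kills a history;
    the weight is multiplied by the scattering ratio at each collision."""
    total = 0.0
    for _ in range(n):
        x, mu, w = 0.0, 1.0, 1.0
        while True:
            x += mu * (-math.log(1.0 - random.random()) / sigma_t)
            if x >= thickness:              # transmitted: score the weight
                total += w
                break
            if x < 0:                       # leaked out the near face
                break
            if implicit:
                w *= sigma_s / sigma_t      # survival biasing
                if w < 1e-3:                # crude cutoff (roulette omitted)
                    break
            elif random.random() > sigma_s / sigma_t:
                break                       # analog absorption
            mu = random.uniform(-1.0, 1.0)  # isotropic scattering
    return total / n

t_est = transmission(20000, 1.0, 0.5, 3.0, implicit=True)
print(t_est)   # a bit above the uncollided exp(-3) ~ 0.05
```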
Bifunctional anion-exchange resins with improved selectivity and exchange kinetics
Alexandratos, Spiro D.; Brown, Gilbert M.; Bonnesen, Peter V.; Moyer, Bruce A.
2000-01-01
Disclosed herein are a class of anion exchange resins containing two different exchange sites with improved selectivity and sorptive capability for chemical species in solution, such as heptavalent technetium (as pertechnetate anion, TcO.sub.4.sup.-). The resins are prepared by first reacting haloalkylated crosslinked copolymer beads with a large tertiary amine in a solvent in which the resin beads can swell, followed by reaction with a second, smaller, tertiary amine to more fully complete the functionalization of the resin. The resins have enhanced selectivity, capacity, and exchange kinetics.
Determining the bias and variance of a deterministic finger-tracking algorithm.
Morash, Valerie S; van der Velden, Bas H M
2016-06-01
Finger tracking has the potential to expand haptic research and applications, as eye tracking has done in vision research. In research applications, it is desirable to know the bias and variance associated with a finger-tracking method. However, assessing the bias and variance of a deterministic method is not straightforward. Multiple measurements of the same finger position data will not produce different results, implying zero variance. Here, we present a method of assessing deterministic finger-tracking variance and bias through comparison to a non-deterministic measure. A proof-of-concept is presented using a video-based finger-tracking algorithm developed for the specific purpose of tracking participant fingers during a psychological research study. The algorithm uses ridge detection on videos of the participant's hand, and estimates the location of the right index fingertip. The algorithm was evaluated using data from four participants, who explored tactile maps using only their right index finger and all right-hand fingers. The algorithm identified the index fingertip in 99.78% of one-finger video frames and 97.55% of five-finger video frames. Although the algorithm produced slightly biased and more dispersed estimates relative to a human coder, these mean differences (x = 0.08 cm, y = 0.04 cm) and standard deviations (σx = 0.16 cm, σy = 0.21 cm) were small compared to the size of a fingertip (1.5-2.0 cm). Some example finger-tracking results are provided where corrections are made using the bias and variance estimates. PMID:26174712
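The comparison-to-reference idea reduces to taking the mean (bias) and standard deviation (dispersion) of paired differences between the deterministic tracker and a non-deterministic reference such as a human coder. A sketch with hypothetical coordinates, not the study's data:

```python
import numpy as np

# Hypothetical paired fingertip estimates (cm): algorithm vs. human coder
algo = np.array([[3.10, 2.05], [4.22, 1.98], [5.01, 3.12], [2.95, 2.44]])
human = np.array([[3.00, 2.00], [4.10, 1.95], [4.95, 3.10], [2.90, 2.40]])

diff = algo - human
bias = diff.mean(axis=0)          # systematic per-coordinate offset
sd = diff.std(axis=0, ddof=1)     # dispersion of the deterministic tracker
corrected = algo - bias           # bias-corrected estimates
print(bias, sd)
```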
Abgrall, Rémi; Congedo, Pietro Marco
2013-02-15
This paper deals with the formulation of a semi-intrusive (SI) method allowing the computation of statistics of linear and non linear PDEs solutions. The method proves very efficient in dealing with probability density functions of whatsoever form, long-term integration and discontinuities in stochastic space. Given a stochastic PDE where randomness is defined on Ω, starting from (i) a description of the solution in terms of the space variables, (ii) a numerical scheme defined for any event ω∈Ω and (iii) a (family) of random variables that may be correlated, the solution is numerically described by its conditional expectancies of point values or cell averages and its evaluation constructed from the deterministic scheme. One of the tools is a tessellation of the random space as in finite volume methods for the space variables. Then, using these conditional expectancies and the geometrical description of the tessellation, a piecewise polynomial approximation in the random variables is computed using a reconstruction method that is standard for high-order finite volume methods in physical space, except that the measure is no longer the standard Lebesgue measure but the probability measure. This reconstruction is then used to formulate a scheme on the numerical approximation of the solution from the deterministic scheme. This new approach is said to be semi-intrusive because it requires only a limited amount of modification in a deterministic solver to quantify uncertainty on the state when the solver includes uncertain variables. The effectiveness of this method is illustrated for a modified version of the Kraichnan–Orszag three-mode problem where a discontinuous pdf is associated with the stochastic variable, and for a nozzle flow with shocks. The results have been analyzed in terms of accuracy and probability measure flexibility. Finally, the importance of the probabilistic reconstruction in the stochastic space is shown on an example where the exact solution is computable, the viscous
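Two core ingredients, tessellation of the random space and conditional expectancies weighted by the probability measure, can be illustrated in one random dimension with an exactly solvable decay problem. The setup below is purely illustrative, not the paper's scheme.

```python
import numpy as np

# Random space: uniform k in [1, 3], tessellated into 8 cells; the
# "deterministic solver" here is simply the exact decay solution u(t; k)
cells = np.linspace(1.0, 3.0, 9)
t = 0.5

def u(k):
    return np.exp(-k * t)

# Conditional expectancy of u on each cell (3-point Gauss quadrature),
# assembled into the mean with the uniform probability measure as weight
nodes, wts = np.polynomial.legendre.leggauss(3)
mean = 0.0
for a, b in zip(cells[:-1], cells[1:]):
    k = 0.5 * (b - a) * nodes + 0.5 * (a + b)
    mean += np.sum(wts * u(k)) * 0.5 * (b - a) / (cells[-1] - cells[0])

exact = (np.exp(-1 * t) - np.exp(-3 * t)) / (t * (3 - 1))
print(mean, exact)   # cell-assembled mean matches the exact expectation
```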
NASA Astrophysics Data System (ADS)
Chen, J.; Kemna, A.; Hubbard, S.
2007-12-01
Cole-Cole model parameters (e.g., chargeability and time constant), extracted from spectral induced polarization (SIP) data, are being increasingly used to characterize subsurface properties. However, fitting Cole-Cole models (especially nested Cole-Cole models) to SIP data is challenging because of nonlinearity and non-uniqueness of the Cole-Cole models. This study compares conventional deterministic approaches (i.e., iterative estimation methods) with Markov chain Monte Carlo (MCMC) based stochastic approaches for estimating Cole-Cole model parameters. The results of those case studies show that although deterministic methods are able to provide single optimal solutions under certain criteria (e.g., the least squares of misfit) and require minimal computing power, they suffer from two main limitations. The first limitation is that the optimal solutions heavily depend on the choice of the initial values. Different initial values may yield different inversion results, and in many cases, the deterministic methods cannot even converge for the chosen initial values. The second limitation is that those methods provide inadequate or inaccurate information about uncertainty in the estimation. In contrast, the MCMC-based stochastic approaches are insensitive to the choice of the initial values and can provide extensive information about uncertainty in the estimation. From the large number of drawn samples, we can obtain exhaustive information about unknown parameters, such as the mean, the median, the mode, and even the entire probability distribution of each unknown Cole-Cole model parameter. Although MCMC-based stochastic methods typically require that the forward models be run thousands of times, this is not an issue given the current computer power. Through presentation of extensive synthetic and laboratory case studies, we will illustrate the benefits of the different methods when used individually and in combination with each other.
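A toy version of the stochastic approach: fix the resistivity amplitude and frequency exponent, and estimate the chargeability m and time constant τ of a Pelton-form Cole-Cole model with a plain Metropolis sampler. All values are synthetic and the sampler is deliberately minimal; real SIP inversions are considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.logspace(-2, 3, 40)                  # angular frequencies

def cole_cole(rho0, m, tau, c):
    """Pelton-form Cole-Cole complex resistivity."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c)))

# Synthetic noisy data; only (m, tau) are treated as unknown here
truth = cole_cole(100.0, 0.3, 0.05, 0.5)
data = truth + rng.normal(0, 0.2, w.size) + 1j * rng.normal(0, 0.2, w.size)

def loglike(m, tau):
    r = data - cole_cole(100.0, m, tau, 0.5)
    return -0.5 * np.sum(np.abs(r) ** 2) / 0.2 ** 2

m, tau = 0.5, 0.5                           # deliberately poor start
ll, samples = loglike(m, tau), []
for _ in range(4000):
    m2, tau2 = m + rng.normal(0, 0.02), tau + rng.normal(0, 0.02)
    if 0 < m2 < 1 and tau2 > 0:
        ll2 = loglike(m2, tau2)
        if np.log(rng.random()) < ll2 - ll:  # Metropolis acceptance
            m, tau, ll = m2, tau2, ll2
    samples.append((m, tau))
post = np.array(samples[2000:])             # discard burn-in
m_hat, tau_hat = post.mean(axis=0)
print(m_hat, tau_hat)                       # near the true (0.3, 0.05)
```

Unlike a gradient search from the same poor start, the chain also yields the spread of the posterior samples as an uncertainty estimate.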
Wang, Jianjun; Shen, Jianhua; Wu, Yucheng; Tu, Chen; Soininen, Janne; Stegen, James C.; He, Jizheng; Liu, Xingqi; Zhang, Lu; Zhang, Enlou
2013-02-28
Increasing evidence has emerged for non-random spatial distributions of microbes, but understanding of the underlying processes causing microbial assemblage variation among and within Earth's ecosystems is still lacking. For instance, some studies showed that the deterministic processes by habitat specialization are important, while other studies hold that bacterial communities are assembled by neutral forces. Here we examine the relative importance of deterministic and stochastic processes for bacterial communities from subsurface environments, as well as stream biofilm, lake water, lake sediment and soil using pyrosequencing of the 16S rRNA gene. We show that there is a general pattern in phylogenetic signal in species niches across recent evolutionary time for all studied habitats, enabling us to infer the influences of community assembly processes from patterns of phylogenetic turnover in community composition. The phylogenetic dissimilarities among habitat types were significantly higher than within them, and the communities were clustered according to their original habitat types. For communities within habitat types, the highest phylogenetic turnover rate through space was observed in subsurface environments, followed by stream biofilm on mountainsides, whereas the sediment assemblages across regional scales showed the lowest turnover rate. Quantifying phylogenetic turnover as the deviation from a null expectation suggested that measured environmental variables imposed strong selection on bacterial communities for nearly all sample groups, and for three sample groups, that spatial distance reflects unmeasured environmental variables that impose selection, as opposed to spatial isolation. Such characterization of spatial and environmental variables proved essential for proper interpretation of partial mantel results based on observed beta diversity metrics. In summary, our results clearly indicate a dominant role of deterministic processes on bacterial assemblages and
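Quantifying turnover as deviation from a null expectation is commonly done with a z-score such as betaNTI: the observed between-community mean nearest-taxon distance is compared to a null distribution built by shuffling taxa labels. The sketch below uses a random distance matrix as a stand-in for real phylogenetic distances, so its z-score is expected to be unremarkable; it only illustrates the mechanics.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical symmetric taxon-taxon distance matrix and two communities
n = 20
D = rng.uniform(0.1, 1.0, (n, n))
D = (D + D.T) / 2
np.fill_diagonal(D, 0.0)
comm_a, comm_b = np.arange(0, 8), np.arange(8, 16)

def beta_mntd(D, a, b):
    """Between-community mean nearest-taxon distance (both directions)."""
    return 0.5 * (D[np.ix_(a, b)].min(axis=1).mean() +
                  D[np.ix_(b, a)].min(axis=1).mean())

obs = beta_mntd(D, comm_a, comm_b)
null = []
for _ in range(999):
    p = rng.permutation(n)                  # shuffle taxa labels
    null.append(beta_mntd(D[np.ix_(p, p)], comm_a, comm_b))
beta_nti = (obs - np.mean(null)) / np.std(null)
print(beta_nti)   # |betaNTI| > 2 is commonly read as selection
```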
The Diffusion Model Is Not a Deterministic Growth Model: Comment on Jones and Dzhafarov (2014)
Smith, Philip L.; Ratcliff, Roger; McKoon, Gail
2015-01-01
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distribution and accuracy data exactly by a suitable choice of parameters and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov’s argument depends on enlarging the class of “diffusion” models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov’s attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research. PMID:25347314
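For reference, the standard diffusion model that the comment defends can be simulated directly; the sqrt(dt)-scaled noise term is exactly the within-trial variability that a deterministic growth model removes. Parameter values here are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

def ddm_trial(v, a, dt=0.001, sigma=1.0):
    """One diffusion-model trial: evidence starts midway between the bounds,
    drifts at rate v, and accumulates within-trial Gaussian noise until it
    hits the 0 (error) or a (correct) boundary. Returns (choice, RT in s)."""
    x, t = a / 2.0, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t

trials = [ddm_trial(v=1.0, a=2.0) for _ in range(1000)]
acc = np.mean([c for c, _ in trials])
mean_rt = np.mean([t for _, t in trials])
print(acc, mean_rt)   # accuracy near 0.88, mean decision time near 0.76 s
```

Setting sigma to 0 makes every trial identical, which is the deterministic-growth limit the comment argues should not be conflated with the diffusion model.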
Artyomov, Maxim N.; Das, Jayajit; Kardar, Mehran; Chakraborty, Arup K.
2007-01-01
Detection of different extracellular stimuli leading to functionally distinct outcomes is ubiquitous in cell biology, and is often mediated by differential regulation of positive and negative feedback loops that are a part of the signaling network. In some instances, these cellular responses are stimulated by small numbers of molecules, and so stochastic effects could be important. Therefore, we studied the influence of stochastic fluctuations on a simple signaling model with dueling positive and negative feedback loops. The class of models we have studied is characterized by single deterministic steady states for all parameter values, but the stochastic response is bimodal; a behavior that is distinctly different from models studied in the context of gene regulation. For example, when positive and negative regulation is roughly balanced, a unique deterministic steady state with an intermediate value for the amount of a downstream signaling product is found. However, for small numbers of signaling molecules, stochastic effects result in a bimodal distribution for this quantity, with neither mode corresponding to the deterministic solution; i.e., cells are in “on” or “off” states, not in some intermediate state. For a large number of molecules, the stochastic solution converges to the mean-field result. When fluctuations are important, we find that signal output scales with control parameters “anomalously” compared with mean-field predictions. The necessary and sufficient conditions for the phenomenon we report are quite common. So, our findings are expected to be of broad relevance, and suggest that stochastic effects can enable binary cellular decisions. PMID:18025473
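Stochastic effects of this kind are usually simulated with Gillespie's stochastic simulation algorithm. The birth-death process with positive feedback below is a generic illustration of the method, not the authors' dueling-feedback signaling model; the rate constants are made up.

```python
import numpy as np

rng = np.random.default_rng(4)

def gillespie(x0, t_end):
    """Gillespie SSA for a birth-death process with positive feedback:
    birth rate b(x) = k0 + k1*x^2/(K + x^2), death rate d*x."""
    k0, k1, K, d = 1.0, 20.0, 100.0, 1.0
    x, t, traj = x0, 0.0, []
    while t < t_end:
        birth = k0 + k1 * x**2 / (K + x**2)
        death = d * x
        total = birth + death
        t += rng.exponential(1.0 / total)       # time to next reaction
        x += 1 if rng.random() < birth / total else -1
        traj.append((t, x))
    return traj

traj = gillespie(x0=0, t_end=200.0)
states = np.array([x for _, x in traj])
print(states.min(), states.max())   # excursions between low and high states
```

For small molecule numbers such trajectories hop between a low and a high state even though the corresponding deterministic rate equation settles into a single fixed point from a given initial condition.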
NASA Astrophysics Data System (ADS)
Hannah, D. M.; Kantola, K.; Malcolm, I.
2012-12-01
River temperature strongly influences growth and survival in salmonid fish, which are often the target of river management strategies. Temperature is controlled by transfers of heat and water to/from the river system, with land and water management modifying exchanges and consequently thermal regime. In the UK, fisheries managers are promoting riparian forest planting as a climate change adaptation measure to reduce water temperature extremes. However, scientific understanding lags behind management and policy needs. Specifically, there is an urgent requirement to determine planting strategies that maximise the expected benefits of riparian forest in terms of reduction in maximum water temperature. Scientific knowledge is necessary to underpin conceptual and deterministic models to inform management. To address this research gap, this paper analyses high resolution (15 minute) hydrometeorological data collected over a calendar year in the western Scottish Highlands (Loch Ard) to understand the controls and processes determining river temperature dynamics under open moorland (control), semi-natural woodland and commercial forest. The research programme aims: (1) to characterise spatial and temporal variability in riparian microclimate and stream water temperature regime across forest treatments; (2) to identify the hydrological, climatological and site-specific factors affecting stream temperature; (3) to estimate the energy balance at sites representative of each forest treatment and, thus, yield physical process understanding about dominant heat exchanges driving thermal variability; and (4) to use 1-3 to predict stream temperature sensitivity under different forestry and hydroclimatological scenarios. Results indicated that inter-treatment differences in mean and maximum daily water column temperature were ordered open > semi-natural > commercial during summer, but semi-natural > commercial > open during winter. Minimum water temperature was ordered commercial > semi
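The energy-balance estimate in aim (3) amounts to summing the surface and bed heat fluxes and converting the net flux into a rate of column temperature change. A back-of-envelope sketch with hypothetical midday fluxes (shading by riparian forest would mainly reduce the shortwave term):

```python
# Water properties and a hypothetical stream depth
rho, cp, depth = 1000.0, 4182.0, 0.3          # kg/m^3, J/(kg K), m

# Hypothetical midday energy fluxes (W/m^2); positive warms the water
fluxes = {"net shortwave": 450.0, "net longwave": -80.0,
          "latent": -60.0, "sensible": 15.0, "bed conduction": -10.0}

q_net = sum(fluxes.values())                  # 315 W/m^2
dT_dt = q_net / (rho * cp * depth)            # K/s for a well-mixed column
print(round(dT_dt * 3600, 3))                 # ~0.9 K per hour
```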
Ion exchange capacity of Nafion and Nafion composites
Chen, T.Y.; Leddy, J.
2000-03-21
The ion exchange capacity of recast Nafion films and composites of Nafion and polystyrene microbeads is determined by titration. Composite formation enhances exchange capacity; exchange capacity increases with the surface area of the microbeads in the composite. For recast films, an equivalent weight of 996 ± 24 is found, whereas the lowest equivalent weight (highest exchange capacity) found for a composite is 878 ± 8. This suggests that ≳13% of the exchange sites within recast films are inaccessible for ion exchange; for 1,100 equivalent weight material, ≳25% of the sulfonates are inaccessible. Equivalent weight results are consistent with an ordered interfacial domain between Nafion and the microbeads. A fractal model based on microbead radii, microbead fraction, and interfacial domain thickness provides a predictive model for designing composites with increased exchange capacity and cation transport.
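Equivalent weight from a titration is simply polymer mass per equivalent of exchange sites neutralized. The masses and volumes below are hypothetical, chosen only so the arithmetic reproduces the reported recast-film value:

```python
# EW (g/equiv) = dry polymer mass / moles of exchange sites neutralized
mass_g = 0.500                 # hypothetical dry film mass
naoh_mol = 0.0100 * 0.0502     # hypothetical: 0.0100 M NaOH, 50.2 mL used
ew = mass_g / naoh_mol
print(round(ew))               # ~996 g/equiv, the recast-film value reported
```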
Higher Education Exchange, 2012
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2012-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Teachers' Centers Exchange Directory.
ERIC Educational Resources Information Center
Lance, Jeanne; Kreitzman, Ruth
This directory has three major sections. The foreword is a brief essay describing the purpose of the Teachers' Centers Exchange, the "network" of teachers' centers, and the reasons for compiling and publishing this directory. The second section gives descriptions of 78 teachers' centers in the Exchange's network. These descriptions highlight each…
Reimann, Robert C.; Root, Richard A.
1986-01-01
A gas-to-liquid heat exchanger system which transfers heat from a gas, generally the combustion gas of a direct-fired generator of an absorption machine, to a liquid, generally an absorbent solution. The heat exchanger system is in a counterflow fluid arrangement which creates a more efficient heat transfer.
Higher Education Exchange, 2010
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2010-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Higher Education Exchange, 2008
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2008-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…
Building Relationships through Exchange
ERIC Educational Resources Information Center
Primavera, Angi; Hall, Ellen
2011-01-01
From the moment of birth, children form and develop relationships with others in their world based on exchange. Children recognize that engaging in such encounters offers them the opportunity to enter into a relationship with another individual and to nurture that relationship through the exchange of messages and gifts, items and ideas. At Boulder…
Higher Education Exchange, 2004
ERIC Educational Resources Information Center
Brown, David W., Ed; Witte, Deborah, Ed.
2004-01-01
The Higher Education Exchange is part of a movement to strengthen higher education's democratic mission and foster a more democratic culture throughout American society. Working in this tradition, the Higher Education Exchange publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic…
Optimization of Heat Exchangers
Ivan Catton
2010-10-01
The objective of this research is to develop tools to design and optimize heat exchangers (HE) and compact heat exchangers (CHE) for intermediate loop heat transport systems found in the very high temperature reactor (VHTR) and other Generation IV designs by addressing heat transfer surface augmentation and conjugate modeling. To optimize a heat exchanger, a fast-running model must be created that will allow multiple designs to be compared quickly. To model a heat exchanger, volume averaging theory, VAT, is used. VAT allows the conservation of mass, momentum and energy to be solved point by point in a 3-dimensional computer model of a heat exchanger. The end product of this project is a computer code that can predict an optimal configuration for a heat exchanger given only a few constraints (input fluids, size, cost, etc.). Because the VAT computer code can model characteristics (pumping power, temperatures, and cost) of heat exchangers more quickly than traditional CFD or experiment, every geometric parameter can be optimized simultaneously. Using design of experiments, DOE, and genetic algorithms, GA, to optimize the results of the computer code will improve heat exchanger design.
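A genetic algorithm of the kind proposed can be sketched against a toy surrogate cost. The function below merely mimics the trade-off between pumping power, thermal resistance and size for two made-up fin parameters; it is not the VAT model, which would supply the cost evaluation in the real workflow.

```python
import random

random.seed(5)

# Toy surrogate: pumping power grows with fin height and density, thermal
# resistance falls with surface area, and a size penalty bounds the design
def cost(pitch, height):
    pumping = height / pitch ** 2
    resistance = pitch / height
    return pumping + 10.0 * resistance + 0.5 * height

def evolve(pop, generations=60):
    for _ in range(generations):
        pop.sort(key=lambda g: cost(*g))
        parents = pop[: len(pop) // 2]               # truncation selection
        children = [(max(0.5, p + random.gauss(0, 0.2)),
                     max(0.5, h + random.gauss(0, 0.2)))
                    for p, h in parents]             # Gaussian mutation
        pop = parents + children                     # elitist replacement
    return min(pop, key=lambda g: cost(*g))

pop = [(random.uniform(0.5, 10), random.uniform(0.5, 10)) for _ in range(40)]
best = evolve(pop)
print(best, cost(*best))
```

Because each cost evaluation is cheap, the GA can afford thousands of evaluations, which is the same argument made for the fast-running VAT code over full CFD.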
Higher Education Exchange, 2005
ERIC Educational Resources Information Center
Brown, David W., Ed; Witte, Deborah, Ed.
2005-01-01
The "Higher Education Exchange" is part of a movement to strengthen higher education's democratic mission and foster a more democratic culture throughout American society. Working in this tradition, the "Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic…
ERIC Educational Resources Information Center
Moseley, Christine
2003-01-01
In this activity, teachers in one state create and share an "exchange box" of environmental and cultural items with students of another state. The Environmental Exchange Box activity enables teachers to improve students' skills in scientific inquiry and develop attitudes and values conducive to science learning such as wonder, curiosity, and…
Higher Education Exchange, 2011
ERIC Educational Resources Information Center
Brown, David W., Ed.; Witte, Deborah, Ed.
2011-01-01
"Higher Education Exchange" publishes case studies, analyses, news, and ideas about efforts within higher education to develop more democratic societies. Contributors to this issue of the "Higher Education Exchange" examine whether institutions of higher learning are doing anything to increase the capacity of citizens to shape their future.…