Sample records for maximum open probability

  1. Inhibition of the calcium channel by intracellular protons in single ventricular myocytes of the guinea-pig.

    PubMed Central

    Kaibara, M; Kameyama, M

    1988-01-01

    1. The inhibitory effects of intracellular protons (Hi+) on the L-type Ca2+ channel activity were investigated in single ventricular myocytes of guinea-pigs by using the patch-clamp method in the open-cell-attached patch configuration, where 'run down' of the channel was partially prevented. 2. Hi+ reduced the unitary Ba2+ current of the Ca2+ channel by 10-20% without changing the maximum slope conductance. 3. Hi+ did not alter the number of channels in patches containing one or two channels. 4. Hi+ markedly reduced the mean current normalized by the unitary current, which gave the open-state probability multiplied by the number of channels in the patch. The dose-response curve between Hi+ and the open-state probability indicated half-maximum inhibition at pHi 6.6 and an apparent Hill coefficient of 1. 5. Hi+ shifted both the steady-state activation and inactivation curves in a negative direction by 10-15 mV, and the effects were reversible. 6. Hi+ did not affect the fast open-closed kinetics represented by the C-C-O scheme, apart from increasing the slow time constant of the closed time. 7. Hi+ increased the percentage of blank sweeps and reduced that of non-blank sweeps resulting in a decreased probability of channel opening. 8. Photo-oxidation with Rose Bengal abolished the reducing effect of Hi+ on the open-state probability (Po) in two out of ten experiments, suggesting the possible involvement of histidine residues in the Hi+ effect. 9. The above results indicate that Hi+ inhibits the Ba2+ current mainly by affecting the slow gating mechanism of the channel. PMID:2855346
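
    A minimal Python sketch of two quantities used above: NPo estimated as the mean patch current divided by the unitary current (point 4), and a Hill inhibition curve with the reported half-maximal pHi of 6.6 and coefficient of 1; the test pH values are arbitrary.

      def npo_from_currents(mean_current, unitary_current):
          """Open probability times channel count (N*Po), estimated as the
          ratio of mean patch current to unitary current."""
          return mean_current / unitary_current

      def hill_inhibition(ph_i, ph_half=6.6, n_hill=1.0):
          """Fraction of Po remaining at intracellular pH ph_i for a Hill
          curve: half-maximal inhibition at pHi 6.6, coefficient 1."""
          h, h_half = 10.0 ** -ph_i, 10.0 ** -ph_half
          return 1.0 / (1.0 + (h / h_half) ** n_hill)

      for ph in (7.4, 7.0, 6.6, 6.0):
          print(f"pHi {ph}: relative Po = {hill_inhibition(ph):.2f}")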

  2. Phase Transition for the Large-Dimensional Contact Process with Random Recovery Rates on Open Clusters

    NASA Astrophysics Data System (ADS)

    Xue, Xiaofeng

    2016-12-01

    In this paper we are concerned with the contact process with random recovery rates on open clusters of bond percolation on Z^d. Let ξ be a random variable such that P(ξ ≥ 1) = 1, which ensures E[1/ξ] < +∞; we then assign i.i.d. copies of ξ to the vertices as the random recovery rates. Assuming that each edge is open with probability p and that the infection can spread only through open edges, we obtain that lim sup_{d→+∞} λ_d ≤ λ_c = 1/(p E[1/ξ]), where λ_d is the critical value of the process on Z^d, i.e., the maximum of the infection rates with which the infection dies out with probability one when only the origin is infected at t = 0. To prove this main result, we show that the following phase transition occurs. Assuming that ⌈log d⌉ vertices are infected at t = 0, where these vertices can be located anywhere, then when the infection rate λ > λ_c the process survives with high probability as d → +∞, while when λ < λ_c the process dies out by time O(log d) with high probability.
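
    The critical value above, λ_c = 1/(p E[1/ξ]), can be estimated numerically for any recovery-rate law; a short sketch with a hypothetical ξ uniform on [1, 3] and p = 0.5.

      import numpy as np

      rng = np.random.default_rng(0)

      def critical_value(p, xi_samples):
          """lambda_c = 1 / (p * E[1/xi]), estimated by Monte Carlo over
          samples of the recovery-rate variable xi (xi >= 1 a.s.)."""
          return 1.0 / (p * np.mean(1.0 / xi_samples))

      xi = rng.uniform(1.0, 3.0, size=1_000_000)
      print(critical_value(0.5, xi))   # E[1/xi] = ln(3)/2, so ~3.64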

  3. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  4. Entropy generation in biophysical systems

    NASA Astrophysics Data System (ADS)

    Lucia, U.; Maino, G.

    2013-03-01

    Recently, in theoretical biology and in biophysical engineering the entropy production has been verified to approach asymptotically its maximum rate, by using the probability of individual elementary modes distributed in accordance with the Boltzmann distribution. The basis of this approach is the hypothesis that the entropy production rate is maximum at the stationary state. In the present work, this hypothesis is explained and motivated, starting from the entropy generation analysis. This latter quantity is obtained from the entropy balance for open systems considering the lifetime of the natural real process. The Lagrangian formalism is introduced in order to develop an analytical approach to the thermodynamic analysis of the open irreversible systems. The stationary conditions of the open systems are thus obtained in relation to the entropy generation and the least action principle. Consequently, the considered hypothesis is analytically proved and it represents an original basic approach in theoretical and mathematical biology and also in biophysical engineering. It is worth remarking that the present results show that entropy generation not only increases but increases as fast as possible.

  5. Open-loop-feedback control of serum drug concentrations: pharmacokinetic approaches to drug therapy.

    PubMed

    Jelliffe, R W

    1983-01-01

    Recent developments to optimize open-loop-feedback control of drug dosage regimens, generally applicable to pharmacokinetically oriented therapy with many drugs, involve computation of patient-individualized strategies for obtaining desired serum drug concentrations. Analyses of past therapy are performed by least squares, extended least squares, and maximum a posteriori probability Bayesian methods of fitting pharmacokinetic models to serum level data. Future possibilities for truly optimal open-loop-feedback therapy with full Bayesian methods, and conceivably for optimal closed-loop therapy in such data-poor clinical situations, are also discussed. Implementation of these various therapeutic strategies, using automated, locally controlled infusion devices, has also been achieved in prototype form.
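
    A minimal sketch of the maximum a posteriori (MAP) Bayesian fitting mentioned above, for a hypothetical one-compartment intravenous-bolus model; the dose, serum levels, noise level and priors are illustrative, not taken from the record.

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_posterior(theta, t, c_obs, sigma, prior_mu, prior_sd, dose=100.0):
          """MAP objective: weighted least squares on serum levels plus a
          Gaussian prior penalty on log(V) and log(k).
          Model: C(t) = (dose / V) * exp(-k * t)."""
          log_v, log_k = theta
          c_pred = (dose / np.exp(log_v)) * np.exp(-np.exp(log_k) * t)
          data_term = np.sum((c_obs - c_pred) ** 2 / sigma ** 2)
          prior_term = np.sum((theta - prior_mu) ** 2 / prior_sd ** 2)
          return 0.5 * (data_term + prior_term)

      t = np.array([1.0, 2.0, 4.0, 8.0])            # hours post dose
      c_obs = np.array([4.2, 3.6, 2.5, 1.2])        # hypothetical levels, mg/L
      prior_mu = np.array([np.log(20.0), np.log(0.15)])
      fit = minimize(neg_log_posterior, x0=prior_mu,
                     args=(t, c_obs, 0.3, prior_mu, np.array([0.5, 0.5])))
      print(np.exp(fit.x))                          # MAP estimates of V, k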

  6. Bell-shaped calcium-response curves of Ins(1,4,5)P3- and calcium-gated channels from endoplasmic reticulum of cerebellum

    NASA Astrophysics Data System (ADS)

    Bezprozvanny, Ilya; Watras, James; Ehrlich, Barbara E.

    1991-06-01

    RELEASE of calcium from intracellular stores occurs by two pathways, an inositol 1,4,5-trisphosphate (InsP3)-gated channel1-3 and a calcium-gated channel (ryanodine receptor)4-6. Using specific antibodies, both receptors were found in Purkinje cells of cerebellum7,8. We have now compared the functional properties of the channels corresponding to the two receptors by incorporating endoplasmic reticulum vesicles from canine cerebellum into planar bilayers. InsP3-gated channels were observed most frequently. Another channel type was activated by adenine nucleotides or caffeine, inhibited by ruthenium red, and modified by ryanodine, characteristics of the ryanodine receptor/channel6. The open probability of both channel types displayed a bell-shaped curve for dependence on calcium. For the InsP3-gated channel, the maximum probability of opening occurred at 0.2 µM free calcium, with sharp decreases on either side of the maximum. Maximum activity for the ryanodine receptor/channel was maintained between 1 and 100 µM calcium. Thus, within the physiological range of cytoplasmic calcium, the InsP3-gated channel itself allows positive feedback and then negative feedback for calcium release, whereas the ryanodine receptor/channel behaves solely as a calcium-activated channel. The existence in the same cell of two channels with different responses to calcium and different ligand sensitivities provides a basis for complex patterns of intracellular calcium regulation.
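
    A toy model of the bell-shaped dependence described above, built as the product of an activating and an inhibitory Hill term; the constants are hypothetical, chosen only so that the peak falls near the reported 0.2 µM.

      import numpy as np

      def bell_po(ca, k_act=0.1e-6, k_inh=0.4e-6, n=2.0):
          """Open probability rising with calcium (activation) and then
          falling (inhibition); peak at sqrt(k_act * k_inh) = 0.2 uM."""
          act = ca ** n / (ca ** n + k_act ** n)
          inh = k_inh ** n / (ca ** n + k_inh ** n)
          return act * inh

      ca = np.logspace(-8, -4, 400)                 # 10 nM to 100 uM, molar
      po = bell_po(ca)
      print(f"peak near {ca[np.argmax(po)] * 1e6:.2f} uM free calcium")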

  7. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
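
    A sketch of the conditional-probability split described above, for one row with one continuous and one ordinal indicator; the means, SDs, correlation and thresholds are hypothetical (OpenMx itself is R software; this is an independent numerical illustration).

      import numpy as np
      from scipy.stats import norm

      def row_likelihood(y_cont, ord_cat, mu, sd, rho, cutpoints):
          """L = pdf(continuous) * P(latent ordinal variable falls in its
          category's threshold interval | continuous value), using the
          bivariate-normal conditional distribution."""
          p_cont = norm.pdf(y_cont, mu[0], sd[0])
          cond_mean = mu[1] + rho * sd[1] / sd[0] * (y_cont - mu[0])
          cond_sd = sd[1] * np.sqrt(1.0 - rho ** 2)
          bounds = np.concatenate(([-np.inf], cutpoints, [np.inf]))
          lo, hi = bounds[ord_cat], bounds[ord_cat + 1]
          p_ord = norm.cdf(hi, cond_mean, cond_sd) - norm.cdf(lo, cond_mean, cond_sd)
          return p_cont * p_ord

      print(row_likelihood(1.2, 1, mu=(0.0, 0.0), sd=(1.0, 1.0), rho=0.4,
                           cutpoints=np.array([-0.5, 0.8])))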

  8. Cluster membership probability: polarimetric approach

    NASA Astrophysics Data System (ADS)

    Medhi, Biman J.; Tamura, Motohide

    2013-04-01

    Interstellar polarimetric data of the six open clusters Hogg 15, NGC 6611, NGC 5606, NGC 6231, NGC 5749 and NGC 6250 have been used to estimate the membership probability for the stars within them. For proper-motion member stars, the membership probability estimated using the polarimetric data is in good agreement with the proper-motion cluster membership probability. However, for proper-motion non-member stars, the membership probability estimated by the polarimetric method is in total disagreement with the proper-motion cluster membership probability. The inconsistencies in the determined memberships may be because of the fundamental differences between the two methods of determination: one is based on stellar proper motion in space and the other is based on selective extinction of the stellar output by the asymmetric aligned dust grains present in the interstellar medium. The results and analysis suggest that the scatter of the Stokes vectors q (per cent) and u (per cent) for the proper-motion member stars depends on the interstellar and intracluster differential reddening in the open cluster. It is found that this method could be used to estimate the cluster membership probability if we have additional polarimetric and photometric information for a star to identify it as a probable member/non-member of a particular cluster, such as the maximum wavelength value (λmax), the unit weight error of the fit (σ1), the dispersion in the polarimetric position angles (ε̄), reddening (E(B - V)) or the differential intracluster reddening (ΔE(B - V)). This method could also be used to estimate the membership probability of known member stars having no membership probability as well as to resolve disagreements about membership among different proper-motion surveys.

  9. Voltage-dependent K+ channel gating and voltage sensor toxin sensitivity depend on the mechanical state of the lipid membrane.

    PubMed

    Schmidt, Daniel; MacKinnon, Roderick

    2008-12-09

    Voltage-dependent K(+) (Kv) channels underlie action potentials through gating conformational changes that are driven by membrane voltage. In this study of the paddle chimera Kv channel, we demonstrate that the rate of channel opening, the voltage dependence of the open probability, and the maximum achievable open probability depend on the lipid membrane environment. The activity of the voltage sensor toxin VSTx1, which interferes with voltage-dependent gating by partitioning into the membrane and binding to the channel, also depends on the membrane. Membrane environmental factors that influence channel function are divisible into two general categories: lipid compositional and mechanical state. The mechanical state can have a surprisingly large effect on the function of a voltage-dependent K(+) channel, including its pharmacological interaction with voltage sensor toxins. The dependence of VSTx1 activity on the mechanical state of the membrane leads us to hypothesize that voltage sensor toxins exert their effect by perturbing the interaction forces that exist between the channel and the membrane.
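
    One standard way to quantify these observations is a Boltzmann fit of open probability against voltage, whose plateau parameter is the maximum achievable open probability; a sketch with hypothetical data.

      import numpy as np
      from scipy.optimize import curve_fit

      def boltzmann_po(v, po_max, v_half, slope):
          """Po(V) = Po_max / (1 + exp(-(V - V1/2) / k)); Po_max < 1
          captures a membrane-limited maximum open probability."""
          return po_max / (1.0 + np.exp(-(v - v_half) / slope))

      v = np.linspace(-80.0, 60.0, 15)                  # mV, hypothetical
      po = boltzmann_po(v, 0.8, -20.0, 9.0)
      po += np.random.default_rng(1).normal(0.0, 0.02, v.size)
      params, _ = curve_fit(boltzmann_po, v, po, p0=(1.0, 0.0, 10.0))
      print(params)                  # recovered Po_max, V1/2, slope factor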

  10. Voltage-dependent K+ channel gating and voltage sensor toxin sensitivity depend on the mechanical state of the lipid membrane

    PubMed Central

    Schmidt, Daniel; MacKinnon, Roderick

    2008-01-01

    Voltage-dependent K+ (Kv) channels underlie action potentials through gating conformational changes that are driven by membrane voltage. In this study of the paddle chimera Kv channel, we demonstrate that the rate of channel opening, the voltage dependence of the open probability, and the maximum achievable open probability depend on the lipid membrane environment. The activity of the voltage sensor toxin VSTx1, which interferes with voltage-dependent gating by partitioning into the membrane and binding to the channel, also depends on the membrane. Membrane environmental factors that influence channel function are divisible into two general categories: lipid compositional and mechanical state. The mechanical state can have a surprisingly large effect on the function of a voltage-dependent K+ channel, including its pharmacological interaction with voltage sensor toxins. The dependence of VSTx1 activity on the mechanical state of the membrane leads us to hypothesize that voltage sensor toxins exert their effect by perturbing the interaction forces that exist between the channel and the membrane. PMID:19050073

  11. Simulation study of traffic car accidents at a single lane roundabout

    NASA Astrophysics Data System (ADS)

    Echab, H.; Lakouari, N.; Ez-Zahraouy, H.; Benyoussef, A.

    2016-07-01

    In this paper, using the Nagel-Schreckenberg model, we numerically investigate the probability Pac of car accidents occurring among entering and circulating vehicles at a single-lane roundabout under extended open boundary conditions. The roundabout consists of N on-ramps (and, respectively, off-ramps). The boundary is controlled by the injecting rates α1 and α2 and the extracting rate β. The simulation results show that, depending on the injecting rates, car accidents are more likely to happen when the capacity of the rotary reaches its maximum. Moreover, we found that large values of the rotary size L and of the exit probability Pexit improve safety and reduce accidents, whereas the use of indicators and increases of β and/or N raise the car-accident probability.
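
    For reference, one update step of the Nagel-Schreckenberg model used above; this sketch runs on a closed ring rather than the paper's open-boundary roundabout geometry, and all parameters are illustrative.

      import numpy as np

      def nasch_step(pos, vel, v_max, p_slow, length, rng):
          """One NaSch update: accelerate, brake to the headway, random
          slowdown with probability p_slow, then move (periodic ring)."""
          order = np.argsort(pos)
          pos, vel = pos[order], vel[order]
          gaps = (np.roll(pos, -1) - pos - 1) % length   # empty cells ahead
          vel = np.minimum(vel + 1, v_max)               # 1. accelerate
          vel = np.minimum(vel, gaps)                    # 2. avoid collision
          vel = np.where(rng.random(vel.size) < p_slow,
                         np.maximum(vel - 1, 0), vel)    # 3. random braking
          return (pos + vel) % length, vel               # 4. move

      rng = np.random.default_rng(0)
      pos = np.sort(rng.choice(100, size=20, replace=False))
      vel = np.zeros(20, dtype=int)
      for _ in range(100):
          pos, vel = nasch_step(pos, vel, 5, 0.3, 100, rng)
      print(f"mean speed after relaxation: {vel.mean():.2f} cells/step")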

  12. The investigation of the lateral interaction effects on traffic flow behavior under open boundaries

    NASA Astrophysics Data System (ADS)

    Bouadi, M.; Jetto, K.; Benyoussef, A.; El Kenz, A.

    2017-11-01

    In this paper, a traffic flow system with open boundaries is studied, taking into account lateral interaction with spatial defects. For a random distribution of defects, if the vehicle velocities are weakly correlated, the traffic phases can be predicted from the corresponding inflow and outflow functions. Conversely, if the vehicle velocities are strongly correlated, a phase segregation appears inside the system's bulk, which induces the appearance of the maximum current. This velocity correlation depends mainly on the defect densities and the probabilities of lateral deceleration. For a compact distribution of defects, however, the traffic phases can be predicted from the inflow at the system entrance, the inflow entering the defect zone, and the outflow function.

  13. PMP Documents-HDSC/OWP

    Science.gov Websites

    Hydrometeorological Report No. 39, Probable Maximum Precipitation in the Hawaiian Islands (1963); Hydrometeorological Report No. 41, Probable Maximum and TVA Precipitation over the Tennessee River Basin above Chattanooga (1965); Hydrometeorological Report No. 46, Probable Maximum Precipitation, Mekong River Basin (1970).

  14. Temperature and Voltage Coupling to Channel Opening in Transient Receptor Potential Melastatin 8 (TRPM8)

    PubMed Central

    Raddatz, Natalia; Castillo, Juan P.; Gonzalez, Carlos; Alvarez, Osvaldo; Latorre, Ramon

    2014-01-01

    Expressed in somatosensory neurons of the dorsal root and trigeminal ganglion, the transient receptor potential melastatin 8 (TRPM8) channel is a Ca2+-permeable cation channel activated by cold, voltage, phosphatidylinositol 4,5-bisphosphate, and menthol. Although TRPM8 channel gating has been characterized at the single channel and macroscopic current levels, there is currently no consensus regarding the extent to which temperature and voltage sensors couple to the conduction gate. In this study, we extended the range of voltages where TRPM8-induced ionic currents were measured and made careful measurements of the maximum open probability the channel can attain at different temperatures by means of fluctuation analysis. The first direct measurements of TRPM8 channel temperature-driven conformational rearrangements provided here suggest that temperature alone is able to open the channel and that the opening reaction is voltage-independent. Voltage is a partial activator of TRPM8 channels, because absolute open probability values measured with fully activated voltage sensors are less than 1, and they decrease as temperature rises. By unveiling the fast temperature-dependent deactivation process, we show that TRPM8 channel deactivation is well described by a double exponential time course. The fast and slow deactivation processes are temperature-dependent with enthalpy changes of 27.2 and 30.8 kcal mol−1. The overall Q10 for the closing reaction is about 33. A three-tiered allosteric model containing four voltage sensors and four temperature sensors can account for the complex deactivation kinetics and coupling between voltage and temperature sensor activation and channel opening. PMID:25352597
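
    A minimal sketch of two fitted quantities reported above: a double-exponential deactivation time course and the Q10 of a closing rate; the example rates are hypothetical, chosen to reproduce a Q10 of about 33.

      import numpy as np

      def double_exp(t, a_fast, tau_fast, a_slow, tau_slow):
          """Double-exponential deactivation (tail-current) time course."""
          return a_fast * np.exp(-t / tau_fast) + a_slow * np.exp(-t / tau_slow)

      def q10(rate_1, rate_2, temp_1, temp_2):
          """Q10 = (rate_2 / rate_1) ** (10 / (T2 - T1)), temperatures in C."""
          return (rate_2 / rate_1) ** (10.0 / (temp_2 - temp_1))

      t = np.linspace(0.0, 30.0, 7)                     # ms
      print(double_exp(t, 1.0, 2.0, 0.5, 12.0))         # fast + slow decay
      print(q10(1.0, 33.0, 15.0, 25.0))                 # -> 33.0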

  15. Estimation of submarine mass failure probability from a sequence of deposits with age dates

    USGS Publications Warehouse

    Geist, Eric L.; Chaytor, Jason D.; Parsons, Thomas E.; ten Brink, Uri S.

    2013-01-01

    The empirical probability of submarine mass failure is quantified from a sequence of dated mass-transport deposits. Several different techniques are described to estimate the parameters for a suite of candidate probability models. The techniques, previously developed for analyzing paleoseismic data, include maximum likelihood and Type II (Bayesian) maximum likelihood methods derived from renewal process theory and Monte Carlo methods. The estimated mean return time from these methods, unlike estimates from a simple arithmetic mean of the center age dates and standard likelihood methods, includes the effects of age-dating uncertainty and of open time intervals before the first and after the last event. The likelihood techniques are evaluated using Akaike’s Information Criterion (AIC) and Akaike’s Bayesian Information Criterion (ABIC) to select the optimal model. The techniques are applied to mass transport deposits recorded in two Integrated Ocean Drilling Program (IODP) drill sites located in the Ursa Basin, northern Gulf of Mexico. Dates of the deposits were constrained by regional bio- and magnetostratigraphy from a previous study. Results of the analysis indicate that submarine mass failures in this location occur primarily according to a Poisson process in which failures are independent and return times follow an exponential distribution. However, some of the model results suggest that submarine mass failures may occur quasiperiodically at one of the sites (U1324). The suite of techniques described in this study provides quantitative probability estimates of submarine mass failure occurrence, for any number of deposits and age uncertainty distributions.
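
    A sketch of the simplest candidate model above, the Poisson (exponential return-time) process, with the open intervals before the first and after the last dated deposit entering the likelihood as censored survival terms; the interval data are hypothetical.

      import numpy as np
      from scipy.optimize import minimize_scalar

      def neg_log_like(rate, intervals, open_before, open_after):
          """Complete inter-event times contribute pdf terms
          rate * exp(-rate * t); open intervals contribute exp(-rate * t)."""
          ll = np.sum(np.log(rate) - rate * intervals)
          ll -= rate * (open_before + open_after)
          return -ll

      intervals = np.array([12.0, 7.5, 20.0, 9.0])      # kyr, hypothetical
      res = minimize_scalar(neg_log_like, bounds=(1e-6, 1.0), method='bounded',
                            args=(intervals, 5.0, 8.0))
      print(f"mean return time ~ {1.0 / res.x:.1f} kyr")
      print(f"AIC = {2.0 + 2.0 * neg_log_like(res.x, intervals, 5.0, 8.0):.2f}")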

  16. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    NASA Astrophysics Data System (ADS)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix and other methods. To address the problems posed by the large data volumes encountered in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. Tests on a synthetic model and on real data from Vinton Dome yield improved results, demonstrating that the improved inversion algorithm is effective and feasible. The designed parallel algorithm also performs better than other CUDA implementations, with a maximum speedup of more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger data volumes, and the new analysis method is practical.
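
    For reference, the two scalability measures named above, multi-GPU speedup and multi-GPU efficiency, with hypothetical timings.

      def speedup(t_one_gpu, t_n_gpus):
          """Speedup of an n-GPU run relative to a single-GPU run."""
          return t_one_gpu / t_n_gpus

      def efficiency(t_one_gpu, t_n_gpus, n_gpus):
          """Parallel efficiency: speedup per GPU (1.0 would be ideal)."""
          return speedup(t_one_gpu, t_n_gpus) / n_gpus

      for n, t in [(2, 130.0), (4, 72.0)]:              # seconds, hypothetical
          print(n, speedup(240.0, t), efficiency(240.0, t, n))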

  17. Deep sediment resuspension and thick nepheloid layer generation by open-ocean convection

    NASA Astrophysics Data System (ADS)

    Durrieu de Madron, X.; Ramondenc, S.; Berline, L.; Houpert, L.; Bosse, A.; Martini, S.; Guidi, L.; Conan, P.; Curtil, C.; Delsaut, N.; Kunesch, S.; Ghiglione, J. F.; Marsaleix, P.; Pujo-Pay, M.; Séverin, T.; Testor, P.; Tamburini, C.

    2017-03-01

    The Gulf of Lions in the northwestern Mediterranean is one of the few sites around the world ocean exhibiting deep open-ocean convection. Based on a 6-year-long (2009-2015) time series from a mooring in the convection region, shipborne measurements from repeated cruises between 2012 and 2015, and glider measurements, we report evidence of the formation of a thick bottom nepheloid layer, coincident with deep sediment resuspension induced by bottom-reaching convection events. This bottom nepheloid layer, which reaches a maximum thickness of more than 2000 m in the center of the convection region, probably results from the action of cyclonic eddies that are formed during the convection period and can persist within their core while they travel through the basin. The residence time of this bottom nepheloid layer appears to be less than a year. In situ measurements of suspended particle size further indicate that the bottom nepheloid layer is primarily composed of aggregates between 100 and 1000 µm in diameter, probably constituted of fine silts. Bottom-reaching open-ocean convection, as well as the deep dense shelf water cascading that occurred concurrently in some years, leads to recurring deep sediment resuspension episodes. These are key mechanisms that control the concentration and characteristics of the suspended particulate matter in the basin, and in turn affect the bathypelagic biological activity.

  18. Gating kinetics of batrachotoxin-modified Na+ channels in the squid giant axon. Voltage and temperature effects.

    PubMed Central

    Correa, A M; Bezanilla, F; Latorre, R

    1992-01-01

    The gating kinetics of batrachotoxin-modified Na+ channels were studied in outside-out patches of axolemma from the squid giant axon by means of the cut-open axon technique. Single channel kinetics were characterized at different membrane voltages and temperatures. The probability of channel opening (Po) as a function of voltage was well described by a Boltzmann distribution with an equivalent number of gating particles of 3.58. The voltage at which the channel was open 50% of the time was a function of [Na+] and temperature. A decrease in the internal [Na+] induced a shift to the right of the Po vs. V curve, suggesting the presence of an integral negative fixed charge near the activation gate. An increase in temperature decreased Po, indicating a stabilization of the closed configuration of the channel and also a decrease in entropy upon channel opening. Probability density analysis of dwell times in the closed and open states of the channel at 0 degrees C revealed the presence of three closed and three open states. The slowest open kinetic component constituted only a small fraction of the total number of transitions and became negligible at voltages greater than -65 mV. Adjacent interval analysis showed that there is no correlation in the duration of successive open and closed events. Consistent with this analysis, maximum likelihood estimation of the rate constants for nine different single-channel models produced a preferred model (model 1) having a linear sequence of closed states and two open states emerging from the last closed state. The effect of temperature on the rate constants of model 1 was studied. An increase in temperature increased all rate constants; the shift in Po would be the result of an increase in the closing rates predominant over the change in the opening rates. The temperature study also provided the basis for building an energy diagram for the transitions between channel states. PMID:1318096
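
    A sketch of the dwell-time probability density analysis mentioned above: maximum likelihood fitting of an exponential mixture to simulated dwell times (two components here for brevity; the paper resolves three open and three closed states).

      import numpy as np
      from scipy.optimize import minimize

      def neg_log_like(params, t):
          """Two-component exponential mixture for dwell times;
          params = [logit of weight, log tau_fast, log tau_slow]."""
          w = 1.0 / (1.0 + np.exp(-params[0]))
          tau1, tau2 = np.exp(params[1]), np.exp(params[2])
          pdf = w / tau1 * np.exp(-t / tau1) + (1 - w) / tau2 * np.exp(-t / tau2)
          return -np.sum(np.log(pdf))

      rng = np.random.default_rng(2)
      t = np.concatenate([rng.exponential(0.5, 700),    # fast component, ms
                          rng.exponential(5.0, 300)])   # slow component, ms
      fit = minimize(neg_log_like, x0=np.array([0.0, 0.0, 1.0]), args=(t,))
      w = 1.0 / (1.0 + np.exp(-fit.x[0]))
      print(w, np.exp(fit.x[1]), np.exp(fit.x[2]))      # weight, tau1, tau2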

  19. Maximum predictive power and the superposition principle

    NASA Technical Reports Server (NTRS)

    Summhammer, Johann

    1994-01-01

    In quantum physics the direct observables are probabilities of events. We ask how observed probabilities must be combined to achieve what we call maximum predictive power. According to this concept the accuracy of a prediction must only depend on the number of runs whose data serve as input for the prediction. We transform each probability to an associated variable whose uncertainty interval depends only on the amount of data and strictly decreases with it. We find that for a probability which is a function of two other probabilities maximum predictive power is achieved when linearly summing their associated variables and transforming back to a probability. This recovers the quantum mechanical superposition principle.
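
    The abstract does not state the transformation explicitly; one standard choice with exactly the stated property is the variance-stabilizing transform χ(p) = 2 arcsin(√p), whose estimate has standard deviation close to 1/√N regardless of p, as this sketch checks numerically.

      import numpy as np

      rng = np.random.default_rng(3)

      def chi(p):
          """Variance-stabilizing transform of a probability estimate."""
          return 2.0 * np.arcsin(np.sqrt(p))

      n_runs, n_trials = 20_000, 400
      for p in (0.1, 0.5, 0.9):
          p_hat = rng.binomial(n_trials, p, n_runs) / n_trials
          print(p, np.std(chi(p_hat)), 1.0 / np.sqrt(n_trials))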

  20. Impacts of savanna trees on forage quality for a large African herbivore

    PubMed Central

    De Kroon, Hans; Prins, Herbert H. T.

    2008-01-01

    Recently, cover of large trees in African savannas has rapidly declined due to elephant pressure, frequent fires and charcoal production. The reduction in large trees could have consequences for large herbivores through a change in forage quality. In Tarangire National Park, in Northern Tanzania, we studied the impact of large savanna trees on forage quality for wildebeest by collecting samples of dominant grass species in open grassland and under and around large Acacia tortilis trees. Grasses growing under trees had a much higher forage quality than grasses from the open field, indicated by a more favourable leaf/stem ratio and higher protein and lower fibre concentrations. Analysing the grass leaf data with a linear programming model indicated that large savanna trees could be essential for the survival of wildebeest, the dominant herbivore in Tarangire. Due to the high fibre content and low nutrient and protein concentrations of grasses from the open field, maximum fibre intake is reached before nutrient requirements are satisfied. All requirements can only be satisfied by combining forage from open grassland with forage from either under or around tree canopies. Forage quality was also higher around dead trees than in the open field, so forage quality does not decline immediately after trees die, which explains why the negative effects of reduced tree numbers probably go unnoticed at first. In conclusion, our results suggest that continued destruction of large trees could affect future numbers of large herbivores in African savannas, and better protection of large trees is probably necessary to sustain high animal densities in these ecosystems. PMID:18309522
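
    A toy version of the linear programming argument above: minimize total intake subject to a protein floor and a fibre ceiling; all compositions, requirements and the cap on under-canopy forage are hypothetical, not the study's data.

      import numpy as np
      from scipy.optimize import linprog

      protein = np.array([60.0, 120.0])   # g/kg DM: open field, under trees
      fibre = np.array([750.0, 550.0])    # g/kg DM
      res = linprog(c=[1.0, 1.0],                        # minimize intake (kg)
                    A_ub=[[-protein[0], -protein[1]],    # protein >= 900 g/day
                          [fibre[0], fibre[1]]],         # fibre <= 8000 g/day
                    b_ub=[-900.0, 8000.0],
                    bounds=[(0.0, None), (0.0, 4.0)])    # scarce tree forage
      print(res.x)   # kg/day from open field vs under trees, e.g. [7, 4]
      # Open-field grass alone would need 900/60 = 15 kg, carrying
      # 15 * 750 = 11250 g fibre and violating the fibre ceiling.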

  1. Hidden Markov analysis of mechanosensitive ion channel gating.

    PubMed

    Khan, R Nazim; Martinac, Boris; Madsen, Barry W; Milne, Robin K; Yeo, Geoffrey F; Edeson, Robert O

    2005-02-01

    Patch clamp data from the large conductance mechanosensitive channel (MscL) in E. coli were studied with the aim of developing a strategy for statistical analysis based on hidden Markov models (HMMs) and determining the number of conductance levels of the channel, together with mean current, mean dwell time and equilibrium probability of occupancy for each level. The models incorporated state-dependent white noise and moving average adjustment for filtering, with maximum likelihood parameter estimates obtained using an EM (expectation-maximisation) based iteration. Adjustment for filtering was included as it could be expected that the electronic filter used in recording would have a major effect on obviously brief intermediate conductance level sojourns. Preliminary data analysis revealed that the brevity of intermediate level sojourns caused difficulties in assignment of data points to levels as a result of over-estimation of noise variances. When reasonable constraints were placed on these variances using the better determined noise variances for the closed and fully open levels, idealisation anomalies were eliminated. Nevertheless, simulations suggested that mean sojourn times for the intermediate levels were still considerably over-estimated, and that recording bandwidth was a major limitation; improved results were obtained with higher bandwidth data (10 kHz bandwidth, sampled at 25 kHz). The simplest model consistent with these data had four open conductance levels, intermediate levels being approximately 20%, 51% and 74% of fully open. The mean lifetime at the fully open level was about 1 ms; estimates for the three intermediate levels were 54-92 µs, probably still over-estimates.
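
    A minimal sketch of the likelihood computation underlying such HMM fitting: the scaled forward algorithm with state-dependent Gaussian noise (the moving-average filter adjustment is omitted); the two-state model and all numbers are hypothetical.

      import numpy as np
      from scipy.stats import norm

      def forward_loglik(y, trans, means, sds, init):
          """Log-likelihood of a current trace under a Gaussian-emission
          HMM; rows of trans sum to 1, means/sds give the state-dependent
          current level and noise."""
          alpha = init * norm.pdf(y[0], means, sds)
          loglik = np.log(alpha.sum())
          alpha /= alpha.sum()
          for obs in y[1:]:
              alpha = (alpha @ trans) * norm.pdf(obs, means, sds)
              s = alpha.sum()
              loglik += np.log(s)
              alpha /= s
          return loglik

      trans = np.array([[0.99, 0.01], [0.02, 0.98]])    # closed <-> open
      print(forward_loglik(np.array([0.1, 0.0, 2.9, 3.1, 2.8]), trans,
                           means=np.array([0.0, 3.0]),  # pA
                           sds=np.array([0.2, 0.3]),
                           init=np.array([0.5, 0.5])))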

  2. Proton transport behavior through the influenza A M2 channel: insights from molecular simulation.

    PubMed

    Chen, Hanning; Wu, Yujie; Voth, Gregory A

    2007-11-15

    The structural properties of the influenza A virus M2 transmembrane channel in dimyristoylphosphatidylcholine bilayer for each of the four protonation states of the proton-gating His-37 tetrad and their effects on proton transport for this low-pH activated, highly proton-selective channel are studied by classical molecular dynamics with the multistate empirical valence-bond (MS-EVB) methodology. The excess proton permeation free energy profile and maximum ion conductance calculated from the MS-EVB simulation data combined with the Poisson-Nernst-Planck theory indicates that the triply protonated His-37 state is the most likely open state via a significant side-chain conformational change of the His-37 tetrad. This proposed open state of M2 has a calculated proton permeation free energy barrier of 7 kcal/mol and a maximum conductance of 53 pS compared to the experimental value of 6 pS. By contrast, the maximum conductance for Na(+) is calculated to be four orders of magnitude lower, in reasonable agreement with the experimentally observed proton selectivity. The pH value to activate the channel opening is estimated to be 5.5 from dielectric continuum theory, which is also consistent with experimental results. This study further reveals that the Ala-29 residue region is the primary binding site for the antiflu drug amantadine (AMT), probably because that domain is relatively spacious and hydrophobic. The presence of AMT is calculated to reduce the proton conductance by 99.8% due to a significant dehydration penalty of the excess proton in the vicinity of the channel-bound AMT.

  3. Open Quantum Walks with Noncommuting Jump Operators

    NASA Astrophysics Data System (ADS)

    Caballar, Roland Cristopher; Petruccione, Francesco; Sinayskiy, Ilya

    2014-03-01

    We examine homogeneous open quantum walks along a line, wherein each forward step is due to one quantum jump operator, and each backward step due to another quantum jump operator. We assume that these two quantum jump operators do not commute with each other. We show that if the system has N internal degrees of freedom, for particular forms of these quantum jump operators, we can obtain exact probability distributions which fall into two distinct classes, namely Gaussian distributions and solitonic distributions. We also show that it is possible for a maximum of 2 solitonic distributions to be present simultaneously in the system. Finally, we consider applications of these classes of jump operators in quantum state preparation and quantum information. We acknowledge support from the National Institute for Theoretical Physics (NITheP).
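
    For reference, one step of a homogeneous open quantum walk on the line, with a hypothetical noncommuting pair of jump operators satisfying the completeness condition; the probability at each site is the trace of its internal density matrix.

      import numpy as np

      theta = 0.3
      B_plus = np.array([[np.cos(theta), 0.0], [0.0, np.sin(theta)]])
      B_minus = np.array([[0.0, np.cos(theta)], [np.sin(theta), 0.0]])
      # Completeness: B+^dag B+ + B-^dag B- = identity (real matrices here)
      assert np.allclose(B_plus.T @ B_plus + B_minus.T @ B_minus, np.eye(2))

      def oqw_step(rho):
          """rho maps site -> 2x2 internal density matrix; forward jumps
          use B_plus, backward jumps use B_minus."""
          new = {}
          for x, r in rho.items():
              new[x + 1] = new.get(x + 1, np.zeros((2, 2))) + B_plus @ r @ B_plus.T
              new[x - 1] = new.get(x - 1, np.zeros((2, 2))) + B_minus @ r @ B_minus.T
          return new

      rho = {0: np.eye(2) / 2.0}                        # start at the origin
      for _ in range(50):
          rho = oqw_step(rho)
      print(sum(np.trace(r) for r in rho.values()))     # stays 1.0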

  4. Effects of variability in probable maximum precipitation patterns on flood losses

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul

    2018-05-01

    The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or falls below selected uncertainty factors in flood loss estimation, and whether the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution against other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure and vulnerability, contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in analyses of probable maximum flood losses.

  5. Probability analysis for consecutive-day maximum rainfall for Tiruchirapalli City (south India, Asia)

    NASA Astrophysics Data System (ADS)

    Sabarish, R. Mani; Narasimhan, R.; Chandhru, A. R.; Suribabu, C. R.; Sudharsan, J.; Nithiyanantham, S.

    2017-05-01

    In the design of irrigation and other hydraulic structures, evaluating the magnitude of extreme rainfall for a specific probability of occurrence is of much importance. The capacity of such structures is usually designed to cater to the probability of occurrence of extreme rainfall during their lifetime. In this study, an extreme value analysis of rainfall for Tiruchirapalli City in Tamil Nadu was carried out using 100 years of rainfall data and statistical methods. The best-fit probability distribution was evaluated for 1, 2, 3, 4 and 5 days of continuous maximum rainfall, with the goodness of fit evaluated using the Chi-square test. The results of the goodness-of-fit tests indicate that the log-Pearson type III distribution is the overall best fit for the 1-day maximum rainfall and the consecutive 2-, 3-, 4- and 5-day maximum rainfall series of Tiruchirapalli. As a reliability check, the forecast maximum rainfall for the selected return periods was compared with the results of the plotting-position method.
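
    A sketch of the plotting-position comparison mentioned above, using the Weibull formula P = m/(n + 1) and return period T = 1/P on a hypothetical annual-maximum series.

      import numpy as np

      def weibull_return_periods(annual_max):
          """Exceedance probability P = m/(n+1) for the m-th largest value
          and return period T = 1/P (years)."""
          x = np.sort(annual_max)[::-1]
          m = np.arange(1, x.size + 1)
          p = m / (x.size + 1.0)
          return x, p, 1.0 / p

      series = np.array([95, 120, 80, 150, 110, 70, 130, 90, 105, 85])  # mm
      for x, p, t in zip(*weibull_return_periods(series)):
          print(f"{x} mm: P = {p:.2f}, T = {t:.1f} yr")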

  6. Sill intrusion in volcanic calderas: implications for vent opening probability

    NASA Astrophysics Data System (ADS)

    Giudicepietro, Flora; Macedonio, Giovanni; Martini, Marcello; D'Auria, Luca

    2017-04-01

    Calderas show peculiar behaviors, with remarkable dynamic processes that do not often culminate in eruptions. Observations and studies conducted in recent decades have shown that the most common cause of unrest in calderas is magma intrusion, in particular the intrusion of sills at shallow depths. Monogenetic cones with large areal dispersion are quite common in calderas, suggesting that susceptibility analysis based on geological features is not strictly suitable for estimating the vent opening probability there. In general, the opening of a new eruptive vent can be regarded as a rock failure process. The stress field in the rocks that surround and top the magmatic reservoirs plays an important role in causing rock failure and in creating the path that magma can follow towards the surface. In this conceptual framework, we approach the problem of constraining the probability of vent opening in volcanic calderas through the study of the stress field produced by the intrusion of magma, in particular by the intrusion of a sill. We simulate the intrusion of a sill free to expand radially, with shape and dimensions that vary with time. The intrusion process is controlled by the elastic response of the rock plate above the sill, which bends because of the intrusion, and by gravity, which drives the magma towards the zones where the sill is thinner. We calculated the stress field in the rock plate above the sill and found that, at the bottom of this plate, the maximum tensile stress is concentrated at the front of the sill and spreads radially with it over time. For this reason, we think that the front of the spreading sill is prone to opening eruptive vents. The stress intensity is also relatively high in the central area of the sill, but there, at the base of the rock plate, the stress is compressive. Under isothermal conditions the stress soon reaches its maximum value (over a time interval that depends on the model parameters) and then decreases during the intrusion. If, however, we consider the effect of magma cooling, with temperature decreasing and viscosity increasing over time, the stress in the rock above the sill gradually increases and becomes higher than in the isothermal case. To investigate the role of the physical properties of the magma and of the rock above the sill in generating the stress field, we carried out simulations varying the magma viscosity and the rock rigidity, and found that high-viscosity magma produces relatively high stress intensity, as does high rock rigidity.

  7. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    PubMed

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
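
    The article's general formula is not reproduced in this record; as an independent cross-check, the maximum Pc for a two-interval forced-choice task can be estimated by Monte Carlo with a likelihood-ratio (optimal) observer for arbitrary distributions.

      import numpy as np
      from scipy.stats import norm, uniform

      rng = np.random.default_rng(4)

      def max_pc_2afc(f_signal, f_noise, n=200_000):
          """Optimal observer picks the interval with the larger likelihood
          ratio f_signal(x)/f_noise(x); ties score one half."""
          xs = f_signal.rvs(size=n, random_state=rng)
          xn = f_noise.rvs(size=n, random_state=rng)
          with np.errstate(divide='ignore'):
              ls = f_signal.pdf(xs) / f_noise.pdf(xs)
              ln = f_signal.pdf(xn) / f_noise.pdf(xn)
          return np.mean((ls > ln) + 0.5 * (ls == ln))

      # Equal-variance Gaussian, d' = 1: closed form Phi(1/sqrt(2)) ~ 0.760
      print(max_pc_2afc(norm(1.0, 1.0), norm(0.0, 1.0)))
      # Overlapping uniforms, where no Gaussian closed form applies:
      print(max_pc_2afc(uniform(0.5, 1.0), uniform(0.0, 1.0)))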

  8. Estimating rate constants from single ion channel currents when the initial distribution is known.

    PubMed

    The, Yu-Kai; Fernandez, Jacqueline; Popa, M Oana; Lerche, Holger; Timmer, Jens

    2005-06-01

    Single ion channel currents can be analysed by hidden or aggregated Markov models. A classical result from Fredkin et al. (Proceedings of the Berkeley conference in honor of Jerzy Neyman and Jack Kiefer, vol I, pp 269-289, 1985) states that the maximum number of identifiable parameters is bounded by 2n(o)n(c), where n(o) and n(c) denote the number of open and closed states, respectively. We show that this bound can be overcome when the probabilities of the initial distribution are known and the data consist of several sweeps.

  9. Open Markov Processes and Reaction Networks

    NASA Astrophysics Data System (ADS)

    Swistock Pollard, Blake Stephen

    We begin by defining the concept of 'open' Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain 'boundary' states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow of probability vanishes between all pairs of states. External couplings which fix the probabilities of boundary states can maintain such systems in non-equilibrium steady states in which non-zero probability currents flow. We show that these non-equilibrium steady states minimize a quadratic form which we call 'dissipation.' This is closely related to Prigogine's principle of minimum entropy production. We bound the rate of change of the entropy of a driven non-equilibrium steady state relative to the underlying equilibrium state in terms of the flow of probability through the boundary of the process. We then consider open Markov processes as morphisms in a symmetric monoidal category by splitting up their boundary states into certain sets of 'inputs' and 'outputs.' Composition corresponds to gluing the outputs of one such open Markov process onto the inputs of another so that the probability flowing out of the first process is equal to the probability flowing into the second. Tensoring in this category corresponds to placing two such systems side by side. We construct a 'black-box' functor characterizing the behavior of an open Markov process in terms of the space of possible steady state probabilities and probability currents along the boundary. The fact that this is a functor means that the behavior of a composite open Markov process can be computed by composing the behaviors of the open Markov processes from which it is composed. We prove a similar black-boxing theorem for reaction networks whose dynamics are given by the non-linear rate equation. Along the way we describe a more general category of open dynamical systems where composition corresponds to gluing together open dynamical systems.
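
    A minimal numerical sketch of a driven open Markov process as described above: boundary probabilities are clamped by external couplings and the interior steady state is solved from the master equation; the four-state chain and all rates are hypothetical.

      import numpy as np

      rates = np.array([[0.0, 1.0, 0.0, 0.0],   # rates[i, j] = rate j -> i
                        [2.0, 0.0, 1.0, 0.0],
                        [0.0, 3.0, 0.0, 1.0],
                        [0.0, 0.0, 2.0, 0.0]])
      H = rates - np.diag(rates.sum(axis=0))     # generator, columns sum to 0

      boundary, interior = [0, 3], [1, 2]
      p = np.zeros(4)
      p[boundary] = [0.3, 0.1]                   # clamped boundary probabilities
      # Steady state on the interior: H[int, int] p_int = -H[int, bnd] p_bnd
      A = H[np.ix_(interior, interior)]
      b = -H[np.ix_(interior, boundary)] @ p[boundary]
      p[interior] = np.linalg.solve(A, b)
      print("steady state:", p)
      # Net internal probability flow into each boundary state; the external
      # coupling must supply the opposite, so nonzero values mean a driven NESS.
      print("boundary currents:", (H @ p)[boundary])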

  10. Shallow magma accumulation at Kīlauea Volcano, Hawai‘i, revealed by microgravity surveys

    USGS Publications Warehouse

    Johnson, Daniel J.; Eggers, Albert A.; Bagnardi, Marco; Battaglia, Maurizio; Poland, Michael P.; Miklius, Asta

    2010-01-01

    Using microgravity data collected at Kīlauea Volcano, Hawai‘i (United States), between November 1975 and January 2008, we document significant mass increase beneath the east margin of Halema‘uma‘u Crater, within Kīlauea's summit caldera. Surprisingly, there was no sustained uplift accompanying the mass accumulation. We propose that the positive gravity residual in the absence of significant uplift is indicative of magma accumulation in void space (probably a network of interconnected cracks), which may have been created when magma withdrew from the summit in response to the 29 November 1975 M = 7.2 south flank earthquake. Subsequent refilling documented by gravity represents a gradual recovery from that earthquake. A new eruptive vent opened at the summit of Kīlauea in 2008 within a few hundred meters of the positive gravity residual maximum, probably tapping the reservoir that had been accumulating magma since the 1975 earthquake.

  11. Shallow magma accumulation at Kilauea Volcano, Hawai'i, revealed by microgravity surveys

    USGS Publications Warehouse

    Johnson, Daniel J.; Eggers, Albert A.; Bagnardi, Marco; Battaglia, Maurizio; Poland, Michael P.; Miklius, Asta

    2010-01-01

    Using microgravity data collected at Kilauea Volcano, Hawai'i (United States), between November 1975 and January 2008, we document significant mass increase beneath the east margin of Halema'uma'u Crater, within Kilauea's summit caldera. Surprisingly, there was no sustained uplift accompanying the mass accumulation. We propose that the positive gravity residual in the absence of significant uplift is indicative of magma accumulation in void space (probably a network of interconnected cracks), which may have been created when magma withdrew from the summit in response to the 29 November 1975 M = 7.2 south flank earthquake. Subsequent refilling documented by gravity represents a gradual recovery from that earthquake. A new eruptive vent opened at the summit of Kilauea in 2008 within a few hundred meters of the positive gravity residual maximum, probably tapping the reservoir that had been accumulating magma since the 1975 earthquake.

  12. Net Surface Flux Budget Over Tropical Oceans Estimated from the Tropical Rainfall Measuring Mission (TRMM)

    NASA Astrophysics Data System (ADS)

    Fan, Tai-Fang

  13. Magneto - Optical Imaging of Superconducting MgB2 Thin Films

    NASA Astrophysics Data System (ADS)

    Hummert, Stephanie Maria

  14. Boron Carbide Filled Neutron Shielding Textile Polymers

    NASA Astrophysics Data System (ADS)

    Manzlak, Derrick Anthony

  15. Parallel Unstructured Grid Generation for Complex Real-World Aerodynamic Simulations

    NASA Astrophysics Data System (ADS)

    Zagaris, George

  16. Polymeric Radiation Shielding for Applications in Space: Polyimide Synthesis and Modeling of Multi-Layered Polymeric Shields

    NASA Astrophysics Data System (ADS)

    Schiavone, Clinton Cleveland

  17. Processing and Conversion of Algae to Bioethanol

    NASA Astrophysics Data System (ADS)

    Kampfe, Sara Katherine

  18. The Development of the CALIPSO LiDAR Simulator

    NASA Astrophysics Data System (ADS)

    Powell, Kathleen A.

  19. Exploring a Novel Approach to Technical Nuclear Forensics Utilizing Atomic Force Microscopy

    NASA Astrophysics Data System (ADS)

    Peeke, Richard Scot

  20. Modeling of Critically-Stratified Gravity Flows: Application to the Eel River Continental Shelf, Northern California

    NASA Astrophysics Data System (ADS)

    Scully, Malcolm E.

  1. Production of Cyclohexylene-Containing Diamines in Pursuit of Novel Radiation Shielding Materials

    NASA Astrophysics Data System (ADS)

    Bate, Norah G.

  2. Development of Boron-Containing Polyimide Materials and Poly(arylene Ether)s for Radiation Shielding

    NASA Astrophysics Data System (ADS)

    Collins, Brittani May

  3. Magnetization Dynamics and Anisotropy in Ferromagnetic/Antiferromagnetic Ni/NiO Bilayers

    NASA Astrophysics Data System (ADS)

    Petersen, Andreas

  4. Assessing future vent opening locations at the Somma-Vesuvio volcanic complex: 2. Probability maps of the caldera for a future Plinian/sub-Plinian event with uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Tadini, A.; Bevilacqua, A.; Neri, A.; Cioni, R.; Aspinall, W. P.; Bisson, M.; Isaia, R.; Mazzarini, F.; Valentine, G. A.; Vitale, S.; Baxter, P. J.; Bertagnini, A.; Cerminara, M.; de Michieli Vitturi, M.; Di Roberto, A.; Engwell, S.; Esposti Ongaro, T.; Flandoli, F.; Pistolesi, M.

    2017-06-01

    In this study, we combine reconstructions of volcanological data sets and inputs from a structured expert judgment to produce a first long-term probability map of vent opening location for the next Plinian or sub-Plinian eruption of Somma-Vesuvio. In the past, the volcano has exhibited significant spatial variability in vent location; this can exert a significant control on where hazards materialize (particularly for pyroclastic density currents). The new vent opening probability mapping has been performed through (i) development of spatial probability density maps with Gaussian kernel functions for different data sets and (ii) weighted linear combination of these spatial density maps. The epistemic uncertainties affecting these data sets were quantified explicitly with expert judgments and implemented following a doubly stochastic approach. Various elicitation pooling metrics and subgroupings of experts and target questions were tested to evaluate the robustness of outcomes. Our findings indicate that (a) Somma-Vesuvio vent opening probabilities are distributed inside the whole caldera, with a peak corresponding to the area of the present crater, but with more than 50% probability that the next vent could open elsewhere within the caldera; (b) there is a mean probability of about 30% that the next vent will open west of the present edifice; (c) there is a mean probability of about 9.5% that the next medium-large eruption will enlarge the present Somma-Vesuvio caldera; and (d) there is a nonnegligible probability (mean value of 6-10%) that the next Plinian or sub-Plinian eruption will have its initial vent opening outside the present Somma-Vesuvio caldera.
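
    A toy numpy sketch of the two-step construction described above, with hypothetical vent coordinates, bandwidths, and expert weights: Gaussian kernel densities for each data set, followed by a weighted linear combination.

      import numpy as np

      def kernel_density(gx, gy, points, bandwidth):
          """Spatial probability density on a grid from point data (e.g. past vents)."""
          d = np.zeros_like(gx)
          for (px, py) in points:
              d += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * bandwidth ** 2))
          return d / d.sum()

      gx, gy = np.meshgrid(np.linspace(0, 10, 120), np.linspace(0, 10, 120))
      density_maps = [kernel_density(gx, gy, [(4, 5), (5, 5)], 1.0),   # e.g. past vents
                      kernel_density(gx, gy, [(3, 6), (6, 4)], 2.0)]   # e.g. fault data
      weights = [0.6, 0.4]   # would come from the structured expert judgment
      vent_map = sum(w * m for w, m in zip(weights, density_maps))
      print(vent_map.sum())  # weighted combination still integrates to 1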

  5. Dynamical analysis of Grover's search algorithm in arbitrarily high-dimensional search spaces

    NASA Astrophysics Data System (ADS)

    Jin, Wenliang

    2016-01-01

    We discuss at length the dynamical behavior of Grover's search algorithm for which all the Walsh-Hadamard transformations contained in this algorithm are exposed to their respective random perturbations inducing the augmentation of the dimension of the search space. We give the concise and general mathematical formulations for approximately characterizing the maximum success probabilities of finding a unique desired state in a large unsorted database and their corresponding numbers of Grover iterations, which are applicable to the search spaces of arbitrary dimension and are used to answer a salient open problem posed by Grover (Phys Rev Lett 80:4329-4332, 1998).
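
    For orientation, the unperturbed baseline that these formulations generalize is the textbook Grover iteration, whose success probability after k steps is sin^2((2k+1) theta) with theta = arcsin(1/sqrt(N)); the sketch below computes it (ideal dynamics only, not the paper's perturbed case).

      import numpy as np

      def grover(N, k=None):
          """Success probability after k iterations on an unsorted database of size N."""
          theta = np.arcsin(1.0 / np.sqrt(N))          # rotation angle per iteration
          if k is None:
              k = int(np.floor(np.pi / (4 * theta)))   # near-optimal iteration count
          return k, np.sin((2 * k + 1) * theta) ** 2

      for N in (16, 1024, 2 ** 20):
          k, p = grover(N)
          print(f"N = {N:>8}: k = {k}, success probability = {p:.6f}")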

  6. 42 CFR 84.97 - Test for carbon dioxide in inspired gas; open- and closed-circuit apparatus; maximum allowable...

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... closed-circuit apparatus; maximum allowable limits. 84.97 Section 84.97 Public Health PUBLIC HEALTH... ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self-Contained Breathing Apparatus § 84.97 Test for carbon dioxide in inspired gas; open- and closed-circuit apparatus; maximum allowable limits. (a) Open...

  7. 42 CFR 84.97 - Test for carbon dioxide in inspired gas; open- and closed-circuit apparatus; maximum allowable...

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... closed-circuit apparatus; maximum allowable limits. 84.97 Section 84.97 Public Health PUBLIC HEALTH... ACTIVITIES APPROVAL OF RESPIRATORY PROTECTIVE DEVICES Self-Contained Breathing Apparatus § 84.97 Test for carbon dioxide in inspired gas; open- and closed-circuit apparatus; maximum allowable limits. (a) Open...

  8. High-Performance Nanocomposites Designed for Radiation Shielding in Space and an Application of GIS for Analyzing Nanopowder Dispersion in Polymer Matrixes

    NASA Astrophysics Data System (ADS)

    Auslander, Joseph Simcha

  9. Time-Resolved Magneto-Optical Imaging of Superconducting YBCO Thin Films in the High-Frequency AC Current Regime

    NASA Astrophysics Data System (ADS)

    Frey, Alexander

  10. Use of Remote Sensing to Identify Essential Habitat for Aeschynomene virginica (L.) BSP, a Threatened Tidal Freshwater Wetland Plant

    NASA Astrophysics Data System (ADS)

    Mountz, Elizabeth M.

  11. Silver-Polyimide Nanocomposite Films: Single-Stage Synthesis and Analysis of Metalized Partially-Fluorinated Polyimide BTDA/4-BDAF Prepared from Silver(I) Complexes

    NASA Astrophysics Data System (ADS)

    Abelard, Joshua Erold Robert

  12. Multifunctional Polymer Synthesis and Incorporation of Gadolinium Compounds and Modified Tungsten Nanoparticles for Improvement of Radiation Shielding for use in Outer Space

    NASA Astrophysics Data System (ADS)

    Harbert, Emily Grace

  13. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

    A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions makes it possible to use the maximum entropy method to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by the choice of propositions, the choice of constraints, and the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
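
    A minimal sketch of the maximum entropy assignment on a finite outcome set, assuming a single average-value constraint (the energies and target value are made up): the maxent solution has the exponential (Gibbs) form, and the Lagrange multiplier beta is fixed by matching the constraint.

      import numpy as np
      from scipy.optimize import brentq

      energies = np.array([0.0, 1.0, 2.0, 3.0])   # hypothetical outcome 'energies'
      target = 1.2                                # constraint on the average

      def mean(beta):
          w = np.exp(-beta * energies)
          return (energies * w).sum() / w.sum()

      beta = brentq(lambda b: mean(b) - target, -10.0, 10.0)  # Lagrange multiplier
      p = np.exp(-beta * energies)
      p /= p.sum()
      print("beta =", round(beta, 4), "p =", p.round(4), "<E> =", (p * energies).sum())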

  14. Linear prediction and single-channel recording.

    PubMed

    Carter, A A; Oswald, R E

    1995-08-01

    The measurement of individual single-channel events arising from the gating of ion channels provides a detailed data set from which the kinetic mechanism of a channel can be deduced. In many cases, the pattern of dwells in the open and closed states is very complex, and the kinetic mechanism and parameters are not easily determined. Assuming a Markov model for channel kinetics, the probability density function for open and closed time dwells should consist of a sum of decaying exponentials. One method of approaching the kinetic analysis of such a system is to determine the number of exponentials and the corresponding parameters which comprise the open and closed dwell time distributions. These can then be compared to the relaxations predicted from the kinetic model to determine, where possible, the kinetic constants. We report here the use of a linear technique, linear prediction/singular value decomposition, to determine the number of exponentials and the exponential parameters. Using simulated distributions and comparing with standard maximum-likelihood analysis, the singular value decomposition techniques provide advantages in some situations and are a useful adjunct to other single-channel analysis techniques.
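
    A minimal illustration of the SVD step on synthetic data (not the authors' code): for a uniformly sampled sum of exponentials, the numerical rank of the Hankel matrix built from the signal equals the number of exponential components.

      import numpy as np

      # Synthetic dwell-time density sampled uniformly: two decaying exponentials.
      t = np.arange(0, 20, 0.1)
      y = 1.0 * np.exp(-0.3 * t) + 0.5 * np.exp(-1.5 * t)
      y += np.random.default_rng(0).normal(0.0, 1e-3, t.size)

      # The numerical rank of the Hankel matrix of a uniformly sampled signal
      # equals the number of exponential components (the LP/SVD step).
      m = t.size // 2
      Hk = np.array([y[i:i + m] for i in range(m)])
      s = np.linalg.svd(Hk, compute_uv=False)
      print(np.round(s[:5] / s[0], 4))   # two singular values stand clear of the noise floor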

  15. A prospective study of the impact of opening a casino on gambling behaviours: 2- and 4-year follow-ups.

    PubMed

    Jacques, Christian; Ladouceur, Robert

    2006-10-01

    It is widely believed that the rate of pathological gambling is related to the accessibility and availability of gambling activities, yet few empirical studies have been conducted to evaluate this hypothesis. Using a longitudinal prospective design, the current study evaluates the impact of a casino in the Hull, Quebec region of Canada. A random sample of respondents from Hull (experimental group) and from Quebec City (comparison group) completed the South Oaks Gambling Screen (SOGS) and gambling-related questions before the opening of the Hull Casino (pretest), 1 year after the opening (posttest), and on follow-up at Years 2 and 4. Although we did observe an increase in playing casino games and in the maximum amount of money lost in 1 day's gambling 1 year after the opening of the casino, this trend was not maintained over time (2- and 4-year follow-ups). In the Hull cohort, the rate of at-risk and probable pathological gamblers and the number of criteria endorsed on the SOGS did not increase at the 2- and 4-year follow-ups. The residents' reluctance to open a local casino was generally stable over time following the casino's opening. The discussion raises different explanatory factors and focuses on the Regional Exposure Model as a potentially more applicable framework for studying the expansion of gambling.

  16. Steady-state kinetics of solitary batrachotoxin-treated sodium channels. Kinetics on a bounded continuum of polymer conformations.

    PubMed Central

    Rubinson, K A

    1992-01-01

    The underlying principles of the kinetics and equilibrium of a solitary sodium channel in the steady state are examined. Both the open and closed kinetics are postulated to result from round-trip excursions from a transition region that separates the openable and closed forms. Exponential behavior of the kinetics can have origins different from small-molecule systems. These differences suggest that the probability density functions (PDFs) that describe the time dependences of the open and closed forms arise from a distribution of rate constants. The distribution is likely to arise from a thermal modulation of the channel structure, and this provides a physical basis for the following three-variable equation: [formula; see text] Here, A0 is a scaling term, k is the mean rate constant, and sigma quantifies the Gaussian spread for the contributions of a range of effective rate constants. The maximum contribution is made by k, with rates faster and slower contributing less. (When sigma, the standard deviation of the spread, goes to zero, then p(f) = A0 e^(-kt).) The equation is applied to the single-channel steady-state probability density functions for batrachotoxin-treated sodium channels (Keller et al. 1986. J. Gen. Physiol. 88:1-23). The following characteristics are found: (a) The data for both open and closed forms of the channel are fit well by the above equation, which represents a Gaussian distribution of first-order rate processes. (b) The simple relationship [formula; see text] holds for the mean effective rate constants; equivalently, the values of P_open calculated from the k values closely agree with the P_open values found directly from the PDF data. (c) In agreement with the known behavior of voltage-dependent rate constants, the voltage dependences of the mean effective rate constants for the opening and closing of the channel are equal and opposite over the voltage range studied. That is, [formula; see text] "Bursts" are related to the well-known cage effect of solution chemistry. PMID:1312365
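
    The equations elided above ("[formula; see text]") are not recoverable from the abstract; the sketch below is an assumed reconstruction consistent with its description — a dwell-time density built from a Gaussian spread of first-order rate constants with scaling A0, mean rate k, and spread sigma — not the paper's exact expression.

      import numpy as np

      def dwell_pdf(t, A0, kbar, sigma, n=2000):
          """Dwell-time density from a Gaussian spread of first-order rate constants;
          reduces to A0*exp(-kbar*t) as sigma -> 0, as the abstract states."""
          k = np.linspace(max(kbar - 5 * sigma, 0.0), kbar + 5 * sigma, n)
          w = np.exp(-(k - kbar) ** 2 / (2 * sigma ** 2))
          w /= w.sum()
          return A0 * (w * np.exp(-np.outer(t, k))).sum(axis=1)

      t = np.linspace(0.0, 10.0, 6)
      print(dwell_pdf(t, 1.0, 1.0, 0.3).round(4))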

  17. Why does Japan use the probability method to set design flood?

    NASA Astrophysics Data System (ADS)

    Nakamura, S.; Oki, T.

    2015-12-01

    Design flood is the hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: for the Tone River, the biggest river in Japan, it is 1 in 200 years; for the Shinano River, 1 in 150 years; and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharge, and the probability is 1 in 1250 years (in the fresh water section). By contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases raise the questions: why does the method vary among countries, and why does Japan use the probability method? The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method remained in use until World War 2, but it was then replaced by the probability method because of its limitations under the specific socio-economic situation: (1) budgets were constrained by the war and the GHQ occupation, and (2) historical floods (the Makurazaki typhoon in 1945, the Kathleen typhoon in 1947, the Ione typhoon in 1948, and so on) struck Japan, broke the records of historical maximum discharge in main rivers, and made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take the socio-economic situation into account in setting the design flood, and applied them to Japanese rivers in 1958. The probability method was adopted in Japan to suit the country's specific socio-economic and natural situation during the confusion after the war.

  18. Stationary properties of maximum-entropy random walks.

    PubMed

    Dixit, Purushottam D

    2015-10-01

    Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
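
    For the unconstrained special case on a graph (an assumption here; the paper treats general state- and path-dependent constraints), the maximum path entropy transition matrix has a well-known closed form built from the Perron eigenpair of the adjacency matrix, with stationary distribution proportional to the squared eigenvector entries.

      import numpy as np

      # Adjacency matrix of a small undirected graph.
      A = np.array([[0, 1, 1, 0],
                    [1, 0, 1, 1],
                    [1, 1, 0, 1],
                    [0, 1, 1, 0]], float)
      lam, vecs = np.linalg.eigh(A)
      lam, psi = lam[-1], np.abs(vecs[:, -1])       # Perron eigenpair of A

      # P[i, j] = A[i, j] * psi[j] / (lam * psi[i]); stationary pi_i ~ psi_i**2.
      P = A * psi[None, :] / (lam * psi[:, None])
      pi = psi ** 2 / (psi ** 2).sum()
      print(P.sum(axis=1).round(6))                 # rows sum to 1
      print(np.abs(pi @ P - pi).max())              # pi is stationary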

  19. 14 CFR 440.7 - Determination of maximum probable loss.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... determine the maximum probable loss (MPL) from covered claims by a third party for bodily injury or property... licensee, or permittee, if interagency consultation may delay issuance of the MPL determination. (c... after the MPL determination is issued. Any change in financial responsibility requirements as a result...

  20. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
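
    A tiny sketch of the moment-constrained maxent estimate on a hypothetical sample: given only the sample mean and variance, the maximum entropy density is the matching Gaussian (the kind of estimate the field-theory smoothness limit recovers).

      import numpy as np

      rng = np.random.default_rng(1)
      data = rng.gamma(shape=3.0, scale=1.0, size=500)   # hypothetical finite sample
      mu, var = data.mean(), data.var()

      # Among all densities with this mean and variance, the Gaussian maximizes
      # entropy, so it is the two-moment maximum entropy estimate.
      x = np.linspace(data.min(), data.max(), 5)
      maxent = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
      print(maxent.round(4))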

  1. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

  2. Methods for estimating drought streamflow probabilities for Virginia streams

    USGS Publications Warehouse

    Austin, Samuel H.

    2014-01-01

    Maximum likelihood logistic regression model equations used to estimate drought flow probabilities for Virginia streams are presented for 259 hydrologic basins in Virginia. Winter streamflows were used to estimate the likelihood of streamflows during the subsequent drought-prone summer months. The maximum likelihood logistic regression models identify probable streamflows from 5 to 8 months in advance. More than 5 million streamflow daily values collected over the period of record (January 1, 1900 through May 16, 2012) were compiled and analyzed over a minimum 10-year (maximum 112-year) period of record. The analysis yielded the 46,704 equations with statistically significant fit statistics and parameter ranges published in two tables in this report. These model equations produce summer month (July, August, and September) drought flow threshold probabilities as a function of streamflows during the previous winter months (November, December, January, and February). Example calculations are provided, demonstrating how to use the equations to estimate probable streamflows as much as 8 months in advance.
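
    A schematic illustration with synthetic numbers (not the report's 46,704 fitted equations): a maximum likelihood logistic regression of a summer drought indicator on winter streamflow, followed by probability predictions for new winter flows.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Synthetic stand-in: does summer flow drop below a drought threshold,
      # given mean winter (Nov-Feb) streamflow?
      rng = np.random.default_rng(42)
      winter = rng.lognormal(3.0, 0.5, 300)                       # winter flow
      drought = (rng.uniform(size=300) <
                 1.0 / (1.0 + np.exp(0.05 * (winter - 60.0)))).astype(int)

      model = LogisticRegression().fit(np.log(winter)[:, None], drought)
      p = model.predict_proba(np.log([[20.0], [80.0]]))[:, 1]
      print("P(summer drought | winter flow = 20, 80):", p.round(2))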

  3. Clues to understanding cold sensation: Thermodynamics and electrophysiological analysis of the cold receptor TRPM8

    PubMed Central

    Brauchi, Sebastian; Orio, Patricio; Latorre, Ramon

    2004-01-01

    The cold and menthol receptor, TRPM8, also designated CMR1, is a member of the transient receptor potential (TRP) family of excitatory ion channels. TRPM8 is a channel activated by cold temperatures, voltage, and menthol. In this study, we characterize the cold- and voltage-induced activation of TRPM8 channel in an attempt to identify the temperature- and voltage-dependent components involved in channel activation. Under equilibrium conditions, decreasing temperature has two effects. (i) It shifts the normalized conductance vs. voltage curves toward the left, along the voltage axis. This effect indicates that the degree of order is higher when the channel is in the open configuration. (ii) It increases the maximum channel open probability, suggesting that temperature affects both voltage-dependent and -independent pathways. In the temperature range between 18°C and 25°C, large changes in enthalpy (ΔH = -112 kcal/mol) and entropy (ΔS = -384 cal/mol K) accompany the activation process. The Q10 calculated in the same temperature range is 24. This thermodynamic analysis strongly suggests that the process of opening involves large conformational changes of the channel-forming protein. Therefore, the highly temperature-dependent transition between open and closed configurations is possible because enthalpy and entropy are both large and compensate each other. Our data also demonstrate that temperature and voltage interact allosterically to enhance channel opening. PMID:15492228
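
    A first-order sketch of the equilibrium picture, using the abstract's dH and dS and an assumed gating charge z (the paper's full model couples temperature and voltage allosterically, so this two-state form is only illustrative): cooling shrinks the -T*dS term, driving dG negative and the channel open.

      import numpy as np

      R, F = 1.987e-3, 23.06          # kcal/(mol K), kcal/(mol V)
      dH, dS = -112.0, -0.384         # from the abstract; kcal/mol, kcal/(mol K)
      z = 0.9                         # assumed effective gating charge

      def p_open(V, T):
          """Two-state open probability with enthalpic, entropic and voltage terms."""
          dG = dH - T * dS - z * F * V
          return 1.0 / (1.0 + np.exp(dG / (R * T)))

      for T in (291.15, 298.15):      # 18 C and 25 C
          print(f"T = {T - 273.15:.0f} C, Po(0 mV) = {p_open(0.0, T):.3f}")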

  4. A hydroclimatological approach to predicting regional landslide probability using Landlab

    NASA Astrophysics Data System (ADS)

    Strauch, Ronda; Istanbulluoglu, Erkan; Nudurupati, Sai Siddhartha; Bandaragoda, Christina; Gasparini, Nicole M.; Tucker, Gregory E.

    2018-02-01

    We develop a hydroclimatological approach to the modeling of regional shallow landslide initiation that integrates spatial and temporal dimensions of parameter uncertainty to estimate an annual probability of landslide initiation based on Monte Carlo simulations. The physically based model couples the infinite-slope stability model with a steady-state subsurface flow representation and operates on a digital elevation model. Spatially distributed gridded data for soil properties and vegetation classification are used for parameter estimation of probability distributions that characterize model input uncertainty. Hydrologic forcing to the model is through annual maximum daily recharge to subsurface flow obtained from a macroscale hydrologic model. We demonstrate the model in a steep mountainous region in northern Washington, USA, over 2700 km². The influence of soil depth on the probability of landslide initiation is investigated through comparisons among model output produced using three different soil depth scenarios reflecting the uncertainty of soil depth and its potential long-term variability. We found elevation-dependent patterns in probability of landslide initiation that showed the stabilizing effects of forests at low elevations, an increased landslide probability with forest decline at mid-elevations (1400 to 2400 m), and soil limitation and steep topographic controls at high alpine elevations and in post-glacial landscapes. These dominant controls manifest themselves in a bimodal distribution of spatial annual landslide probability. Model testing with limited observations revealed similarly moderate model confidence for the three hazard maps, suggesting suitable use as relative hazard products. The model is available as a component in Landlab, an open-source, Python-based landscape earth systems modeling environment, and is designed to be easily reproduced utilizing HydroShare cyberinfrastructure.
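
    A hedged numpy sketch of the Monte Carlo idea — the infinite-slope factor of safety with parameters drawn from assumed distributions (illustrative ranges, not the paper's calibrated inputs, and plain numpy rather than the Landlab component):

      import numpy as np

      def p_failure(n=5000, seed=0):
          """Monte Carlo P(factor of safety < 1) for the infinite-slope model."""
          rng = np.random.default_rng(seed)
          theta = np.deg2rad(rng.uniform(25, 40, n))          # slope angle
          tphi = np.tan(np.deg2rad(rng.uniform(30, 38, n)))   # tan(friction angle)
          C = rng.uniform(2e3, 8e3, n)                        # combined cohesion, Pa
          hs = rng.uniform(0.5, 1.5, n)                       # soil depth, m
          Rw = np.clip(rng.lognormal(-1.0, 0.6, n), 0, 1)     # relative wetness from recharge
          rho_s, rho_w, g = 1600.0, 1000.0, 9.81
          FS = (C / (hs * rho_s * g * np.sin(theta) * np.cos(theta))
                + (1 - Rw * rho_w / rho_s) * tphi / np.tan(theta))
          return (FS < 1.0).mean()

      print("annual landslide probability ~", p_failure())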

  5. Open clusters in the Kepler field. II. NGC 6866

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janes, Kenneth; Hoq, Sadia; Barnes, Sydney A.

    We have developed a maximum-likelihood procedure to fit theoretical isochrones to the observed cluster color-magnitude diagrams of NGC 6866, an open cluster in the Kepler spacecraft field of view. The Markov chain Monte Carlo algorithm permits exploration of the entire parameter space of a set of isochrones to find both the best solution and the statistical uncertainties. For clusters in the age range of NGC 6866 with few, if any, red giant members, a purely photometric determination of the cluster properties is not well-constrained. Nevertheless, based on our UBVRI photometry alone, we have derived the distance, reddening, age, and metallicity of the cluster and established estimates for the binary nature and membership probability of individual stars. We derive the following values for the cluster properties: (m – M)_V = 10.98 ± 0.24, E(B – V) = 0.16 ± 0.04 (so the distance = 1250 pc), age = 705 ± 170 Myr, and Z = 0.014 ± 0.005.
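
    A toy emcee sketch of the MCMC isochrone-fitting idea, with a fabricated straight-line "isochrone" and synthetic photometry (distance modulus and reddening as the only free parameters; the real analysis also fits age and metallicity):

      import numpy as np
      import emcee

      def log_prob(params, obs_bv, obs_v, iso_bv, iso_v, sigma=0.05):
          """Toy likelihood: distance of each star to an isochrone shifted by a
          distance modulus dm and reddening ebv (with R_V = 3.1 extinction)."""
          dm, ebv = params
          if not (5.0 < dm < 15.0 and 0.0 <= ebv < 1.0):
              return -np.inf
          mbv, mv = iso_bv + ebv, iso_v + dm + 3.1 * ebv
          d2 = ((obs_bv[:, None] - mbv) ** 2 + (obs_v[:, None] - mv) ** 2).min(axis=1)
          return -0.5 * d2.sum() / sigma ** 2

      iso_bv = np.linspace(0.0, 1.2, 50)
      iso_v = 4.0 + 5.0 * iso_bv                     # fake isochrone
      rng = np.random.default_rng(0)
      obs_bv = rng.uniform(0.1, 1.2, 80)
      obs_v = 4.0 + 5.0 * obs_bv + 11.0              # fake cluster at dm = 11
      p0 = np.column_stack([11.0 + 0.1 * rng.normal(size=16),
                            0.05 + 0.01 * rng.normal(size=16)])
      sampler = emcee.EnsembleSampler(16, 2, log_prob,
                                      args=(obs_bv, obs_v, iso_bv, iso_v))
      sampler.run_mcmc(p0, 500)
      print(sampler.get_chain(flat=True, discard=200).mean(axis=0))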

  6. Photographic mark-recapture analysis of local dynamics within an open population of dolphins.

    PubMed

    Fearnbach, H; Durban, J; Parsons, K; Claridge, D

    2012-07-01

    Identifying demographic changes is important for understanding population dynamics. However, this requires long-term studies of definable populations of distinct individuals, which can be particularly challenging when studying mobile cetaceans in the marine environment. We collected photo-identification data from 19 years (1992-2010) to assess the dynamics of a population of bottlenose dolphins (Tursiops truncatus) restricted to the shallow (<7 m) waters of Little Bahama Bank, northern Bahamas. This population was known to range beyond our study area, so we adopted a Bayesian mixture modeling approach to mark-recapture to identify clusters of individuals that used the area to different extents, and we specifically estimated trends in survival, recruitment, and abundance of a "resident" population with high probabilities of identification. There was a high probability (p = 0.97) of a long-term decrease in the size of this resident population from a maximum of 47 dolphins (95% highest posterior density intervals, HPDI = 29-61) in 1996 to a minimum of just 24 dolphins (95% HPDI = 14-37) in 2009, a decline of 49% (95% HPDI = approximately 5% to approximately 75%). This was driven by low per capita recruitment (average approximately 0.02) that could not compensate for relatively low apparent survival rates (average approximately 0.94). Notably, there was a significant increase in apparent mortality (approximately 5 apparent mortalities vs. approximately 2 on average) in 1999 when two intense hurricanes passed over the study area, with a high probability (p = 0.83) of a drop below the average survival probability (approximately 0.91 in 1999; approximately 0.94, on average). As such, our mark-recapture approach enabled us to make useful inference about local dynamics within an open population of bottlenose dolphins; this should be applicable to other studies challenged by sampling highly mobile individuals with heterogeneous space use.

  7. Estimation and prediction of maximum daily rainfall at Sagar Island using best fit probability models

    NASA Astrophysics Data System (ADS)

    Mandal, S.; Choudhury, B. U.

    2015-07-01

    Sagar Island, sitting on the continental shelf of the Bay of Bengal, is one of the deltas most vulnerable to extreme rainfall-driven climatic hazards. Information on the probability of occurrence of maximum daily rainfall will be useful in devising risk management for sustaining the rainfed agrarian economy vis-a-vis food and livelihood security. Using six probability distribution models and long-term (1982-2010) daily rainfall data, we studied the probability of occurrence of annual, seasonal and monthly maximum daily rainfall (MDR) in the island. To select the best-fit distribution models for the annual, seasonal and monthly time series based on maximum rank with minimum value of the test statistics, three statistical goodness-of-fit tests were employed: the Kolmogorov-Smirnov test (K-S), the Anderson-Darling test (A²) and the Chi-square test (χ²). The best-fit probability distribution was identified from the highest overall score obtained from the three goodness-of-fit tests. Results revealed that the normal probability distribution was best fitted for annual, post-monsoon and summer season MDR, while the Lognormal, Weibull and Pearson 5 distributions were best fitted for the pre-monsoon, monsoon and winter seasons, respectively. The estimated annual MDR were 50, 69, 86, 106 and 114 mm for return periods of 2, 5, 10, 20 and 25 years, respectively. The probabilities of an annual MDR of >50, >100, >150, >200 and >250 mm were estimated as 99, 85, 40, 12 and 3 % levels of exceedance, respectively. The monsoon, summer and winter seasons exhibited comparatively higher probabilities (78 to 85 %) for MDR of >100 mm and moderate probabilities (37 to 46 %) for >150 mm. For different recurrence intervals, the percent probability of MDR varied widely across intra- and inter-annual periods. In the island, rainfall anomaly can pose a climatic threat to the sustainability of agricultural production and thus needs adequate adaptation and mitigation measures.
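
    A short scipy sketch of the fitting-and-ranking procedure on synthetic maxima (illustrative data and the Kolmogorov-Smirnov score only; the study also uses Anderson-Darling and chi-square scores):

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      mdr = rng.lognormal(4.0, 0.45, 29)       # synthetic 29-year annual MDR, mm

      # Rank candidate distributions with the Kolmogorov-Smirnov statistic.
      for name, dist in {"normal": stats.norm, "lognormal": stats.lognorm,
                         "weibull": stats.weibull_min}.items():
          params = dist.fit(mdr)
          ks = stats.kstest(mdr, dist.cdf, args=params)
          print(f"{name:9s} D = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")

      # Return level for a T-year return period from one fitted model.
      T = 25
      params = stats.lognorm.fit(mdr)
      print("25-yr MDR ~", stats.lognorm.ppf(1 - 1 / T, *params).round(1), "mm")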

  8. VizieR Online Data Catalog: Proper motions of PM2000 open clusters (Krone-Martins+, 2010)

    NASA Astrophysics Data System (ADS)

    Krone-Martins, A.; Soubiran, C.; Ducourant, C.; Teixeira, R.; Le Campion, J. F.

    2010-04-01

    We present lists of proper motions and kinematic membership probabilities in the regions of 49 open clusters or possible open clusters. The stellar proper motions were taken from the Bordeaux PM2000 catalogue. The segregation between cluster and field stars and the assignment of membership probabilities were accomplished by applying a fully automated method based on parametrisations of the probability distribution functions and genetic-algorithm optimisation heuristics combined with a derivative-based hill-climbing algorithm for the likelihood optimisation. (3 data files).

  9. A quantitative method for risk assessment of agriculture due to climate change

    NASA Astrophysics Data System (ADS)

    Dong, Zhiqiang; Pan, Zhihua; An, Pingli; Zhang, Jingting; Zhang, Jun; Pan, Yuying; Huang, Lei; Zhao, Hui; Han, Guolin; Wu, Dong; Wang, Jialin; Fan, Dongliang; Gao, Lin; Pan, Xuebiao

    2018-01-01

    Climate change has greatly affected agriculture. Agriculture faces increasing risks because of its sensitivity and vulnerability to climate change. Scientific assessment of climate change-induced agricultural risks could help in actively dealing with climate change and ensuring food security. However, quantitative assessment of risk is a difficult issue. Here, based on the IPCC assessment reports, a quantitative method for risk assessment of agriculture due to climate change is proposed. Risk is described as the product of the degree of loss and its probability of occurrence. The degree of loss can be expressed by the yield change amplitude. The probability of occurrence can be calculated by the new concept of climate change effect-accumulated frequency (CCEAF). Specific steps of this assessment method are suggested. The method is shown to be feasible and practical using spring wheat in Wuchuan County of Inner Mongolia as a test case. The results show that the fluctuation of spring wheat yield increased with the warming and drying climatic trend in Wuchuan County. For the maximum temperature increase of 88.3%, the maximum yield decrease and its probability were 3.5 and 64.6%, respectively, and the risk was 2.2%. For the maximum precipitation decrease of 35.2%, the maximum yield decrease and its probability were 14.1 and 56.1%, respectively, and the risk was 7.9%. For the combined impacts of temperature and precipitation, the maximum yield decrease and its probability were 17.6 and 53.4%, respectively, and the risk increased to 9.4%. If appropriate adaptation strategies are not adopted, the degree of loss from the negative impacts of multiple climatic factors and its probability of occurrence will both increase, and the risk will grow accordingly.
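
    The risk arithmetic can be checked directly; the short calculation below reproduces the reported products from the values in this abstract.

      # Risk = degree of loss x probability of occurrence, with the abstract's values.
      cases = {"temperature": (0.035, 0.646),
               "precipitation": (0.141, 0.561),
               "combined": (0.176, 0.534)}
      for name, (loss, prob) in cases.items():
          print(f"{name:13s} risk = {loss * prob:.1%}")
      # -> 2.3%, 7.9%, 9.4%; the temperature case prints 2.3% rather than the
      #    reported 2.2% only because the published inputs are rounded.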

  10. Probabilistic description of probable maximum precipitation

    NASA Astrophysics Data System (ADS)

    Ben Alaya, Mohamed Ali; Zwiers, Francis W.; Zhang, Xuebin

    2017-04-01

    Probable Maximum Precipitation (PMP) is the key parameter used to estimate the Probable Maximum Flood (PMF). PMP and PMF are important for dam safety and civil engineering purposes. Even though current knowledge of storm mechanisms remains insufficient to properly evaluate limiting values of extreme precipitation, PMP estimation methods are still based on deterministic considerations and give only single values. This study aims to provide a probabilistic description of PMP based on the most commonly used method, so-called moisture maximization. To this end, a probabilistic bivariate extreme-value model is proposed to address the limitations of traditional PMP estimates via moisture maximization, namely: (i) the inability to evaluate uncertainty and to provide a range of PMP values, (ii) the interpretation of a maximum of a data series as a physical upper limit, and (iii) the assumption that a PMP event has maximum moisture availability. Results from simulation outputs of the Canadian Regional Climate Model CanRCM4 over North America reveal the high uncertainties inherent in PMP estimates and the non-validity of the assumption that PMP events have maximum moisture availability. This latter assumption leads to overestimation of the PMP by an average of about 15% over North America, which may have serious implications for engineering design.
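
    For context, a minimal sketch of conventional moisture maximization, the deterministic baseline the paper starts from (hypothetical storm values; the paper's contribution is to replace this single-value calculation with a bivariate extreme-value model):

    ```python
    # Minimal sketch of conventional moisture maximization (not the paper's
    # bivariate extreme-value model). Each observed storm is scaled by the ratio
    # of the climatological maximum precipitable water to the storm's own
    # precipitable water; PMP is taken as the largest maximized value.
    # All numbers below are hypothetical.
    storms = [
        # (storm precipitation in mm, storm precipitable water in mm)
        (180.0, 45.0),
        (220.0, 60.0),
        (150.0, 38.0),
    ]
    w_max = 75.0  # climatological maximum precipitable water (mm), hypothetical

    maximized = [p * (w_max / w) for p, w in storms]
    pmp = max(maximized)
    print([round(m, 1) for m in maximized])  # moisture-maximized storms
    print(f"deterministic PMP estimate ~ {pmp:.0f} mm")
    # The paper replaces the single w_max and the max() above with a joint
    # probability model, yielding a distribution of PMP rather than one value.
    ```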

  11. Mandibular kinematic changes after unilateral cross-bite with lateral shift correction.

    PubMed

    Venancio, F; Alarcon, J A; Lenguas, L; Kassem, M; Martin, C

    2014-10-01

    The aim of this randomised prospective study was to evaluate the effects of slow maxillary expansion with expansion plates and Hyrax expanders on the kinematics of the mandible after cross-bite correction. Thirty children (15 boys and 15 girls), aged 7·1-11·8, with unilateral cross-bite and functional shift were divided into two groups: expansion plate (n = 15) and Hyrax expander (n = 15). Thirty children with normal occlusion (14 boys and 16 girls, aged 7·3-11·6) served as control group. The maximum vertical opening, lateral mandibular shift (from maximum vertical opening to maximum intercuspation, from rest position to maximum intercuspation and from maximum vertical opening to rest position) and lateral excursions were recorded before and 4 months after treatment. After treatment, the expansion plate group showed a greater lateral shift from rest position to maximum intercuspation than did the control group. The expansion plate patients also presented greater left/contralateral excursion than did the control group. Comparisons of changes after treatment in the cross-bite groups showed significant decreases in the lateral shift from the maximum vertical opening to maximum intercuspation and from the maximum vertical opening to rest position, a significant increase in the homolateral excursion and a significant decrease in the contralateral excursion in the Hyrax expander group, whereas no significant differences were found in the expansion plate group. In conclusion, the Hyrax expander showed better results than did the expansion plate. The Hyrax expander with acrylic occlusal covering significantly improved the mandibular lateral shift and normalised the range of lateral excursion. © 2014 John Wiley & Sons Ltd.

  12. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and the conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
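
    A minimal numerical sketch of the maximum entropy method of moments (my own illustration on a discretized support with two moment constraints; the article's treatment is Bayesian and more general):

    ```python
    # Minimal sketch of the maximum entropy method of moments (assumptions:
    # a bounded support discretized on a grid, two moment constraints).
    # The maxent density has the form p(x) ~ exp(-lam1*x - lam2*x^2); the
    # Lagrange multipliers are chosen so the density reproduces the targets.
    import numpy as np
    from scipy.optimize import fsolve

    x = np.linspace(-5.0, 5.0, 2001)
    dx = x[1] - x[0]
    targets = np.array([0.5, 1.25])  # target E[x], E[x^2] (hypothetical)

    def density(lam):
        w = np.exp(-lam[0] * x - lam[1] * x**2)
        return w / (w.sum() * dx)  # normalize on the grid

    def moment_gap(lam):
        p = density(lam)
        m1 = (p * x).sum() * dx
        m2 = (p * x**2).sum() * dx
        return [m1 - targets[0], m2 - targets[1]]

    lam = fsolve(moment_gap, x0=[0.0, 0.5])
    p = density(lam)
    print("multipliers:", lam)
    print("moments:", (p * x).sum() * dx, (p * x**2).sum() * dx)
    # With only first- and second-moment constraints the solution is Gaussian;
    # adding higher moments (x^3, x^4, ...) gives non-Gaussian maxent densities.
    ```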

  13. Testing the limits of optimality: the effect of base rates in the Monty Hall dilemma.

    PubMed

    Herbranson, Walter T; Wang, Shanglun

    2014-03-01

    The Monty Hall dilemma is a probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial selection, one of the nonchosen doors is opened, revealing that it is not a winner, and the player is given the choice of staying with the initial selection or switching to the other remaining door. Pigeons and humans were tested on two variants of the Monty Hall dilemma, in which one of the three doors had either a higher or a lower chance of containing the prize than did the other two options. The optimal strategy in both cases was to initially choose the lowest-probability door available and then switch away from it. Whereas pigeons learned to approximate the optimal strategy, humans failed to do so on both counts: They did not show a preference for low-probability options, and they did not consistently switch. An analysis of performance over the course of training indicated that pigeons learned to perform a sequence of responses on each trial, and that sequence was one that yielded the highest possible rate of reinforcement. Humans, in contrast, continued to vary their responses throughout the experiment, possibly in search of a more complex strategy that would exceed the maximum possible win rate.
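
    For readers unfamiliar with the puzzle, a quick simulation of the classic equal-base-rate version (my own sketch; the study's variants additionally bias one door's prior probability):

    ```python
    # Quick simulation of the classic (equal base rate) Monty Hall game,
    # illustrating why switching is optimal: staying wins 1/3, switching 2/3.
    import random

    def play(switch, n=100_000):
        wins = 0
        for _ in range(n):
            prize = random.randrange(3)
            choice = random.randrange(3)
            # Host opens a non-chosen, non-prize door.
            opened = next(d for d in range(3) if d != choice and d != prize)
            if switch:
                choice = next(d for d in range(3) if d != choice and d != opened)
            wins += (choice == prize)
        return wins / n

    print(f"stay:   {play(switch=False):.3f}")   # ~ 0.333
    print(f"switch: {play(switch=True):.3f}")    # ~ 0.667
    ```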

  14. A microcomputer program for energy assessment and aggregation using the triangular probability distribution

    USGS Publications Warehouse

    Crovelli, R.A.; Balay, R.H.

    1991-01-01

    A general risk-analysis method was developed for petroleum-resource assessment and other applications. The triangular probability distribution is used as a model with an analytic aggregation methodology based on probability theory rather than Monte-Carlo simulation. Among the advantages of the analytic method are its computational speed and flexibility, and the saving of time and cost on a microcomputer. The input into the model consists of a set of components (e.g. geologic provinces) and, for each component, three potential resource estimates: minimum, most likely (mode), and maximum. Assuming a triangular probability distribution, the mean, standard deviation, and seven fractiles (F100, F95, F75, F50, F25, F5, and F0) are computed for each component, where, for example, the probability of more than F95 is equal to 0.95. The components are aggregated by combining the means, standard deviations, and respective fractiles under three possible situations: (1) perfect positive correlation, (2) complete independence, and (3) any degree of dependence between these two polar situations. A package of computer programs named the TRIAGG system was written in the Turbo Pascal 4.0 language for performing the analytic probabilistic methodology. The system consists of a program for processing triangular probability distribution assessments and aggregations, and a separate aggregation routine for aggregating aggregations. The user's documentation and program diskette of the TRIAGG system are available from USGS Open File Services. TRIAGG requires an IBM-PC/XT/AT compatible microcomputer with 256 kbytes of main memory, MS-DOS 3.1 or later, either two diskette drives or a fixed disk, and a 132-column printer. A graphics adapter and color display are optional. © 1991.
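
    A minimal sketch of the analytic approach (assumed from the description; the original TRIAGG system was written in Turbo Pascal): triangular-distribution moments from the (minimum, mode, maximum) estimates, and aggregation of components under the two polar dependence situations:

    ```python
    # Minimal sketch of a TRIAGG-style analytic aggregation (assumed, not the
    # original code): moments of a triangular distribution and aggregation by
    # summing means, with standard deviations combined by root-sum-square
    # (independence) or simple sum (perfect positive correlation).
    import math

    def tri_mean_var(a, m, b):
        """Mean and variance of a triangular distribution on [a, b] with mode m."""
        mean = (a + m + b) / 3.0
        var = (a*a + m*m + b*b - a*m - a*b - m*b) / 18.0
        return mean, var

    # Hypothetical resource estimates for three geologic provinces (min, mode, max).
    components = [(10.0, 25.0, 60.0), (5.0, 15.0, 40.0), (20.0, 35.0, 90.0)]

    means, variances = zip(*(tri_mean_var(*c) for c in components))
    agg_mean = sum(means)
    agg_sd_indep = math.sqrt(sum(variances))              # complete independence
    agg_sd_corr = sum(math.sqrt(v) for v in variances)    # perfect positive correlation
    print(f"aggregate mean = {agg_mean:.1f}")
    print(f"aggregate sd: {agg_sd_indep:.1f} (independent) to {agg_sd_corr:.1f} (correlated)")
    ```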

  15. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given L_p norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the L_p norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight line relationship between the maximum differential entropy and the logarithm of the L_p norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
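
    The straight-line relationship admits a compact scaling argument (sketched here from general principles, not quoted from the report): the entropy maximizer under a pth-absolute-moment constraint is the generalized Gaussian family, which is closed under scaling, so

    ```latex
    \[
      f^{\ast}(x) \propto e^{-\lambda |x|^{p}}, \qquad
      h(cX) = h(X) + \ln c, \qquad
      \lVert cX \rVert_{p} = c\,\lVert X \rVert_{p}
      \;\Longrightarrow\;
      h_{\max} = \ln \lVert X \rVert_{p} + h_{p}(1),
    \]
    ```

    i.e., the maximum differential entropy is linear in the logarithm of the L_p norm with unit slope, and the intercept h_p(1) depends only on p.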

  16. A discrimination method for the detection of pneumonia using chest radiograph.

    PubMed

    Noor, Norliza Mohd; Rijal, Omar Mohd; Yunus, Ashari; Abu-Bakar, S A R

    2010-03-01

    This paper presents a statistical method for the detection of lobar pneumonia when using digitized chest X-ray films. Each region of interest was represented by a vector of wavelet texture measures which was then multiplied by the orthogonal matrix Q(2). The first two elements of the transformed vectors were shown to have a bivariate normal distribution. Misclassification probabilities were estimated using probability ellipsoids and discriminant functions. The results of this study recommend detecting pneumonia by constructing probability ellipsoids or discriminant functions using maximum energy and maximum column sum energy texture measures, for which misclassification probabilities were less than 0.15. © 2009 Elsevier Ltd. All rights reserved.

  17. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models.

    PubMed

    Butler, T; Graham, L; Estep, D; Dawson, C; Westerink, J J

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  18. Definition and solution of a stochastic inverse problem for the Manning's n parameter field in hydrodynamic models

    NASA Astrophysics Data System (ADS)

    Butler, T.; Graham, L.; Estep, D.; Dawson, C.; Westerink, J. J.

    2015-04-01

    The uncertainty in spatially heterogeneous Manning's n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented. Technical details that arise in practice by applying the framework to determine the Manning's n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of "condition" for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. This notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning's n parameter, and the effect on model predictions is analyzed.

  19. Bayes factor design analysis: Planning for compelling evidence.

    PubMed

    Schönbrodt, Felix D; Wagenmakers, Eric-Jan

    2018-02-01

    A sizeable literature exists on the use of frequentist power analysis in the null-hypothesis significance testing (NHST) paradigm to facilitate the design of informative experiments. In contrast, there is almost no literature that discusses the design of experiments when Bayes factors (BFs) are used as a measure of evidence. Here we explore Bayes Factor Design Analysis (BFDA) as a useful tool to design studies for maximum efficiency and informativeness. We elaborate on three possible BF designs, (a) a fixed-n design, (b) an open-ended Sequential Bayes Factor (SBF) design, where researchers can test after each participant and can stop data collection whenever there is strong evidence for either H1 or H0, and (c) a modified SBF design that defines a maximal sample size where data collection is stopped regardless of the current state of evidence. We demonstrate how the properties of each design (i.e., expected strength of evidence, expected sample size, expected probability of misleading evidence, expected probability of weak evidence) can be evaluated using Monte Carlo simulations and equip researchers with the necessary information to compute their own Bayesian design analyses.
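
    A minimal Monte Carlo sketch of an open-ended SBF design (assuming known-variance normal data and a normal prior on the effect; thresholds and parameters are hypothetical, and the paper's designs use richer models):

    ```python
    # Minimal sketch of a Sequential Bayes Factor (SBF) design simulation
    # (known-variance normal data, N(0, tau^2) prior on the effect,
    # hypothetical parameter values, not the paper's code).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    tau, sigma = 0.5, 1.0          # prior sd on the effect; data sd
    true_effect = 0.4              # simulated population effect
    n_min, n_max, bound = 10, 500, 10.0

    def bf10(xbar, n):
        """BF10 for H1: mu ~ N(0, tau^2) vs H0: mu = 0, given the sample mean."""
        se2 = sigma**2 / n
        return (stats.norm.pdf(xbar, 0.0, np.sqrt(tau**2 + se2))
                / stats.norm.pdf(xbar, 0.0, np.sqrt(se2)))

    def run_sbf():
        data = []
        for n in range(1, n_max + 1):
            data.append(rng.normal(true_effect, sigma))
            if n >= n_min:
                bf = bf10(np.mean(data), n)
                if bf >= bound or bf <= 1.0 / bound:
                    return n, bf
        return n_max, bf

    results = [run_sbf() for _ in range(1000)]
    ns = [n for n, _ in results]
    misleading = np.mean([bf <= 1.0 / bound for _, bf in results])
    print(f"expected sample size ~ {np.mean(ns):.0f}")
    print(f"P(misleading evidence for H0) ~ {misleading:.3f}")
    ```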

  20. Open quantum random walk in terms of quantum Bernoulli noise

    NASA Astrophysics Data System (ADS)

    Wang, Caishi; Wang, Ce; Ren, Suling; Tang, Yuling

    2018-03-01

    In this paper, we introduce an open quantum random walk, which we call the QBN-based open walk, by means of quantum Bernoulli noise, and study its properties from a random walk point of view. We prove that, with the localized ground state as its initial state, the QBN-based open walk has the same limit probability distribution as the classical random walk. We also show that the probability distributions of the QBN-based open walk include those of the unitary quantum walk recently introduced by Wang and Ye (Quantum Inf Process 15:1897-1908, 2016) as a special case.

  1. Mechanism of Tacrine Block at Adult Human Muscle Nicotinic Acetylcholine Receptors

    PubMed Central

    Prince, Richard J.; Pennington, Richard A.; Sine, Steven M.

    2002-01-01

    We used single-channel kinetic analysis to study the inhibitory effects of tacrine on human adult nicotinic receptors (nAChRs) transiently expressed in HEK 293 cells. Single channel recording from cell-attached patches revealed concentration- and voltage-dependent decreases in mean channel open probability produced by tacrine (IC50 4.6 μM at −70 mV, 1.6 μM at −150 mV). Two main effects of tacrine were apparent in the open- and closed-time distributions. First, the mean channel open time decreased with increasing tacrine concentration in a voltage-dependent manner, strongly suggesting that tacrine acts as an open-channel blocker. Second, tacrine produced a new class of closings whose duration increased with increasing tacrine concentration. Concentration dependence of closed-times is not predicted by sequential models of channel block, suggesting that tacrine blocks the nAChR by an unusual mechanism. To probe tacrine's mechanism of action we fitted a series of kinetic models to our data using maximum likelihood techniques. Models incorporating two tacrine binding sites in the open receptor channel gave dramatically improved fits to our data compared with the classic sequential model, which contains one site. Improved fits relative to the sequential model were also obtained with schemes incorporating a binding site in the closed channel, but only if it is assumed that the channel cannot gate with tacrine bound. Overall, the best description of our data was obtained with a model that combined two binding sites in the open channel with a single site in the closed state of the receptor. PMID:12198092
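
    For intuition, a sketch of the textbook sequential open-channel block relation that the open-time data follow (hypothetical rate constants; this is the classic one-site scheme the paper starts from, not its final two-site model):

    ```python
    # Illustration of why open time shortens under simple open-channel block:
    # openings end either by closing (rate alpha) or by blocker binding
    # (k_on * [B]), so the mean open time is 1 / (alpha + k_on * [B]).
    # Rates below are hypothetical.
    alpha = 1000.0      # channel closing rate, s^-1
    k_on = 5.0e7        # blocker association rate, M^-1 s^-1

    for conc_uM in (0.0, 1.0, 4.6, 10.0):
        b = conc_uM * 1e-6
        tau_open_ms = 1000.0 / (alpha + k_on * b)
        print(f"[tacrine] = {conc_uM:>4.1f} uM -> mean open time ~ {tau_open_ms:.2f} ms")
    # Concentration-dependent *closed* times, as reported in the abstract, are
    # not predicted by this scheme; that is what motivated the multi-site models.
    ```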

  2. Influences on formation and dissipation of high arctic fogs during summer and autumn and their interaction with aerosol

    NASA Astrophysics Data System (ADS)

    Nilsson, E. Douglas; Bigg, E. Keith

    1996-04-01

    Radiosondes established that the air in the near-surface mixed layer was very frequently near saturation during the International Arctic Ocean Expedition 1991, which must have been a large factor in the frequent occurrence of fogs. Fogs were divided into groups of summer, transition and winter types depending on whether the advecting air, the ice surface or the sea surface, respectively, was warmest and the source of heat. The probability of summer and transition fogs increased at air temperatures near 0°C, while winter fogs had a maximum probability of occurrence at air temperatures between -5 and -10°C. Advection from the open sea was the primary cause of the summer group, the probability of occurrence being high during the first day's travel and appreciable until the end of 3 days. Transition fogs reached their maximum probability of formation on the 4th day of advection. Radiative heating and cooling of the ice both appeared to have influenced summer and transition fogs, while winter fogs were strongly favoured by the long-wave radiation loss under clear-sky conditions. Another cause of winter fogs was the heat and moisture source of open leads. Wind speed was also a factor in the probability of fog formation, summer and transition fogs being favoured by winds between 2 and 6 m s-1, while winter fogs were favoured by wind speeds of only 1 m s-1. Concentrations of fog drops were generally lower than those of the cloud condensation nuclei active at 0.1%, having a median of 3 cm-3. While a well-defined modal diameter of 20-25 μm was found in all fogs, a second transient mode at about 100 μm was also frequently observed. The observation of fog bows with supernumerary arcs pointed to the existence of fog droplets as large as 200-300 µm in diameter at fog top. It is suggested that the large drops originated from droplets grown near the fog top and were brought to near the surface by an overturning of the fog layer. Shear-induced wave motions and roll vortices were found to cause perturbations in the near-surface layer and appeared to influence fog formation and dissipation. The low observed droplet concentration in fogs limits their ability to modify aerosol number concentrations and size distributions, the persistent overlying stratus being a more likely site for effective interactions. It is suggested that variations in the fog formation described in this paper may be a useful indicator of circulation changes in the Arctic consequent upon global warming.

  3. Speech processing using conditional observable maximum likelihood continuity mapping

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogden, John; Nix, David

    A computer implemented method enables the recognition of speech and speech characteristics. Parameters are initialized of first probability density functions that map between the symbols in the vocabulary of one or more sequences of speech codes that represent speech sounds and a continuity map. Parameters are also initialized of second probability density functions that map between the elements in the vocabulary of one or more desired sequences of speech transcription symbols and the continuity map. The parameters of the probability density functions are then trained to maximize the probabilities of the desired sequences of speech-transcription symbols. A new sequence of speech codes is then input to the continuity map having the trained first and second probability function parameters. A smooth path is identified on the continuity map that has the maximum probability for the new sequence of speech codes. The probability of each speech transcription symbol for each input speech code can then be output.

  4. Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.

    PubMed

    Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús

    2008-10-01

    Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM), the set-cover approach maximum specificity set cover (MSSC), and the sum-product algorithm (SPA). After accepting an input set of proteins with UniProt ID/Accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and to be simple to use. An example of inference in a signaling network of the myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website: http://cytoprophet.cse.nd.edu.

  5. 40 CFR 35.6555 - Competition.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... current and include enough qualified sources to ensure maximum open and free competition. Recipients must... transactions in a manner providing maximum full and open competition. (a) Restrictions on competition... bonding requirements; (3) Noncompetitive pricing practices between firms or between affiliated companies...

  6. Development and application of an empirical probability distribution for the prediction error of re-entry body maximum dynamic pressure

    NASA Technical Reports Server (NTRS)

    Lanzi, R. James; Vincent, Brett T.

    1993-01-01

    The relationship between actual and predicted re-entry maximum dynamic pressure is characterized using a probability density function and a cumulative distribution function derived from sounding rocket flight data. This paper explores the properties of this distribution and demonstrates applications of these data, together with observed sounding rocket re-entry body damage characteristics, to assess probabilities of sustaining various levels of heating damage. The results from this paper effectively bridge the gap existing in sounding rocket re-entry analysis between the known damage level/flight environment relationships and the predicted flight environment.

  7. The Everett-Wheeler interpretation and the open future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudbery, Anthony

    2011-03-28

    I discuss the meaning of probability in the Everett-Wheeler interpretation of quantum mechanics, together with the problem of defining histories. To resolve these, I propose an understanding of probability arising from a form of temporal logic: the probability of a future-tense proposition is identified with its truth value in a many-valued and context-dependent logic. In short, probability is degree of truth. These ideas relate to traditional naive ideas of time and chance. Indeed, I argue that Everettian quantum mechanics is the only form of scientific theory that truly incorporates the perception that the future is open.

  8. RadVel: The Radial Velocity Modeling Toolkit

    NASA Astrophysics Data System (ADS)

    Fulton, Benjamin J.; Petigura, Erik A.; Blunt, Sarah; Sinukoff, Evan

    2018-04-01

    RadVel is an open-source Python package for modeling Keplerian orbits in radial velocity (RV) timeseries. RadVel provides a convenient framework to fit RVs using maximum a posteriori optimization and to compute robust confidence intervals by sampling the posterior probability density via Markov Chain Monte Carlo (MCMC). RadVel allows users to float or fix parameters, impose priors, and perform Bayesian model comparison. We have implemented real-time MCMC convergence tests to ensure adequate sampling of the posterior. RadVel can output a number of publication-quality plots and tables. Users may interface with RadVel through a convenient command-line interface or directly from Python. The code is object-oriented and thus naturally extensible. We encourage contributions from the community. Documentation is available at http://radvel.readthedocs.io.

  9. The influence of synaptic size on AMPA receptor activation: a Monte Carlo model.

    PubMed

    Montes, Jesus; Peña, Jose M; DeFelipe, Javier; Herreras, Oscar; Merchan-Perez, Angel

    2015-01-01

    Physiological and electron microscope studies have shown that synapses are functionally and morphologically heterogeneous and that variations in the size of synaptic junctions are related to characteristics such as release probability and the density of postsynaptic AMPA receptors. The present article focuses on how these morphological variations impact synaptic transmission. We based our study on Monte Carlo computational simulations of simplified model synapses whose morphological features have been extracted from hundreds of actual synaptic junctions reconstructed by three-dimensional electron microscopy. We have examined the effects that parameters such as synaptic size or the density of AMPA receptors have on the number of receptors that open after release of a single synaptic vesicle. Our results indicate that the maximum number of receptors that will open after the release of a single synaptic vesicle may show a ten-fold variation across the whole population of synapses. When individual synapses are considered, there is also stochastic variability, which is maximal in small synapses with low numbers of receptors. The number of postsynaptic receptors and the size of the synaptic junction are the most influential parameters, while the packing density of receptors or the concentration of extrasynaptic transporters have little or no influence on the opening of AMPA receptors.

  10. The Influence of Synaptic Size on AMPA Receptor Activation: A Monte Carlo Model

    PubMed Central

    Montes, Jesus; Peña, Jose M.; DeFelipe, Javier; Herreras, Oscar; Merchan-Perez, Angel

    2015-01-01

    Physiological and electron microscope studies have shown that synapses are functionally and morphologically heterogeneous and that variations in the size of synaptic junctions are related to characteristics such as release probability and the density of postsynaptic AMPA receptors. The present article focuses on how these morphological variations impact synaptic transmission. We based our study on Monte Carlo computational simulations of simplified model synapses whose morphological features have been extracted from hundreds of actual synaptic junctions reconstructed by three-dimensional electron microscopy. We have examined the effects that parameters such as synaptic size or the density of AMPA receptors have on the number of receptors that open after release of a single synaptic vesicle. Our results indicate that the maximum number of receptors that will open after the release of a single synaptic vesicle may show a ten-fold variation across the whole population of synapses. When individual synapses are considered, there is also stochastic variability, which is maximal in small synapses with low numbers of receptors. The number of postsynaptic receptors and the size of the synaptic junction are the most influential parameters, while the packing density of receptors or the concentration of extrasynaptic transporters have little or no influence on the opening of AMPA receptors. PMID:26107874

  11. Time‐dependent renewal‐model probabilities when date of last earthquake is unknown

    USGS Publications Warehouse

    Field, Edward H.; Jordan, Thomas H.

    2015-01-01

    We derive time-dependent, renewal-model earthquake probabilities for the case in which the date of the last event is completely unknown, and compare these with the time-independent Poisson probabilities that are customarily used as an approximation in this situation. For typical parameter values, the renewal-model probabilities exceed Poisson results by more than 10% when the forecast duration exceeds ~20% of the mean recurrence interval. We also derive probabilities for the case in which the last event is further constrained to have occurred before historical record keeping began (the historic open interval), which can only serve to increase earthquake probabilities for typically applied renewal models. We conclude that accounting for the historic open interval can improve long-term earthquake rupture forecasts for California and elsewhere.
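
    A simplified numerical illustration (my own, using the equilibrium forward-recurrence-time argument and a lognormal recurrence model with hypothetical parameters; the paper's derivation and renewal models differ in detail):

    ```python
    # Simplified illustration (not the paper's exact computation): when the date
    # of the last earthquake is completely unknown, treat the observation time as
    # random in an equilibrium renewal process. The time to the next event then
    # follows the forward-recurrence-time density (1 - F(w)) / mu, so
    #   P(event within T) = (1/mu) * integral_0^T (1 - F(w)) dw,
    # which we compare with the Poisson value 1 - exp(-T/mu).
    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    mu = 200.0                    # mean recurrence interval, years (hypothetical)
    cv = 0.5                      # coefficient of variation (aperiodicity)
    s = np.sqrt(np.log(1 + cv**2))            # lognormal shape parameter
    scale = mu / np.exp(s**2 / 2)             # lognormal scale, so mean = mu
    recur = stats.lognorm(s, scale=scale)

    for T in (10.0, 40.0, 100.0):
        renewal, _ = quad(lambda w: recur.sf(w) / mu, 0.0, T)
        poisson = 1.0 - np.exp(-T / mu)
        print(f"T = {T:>5.0f} yr: renewal ~ {renewal:.3f}, Poisson ~ {poisson:.3f}")
    # The renewal value exceeds the Poisson value, consistent with the abstract.
    ```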

  12. Reliable radiocarbon evidence for the maximum extent of the West Antarctic Ice Sheet in the easternmost Amundsen Sea Embayment during the Last Glacial Maximum

    NASA Astrophysics Data System (ADS)

    Hillenbrand, C. D.; Klages, J. P.; Kuhn, G.; Smith, J.; Graham, A. G. C.; Gohl, K.; Wacker, L.

    2016-02-01

    We present the first age control and sedimentological data for the upper part of a stratified seismic unit that is unusually thick (~6-9 m) for the outer shelf of the ASE and overlies an acoustically transparent unit. The transparent unit probably consists of soft till deposited during the last advance of grounded ice onto the outer shelf. We mapped subtle mega-scale glacial lineations (MSGL) on the seafloor and suggest that these are probably the expressions of bedforms originally moulded into the surface of the underlying till layer. We note that the lineations are less distinct when compared to MSGLs recorded in bathymetric data collected further upstream and suggest that this is because of the blanketing influence of the thick overlying drape. The uppermost part (≤3 m) of the stratified drape was sampled by two of our sediment cores and contains sufficient amounts of calcareous foraminifera throughout to establish reliable age models by radiocarbon dating. In combination with facies analysis of the recovered sediments, the obtained radiocarbon dates suggest deposition of the draping unit in a sub-ice shelf/sub-sea ice to seasonally open marine environment that existed on the outer shelf from well before (>45 ka BP) the Last Glacial Maximum until today. This indicates that the maximum extent of grounded ice at the LGM must have been situated south of the two core locations, where a well-defined grounding-zone wedge ('GZWa') was deposited. The third sediment core was recovered from the toe of this wedge and retrieved grounding-line proximal glaciogenic debris-flow sediments that were deposited by 14 cal. ka BP. Our new data therefore provide direct evidence for 1) the maximum extent of grounded ice in the easternmost ASE at the LGM (=GZWa), 2) the existence of a large shelf area seaward of the wedge that was not covered by grounded ice during that time, and 3) landward grounding-line retreat from GZWa prior to 14 cal. ka BP. This knowledge will help to improve LGM ice-sheet reconstructions and to quantify precisely the volume of LGM ice-sheet build-up in Antarctica. Our study also alludes to the possibility that refugia for Antarctic shelf benthos may have existed in the ASE during the last glacial period.

  13. Maximum likelihood estimation of label imperfections and its use in the identification of mislabeled patterns

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    The problem of estimating label imperfections and the use of the estimation in identifying mislabeled patterns is presented. Expressions for the maximum likelihood estimates of classification errors and a priori probabilities are derived from the classification of a set of labeled patterns. Expressions also are given for the asymptotic variances of probability of correct classification and proportions. Simple models are developed for imperfections in the labels and for classification errors and are used in the formulation of a maximum likelihood estimation scheme. Schemes are presented for the identification of mislabeled patterns in terms of threshold on the discriminant functions for both two-class and multiclass cases. Expressions are derived for the probability that the imperfect label identification scheme will result in a wrong decision and are used in computing thresholds. The results of practical applications of these techniques in the processing of remotely sensed multispectral data are presented.

  14. Scaling behavior for random walks with memory of the largest distance from the origin

    NASA Astrophysics Data System (ADS)

    Serva, Maurizio

    2013-11-01

    We study a one-dimensional random walk with memory. The behavior of the walker is modified with respect to the simple symmetric random walk only when he or she is at the maximum distance ever reached from his or her starting point (home). In this case, having the choice to move farther or to move closer, the walker decides with different probabilities. If the probability of a forward step is higher than the probability of a backward step, the walker is bold; otherwise he or she is timorous. We investigate the asymptotic properties of this bold-timorous random walk, showing that the scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold). The scaling exponents are fully determined with a new mathematical approach based on a decomposition of the dynamics into active journeys (the walker is at the maximum distance) and lazy journeys (the walker is not at the maximum distance).
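
    A quick simulation sketch (parameter names are mine, not the paper's) that reproduces the qualitative subdiffusive-to-superdiffusive transition:

    ```python
    # Sketch of the bold/timorous walk: at the running maximum distance from
    # home the walker steps outward with probability q, otherwise behaves as a
    # simple symmetric random walk. We estimate the growth exponent of the
    # maximum distance; bold walkers (q > 1/2) should look superdiffusive.
    import numpy as np

    rng = np.random.default_rng(1)

    def max_distance(q, steps):
        pos, record = 0, 0
        for _ in range(steps):
            if abs(pos) == record:              # at the maximum distance ever reached
                step = 1 if rng.random() < q else -1
                pos += step * (1 if pos >= 0 else -1)   # outward with probability q
            else:
                pos += rng.choice((-1, 1))
            record = max(record, abs(pos))
        return record

    for q in (0.3, 0.5, 0.7):                   # timorous, neutral, bold
        r1 = np.mean([max_distance(q, 2000) for _ in range(100)])
        r2 = np.mean([max_distance(q, 8000) for _ in range(100)])
        exponent = np.log(r2 / r1) / np.log(4.0)  # slope between the two sizes
        print(f"q = {q}: estimated scaling exponent ~ {exponent:.2f}")
    ```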

  15. Flood Frequency Curves - Use of information on the likelihood of extreme floods

    NASA Astrophysics Data System (ADS)

    Faber, B.

    2011-12-01

    Investment in the infrastructure that reduces flood risk for flood-prone communities must incorporate information on the magnitude and frequency of flooding in that area. Traditionally, that information has been a probability distribution of annual maximum streamflows developed from the historical gaged record at a stream site. Practice in the United States fits a log-Pearson Type III distribution to the annual maximum flows of an unimpaired streamflow record, using the method of moments to estimate distribution parameters. The procedure assumes that annual peak streamflow events are (1) independent, (2) identically distributed, and (3) form a representative sample of the overall probability distribution. Each of these assumptions can be challenged. We rarely have enough data to form a representative sample, and therefore must compute and display the uncertainty in the estimated flood distribution. But is there a wet/dry cycle that makes precipitation less than independent between successive years? Are the peak flows caused by different types of events from different statistical populations? How do changes in the watershed or climate over time (non-stationarity) affect the probability distribution of floods? Potential approaches to avoid these assumptions vary from estimating trend and shift and removing them from early data (and so forming a homogeneous data set), to methods that estimate statistical parameters that vary with time. A further issue in estimating a probability distribution of flood magnitude (the flood frequency curve) is whether a purely statistical approach can accurately capture the range and frequency of floods that are of interest. A meteorologically-based analysis produces "probable maximum precipitation" (PMP) and subsequently a "probable maximum flood" (PMF) that attempts to describe an upper bound on flood magnitude in a particular watershed. This analysis can help constrain the upper tail of the probability distribution, well beyond the range of gaged data or even historical or paleo-flood data, which can be very important in risk analyses performed for flood risk management and dam and levee safety studies.
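
    A minimal sketch of the log-Pearson Type III fitting practice described above (a textbook method-of-moments implementation on hypothetical peak flows, not an official Bulletin 17B tool):

    ```python
    # Minimal sketch of log-Pearson Type III flood-frequency fitting: method of
    # moments on log10 annual peak flows, then quantiles for return periods.
    # The peak flows below are hypothetical.
    import numpy as np
    from scipy import stats

    peaks = np.array([1200., 3400., 980., 2100., 5600., 1750., 2900.,
                      4100., 1500., 2600., 7300., 1900., 3100., 2300.])  # cfs

    logq = np.log10(peaks)
    skew = stats.skew(logq, bias=False)          # station skew of log flows
    dist = stats.pearson3(skew, loc=logq.mean(), scale=logq.std(ddof=1))

    for T in (10, 50, 100):
        q_T = 10 ** dist.ppf(1.0 - 1.0 / T)      # T-year flood quantile
        print(f"{T:>3}-yr flood ~ {q_T:,.0f} cfs")
    ```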

  16. [Hygienic estimation of combined influence of noise and infrasound on the organism of military men].

    PubMed

    Akhmetzianov, I M; Zinkin, V N; Petreev, I V; Dragan, S P

    2011-11-01

    Combined exposure to noise and infrasound is accompanied by a substantial increase in the risk of developing sensorineural hearing loss and hypertensive illness. When the spectral maximum of the combined exposure lies in the audible range, the probability of developing sensorineural hearing loss prevails, and the probability of ear pathology exceeds the values established by ISO 1999:1990. When the spectral maximum lies in the infrasonic range, the probability of developing hypertensive illness prevails.

  17. Influence of maneuverability on helicopter combat effectiveness

    NASA Technical Reports Server (NTRS)

    Falco, M.; Smith, R.

    1982-01-01

    A computational procedure employing a stochastic learning method in conjunction with dynamic simulation of helicopter flight and weapon system operation was used to derive helicopter maneuvering strategies. The derived strategies maximize either survival or kill probability and are in the form of a feedback control based upon threat visual or warning system cues. Maneuverability parameters implicit in the strategy development include maximum longitudinal acceleration and deceleration, maximum sustained and transient load factor turn rate at forward speed, and maximum pedal turn rate and lateral acceleration at hover. Results are presented in terms of probability of kill for all combat initial conditions for two threat categories.

  18. Maximum parsimony, substitution model, and probability phylogenetic trees.

    PubMed

    Weng, J F; Thomas, D A; Mareels, I

    2011-01-01

    The problem of inferring phylogenies (phylogenetic trees) is one of the main problems in computational biology. There are three main methods for inferring phylogenies: Maximum Parsimony (MP), Distance Matrix (DM) and Maximum Likelihood (ML), of which the MP method is the most well-studied and popular. In the MP method the optimization criterion is the number of nucleotide substitutions, computed from the differences among the investigated nucleotide sequences. However, the MP method is often criticized because it counts only the substitutions observable at the current time; all the unobservable substitutions that actually occurred in the evolutionary history are omitted. To take the unobservable substitutions into account, substitution models have been established and are now widely used in the DM and ML methods, but these substitution models cannot be used within the classical MP method. Recently the authors proposed a probability representation model for phylogenetic trees; the trees reconstructed in this model are called probability phylogenetic trees. One advantage of the probability representation model is that it can include a substitution model to infer phylogenetic trees based on the MP principle. In this paper we explain how to use a substitution model in the reconstruction of probability phylogenetic trees and show the advantage of this approach with examples.
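
    For concreteness, a minimal sketch of the parsimony criterion itself, Fitch's small-parsimony algorithm on a toy tree (my own example; the paper's probability trees build on this principle):

    ```python
    # Minimal sketch of the parsimony criterion: Fitch's algorithm counts the
    # minimum number of substitutions a site requires on a given tree topology.
    # Tree and sequences are toy examples.
    def fitch(node):
        """Return (state set, substitution count) for a node of a binary tree.
        Leaves are one-letter state strings; internal nodes are (left, right)."""
        if isinstance(node, str):
            return {node}, 0
        left_set, left_cost = fitch(node[0])
        right_set, right_cost = fitch(node[1])
        common = left_set & right_set
        if common:                       # intersection: no new substitution
            return common, left_cost + right_cost
        return left_set | right_set, left_cost + right_cost + 1

    # One aligned site for four taxa on the topology ((A,C),(C,T)):
    tree = (("A", "C"), ("C", "T"))
    states, cost = fitch(tree)
    print(f"minimum substitutions at this site: {cost}")   # -> 2

    # The parsimony score of full sequences is the sum over sites; classical MP
    # counts only these observable changes, which is the limitation the
    # probability-tree approach with substitution models tries to address.
    ```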

  19. Speech-language therapy program for mouth opening in patients with oral and oropharyngeal cancer undergoing adjuvant radiotherapy: a pilot study.

    PubMed

    Marrafon, Caroline Somera; Matos, Leandro Luongo; Simões-Zenari, Marcia; Cernea, Claudio Roberto; Nemr, Katia

    2018-01-01

    Purpose Assess the effectiveness of an orofacial myofunctional therapeutic program in patients with oral or oropharyngeal cancer undergoing adjuvant radiotherapy through pre- and post-program comparison of maximum mandibular opening. Methods Prospective study involving five adult patients and five elderly patients postoperatively to oral cavity/oropharynx surgery who were awaiting the beginning of radiotherapy or had undergone fewer than five treatment sessions. The study participants had their maximum jaw opening measured using a sliding caliper at the beginning and end of the program. Two mobility exercises and three mandibular traction exercises were selected and monitored weekly in person for 10 weeks. Descriptive data and pre- and post-therapy comparative measures were statistically analyzed using the Wilcoxon test. Results Ten patients (two women and eight men) with mean age of 58.4 years, median of 57.0 years, completed the therapeutic program. They presented mean maximum mandibular opening of 31.6 ± 11.7 and 36.4 ± 8.0 mm pre- and post-therapy, respectively (p = 0.021). Conclusion The proposed orofacial myofunctional therapeutic program increased the maximum jaw opening of patients referred to adjuvant radiotherapy for oral cavity or oropharynx cancer treatment.

  20. 14 CFR 27.801 - Ditching.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... external doors and windows are accounted for in the investigation of the probable behavior of the... and windows must be designed to withstand the probable maximum local pressures. [Amdt. 27-11, 41 FR...

  1. 14 CFR 29.801 - Ditching.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... external doors and windows are accounted for in the investigation of the probable behavior of the... and windows must be designed to withstand the probable maximum local pressures. [Amdt. 29-12, 41 FR...

  2. Measurement of the Errors of Service Altimeter Installations During Landing-Approach and Take-Off Operations

    NASA Technical Reports Server (NTRS)

    Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.

    1960-01-01

    The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing-approach condition had a probable value (50 percent probability) of ±36 feet and a maximum probable value (99.7 percent probability) of ±159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of ±47 feet and a maximum probable value of ±207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were ±43 and ±189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were ±22 and ±96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after-effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herberger, Sarah M.; Boring, Ronald L.

    Abstract Objectives: This paper discusses the differences between classical human reliability analysis (HRA) dependence and the full spectrum of probabilistic dependence. Positive influence means that an error increases the likelihood of subsequent errors, or that a success increases the likelihood of subsequent success. Currently, the typical method for dependence in HRA implements the Technique for Human Error Rate Prediction (THERP) positive dependence equations. This assumes that the dependence between two human failure events varies at discrete levels between zero and complete dependence (as defined by THERP). Dependence in THERP does not consistently span dependence values between 0 and 1. In contrast, probabilistic dependence employs Bayes' Law and addresses a continuous range of dependence. Methods: Using the laws of probability, complete dependence and maximum positive dependence do not always agree. Maximum dependence is when two events overlap to the fullest amount; maximum negative dependence is the smallest amount by which two events can overlap. When the minimum probability of two events overlapping is less than independence would imply, negative dependence occurs. For example, negative dependence is when an operator fails to actuate Pump A, thereby increasing his or her chance of actuating Pump B: the initial error actually increases the chance of subsequent success. Results: Comparing THERP and probability theory yields different results in certain scenarios, with the latter addressing negative dependence. Given that most human failure events are rare, the minimum overlap is typically 0, and when the probability of the second event is smaller than that of the first, the maximum dependence is less than 1, as defined by Bayes' Law. As such, alternative dependence equations are provided, along with a look-up table defining the maximum and maximum negative dependence given the probabilities of two events. Conclusions: THERP dependence has been used ubiquitously for decades and has provided approximations of the dependencies between two events. Since its inception, computational abilities have increased exponentially, and alternative approaches that follow the laws of probability dependence need to be implemented. These new approaches need to consider negative dependence and identify when THERP output is not appropriate.
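
    An illustrative sketch of the discrepancy described here, comparing the published THERP dependence equations with the bounds that probability theory (the Fréchet bounds) actually allows (event probabilities are hypothetical):

    ```python
    # Sketch comparing THERP's discrete dependence levels with the bounds imposed
    # by probability theory. THERP's conditional-probability equations are the
    # standard published ones; the event probabilities are illustrative.
    def therp_conditional(p, level):
        """THERP conditional probability of a second failure given a first."""
        eqs = {
            "ZD": p,                    # zero dependence
            "LD": (1 + 19 * p) / 20,    # low
            "MD": (1 + 6 * p) / 7,      # moderate
            "HD": (1 + p) / 2,          # high
            "CD": 1.0,                  # complete
        }
        return eqs[level]

    def frechet_conditional_bounds(p_a, p_b):
        """Range of P(B|A) allowed by probability theory (Frechet bounds)."""
        joint_min = max(0.0, p_a + p_b - 1.0)
        joint_max = min(p_a, p_b)
        return joint_min / p_a, joint_max / p_a

    p_a, p_b = 0.01, 0.001   # hypothetical human error probabilities
    for level in ("ZD", "LD", "MD", "HD", "CD"):
        print(f"{level}: P(B|A) = {therp_conditional(p_b, level):.4f}")
    lo, hi = frechet_conditional_bounds(p_a, p_b)
    print(f"probability theory allows P(B|A) in [{lo:.4f}, {hi:.4f}]")
    # Note CD gives P(B|A) = 1, but the Frechet maximum here is only
    # min(P(A), P(B)) / P(A) = 0.1, the discrepancy discussed in the abstract.
    ```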

  4. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  5. Spider texture and amphibole preferred orientations

    NASA Astrophysics Data System (ADS)

    Shelley, David

    1994-05-01

    Foliation in blueschist facies chert from California is defined by layers of oriented alkali-amphibole which consistently curve towards and converge on pyrite (and possibly pyrrhotite) crystals. These foliation nodes, not previously described, are called here 'spider texture'. The texture is interpreted in terms of perturbations of the stress field in a matrix undergoing strain about rigid pyrite (or pyrrhotite) crystals, and it has important implications for understanding the mechanisms of amphibole preferred orientation development. Geometrical relationships between spider texture, pressure shadows and quartz preferred orientations suggest that amphiboles grew with a strong preferred orientation along planes of maximum shearing stress. The mechanism of foliation and preferred orientation development probably involved competitive anisotropic growth of amphibole prisms within the small gaps that open at steps on shear planes, followed by additional (micro-) porphyroblastic growth. The first stage of the mechanism is similar to slickenfibre development.

  6. Definition and solution of a stochastic inverse problem for the Manning’s n parameter field in hydrodynamic models

    DOE PAGES

    Butler, Troy; Graham, L.; Estep, D.; ...

    2015-02-03

    The uncertainty in spatially heterogeneous Manning’s n fields is quantified using a novel formulation and numerical solution of stochastic inverse problems for physics-based models. The uncertainty is quantified in terms of a probability measure, and the physics-based model considered here is the state-of-the-art ADCIRC model, although the presented methodology applies to other hydrodynamic models. An accessible overview of the formulation and solution of the stochastic inverse problem in a mathematically rigorous framework based on measure theory is presented in this paper. Technical details that arise in practice by applying the framework to determine the Manning’s n parameter field in a shallow water equation model used for coastal hydrodynamics are presented, and an efficient computational algorithm and open source software package are developed. A new notion of “condition” for the stochastic inverse problem is defined and analyzed as it relates to the computation of probabilities. Finally, this notion of condition is investigated to determine effective output quantities of interest of maximum water elevations to use for the inverse problem for the Manning’s n parameter, and the effect on model predictions is analyzed.

  7. Unambiguous discrimination between linearly dependent equidistant states with multiple copies

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-Hai; Ren, Gang

    2018-07-01

    Linearly independent quantum states can be unambiguously discriminated, but linearly dependent ones cannot. For linearly dependent quantum states, however, if C copies of the single states are available, then they may form linearly independent states, and can be unambiguously discriminated. We consider unambiguous discrimination among N = D + 1 linearly dependent states given that C copies are available and that the single copies span a D-dimensional space with equal inner products. The maximum unambiguous discrimination probability is derived for all C with equal a priori probabilities. For this classification of the linearly dependent equidistant states, our result shows that if C is even then adding a further copy fails to increase the maximum discrimination probability.

  8. Nonparametric probability density estimation by optimization theoretic techniques

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1976-01-01

    Two nonparametric probability density estimators are considered. The first is the kernel estimator. The problem of choosing the kernel scaling factor based solely on a random sample is addressed. An interactive mode is discussed and an algorithm proposed to choose the scaling factor automatically. The second nonparametric probability estimate uses penalty function techniques with the maximum likelihood criterion. A discrete maximum penalized likelihood estimator is proposed and is shown to be consistent in the mean square error. A numerical implementation technique for the discrete solution is discussed and examples displayed. An extensive simulation study compares the integrated mean square error of the discrete and kernel estimators. The robustness of the discrete estimator is demonstrated graphically.
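
    A minimal sketch of the kernel estimator with an automatically chosen scaling factor (using scipy's Silverman rule, one common automatic choice; the paper's interactive and penalized-likelihood methods differ):

    ```python
    # Minimal sketch of a kernel density estimator with an automatic bandwidth
    # (Silverman's rule via scipy); the test data are synthetic and bimodal.
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    sample = np.concatenate([rng.normal(-2.0, 0.6, 300),
                             rng.normal(1.5, 1.0, 700)])

    kde = gaussian_kde(sample, bw_method="silverman")
    grid = np.linspace(-5.0, 5.0, 9)
    for x, d in zip(grid, kde(grid)):
        print(f"f({x:+.2f}) ~ {d:.3f}")
    ```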

  9. Pneumatic Control Device for the Pershing 2 Adaption Kit

    DTIC Science & Technology

    1979-03-14

    [Garbled OCR fragment; legible text only] ... stem forward force to maintain a pressure seal (this, versus a 16-to-25-pound maximum reverse force component due to pressure). In all probability, initial ... Raymond Engineering Inc., Contract 2960635: Gas Generator Compatibility Test Report. Requirements: the requirements for the Pershing II, Phase I ...

  10. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  11. Blocking and isolation of a calcium channel from neurons in mammals and cephalopods utilizing a toxin fraction (FTX) from funnel-web spider poison.

    PubMed Central

    Llinás, R; Sugimori, M; Lin, J W; Cherksey, B

    1989-01-01

    A Ca2+-channel blocker derived from funnel-web spider toxin (FTX) has made it possible to define and study the ionic channels responsible for the Ca2+ conductance in mammalian Purkinje cell neurons and the preterminal in squid giant synapse. In cerebellar slices, FTX blocked Ca2+-dependent spikes in Purkinje cells, reduced the spike afterpotential hyperpolarization, and increased the Na+-dependent plateau potential. In the squid giant synapse, FTX blocked synaptic transmission without affecting the presynaptic action potential. Presynaptic voltage-clamp results show blockage of the inward Ca2+ current and of transmitter release. FTX was used to isolate channels from cerebellum and squid optic lobe. The isolated product was incorporated into black lipid membranes and was analyzed by using patch-clamp techniques. The channel from cerebellum exhibited a 10- to 12-pS conductance in 80 mM Ba2+ and 5-8 pS in 100 mM Ca2+ with voltage-dependent open probabilities and kinetics. High Ba2+ concentrations at the cytoplasmic side of the channel increased the average open time from 1 to 3 msec to more than 1 sec. A similar channel was also isolated from squid optic lobe. However, its conductance was higher in Ba2+, and the maximum opening probability was about half of that derived from cerebellar tissue and also was sensitive to high cytoplasmic Ba2+. Both channels were blocked by FTX, Cd2+, and Co2+ but were not blocked by omega-conotoxin or dihydropyridines. These results suggest that one of the main Ca2+ conductances in mammalian neurons and in the squid preterminal represents the activation of a previously undefined class of Ca2+ channel. We propose that it be termed the "P" channel, as it was first described in Purkinje cells. PMID:2537980

  12. Blocking and isolation of a calcium channel from neurons in mammals and cephalopods utilizing a toxin fraction (FTX) from funnel-web spider poison.

    PubMed

    Llinás, R; Sugimori, M; Lin, J W; Cherksey, B

    1989-03-01

    A Ca2+-channel blocker derived from funnel-web spider toxin (FTX) has made it possible to define and study the ionic channels responsible for the Ca2+ conductance in mammalian Purkinje cell neurons and the preterminal in squid giant synapse. In cerebellar slices, FTX blocked Ca2+-dependent spikes in Purkinje cells, reduced the spike afterpotential hyperpolarization, and increased the Na+-dependent plateau potential. In the squid giant synapse, FTX blocked synaptic transmission without affecting the presynaptic action potential. Presynaptic voltage-clamp results show blockage of the inward Ca2+ current and of transmitter release. FTX was used to isolate channels from cerebellum and squid optic lobe. The isolated product was incorporated into black lipid membranes and was analyzed by using patch-clamp techniques. The channel from cerebellum exhibited a 10- to 12-pS conductance in 80 mM Ba2+ and 5-8 pS in 100 mM Ca2+ with voltage-dependent open probabilities and kinetics. High Ba2+ concentrations at the cytoplasmic side of the channel increased the average open time from 1 to 3 msec to more than 1 sec. A similar channel was also isolated from squid optic lobe. However, its conductance was higher in Ba2+, and the maximum opening probability was about half of that derived from cerebellar tissue and also was sensitive to high cytoplasmic Ba2+. Both channels were blocked by FTX, Cd2+, and Co2+ but were not blocked by omega-conotoxin or dihydropyridines. These results suggest that one of the main Ca2+ conductances in mammalian neurons and in the squid preterminal represents the activation of a previously undefined class of Ca2+ channel. We propose that it be termed the "P" channel, as it was first described in Purkinje cells.

  13. 50 CFR 648.100 - Catch quotas and other restrictions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... at least a 50-percent probability of success, a fishing mortality rate (F) that produces the maximum... probability of success, that the F specified in paragraph (a) of this section will not be exceeded: (1... necessary to ensure, with at least a 50-percent probability of success, that the applicable specified F will...

  14. Classification of change detection and change blindness from near-infrared spectroscopy signals

    NASA Astrophysics Data System (ADS)

    Tanaka, Hirokazu; Katura, Takusige

    2011-08-01

    Using a machine-learning classification algorithm applied to near-infrared spectroscopy (NIRS) signals, we classify a success (change detection) or a failure (change blindness) in detecting visual changes for a change-detection task. Five subjects perform a change-detection task, and their brain activities are continuously monitored. A support-vector-machine algorithm is applied to classify the change-detection and change-blindness trials, and a correct classification probability of 70-90% is obtained for four subjects. Two types of temporal shapes in classification probabilities are found: one exhibiting a maximum value after the task is completed (postdictive type), and another exhibiting a maximum value during the task (predictive type). As for the postdictive type, the classification probability begins to increase immediately after task completion and reaches its maximum on about the time scale of the neuronal hemodynamic response, reflecting a subjective report of change detection. As for the predictive type, the classification probability shows an increase at task initiation and is maximal while subjects are performing the task, predicting the task performance in detecting a change. We conclude that decoding change detection and change blindness from NIRS signals is possible, and we discuss future applications to brain-machine interfaces.
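    The classification step maps naturally onto standard tooling. A minimal sketch with scikit-learn, assuming a hypothetical trial-by-channel feature matrix rather than the authors' actual NIRS features:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per trial, one column per NIRS channel.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 24))        # 120 trials, 24 channels
y = rng.integers(0, 2, size=120)      # 1 = change detected, 0 = change blindness

clf = SVC(kernel="linear", C=1.0)
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
print(f"classification probability: {scores.mean():.2f} +/- {scores.std():.2f}")
```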

  15. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds, and provide an interpretation of the shape of the fractile-constrained maximum entropy distribution (FMED). We also discuss a drawback of the FMED: it is discontinuous and flat over each fractile interval. Finally, we present a heuristic approximation for the case where, in addition to its fractiles, the distribution is known to be continuous, and work through full examples to illustrate the approach.
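    To make the FMED's flat, discontinuous shape concrete: with only fractile constraints, maximizing entropy spreads probability uniformly within each fractile interval, so the density is piecewise constant. A small sketch under assumed fractile values (not the paper's examples):

```python
import numpy as np

# Hypothetical fractile assessments: value x_i such that P(X <= x_i) = p_i.
x = np.array([0.0, 2.0, 3.0, 5.0, 10.0])    # support points (assumed bounds at ends)
p = np.array([0.0, 0.25, 0.50, 0.75, 1.0])  # cumulative probabilities

# The maximum entropy density subject to these fractile constraints is
# piecewise uniform: constant over each interval, hence flat and discontinuous.
density = np.diff(p) / np.diff(x)
for lo, hi, d in zip(x[:-1], x[1:], density):
    print(f"[{lo:5.1f}, {hi:5.1f}): f(x) = {d:.4f}")
```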

  16. Probabilistic assessment of precipitation-triggered landslides using historical records of landslide occurrence, Seattle, Washington

    USGS Publications Warehouse

    Coe, J.A.; Michael, J.A.; Crovelli, R.A.; Savage, W.Z.; Laprade, W.T.; Nashem, W.D.

    2004-01-01

    Ninety years of historical landslide records were used as input to the Poisson and binomial probability models. Results from these models show that, for precipitation-triggered landslides, approximately 9 percent of the area of Seattle has annual exceedance probabilities of 1 percent or greater. Application of the Poisson model for estimating the future occurrence of individual landslides results in a worst-case scenario map, with a maximum annual exceedance probability of 25 percent on a hillslope near Duwamish Head in West Seattle. Application of the binomial model for estimating the future occurrence of a year with one or more landslides results in a map with a maximum annual exceedance probability of 17 percent (also near Duwamish Head). Slope and geology both play a role in localizing the occurrence of landslides in Seattle. A positive correlation exists between slope and mean exceedance probability, with probability tending to increase as slope increases. Sixty-four percent of all historical landslide locations are within 150 m (500 ft, horizontal distance) of the Esperance Sand/Lawton Clay contact, but within this zone, no positive or negative correlation exists between exceedance probability and distance to the contact.
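    For a single map cell, the two models reduce to simple closed forms. A sketch with hypothetical counts, not the study's data:

```python
import numpy as np

# Hypothetical record for one hillslope cell: n landslides in T years of record.
n, T = 22, 90

# Poisson model: annual exceedance probability of one or more landslides,
# with the rate estimated as lambda = n / T.
lam = n / T
p_poisson = 1.0 - np.exp(-lam)

# Binomial model: fraction of record years with at least one landslide
# (an assumed count of 15 such years here).
years_with_slides = 15
p_binomial = years_with_slides / T

print(f"Poisson annual exceedance probability:  {p_poisson:.2%}")
print(f"Binomial annual exceedance probability: {p_binomial:.2%}")
```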

  17. Development of a methodology for probable maximum precipitation estimation over the American River watershed using the WRF model

    NASA Astrophysics Data System (ADS)

    Tan, Elcin

    A new physically-based methodology for probable maximum precipitation (PMP) estimation is developed over the American River Watershed (ARW) using the Weather Research and Forecast (WRF-ARW) model. A persistent moisture flux convergence pattern, called Pineapple Express, is analyzed for 42 historical extreme precipitation events, and it is found that Pineapple Express causes extreme precipitation over the basin of interest. An average correlation between moisture flux convergence and maximum precipitation is estimated as 0.71 for the 42 events. The performance of the WRF model is verified for precipitation by means of calibration and independent validation of the model. The calibration procedure is performed only for the highest-ranked flood event, the 1997 case, whereas the WRF model is validated for 42 historical cases. Three nested model domains are set up with horizontal resolutions of 27 km, 9 km, and 3 km over the basin of interest. As a result of Chi-square goodness-of-fit tests, the hypothesis that "the WRF model can be used in the determination of PMP over the ARW for both areal average and point estimates" is accepted at the 5% level of significance. The sensitivity of precipitation to model physics options is determined using 28 combinations of microphysics, atmospheric boundary layer, and cumulus parameterization schemes. It is concluded that the best triplet option is Thompson microphysics, Grell 3D ensemble cumulus, and YSU boundary layer (TGY), based on the 42 historical cases, and this TGY triplet is used for all analyses of this research. Four techniques are proposed to evaluate physically possible maximum precipitation using the WRF: 1. Perturbations of atmospheric conditions; 2. Shift in atmospheric conditions; 3. Replacement of atmospheric conditions among historical events; and 4. Thermodynamically possible worst-case scenario creation. Moreover, the effect of climate change on precipitation is discussed, with emphasis on temperature increase, in order to determine the physically possible upper limits of precipitation due to climate change. The simulation results indicate that the meridional shift in atmospheric conditions is the optimum method to determine maximum precipitation in consideration of cost and efficiency. Finally, exceedance probability analyses of the model results of the 42 historical extreme precipitation events demonstrate that the 72-hr basin-averaged probable maximum precipitation is 21.72 inches for an exceedance probability of 0.5 percent. On the other hand, the current operational PMP estimation for the American River Watershed is 28.57 inches, as published in Hydrometeorological Report No. 59, and a previous PMP value was 31.48 inches, as published in Hydrometeorological Report No. 36. According to the exceedance probability analyses of this proposed method, the exceedance probabilities of these two estimates correspond to 0.036 percent and 0.011 percent, respectively.

  18. Median mandibular flexure at different mouth opening and its relation to different facial types: A prospective clinical study

    PubMed Central

    Prasad, Mandava; Hussain, Mohammed Z.; Shetty, Sharath K.; Kumar, T. Ashok; Khaur, Mohit; George, Suja A.; Dalwai, Sameen

    2013-01-01

    Objective: To measure arch width and median mandibular flexure (MMF) values at relative rest and at maximum jaw opening in young adults with dolichofacial, mesofacial, and brachyfacial types, and to test whether variation in facial pattern is related to MMF values in a South Indian population. Materials and Methods: This prospective clinical study consisted of a sample of 60 young adults. The subjects were divided into three groups: Group 1, brachyfacial; Group 2, mesofacial; and Group 3, dolichofacial. Impressions were taken for all 60 subjects, and the casts were scanned and digitized. The intermolar width was measured for dolichofacial, mesofacial, and brachyfacial subjects at relative rest (R) and at maximum opening (O). Results: The statistical analysis of the observations included descriptive and inferential statistics and was executed by means of GraphPad Prism software (USA), Version 4. A Kruskal-Wallis test (ANOVA) followed by Dunn's post hoc test was performed, and a Mann-Whitney U-test was performed to assess the difference in MMF values between males and females of the three groups. The mean (SD) mandibular flexure was 1.12 (0.09) in the brachyfacial type, 0.69 (0.21) in the mesofacial type, and 0.39 (0.08) in the dolichofacial type. Conclusions: The mean intermolar width was maximum in the brachyfacial type and minimum in the dolichofacial type. MMF was greatest at the maximum mouth opening position and was greatest in individuals with the brachyfacial type.

  19. 30 CFR 75.524 - Electric face equipment; electric equipment used in return air outby the last open crosscut...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... current between frames of equipment. 75.524 Section 75.524 Mineral Resources MINE SAFETY AND HEALTH... the last open crosscut; maximum level of alternating or direct electric current between frames of equipment. The maximum level of alternating or direct electric current that exists between the frames of any...

  20. 30 CFR 75.524 - Electric face equipment; electric equipment used in return air outby the last open crosscut...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... the last open crosscut; maximum level of alternating or direct electric current between frames of equipment. The maximum level of alternating or direct electric current that exists between the frames of any... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Electric face equipment; electric equipment...

  1. 30 CFR 75.524 - Electric face equipment; electric equipment used in return air outby the last open crosscut...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... the last open crosscut; maximum level of alternating or direct electric current between frames of equipment. The maximum level of alternating or direct electric current that exists between the frames of any... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Electric face equipment; electric equipment...

  2. 30 CFR 75.524 - Electric face equipment; electric equipment used in return air outby the last open crosscut...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... the last open crosscut; maximum level of alternating or direct electric current between frames of equipment. The maximum level of alternating or direct electric current that exists between the frames of any... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Electric face equipment; electric equipment...

  3. 30 CFR 75.524 - Electric face equipment; electric equipment used in return air outby the last open crosscut...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... the last open crosscut; maximum level of alternating or direct electric current between frames of equipment. The maximum level of alternating or direct electric current that exists between the frames of any... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Electric face equipment; electric equipment...

  4. Emergence and stability of intermediate open vesicles in disk-to-vesicle transitions.

    PubMed

    Li, Jianfeng; Zhang, Hongdong; Qiu, Feng; Shi, An-Chang

    2013-07-01

    The transition between two basic structures, a disk and an enclosed vesicle, of a finite membrane is studied by examining the minimum energy path (MEP) connecting these two states. The MEP is constructed using the string method applied to continuum elastic membrane models. The results reveal that, besides the commonly observed disk and vesicle, open vesicles (bowl-shaped vesicles or vesicles with a pore) can become stable or metastable shapes. The emergence, stability, and probability distribution of these open vesicles are analyzed. It is demonstrated that open vesicles can be stabilized by higher-order elastic energies. The estimated probability distribution of the different structures is in good agreement with available experiments.

  5. Calculation of transmission probability by solving an eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Bubin, Sergiy; Varga, Kálmán

    2010-11-01

    The electron transmission probability in nanodevices is calculated by solving an eigenvalue problem. The eigenvalues are the transmission probabilities and the number of nonzero eigenvalues is equal to the number of open quantum transmission eigenchannels. The number of open eigenchannels is typically a few dozen at most, thus the computational cost amounts to the calculation of a few outer eigenvalues of a complex Hermitian matrix (the transmission matrix). The method is implemented on a real space grid basis providing an alternative to localized atomic orbital based quantum transport calculations. Numerical examples are presented to illustrate the efficiency of the method.
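    A sketch of the computational core: the transmission probabilities are the outer eigenvalues of a complex Hermitian matrix, so a sparse eigensolver that returns only a few eigenvalues suffices. The matrix below is a random Hermitian stand-in, not a physical transmission matrix:

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Hypothetical complex Hermitian "transmission matrix" T; its eigenvalues
# play the role of the transmission probabilities of the eigenchannels.
rng = np.random.default_rng(2)
A = rng.normal(size=(400, 400)) + 1j * rng.normal(size=(400, 400))
T = A @ A.conj().T             # Hermitian positive semidefinite by construction
T /= np.linalg.norm(T, 2)      # scale so eigenvalues lie in [0, 1]

# Only a few channels are open, so only a few outer eigenvalues are needed.
probs = eigsh(T, k=6, which="LM", return_eigenvectors=False)
print(np.sort(probs)[::-1])    # largest transmission probabilities first
```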

  6. Analysis to determine the maximum dimensions of flexible apertures in sensored security netting products.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.

    2013-08-01

    Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed, but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that is conformable to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that are conformable to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.

  7. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  8. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  9. Dynamic analysis of pedestrian crossing behaviors on traffic flow at unsignalized mid-block crosswalks

    NASA Astrophysics Data System (ADS)

    Liu, Gang; He, Jing; Luo, Zhiyong; Yang, Wunian; Zhang, Xiping

    2015-05-01

    It is important to study the effects of pedestrian crossing behaviors on traffic flow for solving the urban traffic jam problem. Based on the Nagel-Schreckenberg (NaSch) traffic cellular automata (TCA) model, a new one-dimensional TCA model is proposed considering the uncertainty conflict behaviors between pedestrians and vehicles at unsignalized mid-block crosswalks and defining the parallel updating rules of motion states of pedestrians and vehicles. The traffic flow is simulated for different vehicle densities and behavior trigger probabilities. The fundamental diagrams show that no matter what the values of vehicle braking probability, pedestrian acceleration crossing probability, pedestrian backing probability and pedestrian generation probability, the system flow shows the "increasing-saturating-decreasing" trend with the increase of vehicle density; when the vehicle braking probability is lower, it is easy to cause an emergency brake of vehicle and result in great fluctuation of saturated flow; the saturated flow decreases slightly with the increase of the pedestrian acceleration crossing probability; when the pedestrian backing probability lies between 0.4 and 0.6, the saturated flow is unstable, which shows the hesitant behavior of pedestrians when making the decision of backing; the maximum flow is sensitive to the pedestrian generation probability and rapidly decreases with increasing the pedestrian generation probability, the maximum flow is approximately equal to zero when the probability is more than 0.5. The simulations prove that the influence of frequent crossing behavior upon vehicle flow is immense; the vehicle flow decreases and gets into serious congestion state rapidly with the increase of the pedestrian generation probability.
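    The NaSch base model that the authors extend follows four parallel update rules: accelerate, slow for the gap ahead, brake at random, and move. A minimal sketch of those rules on a single-lane ring (no pedestrian-vehicle conflict extension; parameters assumed):

```python
import numpy as np

def nasch_step(pos, vel, L, vmax, p_brake, rng):
    """One parallel update of the Nagel-Schreckenberg model on a ring of L cells."""
    order = np.argsort(pos)
    pos, vel = pos[order], vel[order]
    gaps = (np.roll(pos, -1) - pos - 1) % L    # empty cells ahead of each car
    vel = np.minimum(vel + 1, vmax)            # 1. acceleration
    vel = np.minimum(vel, gaps)                # 2. slowing for the car ahead
    brake = rng.random(len(vel)) < p_brake
    vel = np.maximum(vel - brake, 0)           # 3. random braking
    pos = (pos + vel) % L                      # 4. movement
    return pos, vel

rng = np.random.default_rng(3)
L, N = 200, 40                                 # 200 cells, 40 vehicles
pos = rng.choice(L, size=N, replace=False)
vel = np.zeros(N, dtype=int)
for _ in range(1000):
    pos, vel = nasch_step(pos, vel, L, vmax=5, p_brake=0.3, rng=rng)
print("flow q = density * mean speed:", vel.mean() * N / L)
```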

  10. A Deterministic Approach to Active Debris Removal Target Selection

    NASA Astrophysics Data System (ADS)

    Lidtke, A.; Lewis, H.; Armellin, R.

    2014-09-01

    Many decisions, with widespread economic, political and legal consequences, are being considered based on space debris simulations that show that Active Debris Removal (ADR) may be necessary as concerns about the sustainability of spaceflight increase. The debris environment predictions are based on low-accuracy ephemerides and propagators. This raises doubts about the accuracy of those prognoses themselves, but also about the potential ADR target lists that are produced. Target selection is considered highly important as removal of many objects will increase the overall mission cost. Selecting the most likely candidates as soon as possible would be desirable, as it would enable accurate mission design and allow thorough evaluation of in-orbit validations, which are likely to occur in the near future, before any large investments are made and implementations realized. One of the primary factors that should be used in ADR target selection is the accumulated collision probability of every object. A conjunction detection algorithm, based on the smart sieve method, has been developed. Another algorithm is then applied to the found conjunctions to compute the maximum and true probabilities of collisions taking place. The entire framework has been verified against the Conjunction Analysis Tools in AGI's Systems Toolkit, and a relative probability error smaller than 1.5% has been achieved in the final maximum collision probability. Two target lists are produced based on the ranking of the objects according to the probability that they will take part in any collision over the simulated time window. These probabilities are computed using the maximum probability approach, which is time-invariant, and estimates of the true collision probability that were computed with covariance information. The top-priority targets are compared, and the impacts of the data accuracy and its decay are highlighted. General conclusions regarding the importance of Space Surveillance and Tracking for the purpose of ADR are also drawn, and a deterministic method for ADR target selection, which could reduce the number of ADR missions to be performed, is proposed.
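    The ranking criterion, accumulating each object's conjunction probabilities over the window, reduces to a one-line combination rule if the conjunctions are treated as independent. A sketch with made-up event probabilities:

```python
import numpy as np

# Hypothetical per-conjunction collision probabilities for one debris object
# over the simulated time window.
p_events = np.array([1e-5, 3e-6, 2e-4, 5e-5])

# Accumulated probability that the object takes part in at least one collision,
# assuming the individual conjunctions are independent.
p_any = 1.0 - np.prod(1.0 - p_events)
print(f"accumulated collision probability: {p_any:.3e}")
```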

  11. Probability of stress-corrosion fracture under random loading

    NASA Technical Reports Server (NTRS)

    Yang, J. N.

    1974-01-01

    The mathematical formulation is based on a cumulative-damage hypothesis and experimentally determined stress-corrosion characteristics. Under stationary random loadings, the mean value and variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy.

  12. STOCHASTIC DUELS WITH HOMING,

    DTIC Science & Technology

    Duels where both marksmen ’home’ or ’zero in’ on one another are here considered, and the effect of this on the win probability is determined. It is...leads to win probabilities that can be straightforwardly evaluated. Maximum-likelihood estimation of the hit probability and homing from field data is outlined. The solutions of the duels are displayed as contour maps. (Author)

  13. Generalized Maximum Entropy

    NASA Technical Reports Server (NTRS)

    Cheeseman, Peter; Stutz, John

    2005-01-01

    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
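    The classic (point-value) MaxEnt step that the paper generalizes can be computed directly: on a discrete support with a single mean constraint treated as exact, the maximizing distribution is exponential in the constraint function, with one Lagrange multiplier to solve for. A sketch using the familiar die example (an assumption here, not the paper's case):

```python
import numpy as np
from scipy.optimize import brentq

# Classic MaxEnt with one moment constraint: the maximizing distribution is
# p_i proportional to exp(lam * x_i), with lam chosen to match the observed
# mean, which classic MaxEnt treats as exact.
x = np.arange(1, 7)      # support: faces of a die
target_mean = 4.5        # observed average, treated as exact

def mean_gap(lam):
    w = np.exp(lam * x)
    return (x * w).sum() / w.sum() - target_mean

lam = brentq(mean_gap, -10.0, 10.0)   # solve for the Lagrange multiplier
p = np.exp(lam * x)
p /= p.sum()
print("lambda =", round(lam, 4), " p =", np.round(p, 4))
```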

  14. Ultraviolet irradiation at elevated temperatures and thermal cycling in vacuum of FEP-A covered silicon solar cells

    NASA Technical Reports Server (NTRS)

    Broder, J. D.; Marsik, S. J.

    1978-01-01

    Experiments were designed and performed on silicon solar cells covered with heat-bonded FEP-A in an effort to explain the rapid degradation of open-circuit voltage and maximum power observed on cells of this type included in an experiment on the ATS-6 spacecraft. Solar cells were exposed to ultraviolet light in vacuum at temperatures ranging from 30 to 105 C. The samples were then subjected to thermal cycling from 130 to -130 C. Inspection following irradiation indicated that all the covers remained physically intact. However, during the temperature cycling, heat-bonded covers showed cracking. The test showed that heat-bonded FEP-A covers embrittle during UV exposure and that the embrittlement depends on sample temperature during irradiation. The results of the experiment suggest a probable mechanism for the degradation of the FEP-A cells on ATS-6.

  15. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristic of range images of coherent ladar and the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS using the maximum of the marginal PDF. In the algorithm, similar blocks are found out by the operation of block matching and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimated value of the current pixel. The simulated range images of coherent ladar with different carrier-to-noise ratio and real range image of coherent ladar with 8 gray-scales are denoised by this algorithm, and the results are compared with those of median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in range image of coherent ladar are effectively suppressed by NLPS.

  16. Delay Analysis and Optimization of Bandwidth Request under Unicast Polling in IEEE 802.16e over Gilbert-Elliot Error Channel

    NASA Astrophysics Data System (ADS)

    Hwang, Eunju; Kim, Kyung Jae; Roijers, Frank; Choi, Bong Dae

    In the centralized polling mode in IEEE 802.16e, a base station (BS) polls mobile stations (MSs) for bandwidth reservation in one of three polling modes; unicast, multicast, or broadcast pollings. In unicast polling, the BS polls each individual MS to allow to transmit a bandwidth request packet. This paper presents an analytical model for the unicast polling of bandwidth request in IEEE 802.16e networks over Gilbert-Elliot error channel. We derive the probability distribution for the delay of bandwidth requests due to wireless transmission errors and find the loss probability of request packets due to finite retransmission attempts. By using the delay distribution and the loss probability, we optimize the number of polling slots within a frame and the maximum retransmission number while satisfying QoS on the total loss probability which combines two losses: packet loss due to the excess of maximum retransmission and delay outage loss due to the maximum tolerable delay bound. In addition, we obtain the utilization of polling slots, which is defined as the ratio of the number of polling slots used for the MS's successful transmission to the total number of polling slots used by the MS over a long run time. Analysis results are shown to well match with simulation results. Numerical results give examples of the optimal number of polling slots within a frame and the optimal maximum retransmission number depending on delay bounds, the number of MSs, and the channel conditions.

  17. Evaluation of probable maximum snow accumulation: Development of a methodology for climate change studies

    NASA Astrophysics Data System (ADS)

    Klein, Iris M.; Rousseau, Alain N.; Frigon, Anne; Freudiger, Daphné; Gagnon, Patrick

    2016-06-01

    Probable maximum snow accumulation (PMSA) is one of the key variables used to estimate the spring probable maximum flood (PMF). A robust methodology for evaluating the PMSA is imperative so the ensuing spring PMF is a reasonable estimation. This is of particular importance in times of climate change (CC) since it is known that solid precipitation in Nordic landscapes will in all likelihood change over the next century. In this paper, a PMSA methodology based on simulated data from regional climate models is developed. Moisture maximization represents the core concept of the proposed methodology; precipitable water being the key variable. Results of stationarity tests indicate that CC will affect the monthly maximum precipitable water and, thus, the ensuing ratio to maximize important snowfall events. Therefore, a non-stationary approach is used to describe the monthly maximum precipitable water. Outputs from three simulations produced by the Canadian Regional Climate Model were used to give first estimates of potential PMSA changes for southern Quebec, Canada. A sensitivity analysis of the computed PMSA was performed with respect to the number of time-steps used (so-called snowstorm duration) and the threshold for a snowstorm to be maximized or not. The developed methodology is robust and a powerful tool to estimate the relative change of the PMSA. Absolute results are in the same order of magnitude as those obtained with the traditional method and observed data; but are also found to depend strongly on the climate projection used and show spatial variability.
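    The core moisture-maximization step lends itself to a compact illustration: each large snowfall event is scaled by the ratio of maximum precipitable water to the event's precipitable water, and the largest maximized event gives the PMSA. All numbers below are hypothetical, and the ratio cap is an assumed safeguard rather than this paper's choice:

```python
# Moisture maximization in its simplest form: scale each large snowfall event
# by the ratio of maximum precipitable water to the event's precipitable water.
events = [
    # (snowfall_mm, event_precipitable_water_mm) -- hypothetical values
    (85.0, 14.0),
    (62.0, 10.5),
    (97.0, 18.0),
]
pw_max = 22.0      # monthly maximum precipitable water (assumed, mm)
ratio_cap = 2.0    # assumed upper bound on the maximization ratio

pmsa = max(s * min(pw_max / pw, ratio_cap) for s, pw in events)
print(f"PMSA estimate: {pmsa:.1f} mm")
```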

  18. Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23

    NASA Technical Reports Server (NTRS)

    Willson, Robert M.; Hathaway, David H.

    2004-01-01

    On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).

  19. Evaluation of Bite Force After Open Reduction and Internal Fixation Using Microplates

    PubMed Central

    Kumar, S Tharani; Saraf, Saurabh; Devi, S Prasanna

    2013-01-01

    The primary aim of this study is to determine the maximum bite force in the molar and incisor regions of healthy individuals, to evaluate the bite force after open reduction and internal fixation of mandibular fractures using microplates for a period of up to 6 weeks, and to determine the rate of recovery of maximum bite force in an Indian population.

  20. The returns and risks of investment portfolio in stock market crashes

    NASA Astrophysics Data System (ADS)

    Li, Jiang-Cheng; Long, Chao; Chen, Xiao-Dan

    2015-06-01

    The returns and risks of investment portfolio in stock market crashes are investigated by considering a theoretical model, based on a modified Heston model with a cubic nonlinearity, proposed by Spagnolo and Valenti. Through numerically simulating probability density function of returns and the mean escape time of the model, the results indicate that: (i) the maximum stability of returns is associated with the maximum dispersion of investment portfolio and an optimal stop-loss position; (ii) the maximum risks are related with a worst dispersion of investment portfolio and the risks of investment portfolio are enhanced by increasing stop-loss position. In addition, the good agreements between the theoretical result and real market data are found in the behaviors of the probability density function and the mean escape time.

  1. A METHOD FOR DETERMINING THE RADIALLY-AVERAGED EFFECTIVE IMPACT AREA FOR AN AIRCRAFT CRASH INTO A STRUCTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, William C.

    This report presents a methodology for deriving the equations which can be used for calculating the radially-averaged effective impact area for a theoretical aircraft crash into a structure. Conventionally, a maximum effective impact area has been used in calculating the probability of an aircraft crash into a structure. Whereas the maximum effective impact area is specific to a single direction of flight, the radially-averaged effective impact area takes into consideration the real-life random nature of the direction of flight with respect to a structure. Since the radially-averaged effective impact area is less than the maximum effective impact area, the resulting calculated probability of an aircraft crash into a structure is reduced.
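    The idea of averaging over the direction of flight is easy to show numerically. The footprint formula below is a deliberately simplified assumption for illustration (a rectangular structure's shadow width plus wingspan, times a fixed skid length), not the report's derived equations:

```python
import numpy as np

# Illustrative footprint model (an assumption, not the report's equations):
# a rectangular structure of length L and width W presents, to an aircraft of
# wingspan WS approaching along heading theta, an effective width
#   w(theta) = L*|sin(theta)| + W*|cos(theta)| + WS,
# and the effective impact area is w(theta) times a fixed skid length S.
L, W, WS, S = 30.0, 20.0, 35.0, 60.0          # metres (hypothetical)

theta = np.linspace(0.0, 2.0 * np.pi, 3600, endpoint=False)
area = (L * np.abs(np.sin(theta)) + W * np.abs(np.cos(theta)) + WS) * S

print(f"maximum effective area:          {area.max():9.1f} m^2")
print(f"radially averaged effective area:{area.mean():9.1f} m^2")
```

    As expected, the directional average falls below the maximum, which is what reduces the calculated crash probability.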

  2. Evaluation of swimming performance for fish passage of longnose dace Rhinichthys cataractae using an experimental flume.

    PubMed

    Dockery, D R; McMahon, T E; Kappenman, K M; Blank, M

    2017-03-01

    The swimming performance of longnose dace Rhinichthys cataractae, the most widely distributed minnow (Cyprinidae) in North America, was assessed in relation to potential passage barriers. The study estimated passage success, maximum ascent distances and maximum sprint speed in an open-channel flume over a range of water velocities and temperatures (10·7, 15·3 and 19·3° C). Rhinichthys cataractae had high passage success (95%) in a 9·2 m flume section at mean test velocities of 39 and 64 cm s⁻¹, but success rate dropped to 66% at 78 cm s⁻¹. Only 20% of fish were able to ascend a 2·7 m section with a mean velocity of 122 cm s⁻¹. Rhinichthys cataractae actively selected low-velocity pathways located along the bottom and corners of the flume at all test velocities and adopted position-holding behaviour at higher water velocities. Mean volitional sprint speed was 174 cm s⁻¹ when fish volitionally sprinted in areas of high water velocities. Swimming performance generally increased with water temperature and fish length. Based on these results, fishways with mean velocities <64 cm s⁻¹ should allow passage of most R. cataractae. Water velocities >100 cm s⁻¹ within structures should be limited to short distances (<1 m) and structures with velocities ≥158 cm s⁻¹ would probably represent movement barriers. Study results highlighted the advantages of evaluating a multitude of swimming performance metrics in an open-channel flume, which can simulate the hydraulic features of fishways and allow for behavioural observations that can facilitate the design of effective passage structures.

  3. Maximum-likelihood fitting of data dominated by Poisson statistical uncertainties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoneking, M.R.; Den Hartog, D.J.

    1996-06-01

    The fitting of data by χ²-minimization is valid only when the uncertainties in the data are normally distributed. When analyzing spectroscopic or particle counting data at very low signal level (e.g., a Thomson scattering diagnostic), the uncertainties are distributed with a Poisson distribution. The authors have developed a maximum-likelihood method for fitting data that correctly treats the Poisson statistical character of the uncertainties. This method maximizes the total probability that the observed data are drawn from the assumed fit function, using the Poisson probability function to determine the probability for each data point. The algorithm also returns uncertainty estimates for the fit parameters. They compare this method with a χ²-minimization routine applied to both simulated and real data. Differences in the returned fits are greater at low signal level (less than approximately 20 counts per measurement). The maximum-likelihood method is found to be more accurate and robust, returning a narrower distribution of values for the fit parameters with fewer outliers.
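    The objective being maximized is easy to state in code: minimize the negative Poisson log-likelihood, Σ(model − k·log model), dropping the data-only log k! term. A sketch fitting a hypothetical Gaussian peak on a flat background (the fit function here is an assumption, not the paper's Thomson-scattering model):

```python
import numpy as np
from scipy.optimize import minimize

# Poisson maximum-likelihood fit of a Gaussian peak on a flat background to
# counting data (a synthetic spectrum; amplitude, center, width, background).
rng = np.random.default_rng(4)
ch = np.arange(64, dtype=float)
true = 18.0 * np.exp(-0.5 * ((ch - 30.0) / 4.0) ** 2) + 2.0
counts = rng.poisson(true)

def nll(theta):
    a, mu, sig, b = theta
    model = a * np.exp(-0.5 * ((ch - mu) / sig) ** 2) + b
    if np.any(model <= 0):
        return np.inf
    # Negative Poisson log-likelihood (dropping the data-only log k! term).
    return np.sum(model - counts * np.log(model))

fit = minimize(nll, x0=[10.0, 28.0, 5.0, 1.0], method="Nelder-Mead")
print("a, mu, sigma, b =", np.round(fit.x, 2))
```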

  4. Maximum likelihood estimation for predicting the probability of obtaining variable shortleaf pine regeneration densities

    Treesearch

    Thomas B. Lynch; Jean Nkouka; Michael M. Huebschmann; James M. Guldin

    2003-01-01

    A logistic equation is the basis for a model that predicts the probability of obtaining regeneration at specified densities. The density of regeneration (trees/ha) for which an estimate of probability is desired can be specified by means of independent variables in the model. When estimating parameters, the dependent variable is set to 1 if the regeneration density (...
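    A logistic probability model of this general form is easy to state concretely. The predictors and coefficients below are hypothetical placeholders, not the paper's fitted equation:

```python
import numpy as np

# A logistic model of the kind described: the probability of obtaining at
# least a specified regeneration density, given site predictors x.
beta = np.array([-1.2, 0.035, 0.8])    # hypothetical intercept and coefficients

def p_regen(basal_area, site_index):
    z = beta @ np.array([1.0, basal_area, site_index])
    return 1.0 / (1.0 + np.exp(-z))    # logistic link

print(f"P(regeneration >= target density) = {p_regen(20.0, 1.0):.3f}")
```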

  5. The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Verkley, Wim; Severijns, Camiel

    2014-05-01

    Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2] which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).
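    The benchmark system and the energy constraint are compact enough to reproduce. A sketch that integrates Lorenz's 1996 model and records the average total energy used as the MaxEnt constraint (parameter values assumed typical, F = 8):

```python
import numpy as np

# The Lorenz (1996) system: dx_i/dt = (x_{i+1} - x_{i-2}) x_{i-1} - x_i + F
# on a periodic array. Integrate it and record the average total energy
# E = sum(x_i^2)/2, the constraint used for the maximum entropy prediction.
def lorenz96(x, F):
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

N, F, dt = 40, 8.0, 0.01
x = F + 1e-3 * np.random.default_rng(5).normal(size=N)
energies = []
for step in range(20000):                  # fourth-order Runge-Kutta
    k1 = lorenz96(x, F)
    k2 = lorenz96(x + 0.5 * dt * k1, F)
    k3 = lorenz96(x + 0.5 * dt * k2, F)
    k4 = lorenz96(x + dt * k3, F)
    x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    if step > 2000:                        # discard the transient
        energies.append(0.5 * np.sum(x * x))
print("average total energy:", np.mean(energies))
```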

  6. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
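    A worked instance of the high-SNR approximation, using the abstract's notation and assumed code parameters:

```python
# High-SNR approximation quoted above for systematic encoding:
#   P_b ≈ (d_H / N) * P_s.
# All numbers below are hypothetical, for illustration only.
d_H, N = 8, 63   # assumed distance parameter and block length
P_s = 1e-4       # assumed block error probability at the operating SNR

P_b = (d_H / N) * P_s
print(f"approximate bit error probability: {P_b:.2e}")
```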

  7. Covariance Based Pre-Filters and Screening Criteria for Conjunction Analysis

    NASA Astrophysics Data System (ADS)

    George, E.; Chan, K.

    2012-09-01

    Several relationships are developed relating object size, initial covariance and range at closest approach to probability of collision. These relationships address the following questions: - Given the objects' initial covariance and combined hard body size, what is the maximum possible value of the probability of collision (Pc)? - Given the objects' initial covariance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the combined hard body radius, what is the minimum miss distance for which the probability of collision does not exceed the tolerance limit? - Given the objects' initial covariance and the miss distance, what is the maximum combined hard body radius for which the probability of collision does not exceed the tolerance limit? The first relationship above allows the elimination of object pairs from conjunction analysis (CA) on the basis of the initial covariance and hard-body sizes of the objects. The application of this pre-filter to present-day catalogs with estimated covariance results in the elimination of approximately 35% of object pairs as unable to ever conjunct with a probability of collision exceeding 1×10⁻⁶. Because Pc is directly proportional to object size and inversely proportional to covariance size, this pre-filter will have a significantly larger impact on future catalogs, which are expected to contain a much larger fraction of small debris tracked only by a limited subset of available sensors. This relationship also provides a mathematically rigorous basis for eliminating objects from analysis entirely based on element set age or quality - a practice commonly done by rough rules of thumb today. Further, these relations can be used to determine the required geometric screening radius for all objects. This analysis reveals the screening volumes for small objects are much larger than needed, while the screening volumes for pairs of large objects may be inadequate. These relationships may also form the basis of an important metric for catalog maintenance by defining the maximum allowable covariance size for effective conjunction analysis. The application of these techniques promises to greatly improve the efficiency and completeness of conjunction analysis.
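    The "maximum possible Pc" question can be probed numerically: for a fixed miss distance and hard-body size, the collision probability has a maximum over the covariance scale. A simplified sketch under an assumed isotropic encounter-plane covariance (not the paper's derivations):

```python
import numpy as np

def pc(miss, sigma, R, n=200):
    """Collision probability: isotropic 2-D Gaussian (miss along x) integrated
    over the combined hard-body disc of radius R, by simple grid quadrature."""
    xs = np.linspace(-R, R, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= R**2
    dens = np.exp(-((X - miss) ** 2 + Y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    cell = (xs[1] - xs[0]) ** 2
    return float((dens * inside).sum() * cell)

# For a fixed miss distance and hard-body size there is a covariance scale
# that maximizes Pc -- the basis of the "maximum possible Pc" pre-filter.
miss, R = 1000.0, 10.0                      # metres (hypothetical)
sigmas = np.linspace(50.0, 5000.0, 200)
pcs = [pc(miss, s, R) for s in sigmas]
best = int(np.argmax(pcs))
print(f"max Pc = {pcs[best]:.3e} at sigma = {sigmas[best]:.0f} m")
```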

  8. 7 CFR 3015.182 - Open and free competition.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Open and free competition. 3015.182 Section 3015.182..., DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE REGULATIONS Procurement § 3015.182 Open and free... regard to dollar value shall be conducted in a manner that provides maximum open and free competition. ...

  9. 7 CFR 3015.182 - Open and free competition.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 15 2014-01-01 2014-01-01 false Open and free competition. 3015.182 Section 3015.182..., DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE REGULATIONS Procurement § 3015.182 Open and free... regard to dollar value shall be conducted in a manner that provides maximum open and free competition. ...

  10. 7 CFR 3015.182 - Open and free competition.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 15 2012-01-01 2012-01-01 false Open and free competition. 3015.182 Section 3015.182..., DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE REGULATIONS Procurement § 3015.182 Open and free... regard to dollar value shall be conducted in a manner that provides maximum open and free competition. ...

  11. 7 CFR 3015.182 - Open and free competition.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 15 2011-01-01 2011-01-01 false Open and free competition. 3015.182 Section 3015.182..., DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE REGULATIONS Procurement § 3015.182 Open and free... regard to dollar value shall be conducted in a manner that provides maximum open and free competition. ...

  12. 7 CFR 3015.182 - Open and free competition.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 15 2013-01-01 2013-01-01 false Open and free competition. 3015.182 Section 3015.182..., DEPARTMENT OF AGRICULTURE UNIFORM FEDERAL ASSISTANCE REGULATIONS Procurement § 3015.182 Open and free... regard to dollar value shall be conducted in a manner that provides maximum open and free competition. ...

  13. Ensemble-Biased Metadynamics: A Molecular Simulation Method to Sample Experimental Distributions

    PubMed Central

    Marinelli, Fabrizio; Faraldo-Gómez, José D.

    2015-01-01

    We introduce an enhanced-sampling method for molecular dynamics (MD) simulations referred to as ensemble-biased metadynamics (EBMetaD). The method biases a conventional MD simulation to sample a molecular ensemble that is consistent with one or more probability distributions known a priori, e.g., experimental intramolecular distance distributions obtained by double electron-electron resonance or other spectroscopic techniques. To this end, EBMetaD adds an adaptive biasing potential throughout the simulation that discourages sampling of configurations inconsistent with the target probability distributions. The bias introduced is the minimum necessary to fulfill the target distributions, i.e., EBMetaD satisfies the maximum-entropy principle. Unlike other methods, EBMetaD does not require multiple simulation replicas or the introduction of Lagrange multipliers, and is therefore computationally efficient and straightforward in practice. We demonstrate the performance and accuracy of the method for a model system as well as for spin-labeled T4 lysozyme in explicit water, and show how EBMetaD reproduces three double electron-electron resonance distance distributions concurrently within a few tens of nanoseconds of simulation time. EBMetaD is integrated in the open-source PLUMED plug-in (www.plumed-code.org), and can be therefore readily used with multiple MD engines. PMID:26083917

  14. Risky Business: Do Native Rodents Use Habitat and Odor Cues to Manage Predation Risk in Australian Deserts?

    PubMed Central

    Spencer, Emma E.; Crowther, Mathew S.; Dickman, Christopher R.

    2014-01-01

    In open, arid environments with limited shelter there may be strong selection on small prey species to develop behaviors that facilitate predator avoidance. Here, we predicted that rodents should avoid predator odor and open habitats to reduce their probability of encounter with potential predators, and tested our predictions using a native Australian desert rodent, the spinifex hopping-mouse (Notomys alexis). We tested the foraging and movement responses of N. alexis to non-native predator (fox and cat) odor, in sheltered and open macro- and microhabitats. Rodents did not respond to predator odor, perhaps reflecting the inconsistent selection pressure that is imposed on prey species in the desert environment due to the transience of predator-presence. However, they foraged primarily in the open and moved preferentially across open sand. The results suggest that N. alexis relies on escape rather than avoidance behavior when managing predation risk, with its bipedal movement probably allowing it to exploit open environments most effectively. PMID:24587396

  15. Risky business: do native rodents use habitat and odor cues to manage predation risk in Australian deserts?

    PubMed

    Spencer, Emma E; Crowther, Mathew S; Dickman, Christopher R

    2014-01-01

    In open, arid environments with limited shelter there may be strong selection on small prey species to develop behaviors that facilitate predator avoidance. Here, we predicted that rodents should avoid predator odor and open habitats to reduce their probability of encounter with potential predators, and tested our predictions using a native Australian desert rodent, the spinifex hopping-mouse (Notomys alexis). We tested the foraging and movement responses of N. alexis to non-native predator (fox and cat) odor, in sheltered and open macro- and microhabitats. Rodents did not respond to predator odor, perhaps reflecting the inconsistent selection pressure that is imposed on prey species in the desert environment due to the transience of predator-presence. However, they foraged primarily in the open and moved preferentially across open sand. The results suggest that N. alexis relies on escape rather than avoidance behavior when managing predation risk, with its bipedal movement probably allowing it to exploit open environments most effectively.

  16. The Best of Two Open Worlds at the National Open University of Nigeria

    ERIC Educational Resources Information Center

    Agbu, Jane-frances Obiageli; Mulder, Fred; de Vries, Fred; Tenebe, Vincent; Caine, Abel

    2016-01-01

    It will be wise for educational institutions, from primary to tertiary level, globally, to reflect on their position and profile with respect to the new concepts of Open Educational Resources (OER) and Massive Open Online Courses (MOOCs). Responses will be diverse of course but the potential is so manifest that many institutions probably will…

  17. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  18. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation.

    PubMed

    Li, Yuhong; Jia, Fucang; Qin, Jing

    2016-10-01

    Accurately segmenting and quantifying brain gliomas from magnetic resonance (MR) images remains a challenging task because of the large spatial and structural variability among brain tumors. To develop a fully automatic and accurate brain tumor segmentation algorithm, we present a probabilistic model of multimodal MR brain tumor segmentation. This model combines sparse representation and the Markov random field (MRF) to solve the spatial and structural variability problem. We formulate the tumor segmentation problem as a multi-classification task by labeling each voxel as the maximum posterior probability. We estimate the maximum a posteriori (MAP) probability by introducing the sparse representation into a likelihood probability and a MRF into the prior probability. Considering the MAP as an NP-hard problem, we convert the maximum posterior probability estimation into a minimum energy optimization problem and employ graph cuts to find the solution to the MAP estimation. Our method is evaluated using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013) and obtained Dice coefficient metric values of 0.85, 0.75, and 0.69 on the high-grade Challenge data set, 0.73, 0.56, and 0.54 on the high-grade Challenge LeaderBoard data set, and 0.84, 0.54, and 0.57 on the low-grade Challenge data set for the complete, core, and enhancing regions. The experimental results show that the proposed algorithm is valid and ranks 2nd compared with the state-of-the-art tumor segmentation algorithms in the MICCAI BRATS 2013 challenge.

  19. 40 CFR 258.14 - Seismic impact zones.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... horizontal acceleration in lithified earth material for the site. The owner or operator must place the... greater probability that the maximum horizontal acceleration in lithified earth material, expressed as a percentage of the earth's gravitational pull (g), will exceed 0.10g in 250 years. (2) Maximum horizontal...

  20. 40 CFR 258.14 - Seismic impact zones.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... horizontal acceleration in lithified earth material for the site. The owner or operator must place the... greater probability that the maximum horizontal acceleration in lithified earth material, expressed as a percentage of the earth's gravitational pull (g), will exceed 0.10g in 250 years. (2) Maximum horizontal...

  1. 40 CFR 258.14 - Seismic impact zones.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... horizontal acceleration in lithified earth material for the site. The owner or operator must place the... greater probability that the maximum horizontal acceleration in lithified earth material, expressed as a percentage of the earth's gravitational pull (g), will exceed 0.10g in 250 years. (2) Maximum horizontal...

  2. Analysis of meteorological droughts and dry spells in semiarid regions: a comparative analysis of probability distribution functions in the Segura Basin (SE Spain)

    NASA Astrophysics Data System (ADS)

    Pérez-Sánchez, Julio; Senent-Aparicio, Javier

    2017-08-01

    Dry spells are an essential concept of drought climatology that clearly defines the semiarid Mediterranean environment and whose consequences are a defining feature for an ecosystem so vulnerable with regard to water. The present study was conducted to characterize rainfall drought in the Segura River basin located in eastern Spain, marked by the strongly seasonal nature of rainfall at these latitudes. A daily precipitation data set was utilized for 29 weather stations during a period of 20 years (1993-2013). Furthermore, four sets of dry spell length (complete series, monthly maximum, seasonal maximum, and annual maximum) are used and simulated for all the weather stations with the following probability distribution functions: Burr, Dagum, error, generalized extreme value, generalized logistic, generalized Pareto, Gumbel Max, inverse Gaussian, Johnson SB, Log-Logistic, Log-Pearson 3, Triangular, Weibull, and Wakeby. Only the series of annual maximum spells offered a good fit for all the weather stations, with the Wakeby distribution giving the best results (mean p value of 0.9424 in the Kolmogorov-Smirnov test at the 0.2 significance level). Maps of dry spell duration for return periods of 2, 5, 10, and 25 years reveal a northeast-southeast gradient, with lengthening periods of rainfall below 0.1 mm toward the eastern third of the basin, in the proximity of the Mediterranean slope.
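
    A minimal sketch of the fit-and-test step described above, using hypothetical annual-maximum dry-spell lengths and the generalized extreme value distribution as one candidate family (SciPy does not ship the Wakeby distribution, so the GEV stands in here for illustration only):

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical annual maximum dry-spell lengths (days) for one station.
    annual_max = np.array([23, 31, 18, 44, 27, 36, 22, 51, 29, 33,
                           25, 40, 19, 38, 30, 26, 47, 21, 35, 28])

    # Fit a candidate distribution (generalized extreme value) by maximum likelihood.
    shape, loc, scale = stats.genextreme.fit(annual_max)

    # Kolmogorov-Smirnov goodness-of-fit test against the fitted distribution.
    ks_stat, p_value = stats.kstest(annual_max, 'genextreme', args=(shape, loc, scale))
    print(f"KS statistic = {ks_stat:.3f}, p value = {p_value:.3f}")

    # Dry-spell duration for selected return periods T: the (1 - 1/T) quantile.
    for T in (2, 5, 10, 25):
        q = stats.genextreme.ppf(1 - 1 / T, shape, loc, scale)
        print(f"{T:>2}-year return period: {q:.1f} days")
    ```

    Note that testing against parameters fitted from the same sample biases the KS p value upward, which is one reason a comparatively strict 0.2 significance level, as used in the study, is a reasonable safeguard.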

  3. Automated measurement of spatial preference in the open field test with transmitted lighting.

    PubMed

    Kulikov, Alexander V; Tikhonova, Maria A; Kulikov, Victor A

    2008-05-30

    A new modification of the open field test was designed to improve its automation. The main innovations were: (1) transmitted lighting and (2) estimation of the probability of finding pixels associated with an animal in a selected region of the arena as an objective index of spatial preference. Transmitted (inverted) lighting significantly improved the contrast between animal and arena and made it possible to track white animals as effectively as colored ones. Probability as a measure of preference for a selected region was mathematically justified and experimentally verified. A good correlation between probability and classic indices of spatial preference (number of region entries and time spent therein) was shown. The algorithm for calculating the probability of finding pixels associated with an animal in the selected region was implemented in the EthoStudio software. Significant interstrain differences in locomotion and in central zone preference (an index of anxiety) were shown using the inverted lighting and the EthoStudio software in mice of six inbred strains. The effects of arena shape (circle or square) and of the presence of a novel object in the center of the arena on open field behavior in mice were also studied.
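
    A minimal sketch of the probability index described above, assuming a binary mask of animal-associated pixels per video frame (the EthoStudio internals are not given in the abstract, so the exact computation here is an assumption):

    ```python
    import numpy as np

    def region_preference(frames: np.ndarray, region: np.ndarray) -> float:
        """Probability of finding animal-associated pixels in a region.

        frames: (n_frames, height, width) boolean array, True where the
                tracker attributes a pixel to the animal.
        region: (height, width) boolean mask of the region of interest,
                e.g. the central zone of the arena.
        """
        animal_pixels = frames.sum(axis=(1, 2))         # animal pixels per frame
        in_region = (frames & region).sum(axis=(1, 2))  # of those, inside region
        valid = animal_pixels > 0                       # skip frames with no animal
        return float((in_region[valid] / animal_pixels[valid]).mean())

    # Example: a 10-frame, 64x64 synthetic recording with a central zone.
    rng = np.random.default_rng(0)
    frames = rng.random((10, 64, 64)) < 0.01
    region = np.zeros((64, 64), dtype=bool)
    region[16:48, 16:48] = True
    print(f"central-zone preference = {region_preference(frames, region):.3f}")
    ```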

  4. Open Markov Processes and Reaction Networks

    ERIC Educational Resources Information Center

    Swistock Pollard, Blake Stephen

    2017-01-01

    We begin by defining the concept of "open" Markov processes, which are continuous-time Markov chains where probability can flow in and out through certain "boundary" states. We study open Markov processes which in the absence of such boundary flows admit equilibrium states satisfying detailed balance, meaning that the net flow…

  5. Maximum likelihood estimation for the double-count method with independent observers

    USGS Publications Warehouse

    Manly, Bryan F.J.; McDonald, Lyman L.; Garner, Gerald W.

    1996-01-01

    Data collected under a double-count protocol during line transect surveys were analyzed using new maximum likelihood methods combined with Akaike's information criterion to provide estimates of the abundance of polar bear (Ursus maritimus Phipps) in a pilot study off the coast of Alaska. Visibility biases were corrected by modeling the detection probabilities using logistic regression functions. Independent variables that influenced the detection probabilities included the perpendicular distance of bear groups from the flight line and the number of individuals in the groups. A series of models was considered, ranging from (1) the simplest, in which the probability of detection was the same for both observers and was not affected by either distance from the flight line or group size, to (2) models in which the probability of detection differed between the two observers and depended on both distance from the transect and group size. Estimation procedures are developed for the case when additional variables may affect detection probabilities. The methods are illustrated using data from the pilot polar bear survey, and some recommendations are given for the design of a survey over the larger Chukchi Sea between Russia and the United States.
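
    A minimal sketch of the detection-probability component, with hypothetical coefficients (the paper's fitted values are not given in the abstract): detection is modeled by logistic regression on perpendicular distance and group size, and with two independent observers the probability that a group is seen at least once follows from independence:

    ```python
    import numpy as np

    def p_detect(distance_m, group_size, b0=1.5, b1=-0.004, b2=0.3):
        """Logistic detection probability (hypothetical coefficients)."""
        eta = b0 + b1 * distance_m + b2 * group_size
        return 1.0 / (1.0 + np.exp(-eta))

    # Two independent observers: probability that at least one detects the group.
    d, s = 250.0, 2                       # perpendicular distance (m), group size
    p1 = p_detect(d, s)                   # observer 1
    p2 = p_detect(d, s, b0=1.2)           # observer 2, different intercept
    p_either = 1 - (1 - p1) * (1 - p2)
    print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, at least one = {p_either:.3f}")
    ```

    Inverting such detection probabilities (a Horvitz-Thompson-style correction) is what converts counts of seen groups into an abundance estimate.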

  6. The Effects of Vegetative Type, Edges, Fire History, Rainfall and Management in Fire-Maintained Ecosystems

    NASA Technical Reports Server (NTRS)

    Breininger, David R.; Foster, Tammy E.; Carter, Geoffrey M.; Duncan, Brean W.; Stolen, Eric D.; Lyon, James E.

    2017-01-01

    The combined effects of repeated fires, climate, and landscape features (e.g., edges) need greater focus in fire ecology studies, which usually emphasize characteristics of the most recent fire and not fire history. Florida scrub-jays are an imperiled, territorial species that prefer medium (1.2-1.7 m) shrub heights. We measured short, medium, and tall habitat quality states annually within 10 ha grid cells that represented potential territories, because frequent fires and vegetative recovery cause annual variation in habitat quality. We used multistate models and model selection to test competing hypotheses about how transition probabilities between states varied annually as functions of environmental covariates. Covariates included vegetative type, edges, precipitation, openings (gaps between shrubs), mechanical cutting, and fire characteristics. Fire characteristics not only included an annual presence-absence of fire covariate, but also fire history covariates: time since the previous fire, the maximum fire-free interval, and the number of repeated fires. Statistical models with support included many covariates for each transition probability, often including fire history, interactions, and nonlinear relationships. Tall territories resulted from 28 years of fire suppression and habitat fragmentation that reduced the spread of fires across landscapes. Despite 35 years of habitat restoration and prescribed fires, half the territories remained tall, suggesting a regime shift to a less desirable habitat condition. Measuring territory quality states and environmental covariates each year, combined with multistate modeling, provided a useful empirical approach to quantify the effects of repeated fire in combination with environmental variables on the transition probabilities that drive management strategies and ecosystem change.

  7. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  8. Computing under-ice discharge: A proof-of-concept using hydroacoustics and the Probability Concept

    NASA Astrophysics Data System (ADS)

    Fulton, John W.; Henneberg, Mark F.; Mills, Taylor J.; Kohn, Michael S.; Epstein, Brian; Hittle, Elizabeth A.; Damschen, William C.; Laveau, Christopher D.; Lambrecht, Jason M.; Farmer, William H.

    2018-07-01

    Under-ice discharge is estimated using open-water reference hydrographs; however, the ratings for ice-affected sites are generally qualified as poor. The U.S. Geological Survey (USGS), in collaboration with the Colorado Water Conservation Board, conducted a proof-of-concept to develop an alternative method for computing under-ice discharge using hydroacoustics and the Probability Concept. The study site was located south of Minturn, Colorado (CO), USA, and was selected because of (1) its proximity to the existing USGS streamgage 09064600 Eagle River near Minturn, CO, and (2) its ease-of-access to verify discharge using a variety of conventional methods. From late September 2014 to early March 2015, hydraulic conditions varied from open water to under ice. These temporal changes led to variations in water depth and velocity. Hydroacoustics (tethered and uplooking acoustic Doppler current profilers and acoustic Doppler velocimeters) were deployed to measure the vertical-velocity profile at a singularly important vertical of the channel-cross section. Because the velocity profile was non-standard and could not be characterized using a Power Law or Log Law, velocity data were analyzed using the Probability Concept, which is a probabilistic formulation of the velocity distribution. The Probability Concept-derived discharge was compared to conventional methods including stage-discharge and index-velocity ratings and concurrent field measurements; each is complicated by the dynamics of ice formation, pressure influences on stage measurements, and variations in cross-sectional area due to ice formation. No particular discharge method was assigned as truth. Rather, one statistical metric (Kolmogorov-Smirnov; KS), agreement plots, and concurrent measurements provided a measure of comparability between various methods. Regardless of the method employed, comparisons between each method revealed encouraging results depending on the flow conditions and the absence or presence of ice cover. For example, during lower discharges dominated by under-ice and transition (intermittent open-water and under-ice) conditions, the KS metric suggests there is not sufficient information to reject the null hypothesis and implies that the Probability Concept and index-velocity rating represent similar distributions. During high-flow, open-water conditions, the comparisons are less definitive; therefore, it is important that the appropriate analytical method and instrumentation be selected. Six conventional discharge measurements were collected concurrently with Probability Concept-derived discharges with percent differences (%) of -9.0%, -21%, -8.6%, 17.8%, 3.6%, and -2.3%. This proof-of-concept demonstrates that riverine discharges can be computed using the Probability Concept for a range of hydraulic extremes (variations in discharge, open-water and under-ice conditions) immediately after the siting phase is complete, which typically requires one day. Computing real-time discharges is particularly important at sites where (1) new streamgages are planned, (2) river hydraulics are complex, and (3) shifts in the stage-discharge rating are needed to correct the streamflow record. Use of the Probability Concept does not preclude the need to maintain a stage-area relation. Both the Probability Concept and index-velocity rating offer water-resource managers and decision makers alternatives for computing real-time discharge for open-water and under-ice conditions.
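
    The Probability Concept invoked here is commonly associated with Chiu's entropy-based velocity distribution, in which velocity is treated as a random variable rather than forced into a Power Law or Log Law profile; a frequently cited form (stated as background, not quoted from the paper) is

    ```latex
    \frac{u}{u_{\max}} = \frac{1}{M} \ln\!\left[ 1 + \left( e^{M} - 1 \right) \frac{\xi}{\xi_{\max}} \right]
    ```

    where u_max is the maximum velocity in the cross section, ξ is a normalized isovel coordinate, and M is an entropy parameter. The mean-to-maximum velocity ratio φ(M) = e^M/(e^M − 1) − 1/M then links a single, carefully sited vertical profile to the cross-sectional mean velocity and, together with the stage-area relation mentioned in the abstract, to discharge.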

  9. Computing under-ice discharge: A proof-of-concept using hydroacoustics and the Probability Concept

    USGS Publications Warehouse

    Fulton, John W.; Henneberg, Mark F.; Mills, Taylor J.; Kohn, Michael S.; Epstein, Brian; Hittle, Elizabeth A.; Damschen, William C.; Laveau, Christopher D.; Lambrecht, Jason M.; Farmer, William H.

    2018-01-01

    Under-ice discharge is estimated using open-water reference hydrographs; however, the ratings for ice-affected sites are generally qualified as poor. The U.S. Geological Survey (USGS), in collaboration with the Colorado Water Conservation Board, conducted a proof-of-concept to develop an alternative method for computing under-ice discharge using hydroacoustics and the Probability Concept. The study site was located south of Minturn, Colorado (CO), USA, and was selected because of (1) its proximity to the existing USGS streamgage 09064600 Eagle River near Minturn, CO, and (2) its ease-of-access to verify discharge using a variety of conventional methods. From late September 2014 to early March 2015, hydraulic conditions varied from open water to under ice. These temporal changes led to variations in water depth and velocity. Hydroacoustics (tethered and uplooking acoustic Doppler current profilers and acoustic Doppler velocimeters) were deployed to measure the vertical-velocity profile at a singularly important vertical of the channel-cross section. Because the velocity profile was non-standard and could not be characterized using a Power Law or Log Law, velocity data were analyzed using the Probability Concept, which is a probabilistic formulation of the velocity distribution. The Probability Concept-derived discharge was compared to conventional methods including stage-discharge and index-velocity ratings and concurrent field measurements; each is complicated by the dynamics of ice formation, pressure influences on stage measurements, and variations in cross-sectional area due to ice formation. No particular discharge method was assigned as truth. Rather, one statistical metric (Kolmogorov-Smirnov; KS), agreement plots, and concurrent measurements provided a measure of comparability between various methods. Regardless of the method employed, comparisons between each method revealed encouraging results depending on the flow conditions and the absence or presence of ice cover. For example, during lower discharges dominated by under-ice and transition (intermittent open-water and under-ice) conditions, the KS metric suggests there is not sufficient information to reject the null hypothesis and implies that the Probability Concept and index-velocity rating represent similar distributions. During high-flow, open-water conditions, the comparisons are less definitive; therefore, it is important that the appropriate analytical method and instrumentation be selected. Six conventional discharge measurements were collected concurrently with Probability Concept-derived discharges with percent differences (%) of −9.0%, −21%, −8.6%, 17.8%, 3.6%, and −2.3%. This proof-of-concept demonstrates that riverine discharges can be computed using the Probability Concept for a range of hydraulic extremes (variations in discharge, open-water and under-ice conditions) immediately after the siting phase is complete, which typically requires one day. Computing real-time discharges is particularly important at sites where (1) new streamgages are planned, (2) river hydraulics are complex, and (3) shifts in the stage-discharge rating are needed to correct the streamflow record. Use of the Probability Concept does not preclude the need to maintain a stage-area relation. Both the Probability Concept and index-velocity rating offer water-resource managers and decision makers alternatives for computing real-time discharge for open-water and under-ice conditions.

  10. Single-Pixel Optical Fluctuation Analysis of Calcium Channel Function in Active Zones of Motor Nerve Terminals

    PubMed Central

    Luo, Fujun; Dittrich, Markus; Stiles, Joel R.; Meriney, Stephen D.

    2011-01-01

    We used high-resolution fluorescence imaging and single-pixel optical fluctuation analysis to estimate the opening probability of individual voltage-gated calcium (Ca2+) channels during an action potential and the number of such Ca2+ channels within active zones of frog neuromuscular junctions. Analysis revealed ~36 Ca2+ channels within each active zone, similar to the number of docked synaptic vesicles but far fewer than the total number of transmembrane particles reported based on freeze-fracture analysis (~200–250). The probability that each channel opened during an action potential was only ~0.2. These results suggest why each active zone averages only one quantal release event for every two action potentials, despite a substantial number of docked vesicles. With sparse Ca2+ channels and low opening probability, triggering of fusion for each vesicle is primarily controlled by Ca2+ influx through individual Ca2+ channels. In contrast, the entire synapse is highly reliable because it contains hundreds of active zones. PMID:21813687
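
    The logic of fluctuation analysis can be illustrated with binomial statistics; this is a simplified sketch, since the actual single-pixel analysis works from calibrated fluorescence rather than direct counts of openings. If an active zone contains N independent channels that each open with probability p, the trial-to-trial mean and variance of the number of openings determine both parameters:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate channel openings across many action potentials at one active zone.
    true_N, true_p = 36, 0.2
    openings = rng.binomial(true_N, true_p, size=5000)  # openings per trial

    mean, var = openings.mean(), openings.var()
    # Binomial moments: mean = N*p and var = N*p*(1-p), so p = 1 - var/mean.
    p_hat = 1.0 - var / mean
    N_hat = mean / p_hat
    print(f"estimated p = {p_hat:.3f}, estimated N = {N_hat:.1f}")
    ```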

  11. Learning for Maximum Impact: Four Critical but Overlooked Ideas

    ERIC Educational Resources Information Center

    Roberson, Sam

    2017-01-01

    Thinking and learning are corresponding and interdependent processes in every classroom. To improve learning, teachers must be open to new ideas, particularly ideas that locate conditions for maximum learning. This paper presents four overlooked but critical ideas that identify a common area, a GAP experience within which maximum learning is…

  12. A methodological framework to assess PMP and PMF in snow-dominated watersheds under changing climate conditions - A case study of three watersheds in Québec (Canada)

    NASA Astrophysics Data System (ADS)

    Rouhani, Hassan; Leconte, Robert

    2018-06-01

    Climate change will affect precipitation and flood regimes. It is anticipated that the Probable Maximum Precipitation (PMP) and Probable Maximum Flood (PMF) will be modified in a changing climate. This paper aims to quantify and analyze climate change influences on PMP and PMF in three watersheds with different climatic conditions across the province of Québec, Canada. Output data from the Canadian Regional Climate Model (CRCM) were used to estimate PMP and Probable Maximum Snow Accumulation (PMSA) under future climate projections; these estimates were then used to force the SWAT hydrological model to estimate PMF. PMP and PMF values were estimated for two time horizons each spanning 30 years: 1961-1990 (recent past) and 2041-2070 (future). PMP and PMF were separately analyzed for two seasons: summer-fall and spring. Results show that PMF in the watershed located in southern Québec would remain unchanged in the future horizon, but the trend for the watersheds located in the northeastern and northern areas of the province is an increase of up to 11%.

  13. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications.
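
    A minimal sketch of a weighted maximum likelihood fit under assumed inclusion probabilities (hypothetical data and a lognormal model; the paper's model and weights are not given in the abstract): each observation's log-likelihood contribution is weighted by the inverse of its selection probability:

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(2)

    # Hypothetical contamination data and selection probabilities.
    x = rng.lognormal(mean=1.0, sigma=0.5, size=200)
    pi = rng.uniform(0.2, 1.0, size=200)     # inclusion probabilities
    w = 1.0 / pi                             # inverse-probability weights

    def neg_weighted_loglik(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)            # keep sigma positive
        logpdf = stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu))
        return -np.sum(w * logpdf)

    res = optimize.minimize(neg_weighted_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
    mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
    print(f"weighted MLE: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
    ```

    The weights 1/pi are the usual inverse-probability (Horvitz-Thompson style) weights; setting w = 1 recovers the ordinary MLE and reproduces the bias the paper warns about when unequal-probability data are treated as a simple random sample.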

  14. Dropout rates in medical students at one school before and after the installation of admission tests in Austria.

    PubMed

    Reibnegger, Gilbert; Caluba, Hans-Christian; Ithaler, Daniel; Manhal, Simone; Neges, Heide Maria; Smolle, Josef

    2011-08-01

    Admission to medical studies in Austria since academic year 2005-2006 has been regulated by admission tests. At the Medical University of Graz, an admission test focusing on secondary-school-level knowledge in natural sciences has been used for this purpose. The impact of this important change on dropout rates of female versus male students and older versus younger students is reported. All 2,860 students admitted to the human medicine diploma program at the Medical University of Graz from academic years 2002-2003 to 2008-2009 were included. Nonparametric and semiparametric survival analysis techniques were employed to compare cumulative probability of dropout between demographic groups. Cumulative probability of dropout was significantly reduced in students selected by active admission procedure versus those admitted openly (P < .0001). Relative hazard ratio of selected versus openly admitted students was only 0.145 (95% CI, 0.106-0.198). Among openly admitted students, but not for selected ones, the cumulative probabilities for dropout were higher for females (P < .0001) and for older students (P < .0001). Generally, dropout hazard is highest during the second year of study. The introduction of admission testing significantly decreased the cumulative probability for dropout. In openly admitted students a significantly higher risk for dropout was found in female students and in older students, whereas no such effects can be detected after admission testing. Future research should focus on the sex dependence, with the aim of improving success rates among female applicants on the admission tests.
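
    A minimal sketch of the nonparametric (Kaplan-Meier) estimate of cumulative dropout probability, on hypothetical enrollment data (the study's records are not reproduced in the abstract):

    ```python
    import numpy as np

    def kaplan_meier(time, event):
        """Return (event_times, S) where S is the Kaplan-Meier survival estimate."""
        time, event = np.asarray(time, float), np.asarray(event, int)
        event_times = np.unique(time[event == 1])
        S, surv = 1.0, []
        for t in event_times:
            n_risk = np.sum(time >= t)              # still enrolled just before t
            d = np.sum((time == t) & (event == 1))  # dropouts at t
            S *= 1.0 - d / n_risk
            surv.append(S)
        return event_times, np.array(surv)

    # Hypothetical cohort: years until dropout (event = 1) or censoring
    # (event = 0, i.e. graduated or still studying at end of observation).
    t = [0.5, 1.2, 1.8, 2.0, 2.5, 3.0, 3.0, 4.0, 5.0, 6.0]
    e = [1,   1,   1,   0,   1,   0,   1,   0,   0,   0]
    for ti, si in zip(*kaplan_meier(t, e)):
        print(f"t = {ti:.1f} y: cumulative dropout = {1 - si:.2f}")
    ```

    Comparisons between groups (selected versus openly admitted, female versus male) would then use a semiparametric model such as Cox regression, which is the natural reading of the hazard ratio quoted above.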

  15. A compositional framework for Markov processes

    NASA Astrophysics Data System (ADS)

    Baez, John C.; Fong, Brendan; Pollard, Blake S.

    2016-03-01

    We define the concept of an "open" Markov process, or more precisely, continuous-time Markov chain, which is one where probability can flow in or out of certain states called "inputs" and "outputs." One can build up a Markov process from smaller open pieces. This process is formalized by making open Markov processes into the morphisms of a dagger compact category. We show that the behavior of a detailed balanced open Markov process is determined by a principle of minimum dissipation, closely related to Prigogine's principle of minimum entropy production. Using this fact, we set up a functor mapping open detailed balanced Markov processes to open circuits made of linear resistors. We also describe how to "black box" an open Markov process, obtaining the linear relation between input and output data that holds in any steady state, including nonequilibrium steady states with a nonzero flow of probability through the system. We prove that black boxing gives a symmetric monoidal dagger functor sending open detailed balanced Markov processes to Lagrangian relations between symplectic vector spaces. This allows us to compute the steady state behavior of an open detailed balanced Markov process from the behaviors of smaller pieces from which it is built. We relate this black box functor to a previously constructed black box functor for circuits.
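
    As background on what "probability flowing in or out" means formally (a hedged sketch in the spirit of the open-systems literature, not a quotation from the paper), an open Markov process can be written as a master equation with boundary terms:

    ```latex
    \frac{dp_i}{dt} = \sum_{j} H_{ij}\, p_j + I_i - O_i
    ```

    Here H is an infinitesimal stochastic matrix (nonnegative off-diagonal entries, columns summing to zero) and the inflow and outflow terms I_i, O_i are supported only on the designated input and output states; in a nonequilibrium steady state the interior components balance while a nonzero probability current passes through the boundary, which is, roughly, the input-output data that black boxing records.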

  16. Maximum entropy analysis of NMR data of flexible multirotor molecules partially oriented in nematic solution: 2,2':5',2″-terthiophene, 2,2'- and 3,3'-dithiophene

    NASA Astrophysics Data System (ADS)

    Caldarelli, Stefano; Catalano, Donata; Di Bari, Lorenzo; Lumetti, Marco; Ciofalo, Maurizio; Alberto Veracini, Carlo

    1994-07-01

    The dipolar couplings observed by NMR spectroscopy of solutes in nematic solvents (LX-NMR) are used to build up the maximum entropy (ME) probability distribution function of the variables describing the orientational and internal motion of the molecule. The ME conformational distributions of 2,2'- and 3,3'-dithiophene and 2,2':5',2″-terthiophene (α-terthienyl) thus obtained are compared with the results of previous studies. The 2,2'- and 3,3'-dithiophene molecules exhibit equilibria among cisoid and transoid forms; the probability maxima correspond to planar and twisted conformers for 2,2'- and 3,3'-dithiophene, respectively. 2,2':5',2″-Terthiophene has two internal degrees of freedom; the ME approach indicates that the trans,trans and cis,trans planar conformations are the most probable. The correlation between the two intramolecular rotations is also discussed.

  17. Method and device for landing aircraft dependent on runway occupancy time

    NASA Technical Reports Server (NTRS)

    Ghalebsaz Jeddi, Babak (Inventor)

    2012-01-01

    A technique for landing aircraft using an aircraft landing accident avoidance device is disclosed. The technique includes determining at least two probability distribution functions; determining a safe lower limit on a separation between a lead aircraft and a trail aircraft on a glide slope to the runway; determining a maximum sustainable safe attempt-to-land rate on the runway based on the safe lower limit and the probability distribution functions; directing the trail aircraft to enter the glide slope with a target separation from the lead aircraft corresponding to the maximum sustainable safe attempt-to-land rate; while the trail aircraft is in the glide slope, determining an actual separation between the lead aircraft and the trail aircraft; and directing the trail aircraft to execute a go-around maneuver if the actual separation approaches the safe lower limit. Probability distribution functions include runway occupancy time, and landing time interval and/or inter-arrival distance.

  18. Valuing innovative endoscopic techniques: per-oral endoscopic myotomy for the management of achalasia.

    PubMed

    Shah, Eric D; Chang, Andrew C; Law, Ryan

    2018-04-20

    Unclear reimbursement for new and innovative endoscopic procedures can limit adoption in clinical practice despite effectiveness in clinical trials. The aim of this study was to determine the maximum cost-effective reimbursement for per-oral endoscopic myotomy (POEM) in treating achalasia. We constructed a decision analytic model assessing POEM versus laparoscopic Heller myotomy with Dor fundoplication (LHM) in managing achalasia from a payer perspective over a 1-year time horizon. Reimbursement data were derived from 2017 Medicare data. Responder rates were based on clinically meaningful improvement in validated Eckardt scores. Validated health utility values were assigned to terminal health states based on data previously derived with a standard gamble technique. Contemporary willingness-to-pay (WTP) levels per quality-adjusted life year (QALY) were used to estimate maximum reimbursement for POEM using threshold analysis. Effectiveness of POEM and LHM was similar at 1 year of follow-up (0.91 QALY). Maximum cost-effective reimbursement for POEM was $1,200.07 to $1,389.85 (33.4-38.7 total 2017 RVUs). This compares with a contemporary total reimbursement of 10-15 RVUs for advanced endoscopic procedures. The model was most sensitive to the probability of GERD after the procedure. The rate of conversion to open laparotomy due to perforation or bleeding was infrequent in published clinical practice experience and thus did not significantly affect reimbursement. POEM is an example of an innovative and potentially disruptive endoscopic technique offering greater cost-effective value and similar outcomes to the established surgical standard at contemporary reimbursement levels.
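
    A minimal sketch of the threshold analysis, with hypothetical inputs (not the paper's figures): the maximum procedure price is the largest value keeping the incremental cost-effectiveness ratio at or below the willingness-to-pay level, which reduces to cost parity when the two strategies yield equal QALYs:

    ```python
    def max_reimbursement(wtp, q_new, q_comp, downstream_new, total_cost_comp):
        """Largest price for the new procedure keeping its ICER <= WTP.

        Solves (price + downstream_new - total_cost_comp) / (q_new - q_comp) <= wtp
        for price; with q_new == q_comp this is simply cost parity.
        """
        return total_cost_comp - downstream_new + wtp * (q_new - q_comp)

    # Hypothetical inputs: equal 0.91-QALY arms and assumed downstream costs.
    price = max_reimbursement(wtp=100_000, q_new=0.91, q_comp=0.91,
                              downstream_new=8_000, total_cost_comp=9_300)
    print(f"maximum cost-effective price: ${price:,.0f}")
    ```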

  19. A Few Meters Matter: Local Habitats Drive Reproductive Cycles in a Tropical Lizard.

    PubMed

    Otero, Luisa M; Huey, Raymond B; Gorman, George C

    2015-09-01

    Reproductive phenology often varies geographically within species, driven by environmental gradients that alter growth and reproduction. However, environments can differ between adjacent habitats at single localities. In lowland Puerto Rico, both open (sunny, warm) and forested (shady, cool) habitats may be only meters apart. The lizard Anolis cristatellus lives in both habitats: it thermoregulates carefully in the open but is a thermoconformer in the forest. To determine whether reproduction differs between habitats, we compared reproductive cycles of females in open versus forest habitats at two localities for over 2 years. Open females were more likely than forest females to be reproductive throughout the year, probably because open females were able to bask and thereby achieve warmer body temperatures. These between-habitat differences in reproduction were especially marked in cool months and are equivalent in magnitude to those between populations separated by elevation. Thus, environmental differences (even on a microlandscape scale) matter to reproduction and probably to demography.

  20. Using optimal transport theory to estimate transition probabilities in metapopulation dynamics

    USGS Publications Warehouse

    Nichols, Jonathan M.; Spendelow, Jeffrey A.; Nichols, James D.

    2017-01-01

    This work considers the estimation of transition probabilities associated with populations moving among multiple spatial locations based on numbers of individuals at each location at two points in time. The problem is generally underdetermined as there exists an extremely large number of ways in which individuals can move from one set of locations to another. A unique solution therefore requires a constraint. The theory of optimal transport provides such a constraint in the form of a cost function, to be minimized in expectation over the space of possible transition matrices. We demonstrate the optimal transport approach on marked bird data and compare to the probabilities obtained via maximum likelihood estimation based on marked individuals. It is shown that by choosing the squared Euclidean distance as the cost, the estimated transition probabilities compare favorably to those obtained via maximum likelihood with marked individuals. Other implications of this cost are discussed, including the ability to accurately interpolate the population's spatial distribution at unobserved points in time and the more general relationship between the cost and minimum transport energy.
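
    A minimal sketch of the estimation idea under the squared-Euclidean cost, using hypothetical site coordinates and counts (the paper works with marked bird data): find the flow that moves the time-1 distribution to the time-2 distribution at minimum total cost, which is a linear program:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Hypothetical site coordinates and population counts at two times.
    sites = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    n1 = np.array([100.0, 50.0, 50.0])   # counts at time 1
    n2 = np.array([60.0, 90.0, 50.0])    # counts at time 2 (same total)

    k = len(sites)
    # Squared Euclidean cost between every pair of sites.
    C = ((sites[:, None, :] - sites[None, :, :]) ** 2).sum(-1)

    # Variables f[i, j] = number moving from site i to site j (row-major flat).
    A_eq, b_eq = [], []
    for i in range(k):                   # everyone at i goes somewhere
        row = np.zeros(k * k); row[i * k:(i + 1) * k] = 1.0
        A_eq.append(row); b_eq.append(n1[i])
    for j in range(k):                   # arrivals must match time-2 counts
        col = np.zeros(k * k); col[j::k] = 1.0
        A_eq.append(col); b_eq.append(n2[j])

    res = linprog(C.ravel(), A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None))
    F = res.x.reshape(k, k)
    P = F / F.sum(axis=1, keepdims=True)  # row-normalize flows to probabilities
    print(np.round(P, 3))
    ```

    The squared-Euclidean choice is the one the abstract reports as comparing favorably with maximum likelihood estimates from marked individuals; other cost functions encode other movement assumptions.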

  1. High throughput nonparametric probability density estimation.

    PubMed

    Farmer, Jenny; Jacobs, Donald

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference.

  2. High throughput nonparametric probability density estimation

    PubMed Central

    Farmer, Jenny

    2018-01-01

    In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under- and over-fitting the data, as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803

  3. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to an adequate assessment of the mechanical effects of Es on the differential settlement of large continuous structure foundations. These analyses should be derived using an assimilation approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method. This method rigorously and efficiently integrates geotechnical investigations of differing precision and sources of uncertainty. Individual CPT soundings were modeled as probability density curves using maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; the posterior probability density curve of the prediction point was then calculated by numerical integration over the CPT probability density curves within a Bayesian inverse interpolation framework. The results were compared between Gaussian sequential stochastic simulation and the Bayesian method. Differences between treating single CPT soundings as normal distributions and as simulated probability density curves based on maximum entropy theory are also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations associated with inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  4. TOOTH (The Open study Of dental pulp stem cell Therapy in Humans): Study protocol for evaluating safety and feasibility of autologous human adult dental pulp stem cell therapy in patients with chronic disability after stroke.

    PubMed

    Nagpal, Anjali; Kremer, Karlea L; Hamilton-Bruce, Monica A; Kaidonis, Xenia; Milton, Austin G; Levi, Christopher; Shi, Songtao; Carey, Leeanne; Hillier, Susan; Rose, Miranda; Zacest, Andrew; Takhar, Parabjit; Koblar, Simon A

    2016-07-01

    Stroke represents a significant global disease burden. As of 2015, there is no chemical or biological therapy proven to actively enhance neurological recovery during the chronic phase post-stroke. Globally, cell-based therapy in stroke is at the stage of clinical translation and may improve neurological function through various mechanisms such as neural replacement, neuroprotection, angiogenesis, immuno-modulation, and neuroplasticity. Preclinical evidence in a rodent model of middle cerebral artery ischemic stroke, as reported in four independent studies, indicates improvement in neurobehavioral function with adult human dental pulp stem cell therapy. Human adult dental pulp stem cells present an exciting potential therapeutic option for improving post-stroke disability. TOOTH (The Open study Of dental pulp stem cell Therapy in Humans) will investigate the use of autologous stem cell therapy for stroke survivors with chronic disability, with the following objectives: (a) determine the maximum tolerable dose of autologous dental pulp stem cell therapy; (b) establish that dental pulp stem cell therapy at the maximum tolerable dose is safe and feasible in chronic stroke; and (c) estimate the parameters of efficacy required to design a future Phase 2/3 clinical trial. TOOTH is a Phase 1, open-label, single-blinded clinical trial with a pragmatic design that comprises three stages: Stage 1 will involve the selection of 27 participants with middle cerebral artery ischemic stroke and the commencement of autologous dental pulp stem cell isolation, growth, and testing in sequential cohorts (n = 3). Stage 2 will involve the transplantation of dental pulp stem cells in each cohort of participants at an ascending dose and subsequent observation for a 6-month period for any dental pulp stem cell-related adverse events. Stage 3 will investigate the neurosurgical intervention at the maximum tolerable dose of autologous dental pulp stem cells followed by 9 weeks of intensive task-specific rehabilitation. Advanced magnetic resonance and positron emission tomography neuro-imaging and clinical assessment will be employed to probe any change afforded by stem cell therapy in combination with rehabilitation. Nine participants will progress step-wise in Stage 2 to a dose of up to 10 million dental pulp stem cells, employing a cumulative 3 + 3 statistical design with a low starting stem cell dose and subsequent dose escalation, assuming that an acceptable probability of dose-limiting complications is between 1 in 6 (17%) and 1 in 3 (33%) of patients. In Stage 3, another 18 participants will receive an intracranial injection of the maximum tolerable dose of dental pulp stem cells. The primary outcomes to be measured are the safety and feasibility of intracranial administration of autologous human adult DPSC in patients with chronic stroke and determination of the maximum tolerable dose in human subjects. Secondary outcomes include estimation of the measures of effectiveness required to design a future Phase 2/3 clinical trial.
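
    A minimal sketch of the 3 + 3 escalation logic referenced above, using a generic rule set (the trial's exact stopping rules live in its protocol, not this abstract): cohorts of three enroll at each dose, the trial escalates when dose-limiting toxicities (DLTs) are absent, expands to six participants on a single DLT, and stops once two or more DLTs accrue at a dose:

    ```python
    def three_plus_three(dlt_counts):
        """Walk a generic 3+3 design over DLT counts from successive cohorts of 3.

        dlt_counts: DLTs (0-3) observed in each cohort, in enrollment order.
        Returns the recommended action after each cohort.
        """
        actions, at_dose, expanded = [], 0, False
        for d in dlt_counts:
            at_dose += d
            if not expanded:
                if at_dose == 0:
                    actions.append("escalate")
                elif at_dose == 1:
                    actions.append("expand to 6"); expanded = True
                else:
                    actions.append("stop: MTD exceeded"); break
            else:
                if at_dose <= 1:
                    actions.append("escalate"); at_dose, expanded = 0, False
                else:
                    actions.append("stop: MTD exceeded"); break
        return actions

    print(three_plus_three([0, 1, 0, 2]))
    # ['escalate', 'expand to 6', 'escalate', 'stop: MTD exceeded']
    ```

    This rule set targets a DLT rate between 1 in 6 and 1 in 3, matching the acceptance band quoted in the abstract.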

  5. Dynamic Equilibrium of Cardiac Troponin C's Hydrophobic Cleft and Its Modulation by Ca2+ Sensitizers and a Ca2+ Sensitivity Blunting Phosphomimic, cTnT(T204E).

    PubMed

    Schlecht, William; Dong, Wen-Ji

    2017-10-18

    Several studies have suggested that conformational dynamics are important in the regulation of thin filament activation in cardiac troponin C (cTnC); however, little direct evidence has been offered to support these claims. In this study, a dye homodimerization approach is developed and implemented that allows the determination of the dynamic equilibrium between open and closed conformations in cTnC's hydrophobic cleft. Modulation of this equilibrium by Ca2+, cardiac troponin I (cTnI), cardiac troponin T (cTnT), Ca2+ sensitizers, and a Ca2+-desensitizing phosphomimic of cTnT (cTnT(T204E)) is characterized. Isolated cTnC contained a small open conformation population in the absence of Ca2+ that increased significantly upon the addition of saturating levels of Ca2+. This suggests that the Ca2+-induced activation of the thin filament arises from an increase in the probability of hydrophobic cleft opening. The inclusion of cTnI increased the population of open cTnC, and the inclusion of cTnT had the opposite effect. Samples containing the Ca2+-desensitizing cTnT(T204E) showed a slight but insignificant decrease in open conformation probability compared to samples with wild-type cardiac troponin T [cTnT(wt)], while Ca2+ sensitizer-treated samples generally increased open conformation probability. These findings show that an equilibrium between the open and closed conformations of cTnC's hydrophobic cleft plays a significant role in tuning the Ca2+ sensitivity of the heart.

  6. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…
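
    A minimal sketch of the Robbins-Monro idea at the heart of such algorithms, on a toy one-parameter problem (not the MH-RM item factor analysis itself): noisy gradient evaluations, such as those a Metropolis-Hastings sampler supplies, are damped by a decaying gain sequence, and the iterates converge to the root with probability one:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy problem: find theta with g(theta) = mu - theta = 0, observing g only
    # through noisy draws (as MH sampling supplies noisy gradients in MH-RM).
    mu = 2.5
    theta = 0.0
    for k in range(1, 5001):
        noisy_grad = (mu - theta) + rng.normal(0.0, 1.0)  # stochastic gradient
        theta += (1.0 / k) * noisy_grad                   # Robbins-Monro gain 1/k
    print(f"theta after 5000 iterations: {theta:.3f} (target {mu})")
    ```

    The gain sequence 1/k satisfies the classical Robbins-Monro conditions (the gains sum to infinity while their squares sum to a finite value), which is what underwrites the almost-sure convergence claimed for MH-RM.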

  7. 50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...

  8. 50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...

  9. 50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...

  10. Laser damage metrology in biaxial nonlinear crystals using different test beams

    NASA Astrophysics Data System (ADS)

    Hildenbrand, Anne; Wagner, Frank R.; Akhouayri, Hassan; Natoli, Jean-Yves; Commandre, Mireille

    2008-01-01

    Laser damage measurements in nonlinear optical crystals, in particular in biaxial crystals, may be influenced by several effects proper to these materials or greatly enhanced in them. Before discussing these effects, we address the topic of error bar determination for probability measurements. Error bars for the damage probabilities are important because nonlinear crystals are often small and expensive, so only a few sites are used for a single damage probability measurement. We present the mathematical basics and a flow diagram for the numerical calculation of error bars for probability measurements that correspond to a chosen confidence level. Effects that may modify the maximum intensity in a biaxial nonlinear crystal are focusing aberration, walk-off, and self-focusing. Depending on focusing conditions, propagation direction, polarization of the light, and the position of the focus point in the crystal, strong aberrations may change the beam profile and drastically decrease the maximum intensity in the crystal. A correction factor for this effect is proposed, but quantitative corrections are not possible without taking into account the experimental beam profile after the focusing lens. The characteristics of walk-off and self-focusing are briefly reviewed for completeness. Finally, parasitic second harmonic generation may influence the laser damage behavior of crystals. The important point for laser damage measurements is that the amount of externally observed SHG after the crystal does not correspond to the maximum amount of second harmonic light inside the crystal.
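
    A minimal sketch of confidence-level error bars for a damage-probability measurement, using the exact (Clopper-Pearson) binomial interval computed from the beta distribution; this standard construction is an assumption here, since the paper's own flow diagram is not reproduced in the abstract:

    ```python
    from scipy.stats import beta

    def clopper_pearson(k, n, confidence=0.95):
        """Exact binomial confidence interval for k damaged sites out of n tested."""
        alpha = 1.0 - confidence
        lo = 0.0 if k == 0 else beta.ppf(alpha / 2, k, n - k + 1)
        hi = 1.0 if k == n else beta.ppf(1 - alpha / 2, k + 1, n - k)
        return lo, hi

    # Few-site measurement, typical when crystals are small and expensive.
    k, n = 3, 10                     # 3 of 10 irradiated sites damaged
    lo, hi = clopper_pearson(k, n)
    print(f"p = {k/n:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")
    ```

    The width of this interval for small n makes the point quantitatively: with only 10 sites, a measured damage probability of 0.3 is consistent with true values from roughly 0.07 to 0.65.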

  11. Inconvenient Truth or Convenient Fiction? Probable Maximum Precipitation and Nonstationarity

    NASA Astrophysics Data System (ADS)

    Nielsen-Gammon, J. W.

    2017-12-01

    According to the inconvenient truth that Probable Maximum Precipitation (PMP) represents a non-deterministic, statistically very rare event, future changes in PMP involve a complex interplay between future frequencies of storm type, storm morphology, and environmental characteristics, many of which are poorly constrained by global climate models. On the other hand, according to the convenient fiction that PMP represents an estimate of the maximum possible precipitation that can occur at a given location, as determined by storm maximization and transposition, the primary climatic driver of PMP change is simply a change in maximum moisture availability. Increases in boundary-layer and total-column moisture have been observed globally, are anticipated from basic physical principles, and are robustly projected to continue by global climate models. Thus, using the same techniques that are used within the PMP storm maximization process itself, future PMP values may be projected. The resulting PMP trend projections are qualitatively consistent with observed trends of extreme rainfall within Texas, suggesting that in this part of the world the inconvenient truth is congruent with the convenient fiction.
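
    For context, the storm maximization step mentioned here conventionally scales an observed storm by the ratio of maximum to observed precipitable water; a standard hydrometeorological formula (stated as background, not quoted from the abstract) is

    ```latex
    P_{\mathrm{max}} = P_{\mathrm{storm}} \times \frac{W_{\mathrm{max}}}{W_{\mathrm{storm}}}
    ```

    where W_max is the climatological maximum precipitable water for the storm's location and season and W_storm is the value observed during the storm. Projecting PMP forward under the convenient fiction then reduces to increasing W_max in line with warming-driven growth of atmospheric moisture (on the order of 6-7% per kelvin, following the Clausius-Clapeyron relation).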

  12. A computer program for estimating instream travel times and concentrations of a potential contaminant in the Yellowstone River, Montana

    USGS Publications Warehouse

    McCarthy, Peter M.

    2006-01-01

    The Yellowstone River is very important in a variety of ways to the residents of southeastern Montana; however, it is especially vulnerable to spilled contaminants. In 2004, the U.S. Geological Survey, in cooperation with Montana Department of Environmental Quality, initiated a study to develop a computer program to rapidly estimate instream travel times and concentrations of a potential contaminant in the Yellowstone River using regression equations developed in 1999 by the U.S. Geological Survey. The purpose of this report is to describe these equations and their limitations, describe the development of a computer program to apply the equations to the Yellowstone River, and provide detailed instructions on how to use the program. This program is available online at [http://pubs.water.usgs.gov/sir2006-5057/includes/ytot.xls]. The regression equations provide estimates of instream travel times and concentrations in rivers where little or no contaminant-transport data are available. Equations were developed and presented for the most probable flow velocity and the maximum probable flow velocity. These velocity estimates can then be used to calculate instream travel times and concentrations of a potential contaminant. The computer program was developed so estimation equations for instream travel times and concentrations can be solved quickly for sites along the Yellowstone River between Corwin Springs and Sidney, Montana. The basic types of data needed to run the program are spill data, streamflow data, and data for locations of interest along the Yellowstone River. Data output from the program includes spill location, river mileage at specified locations, instantaneous discharge, mean-annual discharge, drainage area, and channel slope. Travel times and concentrations are provided for estimates of the most probable velocity of the peak concentration and the maximum probable velocity of the peak concentration. Verification of estimates of instream travel times and concentrations for the Yellowstone River requires information about the flow velocity throughout the 520 mi of river in the study area. Dye-tracer studies would provide the best data about flow velocities and would provide the best verification of instream travel times and concentrations estimated from this computer program; however, data from such studies does not currently (2006) exist and new studies would be expensive and time-consuming. An alternative approach used in this study for verification of instream travel times is based on the use of flood-wave velocities determined from recorded streamflow hydrographs at selected mainstem streamflow-gaging stations along the Yellowstone River. The ratios of flood-wave velocity to the most probable velocity for the base flow estimated from the computer program are within the accepted range of 2.5 to 4.0 and indicate that flow velocities estimated from the computer program are reasonable for the Yellowstone River. The ratios of flood-wave velocity to the maximum probable velocity are within a range of 1.9 to 2.8 and indicate that the maximum probable flow velocities estimated from the computer program, which corresponds to the shortest travel times and maximum probable concentrations, are conservative and reasonable for the Yellowstone River.

  13. 77 FR 12514 - Drawbridge Operation Regulation; Hood Canal, WA

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-01

    ... of 45 minutes during and after openings of the draw span. The stopped road traffic on this two-lane... hundred feet unless the maximum opening of 600 feet is requested. The current regulations remain in effect... three navigational openings for vessel passage, the movable floating span, subject to this proposed...

  14. Unmanned Ground Vehicle Communications Relays: Lessons Learned

    DTIC Science & Technology

    2014-04-01

    technology, specifically an open-source VPN package, OpenVPN. This technology provides a wrapper around the network messages, providing a plug-and-play...performed in OpenVPN: Set the Maximum Transmission Unit (MTU) to 1600. This is because each VPN endpoint has an MTU of 1500 (the default for Ethernet

  15. ‘Sleepy’ inward rectifier channels in guinea-pig cardiomyocytes are activated only during strong hyperpolarization

    PubMed Central

    Liu, Gong Xin; Daut, Jürgen

    2002-01-01

    K+ channels of isolated guinea-pig cardiomyocytes were studied using the patch-clamp technique. At transmembrane potentials between −120 and −220 mV we observed inward currents through an apparently novel channel. The novel channel was strongly rectifying; no outward currents could be recorded. Between −200 and −160 mV it had a slope conductance of 42.8 ± 3.0 pS (s.d.; n = 96). The open probability (Po) showed a sigmoid voltage dependence and reached a maximum of 0.93 at −200 mV; half-maximal activation was approximately −150 mV. The voltage dependence of Po was not affected by application of 50 μM isoproterenol. The open-time distribution could be described by a single exponential function; the mean open time ranged between 73.5 ms at −220 mV and 1.4 ms at −160 mV. At least two exponential components were required to fit the closed time distribution. Experiments with different external Na+, K+ and Cl− concentrations suggested that the novel channel is K+ selective. Extracellular Ba2+ ions gave rise to a voltage-dependent reduction in Po by inducing long closed states; Cs+ markedly reduced mean open time at −200 mV. In cell-attached recordings the novel channel frequently converted to a classical inward rectifier channel, and vice versa. This conversion was not voltage dependent. After excision of the patch, the novel channel always converted to a classical inward rectifier channel within 0–3 min. This conversion was not affected by intracellular Mg2+, phosphatidylinositol (4,5)-bisphosphate or spermine. Taken together, our findings suggest that the novel K+ channel represents a different ‘mode’ of the classical inward rectifier channel in which opening occurs only at very negative potentials. PMID:11897847
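
    A sigmoid voltage dependence of this kind is conventionally summarized with a Boltzmann function; a hedged sketch using the figures quoted above (maximum Po of 0.93, half-maximal activation near −150 mV; the slope factor is assumed, since the abstract does not give one):

    ```python
    import numpy as np

    def boltzmann_po(v_mV, po_max=0.93, v_half=-150.0, slope=15.0):
        """Open probability versus membrane potential for a channel that
        activates with hyperpolarization (Po grows as V becomes more negative)."""
        return po_max / (1.0 + np.exp((v_mV - v_half) / slope))

    for v in (-120, -150, -200):
        print(f"V = {v:4d} mV: Po = {boltzmann_po(v):.2f}")
    ```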

  16. Predicting critical transitions in dynamical systems from time series using nonstationary probability density modeling.

    PubMed

    Kwasniok, Frank

    2013-11-01

    A time series analysis method for predicting the probability density of a dynamical system is proposed. A nonstationary parametric model of the probability density is estimated from data within a maximum likelihood framework and then extrapolated to forecast the future probability density and explore the system for critical transitions or tipping points. A full systematic account of parameter uncertainty is taken. The technique is generic, independent of the underlying dynamics of the system. The method is verified on simulated data and then applied to prediction of Arctic sea-ice extent.
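
    A minimal sketch of the approach on a toy model, a Gaussian whose mean drifts linearly in time (the paper's parametric family and data are not specified in the abstract): the drift is fitted by maximum likelihood on the observed window and the density is then extrapolated forward:

    ```python
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(4)

    # Toy nonstationary series: the mean drifts linearly, the variance is fixed.
    t = np.arange(200, dtype=float)
    x = 1.0 - 0.02 * t + rng.normal(0.0, 0.5, size=t.size)

    def neg_loglik(theta):
        a, b, log_s = theta
        return -np.sum(stats.norm.logpdf(x, loc=a + b * t, scale=np.exp(log_s)))

    res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    a, b, s = res.x[0], res.x[1], np.exp(res.x[2])

    # Extrapolate the fitted density to a future time, e.g. t = 300.
    t_future = 300.0
    print(f"forecast mean at t = {t_future:.0f}: {a + b * t_future:.2f} (sigma = {s:.2f})")
    ```

    In the paper's setting the extrapolated density, with parameter uncertainty propagated, is what is examined for signs of an approaching critical transition, for example probability mass crossing a threshold.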

  17. On the Pontryagin maximum principle for systems with delays. Economic applications

    NASA Astrophysics Data System (ADS)

    Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.

    2017-11-01

    The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5]. Ever since the maximum principle was first established, it has therefore been important to extend it to various classes of dynamical systems. In the paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the results obtained can be applied when elaborating optimal program controls in economic models with delays.

  18. Fretting Fatigue with Cylindrical-On-Flat Contact: Crack Nucleation, Crack Path and Fatigue Life

    PubMed Central

    Noraphaiphipaksa, Nitikorn; Manonukul, Anchalee; Kanchanomai, Chaosuan

    2017-01-01

    Fretting fatigue experiments and finite element analysis were carried out to investigate the influence of cylindrical-on-flat contact on the crack nucleation, crack path, and fatigue life of medium-carbon steel. The location of crack nucleation was predicted using the maximum shear stress range criterion and the maximum relative slip amplitude criterion. The prediction using the maximum relative slip amplitude criterion gave better agreement with the experimental result and should be used to predict the location of crack nucleation. Crack openings under compressive bulk stresses were found in the fretting fatigue tests with flat-on-flat and cylindrical-on-flat contacts, i.e., fretting-contact-induced crack openings. The crack opening stress of the specimen with flat-on-flat contact was lower than those of the specimens with cylindrical-on-flat contacts, while that of the specimen with the 60-mm radius contact pad was lower than that of the specimen with the 15-mm radius contact pad. The fretting fatigue lives were estimated by integrating the fatigue crack growth curve from an initial propagating crack length to a critical crack length. The predictions of fretting fatigue life with consideration of crack opening were in good agreement with the experimental results. PMID:28772522

  19. Watershed Regressions for Pesticides (WARP) for Predicting Annual Maximum and Annual Maximum Moving-Average Concentrations of Atrazine in Streams

    USGS Publications Warehouse

    Stone, Wesley W.; Gilliom, Robert J.; Crawford, Charles G.

    2008-01-01

    Regression models were developed for predicting annual maximum and selected annual maximum moving-average concentrations of atrazine in streams using the Watershed Regressions for Pesticides (WARP) methodology developed by the National Water-Quality Assessment Program (NAWQA) of the U.S. Geological Survey (USGS). The current effort builds on the original WARP models, which were based on the annual mean and selected percentiles of the annual frequency distribution of atrazine concentrations. Estimates of annual maximum and annual maximum moving-average concentrations for selected durations are needed to characterize the levels of atrazine and other pesticides for comparison to specific water-quality benchmarks for evaluation of potential concerns regarding human health or aquatic life. Separate regression models were derived for the annual maximum and annual maximum 21-day, 60-day, and 90-day moving-average concentrations. Development of the regression models used the same explanatory variables, transformations, model development data, model validation data, and regression methods as those used in the original development of WARP. The models accounted for 72 to 75 percent of the variability in the concentration statistics among the 112 sampling sites used for model development. Predicted concentration statistics from the four models were within a factor of 10 of the observed concentration statistics for most of the model development and validation sites. Overall, performance of the models for the development and validation sites supports the application of the WARP models for predicting annual maximum and selected annual maximum moving-average atrazine concentration in streams and provides a framework to interpret the predictions in terms of uncertainty. For streams with inadequate direct measurements of atrazine concentrations, the WARP model predictions for the annual maximum and the annual maximum moving-average atrazine concentrations can be used to characterize the probable levels of atrazine for comparison to specific water-quality benchmarks. Sites with a high probability of exceeding a benchmark for human health or aquatic life can be prioritized for monitoring.

  20. Maximizing the Detection Probability of Kilonovae Associated with Gravitational Wave Observations

    NASA Astrophysics Data System (ADS)

    Chan, Man Leong; Hu, Yi-Ming; Messenger, Chris; Hendry, Martin; Heng, Ik Siong

    2017-01-01

    Estimates of the source sky location for gravitational wave signals are likely to span areas of up to hundreds of square degrees or more, making it very challenging for most telescopes to search for counterpart signals in the electromagnetic spectrum. To boost the chance of successfully observing such counterparts, we have developed an algorithm that optimizes the number of observing fields and their corresponding time allocations by maximizing the detection probability. As a proof-of-concept demonstration, we optimize follow-up observations targeting kilonovae using telescopes including the CTIO-Dark Energy Camera, Subaru-HyperSuprimeCam, Pan-STARRS, and the Palomar Transient Factory. We consider three simulated gravitational wave events with 90% credible error regions spanning areas from ∼30 deg² to ∼300 deg². Assuming a source at 200 Mpc, we demonstrate that to obtain a maximum detection probability, there is an optimized number of fields for any particular event that a telescope should observe. To inform future telescope design studies, we present the maximum detection probability and corresponding number of observing fields for a combination of limiting magnitudes and fields of view over a range of parameters. We show that for large gravitational wave error regions, telescope sensitivity rather than field of view is the dominating factor in maximizing the detection probability.
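
    The optimization the abstract describes can be caricatured with a greedy tiling. The sketch below is a simplification under stated assumptions (one fixed-length exposure per field, a made-up per-field probability map, and an assumed per-exposure detection efficiency); the actual algorithm also optimizes the time allocated to each field.

    ```python
    # Illustrative greedy sketch of tiling a GW sky map: pick observing fields
    # that maximize the summed source probability under a fixed time budget.
    # (The paper's algorithm also optimizes per-field exposure times; here each
    # field is assumed to cost one fixed-length exposure -- a simplification.)
    import numpy as np

    rng = np.random.default_rng(1)
    p_field = rng.dirichlet(np.ones(500))   # hypothetical probability enclosed by each field
    p_detect_given_contained = 0.8          # assumed per-exposure detection efficiency
    budget = 20                             # number of exposures available

    order = np.argsort(p_field)[::-1]       # greedy: richest fields first
    chosen = order[:budget]
    p_detection = p_detect_given_contained * p_field[chosen].sum()
    print(f"fields observed: {budget}, detection probability: {p_detection:.3f}")
    ```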

  1. Learning probability distributions from smooth observables and the maximum entropy principle: some remarks

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Monasson, Rémi

    2015-09-01

    The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons for the success of MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measures of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.

  2. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, it implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Simulation methods based on the best-known prediction methods (such as indicator kriging and co-kriging) are also implemented in the spMC package, together with more advanced simulation methods, e.g., path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis compares the computational efficiency of the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.
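
    As a language-agnostic illustration of the package's starting point, the toy below estimates a one-step transition probability matrix from a categorical sequence in Python (spMC itself is an R package and works with spatial lags in several directions; the facies string here is invented).

    ```python
    # Toy estimate of a one-step transition probability matrix for categorical
    # (e.g., lithological) data along a borehole -- a simplified stand-in for
    # what spMC does in R with spatial lags in several directions.
    import numpy as np

    codes = "AABBBCCABBCCCAABB"               # hypothetical facies sequence
    states = sorted(set(codes))
    idx = {s: i for i, s in enumerate(states)}

    counts = np.zeros((len(states), len(states)))
    for a, b in zip(codes[:-1], codes[1:]):
        counts[idx[a], idx[b]] += 1

    trans = counts / counts.sum(axis=1, keepdims=True)  # row-normalize to probabilities
    print(states)
    print(np.round(trans, 2))
    ```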

  3. optBINS: Optimal Binning for histograms

    NASA Astrophysics Data System (ADS)

    Knuth, Kevin H.

    2018-03-01

    optBINS (optimal binning) determines the optimal number of bins in a uniform bin-width histogram by deriving the posterior probability for the number of bins in a piecewise-constant density model after assigning a multinomial likelihood and a non-informative prior. The maximum of the posterior probability occurs at a point where the prior probability and the joint likelihood are balanced. The interplay between these opposing factors effectively implements Occam's razor by selecting the simplest model that best describes the data.
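
    A hedged sketch of the procedure: evaluate a relative log-posterior over candidate bin numbers and take the argmax. The expression below follows the form Knuth derived for a multinomial likelihood with a non-informative prior, up to an M-independent additive constant; treat it as an illustration rather than the package's exact code.

    ```python
    # Sketch of optimal-bin selection: evaluate the (relative) log-posterior for
    # the number of bins M of a piecewise-constant density model and take the
    # argmax. The expression follows the form derived by Knuth (up to an
    # M-independent constant); this is an illustration, not optBINS itself.
    import numpy as np
    from scipy.special import gammaln

    def log_posterior(data, m):
        n = data.size
        counts, _ = np.histogram(data, bins=m)
        return (n * np.log(m)
                + gammaln(m / 2.0) - m * gammaln(0.5) - gammaln(n + m / 2.0)
                + np.sum(gammaln(counts + 0.5)))

    rng = np.random.default_rng(2)
    data = rng.normal(size=1000)
    m_values = np.arange(1, 101)
    best_m = m_values[np.argmax([log_posterior(data, m) for m in m_values])]
    print("optimal number of bins:", best_m)
    ```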

  4. Approximation of the ruin probability using the scaled Laplace transform inversion

    PubMed Central

    Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak

    2015-01-01

    The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of the proposed approximations with those based on Laplace transform inversion using a fixed Talbot algorithm, as well as with those based on the Trefethen–Weideman–Schmelzer and maximum entropy methods, are presented via a simulation study. PMID:26752796

  5. The Spiral of Life

    NASA Astrophysics Data System (ADS)

    Cajiao Vélez, F.; Kamiński, J. Z.; Krajewska, K.

    2018-04-01

    High-energy photoionization driven by short, circularly polarized laser pulses is studied in the framework of the relativistic strong-field approximation. The saddle-point analysis of the integrals defining the probability amplitude is used to determine the general properties of the probability distributions. Additionally, an approximate solution to the saddle-point equation is derived. This leads to the concept of the three-dimensional spiral of life in momentum space, around which the ionization probability distribution is maximum. We demonstrate that such a spiral is also obtained from a classical treatment.

  6. Laser spot dynamics.

    PubMed

    Postan, A

    1987-03-01

    The dynamics of a pulsed laser spot covering an optical aperture of a receiver is analyzed. This analysis includes the influence of diffraction, jitter, atmospheric absorption and scattering, and atmospheric turbulence. A simple expression for the probability of response of the receiver illuminated by the laser spot is derived. It is found that this probability would not always increase as the laser beam divergence decreases. Moreover, this probability has an optimum (maximum) with respect to the laser beam divergence or rather with respect to the diameter of the transmitting optics.

  7. The impact of in-situ stress and outcrop-based fracture geometry on hydraulic aperture and upscaled permeability in fractured reservoirs

    NASA Astrophysics Data System (ADS)

    Bisdom, Kevin; Bertotti, Giovanni; Nick, Hamidreza M.

    2016-10-01

    Aperture has a controlling impact on porosity and permeability and is a source of uncertainty in modeling of naturally fractured reservoirs. This uncertainty results from difficulties in accurately quantifying aperture in the subsurface and from a limited fundamental understanding of the mechanical and diagenetic processes that control aperture. In the absence of cement bridges and high pore pressure, fractures in the subsurface are generally considered to be closed. However, experimental work, outcrop analyses and subsurface data show that some fractures remain open, and that aperture varies even along a single fracture. Nevertheless, most fracture flow models assume constant fracture apertures. We create a stress-dependent heterogeneous aperture by combining Finite Element modeling of discrete fracture networks with an empirical aperture model. Using a modeling approach that considers fractures explicitly, we quantify equivalent permeability, i.e. combined matrix and stress-dependent fracture flow. Fracture networks extracted from a large outcropping pavement form the basis of these models. The results show that the angle between fracture strike and σ1 has a controlling impact on aperture and permeability, where hydraulic opening is maximum for an angle of 15°. At this angle, the fracture experiences a minor amount of shear displacement that allows the fracture to remain open even when fluid pressure is lower than the local normal stress. Averaging the heterogeneous aperture to scale up permeability probably results in an underestimation of flow, indicating the need to incorporate full aperture distributions rather than simplified aperture models in reservoir-scale flow models.

  8. Assessing performance and validating finite element simulations using probabilistic knowledge

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolin, Ronald M.; Rodriguez, E. A.

    Two probabilistic approaches for assessing performance are presented. The first approach assesses probability of failure by simultaneously modeling all likely events. The probability each event causes failure, along with the event's likelihood of occurrence, contributes to the overall probability of failure. The second assessment method is based on stochastic sampling using an influence diagram. Latin-hypercube sampling is used to stochastically assess events. The overall probability of failure is taken as the maximum probability of failure of all the events. The Likelihood of Occurrence simulation suggests failure does not occur while the Stochastic Sampling approach predicts failure. The Likelihood of Occurrence results are used to validate finite element predictions.

  9. Regional interdisciplinary paleoflood approach to assess extreme flood potential

    USGS Publications Warehouse

    Jarrett, Robert D.; Tomlinson, Edward M.

    2000-01-01

    In the past decade, there has been a growing interest of dam safety officials to incorporate a risk‐based analysis for design‐flood hydrology. Extreme or rare floods, with probabilities in the range of about 10⁻³ to 10⁻⁷ chance of occurrence per year, are of continuing interest to the hydrologic and engineering communities for purposes of planning and design of structures such as dams [National Research Council, 1988]. The National Research Council stresses that as much information as possible about floods needs to be used for evaluation of the risk and consequences of any decision. A regional interdisciplinary paleoflood approach was developed to assist dam safety officials and floodplain managers in their assessments of the risk of large floods. The interdisciplinary components included documenting maximum paleofloods and a regional analysis of contemporary extreme rainfall and flood data to complement a site‐specific probable maximum precipitation study [Tomlinson and Solak, 1997]. The cost‐effective approach, which can be used in many other hydrometeorologic settings, was applied to Elkhead Reservoir in Elkhead Creek (531 km²) in northwestern Colorado; the regional study area was 10,900 km². Paleoflood data using bouldery flood deposits and noninundation surfaces for 88 streams were used to document maximum flood discharges that have occurred during the Holocene. Several relative dating methods were used to determine the age of paleoflood deposits and noninundation surfaces. No evidence of substantial flooding was found in the study area. The maximum paleoflood of 135 m³ s⁻¹ for Elkhead Creek is about 13% of the site‐specific probable maximum flood of 1020 m³ s⁻¹. Flood‐frequency relations using the expected moments algorithm, which better incorporates paleoflood data, were developed to assess the risk of extreme floods. Envelope curves encompassing maximum rainfall (181 sites) and floods (218 sites) were developed for northwestern Colorado to help define maximum contemporary and Holocene flooding in Elkhead Creek and in a regional frequency context. Study results for Elkhead Reservoir were accepted by the Colorado State Engineer for dam safety certification.

  10. A tool for the estimation of the distribution of landslide area in R

    NASA Astrophysics Data System (ADS)

    Rossi, M.; Cardinali, M.; Fiorucci, F.; Marchesini, I.; Mondini, A. C.; Santangelo, M.; Ghosh, S.; Riguer, D. E. L.; Lahousse, T.; Chang, K. T.; Guzzetti, F.

    2012-04-01

    We have developed a tool in R (the free software environment for statistical computing, http://www.r-project.org/) to estimate the probability density and the frequency density of landslide area. The tool implements parametric and non-parametric approaches to the estimation of the probability density and the frequency density of landslide area, including: (i) Histogram Density Estimation (HDE), (ii) Kernel Density Estimation (KDE), and (iii) Maximum Likelihood Estimation (MLE). The tool is available as a standard Open Geospatial Consortium (OGC) Web Processing Service (WPS), and is accessible through the web using different GIS software clients. We tested the tool to compare Double Pareto and Inverse Gamma models for the probability density of landslide area in different geological, morphological and climatological settings, and to compare landslides shown in inventory maps prepared using different mapping techniques, including (i) field mapping, (ii) visual interpretation of monoscopic and stereoscopic aerial photographs, (iii) visual interpretation of monoscopic and stereoscopic VHR satellite images and (iv) semi-automatic detection and mapping from VHR satellite images. Results show that both models are applicable in different geomorphological settings. In most cases the two models provided very similar results. Non-parametric estimation methods (i.e., HDE and KDE) provided reasonable results for all the tested landslide datasets. For some of the datasets, MLE failed to provide a result owing to convergence problems. The two tested models (Double Pareto and Inverse Gamma) produced very similar results for large and very large datasets (> 150 samples). Differences in the modeling results were observed for small datasets affected by systematic biases. A distinct rollover was observed in all analyzed landslide datasets, except for a few datasets obtained from landslide inventories prepared through field mapping or by semi-automatic mapping from VHR satellite imagery. The tool can also be used to evaluate the probability density and the frequency density of landslide volume.
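
    Two of the estimation routes the tool offers, KDE and parametric MLE for an Inverse Gamma model, can be sketched with SciPy as below (the authors' implementation is an R-based OGC WPS service; the synthetic areas and parameter values here are assumptions for illustration).

    ```python
    # Sketch of two estimation routes for landslide-area densities: kernel
    # density estimation (KDE) and maximum likelihood (MLE) for an Inverse
    # Gamma model, here via SciPy instead of the authors' R/WPS service.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    areas = stats.invgamma.rvs(a=1.4, scale=1e3, size=500, random_state=rng)  # synthetic areas, m^2

    kde = stats.gaussian_kde(np.log10(areas))          # non-parametric, on log10(area)
    a, loc, scale = stats.invgamma.fit(areas, floc=0)  # parametric MLE (can fail to
                                                       # converge on small, biased
                                                       # samples, as the authors note)
    grid = np.logspace(1, 5, 9)
    print("MLE shape parameter:", round(a, 2))
    print("KDE at log10(area) grid:", np.round(kde(np.log10(grid)), 4))
    ```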

  11. Probabilistic Open Set Recognition

    NASA Astrophysics Data System (ADS)

    Jain, Lalit Prithviraj

    Real-world tasks in computer vision, pattern recognition and machine learning often touch upon the open set recognition problem: multi-class recognition with incomplete knowledge of the world and many unknown inputs. An obvious way to approach such problems is to develop a recognition system that thresholds probabilities to reject unknown classes. Traditional rejection techniques are not about the unknown; they are about the uncertain boundary and rejection around that boundary. Thus traditional techniques only represent the "known unknowns". However, a proper open set recognition algorithm is needed to reduce the risk from the "unknown unknowns". This dissertation examines this concept and finds existing probabilistic multi-class recognition approaches are ineffective for true open set recognition. We hypothesize the cause is due to weak ad hoc assumptions combined with closed-world assumptions made by existing calibration techniques. Intuitively, if we could accurately model just the positive data for any known class without overfitting, we could reject the large set of unknown classes even under this assumption of incomplete class knowledge. For this, we formulate the problem as one of modeling positive training data by invoking statistical extreme value theory (EVT) near the decision boundary of positive data with respect to negative data. We provide a new algorithm called the PI-SVM for estimating the unnormalized posterior probability of class inclusion. This dissertation also introduces a new open set recognition model called Compact Abating Probability (CAP), where the probability of class membership decreases in value (abates) as points move from known data toward open space. We show that CAP models improve open set recognition for multiple algorithms. Leveraging the CAP formulation, we go on to describe the novel Weibull-calibrated SVM (W-SVM) algorithm, which combines the useful properties of statistical EVT for score calibration with one-class and binary support vector machines. Building from the success of statistical EVT based recognition methods such as PI-SVM and W-SVM on the open set problem, we present a new general supervised learning algorithm for multi-class classification and multi-class open set recognition called the Extreme Value Local Basis (EVLB). The design of this algorithm is motivated by the observation that extrema from known negative class distributions are the closest negative points to any positive sample during training, and thus should be used to define the parameters of a probabilistic decision model. In the EVLB, the kernel distribution for each positive training sample is estimated via an EVT distribution fit over the distances to the separating hyperplane between positive training sample and closest negative samples, with a subset of the overall positive training data retained to form a probabilistic decision boundary. Using this subset as a frame of reference, the probability of a sample at test time decreases as it moves away from the positive class. Possessing this property, the EVLB is well-suited to open set recognition problems where samples from unknown or novel classes are encountered at test. Our experimental evaluation shows that the EVLB provides a substantial improvement in scalability compared to standard radial basis function kernel machines, as well as PI-SVM and W-SVM, with improved accuracy in many cases. We evaluate our algorithm on open set variations of the standard visual learning benchmarks, as well as with an open subset of classes from Caltech 256 and ImageNet. Our experiments show that PI-SVM, W-SVM and EVLB provide significant advances over the previous state-of-the-art solutions for the same tasks.
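
    The EVT ingredient shared by these methods can be illustrated generically. The sketch below is not the dissertation's calibration: it simply fits a Weibull to the closest negative-sample distances around a positive class and uses the fitted CDF to score class inclusion for a test point; the data, the tail size, and the distance-to-centroid simplification are all assumptions.

    ```python
    # Generic illustration of the EVT idea behind PI-SVM / W-SVM (not the
    # dissertation's exact calibration): fit a Weibull to the tail of the
    # nearest negative-sample distances and use its CDF to score an
    # unnormalized probability of class inclusion for a test point.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    pos = rng.normal(0.0, 1.0, size=(200, 2))          # known positive class
    neg = rng.normal(4.0, 1.0, size=(200, 2))          # known negatives

    d = np.linalg.norm(neg - pos.mean(axis=0), axis=1) # negatives' distances to positive center
    tail = np.sort(d)[:20]                             # extrema: closest negatives shape the boundary
    c, loc, scale = stats.weibull_min.fit(tail, floc=0)

    x_test = np.array([1.0, 0.5])
    d_test = np.linalg.norm(x_test - pos.mean(axis=0))
    p_inclusion = 1.0 - stats.weibull_min.cdf(d_test, c, loc=loc, scale=scale)
    print("probability of class inclusion:", round(p_inclusion, 3))
    ```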

  12. Even under Obama's Plan, Pell Grants Trail Tuition

    ERIC Educational Resources Information Center

    Field, Kelly

    2009-01-01

    Making Pell Grants an entitlement and tying the maximum award to a measure of inflation, as President Obama has proposed, would probably yield larger awards and stop the cycle of shortfalls that have plagued the program. The president's plan, which would index the maximum award to the Consumer Price Index (CPI) plus one percentage point, probably…

  13. Bayesian image reconstruction - The pixon and optimal image modeling

    NASA Technical Reports Server (NTRS)

    Pina, R. K.; Puetter, R. C.

    1993-01-01

    In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.

  14. Quantitative comparison of randomization designs in sequential clinical trials based on treatment balance and allocation randomness.

    PubMed

    Zhao, Wenle; Weng, Yanqiu; Wu, Qi; Palesch, Yuko

    2012-01-01

    To evaluate the performance of randomization designs under various parameter settings and trial sample sizes, and to identify optimal designs with respect to both treatment imbalance and allocation randomness, we evaluated 260 design scenarios from 14 randomization designs under 15 sample sizes ranging from 10 to 300, using three measures of imbalance and three measures of randomness. The maximum absolute imbalance and the correct guess (CG) probability were selected to assess the trade-off performance of each randomization design. As measured by the maximum absolute imbalance and the CG probability, we found that the performances of the 14 randomization designs are located in a closed region with the upper boundary (worst case) given by Efron's biased coin design (EBCD) and the lower boundary (best case) by Soares and Wu's big stick design (BSD). Designs close to the lower boundary provide a smaller imbalance and a higher randomness than designs close to the upper boundary. Our research suggests that optimization of randomization designs is possible based on quantified evaluation of imbalance and randomness. Based on the maximum imbalance and CG probability, the BSD, Chen's biased coin design with imbalance tolerance, and Chen's Ehrenfest urn design perform better than the popularly used permuted block design, EBCD, and Wei's urn design. Copyright © 2011 John Wiley & Sons, Ltd.
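
    The best-case boundary design is simple enough to simulate directly. Below is a sketch of Soares and Wu's big stick design together with a Monte-Carlo estimate of the two trade-off measures used in the paper, maximum absolute imbalance and correct-guess probability under a convergence guessing strategy (the tolerance, sample size, and tie-handling are illustrative choices, not the paper's settings).

    ```python
    # Sketch of the big stick design (BSD): fair coin unless the absolute
    # imbalance hits the tolerance b, then force the lagging arm. We estimate
    # the mean maximum |imbalance| and the correct-guess (CG) probability
    # under the convergence strategy (guess the arm with fewer subjects).
    import numpy as np

    def simulate_bsd(n=100, b=3, reps=2000, seed=5):
        rng = np.random.default_rng(seed)
        max_imb, correct = [], 0
        for _ in range(reps):
            diff, worst = 0, 0                  # diff = (#A - #B)
            for _ in range(n):
                guess = 1 if diff < 0 else (-1 if diff > 0 else 0)
                if diff >= b:
                    arm = -1                    # force arm B
                elif diff <= -b:
                    arm = 1                     # force arm A
                else:
                    arm = 1 if rng.random() < 0.5 else -1
                # a tie (guess == 0) counts as a coin-flip guess
                correct += (guess == arm) or (guess == 0 and rng.random() < 0.5)
                diff += arm
                worst = max(worst, abs(diff))
            max_imb.append(worst)
        return np.mean(max_imb), correct / (n * reps)

    imb, cg = simulate_bsd()
    print(f"mean maximum |imbalance|: {imb:.2f}, correct-guess probability: {cg:.3f}")
    ```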

  15. Development of Physics-Based Hurricane Wave Response Functions: Application to Selected Sites on the U.S. Gulf Coast

    NASA Astrophysics Data System (ADS)

    McLaughlin, P. W.; Kaihatu, J. M.; Irish, J. L.; Taylor, N. R.; Slinn, D.

    2013-12-01

    Recent hurricane activity in the Gulf of Mexico has led to a need for accurate, computationally efficient prediction of hurricane damage so that communities can better assess risk of local socio-economic disruption. This study focuses on developing robust, physics-based non-dimensional equations that accurately predict maximum significant wave height at different locations near a given hurricane track. These equations (denoted as Wave Response Functions, or WRFs) were developed from presumed physical dependencies between wave heights and hurricane characteristics and fit with data from numerical models of waves and surge under hurricane conditions. After curve fitting, constraints which correct for fully developed sea state were used to limit the wind wave growth. When applied to the region near Gulfport, MS, back prediction of maximum significant wave height yielded root mean square errors of 0.22–0.42 m at open coast stations and 0.07–0.30 m at bay stations when compared to the numerical model data. The WRF method was also applied to Corpus Christi, TX and Panama City, FL with similar results. Back prediction errors will be included in uncertainty evaluations connected to risk calculations using joint probability methods. These methods require thousands of simulations to quantify extreme value statistics, thus requiring the use of reduced methods such as the WRF to represent the relevant physical processes.

  16. Open-loop frequency acquisition for suppressed-carrier biphase signals using one-pole arm filters

    NASA Technical Reports Server (NTRS)

    Shah, B.; Holmes, J. K.

    1991-01-01

    Open-loop frequency acquisition performance is discussed for suppressed-carrier binary phase-shift-keyed signals in terms of the probability of detecting the carrier frequency offset when the arms of the Costas loop detector have one-pole filters. The approach, which does not require symbol timing, uses fast Fourier transforms (FFTs) to detect the carrier frequency offset. The detection probability, which depends on both the 3 dB arm filter bandwidth and the received symbol signal-to-noise ratio, is derived and is shown to be independent of symbol timing. It is shown that the performance of this technique is slightly better than that of other open-loop acquisition techniques which use integrators in the arms and whose detection performance varies with symbol timing.
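
    The FFT detection idea can be shown in self-contained form. The sketch below squares a noisy baseband BPSK signal so the ±1 modulation cancels and a tone appears at twice the frequency offset, which a peak search locates; the paper instead works with the arm signals of a Costas detector with one-pole filters, so this is an illustration of the principle, not the analyzed receiver.

    ```python
    # Hedged sketch of open-loop frequency detection for a suppressed-carrier
    # BPSK signal: squaring removes the +/-1 modulation and concentrates power
    # at twice the frequency offset, which an FFT peak search can detect.
    import numpy as np

    rng = np.random.default_rng(6)
    fs, n, f_offset = 10_000.0, 4096, 123.0      # sample rate (Hz), samples, true offset
    t = np.arange(n) / fs
    symbols = rng.choice([-1.0, 1.0], size=n // 16).repeat(16)  # BPSK, 16 samples/symbol
    x = symbols * np.exp(2j * np.pi * f_offset * t)
    x += 0.5 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))  # AWGN

    spec = np.abs(np.fft.fft(x ** 2))            # tone appears at 2 * f_offset
    freqs = np.fft.fftfreq(n, d=1 / fs)
    f_hat = freqs[np.argmax(spec)] / 2.0
    print(f"estimated offset: {f_hat:.1f} Hz")   # resolution limited to fs / (2n)
    ```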

  17. Kinetic aspects of chain growth in Fischer-Tropsch synthesis.

    PubMed

    Filot, Ivo A W; Zijlstra, Bart; Broos, Robin J P; Chen, Wei; Pestman, Robert; Hensen, Emiel J M

    2017-04-28

    Microkinetics simulations are used to investigate the elementary reaction steps that control chain growth in the Fischer-Tropsch reaction. Chain growth in the FT reaction on stepped Ru surfaces proceeds via coupling of CH and CR surface intermediates. Essential to the growth mechanism are C-H dehydrogenation and C hydrogenation steps, whose kinetic consequences have been examined by formulating two novel kinetic concepts, the degree of chain-growth probability control and the thermodynamic degree of chain-growth probability control. For Ru the CO conversion rate is controlled by the removal of O atoms from the catalytic surface. The temperature of maximum CO conversion rate is higher than the temperature to obtain maximum chain-growth probability. Both maxima are determined by Sabatier behavior, but the steps that control chain-growth probability are different from those that control the overall rate. Below the optimum for obtaining long hydrocarbon chains, the reaction is limited by the high total surface coverage: in the absence of sufficient vacancies the CHCHR → CCHR + H reaction is slowed down. Beyond the optimum in chain-growth probability, CHCR + H → CHCHR and OH + H → H2O limit the chain-growth process. The thermodynamic degree of chain-growth probability control emphasizes the critical role of the H and free-site coverage and shows that at high temperature, chain depolymerization contributes to the decreased chain-growth probability. That is to say, during the FT reaction chain growth is much faster than chain depolymerization, which ensures high chain-growth probability. The chain-growth rate is also fast compared to chain-growth termination and the steps that control the overall CO conversion rate, which are O removal steps for Ru.

  18. Comparing fuel reduction treatments for reducing wildfire size and intensity in a boreal forest landscape of northeastern China.

    PubMed

    Wu, Zhiwei; He, Hong S; Liu, Zhihua; Liang, Yu

    2013-06-01

    Fuel load is often used to prioritize stands for fuel reduction treatments. However, wildfire size and intensity are not only related to fuel loads but also to a wide range of other spatially related factors such as topography, weather and human activity. In prioritizing fuel reduction treatments, we propose using burn probability to account for the effects of spatially related factors that can affect wildfire size and intensity. Our burn probability incorporated fuel load, ignition probability, and spread probability (spatial controls to wildfire) at a particular location across a landscape. Our goal was to assess differences in reducing wildfire size and intensity using fuel-load and burn-probability based treatment prioritization approaches. Our study was conducted in a boreal forest in northeastern China. We derived a fuel load map from a stand map and a burn probability map based on historical fire records and potential wildfire spread pattern. The burn probability map was validated using historical records of burned patches. We then simulated 100 ignitions and six fuel reduction treatments to compare fire size and intensity under two approaches of fuel treatment prioritization. We calibrated and validated simulated wildfires against historical wildfire data. Our results showed that fuel reduction treatments based on burn probability were more effective at reducing simulated wildfire size, mean and maximum rate of spread, and mean fire intensity, but less effective at reducing maximum fire intensity across the burned landscape than treatments based on fuel load. Thus, contributions from both fuels and spatially related factors should be considered for each fuel reduction treatment. Published by Elsevier B.V.

  19. A new method for estimating the probable maximum hail loss of a building portfolio based on hailfall intensity determined by radar measurements

    NASA Astrophysics Data System (ADS)

    Aller, D.; Hohl, R.; Mair, F.; Schiesser, H.-H.

    2003-04-01

    Extreme hailfall can cause massive damage to building structures. For the insurance and reinsurance industry it is essential to estimate the probable maximum hail loss of their portfolio. The probable maximum loss (PML) is usually defined with a return period of 1 in 250 years. Statistical extrapolation has a number of critical points, as historical hail loss data are usually only available from some events while insurance portfolios change over the years. At the moment, footprints are derived from historical hail damage data. These footprints (mean damage patterns) are then moved over a portfolio of interest to create scenario losses. However, damage patterns of past events are based on the specific portfolio that was damaged during that event and can be considerably different from the current spread of risks. A new method for estimating the probable maximum hail loss to a building portfolio is presented. It is shown that footprints derived from historical damage are different from footprints of hail kinetic energy calculated from radar reflectivity measurements. Based on the relationship between radar-derived hail kinetic energy and hail damage to buildings, scenario losses can be calculated. A systematic motion of the hail kinetic energy footprints over the underlying portfolio creates a loss set. It is difficult to estimate the return period of losses calculated by moving around footprints derived from historical damage. To determine the return periods of the hail kinetic energy footprints over Switzerland, 15 years of radar measurements and 53 years of agricultural hail losses are available. Based on these data, return periods of several types of hailstorms were derived for different regions in Switzerland. The loss set is combined with the return periods of the event set to obtain an exceedance frequency curve, which can be used to derive the PML.
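
    Mechanically, the final step is simple: sort the scenario losses, accumulate their annual frequencies into an exceedance frequency curve, and read off the loss at the 1-in-250-year frequency. The numbers in the sketch below are invented for illustration.

    ```python
    # Sketch of turning a scenario loss set plus event return periods into an
    # exceedance frequency curve and reading off the 1-in-250-year PML.
    # Losses and annual frequencies are made-up illustrative numbers.
    import numpy as np

    losses = np.array([5., 12., 20., 45., 80., 150., 240.])      # scenario losses (M CHF, hypothetical)
    freq = np.array([0.5, 0.2, 0.1, 0.04, 0.015, 0.006, 0.002])  # annual event frequencies

    order = np.argsort(losses)[::-1]
    exceed_freq = np.cumsum(freq[order])       # annual frequency of losses >= each level
    loss_sorted = losses[order]

    target = 1.0 / 250.0                       # PML definition: 250-year return period
    pml = np.interp(target, exceed_freq, loss_sorted)  # interpolate on the curve
    print(f"PML (250-yr): {pml:.0f} M CHF")
    ```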

  20. Incorporating sequence information into the scoring function: a hidden Markov model for improved peptide identification.

    PubMed

    Khatun, Jainab; Hamlett, Eric; Giddings, Morgan C

    2008-03-01

    The identification of peptides by tandem mass spectrometry (MS/MS) is a central method of proteomics research, but due to the complexity of MS/MS data and the large databases searched, the accuracy of peptide identification algorithms remains limited. To improve the accuracy of identification we applied a machine-learning approach using a hidden Markov model (HMM) to capture the complex and often subtle links between a peptide sequence and its MS/MS spectrum. Our model, HMM_Score, represents ion types as HMM states and calculates the maximum joint probability for a peptide/spectrum pair using emission probabilities from three factors: the amino acids adjacent to each fragmentation site, the mass dependence of ion types and the intensity dependence of ion types. The Viterbi algorithm is used to calculate the most probable assignment between ion types in a spectrum and a peptide sequence, then a correction factor is added to account for the propensity of the model to favor longer peptides. An expectation value is calculated based on the model score to assess the significance of each peptide/spectrum match. We trained and tested HMM_Score on three data sets generated by two different mass spectrometer types. For a reference data set recently reported in the literature and validated using seven identification algorithms, HMM_Score produced 43% more positive identification results at a 1% false positive rate than the best of two other commonly used algorithms, Mascot and X!Tandem. HMM_Score is a highly accurate platform for peptide identification that works well for a variety of mass spectrometer and biological sample types. The program is freely available on ProteomeCommons via an OpenSource license. See http://bioinfo.unc.edu/downloads/ for the download link.
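
    The dynamic program at the heart of such scoring is the standard Viterbi recursion. Below is a generic log-domain implementation (the ion-type states and the transition and emission probabilities in HMM_Score are the paper's own; the toy two-state HMM here is invented).

    ```python
    # Generic Viterbi sketch: find the most probable state path of an HMM,
    # the same dynamic program HMM_Score uses to align ion-type states to a
    # spectrum. The toy two-state HMM below is invented for illustration.
    import numpy as np

    def viterbi(obs, log_start, log_trans, log_emit):
        n_states, T = log_trans.shape[0], len(obs)
        score = np.full((T, n_states), -np.inf)
        back = np.zeros((T, n_states), dtype=int)
        score[0] = log_start + log_emit[:, obs[0]]
        for t in range(1, T):
            cand = score[t - 1][:, None] + log_trans   # predecessor scores
            back[t] = np.argmax(cand, axis=0)          # best predecessor per state
            score[t] = cand[back[t], np.arange(n_states)] + log_emit[:, obs[t]]
        path = [int(np.argmax(score[-1]))]
        for t in range(T - 1, 0, -1):                  # backtrace
            path.append(int(back[t][path[-1]]))
        return path[::-1], float(score[-1].max())

    log_start = np.log([0.6, 0.4])
    log_trans = np.log([[0.7, 0.3], [0.4, 0.6]])
    log_emit = np.log([[0.9, 0.1], [0.2, 0.8]])
    path, logp = viterbi([0, 1, 1, 0], log_start, log_trans, log_emit)
    print("most probable path:", path, "log-probability:", round(logp, 3))
    ```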

  1. Open WorldCat and Its Impact on Academic Libraries

    ERIC Educational Resources Information Center

    El-Sherbini, Magda

    2007-01-01

    This paper analyzes librarians' reactions to the Open OCLC WorldCat. A detailed survey was sent to ARL libraries to explore what, if anything, the libraries are currently doing to prepare for these changes and how they plan to cope with the probability of having all their records open to the whole world. Survey findings indicate that most of the…

  2. Sudden transition and sudden change from open spin environments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Zheng-Da; Xu, Jing-Bo

    2014-11-15

    We investigate the necessary conditions for the existence of the sudden transition or sudden change phenomenon for appropriate initial states under dephasing. As illustrative examples, we study the behaviors of quantum correlation dynamics of two noninteracting qubits in independent and common open spin environments, respectively. For the independent environments case, we find that the quantum correlation dynamics is closely related to the Loschmidt echo and the dynamics exhibits a sudden transition from classical to quantum correlation decay. It is also shown that the sudden change phenomenon may occur for the common environment case and stationary quantum discord is found at the high temperature region of the environment. Finally, we investigate the quantum criticality of the open spin environment by exploring the probability distribution of the Loschmidt echo and the scaling transformation behavior of quantum discord, respectively. - Highlights: • Sudden transition or sudden change from open spin baths are studied. • Quantum discord is related to the Loschmidt echo in independent open spin baths. • Steady quantum discord is found in a common open spin bath. • The probability distribution of the Loschmidt echo is analyzed. • The scaling transformation behavior of quantum discord is displayed.

  3. VizieR Online Data Catalog: Shape parameters for 154 Galactic open clusters (Zhai+, 2017)

    NASA Astrophysics Data System (ADS)

    Zhai, M.; Abt, H.; Zhao, G.; Li, C.

    2017-06-01

    The data used are from database WEBDA (http://www.univie.ac.at/webda/). We have found 946 open clusters with equatorial coordinates for each cluster member. Since cluster members are easily contaminated by field stars, we have only adopted stars with membership probabilities higher than 70% as cluster members. It is rarely possible to determine a cluster's shape with a small number of members, so we have only considered relatively richer clusters, which host more than 20 of the most probable member stars. After these selections, there are 154 clusters left. (1 data file).

  4. Electron-conformational transformations govern the temperature dependence of the cardiac ryanodine receptor gating

    NASA Astrophysics Data System (ADS)

    Moskvin, A. S.; Iaparov, B. I.; Ryvkin, A. M.; Solovyova, O. E.; Markhasin, V. S.

    2015-07-01

    Temperature influences many aspects of cardiac excitation-contraction coupling; in particular, hypothermia increases the open probability (P_open) of cardiac sarcoplasmic reticulum (SR) Ca2+-release channels (ryanodine-sensitive RyR channels), raising the SR Ca2+ load in mammalian myocytes. However, to the best of our knowledge, no theoretical models are available for that effect. Traditional Markov chain models do not provide a reasonable molecular mechanistic insight into the origin of the temperature effects. Here we address a simple, physically clear electron-conformational model to describe RyR gating and argue that a synergetic effect of external thermal fluctuation forces (Gaussian-Markovian noise) and internal friction, via the temperature stimulation/suppression of the open-close RyR tunneling probability, can be considered the main contributor to temperature effects on RyR gating. Results of the computer modeling allowed us to successfully reproduce all the temperature effects observed for isolated RyR gating in vitro under reduced temperature: an increase in P_open and mean open time without any significant effect on the mean closed time.

  5. Excluding joint probabilities from quantum theory

    NASA Astrophysics Data System (ADS)

    Allahverdyan, Armen E.; Danageozian, Arshag

    2018-03-01

    Quantum theory does not provide a unique definition for the joint probability of two noncommuting observables, which is the next important question after Born's probability for a single observable. Instead, various definitions have been suggested, e.g., via quasiprobabilities or via hidden-variable theories. After reviewing open issues of the joint probability, we relate it to quantum imprecise probabilities, which are noncontextual and are consistent with all constraints expected from a quantum probability. We study two noncommuting observables in a two-dimensional Hilbert space and show that there is no precise joint probability that applies for any quantum state and is consistent with imprecise probabilities. This contrasts with theorems by Bell and Kochen-Specker that exclude joint probabilities for more than two noncommuting observables, in Hilbert spaces with dimension larger than two. If measurement contexts are included in the definition, joint probabilities are no longer excluded, but they are still constrained by imprecise probabilities.

  6. Sediment fluxes and delta evolution at Tuapaat, Disko Island, Greenland

    NASA Astrophysics Data System (ADS)

    Kroon, A.; Andersen, T. J.; Bendixen, M.

    2013-12-01

    Ice, snow and freezing temperatures have an important influence on coastal morphodynamics in arctic polar coastal environments, and global climate change induces many changes along arctic coasts. Sea levels are rising due to thermal expansion and an increased freshwater flux from glaciers and land ice masses, while ice coverage of the coastal waters decreases and the open-water periods in summer extend. On a yearly basis, there is a strong variation over the seasons, with open waters and active rivers in summer and ice-covered coastal waters and inactive rivers in winter. Coastal processes driven by waves and tides are thus often limited to the summer and early fall. On a daily basis, there is also a strong variation in fluvial discharge due to daily variations in glacier melt, with maximum melt in the afternoon and minimum values at night. At the same time, the actual flux of the river to the coastal bay is also influenced by the tidal phase: low tides in the afternoon will probably give the maximum plumes in the coastal waters, and high tides in the early morning will reduce the input of sediments to the coastal waters to zero. The southern shore of Disko Island in western Greenland has four deltas: Igpik, Signiffik, Tuapaat and Skansen. The sediments of these deltas are a mixture of sand and gravel, and they are fed by melting glaciers. The Tuapaat delta is located at the end of a pro-glacial and fluvial valley about 16 km from the glacier. The shores of the delta are reworked by waves, predominantly from southwestern (largest fetch, over 50 km), southern, and southeastern directions. The environment has a micro- to meso-tidal range, with a spring tidal range of 2.7 m. The morphologic changes on the delta over the last decades clearly show an eastward migration of the main delta channel, probably due to wave-driven alongshore processes in the ice-free periods. In this presentation, we focus on quantification of sediment fluxes on the Tuapaat delta in western Greenland. We highlight the variation of the fluxes over days with changing river discharges and tidal phases. We use field observations of discharges and sediment fluxes at the lower part of the river close to the delta apex and at the delta mouth (ADV frame and CTD observations) during an 8-day period from neap tide to spring tide in July 2013. In addition, we estimate the wave impact during the period using climatic variables and a numerical model.

  7. Linkage to 10q22 for maximum intraocular pressure and 1p32 for maximum cup-to-disc ratio in an extended primary open-angle glaucoma pedigree.

    PubMed

    Charlesworth, Jac C; Dyer, Thomas D; Stankovich, Jim M; Blangero, John; Mackey, David A; Craig, Jamie E; Green, Catherine M; Foote, Simon J; Baird, Paul N; Sale, Michèle M

    2005-10-01

    The purpose of this study was to identify genetic contributions to primary open-angle glaucoma (POAG) through investigations of two quantitative components of the POAG phenotype. Genome-wide multipoint variance-components linkage analyses of maximum recorded intraocular pressure (IOP) and maximum vertical cup-to-disc ratio were conducted on data from a single, large Australian POAG pedigree that has been found to segregate the myocilin Q368X mutation in some individuals. Multipoint linkage analysis of maximum recorded IOP produced a peak LOD score of 3.3 (P = 0.00015) near marker D10S537 on 10q22, whereas the maximum cup-to-disc ratio produced a peak LOD score of 2.3 (P = 0.00056) near markers D1S197 to D1S220 on 1p32. Inclusion of the myocilin Q368X mutation as a covariate provided evidence of an interaction between this mutation and the IOP and cup-to-disc ratio loci. Significant linkage has been identified for maximum IOP and suggestive linkage for vertical cup-to-disc ratio. Identification of genes contributing to the variance of these traits will enhance understanding of the pathophysiology of POAG as a whole.

  8. Moment and maximum likelihood estimators for Weibull distributions under length- and area-biased sampling

    Treesearch

    Jeffrey H. Gove

    2003-01-01

    Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a...

  9. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in analysis of simulation output, and when instantiating probability models; an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables. We adopt a constrained maximum...

  10. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated, revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.

  11. Rating competitors before tournament starts: How it's affecting team progression in a soccer tournament

    NASA Astrophysics Data System (ADS)

    Yusof, Muhammad Mat; Sulaiman, Tajularipin; Khalid, Ruzelan; Hamid, Mohamad Shukri Abdul; Mansor, Rosnalini

    2014-12-01

    In professional sporting events, rating competitors before a tournament starts is a well-known approach to distinguishing the favorite team from the weaker teams. Various methodologies are used to rate competitors. In this paper, we explore four ways to rate competitors: least squares rating, maximum likelihood strength ratio, standing points in a large round-robin simulation, and previous league rank position. The tournament metric we use to evaluate the different rating approaches is the tournament outcome characteristics measure, defined by the probability that a particular team in the top 100q pre-tournament rank percentile progresses beyond round R, for all q and R. Based on the simulation results, we found that different rating approaches produce different effects on the teams. Our simulation results show that, of eight teams participating in a knockout tournament with standard seeding, Perak has the highest probability of winning a tournament that uses the least squares rating approach, PKNS has the highest probability of winning under the maximum likelihood strength ratio and the large round-robin simulation approaches, while Perak has the highest probability of winning a tournament using the previous league season approach.
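
    Of the four approaches, the least squares rating is the easiest to make concrete. The sketch below solves a Massey-style least squares system from score margins; the team names echo the abstract but the fixtures and margins are made up, and the paper's exact formulation may differ.

    ```python
    # Sketch of a least-squares (Massey-style) rating from match score margins,
    # one of the four pre-tournament rating approaches compared in the paper.
    # Fixtures and margins below are hypothetical.
    import numpy as np

    teams = ["Perak", "PKNS", "Kedah", "JDT"]
    games = [(0, 1, 1), (1, 2, 2), (2, 3, -1), (3, 0, 0), (0, 2, 3)]  # (home, away, home margin)

    n = len(teams)
    M = np.zeros((n, n))
    b = np.zeros(n)
    for h, a, margin in games:
        M[h, h] += 1; M[a, a] += 1      # diagonal: games played
        M[h, a] -= 1; M[a, h] -= 1      # off-diagonal: minus games between the pair
        b[h] += margin; b[a] -= margin  # accumulated score differentials
    M[-1, :] = 1.0; b[-1] = 0.0         # fix sum of ratings to zero (M alone is singular)

    ratings = np.linalg.solve(M, b)
    for name, r in sorted(zip(teams, ratings), key=lambda x: -x[1]):
        print(f"{name:6s} {r:+.2f}")
    ```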

  12. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  13. A comprehensive phylogeny of the genus Kurixalus (Rhacophoridae, Anura) sheds light on the geographical range evolution of frilled swamp treefrogs.

    PubMed

    Lv, Yun-Yun; He, Kai; Klaus, Sebastian; Brown, Rafe M; Li, Jia-Tang

    2018-04-01

    Currently, the genus Kurixalus comprises 14 species distributed in Southern, Southeast and East Asia. Because of their relatively low dispersal capability and intolerance of seawater, this group is ideal for the study of terrestrial range evolution, especially the portion of its range that extends into the island archipelagos of Southern Asia. We assembled a large dataset of mitochondrial and nuclear genes, estimated phylogeny by maximum likelihood and Bayesian methods, and explored the history of each species via divergence-time estimation based on fossil calibrations. A variety of ancestral-area reconstruction strategies were employed to estimate past changes in the species' geographical range, and to evaluate the impact of different abiotic barriers on range evolution. We found that frilled swamp treefrogs probably originated in Taiwan or South Vietnam in the Oligocene. Alternatively, the lineage leading to Kurixalus appendiculatus strongly supports a hypothesis of terrestrial connection between the Indian and Asian continents in the Oligocene. The outcome of both our divergence-time estimates and ancestral-area reconstruction suggests that the divergence between species from Indochina and Taiwan can probably be attributed to the opening of the South China Sea, approximately 33 million years ago. We could not find evidence for dispersal between mainland China and Taiwan Island. Formation of both the Mekong and Red River valleys did not have any impact on Kurixalus species diversification. However, the coincidence in timing of climate change and the availability of plausible dispersal routes from the Oligocene to the middle Miocene plausibly implies that Kurixalus diversification in Asia resulted from contemporaneous, climate-induced environmental upheaval (Late Oligocene Warming at 29 Ma; Mi-1 glaciation during 24.4-21.5 Ma; Mid-Miocene Climatic Optimum at 14 Ma), which alternately opened and closed dispersal routes. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Cross-talk between ATP-regulated K+ channels and Na+ transport via cellular metabolism in frog skin principal cells.

    PubMed Central

    Urbach, V; Van Kerkhove, E; Maguire, D; Harvey, B J

    1996-01-01

    Isolated frog skin epithelium, mounted in an Ussing chamber and bathed in standard NaCl Ringer solution, recycles K+ across the basolateral membrane of principal cells through an inward-rectifier K+ channel (Kir) operating in parallel with a Na+-K+-ATPase pump. Here we report on the metabolic control of the Kir channel using patch clamping, short-circuit current measurement and enzymatic determination of cellular ATP (ATPi). 2. The constitutively active Kir channel in the basolateral membrane has the characteristics of an ATP-regulated K+ channel and is now classed as a KATP channel. In excised inside-out patches the open probability (Po) of KATP channels was reduced by ATPi with half-maximum inhibition at an ATPi concentration of 50 microM. 3. ATPi measured (under normal Na+ transport conditions) with luciferin-luciferase was 1.50 +/- 0.23 mM (mean +/- S.E.M.; range, 0.4-3.3 mM; n = 11). Thus the KATP channel would be expected to be inactive in intact cells if ATPi was the sole regulator of channel activity. KATP channels which were inactivated by 1 mM ATPi in excised patches could be reactivated by addition of 100 microM ADP on the cytosolic side. When added alone, ADP blocks this channel with half-maximal inhibition at [ADPi] > 5 mM. 4. Sulphonylureas inhibit single KATP channels in cell-attached patches as well as the total basolateral K+ current measured in frog skin epithelia perforated with nystatin on the apical side. 5. Na+-K+-ATPase activity is a major determinant of cytosolic ATP. Blocking the pump activity with ouabain produced a time-dependent increase in ATPi and reduced the open probability of KATP channels in cell-attached membranes. 6. We conclude that the ratio of ATP/ADP is an important metabolic coupling factor between the rate of Na+-K+ pumping and K+ recycling. PMID:9011625

  15. Remote Sensing and Wetland Ecology: a South African Case Study.

    PubMed

    De Roeck, Els R; Verhoest, Niko E C; Miya, Mtemi H; Lievens, Hans; Batelaan, Okke; Thomas, Abraham; Brendonck, Luc

    2008-05-26

    Remote sensing offers a cost efficient means for identifying and monitoring wetlands over a large area and at different moments in time. In this study, we aim at providing ecologically relevant information on characteristics of temporary and permanent isolated open water wetlands, obtained by standard techniques and relatively cheap imagery. The number, surface area, nearest distance, and dynamics of isolated temporary and permanent wetlands were determined for the Western Cape, South Africa. Open water bodies (wetlands) were mapped from seven Landsat images (acquired during 1987-2002) using supervised maximum likelihood classification. The number of wetlands fluctuated over time. Most wetlands were detected in the winter of 2000 and 2002, probably related to road constructions. Imagery acquired in summer contained fewer wetlands than in winter. Most wetlands identified from Landsat images were smaller than one hectare. The average distance to the nearest wetland was larger in summer. In comparison to temporary wetlands, fewer, but larger permanent wetlands were detected. In addition, classification of non-vegetated wetlands on an Envisat ASAR radar image (acquired in June 2005) was evaluated. The number of detected small wetlands was lower for radar imagery than optical imagery (acquired in June 2002), probably because of deterioration of the spatial information content due to the extensive pre-processing requirements of the radar image. Both optical and radar classifications allow assessment of wetland characteristics that potentially influence plant and animal metacommunity structure. Envisat imagery, however, was less suitable than Landsat imagery for the extraction of detailed ecological information, as only large wetlands can be detected. This study has indicated that ecologically relevant data can be generated for the larger wetlands through relatively cheap imagery and standard techniques, despite the relatively low resolution of Landsat and Envisat imagery. For the characterisation of very small wetlands, high spatial resolution optical or radar images are needed. This study exemplifies the benefits of integrating remote sensing and ecology and hence stimulates interdisciplinary research of isolated wetlands.

  16. Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?

    PubMed Central

    Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia

    2014-01-01

    Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed a maximum standard deviation of 0.3 times body weight and a maximum range of 2.1 times body weight. In addition, the output variables significantly correlated with only a few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited portions of the gait cycle. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. In fact, force predictions could be affected by an uncertainty of the same order of magnitude as the predicted value, although this condition has a low probability of occurring. PMID:25390896
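
    The probabilistic analysis described above can be illustrated with a generic Monte Carlo perturbation loop. A minimal sketch, assuming a toy scalar model in place of the authors' OpenSim-based inverse dynamics pipeline; the nominal values and uncertainties below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(42)

        def model(params):
            """Hypothetical stand-in for one inverse-dynamics/static-optimization
            run; returns a scalar output such as a joint contact force
            (in units of body weight)."""
            landmark_shift, max_tension = params
            return 2.5 + 0.3 * landmark_shift + 0.1 * max_tension

        nominal = np.array([0.0, 1.0])   # assumed nominal parameter values
        sd = np.array([0.5, 0.2])        # assumed identification uncertainties

        # 500 perturbed models, matching the number used in the study
        samples = nominal + sd * rng.standard_normal((500, 2))
        outputs = np.array([model(p) for p in samples])
        print("std:", outputs.std(), "range:", np.ptp(outputs))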

  17. Asymptotic properties of a bold random walk

    NASA Astrophysics Data System (ADS)

    Serva, Maurizio

    2014-08-01

    In a recent paper we proposed a non-Markovian random walk model with memory of the maximum distance ever reached from the starting point (home). The behavior of the walker is different from the simple symmetric random walk only when she is at this maximum distance, where, having the choice to move either farther or closer, she decides with different probabilities. If the probability of a forward step is higher than the probability of a backward step, the walker is bold and her behavior turns out to be superdiffusive; otherwise she is timorous and her behavior turns out to be subdiffusive. The scaling behavior varies continuously from subdiffusive (timorous) to superdiffusive (bold) according to a single parameter γ ∈ ℝ. We investigate here the asymptotic properties of the bold case in the nonballistic region γ ∈ [0, 1/2], a problem which was left partially unsolved previously. The exact results proved in this paper require new probabilistic tools which rely on the construction of appropriate martingales of the random walk and its hitting times.

  18. Application of Radar-Rainfall Estimates to Probable Maximum Precipitation in the Carolinas

    NASA Astrophysics Data System (ADS)

    England, J. F.; Caldwell, R. J.; Sankovich, V.

    2011-12-01

    Extreme storm rainfall data are essential in the assessment of potential impacts on design precipitation amounts, which are used in flood design criteria for dams and nuclear power plants. Probable Maximum Precipitation (PMP) from National Weather Service Hydrometeorological Report 51 (HMR51) is currently used for design rainfall estimates in the eastern U.S. The extreme storm database associated with the report has not been updated since the early 1970s. In the past several decades, several extreme precipitation events have occurred that have the potential to alter the PMP values, particularly across the Southeast United States (e.g., Hurricane Floyd 1999). Unfortunately, these and other large precipitation-producing storms have not been analyzed with the detail required for application in design studies. This study focuses on warm-season tropical cyclones (TCs) in the Carolinas, as these systems are the critical maximum rainfall mechanisms in the region. The goal is to discern if recent tropical events may have reached or exceeded current PMP values. We have analyzed 10 storms using modern datasets and methodologies that provide enhanced spatial and temporal resolution relative to point measurements used in past studies. Specifically, hourly multisensor precipitation reanalysis (MPR) data are used to estimate storm total precipitation accumulations at various durations throughout each storm event. The accumulated grids serve as input to depth-area-duration calculations. Individual storms are then maximized using back-trajectories to determine source regions for moisture. The development of open source software has made this process time and resource efficient. Based on the current methodology, two of the ten storms analyzed have the potential to challenge HMR51 PMP values. Maximized depth-area curves for Hurricane Floyd indicate exceedance at 24- and 72-hour durations for large area sizes, while Hurricane Fran (1996) appears to exceed PMP at large area sizes for short-duration, 6-hour storms. Utilizing new methods and data, however, requires careful consideration of the potential limitations and caveats associated with the analysis and further evaluation of the newer storms within the context of historical storms from HMR51. Here, we provide a brief background on extreme rainfall in the Carolinas, along with an overview of the methods employed for converting MPR to depth-area relationships. Discussion of the issues and limitations, evaluation of the various techniques, and comparison to HMR51 storms and PMP values are also presented.
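
    The duration analysis at the heart of the depth-area-duration step can be sketched with rolling accumulations over a gridded rainfall cube. A minimal sketch using a synthetic stand-in for the hourly MPR grids (the depth-area part, which sweeps over area sizes, is omitted):

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic stand-in for hourly MPR grids: (hours, ny, nx), mm per hour
        rain = rng.gamma(shape=0.3, scale=2.0, size=(72, 50, 50))

        def max_depth(rain, duration):
            """Maximum storm-total depth (mm) over any `duration`-hour window,
            computed per pixel and then maximized over the grid."""
            zeros = np.zeros((1,) + rain.shape[1:])
            csum = np.concatenate([zeros, np.cumsum(rain, axis=0)])
            windows = csum[duration:] - csum[:-duration]
            return windows.max()

        for d in (6, 24, 72):
            print(f"{d}-hour maximum depth: {max_depth(rain, d):.1f} mm")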

  19. Maximum caliber inference of nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Otten, Moritz; Stock, Gerhard

    2010-07-01

    Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
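
    The inference step can be illustrated on the smallest possible example: a two-state Markov model whose transition probabilities maximize the caliber (path entropy per step) subject to reproducing one time-averaged observable. A minimal sketch under assumed data (the target occupancy below is hypothetical):

        import numpy as np
        from scipy.optimize import minimize

        target_pop = 0.3   # assumed time-averaged population of state 1

        def neg_caliber(q):
            p01, p10 = q
            P = np.array([[1 - p01, p01], [p10, 1 - p10]])
            pi1 = p01 / (p01 + p10)          # stationary distribution
            pi = np.array([1 - pi1, pi1])
            h = -sum(pi[i] * P[i, j] * np.log(P[i, j])
                     for i in range(2) for j in range(2) if P[i, j] > 0)
            return -h                        # maximize the entropy rate

        cons = {'type': 'eq',
                'fun': lambda q: q[0] / (q[0] + q[1]) - target_pop}
        res = minimize(neg_caliber, x0=[0.2, 0.4], method='SLSQP',
                       bounds=[(1e-6, 1 - 1e-6)] * 2, constraints=cons)
        print("MaxCal transition probabilities:", res.x)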

  20. Scalar pair production in a magnetic field in de Sitter universe

    NASA Astrophysics Data System (ADS)

    Băloi, Mihaela-Andreea; Crucean, Cosmin; Popescu, Diana

    2018-05-01

    The production of scalar particles by the dipole magnetic field in the de Sitter expanding universe is analyzed. The amplitude and probability of transition are computed using perturbative methods. A graphical study of the transition probability shows that the rate of pair production is significant in the early universe. Our results prove that in the process of pair production by the external magnetic field the momentum conservation law is broken. We also found that the probabilities are maximum when the particles are emitted perpendicular to the direction of the magnetic dipole moment. The total probability is computed and analyzed in terms of the angle between the particles' momenta.

  1. Maximum Likelihood Analysis in the PEN Experiment

    NASA Astrophysics Data System (ADS)

    Lehman, Martin

    2013-10-01

    The experimental determination of the π+ → e+ ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3 × 10^-3 to 5 × 10^-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 × 10^7 πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ → e+ ν, π+ → μ+ ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
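
    The likelihood structure described here is that of a five-component mixture: each event receives a density under each process hypothesis, and the process fractions are fitted by maximizing the total likelihood. A minimal sketch with random stand-ins for the Monte Carlo-derived PDFs:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        # stand-in per-event densities under the 5 processes: (process, event)
        pdfs = rng.random((5, 1000)) + 1e-3

        def nll(f4):
            f = np.append(f4, 1.0 - f4.sum())   # the 5 fractions sum to one
            return -np.sum(np.log(f @ pdfs))

        res = minimize(nll, x0=np.full(4, 0.2), method='SLSQP',
                       bounds=[(0.0, 1.0)] * 4,
                       constraints={'type': 'ineq',
                                    'fun': lambda f4: 1.0 - f4.sum()})
        print("fitted process fractions:", np.append(res.x, 1.0 - res.x.sum()))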

  2. Scaling exponents for ordered maxima

    DOE PAGES

    Ben-Naim, E.; Krapivsky, P. L.; Lemons, N. W.

    2015-12-22

    We study extreme value statistics of multiple sequences of random variables. For each sequence with N variables, independently drawn from the same distribution, the running maximum is defined as the largest variable to date. We compare the running maxima of m independent sequences and investigate the probability S_N that the maxima are perfectly ordered, that is, the running maximum of the first sequence is always larger than that of the second sequence, which is always larger than the running maximum of the third sequence, and so on. The probability S_N is universal: it does not depend on the distribution from which the random variables are drawn. For two sequences, S_N ~ N^(-1/2), and in general, the decay is algebraic, S_N ~ N^(-σ_m), for large N. We analytically obtain the exponent σ_3 ≅ 1.302931 as the root of a transcendental equation. Moreover, the exponents σ_m grow with m, and we show that σ_m ~ m for large m.
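
    Because S_N is distribution-free, it is easy to check by simulation with uniform variates. A minimal Monte Carlo sketch:

        import numpy as np

        rng = np.random.default_rng(1)

        def ordered_maxima_prob(m, N, trials=20000):
            """Estimate S_N: the probability that the running maxima of m
            i.i.d. sequences stay strictly ordered at every step."""
            hits = 0
            for _ in range(trials):
                x = rng.random((m, N))
                running = np.maximum.accumulate(x, axis=1)
                if np.all(running[:-1] > running[1:]):
                    hits += 1
            return hits / trials

        # for m = 2, S_N ~ N^(-1/2): the products below should be roughly constant
        for N in (50, 200, 800):
            print(N, ordered_maxima_prob(2, N) * np.sqrt(N))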

  3. On the consideration of scaling properties of extreme rainfall in Madrid (Spain) for developing a generalized intensity-duration-frequency equation and assessing probable maximum precipitation estimates

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel

    2018-01-01

    The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
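
    The statistical PMP estimate referred to here is, in its usual Hershfield frequency-factor form, a direct function of the mean and standard deviation of the station's annual-maximum series:

        X_{\mathrm{PMP}} = \bar{X}_n + k_m S_n

    where \bar{X}_n and S_n are the mean and standard deviation of the n annual maxima, and k_m is the frequency factor read from the envelope curve.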

  4. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.

  5. Open-graded wearing courses in the Pacific Northwest.

    DOT National Transportation Integrated Search

    2011-06-01

    The Oregon Department of Transportation (ODOT) has been placing -inch nominal maximum aggregate size (NMAS) open-graded wearing courses (OGWCs) in structural layers of two inches or more for about 30 years. Despite this, OGWC performance in the Pac...

  6. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
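
    The fixed-point idea can be illustrated with the simplest possible flow equation, a gradient flow whose stationary points are the extrema of the objective. This is a generic sketch only; it omits the paper's homotopy continuation and the maximum entropy prior update:

        import numpy as np
        from scipy.integrate import solve_ivp

        def grad_f(x):
            """Gradient of the example objective f(x, y) = (x^2 - 1)^2 + y^2."""
            return np.array([4.0 * x[0] * (x[0] ** 2 - 1.0), 2.0 * x[1]])

        # flow equation dx/ds = -grad f(x): extrema of f are its fixed points
        sol = solve_ivp(lambda s, x: -grad_f(x), t_span=(0.0, 50.0), y0=[0.1, 1.0])
        print("fixed point reached:", sol.y[:, -1])   # flows to the minimum (1, 0)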

  7. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  8. Application of Markov chain model to daily maximum temperature for thermal comfort in Malaysia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordin, Muhamad Asyraf bin Che; Hassan, Husna

    2015-10-22

    The Markov chain's first order principle has been widely used to model various meteorological fields for prediction purposes. In this study, a 14-year (2000-2013) record of daily maximum temperatures in Bayan Lepas was used. Earlier studies showed that the outdoor thermal comfort range (TCR) based on the physiologically equivalent temperature (PET) index in Malaysia is less than 34°C, thus the data obtained were classified into two states: normal state (within the thermal comfort range) and hot state (above the thermal comfort range). The long-run results show that the probability that the daily temperature exceeds the TCR is only 2.2%, while the probability that the daily temperature falls within the TCR is 97.8%.
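
    For a two-state chain the long-run probabilities follow directly from the transition matrix. A minimal sketch; the matrix entries are assumed values chosen to roughly reproduce the reported 2.2%/97.8% split, not the fitted ones:

        import numpy as np

        # states: 0 = normal (within TCR), 1 = hot (above TCR); assumed matrix
        P = np.array([[0.98, 0.02],
                      [0.90, 0.10]])

        # stationary distribution: left eigenvector of P with eigenvalue 1
        evals, evecs = np.linalg.eig(P.T)
        pi = np.real(evecs[:, np.argmax(np.real(evals))])
        pi /= pi.sum()
        print(f"long-run P(normal) = {pi[0]:.3f}, P(hot) = {pi[1]:.3f}")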

  9. The evolution of micro-cursoriality in mammals.

    PubMed

    Lovegrove, Barry G; Mowoe, Metobor O

    2014-04-15

    In this study we report on the evolution of micro-cursoriality, a unique case of cursoriality in mammals smaller than 1 kg. We obtained new running speed and limb morphology data for two species of elephant-shrews (Elephantulus spp., Macroscelidae) from Namaqualand, South Africa, which we compared with published data for other mammals. Elephantulus maximum running speeds were higher than those of most mammals smaller than 1 kg. Elephantulus also possess exceptionally high metatarsal:femur ratios (1.07) that are typically associated with fast unguligrade cursors. Cursoriality evolved in the Artiodactyla, Perissodactyla and Carnivora coincident with global cooling and the replacement of forests with open landscapes in the Oligocene and Miocene. The majority of mammal species, though, remained non-cursorial, plantigrade and small (<1 kg). The extraordinary running speed and digitigrady of elephant-shrews was established in the Early Eocene in the earliest macroscelid Prodiacodon, but was probably inherited from Paleocene, Holarctic stem macroscelids. Micro-cursoriality in macroscelids evolved from the plesiomorphic plantigrade foot of the possum-like ancestral mammal earlier than in other mammalian crown groups. Micro-cursoriality evolved first in forests, presumably in response to selection for rapid running speeds facilitated by local knowledge, in order to avoid predators. During the Miocene, micro-cursoriality was pre-adaptive to open, arid habitats, and became more derived in the newly evolved Elephantulus and Macroscelides elephant-shrews with trail running.

  10. Estimation method of finger tapping dynamics using simple magnetic detection system

    NASA Astrophysics Data System (ADS)

    Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo

    2010-05-01

    We have developed a simple estimation method for a finger tapping dynamics model to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. Using the frequency response of the ratio of acceleration to velocity, we estimated the mechanical impedance parameters: resistance (friction coefficient) and compliance (stiffness). We found two dynamics models, for the maximum open position and the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, whereas the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.

  11. Estimation method of finger tapping dynamics using simple magnetic detection system.

    PubMed

    Kandori, Akihiko; Sano, Yuko; Miyashita, Tsuyoshi; Okada, Yoshihisa; Irokawa, Masataka; Shima, Keisuke; Tsuji, Toshio; Yokoe, Masaru; Sakoda, Saburo

    2010-05-01

    We have developed a simple estimation method for a finger tapping dynamics model to investigate muscle resistance and stiffness during tapping movement in normal subjects. We measured finger tapping movements of 207 normal subjects using a magnetic finger tapping detection system. Each subject tapped two fingers in time with a metronome at 1, 2, 3, 4, and 5 Hz. The velocity and acceleration values for both the closing and opening tapping data were used to estimate a finger tapping dynamics model. Using the frequency response of the ratio of acceleration to velocity, we estimated the mechanical impedance parameters: resistance (friction coefficient) and compliance (stiffness). We found two dynamics models, for the maximum open position and the tap position. In the maximum open position, the extensor muscle resistance was twice as high as the flexor muscle resistance and males had a higher spring constant. In the tap position, the flexor muscle resistance was much higher than the extensor muscle resistance. This indicates that the tapping dynamics in the maximum open position are controlled by the balance of extensor and flexor muscle friction resistances and the flexor stiffness, whereas the flexor friction resistance is the main component in the tap position. It can be concluded that our estimation method makes it possible to understand the tapping dynamics.

  12. Tendency towards maximum complexity in a nonequilibrium isolated system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calbet, Xavier; Lopez-Ruiz, Ricardo

    2001-06-01

    The time evolution equations of a simplified isolated ideal gas, the "tetrahedral" gas, are derived. The dynamical behavior of the Lopez-Ruiz-Mancini-Calbet complexity [R. Lopez-Ruiz, H. L. Mancini, and X. Calbet, Phys. Lett. A 209, 321 (1995)] is studied in this system. In general, it is shown that the complexity remains within the bounds of minimum and maximum complexity. We find that there are certain restrictions when the isolated "tetrahedral" gas evolves towards equilibrium. In addition to the well-known increase in entropy, the quantity called disequilibrium decreases monotonically with time. Furthermore, the trajectories of the system in phase space approach the maximum complexity path as the system evolves toward equilibrium.

  13. [Implication of inverse-probability weighting method in the evaluation of diagnostic test with verification bias].

    PubMed

    Kang, Leni; Zhang, Shaokai; Zhao, Fanghui; Qiao, Youlin

    2014-03-01

    To evaluate and adjust for verification bias in screening or diagnostic tests, the inverse-probability weighting method was used to adjust the sensitivity and specificity of the diagnostic tests, with an example from cervical cancer screening used to introduce the CompareTests package in R, which implements the method. Sensitivity and specificity calculated by the traditional method and by maximum likelihood estimation were compared to the results from the inverse-probability weighting method in the random-sampled example. The true sensitivity and specificity of the HPV self-sampling test were 83.53% (95%CI: 74.23-89.93) and 85.86% (95%CI: 84.23-87.36). In the analysis of data with randomly missing verification by the gold standard, the sensitivity and specificity calculated by the traditional method were 90.48% (95%CI: 80.74-95.56) and 71.96% (95%CI: 68.71-75.00), respectively. The adjusted sensitivity and specificity under the inverse-probability weighting method were 82.25% (95%CI: 63.11-92.62) and 85.80% (95%CI: 85.09-86.47), respectively, whereas they were 80.13% (95%CI: 66.81-93.46) and 85.80% (95%CI: 84.20-87.41) under the maximum likelihood estimation method. The inverse-probability weighting method can effectively adjust the sensitivity and specificity of a diagnostic test when verification bias exists, especially under complex sampling.
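
    The correction itself is compact: each verified subject is weighted by the reciprocal of its verification probability. A minimal sketch of the idea (not the CompareTests implementation), with toy data in which test-negatives are under-verified:

        import numpy as np

        def ipw_sens_spec(test, disease, verified, p_verify):
            """IPW-adjusted sensitivity/specificity under verification bias.
            disease is only meaningful where verified == 1."""
            w = verified / p_verify               # weight 0 if unverified
            d = np.where(verified == 1, disease, 0)
            sens = np.sum(w * test * d) / np.sum(w * d)
            spec = np.sum(w * (1 - test) * (1 - d)) / np.sum(w * (1 - d))
            return sens, spec

        rng = np.random.default_rng(0)
        n = 5000
        disease = (rng.random(n) < 0.15).astype(int)
        test = np.where(disease == 1, rng.random(n) < 0.85,
                        rng.random(n) < 0.15).astype(int)
        p_verify = np.where(test == 1, 1.0, 0.4)  # negatives verified less often
        verified = (rng.random(n) < p_verify).astype(int)
        print(ipw_sens_spec(test, disease, verified, p_verify))  # near (0.85, 0.85)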

  14. Open Quantum Random Walks on the Half-Line: The Karlin-McGregor Formula, Path Counting and Foster's Theorem

    NASA Astrophysics Data System (ADS)

    Jacq, Thomas S.; Lardizabal, Carlos F.

    2017-11-01

    In this work we consider open quantum random walks on the non-negative integers. By considering orthogonal matrix polynomials we are able to describe transition probability expressions for classes of walks via a matrix version of the Karlin-McGregor formula. We focus on absorbing boundary conditions and, for simpler classes of examples, we consider path counting and the corresponding combinatorial tools. A non-commutative version of the gambler's ruin is studied by obtaining the probability of reaching a certain fortune and the mean time to reach a fortune or ruin in terms of generating functions. In the case of the Hadamard coin, a counting technique for boundary restricted paths in a lattice is also presented. We discuss an open quantum version of Foster's Theorem for the expected return time together with applications.

  15. Modulation of hand aperture during reaching in persons with incomplete cervical spinal cord injury.

    PubMed

    Stahl, Victoria A; Hayes, Heather B; Buetefisch, Cathrin M; Wolf, Steven L; Trumbower, Randy D

    2015-03-01

    The intact neuromotor system prepares for object grasp by first opening the hand to an aperture that is scaled according to object size and then closing the hand around the object. After cervical spinal cord injury (SCI), hand function is significantly impaired, but the degree to which object-specific hand aperture scaling is affected remains unknown. Here, we hypothesized that persons with incomplete cervical SCI have a reduced maximum hand opening capacity but exhibit novel neuromuscular coordination strategies that permit object-specific hand aperture scaling during reaching. To test this hypothesis, we measured hand kinematics and surface electromyography from seven muscles of the hand and wrist during attempts at maximum hand opening as well as reaching for four balls of different diameters. Our results showed that persons with SCI exhibited significantly reduced maximum hand aperture compared to able-bodied (AB) controls. However, persons with SCI preserved the ability to scale peak hand aperture with ball size during reaching. Persons with SCI also used distinct muscle coordination patterns that included increased co-activity of flexors and extensors at the wrist and hand compared to AB controls. These results suggest that motor planning for aperture modulation is preserved even though execution is limited by constraints on hand opening capacity and altered muscle co-activity. Thus, persons with incomplete cervical SCI may benefit from rehabilitation aimed at increasing hand opening capacity and reducing flexor-extensor co-activity at the wrist and hand.

  16. Modulation of hand aperture during reaching in persons with incomplete cervical spinal cord injury

    PubMed Central

    Stahl, Victoria; Hayes, Heather B; Buetefisch, Cathrin; Wolf, Steven L; Trumbower, Randy D

    2014-01-01

    The intact neuromotor system prepares for object grasp by first opening the hand to an aperture that is scaled according to object size and then closing the hand around the object. After cervical spinal cord injury (SCI), hand function is significantly impaired, but the degree to which object-specific hand aperture scaling is affected remains unknown. Here we hypothesized that persons with incomplete cervical SCI have a reduced maximum hand opening capacity but exhibit novel neuromuscular coordination strategies that permit object-specific hand aperture scaling during reaching. To test this hypothesis, we measured hand kinematics and surface electromyography (EMG) from seven muscles of the hand and wrist during attempts at maximum hand opening as well as reaching for four balls of different diameters. Our results showed that persons with SCI exhibited significantly reduced maximum hand aperture compared to able-bodied (AB) controls. However, persons with SCI preserved the ability to scale peak hand aperture with ball size during reaching. Persons with SCI also used distinct muscle coordination patterns that included increased co-activity of flexors and extensors at the wrist and hand compared to AB controls. These results suggest that motor planning for aperture modulation is preserved even though execution is limited by constraints on hand opening capacity and altered muscle co-activity. Thus, persons with incomplete cervical SCI may benefit from rehabilitation aimed at increasing hand opening capacity and reducing flexor-extensor co-activity at the wrist and hand. PMID:25511164

  17. Validity of temporomandibular disorder examination procedures for assessment of temporomandibular joint status.

    PubMed

    Schmitter, Marc; Kress, Bodo; Leckel, Michael; Henschel, Volkmar; Ohlmann, Brigitte; Rammelsberg, Peter

    2008-06-01

    This hypothesis-generating study was performed to determine which items in the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) and additional diagnostic tests have the best predictive accuracy for joint-related diagnoses. One hundred forty-nine TMD patients and 43 symptom-free subjects were examined in clinical examinations and with magnetic resonance imaging (MRI). The importance of each variable of the clinical examination for correct joint-related diagnosis was assessed by using MRI diagnoses. For this purpose, "random forest" statistical software (based on classification trees) was used. Maximum unassisted jaw opening, maximum assisted jaw opening, history of locked jaw, joint sound with and without compression, joint pain, facial pain, pain on palpation of the lateral pterygoid area, and overjet proved suitable for distinguishing between subtypes of joint-related TMD. Measurement of excursion, protrusion, and midline deviation were less important. The validity of clinical TMD examination procedures can be enhanced by using the 16 variables of greatest importance identified in this study. In addition to other variables, maximum unassisted and assisted opening and a history of locked jaw were important when assessing the status of the TMJ.

  18. Stereotactic probability and variability of speech arrest and anomia sites during stimulation mapping of the language dominant hemisphere.

    PubMed

    Chang, Edward F; Breshears, Jonathan D; Raygor, Kunal P; Lau, Darryl; Molinaro, Annette M; Berger, Mitchel S

    2017-01-01

    OBJECTIVE Functional mapping using direct cortical stimulation is the gold standard for the prevention of postoperative morbidity during resective surgery in dominant-hemisphere perisylvian regions. Its role is necessitated by the significant interindividual variability that has been observed for essential language sites. The aim in this study was to determine the statistical probability distribution of eliciting aphasic errors for any given stereotactically based cortical position in a patient cohort and to quantify the variability at each cortical site. METHODS Patients undergoing awake craniotomy for dominant-hemisphere primary brain tumor resection between 1999 and 2014 at the authors' institution were included in this study, which included counting and picture-naming tasks during dense speech mapping via cortical stimulation. Positive and negative stimulation sites were collected using an intraoperative frameless stereotactic neuronavigation system and were converted to Montreal Neurological Institute coordinates. Data were iteratively resampled to create mean and standard deviation probability maps for speech arrest and anomia. Patients were divided into groups with a "classic" or an "atypical" location of speech function, based on the resultant probability maps. Patient and clinical factors were then assessed for their association with an atypical location of speech sites by univariate and multivariate analysis. RESULTS Across 102 patients undergoing speech mapping, the overall probabilities of speech arrest and anomia were 0.51 and 0.33, respectively. Speech arrest was most likely to occur with stimulation of the posterior inferior frontal gyrus (maximum probability from individual bin = 0.025), and variance was highest in the dorsal premotor cortex and the posterior superior temporal gyrus. In contrast, stimulation within the posterior perisylvian cortex resulted in the maximum mean probability of anomia (maximum probability = 0.012), with large variance in the regions surrounding the posterior superior temporal gyrus, including the posterior middle temporal, angular, and supramarginal gyri. Patients with atypical speech localization were far more likely to have tumors in canonical Broca's or Wernicke's areas (OR 7.21, 95% CI 1.67-31.09, p < 0.01) or to have multilobar tumors (OR 12.58, 95% CI 2.22-71.42, p < 0.01), than were patients with classic speech localization. CONCLUSIONS This study provides statistical probability distribution maps for aphasic errors during cortical stimulation mapping in a patient cohort. Thus, the authors provide an expected probability of inducing speech arrest and anomia from specific 10-mm² cortical bins in an individual patient. In addition, they highlight key regions of interindividual mapping variability that should be considered preoperatively. They believe these results will aid surgeons in their preoperative planning of eloquent cortex resection.

  19. Slider--maximum use of probability information for alignment of short sequence reads and SNP detection.

    PubMed

    Malhis, Nawar; Butterfield, Yaron S N; Ester, Martin; Jones, Steven J M

    2009-01-01

    A plethora of alignment tools have been created that are designed to best fit different types of alignment conditions. While some of these are made for aligning Illumina Sequence Analyzer reads, none of them fully utilizes its probability (prb) output. In this article, we introduce a new alignment approach (Slider) that reduces the alignment problem space by utilizing each read base's probabilities given in the prb files. Compared with other aligners, Slider has higher alignment accuracy and efficiency. In addition, given that Slider matches bases with probabilities other than the most probable, it significantly reduces the percentage of base mismatches. The result is that its SNP predictions are more accurate than other SNP prediction approaches used today that start from the most probable sequence, including those using base quality.
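
    The core scoring idea, multiplying the per-cycle base probabilities from the prb file along a candidate reference window, can be sketched as follows. This is a toy illustration of the principle, not Slider's actual index-based algorithm:

        import numpy as np

        BASES = 'ACGT'

        def alignment_prob(prb, ref_window):
            """Probability the read arises from ref_window, given per-cycle
            base probabilities (prb: read_length x 4, columns A, C, G, T)."""
            idx = [BASES.index(b) for b in ref_window]
            return float(np.prod(prb[np.arange(len(ref_window)), idx]))

        # toy prb matrix for a 3-cycle read that most probably reads "ACG"
        prb = np.array([[0.9, 0.05, 0.03, 0.02],
                        [0.1, 0.80, 0.05, 0.05],
                        [0.2, 0.10, 0.60, 0.10]])
        print(alignment_prob(prb, "ACG"))   # 0.9 * 0.8 * 0.6 = 0.432
        print(alignment_prob(prb, "ACT"))   # the cycle-3 mismatch is soft: 0.072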

  20. Open Mess Management Career Ladder AFS 742X0 and CEM Code 74200.

    DTIC Science & Technology

    1980-12-01

    I. OPEN MESS MANAGERS (SPC049, N=187) II. FOOD / BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076, N=92) a. Bar and Operations Managers (GRP085...said they will or probably will reenlist. II. FOOD / BEVERAGE OPERATIONS ASSISTANT MANAGERS CLUSTER (GRP076). This cluster of 92 respondents (23...operation of open mess food and beverage functions. The majority of these airmen identify themselves as Assistant Managers of open mess facilities and are

  1. Occupancy Modeling Species-Environment Relationships with Non-ignorable Survey Designs.

    PubMed

    Irvine, Kathryn M; Rodhouse, Thomas J; Wright, Wilson J; Olsen, Anthony R

    2018-05-26

    Statistical models supporting inferences about species occurrence patterns in relation to environmental gradients are fundamental to ecology and conservation biology. A common implicit assumption is that the sampling design is ignorable and does not need to be formally accounted for in analyses. The analyst assumes data are representative of the desired population and statistical modeling proceeds. However, if datasets from probability and non-probability surveys are combined or unequal selection probabilities are used, the design may be non-ignorable. We outline the use of pseudo-maximum likelihood estimation for site-occupancy models to account for such non-ignorable survey designs. This estimation method accounts for the survey design by properly weighting the pseudo-likelihood equation. In our empirical example, legacy and newer randomly selected locations were surveyed for bats to bridge a historic statewide effort with an ongoing nationwide program. We provide a worked example using bat acoustic detection/non-detection data and show how analysts can diagnose whether their design is ignorable. Using simulations, we assessed whether our approach is viable for modeling datasets composed of sites contributed outside of a probability design. Pseudo-maximum likelihood estimates differed from the usual maximum likelihood occupancy estimates for some bat species. Using simulations, we show the maximum likelihood estimator of species-environment relationships with non-ignorable sampling designs was biased, whereas the pseudo-likelihood estimator was design-unbiased. However, in our simulation study the designs composed of a large proportion of legacy or non-probability sites resulted in estimation issues for standard errors. These issues were likely a result of highly variable weights confounded by small sample sizes (5% or 10% sampling intensity and 4 revisits). Aggregating datasets from multiple sources logically supports larger sample sizes and potentially increases spatial extents for statistical inferences. Our results suggest that ignoring the mechanism for how locations were selected for data collection (e.g., the sampling design) could result in erroneous model-based conclusions. Therefore, in order to ensure robust and defensible recommendations for evidence-based conservation decision-making, the survey design information in addition to the data themselves must be available for analysts. Details for constructing the weights used in estimation and code for implementation are provided. This article is protected by copyright. All rights reserved.
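
    The weighting idea can be sketched as a Horvitz-Thompson-style weighted likelihood. The sketch below uses a bare logistic occurrence model rather than a full site-occupancy model (no detection sub-model), with weights equal to inverse selection probabilities; all data are simulated stand-ins:

        import numpy as np
        from scipy.optimize import minimize

        def weighted_nll(beta, X, y, w):
            """Negative pseudo-log-likelihood: each site's contribution is
            scaled by w_i = 1 / P(site i selected by the survey design)."""
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            p = np.clip(p, 1e-9, 1 - 1e-9)
            return -np.sum(w * (y * np.log(p) + (1 - y) * np.log(1 - p)))

        rng = np.random.default_rng(3)
        n = 400
        X = np.column_stack([np.ones(n), rng.standard_normal(n)])
        truth = np.array([-0.5, 1.0])     # intercept, environmental effect
        y = (rng.random(n) < 1 / (1 + np.exp(-X @ truth))).astype(float)
        w = 1.0 / rng.uniform(0.2, 1.0, n)   # inverse selection probabilities

        fit = minimize(weighted_nll, x0=np.zeros(2), args=(X, y, w))
        print("design-weighted estimates:", fit.x)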

  2. Synergistic interplay of Gβγ and phosphatidylinositol 4,5-bisphosphate dictates Kv7.4 channel activity.

    PubMed

    Povstyan, Oleksandr V; Barrese, Vincenzo; Stott, Jennifer B; Greenwood, Iain A

    2017-02-01

    Kv7.4 channels are key determinants of arterial contractility and cochlear mechanosensation that, like all Kv7 channels, have an obligatory requirement for phosphatidylinositol 4,5-bisphosphate (PIP2). βγ G proteins (Gβγ) have been identified as novel positive regulators of Kv7.4. The present study ascertained whether Gβγ increased Kv7.4 open probability through an increased sensitivity to PIP2. In HEK cells stably expressing Kv7.4, PIP2 or Gβγ increased open probability in a concentration-dependent manner. Depleting PIP2 prevented any Gβγ-mediated stimulation whilst an array of Gβγ inhibitors prohibited any PIP2-induced current enhancement. A combination of PIP2 and Gβγ at sub-efficacious concentrations increased channel open probability considerably. The stimulatory effects of three Kv7.2-7.5 channel activators were also lost by PIP2 depletion or Gβγ inhibitors. This study alters substantially our understanding of the fundamental processes that dictate Kv7.4 activity, revealing a more complex and subtle paradigm where the reliance on local phosphoinositide is dictated by interaction with Gβγ.

  3. Fluctuation relation for heat exchange in Markovian open quantum systems

    NASA Astrophysics Data System (ADS)

    Ramezani, M.; Golshani, M.; Rezakhani, A. T.

    2018-04-01

    A fluctuation relation for the heat exchange of an open quantum system under a thermalizing Markovian dynamics is derived. We show that the probability that the system absorbs an amount of heat from its bath, at a given time interval, divided by the probability of the reverse process (releasing the same amount of heat to the bath) is given by an exponential factor which depends on the amount of heat and the difference between the temperatures of the system and the bath. Interestingly, this relation is akin to the standard form of the fluctuation relation (for forward-backward dynamics). We also argue that the probability of the violation of the second law of thermodynamics in the form of the Clausius statement (i.e., net heat transfer from a cold system to its hot bath) drops exponentially with both the amount of heat and the temperature differences of the baths.
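
    This statement parallels the standard exchange fluctuation relation of Jarzynski and Wójcik; written with our own (assumed) labeling, for Q > 0 the heat absorbed by the system over the interval τ,

        \frac{P_\tau(+Q)}{P_\tau(-Q)} = e^{\Delta\beta\, Q},
        \qquad
        \Delta\beta = \frac{1}{k_B T_{\mathrm{sys}}} - \frac{1}{k_B T_{\mathrm{bath}}}.

    For T_sys < T_bath we have Δβ > 0, so releasing heat to the hotter bath (Q < 0) is exponentially suppressed in both |Q| and the temperature difference, which is the Clausius-type statement quoted above.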

  4. Fluctuation relation for heat exchange in Markovian open quantum systems.

    PubMed

    Ramezani, M; Golshani, M; Rezakhani, A T

    2018-04-01

    A fluctuation relation for the heat exchange of an open quantum system under a thermalizing Markovian dynamics is derived. We show that the probability that the system absorbs an amount of heat from its bath, at a given time interval, divided by the probability of the reverse process (releasing the same amount of heat to the bath) is given by an exponential factor which depends on the amount of heat and the difference between the temperatures of the system and the bath. Interestingly, this relation is akin to the standard form of the fluctuation relation (for forward-backward dynamics). We also argue that the probability of the violation of the second law of thermodynamics in the form of the Clausius statement (i.e., net heat transfer from a cold system to its hot bath) drops exponentially with both the amount of heat and the temperature differences of the baths.

  5. Determining open cluster membership. A Bayesian framework for quantitative member classification

    NASA Astrophysics Data System (ADS)

    Stott, Jonathan J.

    2018-01-01

    Aims: My goal is to develop a quantitative algorithm for assessing open cluster membership probabilities. The algorithm is designed to work with single-epoch observations. In its simplest form, only one set of program images and one set of reference images are required. Methods: The algorithm is based on a two-stage joint astrometric and photometric assessment of cluster membership probabilities. The probabilities were computed within a Bayesian framework using any available prior information. Where possible, the algorithm emphasizes simplicity over mathematical sophistication. Results: The algorithm was implemented and tested against three observational fields using published survey data. M 67 and NGC 654 were selected as cluster examples while a third, cluster-free, field was used for the final test data set. The algorithm shows good quantitative agreement with the existing surveys and has a false-positive rate significantly lower than the astrometric or photometric methods used individually.
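
    The two-stage combination can be written per star as a posterior that multiplies astrometric and photometric likelihoods. A schematic sketch of the framework, with all densities and the example numbers assumed:

        def membership_probability(prior, astro_member, astro_field,
                                   photo_member, photo_field):
            """Posterior cluster-membership probability for one star.
            prior: prior membership fraction of the field; the other arguments
            are the star's likelihoods under cluster and field models."""
            like_member = astro_member * photo_member   # joint likelihood,
            like_field = astro_field * photo_field      # assuming independence
            num = prior * like_member
            return num / (num + (1.0 - prior) * like_field)

        # example: astrometry favours membership, photometry mildly agrees
        print(membership_probability(0.3, 2.0, 0.5, 1.2, 0.9))   # about 0.70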

  6. 45 CFR 1183.36 - Procurement.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... HUMANITIES INSTITUTE OF MUSEUM AND LIBRARY SERVICES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... in a manner providing full and open competition consistent with the standards of § 1183.36. Some of... ensure maximum open and free competition. Also, grantees and subgrantees will not preclude potential...

  7. 45 CFR 1183.36 - Procurement.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... HUMANITIES INSTITUTE OF MUSEUM AND LIBRARY SERVICES UNIFORM ADMINISTRATIVE REQUIREMENTS FOR GRANTS AND... in a manner providing full and open competition consistent with the standards of § 1183.36. Some of... ensure maximum open and free competition. Also, grantees and subgrantees will not preclude potential...

  8. Understanding the Thermodynamics of Biological Order

    ERIC Educational Resources Information Center

    Peterson, Jacob

    2012-01-01

    By growth in size and complexity (i.e., changing from more probable to less probable states), plants and animals appear to defy the second law of thermodynamics. The usual explanation describes the input of nutrient and sunlight energy into open thermodynamic systems. However, energy input alone does not address the ability to organize and create…

  9. Resonances in the cumulative reaction probability for a model electronically nonadiabatic reaction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, J.; Bowman, J.M.

    1996-05-01

    The cumulative reaction probability, flux-flux correlation function, and rate constant are calculated for a model, two-state, electronically nonadiabatic reaction, given by Shin and Light [S. Shin and J. C. Light, J. Chem. Phys. 101, 2836 (1994)]. We apply straightforward generalizations of the flux matrix/absorbing boundary condition approach of Miller and co-workers to obtain these quantities. The upper adiabatic electronic potential supports bound states, and these manifest themselves as "recrossing" resonances in the cumulative reaction probability, at total energies above the barrier to reaction on the lower adiabatic potential. At energies below the barrier, the cumulative reaction probability for the coupled system is shifted to higher energies relative to the one obtained for the ground state potential. This is due to the effect of an additional effective barrier caused by the nuclear kinetic operator acting on the ground state, adiabatic electronic wave function, as discussed earlier by Shin and Light. Calculations are reported for five sets of electronically nonadiabatic coupling parameters. © 1996 American Institute of Physics.

  10. Power loss in open cavity diodes and a modified Child-Langmuir law

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biswas, Debabrata; Kumar, Raghwendra; Puri, R.R.

    Diodes used in most high power devices are inherently open. It is shown that under such circumstances, there is a loss of electromagnetic radiation leading to a lower critical current as compared to closed diodes. The power loss can be incorporated in the standard Child-Langmuir framework by introducing an effective potential. The modified Child-Langmuir law can be used to predict the maximum power loss for a given plate separation and potential difference as well as the maximum transmitted current for this power loss. The effectiveness of the theory is tested numerically.
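
    For context, the standard Child-Langmuir space-charge-limited current density for a planar diode with gap d and potential difference V is

        J_{\mathrm{CL}} = \frac{4\varepsilon_0}{9}\sqrt{\frac{2e}{m}}\,\frac{V^{3/2}}{d^2},

    with e and m the electron charge and mass. The modification described above can be read as replacing V with an effective potential that absorbs the radiated power loss; the paper's precise form should be consulted rather than inferred from this sketch.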

  11. Redescription of Spirodentalium Walcott (Gastropoda: Late Cambrian) from Wisconsin (USA).

    USGS Publications Warehouse

    Yochelson, E.L.

    1987-01-01

    Spirodentalium Walcott, 1890, was originally described as a scaphopod. A reinterpretation of the type lot suggests that its overall shape is that of an open-coiled gastropod. If so, this is probably the earliest known open-coiled form and it is sinistral in coiling direction. -Author

  12. College Students' Openness toward Autism Spectrum Disorders: Improving Peer Acceptance

    ERIC Educational Resources Information Center

    Nevill, Rose E. A.; White, Susan W.

    2011-01-01

    One probable consequence of rising rates of autism spectrum disorder diagnosis in individuals without co-occurring intellectual disability is that more young adults with diagnoses or traits of ASD will attend college and require appropriate supports. This study sought to explore college students' openness to peers who demonstrate…

  13. Alternative probability theories for cognitive psychology.

    PubMed

    Narens, Louis

    2014-01-01

    Various proposals for generalizing event spaces for probability functions have been put forth in the mathematical, scientific, and philosophic literatures. In cognitive psychology such generalizations are used for explaining puzzling results in decision theory and for modeling the influence of context effects. This commentary discusses proposals for generalizing probability theory to event spaces that are not necessarily boolean algebras. Two prominent examples are quantum probability theory, which is based on the set of closed subspaces of a Hilbert space, and topological probability theory, which is based on the set of open sets of a topology. Both have been applied to a variety of cognitive situations. This commentary focuses on how event space properties can influence probability concepts and impact cognitive modeling. Copyright © 2013 Cognitive Science Society, Inc.

  14. The utility of Bayesian predictive probabilities for interim monitoring of clinical trials

    PubMed Central

    Connor, Jason T.; Ayers, Gregory D; Alvarez, JoAnn

    2014-01-01

    Background Bayesian predictive probabilities can be used for interim monitoring of clinical trials to estimate the probability of observing a statistically significant treatment effect if the trial were to continue to its predefined maximum sample size. Purpose We explore settings in which Bayesian predictive probabilities are advantageous for interim monitoring compared to Bayesian posterior probabilities, p-values, conditional power, or group sequential methods. Results For interim analyses that address prediction hypotheses, such as futility monitoring and efficacy monitoring with lagged outcomes, only predictive probabilities properly account for the amount of data remaining to be observed in a clinical trial and have the flexibility to incorporate additional information via auxiliary variables. Limitations Computational burdens limit the feasibility of predictive probabilities in many clinical trial settings. The specification of prior distributions brings additional challenges for regulatory approval. Conclusions The use of Bayesian predictive probabilities enables the choice of logical interim stopping rules that closely align with the clinical decision making process. PMID:24872363
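
    A minimal sketch of an interim predictive-probability calculation for a single-arm binary-outcome trial, under illustrative assumptions (beta-binomial model with a uniform prior, one-sided exact test at the final analysis):

        from scipy import stats

        def predictive_probability(x, n, n_max, p0=0.5, alpha=0.05, a=1.0, b=1.0):
            """P(final test significant at n_max | x successes in n so far)."""
            n_rem = n_max - n
            post_a, post_b = a + x, b + n - x
            pp = 0.0
            for k in range(n_rem + 1):
                # posterior predictive probability of k future successes
                w = stats.betabinom.pmf(k, n_rem, post_a, post_b)
                test = stats.binomtest(x + k, n_max, p0, alternative='greater')
                pp += w * (test.pvalue < alpha)
            return pp

        print(predictive_probability(x=22, n=35, n_max=60))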

  15. Lava flow risk maps at Mount Cameroon volcano

    NASA Astrophysics Data System (ADS)

    Favalli, M.; Fornaciai, A.; Papale, P.; Tarquini, S.

    2009-04-01

    Mount Cameroon, in southwest Cameroon, is one of the most active volcanoes in Africa. Rising 4095 m asl, it has erupted nine times since the beginning of the past century, most recently in 1999 and 2000. Documented eruptions at Mount Cameroon comprise moderate explosive and effusive eruptions that occurred from both summit and flank vents. A 1922 SW-flank eruption produced a lava flow that reached the Atlantic coast near the village of Biboundi, and a lava flow from a 1999 south-flank eruption stopped only 200 m from the sea, threatening the villages of Bakingili and Dibunscha. More than 450,000 people live or work around the volcano, making the risk from lava flow invasion a great concern. In this work we propose both conventional hazard and risk maps and novel quantitative risk maps which relate vent locations to the expected total damage to existing buildings. These maps are based on lava flow simulations starting from 70,000 different vent locations, a probability distribution of vent opening, a law for the maximum length of lava flows, and a database of buildings. The simulations were run over the SRTM Digital Elevation Model (DEM) using DOWNFLOW, a fast DEM-driven model that is able to compute detailed invasion areas of lava flows from each vent. We present three different types of risk maps (90-m pixels) for buildings around Mount Cameroon volcano: (1) a conventional risk map that assigns a probability of devastation by lava flows to each pixel representing buildings; (2) a reversed risk map where each pixel expresses the total damage expected as a consequence of vent opening in that pixel (the damage is expressed as the total surface of urbanized areas invaded); (3) maps of the lava catchments of the main towns around the volcano, within which pixels are classified according to the expected impact they would have on the corresponding town in the case of a vent opening in that pixel. Maps of types (1) and (3) are useful for long-term planning. Maps of types (2) and (3) are useful at the onset of a new eruption, when a vent forms. The combined use of these maps provides an efficient tool for lava flow risk assessment at Mount Cameroon.

  16. The effect of aortic morphology on peri-operative mortality of ruptured abdominal aortic aneurysm.

    PubMed

    2015-06-01

    To investigate whether aneurysm shape and extent, which indicate whether a patient with ruptured abdominal aortic aneurysm (rAAA) is eligible for endovascular repair (EVAR), influence the outcome of both EVAR and open surgical repair. The influence of six morphological parameters (maximum aortic diameter, aneurysm neck diameter, length and conicality, proximal neck angle, and maximum common iliac diameter) on mortality and reinterventions within 30 days was investigated in rAAA patients randomized before morphological assessment in the Immediate Management of the Patient with Rupture: Open Versus Endovascular strategies (IMPROVE) trial. Patients with a proven diagnosis of rAAA, who underwent repair and had their admission computerized tomography scan submitted to the core laboratory, were included. Among 458 patients (364 men, mean age 76 years), who had either EVAR (n = 177) or open repair (n = 281) started, there were 155 deaths and 88 re-interventions within 30 days of randomization analysed according to a pre-specified plan. The mean maximum aortic diameter was 8.6 cm. There were no substantial correlations between the six morphological variables. Aneurysm neck length was shorter in those undergoing open repair (vs. EVAR). Aneurysm neck length (mean 23.3, SD 16.1 mm) was inversely associated with mortality for open repair and overall: adjusted OR 0.72 (95% CI 0.57, 0.92) for each 16 mm (SD) increase in length. There were no convincing associations of morphological parameters with reinterventions. Short aneurysm necks adversely influence mortality after open repair of rAAA and preclude conventional EVAR. This may help explain why observational studies, but not randomized trials, have shown an early survival benefit for EVAR. ISRCTN 48334791. © The Author 2015. Published by Oxford University Press on behalf of the European Society of Cardiology.

  17. Evaluation of Image Segmentation and Object Recognition Algorithms for Image Parsing

    DTIC Science & Technology

    2013-09-01

    generation of the features from the key points. OpenCV uses Euclidean distance to match the key points and has the option to use Manhattan distance...feature vector includes polarity and intensity information. Final step is matching the key points. In OpenCV, Euclidean distance or Manhattan...the code below is one way and OpenCV offers the function radiusMatch (a pair must have a distance less than a given maximum distance). OpenCV's
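
    A short sketch of radius-based key point matching in OpenCV's Python bindings. ORB with Hamming distance is used here so the example is self-contained with binary descriptors; the Euclidean and Manhattan distances mentioned in the text correspond to cv2.NORM_L2 and cv2.NORM_L1 for float descriptors such as SIFT. File names and the radius are placeholders:

        import cv2

        img1 = cv2.imread('query.png', cv2.IMREAD_GRAYSCALE)
        img2 = cv2.imread('train.png', cv2.IMREAD_GRAYSCALE)

        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # radiusMatch keeps only pairs whose descriptor distance is below the cap
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        matches = matcher.radiusMatch(des1, des2, maxDistance=40)
        good = [min(m, key=lambda d: d.distance) for m in matches if m]
        print(len(good), "query key points matched within the radius")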

  18. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    NASA Astrophysics Data System (ADS)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China earthquake. Our preferred slip model is composed of three segments with most of the slip occurring within 15 km depth and the maximum slip reaching 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.

  19. Forecasting of the selected features of Poaceae (R. Br.) Barnh., Artemisia L. and Ambrosia L. pollen season in Szczecin, north-western Poland, using Gumbel's distribution.

    PubMed

    Puc, Małgorzata; Wolski, Tomasz

    2013-01-01

    The allergenic pollen content of the atmosphere varies according to climate, biogeography and vegetation. Minimisation of pollen allergy symptoms is related to the possibility of avoiding large doses of the allergen. Measurements performed in Szczecin over a period of 13 years (2000-2012 inclusive) permitted prediction of theoretical maximum concentrations of pollen grains and their probability for the pollen seasons of Poaceae, Artemisia and Ambrosia. Moreover, the probabilities were determined that a given date marks the beginning of the pollen season, the date of the maximum pollen count, the Seasonal Pollen Index value, and the number of days with pollen counts above threshold values. Aerobiological monitoring was conducted using a Hirst volumetric trap (Lanzoni VPPS). Linear trends with the coefficient of determination (R²) were calculated. A model for long-term forecasting was constructed using a method based on Gumbel's distribution. A statistically significant negative correlation was determined between the duration of the pollen season of Poaceae and Artemisia and the Seasonal Pollen Index value. Seasonal total pollen counts of Artemisia and Ambrosia showed a strong and statistically significant decreasing tendency. On the basis of Gumbel's distribution, a model was proposed for Szczecin, allowing prediction of the probabilities of the maximum pollen count values that can appear once in e.g. 5, 10 or 100 years. Short pollen seasons are characterised by a higher intensity of pollination than long ones. Prediction of the maximum pollen count values, dates of the pollen season beginning, and the number of days with pollen counts above the threshold, on the basis of Gumbel's distribution, is expected to lead to improvement in the prophylaxis and therapy of persons allergic to pollen.
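
    A minimal sketch of the Gumbel return-level idea behind such forecasts, assuming scipy.stats.gumbel_r; the 13 annual maxima below are invented placeholders, not the Szczecin counts.

        # Fit annual maxima and ask what value is exceeded once in T years on average.
        import numpy as np
        from scipy.stats import gumbel_r

        annual_max = np.array([310, 420, 265, 510, 380, 295, 460,
                               350, 405, 280, 330, 475, 390])     # grains/m3 (fake)

        loc, scale = gumbel_r.fit(annual_max)
        for T in (5, 10, 100):
            level = gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
            print(f"once-in-{T}-years maximum: {level:.0f} grains/m3")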

  20. Multiscale Resilience of Complex Systems

    NASA Astrophysics Data System (ADS)

    Tchiguirinskaia, I.; Schertzer, D. J. M.; Giangola-Murzyn, A.; Hoang Cong, T.

    2014-12-01

    We first argue the need for well-defined resilience metrics to better evaluate the resilience of complex systems such as (peri-)urban flood management systems. We review both the successes and limitations of resilience metrics in the framework of dynamical systems and their generalization in the framework of viability theory. We then point out that the most important step is to define resilience across scales instead of at a single given scale. Our preliminary, critical analysis of a series of attempts to define an operational resilience metric led us to consider a scale-invariant metric based on the scale-independent codimension of extreme singularities. Multifractal downscaling of climate scenarios can be considered as a first illustration. We focus on a flood scenario evaluation method with the help of two singularities, γ_s and γ_Max, corresponding respectively to an effective and a probable maximum singularity, which yield an innovative framework to address the issues of flood resilience in a scale-independent manner. Indeed, the stationarity of the universal multifractal parameters results in a rather stable value of the probable maximum singularity γ_Max. By fixing the limit of acceptability for a maximum flood water depth at a given scale, with a corresponding singularity, we effectively fix the threshold singularity γ_s as a criterion of the flood resilience we accept. Various scenarios of flood resilience measures can then be simulated with the help of Multi-Hydro under upcoming climate scenarios. The scenarios that result in estimates of either γ_Max or γ_s below the pre-selected γ_s value will assure the effective flood resilience of the whole modelled system across scales. The research for this work was supported, in part, by the EU FP7 SMARTesT and INTERREG IVB RainGain projects.

  1. The analysis of the seasonal variations of equatorial plasma bubble occurrence observed from Oukaimeden Observatory, Morocco

    NASA Astrophysics Data System (ADS)

    Amine, Lagheryeb; Zouhair, Benkhaldoun; Jonathan, Makela; Mohamed, Kaab; Aziza, Bounhir; Brian, Hardin; Dan, Fisher; Timothy, Duly

    2016-04-01

    We analyse the seasonal variations of equatorial plasma bubble (EPB) occurrence using the 630.0 nm airglow images collected by the PICASSO imager deployed at the Oukaimeden observatory in Morocco. Data have been taken from November 2013 to December 2015. We show the monthly average rate of appearance of EPBs. A maximum probability for bubble development is seen in the data in January and between late February and early March. We also observe maximum periods of appearance in which the plasma bubbles are observed on 3-5 successive nights, and we discuss their connection with solar activity at storm time. Future analysis will compare the probability of bubble occurrence at our site with data collected at other observation sites.

  2. Security Analysis of Measurement-Device-Independent Quantum Key Distribution in Collective-Rotation Noisy Environment

    NASA Astrophysics Data System (ADS)

    Li, Na; Zhang, Yu; Wen, Shuang; Li, Lei-lei; Li, Jian

    2018-01-01

    Noise is a problem that communication channels cannot avoid. It is thus beneficial to analyze the security of MDI-QKD in a noisy environment. An analysis model for collective-rotation noise is introduced, and information theory methods are used to analyze the security of the protocol. The maximum amount of information that Eve can eavesdrop is 50%, and the eavesdropping can always be detected if the noise level ɛ ≤ 0.68. Therefore, the MDI-QKD protocol is secure as a quantum key distribution protocol. The maximum probability that the relay outputs successful results is 16% when eavesdropping is present. Moreover, the probability that the relay outputs successful results is higher with eavesdropping than without it. The paper validates that the MDI-QKD protocol has better robustness.

  3. Maximum number of habitable planets at the time of Earth's origin: new hints for panspermia?

    PubMed

    von Bloh, Werner; Franck, Siegfried; Bounama, Christine; Schellnhuber, Hans-Joachim

    2003-04-01

    New discoveries have fuelled the ongoing discussion of panspermia, i.e. the transport of life from one planet to another within the solar system (interplanetary panspermia) or even between different planetary systems (interstellar panspermia). The main factor for the probability of interstellar panspermia is the average density of stellar systems containing habitable planets. The combination of recent results for the formation rate of Earth-like planets with our estimations of extrasolar habitable zones allows us to determine the number of habitable planets in the Milky Way over cosmological time scales. We find that the number of habitable planets reached a maximum around the time of Earth's origin. If it occurred at all, interstellar panspermia was therefore most probable at that time and may have kick-started life on our planet.

  4. Maximum efficiency of state-space models of nanoscale energy conversion devices

    NASA Astrophysics Data System (ADS)

    Einax, Mario; Nitzan, Abraham

    2016-07-01

    The performance of nano-scale energy conversion devices is studied in the framework of state-space models where a device is described by a graph comprising states and transitions between them represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.
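
    To make the state-space picture concrete, a hedged sketch (rates invented, not the paper's device model): the steady state of a master equation on a three-state cycle and the net flux across each link, which is what the input/output flux ratios are built from.

        # Steady state and link fluxes of a toy 3-state cyclic network.
        import numpy as np

        # k[i, j] = transition rate from state i to state j.
        k = np.array([[0.0, 2.0, 0.5],
                      [0.3, 0.0, 1.5],
                      [1.0, 0.2, 0.0]])

        # Master equation dp/dt = p @ W with W = k - diag(row sums); solve p W = 0.
        W = k - np.diag(k.sum(axis=1))
        A = np.vstack([W.T, np.ones(3)])               # append normalisation row
        b = np.array([0.0, 0.0, 0.0, 1.0])
        p, *_ = np.linalg.lstsq(A, b, rcond=None)

        for i in range(3):
            j = (i + 1) % 3
            flux = p[i] * k[i, j] - p[j] * k[j, i]
            print(f"net flux {i}->{j}: {flux:.4f}")    # equal on every link of a cycle

    Adding a shortcut link to such a cycle splits this single loop flux, which is the graph-level picture behind the paper's reduced-efficiency result.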

  5. Maximum efficiency of state-space models of nanoscale energy conversion devices.

    PubMed

    Einax, Mario; Nitzan, Abraham

    2016-07-07

    The performance of nano-scale energy conversion devices is studied in the framework of state-space models where a device is described by a graph comprising states and transitions between them represented by nodes and links, respectively. Particular segments of this network represent input (driving) and output processes whose properly chosen flux ratio provides the energy conversion efficiency. Simple cyclical graphs yield Carnot efficiency for the maximum conversion yield. We give a general proof that opening a link that separates the two driving segments always leads to reduced efficiency. We illustrate this general result with simple models of a thermoelectric nanodevice and an organic photovoltaic cell. In the latter, an intersecting link of the above type corresponds to non-radiative carrier recombination, and the reduced maximum efficiency is manifested as a smaller open-circuit voltage.

  6. History of human activity in last 800 years reconstructed from combined archive data and high-resolution analyses of varved lake sediments from Lake Czechowskie, Northern Poland

    NASA Astrophysics Data System (ADS)

    Słowiński, Michał; Tyszkowski, Sebastian; Ott, Florian; Obremska, Milena; Kaczmarek, Halina; Theuerkauf, Martin; Wulf, Sabine; Brauer, Achim

    2016-04-01

    The aim of the study was to reconstruct human and landscape development in the Tuchola Pinewoods (Northern Poland) during the last 800 years. We apply an approach that combines historic maps and documents with pollen data. Pollen data were obtained from varved lake sediments at a resolution of 5 years. The chronology of the sediment record is based on varve counting, AMS 14C dating, 137Cs activity concentration measurements and tephrochronology (Askja AD 1875). We applied the REVEALS model to translate pollen percentage data into regional plant abundances. The interpretation of the pollen record is furthermore based on pollen accumulation rate data. The pollen record and historic documents show similar trends in vegetation development. During the first phase (AD 1200-1412), the Lake Czechowskie area was still largely forested with Quercus, Carpinus and Pinus forests. Vegetation was more open during the second phase (AD 1412-1776), and reached maximum openness during the third phase (AD 1776-1905). Furthermore, intensified forest management led to a transformation from mixed to pine-dominated forests during this period. Since the early 20th century, the forest cover has increased again, with dominance of the Scots pine in the stand. While pollen and historic data show similar trends, they differ substantially in the degree of openness during the four phases, with pollen data commonly suggesting more open conditions. We discuss potential causes for this discrepancy, which include unsuitable parameter settings in REVEALS and unknown changes in forest structure. Using pollen accumulation data as a third proxy record, we aim to identify the most probable causes. Finally, we discuss the observed vegetation change in relation to the socio-economic development of the area. This study is a contribution to the Virtual Institute of Integrated Climate and Landscape Evolution Analysis - ICLEA - of the Helmholtz Association and National Science Centre, Poland (grant No. 2011/01/B/ST10/07367 and 2015/17/B/ST10/03430).

  7. Development of maintenance practices for Oregon F-Mix : Interim report

    DOT National Transportation Integrated Search

    1999-08-01

    The Oregon Department of Transportation (ODOT) produces an open-graded asphalt pavement(F-mix) unlike most open-graded pavements used in the United States. Its 25 mm maximum aggregate size and typical lift thickness of 50 mm is more like the porous a...

  8. Compaction and measurement of field density for Oregon open-graded (F-MIX) asphalt pavement

    DOT National Transportation Integrated Search

    1999-06-01

    A research project conducted by Oregon State University (OSU) and the Oregon Department of Transportation (ODOT) investigated compaction of Oregon F-mix asphalt pavement, an open-graded mix with 25-mm maximum size aggregate and air voids typically in...

  9. Multi-bottle, no compressor, mean pressure control system for a Stirling engine

    DOEpatents

    Corey, John A.

    1990-01-01

    The invention relates to an apparatus for mean-pressure control of a Stirling engine without the need for a compressor. The invention includes a multi-tank system with at least one high-pressure tank and one low-pressure tank. Gas flows from the engine through the maximum-pressure supply line to the high-pressure tank when a first valve is opened, until the maximum pressure of the engine drops below that of the high-pressure tank, at which point an inlet regulator opens to permit gas flow from the engine to the low-pressure tank. When gas flows toward the engine, it flows through the minimum-pressure supply line when a second valve is opened, from the low-pressure tank until that tank reaches the engine's minimum pressure level, at which time the outlet regulator opens, permitting gas to be supplied from the high-pressure tank to the engine. Check valves between the two tanks prevent any backflow of gas.

  10. Cardiac sodium channel Markov model with temperature dependence and recovery from inactivation.

    PubMed Central

    Irvine, L A; Jafri, M S; Winslow, R L

    1999-01-01

    A Markov model of the cardiac sodium channel is presented. The model is similar to the CA1 hippocampal neuron sodium channel model developed by Kuo and Bean (1994. Neuron. 12:819-829) with the following modifications: 1) an additional open state is added; 2) open-inactivated transitions are made voltage-dependent; and 3) channel rate constants are exponential functions of enthalpy, entropy, and voltage and have explicit temperature dependence. Model parameters are determined using a simulated annealing algorithm to minimize the error between model responses and various experimental data sets. The model reproduces a wide range of experimental data including ionic currents, gating currents, tail currents, steady-state inactivation, recovery from inactivation, and open time distributions over a temperature range of 10 degrees C to 25 degrees C. The model also predicts measures of single channel activity such as first latency, probability of a null sweep, and probability of reopening. PMID:10096885
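
    A hedged illustration of extracting such single-channel measures from a Markov scheme: a Gillespie simulation of a generic C1-C2-O chain with invented rates (not the fitted cardiac Na channel parameters of the paper).

        # Gillespie simulation of a 3-state channel; estimates open probability.
        import numpy as np

        rng = np.random.default_rng(0)
        # rate[i][j]: transition rate (1/ms) from state i to j; 0=C1, 1=C2, 2=O.
        rate = np.array([[0.0, 2.0, 0.0],
                         [1.0, 0.0, 3.0],
                         [0.0, 0.5, 0.0]])

        def simulate(t_end=1000.0):
            t, s, t_open = 0.0, 0, 0.0
            while t < t_end:
                rates = rate[s]
                total = rates.sum()
                dwell = rng.exponential(1.0 / total)
                if s == 2:                               # accumulate open time
                    t_open += min(dwell, t_end - t)
                t += dwell
                s = rng.choice(3, p=rates / total)
            return t_open / t_end

        # Detailed balance on this linear chain gives Po = 12/15 = 0.8 exactly.
        print(f"estimated open probability: {simulate():.3f}")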

  11. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined by integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a small number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
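
    A schematic sketch of the canonical construction described here, p(m) ∝ exp(−βE(m)), on a toy one-dimensional parameter grid; the error values and the specified expectation are invented, and β is found from the constraint by root finding.

        # Maximum-entropy (canonical) distribution with beta set by <E> = E_bar.
        import numpy as np
        from scipy.optimize import brentq

        E = np.linspace(0.0, 2.0, 201)                 # error function on a 1-D grid
        E_bar = 0.6                                    # specified expectation value

        def mean_error(beta):
            w = np.exp(-beta * (E - E.min()))          # shift for numerical stability
            return (E * w).sum() / w.sum() - E_bar

        beta = brentq(mean_error, 1e-6, 100.0)         # sensitivity factor
        p = np.exp(-beta * (E - E.min()))
        p /= p.sum()
        print(f"beta = {beta:.3f}, max-probability error = {E[p.argmax()]:.2f}")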

  12. Measuring real-time streamflow using emerging technologies: Radar, hydroacoustics, and the probability concept

    NASA Astrophysics Data System (ADS)

    Fulton, John; Ostrowski, Joseph

    2008-07-01

    Forecasting streamflow during extreme hydrologic events such as floods can be problematic. This is particularly true when flow is unsteady, and river forecasts rely on models that require uniform-flow rating curves to route water from one forecast point to another. As a result, alternative methods for measuring streamflow are needed to properly route flood waves and account for inertial and pressure forces in natural channels dominated by nonuniform-flow conditions such as mild water surface slopes, backwater, tributary inflows, and reservoir operations. The objective of the demonstration was to use emerging technologies to measure instantaneous streamflow in open channels at two existing US Geological Survey streamflow-gaging stations in Pennsylvania. Surface-water and instream-point velocities were measured using hand-held radar and hydroacoustics. Streamflow was computed using the probability concept, which requires velocity data from a single vertical containing the maximum instream velocity. The percent difference in streamflow at the Susquehanna River at Bloomsburg, PA ranged from 0% to 8% with an average difference of 4% and standard deviation of 8.81 m3/s. The percent difference in streamflow at Chartiers Creek at Carnegie, PA ranged from 0% to 11% with an average difference of 5% and standard deviation of 0.28 m3/s. New generation equipment is being tested and developed to advance the use of radar-derived surface-water velocity and instantaneous streamflow to facilitate the collection and transmission of real-time streamflow that can be used to parameterize hydraulic routing models.
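
    A minimal sketch of the probability-concept computation, assuming Chiu's entropy relation between mean and maximum velocity; u_max, the cross-sectional area, and the entropy parameter M below are invented illustrative values, not the gaged-site calibrations.

        # Discharge from the maximum instream velocity via the entropy parameter M.
        import numpy as np

        def phi(M):
            """Mean-to-maximum velocity ratio from the entropy parameter M."""
            return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M

        u_max = 1.9          # m/s, e.g. from hand-held radar / hydroacoustics
        area = 140.0         # m2, channel cross-sectional area
        M = 8.5              # assumed calibrated for the section; phi(8.5) ~ 0.88

        Q = phi(M) * u_max * area
        print(f"phi = {phi(M):.3f}, Q = {Q:.1f} m3/s")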

  13. Three-input majority function as the unique optimal function for the bias amplification using nonlocal boxes

    NASA Astrophysics Data System (ADS)

    Mori, Ryuhei

    2016-11-01

    Brassard et al. [Phys. Rev. Lett. 96, 250401 (2006), 10.1103/PhysRevLett.96.250401] showed that shared nonlocal boxes with a CHSH (Clauser, Horne, Shimony, and Holt) probability greater than (3+√6)/6 ≈ 0.908 yield trivial communication complexity. There still exists a gap with the maximum CHSH probability (2+√2)/4 ≈ 0.854 achievable by quantum mechanics. It is an interesting open question to determine the exact threshold for trivial communication complexity. Brassard et al.'s idea is based on recursive bias amplification by the three-input majority function. It was not obvious whether another choice of function exhibits stronger bias amplification. We show that the three-input majority function is the unique optimal function, so that one cannot improve the threshold (3+√6)/6 by Brassard et al.'s bias amplification. In this work, protocols for computing the function used for the bias amplification are restricted to be nonadaptive protocols or a particular adaptive protocol inspired by Pawłowski et al.'s protocol for information causality [Nature (London) 461, 1101 (2009), 10.1038/nature08400]. We first show an adaptive protocol inspired by Pawłowski et al.'s protocol, and then show that the adaptive protocol improves upon nonadaptive protocols. Finally, we show that the three-input majority function is the unique optimal function for the bias amplification if we apply the adaptive protocol to each step of the bias amplification.
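
    A schematic numerical reading of where the threshold comes from (not a transcription of the protocol): suppose each majority level boosts a bias ε to 1.5ε − 2ε³ (the maj3 map) while its two box uses attenuate it by (2p−1)². Amplification then wins exactly when 1.5(2p−1)² > 1, i.e. p > (3+√6)/6.

        # Iterate the schematic bias-amplification map on both sides of the threshold.
        import numpy as np

        def amplify(eps, p, levels=40):
            for _ in range(levels):
                eps = (2.0 * p - 1.0) ** 2 * (1.5 * eps - 2.0 * eps ** 3)
            return eps

        threshold = (3.0 + np.sqrt(6.0)) / 6.0         # ~ 0.9082
        for p in (threshold - 0.01, threshold + 0.01):
            print(f"p = {p:.4f}: bias after 40 levels = {amplify(0.01, p):.4f}")

    Below the threshold the bias decays to zero; above it the bias grows, which is the engine of the trivial-communication-complexity result.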

  14. Measuring real-time streamflow using emerging technologies: Radar, hydroacoustics, and the probability concept

    USGS Publications Warehouse

    Fulton, J.; Ostrowski, J.

    2008-01-01

    Forecasting streamflow during extreme hydrologic events such as floods can be problematic. This is particularly true when flow is unsteady, and river forecasts rely on models that require uniform-flow rating curves to route water from one forecast point to another. As a result, alternative methods for measuring streamflow are needed to properly route flood waves and account for inertial and pressure forces in natural channels dominated by nonuniform-flow conditions such as mild water surface slopes, backwater, tributary inflows, and reservoir operations. The objective of the demonstration was to use emerging technologies to measure instantaneous streamflow in open channels at two existing US Geological Survey streamflow-gaging stations in Pennsylvania. Surface-water and instream-point velocities were measured using hand-held radar and hydroacoustics. Streamflow was computed using the probability concept, which requires velocity data from a single vertical containing the maximum instream velocity. The percent difference in streamflow at the Susquehanna River at Bloomsburg, PA ranged from 0% to 8% with an average difference of 4% and standard deviation of 8.81 m3/s. The percent difference in streamflow at Chartiers Creek at Carnegie, PA ranged from 0% to 11% with an average difference of 5% and standard deviation of 0.28 m3/s. New generation equipment is being tested and developed to advance the use of radar-derived surface-water velocity and instantaneous streamflow to facilitate the collection and transmission of real-time streamflow that can be used to parameterize hydraulic routing models.

  15. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability, so MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
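
    A brute-force toy illustrating the first drawback, with an invented (4,2) code and channel values: the word-level and bit-level decisions can disagree, and the bitwise MAP output need not even be a codeword.

        # Word-MAP vs bitwise-MAP decoding over a binary symmetric channel.
        import itertools
        import numpy as np

        G = np.array([[1, 0, 1, 1],
                      [0, 1, 0, 1]])                   # toy (4,2) code generator
        msgs = list(itertools.product([0, 1], repeat=2))
        codewords = np.array([(np.array(m) @ G) % 2 for m in msgs])

        r = np.array([1, 0, 1, 0])                     # received hard decisions
        eps = 0.2                                      # BSC crossover probability

        d = (codewords != r).sum(axis=1)               # Hamming distances to r
        like = eps ** d * (1 - eps) ** (4 - d)         # equiprobable codewords

        word_map = codewords[np.argmax(like)]          # word-level MAP (= ML here)
        p1 = (like[:, None] * codewords).sum(axis=0) / like.sum()
        bit_map = (p1 > 0.5).astype(int)               # bit-level MAP decisions

        print("word-MAP codeword:", word_map)          # a valid codeword
        print("bitwise MAP bits :", bit_map)           # 1 0 1 0 -- not a codeword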

  16. Kinematic Modeling of Normal Voluntary Mandibular Opening and Closing Velocity-Initial Study.

    PubMed

    Gawriołek, Krzysztof; Gawriołek, Maria; Komosa, Marek; Piotrowski, Paweł R; Azer, Shereen S

    2015-06-01

    Determination and quantification of voluntary mandibular movement velocity has not been a thoroughly studied parameter of masticatory movement. This study attempted to objectively define the kinematics of mandibular movement based on numerical (digital) analysis of the relations and interactions of velocity diagram records in healthy female individuals. Using a computerized mandibular scanner (K7 Evaluation Software), 72 diagrams of voluntary mandibular velocity movements (36 for opening, 36 for closing) were recorded for women with clinically normal motor and functional activities of the masticatory system. Multiple measurements were analyzed, focusing on the curve for maximum velocity records. For each movement, the loop of temporary velocities was determined. The diagram was then entered into AutoCAD calculation software, where movement analysis was performed. The real maximum velocity values on opening (Vmax) and closing (V0), the average velocity values (Vav), and the movement accelerations (a) were recorded. Additionally, functional (A1-A2) and geometric (P1-P4) analyses of the loop's constituent phases were performed, and the relations between the obtained areas were defined. Velocity means and correlation coefficient values for the various velocity phases were calculated. The Wilcoxon test produced the following maximum and average velocity results: Vmax = 394 ± 102 mm/s and Vav = 222 ± 61 mm/s for opening, and Vmax = 409 ± 94 mm/s and Vav = 225 ± 55 mm/s for closing. Both mandibular movement range and velocity change showed significant variability, with the highest velocity reached in the P2 phase. Voluntary mandibular velocity presents significant variations between healthy individuals. Maximum velocity is obtained when incisal separation is between 12.8 and 13.5 mm. An improved understanding of the patterns of normal mandibular movements may provide an invaluable diagnostic aid for pathological changes within the masticatory system. © 2014 by the American College of Prosthodontists.

  17. Effect of open rhinoplasty on the smile line.

    PubMed

    Tabrizi, Reza; Mirmohamadsadeghi, Hoori; Daneshjoo, Danadokht; Zare, Samira

    2012-05-01

    Open rhinoplasty is an esthetic surgical technique that is becoming increasingly popular and can affect the nose and upper lip compartments. The aim of this study was to evaluate the effect of open rhinoplasty on tooth show and the smile line. The study participants were 61 patients with a mean age of 24.3 years (range, 17.2 to 39.6 years). The surgical procedure consisted of an esthetic open rhinoplasty without alar resection. Analysis of tooth show was limited to pre- and postoperative (at 12 months) measurements, taken with a ruler at rest and at maximum smile while participants held their heads in a natural position. Statistical analyses were performed with SPSS 13.0, and paired-sample t tests were used to compare mean tooth show before and after the operation. Analysis of the rest position showed no statistically significant change in tooth show (P = .15), but analysis of participants' maximum smile data showed a statistically significant increase in tooth show after surgery (P < .05). Furthermore, Pearson correlation analysis showed a positive relation between rhinoplasty and increased tooth show at maximum smile, especially in subjects with high smile lines. This study shows that the nasolabial compartment is a single unit and that a change in one part may influence the other parts. Further studies should be conducted to investigate these interactions. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  18. Simultaneous minimization of leaf travel distance and tongue-and-groove effect for segmental intensity-modulated radiation therapy.

    PubMed

    Dai, Jianrong; Que, William

    2004-12-07

    This paper introduces a method to simultaneously minimize the leaf travel distance and the tongue-and-groove effect for IMRT leaf sequences to be delivered in segmental mode. The basic idea is to add a large enough number of openings, through cutting or splitting existing openings, for those leaf pairs with fewer openings than the number of segments, so that all leaf pairs have the same number of openings. The cutting positions are optimally determined with a simulated annealing technique called adaptive simulated annealing. The optimization goal is set to minimize the weighted sum of the leaf travel distance and the tongue-and-groove effect. The method's performance was evaluated with 19 beams from three clinical cases: one brain, one head-and-neck and one prostate case. The results show that it can reduce the leaf travel distance and (or) the tongue-and-groove effect; the reduction of the leaf travel distance reaches its maximum of about 50% when minimized alone, and the reduction of the tongue-and-groove effect reaches its maximum of about 70% when minimized alone. The maximum reduction in the leaf travel distance translates to a 1-2 min reduction in treatment delivery time per fraction, depending on leaf speed. If the method is implemented clinically, it could result in significant savings in treatment delivery time and a significant reduction in the wear and tear on MLC mechanics.

  19. Influence of the Oxygen Electrode Open Ratio and Electrolyte Evaporation on the Performance of Li-O2 Batteries.

    PubMed

    Mohazabrad, Farhad; Wang, Fangzhou; Li, Xianglin

    2017-05-10

    This study experimentally investigates and numerically simulates the influence of the cathode electrode open ratio (ratio of the oxygen-opening area to the total electrode surface area) on the performance of Li-O2 batteries at various discharge current densities. At a current density of 0.1 mA/cm2, the maximum discharge capacity is achieved at 25% open ratio among the tested open ratios (0-100%). As the open ratio increases from 25% to 100%, the specific discharge capacity decreases from 995 to 397 mA h/g of carbon. A similar trend is observed at 0.3 mA/cm2, where the maximum discharge capacity is obtained at 3% open ratio among the tested open ratios. The model that assumes the electrode is always fully saturated by the electrolyte does not reproduce the experimental trends, while the model that considers electrolyte loss by evaporation and the volume change of the solid does. The open ratio governs not only the availability of oxygen but also the evaporation of the electrolyte and the contact resistance. Faster evaporation of the electrolyte at a higher open ratio can be the main reason for the decrease of the discharge capacity, especially when the open ratio is relatively high (above 25%). Meanwhile, the contact resistance of the battery, measured by electrochemical impedance spectroscopy (EIS), increases from 3.97 to 7.02 Ω when the open ratio increases from 3% to 95%. The increase of the Ohmic overpotential, however, is negligible (on the order of millivolts) because of the low discharge and charge current rates (on the order of 0.1 mA).

  20. Comparison of VO2 Maximum Obtained from 20 m Shuttle Run and Cycle Ergometer in Children with and without Developmental Coordination Disorder

    ERIC Educational Resources Information Center

    Cairney, John; Hay, John; Veldhuizen, Scott; Faught, Brent

    2010-01-01

    Oxygen consumption at peak physical exertion (VO2 maximum) is the most widely used indicator of cardiorespiratory fitness. The purpose of this study was to compare two protocols for its estimation, cycle ergometer testing and the 20 m shuttle run, among children with and without probable developmental coordination disorder (pDCD). The…

  1. The probability of quantal secretion near a single calcium channel of an active zone.

    PubMed Central

    Bennett, M R; Farnell, L; Gibson, W G

    2000-01-01

    A Monte Carlo analysis has been made of calcium dynamics and quantal secretion at microdomains in which the calcium reaches very high concentrations over distances of <50 nm from a channel and for which calcium dynamics are dominated by diffusion. The kinetics of calcium ions in microdomains due to either the spontaneous or evoked opening of a calcium channel, both of which are stochastic events, are described in the presence of endogenous fixed and mobile buffers. Fluctuations in the number of calcium ions within 50 nm of a channel are considerable, with the standard deviation about half the mean. Within 10 nm of a channel these numbers of ions can give rise to calcium concentrations of the order of 100 microM. The temporal changes in free calcium and calcium bound to different affinity indicators in the volume of an entire varicosity or bouton following the opening of a single channel are also determined. A Monte Carlo analysis is also presented of how the dynamics of calcium ions at active zones, after the arrival of an action potential and the stochastic opening of a calcium channel, determine the probability of exocytosis from docked vesicles near the channel. The synaptic vesicles in active zones are found docked in a complex with their calcium-sensor associated proteins and a voltage-sensitive calcium channel, forming a secretory unit. The probability of quantal secretion from an isolated secretory unit has been determined for different distances of an open calcium channel from the calcium sensor within an individual unit: a threefold decrease in the probability of secretion of a quantum occurs with a doubling of the distance from 25 to 50 nm. The Monte Carlo analysis also shows that the probability of secretion of a quantum is most sensitive to the size of the single-channel current compared with its sensitivity to either the binding rates of the sites on the calcium-sensor protein or to the number of these sites that must bind a calcium ion to trigger exocytosis of a vesicle. PMID:10777721
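
    A hedged Monte Carlo sketch of the microdomain picture: count free ions within 50 nm of a point source (an open channel) under pure diffusion, with no buffers or extrusion. The influx rate (~1 pA of Ca2+ current) and the diffusion coefficient are rough literature-scale values, not the paper's parameters.

        # Random-walk Monte Carlo of ions released at the origin; units: nm, ms.
        import numpy as np

        rng = np.random.default_rng(1)
        D = 220e3            # nm2/ms, free Ca2+ diffusion coefficient (approx.)
        influx = 3000.0      # ions/ms, roughly 1 pA of Ca2+ current
        dt = 1e-4            # ms
        steps = int(0.5 / dt)                     # 0.5 ms of channel opening

        positions = np.empty((0, 3))
        counts = []
        for n in range(steps):
            new = np.zeros((rng.poisson(influx * dt), 3))   # ions enter at origin
            positions = np.vstack([positions, new])
            positions += rng.normal(0.0, np.sqrt(2 * D * dt), positions.shape)
            r = np.linalg.norm(positions, axis=1)
            counts.append((r < 50.0).sum())

        counts = np.array(counts[steps // 2:])    # discard the initial transient
        print(f"ions within 50 nm: mean {counts.mean():.1f}, sd {counts.std():.1f}")

    The run reproduces the qualitative point of the abstract: the count within 50 nm is small and its standard deviation is a sizeable fraction of the mean.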

  2. Maruhn-Greiner Maximum of Uranium Fission for Confirmation of Low Energy Nuclear Reactions LENR via a Compound Nucleus with Double Magic Numbers

    NASA Astrophysics Data System (ADS)

    Hora, H.; Miley, G. H.

    2007-12-01

    One of the most convincing facts about LENR due to deuterons of very high concentration in host metals such as palladium is the measurement of the broad minimum of the reaction probability as a function of the nucleon number A of the generated elements, with a local maximum at A = 153. This is similar to the fission of uranium, with a local maximum at A = 119, which follows from the Maruhn-Greiner theory if the splitting nuclei are excited to about MeV energy. The LENR-generated elements can be documented at any time after the reaction by SIMS or K-shell X-ray excitation to show the very unique distribution with the local maximum. An explanation is based on the strong Debye screening of the Maxwellian deuterons within the degenerate rigid electron background, especially within the swimming electron layer at the metal surface or at interfaces. The deuterons behave like neutrals at distances of about 2 picometers. They may form clusters due to soft attraction in the range above thermal energy. Clusters of 10 pm diameter may react over long times (megaseconds) with Pd nuclei, leading to a double-magic-number compound nucleus which splits, as in fission, into the A = 153 element distribution.

  3. Decision analysis with approximate probabilities

    NASA Technical Reports Server (NTRS)

    Whalen, Thomas

    1992-01-01

    This paper concerns decisions under uncertainty in which the probabilities of the states of nature are only approximately known. Decision problems involving three states of nature are studied, since some key issues do not arise in two-state problems, while probability spaces with more than three states of nature are essentially impossible to graph. The primary focus is on two levels of probabilistic information. In one level, the three probabilities are separately rounded to the nearest tenth, which can lead to sets of rounded probabilities that add up to 0.9, 1.0, or 1.1. In the other level, probabilities are rounded to the nearest tenth in such a way that the rounded probabilities are forced to sum to 1.0. For comparison, six additional levels of probabilistic information, previously analyzed, were also included in the present analysis. A simulation experiment compared four criteria for decision making using linearly constrained probabilities (Maximin, Midpoint, Standard Laplace, and Extended Laplace) under the eight different levels of information about probability. The Extended Laplace criterion, which uses a second-order maximum entropy principle, performed best overall.
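
    A toy version of the comparison (payoff matrix and rounded probabilities invented): Maximin ignores the probabilities entirely, while a Midpoint-style rule renormalises a rounded triple that sums to 0.9, and the two criteria can pick different actions.

        # Maximin vs a midpoint/expectation rule under rounded probabilities.
        import numpy as np

        payoff = np.array([[10.0, 2.0, -4.0],          # rows: actions, cols: states
                           [ 6.0, 5.0,  1.0],
                           [ 3.0, 3.0,  3.0]])
        p_rounded = np.array([0.5, 0.3, 0.1])          # sums to 0.9 after rounding
        p_mid = p_rounded / p_rounded.sum()            # midpoint-style estimate

        maximin = payoff.min(axis=1).argmax()          # worst-case criterion
        expected = (payoff @ p_mid).argmax()           # expectation criterion
        print(f"Maximin picks action {maximin}, expectation picks action {expected}")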

  4. Probability Analysis of the Wave-Slamming Pressure Values of the Horizontal Deck with Elastic Support

    NASA Astrophysics Data System (ADS)

    Zuo, Weiguang; Liu, Ming; Fan, Tianhui; Wang, Pengtao

    2018-06-01

    This paper presents the probability distribution of the slamming pressure from an experimental study of regular wave slamming on an elastically supported horizontal deck. The time series of the slamming pressure during the wave impact were first obtained through statistical analyses of the experimental data. The exceedance probability distribution of the maximum slamming pressure peak and its distribution parameters were analyzed, and the results show that this distribution accords with the three-parameter Weibull distribution. Furthermore, the ranges of and relationships between the distribution parameters were studied. The sum of the location parameter D and the scale parameter L was approximately equal to 1.0, and the exceedance probability was more than 36.79% when the random peak was equal to the sample average during the wave impact. The variation of the distribution parameters and the slamming pressure under different model conditions is comprehensively presented; the parameter values of the Weibull distribution of wave-slamming pressure peaks differed between test models and were found to decrease with increased stiffness of the elastic support. The damage criterion for the structure model under wave impact is also discussed in a preliminary way: the structure model was destroyed when the average slamming time exceeded a certain value during the duration of the wave impact.
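
    A sketch of the three-parameter Weibull description, assuming scipy.stats.weibull_min; the synthetic "measured" peaks below stand in for the experimental samples, and the fitted parameters are then used to evaluate an exceedance probability.

        # Fit shape/location/scale to peak samples and evaluate exceedance.
        import numpy as np
        from scipy.stats import weibull_min

        rng = np.random.default_rng(2)
        peaks = weibull_min.rvs(1.6, loc=0.4, scale=0.6, size=500, random_state=rng)

        shape, loc, scale = weibull_min.fit(peaks)     # three-parameter fit
        x = peaks.mean()
        exceed = weibull_min.sf(x, shape, loc=loc, scale=scale)
        print(f"P(peak > sample mean) = {exceed:.3f}") # paper reports > 0.3679 there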

  5. How to model a negligible probability under the WTO sanitary and phytosanitary agreement?

    PubMed

    Powell, Mark R

    2013-06-01

    Since the 1997 EC-Hormones decision, World Trade Organization (WTO) Dispute Settlement Panels have wrestled with the question of what constitutes a negligible risk under the Sanitary and Phytosanitary Agreement. More recently, the 2010 WTO Australia-Apples Panel focused considerable attention on the appropriate quantitative model for a negligible probability in a risk assessment. The 2006 Australian Import Risk Analysis for Apples from New Zealand translated narrative probability statements into quantitative ranges. The uncertainty about a "negligible" probability was characterized as a uniform distribution with a minimum value of zero and a maximum value of 10⁻⁶. The Australia-Apples Panel found that the use of this distribution would tend to overestimate the likelihood of "negligible" events and indicated that a triangular distribution with a most probable value of zero and a maximum value of 10⁻⁶ would correct the bias. The Panel observed that the midpoint of the uniform distribution is 5 × 10⁻⁷ but did not consider that the triangular distribution has an expected value of 3.3 × 10⁻⁷. Therefore, if this triangular distribution is the appropriate correction, the magnitude of the bias found by the Panel appears modest. The Panel's detailed critique of the Australian risk assessment, and the conclusions of the WTO Appellate Body about the materiality of flaws found by the Panel, may have important implications for the standard of review for risk assessments under the WTO SPS Agreement. © 2012 Society for Risk Analysis.
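
    The arithmetic behind the Panel comparison can be checked directly from the stated distribution parameters (mean of a triangular distribution is (min + mode + max)/3):

        # Means of the uniform vs triangular models of a "negligible" probability.
        from scipy.stats import uniform, triang

        u = uniform(loc=0.0, scale=1e-6)
        t = triang(c=0.0, loc=0.0, scale=1e-6)         # mode at the left endpoint
        print(f"uniform mean    = {u.mean():.2e}")      # 5.00e-07
        print(f"triangular mean = {t.mean():.2e}")      # 3.33e-07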

  6. Ensemble modeling of stochastic unsteady open-channel flow in terms of its time-space evolutionary probability distribution - Part 2: numerical application

    NASA Astrophysics Data System (ADS)

    Dib, Alain; Kavvas, M. Levent

    2018-03-01

    The characteristic form of the Saint-Venant equations is solved in a stochastic setting by using a newly proposed Fokker-Planck Equation (FPE) methodology. This methodology computes the ensemble behavior and variability of the unsteady flow in open channels by directly solving for the flow variables' time-space evolutionary probability distribution. The new methodology is tested on a stochastic unsteady open-channel flow problem, with an uncertainty arising from the channel's roughness coefficient. The computed statistical descriptions of the flow variables are compared to the results obtained through Monte Carlo (MC) simulations in order to evaluate the performance of the FPE methodology. The comparisons show that the proposed methodology can adequately predict the results of the considered stochastic flow problem, including the ensemble averages, variances, and probability density functions in time and space. Unlike the large number of simulations performed by the MC approach, only one simulation is required by the FPE methodology. Moreover, the total computational time of the FPE methodology is smaller than that of the MC approach, which could prove to be a particularly crucial advantage in systems with a large number of uncertain parameters. As such, the results obtained in this study indicate that the proposed FPE methodology is a powerful and time-efficient approach for predicting the ensemble average and variance behavior, in both space and time, for an open-channel flow process under an uncertain roughness coefficient.
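
    For contrast, a minimal Monte Carlo sketch of the kind of ensemble statistics the FPE methodology computes directly, for a toy steady flow whose Manning roughness coefficient is uncertain (geometry and ranges invented, not the paper's test problem):

        # MC ensemble of Manning velocity under an uncertain roughness n.
        import numpy as np

        rng = np.random.default_rng(4)
        R, S = 1.2, 5e-4                               # hydraulic radius (m), slope
        n = rng.uniform(0.02, 0.04, size=200_000)      # uncertain roughness

        V = (1.0 / n) * R ** (2.0 / 3.0) * np.sqrt(S)  # Manning velocity (m/s)
        print(f"ensemble mean V = {V.mean():.3f} m/s, variance = {V.var():.5f}")

    The FPE approach would deliver this mean and variance from a single solve of the evolutionary probability distribution instead of 200,000 realizations.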

  7. Open star clusters in the Milky Way. Comparison of photometric and trigonometric distance scales based on Gaia TGAS data

    NASA Astrophysics Data System (ADS)

    Kovaleva, Dana A.; Piskunov, Anatoly E.; Kharchenko, Nina V.; Röser, Siegfried; Schilbach, Elena; Scholz, Ralf-Dieter; Reffert, Sabine; Yen, Steffi X.

    2017-10-01

    Context. The global survey of star clusters in the Milky Way (MWSC) is a comprehensive list of 3061 objects that provides, among other parameters, distances to clusters based on isochrone fitting. The Tycho-Gaia Astrometric Solution (TGAS) catalogue, which is part of Gaia data release 1 (Gaia DR1), delivers accurate trigonometric parallax measurements for more than 2 million stars, including those in star clusters. Aims: We compare the open cluster photometric distance scale with the measurements given by the trigonometric parallaxes from TGAS to evaluate the consistency between these values. Methods: The average parallaxes of probable cluster members available in TGAS provide the trigonometric distance scale of open clusters, while the photometric scale is given by the distances published in the MWSC. Sixty-four clusters are suited for comparison, as they have more than 16 probable members with parallax measurements in TGAS. We computed the average parallaxes of the probable members and compared these to the photometric parallaxes derived within the MWSC. Results: We find good agreement between the trigonometric TGAS-based and the photometric MWSC-based distance scales of open clusters, which for distances less than 2.3 kpc coincide at a level of about 0.1 mas with no dependence on the distance. If anything, there is a slight systematic offset along the Galactic equator between 30° and 160° galactic longitude.
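
    A schematic of the comparison, with invented member parallaxes: an inverse-variance-weighted mean TGAS parallax is set against the photometric parallax 1000/d[pc] implied by an MWSC-style isochrone distance.

        # Weighted-mean cluster parallax vs photometric parallax (toy numbers).
        import numpy as np

        plx = np.array([2.10, 1.95, 2.30, 2.05, 1.88, 2.22])   # mas, members (fake)
        err = np.array([0.30, 0.25, 0.40, 0.28, 0.35, 0.30])   # mas, uncertainties

        w = 1.0 / err ** 2
        plx_mean = (w * plx).sum() / w.sum()
        d_phot_pc = 480.0                               # isochrone-fit distance (fake)
        print(f"TGAS mean parallax: {plx_mean:.2f} mas "
              f"vs photometric: {1000.0 / d_phot_pc:.2f} mas")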

  8. Relationship between masticatory performance using a gummy jelly and masticatory movement.

    PubMed

    Uesugi, Hanako; Shiga, Hiroshi

    2017-10-01

    The purpose of this study was to clarify the relationship between masticatory performance, assessed using a gummy jelly, and masticatory movement. Thirty healthy males were asked to chew a gummy jelly on their habitual chewing side for 20 s, and the parameters of masticatory performance and masticatory movement were calculated as follows. For evaluating masticatory performance, the amount of glucose extracted during chewing of the gummy jelly was measured. For evaluating masticatory movement, the movement of the mandibular incisal point was recorded using the MKG K6-I, and ten parameters of the movement path (opening distance and masticatory width), movement rhythm (opening time, closing time, occluding time, and cycle time), stability of movement (stability of path and stability of rhythm), and movement velocity (opening maximum velocity and closing maximum velocity) were calculated from 10 cycles of chewing beginning with the fifth cycle. The relationship between the amount of glucose extraction and the parameters representing masticatory movement was investigated, and stepwise multiple linear regression analysis was then performed. The amount of glucose extraction was associated with 7 parameters representing masticatory movement. Stepwise multiple linear regression analysis showed that the opening distance, closing time, stability of rhythm, and closing maximum velocity were the most important factors affecting glucose extraction. These results suggest a close relation between masticatory performance and masticatory movement, and that masticatory performance can be increased by rhythmic, rapid, and stable mastication with a large opening distance. Copyright © 2017 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.

  9. High-Throughput Screening of Sulfide Thermoelectric Materials Using Electron Transport Calculations with OpenMX and BoltzTraP

    NASA Astrophysics Data System (ADS)

    Miyata, Masanobu; Ozaki, Taisuke; Takeuchi, Tsunehiro; Nishino, Shunsuke; Inukai, Manabu; Koyano, Mikio

    2018-06-01

    The electron transport properties of 809 sulfides have been investigated using density functional theory (DFT) calculations in the relaxation time approximation, and a material design rule established for high-performance sulfide thermoelectric (TE) materials. Benchmark electron transport calculations were performed for Cu12Sb4S13 and Cu26V2Ge6S32, revealing that the ratio of the scattering probability of electrons and phonons (κ_lat·τ_el⁻¹) was constant at about 2 × 10¹⁴ W K⁻¹ m⁻¹ s⁻¹. The calculated thermopower S dependence of the theoretical dimensionless figure of merit ZT_DFT of the 809 sulfides showed a maximum at 140 μV K⁻¹ to 170 μV K⁻¹. Under the assumption of a constant κ_lat·τ_el⁻¹ of 2 × 10¹⁴ W K⁻¹ m⁻¹ s⁻¹ and a constant group velocity v of electrons, a slope of the density of states of 8.6 states eV⁻² to 10 states eV⁻² is suitable for high-ZT sulfide TE materials. The Lorenz number L dependence of ZT_DFT for the 809 sulfides showed a maximum at L of approximately 2.45 × 10⁻⁸ V² K⁻². This result demonstrates that the potential of high-ZT sulfide materials is highest when the electron thermal conductivity κ_el of the symmetric band is equal to that of the asymmetric band.

  10. A three-step maximum a posteriori probability (MAP) method for InSAR data inversion of coseismic rupture with application to four recent large earthquakes in Asia

    NASA Astrophysics Data System (ADS)

    Sun, J.; Shen, Z.; Burgmann, R.; Liang, F.

    2012-12-01

    We develop a three-step maximum a posteriori probability (MAP) method for coseismic rupture inversion, which aims at maximizing the a posteriori probability density function (PDF) of elastic solutions of earthquake rupture. The method originates from the fully Bayesian inversion (FBI) and mixed linear-nonlinear Bayesian inversion (MBI) methods, shares the same posterior PDF with them and keeps most of their merits, while overcoming their convergence difficulties when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing (ASA), is used to search for the maximum posterior probability in the first step. The non-slip parameters are determined by the global optimization method, and the slip parameters are inverted for using the least squares method, initially without a positivity constraint and then damped to a physically reasonable range. This first-step MAP inversion brings the inversion close to the 'true' solution quickly and jumps over local maximum regions in the high-dimensional parameter space. The second-step inversion approaches the 'true' solution further, with positivity constraints subsequently applied on the slip parameters using the Monte Carlo inversion (MCI) technique and with all parameters obtained from step one as the initial solution. The slip artifacts are then eliminated from the slip models in the third-step MAP inversion with the fault geometry parameters fixed. We first used a designed model with a 45° dip angle and oblique slip, and corresponding synthetic InSAR data sets, to validate the efficiency and accuracy of the method. We then applied the method to four recent large earthquakes in Asia, namely the 2010 Yushu, China earthquake, the 2011 Burma earthquake, the 2011 New Zealand earthquake and the 2008 Qinghai, China earthquake, and compared our results with those from other groups. Our results show the effectiveness of the method in earthquake studies and a number of its advantages over other methods. Details will be reported at the meeting.

  11. Effect of density feedback on the two-route traffic scenario with bottleneck

    NASA Astrophysics Data System (ADS)

    Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua

    2016-12-01

    In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities, αc1 and αc2. When the vehicle entry probability α ≤ αc1, four different states, i.e. the free-flow state, transition state, maximum-current state, and congestion state, are identified in the system, corresponding to three critical reference densities. In the interval αc1 < α < αc2, however, the free-flow and transition states disappear, and when α ≥ αc2 only the congestion state remains. According to these results, a traffic control center can adjust the reference density so that the system stays in the maximum-current state. In this case, the capacity of the traffic system reaches its maximum, so that drivers can make full use of the roads. We hope that these results can provide useful guidance for alleviating traffic jams and for traffic control centers designing advanced traveller information systems.

  12. Quantifying Extrinsic Noise in Gene Expression Using the Maximum Entropy Framework

    PubMed Central

    Dixit, Purushottam D.

    2013-01-01

    We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. PMID:23790383
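
    A sketch of the extrinsic-noise effect the framework captures: if the transcription rate itself fluctuates across cells (here Gamma-distributed, an illustrative choice), the resulting copy-number distribution is wider than Poisson, with Fano factor above 1. Parameters are invented.

        # Compound Poisson-Gamma sampling gives a super-Poissonian distribution.
        import numpy as np

        rng = np.random.default_rng(3)
        mean_rate, cv2_ext = 20.0, 0.1                 # mean copies; extrinsic CV^2

        shape = 1.0 / cv2_ext
        rates = rng.gamma(shape, mean_rate / shape, size=100_000)  # per-cell rates
        counts = rng.poisson(rates)                    # mRNA copies per cell

        fano = counts.var() / counts.mean()
        print(f"mean = {counts.mean():.1f}, Fano factor = {fano:.2f} (Poisson: 1)")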

  13. Quantifying extrinsic noise in gene expression using the maximum entropy framework.

    PubMed

    Dixit, Purushottam D

    2013-06-18

    We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  14. Smisc - A collection of miscellaneous functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Landon Sego, PNNL

    2015-08-31

    A collection of functions for statistical computing and data manipulation. These include routines for rapidly aggregating heterogeneous matrices, manipulating file names, loading R objects, sourcing multiple R files, formatting datetimes, multi-core parallel computing, stream editing, and specialized plotting. Selected functions:
    Smisc-package: A collection of miscellaneous functions
    allMissing: Identifies missing rows or columns in a data frame or matrix
    as.numericSilent: Silent wrapper for coercing a vector to numeric
    comboList: Produces all possible combinations of a set of linear model predictors
    cumMax: Computes the maximum of a vector up to the current index
    cumsumNA: Computes the cumulative sum of a vector without propagating NAs
    d2binom, p2binom: Probability functions for the sum of two independent binomials
    dataIn: A flexible way to import data into R
    dbb, pbb, qbb, rbb: The beta-binomial distribution
    df2list: Row-wise conversion of a data frame to a list
    dfplapply: Parallelized single-row processing of a data frame
    dframeEquiv: Examines the equivalence of two data frames or matrices
    dkbinom, pkbinom: Probability functions for the sum of k independent binomials
    factor2character: Converts all factor variables in a data frame to character variables
    findDepMat: Identifies linearly dependent rows or columns in a matrix
    formatDT: Converts date or datetime strings into alternate formats
    getExtension, getPath, grabLast: Filename manipulations: remove or extract the extension or path
    ifelse1: Non-vectorized version of ifelse
    integ: Simple numerical integration routine
    interactionPlot: Two-way interaction plot with error bars
    linearMap: Linear mapping of a numerical vector or scalar
    list2df: Converts a list to a data frame
    loadObject: Loads and returns the object(s) in an ".Rdata" file
    more: Displays the contents of a file in the R terminal
    movAvg2: Calculates the moving average using a 2-sided window
    openDevice: Opens a graphics device based on the filename extension
    padZero: Pads a vector of numbers with zeros
    parseJob: Parses a collection of elements into (almost) equal-sized groups
    pcbinom: A continuous version of the binomial cdf
    plapply: Simple parallelization of lapply
    plotFun: Plots one or more functions on a single plot
    PowerData: An example of power data
    pvar: Prints the name and value of one or more objects
    ...and numerous others (space limits reporting).

  15. Single channel analysis of the blocking actions of BIDN and fipronil on a Drosophila melanogaster GABA receptor (RDL) stably expressed in a Drosophila cell line

    PubMed Central

    Grolleau, Françoise; Sattelle, David B

    2000-01-01

    Single channel recordings were obtained from a Drosophila S2 cell line stably expressing the wild-type RDLac Drosophila melanogaster homomer-forming ionotropic GABA receptor subunit, a product of the resistance to dieldrin gene, Rdl. GABA (50 μM) was applied by pressure ejection to outside-out patches from S2-RDL cells at a holding potential of −60 mV. The resulting inward current was completely blocked by 100 μM picrotoxin (PTX). The unitary current-voltage relationship was linear at negative potentials but showed slight inward rectification at potentials more positive than 0 mV. The reversal potential of the current (E_GABA = −1.4 mV) was close to the calculated chloride equilibrium potential. The single channel conductance elicited by GABA was 36 pS. A 71 pS conductance channel was also observed when the duration of the pulse used to eject GABA was longer than 80 ms. The mean open time distribution of the unitary events was best fitted by two exponential functions, suggesting two open channel states. When either 1 μM fipronil or 1 μM BIDN was present in the external saline, the GABA-gated channels were completely blocked. When BIDN or fipronil was applied at a concentration close to the IC50 value for suppression of open probability (281 nM for BIDN; 240 nM for fipronil), the duration of channel openings was shortened. In addition, the blocking action of BIDN resulted in the appearance of a novel channel conductance (17 pS). The effects of co-application of BIDN and fipronil were examined. Co-application of BIDN (300 nM) with various concentrations (100-1000 nM) of fipronil resulted in an additional BIDN-induced dose-dependent reduction of the maximum Po value. Thus both BIDN and fipronil shorten the duration of wild-type RDLac GABA receptor channel openings but appear to act at distinct sites. PMID:10952672

  16. Voting for a personality: Do first impressions and self-evaluations affect voting decisions?

    PubMed

    Koppensteiner, Markus; Stephan, Pia

    2014-08-01

    Participants were asked to assess their own personality (i.e. Big Five scales), the personality of politicians shown in brief silent video clips, and the probability that they would vote for these politicians. Response surface analyses (RSA) revealed noteworthy effects of self-ratings and observer-ratings of openness, agreeableness, and emotional stability on voting probability. Furthermore, the participants perceived themselves as being more open, more agreeable, more emotionally stable, and more extraverted than the average politician. The study supports previous findings that first impressions affect decision making on important issues. Results also indicate that when only nonverbal information is available people prefer political candidates they perceive as having personality traits they value in themselves.

  17. Safety and tolerability of the first-in-class agent CPI-613 in combination with modified FOLFIRINOX in patients with metastatic pancreatic cancer: a single-centre, open-label, dose-escalation, phase 1 trial.

    PubMed

    Alistar, Angela; Morris, Bonny B; Desnoyer, Rodwige; Klepin, Heidi D; Hosseinzadeh, Keyanoosh; Clark, Clancy; Cameron, Amy; Leyendecker, John; D'Agostino, Ralph; Topaloglu, Umit; Boteju, Lakmal W; Boteju, Asela R; Shorr, Rob; Zachar, Zuzana; Bingham, Paul M; Ahmed, Tamjeed; Crane, Sandrine; Shah, Riddhishkumar; Migliano, John J; Pardee, Timothy S; Miller, Lance; Hawkins, Gregory; Jin, Guangxu; Zhang, Wei; Pasche, Boris

    2017-06-01

    Pancreatic cancer statistics are dismal, with a 5-year survival of less than 10%, and more than 50% of patients presenting with metastatic disease. Metabolic reprogramming is an emerging hallmark of pancreatic adenocarcinoma. CPI-613 is a novel anticancer agent that selectively targets the altered form of mitochondrial energy metabolism in tumour cells, causing changes in mitochondrial enzyme activities and redox status that lead to apoptosis, necrosis, and autophagy of tumour cells. We aimed to establish the maximum tolerated dose of CPI-613 when used in combination with modified FOLFIRINOX chemotherapy (comprising oxaliplatin, leucovorin, irinotecan, and fluorouracil) in patients with metastatic pancreatic cancer. In this single-centre, open-label, dose-escalation phase 1 trial, we recruited adult patients (aged ≥18 years) with newly diagnosed metastatic pancreatic adenocarcinoma from the Comprehensive Cancer Center of Wake Forest Baptist Medical Center (Winston-Salem, NC, USA). Patients had good bone marrow, liver and kidney function, and good performance status (Eastern Cooperative Oncology Group [ECOG] performance status 0-1). We studied CPI-613 in combination with modified FOLFIRINOX (oxaliplatin at 65 mg/m2, leucovorin at 400 mg/m2, irinotecan at 140 mg/m2, and fluorouracil 400 mg/m2 bolus followed by 2400 mg/m2 over 46 h). We applied a two-stage dose-escalation scheme (single patient and traditional 3+3 design). In the single-patient stage, one patient was accrued per dose level. The starting dose of CPI-613 was 500 mg/m2 per day; the dose level was then escalated by doubling the previous dose if there were no adverse events worse than grade 2 within 4 weeks attributed as probably or definitely related to CPI-613. The traditional 3+3 dose-escalation stage was triggered if toxic effects attributed as probably or definitely related to CPI-613 were grade 2 or worse. The dose level for CPI-613 for the first cohort in the traditional dose-escalation stage was the same as that used in the last cohort of the single-patient dose-escalation stage. The primary objective was to establish the maximum tolerated dose of CPI-613 (as assessed by dose-limiting toxicities). This trial is registered with ClinicalTrials.gov, number NCT01835041, and is closed to recruitment. Between April 22, 2013, and Jan 8, 2016, we enrolled 20 patients. The maximum tolerated dose of CPI-613 was 500 mg/m2. The median number of treatment cycles given at the maximum tolerated dose was 11 (IQR 4-19). Median follow-up of the 18 patients treated at the maximum tolerated dose was 378 days (IQR 250-602). Two patients enrolled at a higher dose of 1000 mg/m2, and both had a dose-limiting toxicity. Two unexpected serious adverse events occurred, both for the first patient enrolled. Expected serious adverse events were: thrombocytopenia, anaemia, and lymphopenia (all for patient number 2; anaemia and lymphopenia were dose-limiting toxicities); hyperglycaemia (in patient number 7); hypokalaemia, hypoalbuminaemia, and sepsis (patient number 11); and neutropenia (patient number 20). No deaths due to adverse events were reported. For the 18 patients given the maximum tolerated dose, the most common grade 3-4 non-haematological adverse events were hyperglycaemia (ten [55%] patients), hypokalaemia (six [33%]), peripheral sensory neuropathy (five [28%]), diarrhoea (five [28%]), and abdominal pain (four [22%]).
The most common grade 3-4 haematological adverse events were neutropenia (five [28%] of 18 patients), lymphopenia (five [28%]), anaemia (four [22%]), and thrombocytopenia (three [17%]). Sensory neuropathy (all grade 1-3) was recorded in 17 (94%) of the 18 patients and was managed with dose de-escalation or discontinuation per standard of care. No patients died while on active treatment; 11 study participants died, and in each case the cause of death was terminal pancreatic cancer. Of the 18 patients given the maximum tolerated dose, 11 (61%) achieved an objective (complete or partial) response. A maximum tolerated dose of CPI-613 was established at 500 mg/m2 when used in combination with modified FOLFIRINOX in patients with metastatic pancreatic cancer. The findings of clinical activity will require validation in a phase 2 trial. Comprehensive Cancer Center of Wake Forest Baptist Medical Center. Copyright © 2017 Elsevier Ltd. All rights reserved.
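
    The two-stage escalation scheme can be illustrated in code. The sketch below simulates the decision flow (single-patient doubling, then a simplified 3+3 rule) under a hypothetical toxicity model; it is an illustration of the general design, not the trial protocol:

      # Loose sketch of a two-stage dose escalation; dlt() is a hypothetical
      # toxicity model, not trial data.
      import random

      def dlt(dose, p_per_mg=0.0008):
          """Hypothetical model: probability of a dose-limiting toxicity grows with dose."""
          return random.random() < min(1.0, p_per_mg * dose)

      random.seed(1)
      dose = 500.0  # starting dose, mg/m2 per day
      # Stage 1: single-patient cohorts, doubling the dose until a toxicity is seen.
      while not dlt(dose):
          dose *= 2
      # Stage 2: simplified 3+3 cohorts, de-escalating until a dose is tolerated.
      while dose >= 500.0:
          dlts = sum(dlt(dose) for _ in range(3))
          if dlts == 0 or (dlts == 1 and sum(dlt(dose) for _ in range(3)) == 0):
              break  # dose accepted (at most 1 DLT among 6 patients)
          dose /= 2
      print(f"estimated maximum tolerated dose: {dose:.0f} mg/m2")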

  18. Causal Methods for Observational Research: A Primer.

    PubMed

    Almasi-Hashiani, Amir; Nedjat, Saharnaz; Mansournia, Mohammad Ali

    2018-04-01

    The goal of many observational studies is to estimate the causal effect of an exposure on an outcome after adjustment for confounders, but there are still serious errors in the adjustment for confounders in clinical journals. Standard regression modeling (e.g., ordinary logistic regression) fails to estimate the average effect of exposure in the total population in the presence of interaction between exposure and covariates, and also cannot appropriately adjust for time-varying confounding. Moreover, stepwise algorithms for the selection of confounders based on P values may miss important confounders and lead to bias in effect estimates. Causal methods overcome these limitations. We illustrate three causal methods: inverse-probability-of-treatment weighting (IPTW), the parametric g-formula, and a combination of the two, targeted maximum likelihood estimation (TMLE), which is doubly robust against bias. © 2018 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
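
    As a concrete illustration of the IPTW step on simulated data (the variable names and data-generating model below are ours, chosen for illustration, not the paper's):

      # Minimal IPTW sketch: binary exposure a, outcome y, confounders x.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(1)
      n = 5000
      x = rng.normal(size=(n, 2))                      # confounders
      p_a = 1 / (1 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
      a = rng.binomial(1, p_a)                         # exposure
      y = 1.0 * a + x[:, 0] + rng.normal(size=n)       # true effect = 1.0

      ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]  # propensity scores
      w = a / ps + (1 - a) / (1 - ps)                  # inverse-probability weights
      ate = (np.average(y[a == 1], weights=w[a == 1])
             - np.average(y[a == 0], weights=w[a == 0]))
      print(f"IPTW estimate of the average treatment effect: {ate:.2f}")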

  19. Modeling and forecasting foreign exchange daily closing prices with normal inverse Gaussian

    NASA Astrophysics Data System (ADS)

    Teneng, Dean

    2013-09-01

    We fit the normal inverse Gaussian (NIG) distribution to foreign exchange closing prices using the open software package R and select the best models using the strategy proposed by Käärik and Umbleja (2011). We observe that daily closing prices (12/04/2008 - 07/08/2012) of CHF/JPY, AUD/JPY, GBP/JPY, NZD/USD, QAR/CHF, QAR/EUR, SAR/CHF, SAR/EUR, TND/CHF and TND/EUR are excellent fits, while EGP/EUR and EUR/GBP are good fits with Kolmogorov-Smirnov test p-values of 0.062 and 0.08, respectively. Estimating the NIG parameters by maximum likelihood failed for JPY/CHF (a computational problem), even though CHF/JPY was an excellent fit. Thus, while the stochastic properties of an exchange rate can be completely modeled with a probability distribution in one direction, this may be impossible in the other. We also demonstrate that foreign exchange closing prices can be forecasted with the normal inverse Gaussian (NIG) Lévy process, both in cases where the daily closing prices can and cannot be modeled by the NIG distribution.
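
    The fit-and-test step is straightforward to reproduce with SciPy's norminvgauss; the series below is a synthetic placeholder, not the quoted exchange-rate data:

      # Sketch of NIG maximum-likelihood fitting plus a KS goodness-of-fit test.
      import numpy as np
      from scipy import stats

      # placeholder "closing price" series drawn from a known NIG
      prices = stats.norminvgauss.rvs(a=1.5, b=0.3, loc=90, scale=2,
                                      size=1000, random_state=2)

      params = stats.norminvgauss.fit(prices)               # MLE: a, b, loc, scale
      d, pvalue = stats.kstest(prices, 'norminvgauss', args=params)
      print(f"KS statistic {d:.3f}, p-value {pvalue:.3f}")  # p > 0.05: adequate fit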

  20. Integrating Physical and Topographic Information Into a Fuzzy Scheme to Map Flooded Area by SAR.

    PubMed

    Pierdicca, Nazzareno; Chini, Marco; Pulvirenti, Luca; Macina, Flavia

    2008-07-10

    A flood mapping procedure based on fuzzy set theory has been developed. The method is based on the integration of Synthetic Aperture Radar (SAR) measurements with additional data on the inundated area, such as a land cover map and a digital elevation model (DEM). The information on land cover has allowed us to account for both specular reflection, typical of open water, and double bounce backscattering, typical of forested and urban areas. The DEM has been exploited to include simple hydraulic considerations on the dependence of inundation probability on surface characteristics. Contextual information has been taken into account too. The proposed algorithm has been tested on a flood that occurred in Italy in November 1994. A pair of ERS-1 images, collected before and after (three days later) the flood, has been used. The results have been compared with the data provided by a ground survey carried out when the flood reached its maximum extension. Despite the temporal mismatch between the survey and the post-inundation SAR image, the comparison has yielded encouraging results, with 87% of the pixels correctly classified as inundated.

  1. Variation in the Mississippi River Plume from Data Synthesis of Model Outputs and MODIS Imagery

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, C.; Kolker, A.; Chu, P. Y.

    2017-12-01

    Understanding the Mississippi River (MR) plume's interaction with the open ocean is crucial for understanding many processes in the Gulf of Mexico. Though the Mississippi River and its delta and plume have been studied extensively, recent archives of model products and satellite imagery have allowed us to highlight patterns in plume behavior over the last two decades through large scale data synthesis. Using 8 years of USGS discharge data and Landsat imagery, we identified the spatial extent, geographic patterns, depth, and freshwater concentration of the MR plume across seasons and years. Using 20 years of HYCOM (HYbrid Coordinate Ocean Model) analysis and reanalysis model output, and several years of NGOFS FVCOM model outputs, we mapped the minimum and maximum spatial area of the MR plume, and its varied extent east and west. From the synthesis and analysis of these data, the statistical probability of the MR plume's spatial area and geographical extent were computed. Measurements of the MR plume and its response to river discharge may predict future behavior and provide a path forward to understanding MR plume influence on nearby ecosystems.

  2. Genetic linkage of familial granulomatous inflammatory arthritis, skin rash, and uveitis to chromosome 16

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tromp, G.; Kuivaniemi, H.; Ala-Kokko, L.

    1996-11-01

    Blau syndrome (MIM 186580), first described in a large, three-generation kindred, is an autosomal, dominantly inherited disease characterized by multiorgan, tissue-specific inflammation. Its clinical phenotype includes granulomatous arthritis, skin rash, and uveitis and probably represents a subtype of a group of clinical entities referred to as "familial granulomatosis." It is the sole human model with recognizably Mendelian inheritance for a variety of multisystem inflammatory diseases affecting a significant percentage of the population. A genomewide search for the Blau susceptibility locus was undertaken after karyotypic analysis revealed no abnormalities. Sixty-two members of the 74-member pedigree were genotyped with dinucleotide-repeat markers. Linkage analysis was performed under a dominant model of inheritance with reduced penetrance. The marker D16S298 gave a maximum LOD score of 3.75 at θ = 0.04 with two-point analysis. LOD scores for flanking markers were consistent and placed the Blau susceptibility locus within the 16p12-q21 interval. 46 refs., 3 figs., 3 tabs.
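
    For reference, a two-point LOD score compares the pedigree likelihood at a recombination fraction θ with the likelihood at free recombination (θ = 0.5). A minimal sketch for fully informative meioses, with hypothetical counts that are not taken from this pedigree:

      # Two-point LOD score for fully informative meioses (hypothetical counts).
      import math

      def lod(recombinants, nonrecombinants, theta):
          k, n = recombinants, nonrecombinants
          l_theta = (theta ** k) * ((1 - theta) ** n)   # likelihood at theta
          l_null = 0.5 ** (n + k)                       # likelihood at theta = 0.5
          return math.log10(l_theta / l_null)

      print(round(lod(1, 24, 0.04), 2))  # a score above 3 is conventional evidence of linkage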

  3. National Program for Inspection of Non-Federal Dams, Wheeler Dam (NH 00028), Arlington Mill Reservoir, Merrimack River Basin, Salem, New Hampshire. Phase I Inspection Report.

    DTIC Science & Technology

    1978-08-01

    The dam is a concrete gravity dam with earth abutments constructed in 1920. Overall length is 730 feet and maximum height is 54 feet. The dam is assessed to be in poor... The Spicket River flows 5... The Spillway Test Flood is based on the estimated "Probable Maximum Flood" for the region (greatest reasonably possible storm runoff), or fractions

  4. Maximum-likelihood block detection of noncoherent continuous phase modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K.; Divsalar, Dariush

    1993-01-01

    This paper examines maximum-likelihood block detection of uncoded full response CPM over an additive white Gaussian noise (AWGN) channel. Both the maximum-likelihood metrics and the bit error probability performances of the associated detection algorithms are considered. The special and popular case of minimum-shift-keying (MSK) corresponding to h = 0.5 and constant amplitude frequency pulse is treated separately. The many new receiver structures that result from this investigation can be compared to the traditional ones that have been used in the past both from the standpoint of simplicity of implementation and optimality of performance.

  5. Maximum of a Fractional Brownian Motion: Analytic Results from Perturbation Theory.

    PubMed

    Delorme, Mathieu; Wiese, Kay Jörg

    2015-11-20

    Fractional Brownian motion is a non-Markovian Gaussian process X_t, indexed by the Hurst exponent H. It generalizes standard Brownian motion (corresponding to H = 1/2). We study the probability distribution of the maximum m of the process and the time t_max at which the maximum is reached. They are encoded in a path integral, which we evaluate perturbatively around a Brownian, setting H = 1/2 + ϵ. This allows us to derive analytic results beyond the scaling exponents. Extensive numerical simulations for different values of H test these analytical predictions and show excellent agreement, even for large ϵ.
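
    Such predictions can be checked numerically by sampling fBm paths from the exact covariance; a brute-force Python sketch (the sizes and the value of H are illustrative):

      # Sample fBm by Cholesky factorization of its covariance and look at the
      # maximum of each path and the time at which it is reached.
      import numpy as np

      H, n, paths = 0.6, 200, 2000
      t = np.arange(1, n + 1) / n
      # fBm covariance: K(s, u) = (s^{2H} + u^{2H} - |s - u|^{2H}) / 2
      s, u = np.meshgrid(t, t, indexing='ij')
      cov = 0.5 * (s ** (2 * H) + u ** (2 * H) - np.abs(s - u) ** (2 * H))
      L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))  # jitter for stability

      rng = np.random.default_rng(3)
      x = L @ rng.standard_normal((n, paths))   # each column is one path on (0, 1]
      m = x.max(axis=0)                         # maximum of each path
      t_max = t[x.argmax(axis=0)]               # time at which it is reached
      print(f"E[m] = {m.mean():.3f}, E[t_max] = {t_max.mean():.3f}")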

  6. Rationale for early versus late intervention with arthroscopy for treatment of inflammatory/degenerative temporomandibular joint disorders.

    PubMed

    Israel, Howard A; Behrman, David A; Friedman, Joel M; Silberstein, Jennifer

    2010-11-01

    The goal of this study was to determine if there were differences in outcomes of arthroscopic surgery in patients with inflammatory/degenerative temporomandibular joint (TMJ) disease who underwent early surgical intervention versus late surgical intervention. The study population included 44 consecutive patients who met the criteria for TMJ operative arthroscopy who were divided into early and late intervention groups. The time between the onset of symptoms and the performance of arthroscopy was used to determine entry into the early versus late intervention group. All groups were evaluated for changes in preoperative versus postoperative pain levels based on visual analog scale (VAS) scores and maximum interincisal opening distance. Statistical analyses included the Student t test to determine if there were significant differences between preoperative and postoperative assessments in the early and late intervention groups. The mean time between onset of symptoms and arthroscopy in the early intervention group (21 patients) was 5.4 months compared with 33 months in the late intervention group (23 patients). All patient groups had statistically significant decreases in pain and improvement in maximum interincisal opening distance after arthroscopy. The early intervention group had a mean decrease in VAS pain scores of 5.14 compared with the late intervention group with a mean decrease in VAS pain scores of 2.84, and this difference was significant (P = .012). The early intervention group had a mean increase in maximum interincisal opening of 12.38 mm compared with the late intervention group with a mean increase of 7.70 mm. Although statistical significance was not achieved for increases in maximum interincisal opening between the early and late intervention groups (P = .089), the difference between the 2 groups was suggestive of a trend. There were no surgical complications for either group; however, 2 patients in the late intervention group developed persistent chronic neuropathic pain, requiring pain management. TMJ arthroscopy reliably decreased pain and increased the maximum interincisal opening distance in the early and late intervention groups. The early intervention group had better surgical outcomes than the late intervention group. Arthroscopic surgery should be considered early in the management of patients with inflammatory/degenerative TMJ disease. Copyright © 2010 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.

  7. Effectiveness of a stormwater collection and detention system for reducing constituent loads from bridge runoff in Pinellas County, Florida

    USGS Publications Warehouse

    Stoker, Y.E.

    1996-01-01

    The quantity and quality of stormwater runoff from the Bayside Bridge were evaluated to determine the effectiveness of the stormwater collection and detention pond system of the bridge in reducing constituent loads to Old Tampa Bay. Water-quality samples of stormwater runoff from the bridge and outflow from the detention pond were collected during and after selected storms. These samples were used to compute loads for selected constituents. Stormwater on the Bayside Bridge drained rapidly during rain events. The volume of stormwater runoff from 24 storms measured during the study ranged from 4,086 to 103,705 cubic feet. Storms were most frequent during July through September and were least frequent from February through May. Concentrations of most constituents in stormwater runoff before the bridge opened to traffic were less than or equal to concentrations measured after the bridge was opened to traffic. However, concentrations of arsenic in the outflow from the detention pond generally were greater before the bridge opened than after, and concentrations of orthophosphorus in the stormwater runoff and outflow from the pond were greater before the bridge opened than in over half of the storms sampled after it opened. Concentrations of most constituents measured in stormwater runoff from the bridge were greatest at the beginning of the storm and decreased as the storm continued. Variations in suspended solids, nutrients, and trace element concentrations were not always concurrent with each other. The source of the measured constituent (rainfall or road debris) and the phase of the constituent (suspended or dissolved) probably affected the timing of concentration changes. The quality of stormwater runoff from the Bayside Bridge varied with total runoff volume, with the length of the dry period before the storm, and with season. Average concentrations of suspended solids, ammonia plus organic nitrogen, nitrite plus nitrate nitrogen, orthophosphorus, phosphorus, total organic carbon, aluminum, arsenic, copper, and zinc in stormwater runoff generally were inversely related to runoff volume. The quality of outflow from the detention pond also varied during a storm event and with season. Maximum concentrations generally occurred near the beginning of a storm, and decreased as the storm continued. Maximum concentrations of many constituents occurred in June and July 1995. During the summer months, pH exceeded 9.0 while inorganic nitrogen concentrations were very low. These high pH values and low inorganic nitrogen concentrations are most likely associated with photosynthesis by algae or aquatic plants in the pond. Concentrations of nitrogen, phosphorus, and nickel in stormwater runoff were correlated with total organic carbon concentrations. Concentrations of chromium, copper, iron, nickel, lead, and zinc in stormwater runoff were correlated with aluminum concentrations. The source of these metals is probably the bridge materials and metallic debris from vehicles. The northern detention pond system of the Bayside Bridge effectively reduced concentrations of suspended solids, ammonia nitrogen, nitrite plus nitrate nitrogen, phosphorus, aluminum, cadmium, chromium, copper, iron, lead, nickel, and zinc in stormwater runoff before water discharged from the pond. 
However, concentrations of ammonia plus organic nitrogen, organic carbon, arsenic, and values for alkalinity, pH, and specific conductance generally were greater in outflow from the pond than in stormwater runoff from the bridge. Stormwater runoff and pond outflow for three storm events were evaluated to determine the effectiveness of the detention pond system in removing selected constituents from the stormwater runoff. Most constituents and constituent loads were reduced in the outflow from the pond. Suspended solids loads were reduced about 30 to 45 percent, inorganic nitrogen loads were reduced by about 60 to 90 percent, and loads of most trace elements

  8. CAP/WAP-based correlations for estimating the static axial capacity of open-ended steel pipe piles in Alaska.

    DOT National Transportation Integrated Search

    2012-12-01

    CAPWAP analyses of open-ended steel pipe piles at 32 bridge sites in Alaska have been compiled with geotechnical and construction information for 12- to 48-inch diameter piles embedded in predominantly cohesionless soils to maximum depths of 161-fe...

  9. 50 CFR 14.161 - Primary enclosures.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... the total surface area of each ventilated wall. When openings are located on all four walls of the enclosure, the openings shall comprise at least 8 percent of the total surface area of each wall. At least... specifies maximum densities and minimum space for transporting rodents that fall within the specified weight...

  10. 50 CFR 14.161 - Primary enclosures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... the total surface area of each ventilated wall. When openings are located on all four walls of the enclosure, the openings shall comprise at least 8 percent of the total surface area of each wall. At least... specifies maximum densities and minimum space for transporting rodents that fall within the specified weight...

  11. 50 CFR 14.161 - Primary enclosures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... the total surface area of each ventilated wall. When openings are located on all four walls of the enclosure, the openings shall comprise at least 8 percent of the total surface area of each wall. At least... specifies maximum densities and minimum space for transporting rodents that fall within the specified weight...

  12. Removing obstacles for pavement cost reduction by examining early age opening requirements : material properties.

    DOT National Transportation Integrated Search

    2015-08-01

    The risk of cracking in a concrete pavement that is opened to traffic at early ages is related to the maximum tensile stress that develops in the pavement and its relationship to the measured, age-dependent flexural strength of a beam, fr. T...

  13. On the Importance of Cycle Minimum in Sunspot Cycle Prediction

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.

  14. Current-wave spectra coupling project. Volume I. Hurricane fields and cross sections, surface winds and currents, significant waves and wave spectra for potential OTEC sites: (A) Keahole Point, Hawaii, 100 year hurricane; (B) Punta Tuna, Puerto Rico, 100 year hurricane; (C) New Orleans, Louisiana, 100 year hurricane; (D) West Coast of Florida, 100 year hurricane; and for (E) Hurricane Camille (1969) off Louisiana Coast

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bretschneider, C.L.

    1980-06-01

    This volume is an extension of and consists of several modifications to the earlier report by Bretschneider (April 1979) on the subject of hurricane design wind, wave and current criteria for the four potential OTEC sites. The 100-year hurricane criteria for the design of OTEC plants is included. The criteria, in addition to the maximum conditions of winds, waves and surface current, include: hurricane fields for wind speed U_s and significant wave height H_s; hurricane fields for modal wave period f_0^-1 and maximum energy density S_max of the wave spectrum; the corresponding Ekman wind-driven surface current V_s; tabulated cross-sections for U_s, H_s, f_0^-1 and S_max through max U_s and through max H_s along traverses at right angles to and along traverses parallel to the forward movement of the hurricane; the most probable maximum wave height and the expected corresponding wave period, based on statistical analysis of maximum wave heights from five hurricanes; design wave spectra for maximum U_s and also maximum H_s, since maximum U_s and maximum H_s do not occur simultaneously; the envelope of wave spectra through maximum U_s and through maximum H_s along traverses parallel to the forward movement of the hurricane; the same determinations for Hurricane Camille (1969) as for the four OTEC locations; and alternative (suggested) methods for obtaining design wave spectra from the joint probability distribution functions for wave height and period given by Longuet-Higgins (1975) and C.N.E.X.O. after Arhan, et al (1976).

  15. BOOK REVIEW: OPENING SCIENCE, THE EVOLVING GUIDE ...

    EPA Pesticide Factsheets

    The way we get our funding, collaborate, do our research, and get the word out has evolved over hundreds of years, but we can imagine a more open science world, largely facilitated by the internet. The movement towards this more open way of doing and presenting science is coming, and it is not taking hundreds of years. If you are interested in these trends, and would like to find out more about where this is all headed and what it means to you, consider downloading Opening Science, edited by Sönke Bartling and Sascha Friesike, subtitled The Evolving Guide on How the Internet is Changing Research, Collaboration, and Scholarly Publishing. In 26 chapters by various authors from a range of disciplines the book explores the developing world of open science, starting from the first scientific revolution and bringing us to the next scientific revolution, sometimes referred to as “Science 2.0”. Some of the articles deal with the impact of the changing landscape of how science is done, looking at the impact of open science on academia, journal publishing, or medical research. Many of the articles look at the uses, pitfalls, and impact of specific tools, like microblogging (think Twitter), social networking, and reference management. There is lots of discussion and definition of terms you might use or misuse like “altmetrics” and “impact factor”. Science will probably never be completely open, and Twitter will probably never replace the journal article,

  16. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse 'temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
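
    The maximum entropy distribution in this setting takes an exponential-family form, with one Lagrange multiplier per observable. A small-N Python illustration that solves the convex dual for the multipliers (all sizes and observables below are illustrative, not the paper's setup):

      # Max-entropy distribution over binary configurations matching M random
      # observables, found by minimizing the convex dual log Z(lam) - lam.target.
      import itertools
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      N, M = 5, 3
      states = np.array(list(itertools.product([0, 1], repeat=N)), dtype=float)
      obs = rng.choice([-1.0, 1.0], size=(M, N))   # random observables O_a(x) = o_a . x
      O = states @ obs.T                           # observable values for every state
      target = O.mean(axis=0) + 0.1                # expectation values to match

      def dual(lam):
          return np.log(np.sum(np.exp(O @ lam))) - lam @ target

      res = minimize(dual, np.zeros(M))
      p = np.exp(O @ res.x)
      p /= p.sum()                                 # the max-entropy distribution
      print(np.abs(p @ O - target).max())          # ~0: constraints satisfied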

  17. 7 CFR 51.1995 - U.S. No. 1.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ...) Well formed; and, (2) Clean and bright. (3) Free from: (i) Blanks; and, (ii) Broken or split shells. (4... minimum diameter, minimum and maximum diameters, or in accordance with one of the size classifications in Table I. Table I Size classifications Maximum size—Will pass through a round opening of the following...

  18. The Emotional Climate of the Interpersonal Classroom in a Maximum Security Prison for Males.

    ERIC Educational Resources Information Center

    Meussling, Vonne

    1984-01-01

    Examines the nature, the task, and the impact of teaching in a maximum security prison for males. Data are presented concerning the curriculum design used in order to create a nonevaluative atmosphere. Inmates' reactions to self-disclosure and open communication in a prison setting are evaluated. (CT)

  19. Insulin activates single amiloride-blockable Na channels in a distal nephron cell line (A6).

    PubMed

    Marunaka, Y; Hagiwara, N; Tohda, H

    1992-09-01

    Using the patch-clamp technique, we studied the effect of insulin on an amiloride-blockable Na channel in the apical membrane of a distal nephron cell line (A6) cultured on permeable collagen films for 10-14 days. NPo (N, number of channels per patch membrane; Po, average value of open probability of individual channels in the patch) under baseline conditions was 0.88 +/- 0.12 (SE) (n = 17). After making cell-attached patches on the apical membrane which contained Na channels, insulin (1 mU/ml) was applied to the serosal bath. While maintaining the cell-attached patch, NPo significantly increased to 1.48 +/- 0.19 (n = 17; P < 0.001) after 5-10 min of insulin application. The open probability of Na channels was 0.39 +/- 0.01 (n = 38) under baseline conditions, and increased to 0.66 +/- 0.03 (n = 38, P < 0.001) after addition of insulin. The baseline single-channel conductance was 4 pS, and neither the single-channel conductance nor the current-voltage relationship was significantly changed by insulin. These results indicate that insulin increases Na absorption in the distal nephron by increasing the open probability of the amiloride-blockable Na channel.
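
    The NPo bookkeeping is simple to reproduce on an idealized record; a sketch with simulated data (all values illustrative, not the A6 recordings):

      # NPo from an idealized record: the time-averaged number of open channels
      # estimates NPo, and Po follows once the channel count N is known.
      import numpy as np

      rng = np.random.default_rng(5)
      N, samples, po_true = 2, 100_000, 0.39
      # simulated idealized record: number of open channels at each sample point
      # (in practice this is mean current divided by the unitary current)
      n_open = rng.binomial(N, po_true, size=samples)

      npo = n_open.mean()   # NPo
      po = npo / N          # open probability per channel
      print(f"NPo = {npo:.3f}, Po = {po:.3f}")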

  20. A Gaia study of the Hyades open cluster

    NASA Astrophysics Data System (ADS)

    Reino, Stella; de Bruijne, Jos; Zari, Eleonora; d'Antona, Francesca; Ventura, Paolo

    2018-03-01

    We present a study of the membership of the Hyades open cluster, derive kinematically-modelled parallaxes of its members, and study the colour-absolute magnitude diagram of the cluster. We use Gaia DR1 Tycho-Gaia Astrometric Solution (TGAS) data complemented by Hipparcos-2 data for bright stars not contained in TGAS. We supplement the astrometric data with radial velocities collected from a dozen literature sources. By assuming that all cluster members move with the mean cluster velocity to within the velocity dispersion, we use the observed and the expected motions of the stars to determine individual cluster membership probabilities. We subsequently derive improved parallaxes through maximum-likelihood kinematic modelling of the cluster. This method has an iterative component to deal with 'outliers', caused for instance by double stars or escaping members. Our method extends an existing method and supports the mixed presence of stars with and without radial velocities. We find 251 candidate members, 200 of which have a literature radial velocity, and 70 of which are new candidate members with TGAS astrometry. The cluster is roughly spherical in its centre but significantly flattened at larger radii. The observed colour-absolute magnitude diagram shows a clear binary sequence. The kinematically-modelled parallaxes that we derive are a factor ~1.7 / 2.9 more precise than the TGAS / Hipparcos-2 values and allow us to derive an extremely sharp main sequence. This sequence shows evidence for fine-detailed structure which is elegantly explained by the full spectrum turbulence model of convection.
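
    A schematic of such a kinematic membership test, comparing each star's velocity with the mean cluster motion (all numbers below are placeholders, not the paper's solution):

      # Kinematic membership: Mahalanobis distance of the velocity residual,
      # scaled by the internal dispersion plus the measurement covariance.
      import numpy as np
      from scipy.stats import chi2

      v_cluster = np.array([-6.1, 45.2, 5.5])        # mean cluster velocity, km/s (placeholder)
      sigma_v = 0.3                                  # internal velocity dispersion, km/s

      def member_p_value(v_star, cov_meas):
          resid = v_star - v_cluster
          cov = cov_meas + sigma_v ** 2 * np.eye(3)
          d2 = resid @ np.linalg.solve(cov, resid)   # squared Mahalanobis distance
          return chi2.sf(d2, df=3)                   # small value => likely non-member

      print(member_p_value(np.array([-5.8, 45.6, 5.2]), 0.04 * np.eye(3)))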

  1. Developing a Model and Applications for Probabilities of Student Success: A Case Study of Predictive Analytics

    ERIC Educational Resources Information Center

    Calvert, Carol Elaine

    2014-01-01

    This case study relates to distance learning students on open access courses. It demonstrates the use of predictive analytics to generate a model of the probabilities of success and retention at different points, or milestones, in a student journey. A core set of explanatory variables has been established and their varying relative importance at…

  2. Gravity and decoherence: the double slit experiment revisited

    NASA Astrophysics Data System (ADS)

    Samuel, Joseph

    2018-02-01

    The double slit experiment is iconic and widely used in classrooms to demonstrate the fundamental mystery of quantum physics. The puzzling feature is that the probability of an electron arriving at the detector when both slits are open is not the sum of the probabilities when the slits are open separately. The superposition principle of quantum mechanics tells us to add amplitudes rather than probabilities and this results in interference. This experiment defies our classical intuition that the probabilities of exclusive events add. In understanding the emergence of the classical world from the quantum one, there have been suggestions by Feynman, Diosi and Penrose that gravity is responsible for suppressing interference. This idea has been pursued in many different forms ever since, predominantly within Newtonian approaches to gravity. In this paper, we propose and theoretically analyse two ‘gedanken’ or thought experiments which lend strong support to the idea that gravity is responsible for decoherence. The first makes the point that thermal radiation can suppress interference. The second shows that in an accelerating frame, Unruh radiation does the same. Invoking the Einstein equivalence principle to relate acceleration to gravity, we support the view that gravity is responsible for decoherence.

  3. Off-diagonal long-range order, cycle probabilities, and condensate fraction in the ideal Bose gas.

    PubMed

    Chevallier, Maguelonne; Krauth, Werner

    2007-11-01

    We discuss the relationship between the cycle probabilities in the path-integral representation of the ideal Bose gas, off-diagonal long-range order, and Bose-Einstein condensation. Starting from the Landsberg recursion relation for the canonical partition function, we use elementary considerations to show that in a box of size L^3 the sum of the cycle probabilities of length k > L^2 equals the off-diagonal long-range order parameter in the thermodynamic limit. For arbitrary systems of ideal bosons, the integer derivative of the cycle probabilities is related to the probability of condensing k bosons. We use this relation to derive the precise form of the π_k in the thermodynamic limit. We also determine the function π_k for arbitrary systems. Furthermore, we use the cycle probabilities to compute the probability distribution of the maximum-length cycles both at T=0, where the ideal Bose gas reduces to the study of random permutations, and at finite temperature. We close with comments on the cycle probabilities in interacting Bose gases.

  4. Safety of spray-dried powder formulated Pseudomonas fluorescens strain CL145A exposure to subadult/adult unionid mussels during simulated open-water treatments

    USGS Publications Warehouse

    Luoma, James A.; Weber, Kerry L.; Waller, Diane L.; Wise, Jeremy K.; Mayer, Denise A.; Aloisi, Douglas B.

    2015-01-01

    After exposure, the mussels were consolidated into wire mesh cages and placed in the Black River for a 27-28 day postexposure period, after which time survival of mussels was assessed. Of the 1,170 mussels tested in the study, 3 were confirmed dead and 5 were not recovered and treated as mortalities in the analysis. The effect and interactions of species, SDP exposure concentration, and SDP exposure duration were analyzed and did not affect mussel survival (p > 0.98). The results from this study indicate that SDP exposure at the maximum approved open-water concentration of 100 mg/L for up to 3 times the maximum approved open-water exposure duration of 8 hours (in other words for 24 hours of exposure) is unlikely to reduce survival of subadult or adult mussels.

  5. Answer to Question #21 ["Snell's law in quantum mechanics," Steve Blau and Brad Halfpap, Am. J. Phys. 63 (7), 583 (1995)]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Milonni, P.W.

    1996-07-01

    The quantum mechanical description of reflection and refraction in terms of photons is further extrapolated. The probability amplitude for a photon to be found inside or outside the medium is derived.

  6. Late quaternary environments, Denali National Park and Preserve, Alaska

    USGS Publications Warehouse

    Elias, S.A.; Short, S.K.; Waythomas, C.F.

    1996-01-01

    Late Quaternary pollen, plant macrofossils, and insect fossils were studied from sites along three rivers in the foothills north of the Alaska Range in Denali National Park and Preserve. The aim was to carry out a reconnaissance of late Quaternary organic sediments in the region, emphasizing the mid-Wisconsin, or Boutellier, interstadial interval. Samples of probable early- to mid-Boutellier age (ca. 60 000 to 40 000 B.P.) from Unit 2 at the Toklat High Bluffs site indicate open boreal woodland with dense alder shrub vegetation. Organic Unit 1 at the Foraker River Slump site indicates open taiga with shrubs of probable Boutellier age. Fossil evidence from the youngest horizon in this unit indicates graminoid tundra environments, marking the transition from interstadial to late Wisconsin glacial environments. Early Holocene samples from the Foraker exposures suggest birch shrub tundra; coniferous forest apparently became established only after 6500 B.P. Local variations in forest composition at the Foraker and Sushana sites were probably the result of disturbances, such as fire.

  7. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes

    PubMed Central

    De Gregorio, Sofia; Camarda, Marco

    2016-01-01

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system, yet eruptive activity occurs only intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally carries the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years. PMID:27456812

  8. A novel approach to estimate the eruptive potential and probability in open conduit volcanoes.

    PubMed

    De Gregorio, Sofia; Camarda, Marco

    2016-07-26

    In open conduit volcanoes, volatile-rich magma continuously enters the feeding system, yet eruptive activity occurs only intermittently. From a practical perspective, the continuous steady input of magma into the feeding system is not able to produce eruptive events alone; rather, surpluses of magma input are required to trigger eruptive activity. The greater the amount of surplus magma within the feeding system, the higher the eruptive probability. Despite this observation, eruptive potential evaluations are commonly based on the regular magma supply, and in eruptive probability evaluations any magma input generally carries the same weight. Conversely, herein we present a novel approach based on the quantification of the surplus of magma progressively intruded into the feeding system. To quantify the surplus of magma, we suggest processing time series of measurable parameters linked to the magma supply. We successfully performed a practical application on Mt Etna using the soil CO2 flux recorded over ten years.
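
    A toy version of the proposed quantification, accumulating a monitored proxy (such as soil CO2 flux) above its steady baseline; the series and the baseline rule below are synthetic placeholders:

      # Surplus accumulation: integrate the proxy above its steady baseline.
      import numpy as np

      rng = np.random.default_rng(6)
      flux = 100 + rng.normal(0, 5, 3650)   # daily proxy for magma supply (synthetic)
      flux[1200:1260] += 60                 # a transient surplus episode

      baseline = np.median(flux)            # stand-in for the regular supply level
      surplus = np.cumsum(np.clip(flux - baseline, 0, None))
      print(f"surplus accumulated by day 1300: {surplus[1300]:.0f} (proxy units)")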

  9. Acute bilateral leg amputation following combat injury in UK servicemen.

    PubMed

    Penn-Barwell, J G; Bennett, P M; Kay, A; Sargeant, I D

    2014-07-01

    This study aims to characterise the injuries and surgical management of British servicemen sustaining bilateral lower limb amputations. The UK Military Trauma Registry was searched for all cases of primary bilateral lower limb amputation sustained between March 2004 and March 2010. Amputations were excluded if they occurred more than 7 days after injury or if they were at the ankle or more distal. There were 1694 UK military patients injured or killed during this six-year study period. Forty-three of these (2.8%) were casualties with bilateral lower limb amputations. All casualties were men with a mean age of 25.1 years (SD 4.3): all were injured in Afghanistan by Improvised Explosive Devices (IEDs). Six casualties were in vehicles when they were injured with the remaining 37 (80%) patrolling on foot. The mean New Injury Severity Score (NISS) was 48.2 (SD 13.2): four patients had a maximum score of 75. The mean TRISS probability of survival was 60% (SD 39.4), with 18 having a survival probability of less than 50% i.e. unexpected survivors. The most common amputation pattern was bilateral trans-femoral (TF) amputations, which was seen in 25 patients (58%). Nine patients also lost an upper limb (triple amputation): no patients survived loss of all four limbs. In retained upper limbs extensive injuries to the hands and forearms were common, including loss of digits. Six patients (14%) sustained an open pelvic fracture. Perineal/genital injury was a feature in 19 (44%) patients, ranging from unilateral orchidectomy to loss of genitalia and permanent requirement for colostomy and urostomy. The mean requirement for blood products was 66 units (SD 41.7). The maximum transfusion was 12 units of platelets, 94 packed red cells, 8 cryoprecipitate, 76 units of fresh frozen plasma and 3 units of fresh whole blood, a total of 193 units of blood products. Our findings detail the severe nature of these injuries together with the massive surgical and resuscitative efforts required to firstly keep patients alive and secondly reconstruct and prepare them for rehabilitation. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.

  10. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers.

    PubMed

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-04-01

    Abacavir is used to treat HIV infection in both adults and children. The recommended paediatric dose is 8 mg kg−1 twice daily up to a maximum of 300 mg twice daily. Weight was identified as the central covariate influencing pharmacokinetics of abacavir in children. A population pharmacokinetic model was developed to describe both once and twice daily pharmacokinetic profiles of abacavir in infants and toddlers. The standard dosage regimen is associated with large interindividual variability in abacavir concentrations. A maximum a posteriori probability Bayesian estimator of AUC(0–t) based on three time points (0, 1 or 2, and 3 h) is proposed to support area under the concentration-time curve (AUC) targeted individualized therapy in infants and toddlers. To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration-time curve (AUC) targeted dosage and individualize therapy. The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation-estimation method. The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC(0–t). The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC(0–t) was developed from the final model and can be used routinely to optimize individual dosing. © 2011 The Authors. British Journal of Clinical Pharmacology © 2011 The British Pharmacological Society.

  11. Exact probability distribution functions for Parrondo's games

    NASA Astrophysics Data System (ADS)

    Zadourian, Rubina; Saakian, David B.; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.
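
    The capital-dependent game is straightforward to simulate with the usual textbook parameters (ε = 0.005); a sketch showing two individually losing games combining, under random switching, into a winning one:

      # Parrondo's paradox, capital-dependent version, with textbook parameters.
      import random

      def play(game, capital, eps=0.005):
          """One round; game B's coin depends on whether capital is divisible by 3."""
          if game == 'A':
              p = 0.5 - eps
          else:
              p = (0.1 - eps) if capital % 3 == 0 else (0.75 - eps)
          return capital + (1 if random.random() < p else -1)

      def average_gain(strategy, rounds=1000, runs=500):
          total = 0
          for _ in range(runs):
              capital = 0
              for _ in range(rounds):
                  capital = play(strategy(), capital)
              total += capital
          return total / runs

      random.seed(7)
      print(average_gain(lambda: 'A'))                  # losing on average
      print(average_gain(lambda: 'B'))                  # losing on average
      print(average_gain(lambda: random.choice('AB')))  # winning on average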

  12. Associating an ionospheric parameter with major earthquake occurrence throughout the world

    NASA Astrophysics Data System (ADS)

    Ghosh, D.; Midya, S. K.

    2014-02-01

    Ionospheric variation analysis is increasingly favoured over lithospheric monitoring as a source of precursors for earthquake forecasting. The current paper highlights the association of major (Ms ≥ 6.0) and medium (4.0 ≤ Ms < 6.0) earthquake occurrences throughout the world with different ranges of the Ionospheric Earthquake Parameter (IEP), where 'Ms' is earthquake magnitude on the Richter scale. From statistical and graphical analyses, it is concluded that the probability of earthquake occurrence is maximum when the defined parameter lies within the range of 0-75 (lower range). In the higher ranges, earthquake occurrence probability gradually decreases. A probable explanation is also suggested.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.

  14. Exact probability distribution functions for Parrondo's games.

    PubMed

    Zadourian, Rubina; Saakian, David B; Klümper, Andreas

    2016-12-01

    We study the discrete time dynamics of Brownian ratchet models and Parrondo's games. Using the Fourier transform, we calculate the exact probability distribution functions for both the capital dependent and history dependent Parrondo's games. In certain cases we find strong oscillations near the maximum of the probability distribution with two limiting distributions for odd and even number of rounds of the game. Indications of such oscillations first appeared in the analysis of real financial data, but now we have found this phenomenon in model systems and a theoretical understanding of the phenomenon. The method of our work can be applied to Brownian ratchets, molecular motors, and portfolio optimization.

  15. Simulation-Based Model Checking for Nondeterministic Systems and Rare Events

    DTIC Science & Technology

    2016-03-24

    year, we have investigated AO* search and Monte Carlo Tree Search algorithms to complement and enhance CMU's SMCMDP. 1 Final Report, March 14... tree, so we can use it to find the probability of reachability for a property in PRISM's Probabilistic LTL. By finding the maximum probability of... savings, particularly when handling very large models. 2.3 Monte Carlo Tree Search: The Monte Carlo sampling process in SMCMDP can take a long time to

  16. E-O Sensor Signal Recognition Simulation: Computer Code SPOT I.

    DTIC Science & Technology

    1978-10-01

    scattering phase function PDCO, defined at the specified wavelength, given for each of the scattering angles defined. Currently, a maximum of sixty-four... PHASE MATRIX DATA IS DEFINED; PDCO: AVERAGE PROBABILITY FOR PHASE MATRIX DEFINITION; NPROB: PROBLEM NUMBER. [Fig. 12: Flowchart for the SPOT computer code.] ... E10.1 WLAM(N): Wavelength at which the aerosol single-scattering phase function set is defined (microns). 3 8E10.1 PDCO(N,I): Average probability for

  17. Cargo Throughput and Survivability Trade-Offs in Force Sustainment Operations

    DTIC Science & Technology

    2008-06-01

    more correlation with direct human activity. Mines are able to simply ‘sit and wait,’ thus allowing for easier mathematical and statistical... 1.2) Since the ships will likely travel in groups along the same programmed GPS track, modeling several transitors to the identical path is assumed... setting of 1/2 was used for the actuation probability maximum. The ‘threat profile’ will give the probability that the nth transitor will hit a mine

  18. [Acoustic conditions in open plan offices - Pilot test results].

    PubMed

    Mikulski, Witold

    The main source of noise in open-plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptations. This article presents the results of pilot tests and acoustic evaluation of open space rooms. Acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterion values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33-0.55 s (maximum permissible value in offices - 0.6 s; the criterion was met), sound absorption coefficient in relation to 1 m2 of the room's plan was 0.77-1.58 m2 (minimum permissible value - 1.1 m2; 2 out of 6 rooms met the criterion), distraction distance was 8.5-14 m (maximum permissible value - 5 m; none of the rooms met the criterion), A-weighted sound pressure level of speech at a distance of 4 m was 43.8-54.7 dB (maximum permissible value - 48 dB; 2 out of 6 rooms met the criterion), spatial decay rate of the speech was 1.8-6.3 dB (minimum permissible value - 7 dB; none of the rooms met the criterion). Standard acoustic treatment, containing sound absorbing suspended ceiling, sound absorbing materials on the walls, carpet flooring and sound absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available under the Open Access model and licensed under a CC BY-NC 3.0 PL license.

  19. 46 CFR 172.065 - Damage stability.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of sinkage, heel, and trim, must be below the lower edge of an opening through which progressive... sliding watertight door; or (vi) Side scuttle of the non-opening type. (2) Heel angle. The maximum angle of heel must not exceed 25 degrees, except that this angle may be increased to 30 degrees if no deck...

  20. 46 CFR 172.065 - Damage stability.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... of sinkage, heel, and trim, must be below the lower edge of an opening through which progressive... sliding watertight door; or (vi) Side scuttle of the non-opening type. (2) Heel angle. The maximum angle of heel must not exceed 25 degrees, except that this angle may be increased to 30 degrees if no deck...

  1. Mandibular kinematics and maximum voluntary bite force following segmental resection of the mandible without or with reconstruction.

    PubMed

    Linsen, Sabine S; Oikonomou, Annina; Martini, Markus; Teschke, Marcus

    2018-05-01

    The purpose was to analyze mandibular kinematics and maximum voluntary bite force in patients following segmental resection of the mandible without and with reconstruction (autologous bone, alloplastic total temporomandibular joint replacement (TMJ TJR)). Subjects operated on between April 2002 and August 2014 were enrolled in the study. Condylar (CRoM) and incisal (InRoM) range of motion and deflection during opening, condylar retrusion, incisal lateral excursion, mandibular rotation angle during opening, and maximum voluntary bite force were determined on the non-affected side and compared between groups. The influence of co-factors (defect size, soft tissue deficit, neck dissection, radiotherapy, occlusal contact zones (OCZ), and time) was determined. Twelve non-reconstructed and 26 reconstructed patients (13 autologous, 13 TMJ TJR) were included in the study. InRoM opening and bite force were significantly higher (P ≤ .024), and both condylar and incisal deflection during opening significantly lower (P ≤ .027), in reconstructed patients compared with non-reconstructed patients. Differences between the autologous and the TMJ TJR groups were not statistically significant. The co-factors defect size, soft tissue deficit, and neck dissection had the greatest impact on kinematics, and the number of OCZs on bite force. Reconstructed patients (both autologous and TMJ TJR) have better overall function than non-reconstructed patients. Reconstruction of segmental mandibular resection has positive effects on mandibular function. TMJ TJR seems to be a suitable technique for the reconstruction of mandibular defects including the TMJ complex.

  2. In Vivo Measurement of Surface Pressures and Retraction Distances Applied on Abdominal Organs During Surgery.

    PubMed

    Shah, Dignesh; Alderson, Andrew; Corden, James; Satyadas, Thomas; Augustine, Titus

    2018-02-01

    This study undertook the in vivo measurement of surface pressures applied by the fingers of the surgeon during typical representative retraction movements of key human abdominal organs during both open and hand-assisted laparoscopic surgery. Surface pressures were measured using a flexible thin-film pressure sensor for 35 typical liver retractions to access the gall bladder, 36 bowel retractions, 9 kidney retractions, 8 stomach retractions, and 5 spleen retractions across 12 patients undergoing open and laparoscopic abdominal surgery. The maximum and root mean square surface pressures were calculated for each organ retraction. The maximum surface pressures applied to these key abdominal organs are in the range 1 to 41 kPa, and the average maximum surface pressure for all organs and procedures was 14 ± 3 kPa. Surface pressure relaxation during the retraction hold period was observed. Generally, the surface pressures are higher, and the rate of surface pressure relaxation is lower, in the more confined hand-assisted laparoscopic procedures than in open surgery. Combined video footage and pressure sensor data for retraction of the liver in open surgery enabled correlation of organ retraction distance with surface pressure application. The data provide a platform to design strategies for the prevention of retraction injuries. They also form a basis for the design of next-generation organ retraction and space creation surgical devices with embedded sensors that can further quantify intraoperative retraction forces to reduce injury or trauma to organs and surrounding tissues.

  3. Shelf and open-ocean calcareous phytoplankton assemblages across the Paleocene-Eocene thermal maximum: Implications for global productivity gradients

    USGS Publications Warehouse

    Gibbs, S.J.; Bralower, T.J.; Bown, Paul R.; Zachos, J.C.; Bybell, L.M.

    2006-01-01

    Abrupt global warming and profound perturbation of the carbon cycle during the Paleocene-Eocene Thermal Maximum (PETM, ca. 55 Ma) have been linked to a massive release of carbon into the ocean-atmosphere system. Increased phytoplankton productivity has been invoked to cause subsequent CO2 drawdown, cooling, and environmental recovery. However, interpretations of geochemical and biotic data differ on when and where this increased productivity occurred. Here we present high-resolution nannofossil assemblage data from a shelf section (the U.S. Geological Survey [USGS] drill hole at Wilson Lake, New Jersey) and an open-ocean location (Ocean Drilling Program [ODP] Site 1209, paleoequatorial Pacific). These data combined with published biotic records indicate a transient steepening of shelf-offshelf trophic gradients across the PETM onset and peak, with a decrease in open-ocean productivity coeval with increased nutrient availability in shelf areas. Productivity levels recovered in the open ocean during the later stages of the event, which, coupled with intensified continental weathering rates, may have played an important role in carbon sequestration and CO2 drawdown. © 2006 Geological Society of America.

  4. [OPEN FIELD BEHAVIOR AS A PREDICTIVE CRITERION REFLECTING RAT CORTICOSTERONE LEVEL BEFORE AND AFTER STRESS].

    PubMed

    Umriukhin, P E; Grigorchuk, O S

    2015-12-01

    In the present study we investigated whether open field behavior data can be used to predict the corticosterone level in rat blood plasma before and after stress. It is shown that the open field behavior parameters most reliably reflecting a high probability of significant upregulation of corticosterone after 3 hours of immobilization are a short latency of the first movement and low locomotor activity during the test. Rats with high corticosterone under normal non-stress conditions are characterized by low locomotor activity and, conversely, a long latency period before entering the center of the open field.

  5. Estimating the probability that the Taser directly causes human ventricular fibrillation.

    PubMed

    Sun, H; Haemmerich, D; Rahko, P S; Webster, J G

    2010-04-01

    This paper describes the first methodology and results for estimating the order of probability for Tasers directly causing human ventricular fibrillation (VF). The probability of an X26 Taser causing human VF was estimated using: (1) current density near the human heart estimated by using 3D finite-element (FE) models; (2) prior data of the maximum dart-to-heart distances that caused VF in pigs; (3) minimum skin-to-heart distances measured in erect humans by echocardiography; and (4) dart landing distribution estimated from police reports. The estimated mean probability of human VF was 0.001 for data from a pig having a chest wall resected to the ribs and 0.000006 for data from a pig with no resection when inserting a blunt probe. The VF probability for a given dart location decreased with the dart-to-heart horizontal distance (radius) on the skin surface.

  6. Storm-based Cloud-to-Ground Lightning Probabilities and Warnings

    NASA Astrophysics Data System (ADS)

    Calhoun, K. M.; Meyer, T.; Kingfield, D.

    2017-12-01

    A new cloud-to-ground (CG) lightning probability algorithm has been developed using machine-learning methods. With storm-based inputs of Earth Networks' in-cloud lightning, Vaisala's CG lightning, multi-radar/multi-sensor (MRMS) radar derived products including the Maximum Expected Size of Hail (MESH) and Vertically Integrated Liquid (VIL), and near storm environmental data including lapse rate and CAPE, a random forest algorithm was trained to produce probabilities of CG lightning up to one hour in advance. As part of the Prototype Probabilistic Hazard Information experiment in the Hazardous Weather Testbed in 2016 and 2017, National Weather Service forecasters were asked to use this CG lightning probability guidance to create rapidly updating probability grids and warnings for the threat of CG lightning for 0-60 minutes. The output from forecasters was shared with end-users, including emergency managers and broadcast meteorologists, as part of an integrated warning team.
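
    A minimal sketch of the machine-learning step described above follows. The feature names, distributions, and labels are synthetic stand-ins, not the operational algorithm's training data.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Train a random forest to emit CG-lightning probabilities from
    # storm-based predictors (all values below are synthetic placeholders).
    rng = np.random.default_rng(0)
    n = 5000
    X = np.column_stack([
        rng.gamma(2.0, 5.0, n),    # stand-in for MESH (mm)
        rng.gamma(2.0, 10.0, n),   # stand-in for VIL (kg/m^2)
        rng.normal(-7.0, 1.0, n),  # stand-in for mid-level lapse rate (K/km)
        rng.gamma(2.0, 500.0, n),  # stand-in for CAPE (J/kg)
    ])
    # Synthetic label: storms with larger MESH/VIL more often produce CG.
    p_true = 1 / (1 + np.exp(-(0.05 * X[:, 0] + 0.02 * X[:, 1] - 2.0)))
    y = rng.random(n) < p_true

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print("P(CG in next hour) for one storm:", model.predict_proba(X[:1])[0, 1])
    ```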

  7. A GRASS GIS Semi-Stochastic Model for Evaluating the Probability of Landslides Impacting Road Networks in Collazzone, Central Italy

    NASA Astrophysics Data System (ADS)

    Taylor, Faith E.; Santangelo, Michele; Marchesini, Ivan; Malamud, Bruce D.

    2013-04-01

    During a landslide triggering event, the tens to thousands of landslides resulting from the trigger (e.g., earthquake, heavy rainfall) may block a number of sections of the road network, posing a risk to rescue efforts, logistics and accessibility to a region. Here, we present initial results from a semi-stochastic model we are developing to evaluate the probability of landslides intersecting a road network and the network-accessibility implications of this across a region. This was performed in the open source GRASS GIS software, where we took 'model' landslides and dropped them on a 79 km2 test region in Collazzone, Umbria, Central Italy, with a given road network (major and minor roads, 404 km in length) and already determined landslide susceptibilities. Landslide areas (AL) were randomly selected from a three-parameter inverse gamma probability density function, consisting of a power-law decay of about -2.4 for medium and large values of AL and an exponential rollover for small values of AL; the rollover (maximum probability) occurs at about AL = 400 m2. The number of landslide areas selected for each triggered event iteration was chosen to have an average density of 1 landslide km-2, i.e. 79 landslide areas chosen randomly for each iteration. Landslides were then 'dropped' over the region semi-stochastically: (i) random points were generated across the study region; (ii) based on the landslide susceptibility map, points were accepted/rejected based on the probability of a landslide occurring at that location. After a point was accepted, it was assigned a landslide area (AL) and length to width ratio. Landslide intersections with roads were then assessed and indices such as the location, number and size of road blockage recorded. The GRASS-GIS model was run 1000 times in a Monte-Carlo type simulation. Initial results show that for a landslide triggering event of 1 landslide km-2 over a 79 km2 region with 404 km of road, the number of road blockages ranges from 6 to 17, resulting in one road blockage every 24-67 km of roads. The average length of road blocked was 33 m. As we progress with model development and more sophisticated network analysis, we believe this semi-stochastic modelling approach will aid civil protection agencies to get a rough idea of the probability of potential road network damage (road block number and extent) as the result of different magnitude landslide triggering event scenarios.
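
    The dropping procedure lends itself to a short sketch. The parameters below are illustrative stand-ins chosen so the area PDF has roughly a -2.4 power-law tail and a rollover near 400 m2; they are not the paper's fitted values, and the susceptibility map is a toy grid.

    ```python
    import numpy as np
    from scipy.stats import invgamma

    # Semi-stochastic "landslide dropping": rejection-sample locations by
    # susceptibility, then assign each accepted point an inverse-gamma area.
    rng = np.random.default_rng(1)
    a = 1.4                    # tail: pdf ~ area**-(a+1) = area**-2.4
    scale = 400.0 * (a + 1.0)  # puts the mode (rollover) near 400 m^2
    n_landslides = 79          # 1 landslide/km^2 over a 79 km^2 region

    # Toy susceptibility map on a grid (values in [0, 1]).
    susceptibility = rng.random((100, 100)) ** 2

    accepted = []
    while len(accepted) < n_landslides:
        i, j = rng.integers(0, 100, size=2)      # (i) random candidate point
        if rng.random() < susceptibility[i, j]:  # (ii) accept by susceptibility
            area = invgamma.rvs(a, scale=scale, random_state=rng)
            accepted.append((i, j, area))

    areas = np.array([area for _, _, area in accepted])
    print(f"median landslide area: {np.median(areas):.0f} m^2")
    ```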

  8. Voting for a personality: Do first impressions and self-evaluations affect voting decisions?

    PubMed Central

    Koppensteiner, Markus; Stephan, Pia

    2014-01-01

    Participants were asked to assess their own personality (i.e. Big Five scales), the personality of politicians shown in brief silent video clips, and the probability that they would vote for these politicians. Response surface analyses (RSA) revealed noteworthy effects of self-ratings and observer-ratings of openness, agreeableness, and emotional stability on voting probability. Furthermore, the participants perceived themselves as being more open, more agreeable, more emotionally stable, and more extraverted than the average politician. The study supports previous findings that first impressions affect decision making on important issues. Results also indicate that when only nonverbal information is available people prefer political candidates they perceive as having personality traits they value in themselves. PMID:25089064

  9. 49 CFR 325.59 - Measurement procedure; stationary test.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... made of the sound level generated by a stationary motor vehicle as follows: (a) Park the motor vehicle... open throttle. Return the engine's speed to idle. (e) Observe the maximum reading on the sound level... this section until the first two maximum sound level readings that are within 2 dB(A) of each other are...

  10. Comparison of Traditional and Open-Access Appointment Scheduling for Exponentially Distributed Service Time.

    PubMed

    Yan, Chongjun; Tang, Jiafu; Jiang, Bowen; Fung, Richard Y K

    2015-01-01

    This paper compares the performance measures of traditional appointment scheduling (AS) with those of an open-access appointment scheduling (OA-AS) system with exponentially distributed service time. A queueing model is formulated for the traditional AS system with no-show probability. The OA-AS models assume that all patients who call before the session begins will show up for the appointment on time. Two types of OA-AS systems are considered: with a same-session policy and with a same-or-next-session policy. Numerical results indicate that the superiority of OA-AS systems is not as obvious as those under deterministic scenarios. The same-session system has a threshold of relative waiting cost, after which the traditional system always has higher total costs, and the same-or-next-session system is always preferable, except when the no-show probability or the weight of patients' waiting is low. It is concluded that open-access policies can be viewed as alternative approaches to mitigate the negative effects of no-show patients.

  11. Stochastic Sznajd Model in Open Community

    NASA Astrophysics Data System (ADS)

    Emmert-Streib, Frank

    We extend the Sznajd model for opinion formation by introducing persuasion probabilities for opinions. Moreover, we couple the system to an environment which mimics the application of the opinion. This results in a feedback, representing single-state opinion transitions as opposed to the two-state opinion transitions for persuading other people. We call this model opinion formation in an open community (OFOC). It can be seen as a stochastic extension of the Sznajd model for an open community, because a special choice of parameters recovers the original Sznajd model. We demonstrate the effect of feedback in the OFOC model by applying it to a scenario in which, e.g., opinion B is worse than opinion A but more easily explained to other people. Casually formulated, we analyzed the question of how much better one has to be at persuading other people, given that the opinion itself is worse. Our results reveal a linear relation between the transition probability for opinion B and the influence of the environment on B.

  12. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori approach (MAP). To this end, MEM is used to specify a prior probability distribution of the unknown AIF. The ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. Subsequently, the kinetic parameters can be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM. Kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data. This method is useful for generating the probability distribution based on given information. The proposed method gives an alternative way to assess the input function from the existing data. The proposed method allows a good fit of the data and therefore a better estimation of the kinetic parameters. In the end, this allows for a more reliable use of DCE-MRI. Schattauer GmbH.

  13. Phylogeographic Insights into a Peripheral Refugium: The Importance of Cumulative Effect of Glaciation on the Genetic Structure of Two Endemic Plants

    PubMed Central

    Zecca, Giovanni; Minuto, Luigi

    2016-01-01

    Quaternary glaciations, and mostly the last glacial maximum, have shaped the contemporary distribution of many species in the Alps. However, in the Maritime and Ligurian Alps a more complex picture is suggested by the presence of many Tertiary paleoendemisms and by the divergence time between lineages in one endemic species predating the Late Pleistocene glaciation. The low number of endemic species studied limits the understanding of the processes that took place within this region. We used species distribution models and phylogeographical methods to infer glacial refugia and to reconstruct the phylogeographical pattern of Silene cordifolia All. and Viola argenteria Moraldo & Forneris. The predicted suitable area for the last glacial maximum roughly fitted the current known distribution. Our results suggest that the separation of the major clades predates the last glacial maximum and that the following repeated glacial and interglacial periods probably drove differentiations. The complex phylogeographical pattern observed in the study species suggests that extinction of both populations and genotypes was minimal during the last glacial maximum, probably due to the low impact of glaciations and to topographic complexity in this area. This study underlines the importance of the cumulative effect of previous glacial cycles in shaping the genetic structure of plant species in the Maritime and Ligurian Alps, as expected for a Mediterranean mountain region more than for an Alpine region. PMID:27870888

  14. Ditching Investigation of a 1/12-Scale Model of the Douglas F4D-1 Airplane, TED No. NACA DE 384

    NASA Technical Reports Server (NTRS)

    Windham, John O.

    1956-01-01

    A ditching investigation was made of a 1/12-scale dynamically similar model of the Douglas F4D-1 airplane to study its behavior when ditched. The model was landed in calm water at the Langley tank no. 2 monorail. Various landing attitudes, speeds, and configurations were investigated. The behavior of the model was determined from visual observations, acceleration records, and motion-picture records of the ditchings. Data are presented in tables, sequence photographs, time-history acceleration curves, and attitude curves. From the results of the investigation, it was concluded that the airplane should be ditched at the lowest speed and highest attitude consistent with adequate control (near 22 deg) with landing gear retracted. In a calm-water ditching under these conditions the airplane will probably nose in slightly, then make a fairly smooth run. The fuselage bottom will sustain appreciable damage so that rapid flooding and short flotation time are likely. Maximum longitudinal deceleration will be about 4g and maximum normal acceleration will be about 6g in a landing run of about 420 feet. In a calm-water ditching under similar conditions with the landing gear extended, the airplane will probably dive. Maximum longitudinal decelerations will be about 5-1/2g and maximum normal accelerations will be about 3-1/2g in a landing run of about 170 feet.

  15. The non-parametric Parzen's window in stereo vision matching.

    PubMed

    Pajares, G; de la Cruz, J

    2002-01-01

    This paper presents an approach to the local stereo vision matching problem using edge segments as features with four attributes. From these attributes we compute a matching probability between pairs of features of the stereo images. A correspondence is declared true when this probability is maximal. We introduce a nonparametric strategy based on Parzen's window (1962) to estimate a probability density function (PDF), which is used to obtain the matching probability. This is the main finding of the paper. A comparative analysis of other recent matching methods is included to show that this finding can be justified theoretically. A generalization of the proposed method is made in order to give guidelines about its use with the similarity constraint and also in different environments where other features and attributes are more suitable.
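
    The Parzen-window scoring idea can be sketched in a few lines: estimate a PDF from attribute-difference vectors of known true matches, then rank candidate correspondences by that density. The data below are synthetic placeholders, not the paper's edge-segment attributes.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # Parzen (kernel) density estimate over 4-attribute difference vectors.
    rng = np.random.default_rng(0)
    true_match_diffs = rng.normal(0.0, 0.1, size=(4, 200))  # 4 attributes

    kde = gaussian_kde(true_match_diffs)  # Parzen estimate, Gaussian kernels

    candidates = rng.normal(0.0, 0.5, size=(4, 10))  # candidate pairs' diffs
    scores = kde(candidates)
    best = np.argmax(scores)  # correspondence with maximum matching probability
    print("best candidate:", best, "score:", scores[best])
    ```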

  16. Influence of a MoOx interlayer on the open-circuit voltage in organic photovoltaic cells

    NASA Astrophysics Data System (ADS)

    Zou, Yunlong; Holmes, Russell J.

    2013-07-01

    Metal oxides have been used as interlayers at the anode-organic interface in organic photovoltaic cells (OPVs) to increase the open-circuit voltage (VOC). We examine the role of MoOx in determining the maximum VOC in a planar heterojunction OPV and find that the interlayer strongly affects the temperature dependence of VOC. Boron subphthalocyanine chloride (SubPc)-C60 OPVs that contain no interlayer show a maximum VOC of 1.2 V at low temperature, while those with MoOx show no saturation, reaching VOC > 1.4 V. We propose that the MoOx-SubPc interface forms a Schottky junction that provides an additional contribution to VOC at low temperature.

  17. Anomalous maximum and minimum for the dissociation of a geminate pair in energetically disordered media

    NASA Astrophysics Data System (ADS)

    Govatski, J. A.; da Luz, M. G. E.; Koehler, M.

    2015-01-01

    We study the geminate pair dissociation probability φ as a function of applied electric field and temperature in energetically disordered nD media. Regardless of nD, for certain parameter regions φ versus the disorder degree (σ) displays an anomalous minimum (maximum) at low (moderate) fields. This behavior is compatible with a transport energy which reaches a maximum and then decreases to negative values as σ increases. Our results explain the temperature dependence of the persistent photoconductivity in C60 single crystals going through order-disorder transitions. They also indicate how a spatial variation of the energetic disorder may contribute to higher exciton dissociation in multicomponent donor/acceptor systems.

  18. Spatial and Temporal Analysis of Eruption Locations, Compositions, and Styles in Northern Harrat Rahat, Kingdom of Saudi Arabia

    NASA Astrophysics Data System (ADS)

    Dietterich, H. R.; Stelten, M. E.; Downs, D. T.; Champion, D. E.

    2017-12-01

    Harrat Rahat is a predominantly mafic, 20,000 km2 volcanic field in western Saudi Arabia with an elongate volcanic axis extending 310 km north-south. Prior mapping suggests that the youngest eruptions were concentrated in northernmost Harrat Rahat, where our new geologic mapping and geochronology reveal >300 eruptive vents with ages ranging from 1.2 Ma to a historic eruption in 1256 CE. Eruption compositions and styles vary spatially and temporally within the volcanic field, where extensive alkali basaltic lavas dominate, but more evolved compositions erupted episodically as clusters of trachytic domes and small-volume pyroclastic flows. Analysis of vent locations, compositions, and eruption styles shows the evolution of the volcanic field and allows assessment of the spatio-temporal probabilities of vent opening and eruption styles. We link individual vents and fissures to eruptions and their deposits using field relations, petrography, geochemistry, paleomagnetism, and 40Ar/39Ar and 36Cl geochronology. Eruption volumes and deposit extents are derived from geologic mapping and topographic analysis. Spatial density analysis with kernel density estimation captures vent densities of up to 0.2 %/km2 along the north-south running volcanic axis, decaying quickly away to the east but reaching a second, lower high along a secondary axis to the west. Temporal trends show slight younging of mafic eruption ages to the north in the past 300 ka, as well as clustered eruptions of trachytes over the past 150 ka. Vent locations, timing, and composition are integrated through spatial probability weighted by eruption age for each compositional range to produce spatio-temporal models of vent opening probability. These show that the next mafic eruption is most probable within the north end of the main (eastern) volcanic axis, whereas more evolved compositions are most likely to erupt within the trachytic centers further to the south. These vent opening probabilities, combined with corresponding eruption properties, can be used as the basis for lava flow and tephra fall hazard maps.

  19. 2002 Commercial Space Transportation Lecture Series, volumes 1,2, and 3

    DOT National Transportation Integrated Search

    2003-04-01

    This document includes three presentations which are part of the 2002 Commercial Space Transportation Lecture Series: The Early Years, AST - A Historical Perspective; Approval of Reentry Vehicles; and, Setting Insurance Requirements: Maximum Probable...

  20. 14 CFR Appendix A to Part 440 - Information Requirements for Obtaining a Maximum Probable Loss Determination for Licensed or...

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... description of any payload, including type (e.g., telecommunications, remote sensing), propellants, and hazardous components ...

  1. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  2. Probability distribution of extreme share returns in Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate the suitable probability distribution to model the extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share prices data obtained from Bursa Malaysia over the period of 2000 to 2012. The study starts with summary statistics of the data, which provide a clue on the likely candidates for the best fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO) and Pearson (PE3) distributions, is evaluated. The method of L-moments is used in parameter estimation. Based on several goodness of fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best fitted distributions to represent the weekly and monthly maximum share returns, respectively, in the Malaysian stock market during the studied period.
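
    A sketch of the block-maxima workflow on synthetic data follows. Note one substitution: the paper estimates parameters by L-moments, while scipy's maximum-likelihood fit is used here as a stand-in.

    ```python
    import numpy as np
    from scipy.stats import genextreme, genpareto

    # Weekly maxima of synthetic heavy-tailed daily returns, fitted with two
    # of the candidate extreme-value families named above.
    rng = np.random.default_rng(0)
    daily_returns = rng.standard_t(df=4, size=(676, 5)) * 0.01  # 676 weeks
    weekly_max = daily_returns.max(axis=1)

    for name, dist in [("GEV", genextreme), ("GPA", genpareto)]:
        params = dist.fit(weekly_max)  # MLE stand-in for L-moment estimation
        # Compare a high quantile implied by each fitted distribution.
        print(name, "fitted 99th percentile:", dist(*params).ppf(0.99))
    ```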

  3. Estimation of descriptive statistics for multiply censored water quality data

    USGS Publications Warehouse

    Helsel, Dennis R.; Cohn, Timothy A.

    1988-01-01

    This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than simple substitution procedures now commonly in use. Therefore simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, less than values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
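
    The maximum likelihood approach for multiply censored data can be sketched directly: detected values contribute the log-density, and nondetects contribute the log-CDF at their detection limit. The lognormal model and synthetic data below are assumptions for illustration, not the paper's simulation design.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Left-censored data with multiple detection limits, lognormal model.
    rng = np.random.default_rng(0)
    true = np.exp(rng.normal(1.0, 0.8, size=100))   # true concentrations
    dls = rng.choice([1.0, 2.0, 5.0], size=100)     # multiple detection limits
    detected = true >= dls
    obs = np.where(detected, true, dls)             # nondetects recorded as DL

    def neg_loglik(theta):
        mu, sigma = theta[0], abs(theta[1]) + 1e-9
        z = (np.log(obs) - mu) / sigma
        # Detects: log-density of the lognormal; nondetects: log-CDF at DL.
        ll = np.where(detected,
                      norm.logpdf(z) - np.log(sigma * obs),
                      norm.logcdf(z))
        return -ll.sum()

    res = minimize(neg_loglik, x0=[0.0, 1.0], method="Nelder-Mead")
    print("MLE of (log-mean, log-sd):", res.x)
    ```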

  4. Conceptual design study of Fusion Experimental Reactor (FY86 FER): Safety

    NASA Astrophysics Data System (ADS)

    Seki, Yasushi; Iida, Hiromasa; Honda, Tsutomu

    1987-08-01

    This report describes the study on safety for the FER (Fusion Experimental Reactor), which has been designed as a next-step machine to the JT-60. Though the final purpose of this study is to form an image of the design basis accident and the maximum credible accident and to assess their risk or probability for the FER plant system, the emphasis of this year's study is placed on the fuel-gas circulation system, where the tritium inventory is maximum. The report consists of two chapters. The first chapter summarizes the FER system and describes FMEA (Failure Mode and Effect Analysis) and the related accident progression sequences for the FER plant system as a whole. The second chapter of this report is focused on the fuel-gas circulation system, including purification, isotope separation and storage. The probability of risk is assessed by the probabilistic risk analysis (PRA) procedure based on FMEA, ETA and FTA.

  5. Detection of the earth with the SETI microwave observing system assumed to be operating out in the Galaxy

    NASA Technical Reports Server (NTRS)

    Billingham, John; Tarter, Jill

    1989-01-01

    The maximum range is calculated at which radar signals from the earth could be detected by a search system similar to the NASA SETI Microwave Observing Project (SETI MOP) assumed to be operating out in the Galaxy. Figures are calculated for the Targeted Search and for the Sky Survey parts of the MOP, both planned to be operating in the 1990s. The probability of detection is calculated for the two most powerful transmitters, the planetary radar at Arecibo (Puerto Rico) and the ballistic missile early warning systems (BMEWSs), assuming that the terrestrial radars are only in the eavesdropping mode. It was found that, for the case of a single transmitter within the maximum range, the highest probability is for the sky survey detecting BMEWSs; this is directly proportional to BMEWS sky coverage and is therefore 0.25.
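
    The core range calculation is an inverse-square-law argument, sketched below with rough stand-in numbers; the EIRP and sensitivity values are assumptions, not the paper's figures.

    ```python
    import math

    # An eavesdropped radar is detectable out to the range where its flux
    # falls to the search system's minimum detectable flux.
    EIRP_W = 1e13   # assumed effective isotropic radiated power of the radar
    S_min = 1e-26   # assumed detectable flux of the search system (W/m^2)

    # Flux at distance R is EIRP / (4 pi R^2); solve for R at S_min.
    R_max_m = math.sqrt(EIRP_W / (4.0 * math.pi * S_min))
    print(f"max detection range ≈ {R_max_m / 9.461e15:.0f} light-years")
    ```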

  6. Reducing the open porosity of pyroboroncarbon articles

    NASA Astrophysics Data System (ADS)

    Martyushov, G. G.; Zakharevich, A. M.; Pichkhidze, S. Ya.; Koshuro, V. A.

    2016-02-01

    It is established that a decrease in the open porosity of pyroboroncarbon, a pyrolytic glassy composite material of interest for manufacturing prosthetic heart valves (PHVs), can be achieved via impregnation of articles with an alcohol solution of 3-aminopropyltriethoxysilane and subsequent thermal treatment. The maximum roughness height and linear size of open pores on the surface of PHV parts made of pyroboroncarbon can additionally be reduced by final mechanical processing of a silicon oxide film formed on the surface.

  7. Brownian motion surviving in the unstable cubic potential and the role of Maxwell's demon

    NASA Astrophysics Data System (ADS)

    Ornigotti, Luca; Ryabov, Artem; Holubec, Viktor; Filip, Radim

    2018-03-01

    The trajectories of an overdamped particle in a highly unstable potential diverge so rapidly, that the variance of position grows much faster than its mean. A description of the dynamics by moments is therefore not informative. Instead, we propose and analyze local directly measurable characteristics, which overcome this limitation. We discuss the most probable particle position (position of the maximum of the probability density) and the local uncertainty in an unstable cubic potential, V(x) ~ x^3, both in the transient regime and in the long-time limit. The maximum shifts against the acting force as a function of time and temperature. Simultaneously, the local uncertainty does not increase faster than the observable shift. In the long-time limit, the probability density naturally attains a quasistationary form. We interpret this process as a stabilization via the measurement-feedback mechanism, the Maxwell demon, which works as an entropy pump. The rules for measurement and feedback naturally arise from the basic properties of the unstable dynamics. All reported effects are inherent in any unstable system. Their detailed understanding will stimulate the development of stochastic engines and amplifiers and, later, their quantum counterparts.
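
    A minimal Euler-Maruyama sketch of the overdamped dynamics in V(x) ~ x^3 follows, tracking the most probable position as the histogram maximum over surviving trajectories; parameters and units are illustrative.

    ```python
    import numpy as np

    # Overdamped Langevin dynamics in the unstable cubic potential.
    rng = np.random.default_rng(0)
    n, dt, steps, temp = 20000, 1e-4, 2000, 0.5
    x = np.zeros(n)
    alive = np.ones(n, dtype=bool)

    for _ in range(steps):
        force = -3.0 * x[alive] ** 2                 # -dV/dx for V = x^3
        x[alive] += force * dt + np.sqrt(2 * temp * dt) * rng.normal(size=alive.sum())
        alive &= x > -20.0                           # drop diverged trajectories

    hist, edges = np.histogram(x[alive], bins=200, density=True)
    x_peak = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    print(f"most probable position after t = {steps * dt}: {x_peak:.3f}")
    ```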

  8. Methods for utilizing maximum power from a solar array

    NASA Technical Reports Server (NTRS)

    Decker, D. K.

    1972-01-01

    A preliminary study of maximum power utilization methods was performed for an outer planet spacecraft using an ion thruster propulsion system and a solar array as the primary energy source. The problems which arise from operating the array at or near the maximum power point of its I-V characteristic are discussed. Two closed loop system configurations which use extremum regulators to track the array's maximum power point are presented. Three open loop systems are presented that either: (1) measure the maximum power of each array section and compute the total array power, (2) utilize a reference array to predict the characteristics of the solar array, or (3) utilize impedance measurements to predict the maximum power utilization. The advantages and disadvantages of each system are discussed and recommendations for further development are made.
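
    A modern illustration of the extremum-regulator idea is the perturb-and-observe maximum power point tracker sketched below; the toy I-V model and parameters are assumptions, not the report's designs.

    ```python
    # Perturb-and-observe tracking of a solar array's maximum power point.
    def array_current(v, i_sc=5.0, v_oc=40.0):
        # Toy photovoltaic I-V model (an assumption, not real panel data).
        return max(0.0, i_sc * (1.0 - (v / v_oc) ** 8))

    v, dv = 20.0, 0.5
    p_prev = v * array_current(v)
    for _ in range(200):
        v += dv
        p = v * array_current(v)
        if p < p_prev:  # power dropped: reverse the perturbation direction
            dv = -dv
        p_prev = p
    print(f"operating point: V ≈ {v:.1f} V, P ≈ {p_prev:.1f} W")
    ```

    The regulator never needs the I-V curve in closed form; it climbs the power hill by trial perturbations, which is why such extremum-seeking loops suit arrays whose characteristics drift with illumination and temperature.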

  9. Metabolic networks evolve towards states of maximum entropy production.

    PubMed

    Unrean, Pornkamol; Srienc, Friedrich

    2011-11-01

    A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such state we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such reduced metabolic network metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations the specific growth rate of the strain continuously increased together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state when the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.

  10. Applications of the principle of maximum entropy: from physics to ecology.

    PubMed

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.
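
    The principle itself is easy to demonstrate on the classic die example: among all distributions on the faces 1-6 with a prescribed mean, the maximum-entropy one has the exponential (Boltzmann) form, with the multiplier fixed by the constraint.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Maximum-entropy distribution on a die with a fixed mean: p_i ∝ exp(-lam*i).
    states = np.arange(1, 7)
    target_mean = 4.5

    def mean_given_lam(lam):
        w = np.exp(-lam * states)
        return (states * w).sum() / w.sum() - target_mean

    lam = brentq(mean_given_lam, -5.0, 5.0)  # solve the moment constraint
    p = np.exp(-lam * states)
    p /= p.sum()
    print("maxent die probabilities:", np.round(p, 4))
    ```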

  11. The estimation of probable maximum precipitation: the case of Catalonia.

    PubMed

    Casas, M Carmen; Rodríguez, Raül; Nieto, Raquel; Redaño, Angel

    2008-12-01

    A brief overview of the different techniques used to estimate the probable maximum precipitation (PMP) is presented. As a particular case, the 1-day PMP over Catalonia has been calculated and mapped with a high spatial resolution. For this purpose, the annual maximum daily rainfall series from 145 pluviometric stations of the Instituto Nacional de Meteorología (Spanish Weather Service) in Catalonia have been analyzed. In order to obtain values of PMP, an enveloping frequency factor curve based on the actual rainfall data of stations in the region has been developed. This enveloping curve has been used to estimate 1-day PMP values of all the 145 stations. Applying the Cressman method, the spatial analysis of these values has been achieved. Monthly precipitation climatological data, obtained from the application of Geographic Information Systems techniques, have been used as the initial field for the analysis. The 1-day PMP at 1 km2 spatial resolution over Catalonia has been objectively determined, varying from 200 to 550 mm. Structures with wavelength longer than approximately 35 km can be identified and, despite their general concordance, the obtained 1-day PMP spatial distribution shows remarkable differences compared to the annual mean precipitation arrangement over Catalonia.
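
    One standard statistical route to PMP is a frequency-factor estimate of the Hershfield type, used here as a generic illustration rather than the paper's enveloping-curve procedure; the series and K_m value below are assumptions.

    ```python
    import numpy as np

    # Frequency-factor PMP estimate: station mean of the annual-maximum daily
    # rainfall plus an enveloping factor K_m times its standard deviation.
    rng = np.random.default_rng(0)
    annual_max_daily_mm = rng.gumbel(60.0, 18.0, size=40)  # synthetic series

    K_m = 15.0  # enveloping frequency factor (assumed, read off a regional curve)
    pmp_1day = annual_max_daily_mm.mean() + K_m * annual_max_daily_mm.std(ddof=1)
    print(f"1-day PMP estimate: {pmp_1day:.0f} mm")
    ```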

  12. Quantum-Inspired Maximizer

    NASA Technical Reports Server (NTRS)

    Zak, Michail

    2008-01-01

    A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function have a higher probability to appear. Special attention is paid to the simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
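
    The quoted idea (introduce the positive function as a probability density so that larger values appear with higher probability) can be mimicked classically with a Metropolis random walk; the sketch below is a stand-in for the report's quantum-classical dynamics, not its algorithm.

    ```python
    import numpy as np

    # Treat a positive objective f as an unnormalized density, draw samples
    # concentrated where f is large, and report the best sample seen.
    rng = np.random.default_rng(0)
    f = lambda x: np.exp(-0.1 * x**2) * (2.0 + np.sin(3.0 * x))  # positive

    x, best = 0.0, 0.0
    for _ in range(50000):
        prop = x + rng.normal(0, 0.5)
        if rng.random() < f(prop) / f(x):  # Metropolis acceptance rule
            x = prop
        if f(x) > f(best):
            best = x
    print(f"approx. global maximizer: x ≈ {best:.3f}, f(x) ≈ {f(best):.3f}")
    ```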

  13. Meta-analysis of Microbial Fuel Cells Using Waste Substrates.

    PubMed

    Dowdy, F Ryan; Kawakita, Ryan; Lange, Matthew; Simmons, Christopher W

    2018-05-01

    Microbial fuel cell experimentation using waste streams is an increasingly popular field of study. One obstacle to comparing studies has been the lack of consistent conventions for reporting results such that meta-analysis can be used for large groups of experiments. Here, 134 unique microbial fuel cell experiments using waste substrates were compiled for analysis. Findings include that coulombic efficiency correlates positively with volumetric power density (p < 0.001), negatively with working volume (p < 0.05), and positively with percentage removal of chemical oxygen demand (p < 0.005). Power density in mW/m2 correlates positively with chemical oxygen demand loading (p < 0.005), and positively with maximum open-circuit voltage (p < 0.05). Finally, single-chamber versus double-chamber reactor configurations differ significantly in maximum open-circuit voltage (p < 0.005). Multiple linear regression to predict either power density or maximum open-circuit voltage produced no significant models due to the amount of multicollinearity between predictor variables. Results indicate that statistically relevant conclusions can be drawn from large microbial fuel cell datasets. Recommendations for future consistency in reporting results following a MIAMFCE convention (Minimum Information About a Microbial Fuel Cell Experiment) are included.

  14. Preliminary Investigation in the NACA Low-Turbulence Tunnel of Low-drag Airfoil Sections Suitable for Admitting Air at the Leading Edge

    NASA Technical Reports Server (NTRS)

    von Doenhoff, Albert E.; Horton, Elmer A.

    1942-01-01

    An investigation was carried out in the NACA low-turbulence tunnel to develop low-drag airfoil sections suitable for admitting air at the leading edge. A thickness distribution having the desired type of pressure distribution was found from tests of a flexible model. Other airfoil shapes were derived from this original shape by varying the thickness, the camber, the leading-edge radius, and the size of the leading-edge opening. Data are presented giving the characteristics of the airfoil shapes in the range of lift coefficients for high-speed and cruising flight. Shapes have been developed which show no substantial increases in drag over those of corresponding plain airfoils with minimum pressure at the same position along the chord. Many of these shapes appear to have higher critical compressibility speeds than plain airfoils of the same thickness. Low-drag airfoil sections have been developed with openings in the leading edge as large as 41.5 percent of the maximum thickness. The range of lift coefficients for low drag in several cases is nearly as large as that of the corresponding plain airfoil sections. Preliminary measurements of maximum lift characteristics indicate that nose-opening sections of the type herein considered may not produce any marked effects on the maximum lift coefficient.

  15. Quantifying the origins of life on a planetary scale.

    PubMed

    Scharf, Caleb; Cronin, Leroy

    2016-07-19

    A simple, heuristic formula with parallels to the Drake Equation is introduced to help focus discussion on open questions for the origins of life in a planetary context. This approach indicates a number of areas where quantitative progress can be made on parameter estimation for determining origins of life probabilities, based on constraints from Bayesian approaches. We discuss a variety of "microscale" factors and their role in determining "macroscale" abiogenesis probabilities on suitable planets. We also propose that impact ejecta exchange between planets with parallel chemistries and chemical evolution could in principle amplify the development of molecular complexity and abiogenesis probabilities. This amplification could be very significant, and both bias our conclusions about abiogenesis probabilities based on the Earth and provide a major source of variance in the probability of life arising in planetary systems. We use our heuristic formula to suggest a number of observational routes for improving constraints on origins of life probabilities.

  16. How Inhomogeneous Site Percolation Works on Bethe Lattices: Theory and Application

    NASA Astrophysics Data System (ADS)

    Ren, Jingli; Zhang, Liying; Siegmund, Stefan

    2016-03-01

    Inhomogeneous percolation, because of its closer relationship with real life, can be more useful and reasonable than homogeneous percolation to illustrate the critical phenomena and dynamical behaviour of complex networks. However, due to its intricacy, the theoretical framework of inhomogeneous percolation is far from complete and many challenging problems are still open. In this paper, we first investigate inhomogeneous site percolation on Bethe lattices with two occupation probabilities, and then extend the result to percolation with m occupation probabilities. The critical behaviour of this inhomogeneous percolation is shown clearly by formulating the percolation probability θ(p) for a given occupation probability p, the critical occupation probability p_c, and the average cluster size χ(p) for p below p_c. Moreover, using the above theory, we discuss in detail the diffusion behaviour of an infectious disease (SARS) and present specific disease-control strategies in consideration of groups with different infection probabilities.
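
    A Monte Carlo sketch of the two-probability case follows, with occupation probabilities alternating by generation, which is one simple way to realize inhomogeneity on a Bethe lattice (an illustrative assumption, not necessarily the paper's exact setup). For coordination number z = 3 the homogeneous threshold is p_c = 1/(z - 1) = 1/2.

    ```python
    import numpy as np

    # Site percolation on a Bethe lattice with branching = 2 children per site
    # and occupation probabilities alternating by generation.
    rng = np.random.default_rng(0)

    def cluster_survives(p, depth=60, branching=2, cap=1000):
        frontier = 1  # start from an occupied root site
        for level in range(depth):
            # Children of each frontier site are occupied independently with
            # the probability assigned to this generation.
            frontier = rng.binomial(frontier * branching, p[level % len(p)])
            if frontier == 0:
                return False
            frontier = min(frontier, cap)  # cap: survival is then near-certain
        return True

    for p1, p2 in [(0.4, 0.4), (0.4, 0.8), (0.7, 0.7)]:
        est = sum(cluster_survives((p1, p2)) for _ in range(2000)) / 2000
        print(f"p = ({p1}, {p2}): estimated percolation probability ≈ {est:.3f}")
    ```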

  17. Quantifying the origins of life on a planetary scale

    NASA Astrophysics Data System (ADS)

    Scharf, Caleb; Cronin, Leroy

    2016-07-01

    A simple, heuristic formula with parallels to the Drake Equation is introduced to help focus discussion on open questions for the origins of life in a planetary context. This approach indicates a number of areas where quantitative progress can be made on parameter estimation for determining origins of life probabilities, based on constraints from Bayesian approaches. We discuss a variety of “microscale” factors and their role in determining “macroscale” abiogenesis probabilities on suitable planets. We also propose that impact ejecta exchange between planets with parallel chemistries and chemical evolution could in principle amplify the development of molecular complexity and abiogenesis probabilities. This amplification could be very significant, and both bias our conclusions about abiogenesis probabilities based on the Earth and provide a major source of variance in the probability of life arising in planetary systems. We use our heuristic formula to suggest a number of observational routes for improving constraints on origins of life probabilities.

  18. De-identification Methods for Open Health Data: The Case of the Heritage Health Prize Claims Dataset

    PubMed Central

    Arbuckle, Luk; Koru, Gunes; Eze, Benjamin; Gaudette, Lisa; Neri, Emilio; Rose, Sean; Howard, Jeremy; Gluck, Jonathan

    2012-01-01

    Background There are many benefits to open datasets. However, privacy concerns have hampered the widespread creation of open health data. There is a dearth of documented methods and case studies for the creation of public-use health data. We describe a new methodology for creating a longitudinal public health dataset in the context of the Heritage Health Prize (HHP). The HHP is a global data mining competition to predict, by using claims data, the number of days patients will be hospitalized in a subsequent year. The winner will be the team or individual with the most accurate model past a threshold accuracy, and will receive a US $3 million cash prize. HHP began on April 4, 2011, and ends on April 3, 2013. Objective To de-identify the claims data used in the HHP competition and ensure that it meets the requirements in the US Health Insurance Portability and Accountability Act (HIPAA) Privacy Rule. Methods We defined a threshold risk consistent with the HIPAA Privacy Rule Safe Harbor standard for disclosing the competition dataset. Three plausible re-identification attacks that can be executed on these data were identified. For each attack the re-identification probability was evaluated. If it was deemed too high then a new de-identification algorithm was applied to reduce the risk to an acceptable level. We performed an actual evaluation of re-identification risk using simulated attacks and matching experiments to confirm the results of the de-identification and to test sensitivity to assumptions. The main metric used to evaluate re-identification risk was the probability that a record in the HHP data can be re-identified given an attempted attack. Results An evaluation of the de-identified dataset estimated that the probability of re-identifying an individual was .0084, below the .05 probability threshold specified for the competition. The risk was robust to violations of our initial assumptions. Conclusions It was possible to ensure that the probability of re-identification for a large longitudinal dataset was acceptably low when it was released for a global user community in support of an analytics competition. This is an example of, and methodology for, achieving open data principles for longitudinal health data. PMID:22370452

  19. On the Discriminant Analysis in the 2-Populations Case

    NASA Astrophysics Data System (ADS)

    Rublík, František

    2008-01-01

    The empirical Bayes Gaussian rule, which in the normal case yields good values of the probability of total error, may yield high values of the maximum probability of error. From this point of view the presented modified version of the classification rule of Broffitt, Randles and Hogg appears to be superior. The modification included in this paper is termed the WR method, and the choice of its weights is discussed. The mentioned methods are also compared with the K nearest neighbours classification rule.

  20. Game Intelligence in Team Sports

    PubMed Central

    Lennartsson, Jan; Lidström, Nicklas; Lindberg, Carl

    2015-01-01

    We set up a game theoretic framework to analyze a wide range of situations from team sports. A fundamental idea is the concept of potential: the probability of the offense scoring the next goal minus the probability that the next goal is made by the defense. We develop categorical as well as continuous models, and obtain optimal strategies for both offense and defense. A main result is that the optimal defensive strategy is to minimize the maximum potential of all offensive strategies. PMID:25970581
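
    For a finite strategy matrix the minimax prescription reduces to a small linear program, as in the sketch below; the potential values are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Rows = defensive strategies, columns = offensive strategies; entries are
    # potentials. The defense seeks the mixed strategy minimizing the maximum
    # expected potential over all offensive responses.
    P = np.array([[0.10, 0.30],
                  [0.25, 0.05],
                  [0.20, 0.20]])

    # Variables: (mix over rows, v). Minimize v s.t. (mix @ P)_j <= v for all j.
    n = P.shape[0]
    c = np.zeros(n + 1); c[-1] = 1.0
    A_ub = np.hstack([P.T, -np.ones((P.shape[1], 1))])
    b_ub = np.zeros(P.shape[1])
    A_eq = np.array([[1.0] * n + [0.0]]); b_eq = [1.0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n + [(None, None)])
    print("defensive mix:", np.round(res.x[:n], 3), "value:", round(res.x[-1], 3))
    ```

    For these made-up numbers the optimal defense should mix the first two rows equally, holding the maximum potential to 0.175, which is below the 0.20 guaranteed by any single pure strategy.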

  1. 46 CFR 172.150 - Survival conditions.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... waterline, in the final condition of sinkage, heel, and trim, must be below the lower edge of openings such.... (5) Side scuttles of the non-opening type. (b) Heel angle. (1) Except as described in paragraph (b)(2) of this section, the maximum angle of heel must not exceed 15 degrees (17 degrees if no part of the...

  2. 46 CFR 172.150 - Survival conditions.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... waterline, in the final condition of sinkage, heel, and trim, must be below the lower edge of openings such.... (5) Side scuttles of the non-opening type. (b) Heel angle. (1) Except as described in paragraph (b)(2) of this section, the maximum angle of heel must not exceed 15 degrees (17 degrees if no part of the...

  3. 46 CFR 151.50-42 - Ethyl ether.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... openings shall be in the top of the tank. (2) Pressure vessel type tanks shall be designed for the maximum pressure to which they may be subjected when pressure is used to discharge the cargo, but in no case shall the design pressure be less than 50 pounds per square inch gauge. All openings shall be in the top of...

  4. 46 CFR 151.50-42 - Ethyl ether.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... openings shall be in the top of the tank. (2) Pressure vessel type tanks shall be designed for the maximum pressure to which they may be subjected when pressure is used to discharge the cargo, but in no case shall the design pressure be less than 50 pounds per square inch gauge. All openings shall be in the top of...

  5. 46 CFR 151.50-42 - Ethyl ether.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... openings shall be in the top of the tank. (2) Pressure vessel type tanks shall be designed for the maximum pressure to which they may be subjected when pressure is used to discharge the cargo, but in no case shall the design pressure be less than 50 pounds per square inch gauge. All openings shall be in the top of...

  6. Earthquake response of heavily damaged historical masonry mosques after restoration

    NASA Astrophysics Data System (ADS)

    Altunışık, Ahmet Can; Fuat Genç, Ali

    2017-10-01

    Restoration works have accelerated substantially in Turkey in the last decade. Many historical buildings, mosques, minarets, bridges, towers and other structures have been restored. With these restorations an important issue arises, namely how the restoration work affects the structure. For this reason, we aimed to investigate the effect of restoration on the earthquake response of a historical masonry mosque, considering the window openings in the masonry dome. For this purpose, we used the Hüsrev Pasha Mosque, which is located in the Ortakapı district in the old city of Van, Turkey. The region of Van is in an active seismic zone; therefore, earthquake analyses were performed in this study. Firstly, a finite element model of the mosque was constructed considering the restoration drawings and 16 window openings in the dome. Then a model was constructed with eight window openings. Structural analyses were performed under dead load and earthquake load, and the mode superposition method was used in the analyses. Maximum displacements, maximum and minimum principal stresses and shear stresses are given with contour diagrams. The results are analyzed according to the Turkish Earthquake Code (TEC, 2007) and compared between the 8- and 16-window-opening cases. The results show that reducing the window openings affected the structural behavior of the mosque positively.

  7. The numerical evaluation of maximum-likelihood estimates of the parameters for a mixture of normal distributions from partially identified samples

    NASA Technical Reports Server (NTRS)

    Walker, H. F.

    1976-01-01

    Likelihood equations determined by the two types of samples, which are necessary conditions for a maximum-likelihood estimate, are considered. These equations suggest certain successive-approximations iterative procedures for obtaining maximum-likelihood estimates. These are generalized steepest ascent (deflected gradient) procedures. It is shown that, with probability 1 as N_0 approaches infinity (regardless of the relative sizes of N_0 and N_i, i = 1, ..., m), these procedures converge locally to the strongly consistent maximum-likelihood estimates whenever the step size is between 0 and 2. Furthermore, the value of the step size which yields optimal local convergence rates is bounded from below by a number which always lies between 1 and 2.
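
    The modern fixed-point relative of these iterations is the EM algorithm; the sketch below runs it for a two-component normal mixture on synthetic, fully unlabeled data (the paper's partially identified setting and adjustable step size are not reproduced).

    ```python
    import numpy as np

    # EM for a two-component normal mixture on synthetic data.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1.0, 300), rng.normal(3, 1.5, 700)])

    w, mu, sd = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])
    for _ in range(200):
        # E-step: posterior responsibility of each component for each point.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted maximum-likelihood updates.
        nk = r.sum(axis=0)
        w, mu = nk / len(x), (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

    print("weights:", np.round(w, 3), "means:", np.round(mu, 3))
    ```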

  8. Stochastic control system parameter identifiability

    NASA Technical Reports Server (NTRS)

    Lee, C. H.; Herget, C. J.

    1975-01-01

    The parameter identification problem of general discrete time, nonlinear, multiple input/multiple output dynamic systems with Gaussian white distributed measurement errors is considered. The system parameterization was assumed to be known. Concepts of local parameter identifiability and local constrained maximum likelihood parameter identifiability were established. A set of sufficient conditions for the existence of a region of parameter identifiability was derived. A computation procedure employing interval arithmetic was provided for finding the regions of parameter identifiability. If the vector of the true parameters is locally constrained maximum likelihood (CML) identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the constrained maximum likelihood estimation sequence will converge to the vector of true parameters.

  9. An open source multivariate framework for n-tissue segmentation with evaluation on public data.

    PubMed

    Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C

    2011-12-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
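
    As a rough illustration of the Bayesian EM formulation described above (this is not Atropos itself, which is a C++/ITK implementation; the Gaussian class model and variable names here are assumptions), a minimal prior-weighted EM segmentation over a 1-D intensity array might look like:

```python
import numpy as np

def em_segment(img, priors, n_iter=10):
    """img: (n,) intensities; priors: (k, n) spatial prior probability maps."""
    k = priors.shape[0]
    mu = np.quantile(img, np.linspace(0.1, 0.9, k))      # crude initialization
    sigma = np.full(k, img.std())
    for _ in range(n_iter):
        # E-step: posterior = spatial prior * Gaussian likelihood, normalized
        like = np.array([np.exp(-0.5 * ((img - mu[j]) / sigma[j]) ** 2) / sigma[j]
                         for j in range(k)])
        post = priors * like
        post /= post.sum(axis=0, keepdims=True)
        # M-step: update the class intensity parameters
        n_j = post.sum(axis=1)
        mu = (post @ img) / n_j
        sigma = np.sqrt(np.array([(post[j] * (img - mu[j]) ** 2).sum() / n_j[j]
                                  for j in range(k)]))
    return post.argmax(axis=0)                            # hard labels
```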

  10. An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data

    PubMed Central

    Tustison, Nicholas J.; Wu, Jue; Cook, Philip A.; Gee, James C.

    2012-01-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs (http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool. PMID:21373993

  11. Sources and historical record of tin and butyl-tin species in a Mediterranean bay (Toulon Bay, France).

    PubMed

    Pougnet, Frédérique; Schäfer, Jörg; Dutruch, Lionel; Garnier, Cédric; Tessier, Erwan; Dang, Duc Huy; Lanceleur, Laurent; Mullot, Jean-Ulrich; Lenoble, Véronique; Blanc, Gérard

    2014-05-01

    Concentrations of inorganic tin (Sn(inorg)), tributyltin (TBT) and its degradation products dibutyltin (DBT) and monobutyltin (MBT) were measured in surface sediments and in two cores from the Toulon Bay, hosting the major French military harbour. Anticipating planned dredging, the aim of the present work is to map and evaluate for the first time the recent and historic contamination of these sediments by inorganic and organic Sn species derived from antifouling paints used for various naval domains including military, trade, tourism and leisure. Tin and butyl-Sn concentrations in the bay varied strongly (4 orders of magnitude), depending on the site, showing maximum values near the shipyards. The concentrations of total Sn (1.3-112 μg g(-1)), TBT (<0.5-2,700 ng g(-1)), DBT (<0.5-1,800 ng g(-1)) and MBT (0.5-1,000 ng g(-1)) generally decreased towards the open sea, i.e. as a function of both distance from the presumed main source and bottom currents. The progressive degradation state of the butyl-Sn species according to the same spatial scheme, together with the enrichment factors, supports the scenario of a strongly polluted bay exporting polluted sediment to the open Mediterranean. Low degradation and the historical records of butyl-Sn species in two (210)Pb-dated sediment cores, representative of the Northern Bay, are consistent with the relatively recent use of TBT by military shipyards and confirm maximum pollution during the 1970s, which will persist in the anoxic sediments for several centuries. The results show that (a) degradation kinetics of butyl-Sn species depend on environmental conditions, (b) the final degradation product Sn(inorg) is by far the dominant species after 10-12 half-life periods and (c) using recent data to reliably assess former TBT contamination requires the use of a modified butyl-Sn degradation index BDI(mod). Resuspension of extremely contaminated subsurface sediments by the scheduled dredging will probably result in the mobilization of substantial amounts of butyl-Sn species.
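
    For orientation only: the classical butyltin degradation index, of which the paper's BDI(mod) is a modified form (the modification itself is defined in the paper and is not reproduced here), is the ratio of the degradation products to the parent compound; values above 1 indicate old, largely degraded TBT inputs.

```latex
% Classical butyltin degradation index (assumed form; BDI_mod is a variant)
\mathrm{BDI} = \frac{[\mathrm{MBT}] + [\mathrm{DBT}]}{[\mathrm{TBT}]}
```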

  12. Maximizing profits in international technology transfer

    NASA Technical Reports Server (NTRS)

    Straube, W.

    1974-01-01

    Maximum profit can be achieved in international technology transfer by observing the following: (1) ethical and open dealing between the parties; (2) maximum knowledge of all facts concerning the technology, its use, the market, competition, prices, and alternatives; (3) the ability to coordinate exports, service, support activities, licensing and cross-licensing; and (4) knowledgeable people who put these factors together.

  13. A Comparison of Electromagnetic Induction Mapping to Measurements of Maximum Effluent Flow Depth for Assessing Flow Paths in Vegetative Treatment Areas

    USDA-ARS?s Scientific Manuscript database

    Vegetative treatment systems (VTSs) are one type of control structure that has shown potential to control runoff from open feedlots. To achieve maximum performance, sheet-flow over the width of the vegetative treatment area (VTA) is required. Tools, such as maps of flow paths through the VTA, are ne...

  14. HDSC/OWP

    Science.gov Websites

    National Oceanic and Atmospheric Administration National Weather Service Office of Water Prediction (OWP) pages covering probable maximum precipitation documents, miscellaneous publications, storm analyses, record point precipitation, and probability analysis for selected historical storm events.

  15. [Risk factor analysis of the patients with solitary pulmonary nodules and establishment of a prediction model for the probability of malignancy].

    PubMed

    Wang, X; Xu, Y H; Du, Z Y; Qian, Y J; Xu, Z H; Chen, R; Shi, M H

    2018-02-23

    Objective: This study aims to analyze the relationship among the clinical features, radiologic characteristics and pathological diagnosis in patients with solitary pulmonary nodules (SPNs), and to establish a prediction model for the probability of malignancy. Methods: Clinical data of 372 patients with solitary pulmonary nodules who underwent surgical resection with a definite postoperative pathological diagnosis were retrospectively analyzed. For these cases, we collected clinical and radiologic features including gender, age, smoking history, history of tumor, family history of cancer, location of the lesion, ground-glass opacity, maximum diameter, calcification, vessel convergence sign, vacuole sign, pleural indentation, spiculation and lobulation. The cases were divided into a modeling group (268 cases) and a validation group (104 cases). A new prediction model was established by logistic regression analysis of the data from the modeling group. The data of the validation group were then used to validate the efficiency of the new model, which was compared with three classical models (the Mayo model, VA model and LiYun model). With the calculated probability values for each model from the validation group, SPSS 22.0 was used to draw receiver operating characteristic curves to assess the predictive value of the new model. Results: 112 benign SPNs and 156 malignant SPNs were included in the modeling group. Multivariable logistic regression analysis showed that gender, age, history of tumor, ground-glass opacity, maximum diameter, and spiculation were independent predictors of malignancy in patients with SPN (P<0.05). The resulting prediction model for the probability of malignancy is p = e^x/(1 + e^x), where x = -4.8029 - 0.743 × gender + 0.057 × age + 1.306 × history of tumor + 1.305 × ground-glass opacity + 0.051 × maximum diameter + 1.043 × spiculation. When the data of the validation group were applied to the four mathematical prediction models, the area under the curve of our model was 0.742, greater than that of the other models (Mayo 0.696, VA 0.634, LiYun 0.681), although the differences between any two of the four models were not significant (P>0.05). Conclusions: Age, gender, history of tumor, ground-glass opacity, maximum diameter and spiculation are independent predictors of malignancy in patients with a solitary pulmonary nodule. This logistic regression prediction model is not inferior to the classical models in estimating the probability of malignancy of SPNs.
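
    As a worked example, the published model can be evaluated directly. The sketch below uses the coefficients quoted in the abstract; the 0/1 coding of the binary predictors (gender, tumor history, ground-glass opacity, spiculation), age in years, and maximum diameter in mm are assumptions, since the abstract does not spell out the coding.

```python
import math

def spn_malignancy_probability(gender, age, tumor_history, ggo,
                               diameter_mm, spiculation):
    """Logistic model from the abstract; binary predictors coded 0/1 (assumed)."""
    x = (-4.8029 - 0.743 * gender + 0.057 * age + 1.306 * tumor_history
         + 1.305 * ggo + 0.051 * diameter_mm + 1.043 * spiculation)
    return math.exp(x) / (1.0 + math.exp(x))

# Example: a 65-year-old with a 20 mm spiculated ground-glass nodule
print(round(spn_malignancy_probability(0, 65, 0, 1, 20, 1), 3))
```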

  16. WPC Maximum Heat Index Forecasts

    Science.gov Websites

    Weather Prediction Center pages presenting maximum heat index and probability forecast maps, including forecasts for the western US, with clickable maps giving the maximum heat index and probability forecasts for each forecast day.

  17. Developing a probability-based model of aquifer vulnerability in an agricultural region

    NASA Astrophysics Data System (ADS)

    Chen, Shih-Kai; Jang, Cheng-Shin; Peng, Yi-Huei

    2013-04-01

    Hydrogeological settings of aquifers strongly influence regional groundwater movement and pollution processes. Establishing a map of aquifer vulnerability is critical for planning a scheme of groundwater quality protection. This study developed a novel probability-based DRASTIC model of aquifer vulnerability in the Choushui River alluvial fan, Taiwan, using indicator kriging to determine various risk categories of contamination potentials based on estimated vulnerability indexes. Categories and ratings of the six parameters in the probability-based DRASTIC model were probabilistically characterized according to two parameter classification methods: selecting the maximum estimation probability and calculating an expected value. Moreover, the probability-based estimation and assessment provided insight into the propagation of parameter uncertainty due to limited observation data. To examine the capacity of the developed probability-based DRASTIC model to predict pollutants, the medium, high, and very high risk categories of contamination potentials were compared with observed nitrate-N concentrations exceeding 0.5 mg/L, indicating anthropogenic groundwater pollution. The results reveal that the developed probability-based DRASTIC model is capable of predicting high nitrate-N groundwater pollution and characterizing parameter uncertainty via the probability estimation processes.
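
    The two parameter classification methods mentioned (selecting the maximum estimation probability versus calculating an expected value) can be illustrated for a single DRASTIC parameter at one location; the class probabilities and ratings below are invented for illustration, not values from the study.

```python
import numpy as np

# Indicator-kriging output at one location: probability of each rating class
ratings = np.array([1, 3, 5, 8, 10])           # DRASTIC ratings of the classes
probs   = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # estimated class probabilities

# Method 1: take the class with the maximum estimation probability
rating_map = ratings[np.argmax(probs)]         # -> 5

# Method 2: probability-weighted expected rating
rating_ev = float(probs @ ratings)             # -> 5.3
print(rating_map, rating_ev)
```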

  18. Using live algae at the anode of a microbial fuel cell to generate electricity.

    PubMed

    Xu, Chang; Poon, Karen; Choi, Martin M F; Wang, Ruihua

    2015-10-01

    The live green microalga Chlorella pyrenoidosa was introduced at the anode of a microbial fuel cell (MFC) to act as an electron donor. By controlling the oxygen content, light intensity, and algal cell density at the anode, the microalgae would generate electricity without requiring externally added substrates. Two models of algal MFCs were constructed with graphite/carbon electrodes and no mediator. The model 1 algal MFC had live microalgae grown at the anode and potassium ferricyanide at the cathode, while the model 2 algal MFC had live microalgae at both the anode and cathode under different growth conditions. Results indicated that the higher current produced in the model 1 algal MFC was obtained at a low light intensity of 2500 lx and an algal cell density of 5 × 10(6) cells/ml; high algal density would limit electricity generation, probably by increasing the oxygen level and causing mass-transfer problems. The maximum power density per unit anode volume obtained in the model 1 algal MFC was relatively high at 6030 mW/m(3), while the maximum power density of 30.15 mW/m(2) was comparable with that of previously reported bacteria-driven MFCs with graphite/carbon electrodes. A much smaller power density of 2.5 mW/m(2) was observed in the model 2 algal MFC. Increasing the algal cell permeability with 4-nitroaniline would increase the open-circuit voltage, while the mitochondria-acting, proton-leak-promoting agents resveratrol and 2,4-dinitrophenol would increase the electric current production in the algal MFC.

  19. A Seakeeping Performance and Affordability Tradeoff Study for the Coast Guard Offshore Patrol Cutter

    DTIC Science & Technology

    2016-06-01

    Figure captions from the excerpt: seakeeping index polar plot for Sea State 4, with all headings relative to the wave motion and velocity given in meters per second; Figure 15, probability and cumulative density functions of annual sea state occurrences in the open ocean, North Pacific. Probability distribution functions are available that describe the likelihood that an operational area will experience a given sea state.

  20. How to Recognize and Avoid Potential, Possible, or Probable Predatory Open-Access Publishers, Standalone, and Hijacked Journals.

    PubMed

    Danevska, Lenche; Spiroski, Mirko; Donev, Doncho; Pop-Jordanova, Nada; Polenakovic, Momir

    2016-11-01

    The Internet has enabled an easy method to search through the vast majority of publications and has improved the impact of scholarly journals. However, it can also pose threats to the quality of published articles. New publishers and journals have emerged: so-called potential, possible, or probable predatory open-access publishers and journals, and so-called hijacked journals. Our aim was to increase awareness and warn scholars, especially young researchers, how to recognize these journals and how to avoid submitting their papers to them. We reviewed and critically analyzed the relevant published literature and Internet sources, together with the personal experience, thoughts, and observations of the authors. The web blog of Jeffrey Beall, University of Colorado, was consulted extensively. Jeffrey Beall is a Denver academic librarian who regularly maintains two lists: the first of potential, possible, or probable predatory publishers, and the second of potential, possible, or probable predatory standalone journals. Aspects related to this topic presented by other authors have been discussed as well. Academics should bear in mind how to differentiate between trustworthy, reliable journals and predatory ones, considering: publication ethics, the peer-review process, international academic standards, indexing and abstracting, preservation in digital repositories, metrics, sustainability, etc.

  1. Probability of stress-corrosion fracture under random loading.

    NASA Technical Reports Server (NTRS)

    Yang, J.-N.

    1972-01-01

    A method is developed for predicting the probability of stress-corrosion fracture of structures under random loadings. The formulation is based on the cumulative damage hypothesis and the experimentally determined stress-corrosion characteristics. Under both stationary and nonstationary random loadings, the mean value and the variance of the cumulative damage are obtained. The probability of stress-corrosion fracture is then evaluated using the principle of maximum entropy. It is shown that, under stationary random loadings, the standard deviation of the cumulative damage increases in proportion to the square root of time, while the coefficient of variation (dispersion) decreases in inverse proportion to the square root of time. Numerical examples are worked out to illustrate the general results.
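
    A hedged illustration of the final step (my reconstruction of the idea, not the paper's exact derivation): if only the mean μ(t) and variance σ²(t) of the cumulative damage D(t) are known, the maximum-entropy density is Gaussian, and the fracture probability follows by integrating the density beyond the damage threshold (taken as 1 in cumulative damage hypotheses):

```latex
p_{D(t)}(d) = \frac{1}{\sqrt{2\pi}\,\sigma(t)}
              \exp\!\left[-\frac{(d-\mu(t))^{2}}{2\sigma^{2}(t)}\right],
\qquad
P_f(t) = P\left[D(t) \ge 1\right]
       = 1 - \Phi\!\left(\frac{1-\mu(t)}{\sigma(t)}\right),
```

    with μ(t) growing linearly and σ(t) growing like the square root of time under stationary loading, consistent with the abstract.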

  2. Ubiquity of Benford's law and emergence of the reciprocal distribution

    DOE PAGES

    Friar, James Lewis; Goldman, Terrance J.; Pérez-Mercader, J.

    2016-04-07

    In this paper, we apply the Law of Total Probability to the construction of scale-invariant probability distribution functions (pdf's), and require that probability measures be dimensionless and unitless under a continuous change of scales. If the scale-change distribution function is scale invariant then the constructed distribution will also be scale invariant. Repeated application of this construction on an arbitrary set of (normalizable) pdf's results again in scale-invariant distributions. The invariant function of this procedure is given uniquely by the reciprocal distribution, suggesting a kind of universality. Finally, we separately demonstrate that the reciprocal distribution results uniquely from requiring maximum entropy for size-class distributions with uniform bin sizes.
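
    A quick numerical check of the scale-invariance claim (my own sketch, not the paper's construction): sampling from the reciprocal density p(x) ∝ 1/x on [1, 10) and applying an arbitrary change of scale leaves the leading-digit frequencies at Benford's log10(1 + 1/d).

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample from the reciprocal distribution p(x) = 1/(x ln 10) on [1, 10)
# via inverse transform: x = 10**u with u uniform on [0, 1).
x = 10.0 ** rng.random(100_000)

# Apply a random change of scale; leading digits should be unaffected.
x *= np.exp(rng.normal(size=x.size))

lead = (x / 10.0 ** np.floor(np.log10(x))).astype(int)   # leading digit 1..9
freq = np.bincount(lead, minlength=10)[1:] / x.size
benford = np.log10(1 + 1 / np.arange(1, 10))
print(np.round(freq, 3))
print(np.round(benford, 3))
```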

  3. Truncated Long-Range Percolation on Oriented Graphs

    NASA Astrophysics Data System (ADS)

    van Enter, A. C. D.; de Lima, B. N. B.; Valesin, D.

    2016-07-01

    We consider different problems within the general theme of long-range percolation on oriented graphs. Our aim is to settle the so-called truncation question, described as follows. We are given probabilities that certain long-range oriented bonds are open; assuming that the sum of these probabilities is infinite, we ask if the probability of percolation is positive when we truncate the graph, disallowing bonds of range above a possibly large but finite threshold. We give some conditions in which the answer is affirmative. We also translate some of our results on oriented percolation to the context of a long-range contact process.

  4. Relations that affect the probability and prediction of nitrate concentration in private wells in the glacial aquifer system in the United States

    USGS Publications Warehouse

    Warner, Kelly L.; Arnold, Terri L.

    2010-01-01

    Nitrate in private wells in the glacial aquifer system is a concern for an estimated 17 million people using private wells because of the proximity of many private wells to nitrogen sources. Yet, less than 5 percent of private wells sampled in this study contained nitrate in concentrations that exceeded the U.S. Environmental Protection Agency (USEPA) Maximum Contaminant Level (MCL) of 10 mg/L (milligrams per liter) as N (nitrogen). However, this small group with nitrate concentrations above the USEPA MCL includes some of the highest nitrate concentrations detected in groundwater from private wells (77 mg/L). Median nitrate concentration measured in groundwater from private wells in the glacial aquifer system (0.11 mg/L as N) is lower than that in water from other unconsolidated aquifers and is not strongly related to surface sources of nitrate. Background concentration of nitrate is less than 1 mg/L as N. Although overall nitrate concentration in private wells was low relative to the MCL, concentrations were highly variable over short distances and at various depths below land surface. Groundwater from wells in the glacial aquifer system at all depths was a mixture of old and young water. Oxidation and reduction potential changes with depth and groundwater age were important influences on nitrate concentrations in private wells. A series of 10 logistic regression models was developed to estimate the probability of nitrate concentration above various thresholds. The threshold concentration (1 to 10 mg/L) affected the number of variables in the model. Fewer explanatory variables are needed to predict nitrate at higher threshold concentrations. The variables that were identified as significant predictors for nitrate concentration above 4 mg/L as N included well characteristics such as open-interval diameter, open-interval length, and depth to top of open interval. Environmental variables in the models were mean percent silt in soil, soil type, and mean depth to saturated soil. The 10-year mean (1992-2001) application rate of nitrogen fertilizer applied to farms was included as the potential source variable. A linear regression model also was developed to predict mean nitrate concentrations in well networks. The model is based on network averages because nitrate concentrations are highly variable over short distances. Using values for each of the predictor variables averaged by network (network mean value) from the logistic regression models, the linear regression model developed in this study predicted the mean nitrate concentration in well networks with a 95 percent confidence in predictions.
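
    To make the modeling idea concrete, here is a hedged sketch of fitting a series of threshold-specific logistic models in the spirit of the 10-model series described above; the column names follow the predictors listed in the abstract, but the data handling and the use of scikit-learn are entirely illustrative.

```python
# Sketch: one logistic model per nitrate threshold (1..10 mg/L as N).
# Requires pandas and scikit-learn; column names are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression

predictors = ["open_interval_diameter", "open_interval_length",
              "depth_to_top_of_open_interval", "mean_percent_silt",
              "mean_depth_to_saturated_soil", "farm_fertilizer_rate"]

def fit_threshold_models(wells: pd.DataFrame):
    """Fit P(nitrate > threshold) for each threshold from 1 to 10 mg/L."""
    models = {}
    for threshold in range(1, 11):
        y = (wells["nitrate_mg_per_L"] > threshold).astype(int)
        models[threshold] = LogisticRegression(max_iter=1000).fit(
            wells[predictors], y)
    return models
```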

  5. Universal laws of human society's income distribution

    NASA Astrophysics Data System (ADS)

    Tao, Yong

    2015-10-01

    General equilibrium equations in economics play the same role as many-body Newtonian equations in physics. Accordingly, each solution of the general equilibrium equations can be regarded as a possible microstate of the economic system. Because Arrow's Impossibility Theorem and Rawls' principle of social fairness provide powerful support for the hypothesis of equal probability, the principle of maximum entropy is applicable in a just and equilibrium economy, so that an income distribution will occur spontaneously (with the largest probability). Remarkably, some scholars have observed such an income distribution in some democratic countries, e.g. the USA. This result implies that the hypothesis of equal probability may be suitable only for some "fair" systems (economic or physical). In this sense, non-equilibrium systems may be "unfair", so that the hypothesis of equal probability is unavailable.
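
    The canonical worked example of this argument (illustrative; the paper's own distribution may differ in form) is that maximizing entropy over income densities subject only to a fixed mean income yields the exponential (Boltzmann-Gibbs) distribution, which has been reported empirically for the bulk of household incomes in several countries, including the USA:

```latex
% Maximum entropy with fixed mean income \bar{m} gives
p(m) = \frac{1}{\bar{m}}\, e^{-m/\bar{m}}, \qquad m \ge 0 .
```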

  6. Applications of Some Artificial Intelligence Methods to Satellite Soundings

    NASA Technical Reports Server (NTRS)

    Munteanu, M. J.; Jakubowicz, O.

    1985-01-01

    Hard clustering of temperature profiles and regression temperature retrievals were used, and the method was refined using the probabilities of membership of each pattern vector in each of the clusters derived with discriminant analysis. In hard clustering, the maximum probability is taken and the corresponding cluster is treated as the correct cluster, discarding the rest of the probabilities. In fuzzy partitioned clustering, these probabilities are kept and the final regression retrieval is a weighted combination of the regression retrievals of several clusters. This method was used in the clustering of brightness temperatures, where the purpose was to predict tropopause height. A further refinement is the division of temperature profiles into three major regions for classification purposes. The results are summarized in tables in which total r.m.s. errors are displayed. An approach based on fuzzy logic, which is intimately related to artificial intelligence methods, is recommended.
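
    A minimal sketch of the difference between the two retrieval schemes (illustrative names and numbers, not the study's code):

```python
import numpy as np

def fuzzy_retrieval(memberships, cluster_predictions):
    """Membership-probability-weighted combination of per-cluster retrievals."""
    m = np.asarray(memberships, dtype=float)
    m /= m.sum()                                    # normalize to probabilities
    return float(m @ np.asarray(cluster_predictions))

def hard_retrieval(memberships, cluster_predictions):
    """Hard clustering keeps only the argmax cluster's retrieval."""
    return cluster_predictions[int(np.argmax(memberships))]

# Example: three clusters, tropopause-height retrievals in km (made up)
print(fuzzy_retrieval([0.5, 0.3, 0.2], [11.0, 13.5, 16.0]))   # 12.75
print(hard_retrieval([0.5, 0.3, 0.2], [11.0, 13.5, 16.0]))    # 11.0
```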

  7. Shannon entropy and avoided crossings in closed and open quantum billiards

    NASA Astrophysics Data System (ADS)

    Park, Kyu-Won; Moon, Songky; Shin, Younghoon; Kim, Jinuk; Jeong, Kabgyun; An, Kyungwon

    2018-06-01

    The relation between Shannon entropy and avoided crossings is investigated in dielectric microcavities. The Shannon entropy of the probability density for eigenfunctions in an open elliptic billiard as well as a closed quadrupole billiard increases as the center of the avoided crossing is approached. These results are opposite to those of atomic physics for electrons. It is found that the collective Lamb shift of the open quantum system and the symmetry breaking in the closed chaotic quantum system have equivalent effects on the Shannon entropy.
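
    For reference, the Shannon entropy of a discretized eigenfunction probability density can be computed as follows (a generic sketch; the paper's grid and normalization conventions may differ):

```python
import numpy as np

def shannon_entropy(psi):
    """Shannon entropy of the probability density |psi|^2 on a grid."""
    p = np.abs(psi) ** 2
    p = p / p.sum()              # normalize to a discrete distribution
    p = p[p > 0]                 # drop zeros to avoid log(0)
    return float(-(p * np.log(p)).sum())
```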

  8. Disease-causing mutations C277R and C277Y modify gating of human ClC-1 chloride channels in myotonia congenita

    PubMed Central

    Weinberger, Sebastian; Wojciechowski, Daniel; Sternberg, Damien; Lehmann-Horn, Frank; Jurkat-Rott, Karin; Becher, Toni; Begemann, Birgit; Fahlke, Christoph; Fischer, Martin

    2012-01-01

    Myotonia congenita is a genetic condition that is caused by mutations in the muscle chloride channel gene CLCN1 and characterized by delayed muscle relaxation and muscle stiffness. We here investigate the functional consequences of two novel disease-causing missense mutations, C277R and C277Y, using heterologous expression in HEK293T cells and patch clamp recording. Both mutations reduce macroscopic anion currents in transfected cells. Since hClC-1 is a double-barrelled anion channel, this reduction in current amplitude might be caused by altered gating of individual protopores or by altered joint opening and closing of both protopores. We used non-stationary noise analysis and single channel recordings to separate the mutants' effects on individual and common gating processes. We found that C277Y inverts the voltage dependence and reduces the open probabilities of the protopore and common gates, decreasing the absolute open probabilities of homodimeric channels to values below 3%. In heterodimeric channels, C277R and C277Y also reduce open probabilities and shift the common gate activation curve towards positive potentials. Moreover, C277Y modifies the pore properties of hClC-1. It reduces single protopore current amplitudes to about two-thirds of wild-type values, and inverts the anion permeability sequence to I− = NO3− > Br− > Cl−. Our findings predict a dramatic reduction of the muscle fibre resting chloride conductance and thus fully explain the disease-causing effects of mutations C277R and C277Y. Moreover, they provide additional insights into the function of C277, a residue recently implicated in common gating of ClC channels. PMID:22641783
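
    Because hClC-1 is double-barrelled, the absolute open probability of a protopore is commonly taken as the product of the common-gate and protopore-gate open probabilities; the numbers below are illustrative only, chosen to be consistent with the sub-3% figure in the abstract:

```latex
P_o^{\text{abs}} = P_o^{\text{common}} \times P_o^{\text{proto}},
\qquad \text{e.g.}\quad 0.20 \times 0.14 = 0.028 < 3\% .
```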

  9. The influence of maximum magnitude on seismic-hazard estimates in the Central and Eastern United States

    USGS Publications Warehouse

    Mueller, C.S.

    2010-01-01

    I analyze the sensitivity of seismic-hazard estimates in the central and eastern United States (CEUS) to maximum magnitude (mmax) by exercising the U.S. Geological Survey (USGS) probabilistic hazard model with several mmax alternatives. Seismicity-based sources control the hazard in most of the CEUS, but data seldom provide an objective basis for estimating mmax. The USGS uses preferred mmax values of moment magnitude 7.0 and 7.5 for the CEUS craton and extended margin, respectively, derived from data in stable continental regions worldwide. Other approaches, for example analysis of local seismicity or judgment about a source's seismogenic potential, often lead to much smaller mmax. Alternative models span the mmax ranges from the 1980s Electric Power Research Institute/Seismicity Owners Group (EPRI/SOG) analysis. Results are presented as hazard ratios relative to the USGS national seismic hazard maps. One alternative model specifies mmax equal to moment magnitude 5.0 and 5.5 for the craton and margin, respectively, similar to EPRI/SOG for some sources. For 2% probability of exceedance in 50 years (about 0.0004 annual probability), the strong mmax truncation produces hazard ratios equal to 0.35-0.60 for 0.2-sec spectral acceleration, and 0.15-0.35 for 1.0-sec spectral acceleration. Hazard-controlling earthquakes interact with mmax in complex ways. There is a relatively weak dependence on probability level: hazard ratios increase 0-15% for 0.002 annual exceedance probability and decrease 5-25% for 0.00001 annual exceedance probability. Although differences at some sites are tempered when faults are added, mmax clearly accounts for some of the discrepancies that are seen in comparisons between USGS-based and EPRI/SOG-based hazard results.

  10. Studies on the synthesis of isotopes of superheavy element Lv (Z = 116)

    NASA Astrophysics Data System (ADS)

    Santhosh, K. P.; Safoora, V.

    2017-11-01

    The probable projectile-target combinations for the synthesis of the superheavy nucleus 296Lv found in the cold valley of 296Lv have been identified by studying the interaction barrier of the colliding nuclei, the probability of compound nucleus formation, P_{CN}, and the survival probability, W_{sur}. At energies near and above the Coulomb barrier, the capture, fusion and evaporation residue (ER) cross sections for the probable combinations for the hot and cold fusion reactions are systematically investigated. Considering the intensities of the projectile beams, the availabilities of the targets and the half-lives of the colliding nuclei, the combination 48Ca+248Cm is found to be the most probable projectile-target pair for the synthesis of 296Lv. The calculated maximum values of the 2n, 3n, 4n and 5n channel cross sections for the reaction 48Ca+248Cm are 0.599 pb, 5.957 pb, 4.805 pb, and 0.065 pb, respectively. Moreover, the production cross sections for the synthesis of the isotopes 291-295,298Lv using a 48Ca projectile on 243-247,250Cm targets are calculated. Among these reactions, the reactions 48Ca+247Cm → 295Lv and 48Ca+250Cm → 298Lv have the maximum production cross sections in the 3n (10.697 pb) and 4n (12.006 pb) channels, respectively. Our studies of the SHE Lv using the combinations 48Ca+245Cm → 293Lv and 48Ca+248Cm → 296Lv are compared with available experimental data and with other theoretical studies. Our results agree with the experimental data, and we hope that these studies will guide future experiments to synthesize the isotopes of Lv.
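
    The quantities named in this abstract combine in the standard factorized expression for the evaporation-residue excitation function (a textbook relation, not a formula quoted from the paper):

```latex
\sigma_{\mathrm{ER}}^{xn}(E) \;=\; \sigma_{\mathrm{capture}}(E)\; P_{\mathrm{CN}}(E)\; W_{\mathrm{sur}}^{xn}(E) .
```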

  11. Maximum mouth opening and trismus in 143 patients treated for oral cancer: a 1-year prospective study.

    PubMed

    Wetzels, Jan-Willem G H; Merkx, Matthias A W; de Haan, Anton F J; Koole, Ron; Speksnijder, Caroline M

    2014-12-01

    Patients with oral cancer can develop restricted mouth opening (trismus) because of the oncologic treatment. Maximum mouth opening (MMO) was measured in 143 patients shortly before treatment and 0, 6, and 12 months posttreatment, and the results were analyzed using a linear mixed-effects model. In every patient, MMO decreased after treatment. The patients who underwent surgery alone recovered partially by 6 and 12 months after treatment, whereas the patients who received both surgery and radiotherapy or primary radiotherapy did not recover. Tumor location, tumor size, and alcohol consumption had independent effects on MMO. Having trismus (MMO <35 mm) 1 year after treatment was associated most strongly with pretreatment MMO, receiving both surgery and radiotherapy, and maxillary or mandibular tumor involvement. Postoperative radiotherapy and maxillary or mandibular tumor involvement are the risk factors contributing most to decreasing MMO and the subsequent development of trismus after oral cancer treatment. © 2014 Wiley Periodicals, Inc.

  12. 75 FR 9592 - FPL Energy Maine Hydro, LLC; Notice of Intent To Prepare an Environmental Document and Soliciting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-03

    ... a probable maximum flood, and modification of the existing earthen embankments for improved slope stability and safety. The proposed remedial measures would not alter the basic footprint of the existing dam...

  13. Estimation of Multinomial Probabilities.

    DTIC Science & Technology

    1978-11-01

    …1971) and Alam (1978) have shown that the maximum likelihood estimator is admissible with respect to the quadratic loss. Steinhaus (1957) and Trybula… Johnson, B. Mck. (1971). On admissible estimators for certain fixed sample binomial populations. Ann. Math. Statist. 42, 1579-1587. Steinhaus, H…

  14. [Lifestyle and probability of dementia in the elderly].

    PubMed

    León-Ortiz, Pablo; Ruiz-Flores, Manuel Leonardo; Ramírez-Bermúdez, Jesús; Sosa-Ortiz, Ana Luisa

    2013-01-01

    There is evidence of a relationship between physical and cognitive activity and the development of dementia, although this hypothesis has not been tested in the Mexican population. Our objective was to analyze the association between increased participation in physical and cognitive activities and the probability of having dementia, using a Mexican open population sample. We conducted a cross-sectional survey in an open Mexican population of residents of urban and rural areas aged 65 and older; we performed cognitive assessments to identify subjects with dementia, as well as questionnaires to assess the level of participation in physical and cognitive activities. We performed a binary logistic regression analysis to establish the association between participation and the probability of having dementia. We included 2003 subjects, 180 with a diagnosis of dementia. Subjects with dementia were older, had less education and had a higher prevalence of some chronic diseases. Low participation in cognitive activities was associated with a higher probability of having dementia. Patients with dementia had significantly lower scores on physical activity scales. This study supports the hypothesis of a relationship between low cognitive and physical activity and the presentation of dementia.

  15. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.
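
    For reference (the standard definition, not specific to this paper), the Rényi entropy of order α of a discrete distribution p is

```latex
H_\alpha(p) = \frac{1}{1-\alpha}\,\ln \sum_i p_i^{\alpha},
\qquad \alpha > 0,\ \alpha \neq 1,
```

    which recovers the Shannon entropy −Σ p_i ln p_i in the limit α → 1.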

  16. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originated in the control theory. The maximum of the Rényi entropy principle is analysed for discrete and continuous cases, and both a discrete random variable and probability density function (PDF) are used. We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  17. Exact computation of the maximum-entropy potential of spiking neural-network models.

    PubMed

    Cofré, R; Cessac, B

    2014-05-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
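
    The fitted maximum-entropy models referred to here take the standard Gibbs form (a textbook statement; the paper's "potential" generalizes it to spatio-temporal spike constraints):

```latex
P(\omega) = \frac{1}{Z}\exp\!\Big(\sum_k \lambda_k f_k(\omega)\Big),
\qquad
Z = \sum_{\omega} \exp\!\Big(\sum_k \lambda_k f_k(\omega)\Big),
```

    where the f_k are the constrained observables (e.g., firing rates and pairwise correlations) and the λ_k are the fitted parameters.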

  18. Maximum crown area equation for open-grown bur oak

    Treesearch

    M.C. Demchik; S.M. Virden; Z.L. Buchanan; A.M. Johnson

    2017-01-01

    Bur oak (Quercus macrocarpa Michx.) is a classic savanna species with a range that covers much of the eastern United States. Because savannas are an endangered habitat in North America, significant restoration efforts are in progress across much of the range of bur oak. For open sites being planted with bur oaks as well as fully stocked sites that...

  19. An Open-Label Trial of Escitalopram in Pervasive Developmental Disorders.

    ERIC Educational Resources Information Center

    Owley, Thomas; Walton, Laura; Salt, Jeff; Guter, Stephen J., Jr.; Winnega, Marrea; Leventhal, Bennett L.; Cook, Edwin H., Jr.

    2005-01-01

    Objective: To assess the effect of escitalopram in the treatment of pervasive developmental disorders (PDDs). Method: This 10-week study had a forced titration, open-label design. Twenty-eight subjects (mean age 125.1 [+ or -] 33.5 months) with a PDD received escitalopram at a dose that increased weekly to a maximum dose of 20 mg as tolerated. The…

  20. Application of at-site peak-streamflow frequency analyses for very low annual exceedance probabilities

    USGS Publications Warehouse

    Asquith, William H.; Kiang, Julie E.; Cohn, Timothy A.

    2017-07-17

    The U.S. Geological Survey (USGS), in cooperation with the U.S. Nuclear Regulatory Commission, has investigated statistical methods for probabilistic flood hazard assessment to provide guidance on very low annual exceedance probability (AEP) estimation of peak-streamflow frequency and the quantification of corresponding uncertainties using streamgage-specific data. The term “very low AEP” implies exceptionally rare events defined as those having AEPs less than about 0.001 (or 1 × 10–3 in scientific notation or for brevity 10–3). Such low AEPs are of great interest to those involved with peak-streamflow frequency analyses for critical infrastructure, such as nuclear power plants. Flood frequency analyses at streamgages are most commonly based on annual instantaneous peak streamflow data and a probability distribution fit to these data. The fitted distribution provides a means to extrapolate to very low AEPs. Within the United States, the Pearson type III probability distribution, when fit to the base-10 logarithms of streamflow, is widely used, but other distribution choices exist. The USGS-PeakFQ software, implementing the Pearson type III within the Federal agency guidelines of Bulletin 17B (method of moments) and updates to the expected moments algorithm (EMA), was specially adapted for an “Extended Output” user option to provide estimates at selected AEPs from 10–3 to 10–6. Parameter estimation methods, in addition to product moments and EMA, include L-moments, maximum likelihood, and maximum product of spacings (maximum spacing estimation). This study comprehensively investigates multiple distributions and parameter estimation methods for two USGS streamgages (01400500 Raritan River at Manville, New Jersey, and 01638500 Potomac River at Point of Rocks, Maryland). The results of this study specifically involve the four methods for parameter estimation and up to nine probability distributions, including the generalized extreme value, generalized log-normal, generalized Pareto, and Weibull. Uncertainties in streamflow estimates for corresponding AEP are depicted and quantified as two primary forms: quantile (aleatoric [random sampling] uncertainty) and distribution-choice (epistemic [model] uncertainty). Sampling uncertainties of a given distribution are relatively straightforward to compute from analytical or Monte Carlo-based approaches. Distribution-choice uncertainty stems from choices of potentially applicable probability distributions for which divergence among the choices increases as AEP decreases. Conventional goodness-of-fit statistics, such as Cramér-von Mises, and L-moment ratio diagrams are demonstrated in order to hone distribution choice. The results generally show that distribution choice uncertainty is larger than sampling uncertainty for very low AEP values.
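
    A minimal sketch of the basic extrapolation idea, assuming annual peaks in a NumPy array: fit a Pearson type III distribution to the base-10 logarithms of the peaks by simple method of moments and read off the quantile at the desired AEP. This is illustrative only; USGS-PeakFQ implements the far more careful Bulletin 17B/EMA machinery.

```python
import numpy as np
from scipy import stats

def peak_flow_quantile(annual_peaks, aep):
    """Flow with annual exceedance probability `aep` from a log-Pearson III
    fit by simple method of moments (illustrative, not PeakFQ)."""
    logq = np.log10(np.asarray(annual_peaks, dtype=float))
    mean, std = logq.mean(), logq.std(ddof=1)
    skew = stats.skew(logq, bias=False)
    # Pearson III quantile of the log-flows at non-exceedance level 1 - aep
    log_quantile = stats.pearson3.ppf(1.0 - aep, skew, loc=mean, scale=std)
    return 10.0 ** log_quantile

# e.g. peak_flow_quantile(peaks, 1e-3) extrapolates to the 10^-3 AEP flood.
```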

  1. Anomalous night-time peaks in diurnal variations of NmF2 close to the geomagnetic equator: a statistical study

    NASA Astrophysics Data System (ADS)

    Pavlov, Anatoli

    We present a study of anomalous night-time NmF2 peaks, ANNPs, observed by the La Paz, Natal, Djibouti, Kodaikanal, Madras, Manila, Talara, and Huancayo-Jicamarca ionosonde stations close to the geomagnetic equator. It is shown that the probabilities of occurrence of the first and second ANNPs depend on the geomagnetic longitude, and there is a longitude sector close to 110° geomagnetic longitude where the first and second ANNPs occur less frequently in comparison with the longitude regions located close to and below about 34° geomagnetic longitude and close to and above about 144° geomagnetic longitude. The observed frequencies of occurrence of the ANNPs increase with increasing solar activity, except for the Djibouti and Kodaikanal ionosonde stations, where the probability of the first ANNP occurrence is found to decrease with increasing solar activity from low (F10.7<100) to moderate (100≤F10.7≤170) solar activity, and except for the Natal ionosonde station, where the frequencies of occurrence of the first and second ANNPs decrease with increasing solar activity from moderate to high (F10.7>170) solar activity. We found that the occurrence probabilities of ANNPs during geomagnetically disturbed conditions are greater than those during geomagnetically quiet conditions. The calculated values of these probabilities have pronounced maximums in June (La Paz and Talara) and in July (Huancayo-Jicamarca) at the ionosonde stations located in the southern geographic hemisphere. The first ANNP is least frequently observed in January (La Paz, Talara, and Huancayo-Jicamarca), and the second ANNP is least frequently measured in January (La Paz and Huancayo-Jicamarca) and in December (Talara). In the northern geographic hemisphere, the studied probabilities are lowest in June (Djibouti and Madras), in July (Manila), and in April (Kodaikanal). The maximums in the probabilities of occurrence of the first and second ANNPs are found to be in September (Djibouti), in October (Madras), in November (Manila), and in December (Kodaikanal).

  2. A Comparison of Wolf's Reconstructed Record of Annual Sunspot Number with Schwabe's Observed Record of 'Clusters of Spots' for the Interval of 1826-1868

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    1997-01-01

    On the basis of a comparison of Wolf's reconstructed record of yearly averages of sunspot number against Schwabe's observations of yearly counts of 'clusters of spots' (i.e., the yearly number of newly appearing sunspot groups) during the interval of 1826-1868, one infers that Wolf probably misplaced and underestimated the maximum amplitude for cycle 7. In particular, Schwabe's data suggest that the maximum amplitude for cycle 7 occurred in 1828 rather than in 1830 and that it measured about 86.3 (+/-13.9; i.e., the 90% confidence level) rather than 70.4. If true, then, the ascent and descent durations for cycle 7 should be 5 years each instead of 7 and 3 years, respectively. Likewise, on the basis of the same comparison, one infers that the maximums for cycles 8 and 9, occurring, respectively, in 1837 and 1848, were of comparable size (approximately 130), although, quite possibly, the one for cycle 8 may have been smaller. Lastly, presuming the continued action of the 'odd-even' effect (i.e., the odd-numbered following cycle of a Hale even-odd cycle pair having a maximum amplitude that is of comparable or larger size than the even-numbered leading cycle) during the earlier pre-modern era of cycles 6-9, one infers that Wolf's estimate for the size of cycle 6 probably is too low.

  3. Estimating temporary emigration and breeding proportions using capture-recapture data with Pollock's robust design

    USGS Publications Warehouse

    Kendall, W.L.; Nichols, J.D.; Hines, J.E.

    1997-01-01

    Statistical inference for capture-recapture studies of open animal populations typically relies on the assumption that all emigration from the studied population is permanent. However, there are many instances in which this assumption is unlikely to be met. We define two general models for the process of temporary emigration, completely random and Markovian. We then consider effects of these two types of temporary emigration on Jolly-Seber (Seber 1982) estimators and on estimators arising from the full-likelihood approach of Kendall et al. (1995) to robust design data. Capture-recapture data arising from Pollock's (1982) robust design provide the basis for obtaining unbiased estimates of demographic parameters in the presence of temporary emigration and for estimating the probability of temporary emigration. We present a likelihood-based approach to dealing with temporary emigration that permits estimation under different models of temporary emigration and yields tests for completely random and Markovian emigration. In addition, we use the relationship between capture probability estimates based on closed and open models under completely random temporary emigration to derive three ad hoc estimators for the probability of temporary emigration, two of which should be especially useful in situations where capture probabilities are heterogeneous among individual animals. Ad hoc and full-likelihood estimators are illustrated for small mammal capture-recapture data sets. We believe that these models and estimators will be useful for testing hypotheses about the process of temporary emigration, for estimating demographic parameters in the presence of temporary emigration, and for estimating probabilities of temporary emigration. These latter estimates are frequently of ecological interest as indicators of animal movement and, in some sampling situations, as direct estimates of breeding probabilities and proportions.
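
    One of the ad hoc estimators alluded to follows directly from the relationship between open- and closed-model capture probabilities under completely random temporary emigration (my paraphrase of the idea in hedged notation, not the authors' exact expression):

```latex
p^{\mathrm{open}}_t = (1-\gamma_t)\, p^{\mathrm{closed}}_t
\quad\Longrightarrow\quad
\hat{\gamma}_t = 1 - \frac{\hat{p}^{\mathrm{open}}_t}{\hat{p}^{\mathrm{closed}}_t},
```

    where γ_t is the probability that an animal is a temporary emigrant (and hence unavailable for capture) in primary period t.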

  4. Does the probability of developing ocular trauma-related visual deficiency differ between genders?

    PubMed

    Blanco-Hernández, Dulce Milagros Razo; Valencia-Aguirre, Jessica Daniela; Lima-Gómez, Virgilio

    2011-01-01

    Ocular trauma affects males more often than females, but the gender-specific impact of this condition on visual prognosis is unknown. We undertook this study to compare the probability of developing ocular trauma-related visual deficiency between genders, as estimated by the ocular trauma score (OTS). We designed an observational, retrospective, comparative, cross-sectional, open-label study. Female patients aged ≥6 years with ocular trauma were included and matched by age and ocular wall status with male patients at a 1:2 female-to-male ratio. Initial trauma features and the probability of developing visual deficiency (best corrected visual acuity <20/40) 6 months after the injury, as estimated by the OTS, were compared between genders. The proportion and 95% confidence intervals (95% CI) of visual deficiency 6 months after the injury were estimated. Ocular trauma features and the probability of developing visual deficiency were compared between genders (χ(2) and Fisher's exact test); a p value <0.05 was considered significant. Included were 399 eyes (133 from females and 266 from males). Mean age of patients was 25.7 ± 14.6 years. Statistical differences existed in the proportion of zone III injuries in closed globe trauma (p = 0.01) and of types A (p = 0.04) and B (p = 0.02) in open globe trauma. The distribution of the OTS categories was similar for both genders (category 5: p = 0.9); the probability of developing visual deficiency was 32.6% (95% CI = 24.6 to 40.5) in females and 33.2% (95% CI = 27.6 to 38.9) in males (p = 0.9). The probability of developing ocular trauma-related visual deficiency was similar for both genders; the same standard is required.

  5. Near-equinox spectro-imaging of Uranus aurorae sampling two planetary rotations

    NASA Astrophysics Data System (ADS)

    Lamy, Laurent

    2012-10-01

    A quarter of a century after their discovery by Voyager 2 in 1986, HST successfully re-detected Uranus aurorae in 2011 {and also in 1998}, providing the first images of these emissions. Overall, they differ from other well-known planetary aurorae, and their characteristics vary on very different timescales, from minutes to decades. These results have provided the first insights into the poorly known Uranian magnetosphere in 26 years, and opened a rich field of investigation, together with a set of open questions. In addition, while solstice conditions prevailed in 1986, Uranus lay close to equinox in 2011, with the S and N magnetic poles alternately facing the Sun every half rotation. This unique configuration of an asymmetric magnetosphere, extremely variable over a single rotation, had never been investigated before and deserved to be fully analyzed. New observations of the Uranian aurorae are therefore vital for our understanding of planetary magnetospheres, and HST is the only tool able to remotely investigate these emissions. We thus propose to re-observe Uranus with STIS spectro-imaging at the next opposition {29 Sept. 2012} over two planetary rotations, in order to enlarge the set of positive detections and to sample the rotational dynamics of auroral processes and the magnetosphere/solar wind interaction. To increase the probability of detecting any possible auroral brightening triggered by magnetospheric compressions, observations will be scheduled in advance during active solar wind conditions at Uranus, near the maximum of solar cycle 24. Additional objectives will include the characterization of the extended neutral corona and the spectral response of atmospheric species.

  6. Discovery of a protein phosphatase activity encoded in the genome of bacteriophage lambda. Probable identity with open reading frame 221.

    PubMed

    Cohen, P T; Cohen, P

    1989-06-15

    Infection of Escherichia coli with phage lambda gt10 resulted in the appearance of a protein phosphatase with activity towards 32P-labelled casein. Activity reached a maximum near the point of cell lysis and declined thereafter. The phosphatase was stimulated 30-fold by Mn2+, while Mg2+ and Ca2+ were much less effective. Activity was unaffected by inhibitors 1 and 2, okadaic acid, calmodulin and trifluoperazine, distinguishing it from the major serine/threonine-specific protein phosphatases of eukaryotic cells. The lambda phosphatase was also capable of dephosphorylating other substrates in the presence of Mn2+, although activity towards 32P-labelled phosphorylase was 10-fold lower, and activity towards phosphorylase kinase and glycogen synthase 25-50-fold lower, than with casein. No casein phosphatase activity was present in either uninfected cells, or in E. coli infected with phage lambda gt11. Since lambda gt11 lacks part of the open reading frame (orf) 221, previously shown to encode a protein with sequence similarity to protein phosphatase-1 and protein phosphatase-2A of mammalian cells [Cohen, Collins, Coulson, Berndt & da Cruz e Silva (1988) Gene 69, 131-134], the results indicate that ORF221 is the protein phosphatase detected in cells infected with lambda gt10. Comparison of the sequence of ORF221 with the mammalian protein phosphatases defines three highly conserved regions which are likely to be essential for function. The first of these is deleted in lambda gt11.

  7. Connexin and Pannexin hemichannels are regulated by redox potential

    PubMed Central

    Retamal, Mauricio A.

    2014-01-01

    Connexins (Cxs) and Pannexins (Panxs) are two unrelated protein families that share the property of forming hemichannels at the plasma membrane. There are 21 genes coding for different Cx-based proteins and only 3 for Panxs. Under physiological conditions, these hemichannels (Cxs and Panxs) present a low open probability, but when open, they allow the release of signaling molecules to the extracellular space. However, under pathological conditions, these hemichannels increase their open probability, inducing an important loss of metabolites and ionic imbalance, which in turn induce a massive entry of Ca2+ into the cell. It is now well recognized that Cx- and Panx-based channels play an important role in several diseases, and in many cases this is associated with aberrant hemichannel opening. Hemichannel opening and closing are controlled by a plethora of signals, including changes in plasma membrane voltage, protein-protein interactions, and several posttranslational modifications, including protein cleavage, phosphorylation, glycosylation, hydroxylation and S-nitrosylation, among others. In particular, it has recently been shown that the cellular redox status modulates the opening/closing and permeability of at least Cx43, Cx46, and Panx1 hemichannels. Thus, for example, the gaseous transmitter nitric oxide (NO) can induce the S-nitrosylation of these proteins, modulating in turn several of their properties. The reason is that the redox status of a cell is fundamental to setting its response to the environment and also plays an important role in several pathologies. In this review, I will discuss how NO and other molecules associated with redox signaling modulate the properties of Cx and Panx hemichannels. PMID:24611056

  8. Quartz c-axis orientation patterns in fracture cement as a measure of fracture opening rate and a validation tool for fracture pattern models

    DOE PAGES

    Ukar, Estibalitz; Laubach, Stephen E.; Marrett, Randall

    2016-03-09

    Here, we evaluate a published model for crystal growth patterns in quartz cement in sandstone fractures by comparing crystal fracture-spanning predictions to quartz c-axis orientation distributions measured by electron backscatter diffraction (EBSD) of spanning quartz deposits. Samples from eight subvertical opening-mode fractures in four sandstone formations, the Jurassic–Cretaceous Nikanassin Formation, northwestern Alberta Foothills (Canada), Cretaceous Mesaverde Group (USA; Cozzette Sandstone Member of the Iles Formation), Piceance Basin, Colorado (USA), and upper Jurassic–lower Cretaceous Cotton Valley Group (Taylor sandstone) and overlying Travis Peak Formation, east Texas, have similar quartzose composition and grain size but contain fractures with different temperature histories and opening rates based on fluid inclusion assemblages and burial history. Spherical statistical analysis shows that, in agreement with model predictions, bridging crystals have a preferred orientation with c-axis orientations at a high angle to fracture walls. The second form of validation is for spanning potential that depends on the size of cut substrate grains. Using measured cut substrate grain sizes and c-axis orientations of spanning bridges, we calculated the required orientation for the smallest cut grain to span the maximum gap size and the required orientation of the crystal with the least spanning potential to form overgrowths that span across maximum measured gap sizes. We find that within a 10° error all spanning crystals conform to model predictions. Using crystals with the lowest spanning potential based on crystallographic orientation (c-axis parallel to fracture wall) and a temperature range for fracture opening measured from fluid inclusion assemblages, we calculate maximum fracture opening rates that allow crystals to span. These rates are comparable to those derived independently from fracture temperature histories based on burial history and multiple sequential fluid inclusion assemblages. Results support the R. Lander and S. Laubach model, which predicts that for quartz deposited synchronously with fracture opening, spanning potential, or likelihood of quartz deposits that are thick enough to span between fracture walls, depends on temperature history, fracture opening rate, size of opening increments, and size, mineralogy, and crystallographic orientation of substrates in the fracture wall (transected grains). Results suggest that EBSD maps, which can be more rapidly acquired than measurement of tens to hundreds of fluid inclusion assemblages, can provide a useful measure of relative opening rates within populations of quartz-filled fractures formed under sedimentary basin conditions. Such data are useful for evaluating fracture pattern development models.

  10. Arthroscopy for treating temporomandibular joint disorders.

    PubMed

    Currie, Roger

    2011-01-01

    The Cochrane Oral Health Group Trials Register, the Cochrane Central Register of Controlled Trials (CENTRAL), Medline, Embase, Lilacs, the Allied and Complementary Medicine Database (AMED) and CINAHL databases were searched. In addition, the reference lists of the included articles were checked and 14 journals were hand searched. Randomised controlled clinical trials (RCTs) of arthroscopy for treating TMDs were included. There were no restrictions regarding the language or date of publication. Two review authors independently extracted data, and three review authors independently assessed the risk of bias of included trials. The authors of the selected articles were contacted for additional information. Pooling of trials was only attempted if at least two trials with comparable protocols, the same conditions and similar outcome measurements were available. Statistical analysis was performed in accordance with the Cochrane Collaboration guidelines. Seven RCTs (n = 349) met the inclusion criteria. All the studies were either at high or unclear risk of bias. Pain was evaluated after six months in two studies. No statistically significant differences were found between the arthroscopy and nonsurgical groups (standardised mean difference (SMD) = 0.004; 95% confidence interval (CI) -0.46 to 0.55, P = 0.81). Two studies analysed pain 12 months after surgery (arthroscopy and arthrocentesis) in 81 patients. No statistically significant differences were found (mean difference (MD) = 0.10; 95% CI -1.46 to 1.66, P = 0.90). Three studies analysed the same outcome in patients who had undergone arthroscopic surgery or open surgery, and a statistically significant difference was found after 12 months (SMD = 0.45; 95% CI 0.01 to 0.89, P = 0.05) in favour of open surgery. Two studies compared six different clinical outcomes (interincisal opening over 35 mm; maximum protrusion over 5 mm; click; crepitation; tenderness on palpation in the TMJ and the jaw muscles) 12 months after arthroscopy and open surgery. The outcome measures did not present statistically significant differences (odds ratio (OR) = 1.00; 95% CI 0.45 to 2.21, P = 1.00). Two studies compared the maximum interincisal opening after 12 months of postsurgical follow-up. A statistically significant difference in favour of the arthroscopy group was observed (MD = 5.28; 95% CI 3.46 to 7.10, P < 0.0001). Two studies compared mandibular function after 12 months of follow-up, with 40 patients evaluated. The outcome measure was mandibular functionality (MFIQ); the difference was not statistically significant (MD = 1.58; 95% CI -0.78 to 3.94, P = 0.19). Both arthroscopy and nonsurgical treatments reduced pain after six months. When compared with arthroscopy, open surgery was more effective at reducing pain after 12 months. Nevertheless, there were no differences in mandibular functionality or in other outcomes in clinical evaluations. Arthroscopy led to greater improvement in maximum interincisal opening after 12 months than arthrocentesis; however, there was no difference in pain.
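
    For orientation, the following is a minimal Python sketch of the inverse-variance fixed-effect pooling that underlies pooled mean-difference estimates of this kind; the trial estimates and standard errors below are hypothetical, not the review's data.

      # Minimal sketch of inverse-variance fixed-effect pooling of mean
      # differences (MD) from two comparable trials (hypothetical numbers).
      import math

      def pooled_md(estimates, std_errors):
          """Combine per-trial mean differences with inverse-variance weights."""
          weights = [1.0 / se**2 for se in std_errors]
          md = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
          se = math.sqrt(1.0 / sum(weights))
          ci = (md - 1.96 * se, md + 1.96 * se)   # 95% confidence interval
          return md, ci

      md, ci = pooled_md([4.8, 5.6], [1.2, 1.5])  # hypothetical trial results
      print(f"pooled MD = {md:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")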

  11. Seismic-hazard maps and time histories for the commonwealth of Kentucky.

    DOT National Transportation Integrated Search

    2008-06-01

    The ground-motion hazard maps and time histories for three earthquake scenarios, expected earthquakes, probable earthquakes, and maximum credible earthquakes on the free surface in hard rock (shear-wave velocity >1,500 m/s), were derived using the de...

  12. Bayesian explorations of fault slip evolution over the earthquake cycle

    NASA Astrophysics Data System (ADS)

    Duputel, Z.; Jolivet, R.; Benoit, A.; Gombert, B.

    2017-12-01

    The ever-increasing amount of geophysical data continuously opens new perspectives on fundamental aspects of the seismogenic behavior of active faults. In this context, the recent fleet of SAR satellites, including Sentinel-1 and COSMO-SkyMed, permits the use of InSAR for time-dependent slip modeling with unprecedented resolution in time and space. However, existing time-dependent slip models rely on spatial smoothing regularization schemes, which can produce unrealistically smooth slip distributions. In addition, these models usually do not include uncertainty estimates, which reduces the utility of the resulting slip models. Here, we develop an entirely new approach to derive probabilistic time-dependent slip models. This Markov chain Monte Carlo method involves a series of transitional steps to predict and update posterior probability density functions (PDFs) of slip as a function of time. We assess the viability of our approach using various slow-slip event scenarios. Using a dense set of SAR images, we also apply this method to quantify the spatial distribution and temporal evolution of slip along a creeping segment of the North Anatolian Fault. This allows us to track a shallow aseismic slip transient lasting about a month with a maximum slip of about 2 cm.
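
    As a toy illustration of the posterior-sampling machinery (far simpler than the paper's transitional MCMC over full slip distributions), here is a random-walk Metropolis sampler for a single slip value; the Green's function value, noise level, prior bounds and data are all invented.

      # Random-walk Metropolis for the posterior of one slip value s given
      # synthetic displacements d = g*s + noise. All numbers are hypothetical.
      import random, math

      g, sigma = 0.8, 0.3                       # assumed Green's fn, data noise
      data = [g * 2.0 + random.gauss(0, sigma) for _ in range(50)]  # true slip 2.0

      def log_post(s):
          if not 0.0 <= s <= 10.0:              # uniform prior on [0, 10]
              return -math.inf
          return -sum((d - g * s) ** 2 for d in data) / (2 * sigma ** 2)

      s, samples = 5.0, []
      for _ in range(20000):
          prop = s + random.gauss(0, 0.1)       # random-walk proposal
          if math.log(random.random()) < log_post(prop) - log_post(s):
              s = prop
          samples.append(s)
      print("posterior mean slip:", sum(samples[5000:]) / len(samples[5000:]))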

  13. Biparametric complexities and generalized Planck radiation law

    NASA Astrophysics Data System (ADS)

    Puertas-Centeno, David; Toranzo, I. V.; Dehesa, J. S.

    2017-12-01

    Complexity theory embodies some of the hardest, most fundamental and most challenging open problems in modern science. The very term 'complexity' is elusive, so the main goal of this theory is to find meaningful quantifiers for it. In fact, we need various measures to take into account the multiple facets of this term. Here, some biparametric Cramér-Rao and Heisenberg-Rényi measures of complexity of continuous probability distributions are defined and discussed. Then, they are applied to blackbody radiation at temperature T in a d-dimensional universe. It is found that these dimensionless quantities do not depend on T nor on any physical constants. So, they have a universal character in the sense that they only depend on the spatial dimensionality. To determine these complexity quantifiers, we have calculated their dispersion (typical deviations) and entropy (Rényi entropies and the generalized Fisher information) constituents. They are found to have a temperature-dependent behavior similar to the celebrated Wien displacement law for the dominant frequency ν_max at which the spectrum reaches its maximum. Moreover, they allow us to gain insight into new aspects of the d-dimensional blackbody spectrum and the quantification of quantum effects associated with space dimensionality.
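
    For reference, the standard one-parameter ingredients behind such quantifiers, the Rényi entropy and the Fisher information of a probability density ρ(x), can be written as follows (the paper's biparametric definitions generalize these and are not reproduced here):

      R_q[\rho] = \frac{1}{1-q}\,\ln\!\int \rho(x)^{q}\,dx,
      \qquad
      F[\rho] = \int \frac{\rho'(x)^{2}}{\rho(x)}\,dx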

  14. Detection and Counting of Orchard Trees from Vhr Images Using a Geometrical-Optical Model and Marked Template Matching

    NASA Astrophysics Data System (ADS)

    Maillard, Philippe; Gomes, Marília F.

    2016-06-01

    This article presents an original algorithm created to detect and count trees in orchards using very high resolution images. The algorithm is based on an adaptation of the "template matching" image processing approach, in which the template is based on a "geometrical-optical" model created from a series of parameters, such as illumination angles, maximum and ambient radiance, and tree size specifications. The algorithm is tested on four images from different regions of the world and different crop types. These images all have < 1 meter spatial resolution and were downloaded from the Google Earth application. Results show that the algorithm is very efficient at detecting and counting trees as long as their spectral and spatial characteristics are relatively constant. For walnut, mango and orange trees, the overall accuracy was clearly above 90%. However, the overall success rate for apple trees fell under 75%. It appears that the openness of the apple tree crown is most probably responsible for this poorer result. The algorithm is fully explained with a step-by-step description. At this stage, the algorithm still requires quite a bit of user interaction. The automatic determination of most of the required parameters is under development.
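
    A minimal sketch of the core template-matching operation (zero-mean cross-correlation); the disc-shaped template below merely stands in for the geometrical-optical crown model, which the paper builds from illumination angles, radiances and tree size.

      # Zero-mean cross-correlation template matching on a synthetic image.
      import numpy as np
      from scipy.signal import fftconvolve

      def ncc_peak(image, template):
          """Return the (row, col) where zero-mean correlation peaks."""
          t = template - template.mean()
          score = fftconvolve(image, t[::-1, ::-1], mode="same")  # correlation
          return np.unravel_index(np.argmax(score), score.shape)

      yy, xx = np.mgrid[:15, :15]
      template = ((yy - 7) ** 2 + (xx - 7) ** 2 < 25).astype(float)  # disc "crown"
      image = np.zeros((100, 100))
      image[40:55, 60:75] += template                 # plant one tree
      print(ncc_peak(image, template))                # peak near (47, 67)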

  15. Integrating Physical and Topographic Information Into a Fuzzy Scheme to Map Flooded Area by SAR

    PubMed Central

    Pierdicca, Nazzareno; Chini, Marco; Pulvirenti, Luca; Macina, Flavia

    2008-01-01

    A flood mapping procedure based on fuzzy set theory has been developed. The method is based on the integration of Synthetic Aperture Radar (SAR) measurements with additional data on the inundated area, such as a land cover map and a digital elevation model (DEM). The information on land cover has allowed us to account for both specular reflection, typical of open water, and double-bounce backscattering, typical of forested and urban areas. The DEM has been exploited to include simple hydraulic considerations on the dependence of inundation probability on surface characteristics. Contextual information has been taken into account too. The proposed algorithm has been tested on a flood that occurred in Italy in November 1994. A pair of ERS-1 images, collected before and after (three days later) the flood, has been used. The results have been compared with the data provided by a ground survey carried out when the flood reached its maximum extension. Despite the temporal mismatch between the survey and the post-inundation SAR image, the comparison has yielded encouraging results, with 87% of the pixels correctly classified as inundated. PMID:27879928
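
    A minimal sketch of the fuzzy-fusion idea: per-pixel memberships derived from SAR backscatter and from elevation above the river are combined; all membership shapes and thresholds below are hypothetical, not those of the published procedure.

      # Fuzzy memberships for "flooded", combining SAR and DEM evidence.
      def s_membership(x, low, high):
          """Piecewise-linear membership decreasing from 1 (x<=low) to 0 (x>=high)."""
          if x <= low:  return 1.0
          if x >= high: return 0.0
          return (high - x) / (high - low)

      def flood_membership(sigma0_db, height_above_river_m):
          mu_sar = s_membership(sigma0_db, -18.0, -10.0)          # dark -> water
          mu_dem = s_membership(height_above_river_m, 1.0, 5.0)   # low ground first
          return 0.5 * (mu_sar + mu_dem)

      print(flood_membership(-16.0, 0.5))  # dark and low-lying pixel -> 0.875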

  16. Intraspecific variation and species coexistence.

    PubMed

    Lichstein, Jeremy W; Dushoff, Jonathan; Levin, Simon A; Pacala, Stephen W

    2007-12-01

    We use a two-species model of plant competition to explore the effect of intraspecific variation on community dynamics. The competitive ability ("performance") of each individual is assigned by an independent random draw from a species-specific probability distribution. If the density of individuals competing for open space is high (e.g., because fecundity is high), species with high maximum (or large variance in) performance are favored, while if density is low, species with high typical (e.g., mean) performance are favored. If there is an interspecific mean-variance performance trade-off, stable coexistence can occur across a limited range of intermediate densities, but the stabilizing effect of this trade-off appears to be weak. In the absence of this trade-off, one species is superior. In this case, intraspecific variation can blur interspecific differences (i.e., shift the dynamics toward what would be expected in the neutral case), but the strength of this effect diminishes as competitor density increases. If density is sufficiently high, the inferior species is driven to extinction just as rapidly as in the case where there is no overlap in performance between species. Intraspecific variation can facilitate coexistence, but this may be relatively unimportant in maintaining diversity in most real communities.
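
    A minimal simulation sketch of the mechanism described above, with each open site won by the single best performance draw among the competing recruits; the distributions are illustrative, not the paper's parameterization.

      # Lottery competition: species 1 has high mean performance, species 2
      # has lower mean but higher variance; density = recruits per species.
      import random

      def winner(density):
          draws = [(random.gauss(1.0, 0.1), 1) for _ in range(density)] + \
                  [(random.gauss(0.8, 0.5), 2) for _ in range(density)]
          return max(draws)[1]        # best single draw wins the site

      for density in (1, 5, 50):
          wins = sum(winner(density) == 2 for _ in range(10000))
          print(density, wins / 10000)  # high-variance species gains at high density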

  17. Enhancement of delayed-rectifier potassium conductance by low concentrations of local anaesthetics in spinal sensory neurones

    PubMed Central

    Olschewski, Andrea; Wolff, Matthias; Bräu, Michael E; Hempelmann, Gunter; Vogel, Werner; Safronov, Boris V

    2002-01-01

    Combining patch-clamp recordings in a slice preparation with the 'entire soma isolation' method, we studied the action of several local anaesthetics on delayed-rectifier K+ currents in spinal dorsal horn neurones. Bupivacaine, lidocaine and mepivacaine at low concentrations (1–100 μM) enhanced the delayed-rectifier K+ current in intact neurones within the spinal cord slice, while exhibiting a partial blocking effect at higher concentrations (>100 μM). In isolated somata, 0.1–10 μM bupivacaine enhanced the delayed-rectifier K+ current by shifting its steady-state activation characteristic and the voltage dependence of the activation time constant to more negative potentials by 10–20 mV. Detailed analysis revealed that bupivacaine also increased the maximum delayed-rectifier K+ conductance by changing the open probability, rather than the unitary conductance, of the channel. It is concluded that local anaesthetics have a dual effect on delayed-rectifier K+ currents, potentiating them at low concentrations and partially suppressing them at high concentrations. The phenomenon observed demonstrates that the action of local anaesthetics during spinal and epidural anaesthesia is complex and not restricted to a suppression of Na+ conductance only. PMID:12055132
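
    A minimal numeric illustration of the reported shift: a Boltzmann steady-state activation curve moved 15 mV toward negative potentials increases activation at a fixed test potential (the parameter values here are invented, not the paper's fits).

      # Boltzmann activation curve g/g_max = 1/(1 + exp(-(V - V_half)/k)).
      import math

      def activation(v_mv, v_half=-20.0, k=8.0):
          return 1.0 / (1.0 + math.exp(-(v_mv - v_half) / k))

      v = -30.0
      print(activation(v))                          # control: ~0.22
      print(activation(v, v_half=-20.0 - 15.0))     # after -15 mV shift: ~0.65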

  18. Pyrolysis and kinetic analyses of a perennial grass (Saccharum ravannae L.) from north-east India: Optimization through response surface methodology and product characterization.

    PubMed

    Saikia, Ruprekha; Baruah, Bhargav; Kalita, Dipankar; Pant, Kamal K; Gogoi, Nirmali; Kataki, Rupam

    2018-04-01

    The objective of the present investigation was to optimize the pyrolysis conditions for an abundantly available, low-cost perennial grass of north-east India, Saccharum ravannae L. (S. ravannae), using response surface methodology based on a central composite design. A kinetic study of the biomass was conducted at four different heating rates of 10, 20, 40 and 60 °C min-1, and the results were interpreted by the Friedman, Kissinger-Akahira-Sunose and Flynn-Wall-Ozawa methods. An average activation energy of 151.45 kJ mol-1 was used to evaluate the reaction mechanism following the Criado master-plot approach. A maximum bio-oil yield of 38.1 wt% was obtained at a pyrolysis temperature of 550 °C, a heating rate of 20 °C min-1 and a nitrogen flow rate of 226 mL min-1. Analysis of bio-oil quality revealed a high hydrocarbon content, antioxidant properties, a high total phenolic content and metal-chelating capacity. These results open up probable applications of S. ravannae bio-oil in different fields, including fuel, the food industry and the biomedical domain. Copyright © 2018 Elsevier Ltd. All rights reserved.
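
    A minimal sketch of the Flynn-Wall-Ozawa (FWO) step: at a fixed conversion, ln(beta) is linear in 1/T with slope -1.052*Ea/R. The temperatures below are synthetic, generated to be consistent with the reported average activation energy, so the fit simply recovers it.

      # FWO isoconversional estimate of Ea from heating rates and temperatures.
      import numpy as np

      R = 8.314                                  # J mol-1 K-1
      betas = np.array([10., 20., 40., 60.])     # heating rates, K min-1
      Ea_true = 151.45e3                         # J mol-1 (reported average)
      # Synthetic 1/T at fixed conversion, consistent with the FWO relation:
      invT = (30.0 - np.log(betas)) * R / (1.052 * Ea_true)   # ~1/700 K
      slope, _ = np.polyfit(invT, np.log(betas), 1)
      print("Ea =", -slope * R / 1.052 / 1e3, "kJ/mol")       # ~151.45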

  19. Differences in compassion fatigue, symptoms of posttraumatic stress disorder and relationship satisfaction, including sexual desire and functioning, between male and female detectives who investigate sexual offenses against children: a pilot study.

    PubMed

    Lane, Eric J; Lating, Jeffrey M; Lowry, Jenny L; Martino, Traci P

    2010-01-01

    Law enforcement detectives who work with traumatized individuals, especially children who were victims of sexual abuse or assault, are likely to experience job-related emotional distress. The purpose of this study was to examine the relations among compassion fatigue, probable PTSD symptoms, and personal relationship satisfaction, including communication and sexual satisfaction, in a sample of 47 male and female detectives. Responses to the administered questionnaires indicated a relation between compassion fatigue symptoms and probable PTSD symptoms. There were also compelling gender differences. For example, for male detectives, open communication with their spouse or significant other was negatively correlated with burnout, indicating that the more open the communication, the lower the reported burnout. However, for female detectives there was a negative correlation between open communication with a spouse or significant other and compassion satisfaction, suggesting that more open communication was related to lower levels of satisfaction with their ability to be a professional caregiver. Furthermore, although stepwise regression analysis indicated that years of service as a detective is independently associated with sexual desire, female detectives evidenced less sexual desire and more difficulty with sexual functioning than did male detectives. Implications of these preliminary findings are discussed and limitations addressed.

  20. Flood risk assessment in a Spanish Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Salazar, S.; Francés, F.; García-Bartual, R.; Ortiz, E.; Múnera, J. C.; Vélez, J. J.

    2009-04-01

    This paper describes a multidisciplinary approach to risk assessment and its application to analysing the effects of extreme flood events on the Mediterranean catchment called "Rambla del Poyo" in Valencia (Spain). This catchment, located on the east coast of Spain, has an area of 380 km2 and is clearly open to Mediterranean mesoscale convective storms. The climate is semiarid, and the flow regime is typically ephemeral, but with highly frequent flash floods, with peak flows in the order of 500 m3/s. Recently, in 2000 and 2002, the area was severely flooded. The flood-prone area is located in the lower part of the basin, with an important concentration of different urban centers and industrial and commercial areas (including part of the Valencia International Airport). For this reason, the analysis of damages to residential, industrial and commercial urbanized areas is essential for the prevention of damages through proper flood risk management. The approach is based on three main steps. The first step entails a detailed hydrological analysis (parameter estimation, calibration-validation and simulations) using a distributed rainfall-runoff model called TETIS. In the case study, high-temporal-resolution rain gauge data are scarce; because of this, in addition to a small number of historic events, 100 synthetic rainstorms were generated using the multidimensional stochastic model RAINGEN, which adequately represents the main structural properties typical of intense convective storms, including the occurrence of rain cells in space and time and the generated intensities. An equivalent daily maximum precipitation Pd was estimated for each synthetic event, thus allowing a return period assignment using the known statistical distribution of Pd in the region. On the other hand, the initial soil moisture condition can have a strong influence on runoff production; for this reason, a long-term daily simulation has been performed in order to assess the probability distribution of the initial situation before the extreme flood events (dry and wet conditions). For all combinations of precipitation inputs and initial conditions, 200 hydrological simulations have been performed in order to obtain the input hydrographs for the hydraulic model. Finally in this step, a frequency analysis to obtain the non-exceedance probability of the peak discharges has been developed, using the annual maximum daily precipitation and the initial soil moisture condition, with the expression F_X(x) = ∫ F_X|R(x|r) f_R(r) dr, where X is the random variable of interest (peak discharge), R is the annual maximum daily precipitation, f_R(r) is the probability density function of R, and F_X|R(x|r) is the conditional distribution function of X given r, obtained from the simulations. The main objective of the second step is flood hazard estimation, for which the hydraulic modelling has been developed using the coupled computing version of Sobek 1D/2D. In this task, the treatment of the DEM can be a key task depending on the scale of work.
The introduction of buildings, walls, the opening of drainage works… improves the quality of results in areas with high anthropogenic influence; in our case, six simulations were made with three different resolutions. In the end, the model combines a one-dimensional (1D) model along the channel, coupled throughout the stretch to a two-dimensional (2D) parent grid of 30x30 metres, except for its passage through the urban, commercial and industrial land uses in the flood-prone area, where it connects with a child grid of 10x10 metres. Unfortunately, for reasons of computing time, the hydraulic model has not been run for all 200 available events. However, 20 events have been carefully selected, trying to cover the probabilistic spectrum of interest for this study (from two to one thousand years of return period). From the 20 selected flooding maps, a GIS computational tool has been developed for calculating a regression between the maximum water depth and the natural logarithm of the return period. Using this methodology, the hazard maps for the return periods of interest have been generated. Finally, the third step concerns the flood risk, which was defined as the probabilistic integral of the combination of flood hazard and land-use vulnerability, R = ∫_0^∞ V(h) f_H(h) dh, where R is the flood risk, V(h) is the land-use vulnerability, h is the flood magnitude and f_H(h) is its probability density function. The land-use vulnerability is expressed in terms of stage-damage functions for urban, commercial and industrial land uses. Both flood hazard and land-use vulnerability are defined in terms of magnitude (water depth). This integral has been solved in discrete form using GIS tools. The flood risk assessment has been carried out at a 10-metre cell resolution in the flood-prone area of the "Rambla del Poyo". With this useful methodology, we believe that a complete flood risk analysis is needed in order to objectively compare different future scenarios that can affect either the flood hazard and/or the vulnerability in the flood-prone area.
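
    A minimal numeric sketch of the risk integral R = ∫ V(h) f_H(h) dh defined above; the depth-damage curve and the lognormal depth density below are hypothetical placeholders, not the paper's stage-damage functions.

      # Discrete evaluation of R = integral of V(h) * f_H(h) dh over depth h.
      import numpy as np
      from scipy.stats import lognorm

      h = np.linspace(0.0, 10.0, 2001)          # water depth, m
      V = np.clip(h / 3.0, 0.0, 1.0)            # damage fraction, saturates at 3 m
      f = lognorm.pdf(h, s=0.8, scale=1.0)      # hypothetical depth density
      R = np.trapz(V * f, h)                    # expected damage fraction
      print(round(R, 3))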

  1. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high-speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information for only one of the two objects was available, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high-speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high-speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations) or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? In that case, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The usual methods of finding a maximum P (sub c) do no good because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think, given an assumption of no covariance information, that an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70 meter radius; the maximum (degenerate ellipse) P (sub c) is then about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst.
Some improvement may be made with respect to this problem by realizing that, while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
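
    The quoted figures can be reproduced from the standard small-target approximation: for a window of half-width r at miss distance d >> r, the probability mass of a one-dimensional Gaussian is maximized over the unknown standard deviation at sigma = d, giving P (sub c) max = 2r / (d * sqrt(2*pi*e)). A short check:

      # Degenerate-ellipse upper bound on collision probability (r << d).
      import math

      def pc_max_degenerate(miss_m, radius_m):
          return 2.0 * radius_m / (miss_m * math.sqrt(2.0 * math.pi * math.e))

      print(pc_max_degenerate(25e3, 70.0))   # ~0.00136 at 25 km
      print(pc_max_degenerate(40e3, 70.0))   # ~0.00085 at 40 km
      # Miss distance needed to bring the bound down to the 1e-4 threshold:
      print(2.0 * 70.0 / (1e-4 * math.sqrt(2.0 * math.pi * math.e)))  # ~339,000 m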

  2. The preference of probability over negative values in action selection.

    PubMed

    Neyedli, Heather F; Welsh, Timothy N

    2015-01-01

    It has previously been found that when participants are presented with a pair of motor prospects, they can select the prospect with the largest maximum expected gain (MEG). Many of those decisions, however, were trivial because of large differences in MEG between the prospects. The purpose of the present study was to explore participants' preferences when making non-trivial decisions between two motor prospects. Participants were presented with pairs of prospects that: 1) differed in MEG with either only the values or only the probabilities differing between the prospects; and 2) had similar MEG with one prospect having a larger probability of hitting the target and a higher penalty value and the other prospect a smaller probability of hitting the target but a lower penalty value. In different experiments, participants either had 400 ms or 2000 ms to decide between the prospects. It was found that participants chose the configuration with the larger MEG more often when the probability varied between prospects than when the value varied. In pairs with similar MEGs, participants preferred a larger probability of hitting the target over a smaller penalty value. These results indicate that participants prefer probability information over negative value information in a motor selection task.
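
    For concreteness, a minimal sketch of the MEG comparison with two invented prospects, each described by a hit probability, a gain value and a penalty value:

      # Maximum expected gain: MEG = p*gain + (1-p)*penalty.
      def meg(p_hit, gain, penalty):
          return p_hit * gain + (1.0 - p_hit) * penalty

      a = meg(0.90, 100, -20)    # high probability, high penalty -> 88.0
      b = meg(0.80, 100, -5)     # lower probability, low penalty  -> 79.0
      print(a, b, "choose A" if a > b else "choose B")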

  3. PubMed Central

    Nozaki, S.; Kawai, M.; Shimoyama, R.; Futamura, N.; Matsumura, T.; Adachi, K.; Kikuchi, Y.

    2010-01-01

    The purpose of this study is to evaluate whether range of motion exercise of the temporo-mandibular joint (jaw ROM exercise) with a hot pack and massage of the masseter muscle improves biting disorders in Duchenne muscular dystrophy (DMD). The subjects were 18 DMD patients (21.3 ± 4.1 years old). The jaw ROM exercise consisted of therapist-assisted training (2 times a week) and self-training (before each meal every day). The therapist-assisted training consisted of the application of a hot pack on the cheek over the masseter muscle region (15 minutes), massage of the masseter (10 minutes), and jaw ROM exercise (5 minutes). The self-training involved jaw ROM exercise by opening the mouth to the maximum degree, ten times. These trainings continued for six months. Outcomes were evaluated by measuring the greatest occlusal force and the distance at the maximum degree of mouth opening between an upper and a lower incisor. Six months later, the greatest occlusal force had increased significantly compared with that at the start of jaw ROM exercise (intermediate values: from 73.8 N to 97.3 N) (p = 0.005) as determined by the Friedman test and Scheffé's nonparametric test. The patients' satisfaction with meals increased. However, the maximum degree of mouth opening did not change after six months of jaw ROM exercise. Jaw ROM exercise in DMD is effective for increasing the greatest occlusal force. PMID:21574523

  4. National Dam Inspection Program. Upper Pigeon Hill Dam. (NDI I.D. Number PA-00340, PennDER I.D. Number 67-5) Susquehanna River Basin, Gitts Run, York County, Pennsylvania. Phase I Inspection Report,

    DTIC Science & Technology

    1980-03-01

    recommended guidelines, the Spillway Design Flood (SDF) ranges between the 1/2-PMF (Probable Maximum Flood) and PMF. Since the dam is near the lower end of...overtopping. A breach analysis indicates that failure under 1/2-PMF conditions would probably not lead to increased property damage or loss of life at...

  5. Bayesian approach to inverse statistical mechanics.

    PubMed

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  7. Flood frequency analysis for nonstationary annual peak records in an urban drainage basin

    USGS Publications Warehouse

    Villarini, G.; Smith, J.A.; Serinaldi, F.; Bales, J.; Bates, P.D.; Krajewski, W.F.

    2009-01-01

    Flood frequency analysis in urban watersheds is complicated by nonstationarities of annual peak records associated with land use change and evolving urban stormwater infrastructure. In this study, a framework for flood frequency analysis is developed based on the Generalized Additive Models for Location, Scale and Shape parameters (GAMLSS), a tool for modeling time series under nonstationary conditions. GAMLSS is applied to annual maximum peak discharge records for Little Sugar Creek, a highly urbanized watershed which drains the urban core of Charlotte, North Carolina. It is shown that GAMLSS is able to describe the variability in the mean and variance of the annual maximum peak discharge by modeling the parameters of the selected parametric distribution as a smooth function of time via cubic splines. Flood frequency analyses for Little Sugar Creek (at a drainage area of 110 km2) show that the maximum flow with a 0.01 annual probability (corresponding to the 100-year flood peak under stationary conditions) over the 83-year record has ranged from a minimum unit discharge of 2.1 m3 s-1 km-2 to a maximum of 5.1 m3 s-1 km-2. An alternative characterization can be made by examining the estimated return interval of the peak discharge that would have an annual exceedance probability of 0.01 under the assumption of stationarity (3.2 m3 s-1 km-2). Under nonstationary conditions, alternative definitions of return period should be adapted. Under the GAMLSS model, the return interval of an annual peak discharge of 3.2 m3 s-1 km-2 ranges from a maximum value of more than 5000 years in 1957 to a minimum value of almost 8 years for the present time (2007). The GAMLSS framework is also used to examine the links between population trends and flood frequency, as well as trends in annual maximum rainfall. These analyses are used to examine evolving flood frequency over future decades. © 2009 Elsevier Ltd.
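
    As a toy illustration of this kind of nonstationary fit, the sketch below fits by maximum likelihood a Gumbel distribution whose location parameter drifts linearly in time; the data are synthetic, and the paper's actual model uses cubic splines and a richer distribution family.

      # Nonstationary Gumbel fit: loc(t) = a + b*t, constant scale.
      import numpy as np
      from scipy.optimize import minimize
      from scipy.stats import gumbel_r

      rng = np.random.default_rng(0)
      t = np.arange(83)                                    # 83-year record
      peaks = gumbel_r.rvs(loc=100 + 1.5 * t, scale=30, random_state=rng)

      def nll(theta):
          a, b, log_scale = theta
          return -gumbel_r.logpdf(peaks, loc=a + b * t,
                                  scale=np.exp(log_scale)).sum()

      a, b, s = minimize(nll, x0=[100.0, 0.0, np.log(20.0)]).x
      # Time-varying 1%-exceedance ("100-year") quantile at the record's ends:
      for year in (0, 82):
          print(year, gumbel_r.ppf(0.99, loc=a + b * year, scale=np.exp(s)))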

  8. Flood Frequency Analysis For Partial Duration Series In Ganjiang River Basin

    NASA Astrophysics Data System (ADS)

    zhangli, Sun; xiufang, Zhu; yaozhong, Pan

    2016-04-01

    Accurate estimation of flood frequency is key to effective, nationwide flood damage abatement programs. The partial duration series (PDS) method is widely used in hydrologic studies because it considers all events above a certain threshold level, as compared to the annual maximum series (AMS) method, which considers only the annual maximum value. However, the PDS has a drawback in that it is difficult to define the thresholds and maintain an independent and identical distribution of the partial duration time series; this drawback is discussed in this paper. The Ganjiang River is the seventh largest tributary of the Yangtze River, the longest river in China. The Ganjiang River covers a drainage area of 81,258 km2 above the Wanzhou hydrologic station, the basin outlet. In this work, 56 years of daily flow data (1954-2009) from the Wanzhou station were used to analyze flood frequency, and the Pearson-III model was employed as the hydrologic probability distribution. Overall, three tasks were accomplished: (1) the threshold of the PDS was obtained from the percentile rank of daily runoff; (2) trend analysis of the flow series was conducted using the PDS; and (3) flood frequency analysis was conducted for the partial duration flow series. The results showed a slight upward trend of the annual runoff in the Ganjiang River basin. The maximum flow with a 0.01 exceedance probability (corresponding to a 100-year flood peak under stationary conditions) was 20,000 m3/s, while that with a 0.1 exceedance probability was 15,000 m3/s. These results will serve as a guide for hydrological engineering planning, design, and management for policymakers and decision makers associated with hydrology.
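
    A minimal sketch of the PDS construction step (percentile-rank threshold plus a simple independence rule); the synthetic flows, the 99.5th-percentile threshold and the 7-day separation rule are illustrative choices, not the paper's.

      # Build a partial duration series from synthetic daily flows.
      import numpy as np

      rng = np.random.default_rng(1)
      q = rng.gamma(2.0, 500.0, size=20454)        # ~56 years of daily flow, m3/s

      threshold = np.percentile(q, 99.5)           # percentile-rank threshold
      above = np.flatnonzero(q > threshold)
      peaks, last = [], -10**9
      for i in above:
          if i - last > 7:                         # >7 days apart -> new event
              peaks.append(q[i])
          elif q[i] > peaks[-1]:                   # same event: keep its maximum
              peaks[-1] = q[i]
          last = i
      print(len(peaks), round(threshold, 1))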

  9. Population pharmacokinetics and maximum a posteriori probability Bayesian estimator of abacavir: application of individualized therapy in HIV-infected infants and toddlers

    PubMed Central

    Zhao, Wei; Cella, Massimo; Della Pasqua, Oscar; Burger, David; Jacqz-Aigrain, Evelyne

    2012-01-01

    AIMS To develop a population pharmacokinetic model for abacavir in HIV-infected infants and toddlers, which will be used to describe both once and twice daily pharmacokinetic profiles, identify covariates that explain variability and propose optimal time points to optimize the area under the concentration–time curve (AUC) targeted dosage and individualize therapy. METHODS The pharmacokinetics of abacavir was described with plasma concentrations from 23 patients using nonlinear mixed-effects modelling (NONMEM) software. A two-compartment model with first-order absorption and elimination was developed. The final model was validated using bootstrap, visual predictive check and normalized prediction distribution errors. The Bayesian estimator was validated using the cross-validation and simulation–estimation method. RESULTS The typical population pharmacokinetic parameters and relative standard errors (RSE) were apparent systemic clearance (CL) 13.4 l h−1 (RSE 6.3%), apparent central volume of distribution 4.94 l (RSE 28.7%), apparent peripheral volume of distribution 8.12 l (RSE 14.2%), apparent intercompartment clearance 1.25 l h−1 (RSE 16.9%) and absorption rate constant 0.758 h−1 (RSE 5.8%). The covariate analysis identified weight as the individual factor influencing the apparent oral clearance: CL = 13.4 × (weight/12)^1.14. The maximum a posteriori probability Bayesian estimator, based on three concentrations measured at 0, 1 or 2, and 3 h after drug intake, allowed prediction of individual AUC0–t. CONCLUSIONS The population pharmacokinetic model developed for abacavir in HIV-infected infants and toddlers accurately described both once and twice daily pharmacokinetic profiles. The maximum a posteriori probability Bayesian estimator of AUC0–t was developed from the final model and can be used routinely to optimize individual dosing. PMID:21988586
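
    A direct evaluation of the reported covariate model for apparent clearance, CL = 13.4 × (weight/12)^1.14 l h-1 (weight in kg), for a few body weights:

      # Allometric covariate model for abacavir apparent clearance.
      def abacavir_cl(weight_kg):
          return 13.4 * (weight_kg / 12.0) ** 1.14

      for w in (6, 12, 18):
          print(w, "kg ->", round(abacavir_cl(w), 1), "l/h")   # ~6.1, 13.4, 21.3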

  10. Minimum error discrimination between similarity-transformed quantum states

    NASA Astrophysics Data System (ADS)

    Jafarizadeh, M. A.; Sufiani, R.; Mazhari Khiavi, Y.

    2011-07-01

    Using the well-known necessary and sufficient conditions for minimum error discrimination (MED), we extract an equivalent form for the MED conditions. In fact, by replacing the inequalities corresponding to the MED conditions with an equivalent but more suitable and convenient identity, the problem of mixed state discrimination with optimal success probability is solved. Moreover, we show that the mentioned optimality conditions can be viewed as a Helstrom family of ensembles under some circumstances. Using the given identity, MED between N similarity-transformed equiprobable quantum states is investigated. In the case that the unitary operators generate a set of irreducible representations, the optimal set of measurements and the corresponding maximum success probability of discrimination can be determined precisely. In particular, it is shown that for equiprobable pure states, the optimal measurement strategy is the square-root measurement (SRM), whereas for the mixed states, SRM is not optimal. In the case that the unitary operators are reducible, there is no closed-form formula in the general case, but the procedure can be applied on a case-by-case basis. Finally, we give the maximum success probability of optimal discrimination for some important examples of mixed quantum states, such as generalized Bloch sphere m-qubit states, spin-j states, particular nonsymmetric qudit states, etc.
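
    A minimal numpy sketch of the textbook square-root measurement for equiprobable pure states (not the paper's general reducible construction); for two pure states the SRM attains the Helstrom bound, which the printout confirms.

      # SRM for equiprobable pure states: Pi_i = S^(-1/2)(1/N)|psi_i><psi_i|S^(-1/2)
      # with S = (1/N) sum_i |psi_i><psi_i|.
      import numpy as np

      def srm_success(states):
          """Success probability of the SRM for equiprobable pure state rows."""
          N = len(states)
          S = sum(np.outer(v, v.conj()) for v in states) / N
          w, U = np.linalg.eigh(S)
          inv_sqrt = np.array([1.0 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w])
          S_is = U @ np.diag(inv_sqrt) @ U.conj().T      # S^(-1/2) on its support
          return sum(abs(v.conj() @ S_is @ v) ** 2 for v in states) / N**2

      theta = 0.3
      psi1 = np.array([1.0, 0.0])
      psi2 = np.array([np.cos(theta), np.sin(theta)])
      helstrom = 0.5 * (1.0 + np.sqrt(1.0 - (psi1 @ psi2) ** 2))
      print(srm_success([psi1, psi2]), helstrom)         # both ~0.6478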

  11. A novel rumor diffusion model considering the effect of truth in online social media

    NASA Astrophysics Data System (ADS)

    Sun, Ling; Liu, Yun; Zeng, Qing-An; Xiong, Fei

    2015-12-01

    In this paper, we propose a model to investigate how truth affects rumor diffusion in online social media. Our model reveals a relation between rumor and truth — namely, while a rumor is diffusing, the truth about the rumor also diffuses with it. Two patterns by which agents identify the rumor, self-identification and passive learning, are taken into account. Combining theoretical proof and simulation analysis, we find that the threshold value for rumor diffusion is negatively correlated with the connectivity between nodes in the network and with the probability β of agents knowing the truth. Increasing β can reduce the maximum density of rumor spreaders and slow down the generation of new rumor spreaders. On the other hand, we conclude that the best rumor diffusion strategy must balance the probability of forwarding the rumor against the probability of agents losing interest in it. A high rumor spread rate λ leads to a surge in truth dissemination, which greatly limits the diffusion of the rumor. Furthermore, in the case of unknown λ, increasing β can effectively reduce the maximum proportion of agents who do not know the truth, but cannot narrow the rumor diffusion range within a certain interval of β.

  13. Formation of muconaldehyde, an open-ring metabolite of benzene, in mouse liver microsomes: an additional pathway for toxic metabolites.

    PubMed Central

    Latriano, L; Goldstein, B D; Witz, G

    1986-01-01

    It has been proposed that a ring-opened form may be responsible for the toxicity of benzene. The present studies demonstrate that incubation of [14C]benzene with liver microsomes (obtained from male CD-1 mice treated with benzene) in the presence of NADPH results in the formation of a ring-opened product. Evidence for the identity of this product was obtained by derivatizing with 2-thiobarbituric acid (TBA), which resulted in the formation of an adduct with a 490-nm absorbance maximum. This maximum is identical to that observed after authentic trans,trans-muconaldehyde has reacted with TBA. Separation of muconaldehyde, both with and without trapping with TBA, from other benzene metabolites in the incubation mixture was accomplished by HPLC. The radioactivity profile of fractions collected during HPLC analysis contained peaks that eluted with muconaldehyde and the muconaldehyde-TBA adduct. The structure of the ring-opened product was confirmed by mass spectrometry studies, in which the HPLC peak from the microsomal incubation mixture that eluted at the retention time of authentic muconaldehyde was collected and derivatized with 2,4-dinitrophenylhydrazine. The high-resolution mass spectrum of this sample contained an ion with an m/z of 291.0729, corresponding to muconaldehyde mono-dinitrophenylhydrazone. These results indicate that benzene is metabolized in vitro to a ring-opened product identified as muconaldehyde. PMID:3464956

  14. Land-Surface Subsidence and Open Bedrock Fractures in the Tully Valley, Onondaga County, New York

    USGS Publications Warehouse

    Hackett, William R.; Gleason, Gayle C.; Kappel, William M.

    2009-01-01

    Open bedrock fractures were mapped in and near two brine field areas in Tully Valley, New York. More than 400 open fractures and closed joints were mapped for dimension, orientation, and distribution along the east and west valley walls adjacent to two former brine fields. The bedrock fractures are as much as 2 feet wide and over 50 feet deep, while linear depressions in the soil, which are 3 to 10 feet wide and 3 to 6 feet deep, indicate the presence of open bedrock fractures below the soil. The fractures are probably the result of solution mining of halite deposits about 1,200 feet below the land surface.

  15. The effect of activity-based instruction on conceptual development of seventh grade students in probability

    NASA Astrophysics Data System (ADS)

    Gürbüz, Ramazan

    2010-09-01

    The purpose of this study is to investigate and compare the effects of activity-based and traditional instruction on students' conceptual development of certain probability concepts. The study was conducted using a pretest-posttest control group design with 80 seventh graders. A purpose-developed 'Conceptual Development Test' comprising 12 open-ended questions was administered to both groups of students before and after the intervention. The data were analysed using analysis of covariance, with the pretest as covariate. The results revealed that activity-based instruction (ABI) outperformed its traditional counterpart in the development of probability concepts. Furthermore, ABI was found to contribute most to students' conceptual development of the concept of 'Probability of an Event' and least to that of 'Sample Space'. Consequently, it can be concluded that the designed instructional process was effective in the teaching of probability concepts.

  16. How Life History Can Sway the Fixation Probability of Mutants

    PubMed Central

    Li, Xiang-Yi; Kurokawa, Shun; Giaimo, Stefano; Traulsen, Arne

    2016-01-01

    In this work, we study the effects of demographic structure on evolutionary dynamics when selection acts on reproduction, survival, or both. In contrast to the previously discovered pattern that the fixation probability of a neutral mutant decreases as the population becomes younger, we show that a mutant with a constant selective advantage may have a maximum or a minimum of the fixation probability in populations with an intermediate fraction of young individuals. This highlights the importance of life history and demographic structure in studying evolutionary dynamics. We also illustrate the fundamental differences between selection on reproduction and selection on survival when age structure is present. In addition, we evaluate the relative importance of size and structure of the population in determining the fixation probability of the mutant. Our work lays the foundation for also studying density- and frequency-dependent effects in populations when demographic structures cannot be neglected. PMID:27129737
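
    For reference, the classical unstructured Moran-process fixation probability, the monotone baseline that the paper shows age structure can bend:

      # Fixation probability of one mutant with constant advantage r in a
      # well-mixed Moran population of size N.
      def fixation_probability(r, N):
          if r == 1.0:
              return 1.0 / N                    # neutral mutant
          return (1.0 - 1.0 / r) / (1.0 - r ** (-N))

      print(fixation_probability(1.1, 100))     # ~0.0909 for a 10% advantage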

  17. Type I error probabilities based on design-stage strategies with applications to noninferiority trials.

    PubMed

    Rothmann, Mark

    2005-01-01

    When testing the equality of means from two different populations, a t-test or a large-sample normal test tends to be performed. For these tests, when the sample size or design for the second sample is dependent on the results of the first sample, the type I error probability is altered for each specific possibility in the null hypothesis. We will examine the impact on the type I error probabilities for two confidence interval procedures and for procedures using test statistics when the design for the second sample or experiment is dependent on the results from the first sample or experiment (or series of experiments). Ways of controlling a desired maximum type I error probability or a desired type I error rate will be discussed. Results are applied to the setting of noninferiority comparisons in active controlled trials, where the use of a placebo is unethical.

  18. An extended car-following model considering random safety distance with different probabilities

    NASA Astrophysics Data System (ADS)

    Wang, Jufeng; Sun, Fengxin; Cheng, Rongjun; Ge, Hongxia; Wei, Qi

    2018-02-01

    Because of differences in vehicle type or driving skill, driving strategies are not exactly the same, and the driving speeds of different vehicles may differ for the same headway. Since the optimal velocity function is determined by the safety distance in addition to the maximum velocity and headway, an extended car-following model accounting for random safety distances with different probabilities is proposed in this paper. The linear stability condition for this extended traffic model is obtained using linear stability theory. Numerical simulations are carried out to explore the complex phenomena resulting from multiple safety distances in the optimal velocity function. The cases of multiple types of safety distances selected with different probabilities are presented. Numerical results show that traffic flow with multiple safety distances selected with different probabilities is more unstable than that with a single type of safety distance, and results in more stop-and-go phenomena.
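
    A minimal sketch of one car-following update in which the safety distance is drawn from a discrete distribution; the Bando-type optimal velocity function and all parameter values are common illustrative choices, not necessarily the paper's.

      # One optimal-velocity update with a randomly selected safety distance hc.
      import math, random

      V_MAX, A = 2.0, 1.0                      # max velocity, sensitivity

      def optimal_velocity(headway, hc):
          return (V_MAX / 2.0) * (math.tanh(headway - hc) + math.tanh(hc))

      def step(v, headway, dt=0.1):
          hc = random.choices([3.5, 4.0, 4.5], weights=[0.2, 0.6, 0.2])[0]
          accel = A * (optimal_velocity(headway, hc) - v)
          return v + accel * dt

      print(step(v=1.0, headway=4.0))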

  19. Forecasting a winner for Malaysian Cup 2013 using soccer simulation model

    NASA Astrophysics Data System (ADS)

    Yusof, Muhammad Mat; Fauzee, Mohd Soffian Omar; Latif, Rozita Abdul

    2014-07-01

    This paper investigates, through soccer simulation, the calculation of the probability of each team winning the Malaysia Cup 2013. Our methodology is to predict the outcomes of individual matches and then simulate the Malaysia Cup 2013 tournament 5000 times. As match outcomes are always a matter of uncertainty, a statistical model, in particular a double Poisson model, is used to predict the number of goals scored and conceded by each team. Maximum likelihood estimation is used to measure the attacking strength and defensive weakness of each team. Based on our simulation results, LionXII has the highest probability of becoming the winner, followed by Selangor, ATM, JDT and Kelantan. Meanwhile, T-Team, Negeri Sembilan and Felda United have lower probabilities of winning the Malaysia Cup 2013. In summary, we find that the probability of any single team becoming the winner is small, indicating that the level of competitive balance in the Malaysia Cup 2013 is quite high.
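
    A minimal sketch of the double Poisson match engine with invented strengths (the paper estimates attacking and defensive strengths by maximum likelihood from match data):

      # Double Poisson match simulation: goals are independent Poisson draws
      # with rates attack * opponent's defensive weakness.
      import numpy as np

      rng = np.random.default_rng(42)

      def simulate_match(attack_a, defend_a, attack_b, defend_b, n=5000):
          goals_a = rng.poisson(attack_a * defend_b, size=n)
          goals_b = rng.poisson(attack_b * defend_a, size=n)
          return np.mean(goals_a > goals_b), np.mean(goals_a == goals_b)

      p_win, p_draw = simulate_match(1.6, 0.8, 1.2, 1.0)  # hypothetical strengths
      print(p_win, p_draw)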

  20. Vandenberg Air Force Base Upper Level Wind Launch Weather Constraints

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn A.; Wheeler, Mark M.

    2012-01-01

    The 30th Operational Support Squadron Weather Flight (30 OSSWF) provides comprehensive weather services to the space program at Vandenberg Air Force Base (VAFB) in California. One of their responsibilities is to monitor upper-level winds to ensure safe launch operations of the Minuteman III ballistic missile. The 30 OSSWF tasked the Applied Meteorology Unit (AMU) to analyze VAFB sounding data with the goal of determining the probability of violating (PoV) their upper-level thresholds for wind speed and shear constraints specific to this launch vehicle, and to develop a tool that will calculate the PoV of each constraint on the day of launch. In order to calculate the probability of exceeding each constraint, the AMU collected and analyzed historical data from VAFB. The historical sounding data were retrieved from the National Oceanic and Atmospheric Administration Earth System Research Laboratory archive for the years 1994-2011 and then stratified into four sub-seasons: January-March, April-June, July-September, and October-December. The maximum wind speed and 1000-ft shear values for each sounding in each subseason were determined. To accurately calculate the PoV, the AMU determined the theoretical distributions that best fit the maximum wind speed and maximum shear datasets. Ultimately it was discovered that the maximum wind speeds follow a Gaussian distribution while the maximum shear values follow a lognormal distribution. These results were applied when calculating the averages and standard deviations needed for the historical and real-time PoV calculations. In addition to the requirements outlined in the original task plan, the AMU also included forecast sounding data from the Rapid Refresh model. This information provides further insight for the launch weather officers (LWOs) when determining if a wind constraint violation will occur over the next few hours on day of launch. The interactive graphical user interface (GUI) for this project was developed in Microsoft Excel using Visual Basic for Applications. The GUI displays the critical sounding data easily and quickly for the LWOs on day of launch. This tool will replace the existing one used by the 30 OSSWF, assist the LWOs in determining the probability of exceeding specific wind threshold values, and help to improve the overall upper winds forecast for the launch customer.
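
    A minimal sketch of the PoV computation implied above, with a Gaussian maximum wind speed and a lognormal maximum shear; the climatological parameters and thresholds here are invented for illustration.

      # PoV = P(value > constraint threshold) = survival function at threshold.
      import numpy as np
      from scipy.stats import norm, lognorm

      wind_mu, wind_sd = 45.0, 12.0            # knots, hypothetical sub-season stats
      pov_wind = norm.sf(70.0, loc=wind_mu, scale=wind_sd)     # P(speed > 70 kt)

      shear_mu, shear_sd = np.log(0.02), 0.5   # log-scale parameters, hypothetical
      pov_shear = lognorm.sf(0.05, s=shear_sd, scale=np.exp(shear_mu))
      print(pov_wind, pov_shear)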

  1. Quantifying the origins of life on a planetary scale

    PubMed Central

    Scharf, Caleb; Cronin, Leroy

    2016-01-01

    A simple, heuristic formula with parallels to the Drake Equation is introduced to help focus discussion on open questions for the origins of life in a planetary context. This approach indicates a number of areas where quantitative progress can be made on parameter estimation for determining origins of life probabilities, based on constraints from Bayesian approaches. We discuss a variety of “microscale” factors and their role in determining “macroscale” abiogenesis probabilities on suitable planets. We also propose that impact ejecta exchange between planets with parallel chemistries and chemical evolution could in principle amplify the development of molecular complexity and abiogenesis probabilities. This amplification could be very significant, and both bias our conclusions about abiogenesis probabilities based on the Earth and provide a major source of variance in the probability of life arising in planetary systems. We use our heuristic formula to suggest a number of observational routes for improving constraints on origins of life probabilities. PMID:27382156

  2. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multitarget tracking (MTT) algorithm, maximum entropy fuzzy c-means clustering joint probabilistic data association, that combines fuzzy c-means clustering with the joint probabilistic data association (JPDA) algorithm. The algorithm uses the membership value to express the probability that a measurement originates from a given target. The membership value is obtained from the fuzzy c-means clustering objective function optimized under the maximum entropy principle. To account for the effect of measurements shared among targets, we use a correction factor to adjust the association probability matrix when estimating the target state. Because the algorithm avoids splitting the confirmation matrix, it sidesteps the high computational load of the JPDA algorithm. Simulations and analysis of tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm realizes MTT quickly and efficiently. Further, its performance remains stable as the process noise variance increases. The proposed algorithm is efficient and has a low computational load, which ensures good performance when tracking multiple targets in a dense cluttered environment.
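
    Entropy-regularized fuzzy c-means has a closed-form membership update: adding an entropy term to the clustering objective and enforcing the normalization constraint yields softmax weights over negative squared distances. A minimal sketch of that step follows; the variable names are illustrative, and the full JPDA machinery (gating, correction factor, state estimation) is omitted.

```python
import numpy as np

def me_fcm_memberships(measurements, targets, lam=1.0):
    """u[i, j]: membership of measurement i to target j under max-entropy FCM."""
    d2 = ((measurements[:, None, :] - targets[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / lam)                    # entropy-regularized weights
    return w / w.sum(axis=1, keepdims=True)  # normalize over targets

meas = np.array([[0.1, 0.2], [2.9, 3.1], [1.6, 1.4]])
tgts = np.array([[0.0, 0.0], [3.0, 3.0]])
print(me_fcm_memberships(meas, tgts, lam=0.5))
```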

  3. Low-energy hydrogen uptake by small-cage Cn and Cn-1B fullerenes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dominguez-Gutierrez, F. Javier; Krstic, Predrag S.; Irle, Stephan

    We present a theoretical study of the hydrogen uptake capability of carbon fullerene cages Cn and their boron-doped heterofullerene equivalents Cn-1B, with n = 20, 40, and 60, irradiated by hydrogen atoms in an impact energy range of 0.1–100 eV. In order to predict exohedral and endohedral hydrogen capture as well as the scattering probability of hydrogen for various cage types and sizes, we perform quantum-classical molecular dynamics (QCMD) calculations using the self-consistent-charge density-functional tight-binding (SCC-DFTB) method. Maximum endohedral hydrogen capture probabilities of 20% for n = 60 and 14% for n = 40 are found at impact energies close to 15 eV for both Cn and Cn-1B systems. For n = 20, however, endohedral capture is observed at a maximum of 2%, while exohedral capture reaches a maximum of 5%, both at 15 eV. Similar results for hydrogen capture are obtained by classical molecular dynamics based on the ReaxFF potential. Lastly, the stopping cross section per carbon atom from the QCMD simulations displays, for all cage sizes, a linear dependence on the projectile velocity with a threshold at 0.8 eV, and extrapolates well to the available theoretical data.
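
    The capture probabilities quoted above are, in essence, outcome frequencies over many trajectories at each impact energy. The QCMD/SCC-DFTB dynamics cannot be reproduced here, so the sketch below mocks the per-trajectory outcome and shows only the bookkeeping that turns trajectories into capture probabilities; the mock outcome model is illustrative, not the paper's physics.

```python
from collections import Counter
import random

def mock_trajectory(energy_ev, rng):
    """Stand-in for one QCMD trajectory; returns the outcome label."""
    # Mock endohedral-capture probability peaking near 15 eV (illustrative)
    p_endo = 0.20 * max(0.0, 1 - abs(energy_ev - 15) / 15)
    r = rng.random()
    if r < p_endo:
        return "endohedral"
    return "exohedral" if r < p_endo + 0.05 else "scattered"

rng = random.Random(42)
for e in (5, 15, 50):
    outcomes = Counter(mock_trajectory(e, rng) for _ in range(2000))
    print(e, "eV:", {k: v / 2000 for k, v in outcomes.items()})
```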

  4. Low-energy hydrogen uptake by small-cage Cn and Cn-1B fullerenes

    DOE PAGES

    Dominguez-Gutierrez, F. Javier; Krstic, Predrag S.; Irle, Stephan; ...

    2018-08-29

    We present a theoretical study of the hydrogen uptake capability of carbon fullerene cages Cn and their boron-doped heterofullerene equivalents Cn-1B, with n = 20, 40, and 60, irradiated by hydrogen atoms in an impact energy range of 0.1–100 eV. In order to predict exohedral and endohedral hydrogen capture as well as the scattering probability of hydrogen for various cage types and sizes, we perform quantum-classical molecular dynamics (QCMD) calculations using the self-consistent-charge density-functional tight-binding (SCC-DFTB) method. Maximum endohedral hydrogen capture probabilities of 20% for n = 60 and 14% for n = 40 are found at impact energies close to 15 eV for both Cn and Cn-1B systems. For n = 20, however, endohedral capture is observed at a maximum of 2%, while exohedral capture reaches a maximum of 5%, both at 15 eV. Similar results for hydrogen capture are obtained by classical molecular dynamics based on the ReaxFF potential. Lastly, the stopping cross section per carbon atom from the QCMD simulations displays, for all cage sizes, a linear dependence on the projectile velocity with a threshold at 0.8 eV, and extrapolates well to the available theoretical data.

  5. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

    Load combinations of earthquakes and heavy trucks are an important topic in multihazard bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards separately and have no probabilistic basis for extreme load combinations. Earthquake load and heavy-truck load are random processes with distinct characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which accounts for load duration and occurrence probability, describes well the conversion of random processes to random variables and their combination, but it places strict constraints on the choice of time interval needed to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which appears more rational, but its results are generally unconservative. Therefore, a modified model is presented here that retains the advantages of both the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability, which converts random processes to random variables relatively easily and accounts for the non-maximum factor in load combinations. Combinations of earthquake load and heavy-truck load are used to illustrate the model. Finally, the results of a numerical simulation verify the feasibility and rationality of the model. PMID:24883347
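
    A short sketch of Turkstra's rule, for concreteness: one load is taken at its service-life maximum while the other is taken at a typical point-in-time value, and the larger of the two cases is kept. The mocked load histories below are illustrative only, but they show why the rule can come in below the true combined-process maximum.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 1000  # discretized service-life steps
# Rare earthquake pulses and a frequent heavy-truck load (both mocked)
quake = np.where(rng.random(T) < 0.01, rng.gumbel(2.0, 0.5, T), 0.0)
truck = rng.lognormal(0.0, 0.3, T)

true_max = (quake + truck).max()
# Turkstra: each load at its lifetime maximum, the other at a typical value
turkstra = max(quake.max() + truck.mean(), truck.max() + quake.mean())
print(f"combined-process maximum {true_max:.2f} vs Turkstra estimate {turkstra:.2f}")
```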

  6. Nonequilibrium-thermodynamics approach to open quantum systems

    NASA Astrophysics Data System (ADS)

    Semin, Vitalii; Petruccione, Francesco

    2014-11-01

    Open quantum systems are studied from the thermodynamic point of view, unifying the principle of maximum informational entropy with the hypothesis of a hierarchy of relaxation times. The result of this unification is a non-Markovian, local-in-time master equation that provides a direct connection between the dynamical and thermodynamical properties of open quantum systems. The power of the approach is illustrated by application to the damped harmonic oscillator and the damped driven two-level system, yielding analytical expressions for the non-Markovian and nonequilibrium entropy and inverse temperature.

  7. FUEL ASSAY REACTOR

    DOEpatents

    Spinrad, B.I.; Sandmeier, H.A.; Martens, F.H.

    1962-12-25

    A reactor having maximum sensitivity to perturbations is described comprising a core consisting of a horizontally disposed, rectangular, annular fuel zone containing enriched uranium dioxide dispersed in graphite, the concentration of uranium dioxide increasing from the outside to the inside of the fuel zone, an internal reflector of graphite containing an axial test opening disposed within the fuel zone, an external graphite reflector, means for changing the neutron spectrum in the test opening, and means for measuring perturbations in the neutron flux caused by the introduction of different fuel elements into the test opening. (AEC)

  8. A removal model for estimating detection probabilities from point-count surveys

    USGS Publications Warehouse

    Farnsworth, G.L.; Pollock, K.H.; Nichols, J.D.; Simons, T.R.; Hines, J.E.; Sauer, J.R.

    2000-01-01

    We adapted a removal model to estimate detection probability during point count surveys. The model assumes that one factor influencing detection during point counts is the singing frequency of birds. This may be true for surveys recording forest songbirds when most detections are by sound. The model requires counts to be divided into several time intervals. We used time intervals of 2, 5, and 10 min to develop a maximum-likelihood estimator for the detectability of birds during such surveys. We applied this technique to data from bird surveys conducted in Great Smoky Mountains National Park. We used model selection criteria to identify whether detection probabilities varied among species, throughout the morning, throughout the season, and among different observers. The overall detection probability for all birds was 75%. We found differences in detection probability among species. Species that sing frequently, such as Winter Wren and Acadian Flycatcher, had high detection probabilities (about 90%), while species that call infrequently, such as Pileated Woodpecker, had low detection probability (36%). We also found detection probabilities varied with the time of day for some species (e.g. thrushes) and between observers for other species. This method of estimating detectability during point count surveys offers a promising new approach to using count data to address questions of bird abundance, density, and population trends.
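
    One simple way to cast the removal idea in code is to model time to first detection as exponential with rate phi, fit phi by maximum likelihood to the counts falling in the 0-2, 2-5, and 5-10 minute intervals, and report overall detectability as 1 - exp(-10*phi). This is a hedged sketch of that simplified estimator, not the authors' exact model; the counts are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

edges = np.array([0.0, 2.0, 5.0, 10.0])  # interval boundaries (minutes)
counts = np.array([60, 25, 15])          # hypothetical first detections per interval

def neg_log_lik(phi):
    cdf = 1.0 - np.exp(-phi * edges)
    cell = np.diff(cdf) / cdf[-1]        # conditional on detection within 10 min
    return -np.sum(counts * np.log(cell))

phi = minimize_scalar(neg_log_lik, bounds=(1e-4, 5.0), method="bounded").x
print(f"per-minute rate {phi:.3f}; 10-min detectability {1 - np.exp(-10 * phi):.2f}")
```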

  9. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes both MaxEnt and Bayes' rule as special cases, and therefore unifies the two themes of these workshops, the Maximum Entropy and Bayesian methods, into a single general inference scheme.
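
    A worked toy example of the ME update: given a uniform prior over die faces and the new constraint that the mean is 4.5, the posterior that minimizes relative entropy is an exponential tilting of the prior, with the multiplier fixed by the constraint (Jaynes' classic die problem). A minimal sketch:

```python
import numpy as np
from scipy.optimize import brentq

x = np.arange(1, 7)           # faces of a die
prior = np.full(6, 1 / 6)     # uniform prior
target_mean = 4.5             # new information: E[x] = 4.5

def mean_given(lam):
    w = prior * np.exp(lam * x)   # exponential tilting of the prior
    return (w / w.sum()) @ x

lam = brentq(lambda l: mean_given(l) - target_mean, -5.0, 5.0)
posterior = prior * np.exp(lam * x)
posterior /= posterior.sum()
print(np.round(posterior, 4), "mean:", posterior @ x)
```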

  10. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
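
    A hedged sketch of how such a statistic can be formed in practice: estimate the sample entropy nonparametrically (here with a Vasicek-type spacing estimator), fit the Pareto by maximum likelihood, and take the gap between the fitted model's entropy and the estimate. Calibration of the rejection threshold, done in the paper via Monte Carlo, is omitted; the spacing parameter and sample are illustrative.

```python
import numpy as np

def vasicek_entropy(x, m):
    """Spacing-based nonparametric entropy estimate."""
    x = np.sort(x)
    n = len(x)
    lo = np.clip(np.arange(n) - m, 0, n - 1)
    hi = np.clip(np.arange(n) + m, 0, n - 1)
    return np.mean(np.log(n * (x[hi] - x[lo]) / (2 * m)))

rng = np.random.default_rng(3)
n, m = 200, 10
sample = rng.pareto(3.0, n) + 1.0            # classical Pareto, shape 3, scale 1
alpha = n / np.log(sample).sum()             # shape MLE (scale known = 1)
h_model = 1.0 / alpha + 1.0 - np.log(alpha)  # Pareto differential entropy, scale 1
stat = h_model - vasicek_entropy(sample, m)
print(f"alpha_hat {alpha:.2f}, entropy-based statistic {stat:.3f}")
```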

  11. JPS heater and sensor lightning qualification

    NASA Technical Reports Server (NTRS)

    Cook, M.

    1989-01-01

    Simulated lightning strike testing of the Redesigned Solid Rocket Motor (RSRM) field joint protection system heater assembly was performed at Thiokol Corp., Wendover Lightning Facility. Testing consisted of subjecting the lightning evaluation test article to simulated lightning strikes and evaluating the effects of heater cable transients on cables within the systems tunnel. The maximum short-circuit current coupled onto a United Space Boosters, Inc. operational flight cable within the systems tunnel, induced by transients from all cables external to the systems tunnel, was 92 amperes; the maximum open-circuit voltage coupled was 316 volts. The maximum short-circuit current coupled onto a United Space Boosters, Inc. operational flight cable within the systems tunnel, induced by heater power cable transients only, was 2.7 amperes; the maximum open-circuit voltage coupled was 39 volts. All coupling induced by the heater power cable was due to simulated lightning discharges alone; no heater operating power was applied during the test. The results showed that, for a worst-case lightning discharge, the heater power cable is responsible for a 3.9 decibel increase in voltage coupling to operational flight cables within the systems tunnel. Testing also showed that the current and voltage levels coupled onto cables within the systems tunnel are partially dependent on the relative locations of the cables within the tunnel.

  12. Glutamate receptor-channel gating. Maximum likelihood analysis of gigaohm seal recordings from locust muscle.

    PubMed Central

    Bates, S E; Sansom, M S; Ball, F G; Ramsey, R L; Usherwood, P N

    1990-01-01

    Gigaohm recordings have been made from glutamate receptor channels in excised, outside-out patches of collagenase-treated locust muscle membrane. The channels in the excised patches exhibit the kinetic state switching first seen in megaohm recordings from intact muscle fibers. Analysis of channel dwell time distributions reveals that the gating mechanism contains at least four open states and at least four closed states. Dwell time autocorrelation function analysis shows that there are at least three gateways linking the open states of the channel with the closed states. A maximum likelihood procedure has been used to fit six different gating models to the single channel data. Of these models, a cooperative model yields the best fit, and accurately predicts most features of the observed channel gating kinetics. PMID:1696510
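
    Dwell-time analysis of this kind typically starts from maximum-likelihood fits of exponential mixtures to the open and closed intervals. The sketch below fits a two-component closed-time mixture by direct likelihood optimization; it is a minimal stand-in, not the paper's fit of six full gating models.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
# Mocked closed dwell times (ms): fast and slow components
dwells = np.concatenate([rng.exponential(0.5, 700),
                         rng.exponential(8.0, 300)])

def neg_log_lik(theta):
    w = 1 / (1 + np.exp(-theta[0]))              # mixing weight in (0, 1)
    t1, t2 = np.exp(theta[1]), np.exp(theta[2])  # positive time constants
    pdf = w * np.exp(-dwells / t1) / t1 + (1 - w) * np.exp(-dwells / t2) / t2
    return -np.log(pdf).sum()

res = minimize(neg_log_lik, x0=[0.0, np.log(1.0), np.log(5.0)])
w = 1 / (1 + np.exp(-res.x[0]))
print(f"weight {w:.2f}, tau1 {np.exp(res.x[1]):.2f} ms, tau2 {np.exp(res.x[2]):.2f} ms")
```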

  13. Hydrophobic interaction between contiguous residues in the S6 transmembrane segment acts as a stimuli integration node in the BK channel

    PubMed Central

    Carrasquel-Ursulaez, Willy; Contreras, Gustavo F.; Sepúlveda, Romina V.; Aguayo, Daniel; González-Nilo, Fernando

    2015-01-01

    Large-conductance Ca2+- and voltage-activated K+ channel (BK) open probability is enhanced by depolarization, increasing Ca2+ concentration, or both. These stimuli activate modular voltage and Ca2+ sensors that are allosterically coupled to channel gating. Here, we report a point mutation of a phenylalanine (F380A) in the S6 transmembrane helix that, in the absence of internal Ca2+, profoundly hinders channel opening while showing only minor effects on the voltage sensor active–resting equilibrium. Interpretation of these results using an allosteric model suggests that the F380A mutation greatly increases the free energy difference between open and closed states and uncouples Ca2+ binding from voltage sensor activation and voltage sensor activation from channel opening. However, the presence of a bulky and more hydrophobic amino acid in the F380 position (F380W) increases the intrinsic open–closed equilibrium, weakening the coupling between both sensors with the pore domain. Based on these functional experiments and molecular dynamics simulations, we propose that F380 interacts with another S6 hydrophobic residue (L377) in contiguous subunits. This pair forms a hydrophobic ring important in determining the open–closed equilibrium and, like an integration node, participates in the communication between sensors and between the sensors and pore. Moreover, because of its effects on open probabilities, the F380A mutant can be used for detailed voltage sensor experiments in the presence of permeant cations. PMID:25548136
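
    For intuition, the interpretation above can be expressed in a reduced Horrigan-Aldrich-style allosteric scheme with voltage sensing only: L is the intrinsic open-closed equilibrium and D the sensor-pore coupling factor, so an F380A-like perturbation maps onto a smaller L and a weaker D. All parameter values below are illustrative, not fitted to these data.

```python
import numpy as np

def bk_open_probability(V_mV, L0=1e-6, zL=0.3, zJ=0.58, Vh=150.0, D=25.0, T=295.0):
    kT_mV = 8.617e-2 * T                    # thermal energy in meV (mV per unit charge)
    J = np.exp(zJ * (V_mV - Vh) / kT_mV)    # voltage-sensor equilibrium constant
    L = L0 * np.exp(zL * V_mV / kT_mV)      # intrinsic open-closed equilibrium
    return L * (1 + J * D) ** 4 / ((1 + J) ** 4 + L * (1 + J * D) ** 4)

V = np.array([-50.0, 0.0, 100.0, 200.0])
print("wild-type-like Po:", np.round(bk_open_probability(V), 4))
# F380A-like perturbation: smaller L0 (harder to open), weaker coupling D
print("mutant-like Po:   ", np.round(bk_open_probability(V, L0=1e-9, D=5.0), 6))
```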

  14. 40 CFR 418.11 - Specialized definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... including precipitation runoff which, during manufacturing or processing, comes into incidental contact with...) Precipitation runoff; (2) accidental spills; (3) accidental leaks caused by the failure of process equipment and... shall mean the maximum 24-hour precipitation event with a probable recurrence interval of once in 10...

  15. 50 CFR 648.40 - Prohibition on possession.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Atlantic Salmon § 648.40 Prohibition on possession. (a) Incidental catch. All Atlantic salmon caught... maximum probability of survival. (b) Presumption. The possession of Atlantic salmon is prima facie evidence that such Atlantic salmon were taken in violation of this regulation. Evidence that such fish were...

  16. 50 CFR 648.40 - Prohibition on possession.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Atlantic Salmon § 648.40 Prohibition on possession. (a) Incidental catch. All Atlantic salmon caught... maximum probability of survival. (b) Presumption. The possession of Atlantic salmon is prima facie evidence that such Atlantic salmon were taken in violation of this regulation. Evidence that such fish were...

  17. Gender recognition from vocal source

    NASA Astrophysics Data System (ADS)

    Sorokin, V. N.; Makarov, I. S.

    2008-07-01

    The efficiency of automatic recognition of male and female voices is studied, based on solving the inverse problem for the glottis-area dynamics and for the waveform of the glottal airflow volume velocity pulse. The inverse problem is regularized through analytical models of the voice excitation pulse and of the glottis-area dynamics, together with a model of one-dimensional glottal airflow. Parameters of these models and spectral parameters of the volume velocity pulse are considered. The most promising parameters are found to be the instant of maximum glottis area, the maximum derivative of the area, the slope of the spectrum of the glottal airflow volume velocity pulse, the amplitude ratios of the harmonics of this spectrum, and the pitch. In the plane of the first two principal components of these parameters, the classification error is almost halved relative to using pitch alone. The male voice recognition probability is found to be 94.7%, and the female voice recognition probability is 95.9%.
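
    The two-class setup described above can be sketched as a standard pipeline: standardize the glottal-source features, project onto the first two principal components, and fit a linear classifier. The feature set and mocked data below are illustrative stand-ins for the paper's parameters.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
# Mock features: [pitch_Hz, instant_of_max_area, max_area_derivative,
# spectral_slope, harmonic_amplitude_ratio] (all invented values)
male = rng.normal([120, 4.0, 2.0, -12, 0.8], [20, 0.5, 0.4, 2, 0.1], (200, 5))
female = rng.normal([210, 3.2, 2.6, -9, 1.1], [25, 0.5, 0.4, 2, 0.1], (200, 5))
X = np.vstack([male, female])
y = np.repeat([0, 1], 200)   # 0 = male, 1 = female

clf = make_pipeline(StandardScaler(), PCA(n_components=2), LogisticRegression())
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.3f}")
```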

  18. DNA damage during glycation of lysine by methylglyoxal: assessment of vitamins in preventing damage.

    PubMed

    Suji, G; Sivakami, S

    2007-11-01

    Amino acids react with methylglyoxal to form advanced glycation end products, a reaction known to produce free radicals. In this study, cleavage of plasmid DNA was induced by the glycation of lysine with methylglyoxal in the presence of iron(III). This system was found to produce superoxide as well as hydroxyl radicals. The abilities of various vitamins to prevent damage to plasmid DNA were evaluated. Pyridoxal-5-phosphate showed maximum protection, while pyridoxamine showed no protection. The protective abilities could be directly correlated with inhibition of hydroxyl and superoxide radical production. Pyridoxal-5-phosphate exhibited low radical-scavenging ability as evaluated by its Trolox equivalent antioxidant capacity (TEAC), but showed maximum protection, probably by interfering with free radical production. Pyridoxamine did not inhibit free radical production. Thiamine and thiamine pyrophosphate both showed protective effects, albeit to different extents. Tetrahydrofolic acid showed better antioxidant activity than folic acid but was found to damage DNA by itself, probably through superoxide generation.

  19. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
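
    A hedged sketch of the PME construction itself: given the first four moments of a state function, the maximum-entropy density has the exponential-family form exp(lam1*x + lam2*x^2 + lam3*x^3 + lam4*x^4) up to normalization, and the multipliers can be found by minimizing the convex dual on a grid; the failure probability is then an integral over the failure region. The moments below are illustrative, not taken from the paper's components.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
phi = np.vstack([x, x**2, x**3, x**4])  # moment basis functions
mu = np.array([0.5, 1.3, 0.9, 4.0])     # given first four moments (illustrative)

def dual(lam):
    # Convex dual: log-partition minus lam.mu; its minimizer matches the moments
    return logsumexp(lam @ phi) + np.log(dx) - lam @ mu

res = minimize(dual, x0=np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 20000, "fatol": 1e-12, "xatol": 1e-10})
p = np.exp(res.x @ phi)
p /= p.sum() * dx                       # normalized maximum-entropy PDF
print("failure probability P(x < 0):", float((p[x < 0].sum() * dx)))
```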

  20. Geochemical behavior of heavy metals in a Zn-Pb-Cu mining area in the State of Mexico (central Mexico).

    PubMed

    Lizárraga-Mendiola, L; González-Sandoval, M R; Durán-Domínguez, M C; Márquez-Herrera, C

    2009-08-01

    The geochemical behavior of zinc, lead, and copper from sulfidic tailings at a mine site with the potential to generate acid drainage (pyrite, 55%; sphalerite, 2%) is reported in this paper. The mining area is divided into two zones based on the topographic location of the sampling points with respect to the tailings pile: (a) an outer zone, outside the probable influence of acid mine drainage (AMD) pollution, and (b) an inner zone, probably influenced by AMD pollution. Maximum total ion concentrations (mg/L) measured in surface waters were, in the outer zone: As (0.2), Cd (0.9), Fe (19), Mn (39), Pb (5.02), SO4(2-) (4650), Zn (107.67); and in the inner zone: As (0.1), Cd (0.2), Fe (88), Mn (13), Pb (6), SO4(2-) (4880), Zn (46). The presence of these ions, which exceed the maximum permissible limits for human consumption, could be associated with the tailings mineralogy and the acid leachates generated in the tailings pile.
