Statistical mechanics of maximal independent sets
NASA Astrophysics Data System (ADS)
Dall'Asta, Luca; Pin, Paolo; Ramezanpour, Abolfazl
2009-12-01
The graph-theoretic concept of a maximal independent set arises in several practical problems in computer science as well as in game theory. A maximal independent set is defined by the set of occupied nodes that satisfy some packing and covering constraints. It is known that finding maximal independent sets of minimum or maximum density is a hard optimization problem. In this paper, we use the cavity method of statistical physics and Monte Carlo simulations to study the corresponding constraint satisfaction problem on random graphs. We obtain the entropy of maximal independent sets within the replica-symmetric and one-step replica-symmetry-breaking frameworks, shedding light on the metric structure of the landscape of solutions and suggesting a class of possible algorithms. This is of particular relevance for the application to the study of strategic interactions in social and economic networks, where maximal independent sets correspond to pure Nash equilibria of a graphical game of public goods allocation.
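The two constraints that define a maximal independent set, packing (no two occupied nodes are adjacent) and covering (every empty node has an occupied neighbor), can be made concrete with a small greedy sketch. The function name and adjacency-dict representation below are our own illustration, not code from the paper:

```python
import random

def maximal_independent_set(adj, seed=0):
    """Greedily grow a maximal independent set in a random node order.

    adj: dict mapping each node to the set of its neighbors."""
    rng = random.Random(seed)
    nodes = list(adj)
    rng.shuffle(nodes)
    S, blocked = set(), set()
    for v in nodes:
        if v not in blocked:
            S.add(v)            # occupy v (packing is preserved)
            blocked.add(v)
            blocked |= adj[v]   # neighbors of v can no longer enter S
    return S

# 5-cycle: every maximal independent set has exactly 2 nodes
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
S = maximal_independent_set(adj)
```

Any set produced this way satisfies both constraints by construction; which set you get depends on the visiting order, and the number of such outcomes is the degeneracy whose entropy the paper computes.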
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide key solutions to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by the graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of the MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating-set selection strategies. We showed that, despite its small set size, the MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrated that our method effectively satisfies four essential requirements of dominating sets for their practical applicability to large-scale real-world systems: (1) small set size, (2) minimal network information required for the construction scheme, (3) fast and easy computational implementation, and (4) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
Efficient parallel algorithms for (Δ+1)-coloring and maximal independent set problems
Goldberg, A.V.; Plotkin, S.A.
1987-01-01
An efficient technique for breaking symmetry in parallel is described. The technique works especially well on rooted trees and on graphs with a small maximum degree. In particular, a maximal independent set can be found on a constant-degree graph in O(lg* n) time on an EREW PRAM using a linear number of processors. It is shown how to apply this technique to construct more efficient parallel algorithms for several problems, including coloring of planar graphs and (Δ + 1)-coloring of constant-degree graphs. Lower bounds for two related problems are proved.
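For contrast with the deterministic O(lg* n) technique described above, the classic randomized symmetry-breaking scheme (Luby-style) can be simulated sequentially; this sketch and its names are our own, not the paper's algorithm:

```python
import random

def luby_mis(adj, seed=0):
    """Luby-style randomized MIS: each round, every live node draws a
    random priority and joins the set iff it beats all live neighbors;
    joiners and their neighbors then leave the graph."""
    rng = random.Random(seed)
    live, S = set(adj), set()
    while live:
        pri = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(pri[v] < pri[u] for u in adj[v] & live)}
        S |= winners                # winners are pairwise non-adjacent
        dead = set(winners)
        for v in winners:
            dead |= adj[v]
        live -= dead                # the minimum-priority live node
    return S                        # always wins, so the loop terminates
```

Each round removes a constant expected fraction of edges, which is how the O(log n)-round bound for this scheme is usually argued.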
NASA Astrophysics Data System (ADS)
Assad, S. M.; Thearle, O.; Lam, P. K.
2016-07-01
The rates at which a user can generate device-independent quantum random numbers from a Bell-type experiment depend on the measurements that the user performs. By numerically optimizing over these measurements, we present lower bounds on the randomness generation rates for a family of two-qubit states composed from a mixture of partially entangled states and the completely mixed state. We also report on the randomness generation rates from a tomographic measurement. Interestingly in this case, the randomness generation rates are not monotonic functions of entanglement.
Maximal non-classicality in multi-setting Bell inequalities
NASA Astrophysics Data System (ADS)
Tavakoli, Armin; Zohren, Stefan; Pawlowski, Marcin
2016-04-01
The discrepancy between maximally entangled states and maximally non-classical quantum correlations is well known but still not well understood. We aim to investigate the relation between quantum correlations and entanglement in a family of Bell inequalities with N settings and d outcomes. Using analytical as well as numerical techniques, we derive both maximal quantum violations and violations obtained from maximally entangled states. Furthermore, we study the most non-classical quantum states in terms of their entanglement entropy for large values of d and many measurement settings. Interestingly, we find that the entanglement entropy behaves very differently depending on whether N = 2 or N > 2: when N = 2 the entanglement entropy is a monotone function of d and the most non-classical state is far from maximally entangled, whereas when N > 2 the entanglement entropy is a non-monotone function of d and converges to that of the maximally entangled state in the limit of large d.
The maximally entangled set of 4-qubit states
NASA Astrophysics Data System (ADS)
Spee, C.; de Vicente, J. I.; Kraus, B.
2016-05-01
Entanglement is a resource to overcome the natural restriction of operations used for state manipulation to Local Operations assisted by Classical Communication (LOCC). Hence, a bipartite maximally entangled state is a state which can be transformed deterministically into any other state via LOCC. In the multipartite setting no such state exists; there, rather a whole set, the Maximally Entangled Set of states (MES), which we recently introduced, is required. This set has on the one hand the property that any state outside of the set can be obtained via LOCC from one of the states within the set and, on the other hand, that no state in the set can be obtained from any other state via LOCC. Recently, we studied LOCC transformations among pure multipartite states and derived the MES for three- and generic four-qubit states. Here, we consider the non-generic four-qubit states and analyze their properties regarding local transformations. As even the most coarse-grained classification of four-qubit states, due to Stochastic LOCC (SLOCC), is much richer than in the case of three qubits, the investigation of possible LOCC transformations is correspondingly more difficult. We prove that most SLOCC classes show a similar behavior to the generic states; however, we also identify three classes with very distinct properties. The first consists of the GHZ and W classes, where any state can be transformed non-trivially into some other state; in particular, there exists no isolation. On the other hand, there also exist classes where all states are isolated. Last but not least, we identify an additional class of states whose transformation properties differ drastically from all the other classes. Although the possibility of transforming states into local-unitary-inequivalent states by LOCC turns out to be very rare, we identify those states (with the exception of the latter class) which are in the MES and those which can be obtained (transformed) non-trivially from (into) other states.
Finding the maximal membership in a fuzzy set of an element from another fuzzy set
NASA Astrophysics Data System (ADS)
Yager, Ronald R.
2010-11-01
The problem of finding the maximal membership grade in a fuzzy set of an element from another fuzzy set is an important class of optimisation problems, manifested in the real world by situations in which we try to find the optimal financial satisfaction we can get from a socially responsible investment. Here, we provide a solution to this problem. We then look at the proposed solution for fuzzy sets with various types of membership grades: ordinal, interval-valued and intuitionistic.
Maximum independent set on diluted triangular lattices.
Fay, C W; Liu, J W; Duxbury, P M
2006-05-01
Core percolation and maximum independent set on random graphs have recently been characterized using the methods of statistical physics. Here we present a statistical physics study of these problems on bond-diluted triangular lattices. Core percolation critical behavior is found to be consistent with the standard percolation values, though there are strong finite-size effects. A transfer matrix method is developed and applied to find accurate values of the density and degeneracy of the maximum independent set on lattices of limited width but large length. An extrapolation of these results to the infinite-lattice limit yields high-precision results, which are tabulated. These results are compared to results found using both vertex-based and edge-based local probability recursion algorithms, which have proven useful in the analysis of hard computational problems, such as the satisfiability problem. PMID:16803003
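The transfer-matrix idea can be shown in its simplest one-dimensional form, counting all independent sets on a path (a width-1 strip); the lattice-strip computation in the paper uses the same contraction with a larger state space per column. This sketch is our own illustration:

```python
def count_independent_sets_path(n):
    """Count independent sets (including the empty one) on a path of n
    vertices by contracting a 2-state transfer matrix along the chain.
    State = occupancy of the last vertex seen (0 empty, 1 occupied)."""
    # T[prev][nxt] = 1 if the adjacent pair (prev, nxt) is allowed
    T = [[1, 1],   # after an empty vertex: next may be empty or occupied
         [1, 0]]   # after an occupied vertex: next must be empty
    counts = [1, 1]            # one configuration each for the first vertex
    for _ in range(n - 1):
        counts = [counts[0] * T[0][0] + counts[1] * T[1][0],
                  counts[0] * T[0][1] + counts[1] * T[1][1]]
    return sum(counts)         # Fibonacci numbers: 2, 3, 5, 8, 13, ...
```

Restricting the matrix entries to configurations that are also maximal, and tracking weights, gives the density and degeneracy quantities the abstract refers to.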
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarnik, D; Shah, D; Shin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, i.e., the partition function of a hard-core model on a graph. The problem is in general computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As a first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n² ε⁻⁴ log³(n ε⁻¹)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the recently developed loop calculus approach of Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^{-γ}) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function; this is quite surprising, as previous physics-based predictions expected an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow the error in the Bethe approximation to be estimated using novel combinatorial techniques.
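A minimal sketch of plain (synchronous) BP for the hard-core model at activity 1, whose fixed points are the Bethe fixed points discussed above. This is ordinary BP, not the convergent time-varying variant the authors propose; the names and update order are our own:

```python
def bp_marginals(adj, iters=100):
    """Synchronous BP for the hard-core model at activity 1.
    msg[(u, v)] = normalized cavity distribution (P_empty, P_occupied)
    of node u in the graph with v removed; exact on trees."""
    msg = {(u, v): (0.5, 0.5) for u in adj for v in adj[u]}
    for _ in range(iters):
        new = {}
        for (u, v) in msg:
            p_empty, p_occ = 1.0, 1.0
            for w in adj[u]:
                if w != v:
                    e, o = msg[(w, u)]
                    p_empty *= e + o     # u empty: neighbor w unconstrained
                    p_occ *= e           # u occupied: forces w empty
            s = p_empty + p_occ
            new[(u, v)] = (p_empty / s, p_occ / s)
        msg = new
    marg = {}
    for v in adj:
        e, o = 1.0, 1.0
        for u in adj[v]:
            me, mo = msg[(u, v)]
            e *= me + mo
            o *= me
        marg[v] = o / (e + o)            # P(v occupied)
    return marg

# star K_{1,3} has 9 independent sets; the center is occupied in 1 of them
adj = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
marg = bp_marginals(adj)
```

On trees these marginals are exact; on loopy graphs they are the Bethe approximation whose error the paper bounds.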
An inability to set independent attentional control settings by hemifield.
Becker, Mark W; Ravizza, Susan M; Peltier, Chad
2015-11-01
Recent evidence suggests that people can simultaneously activate attentional control settings for two distinct colors. However, it is unclear whether both attentional control settings must operate globally across the visual field or whether each can be constrained to a particular spatial location. Using two different paradigms, we investigated participants' ability to apply independent color attentional control settings to distinct regions of space. In both experiments, participants were told to identify red letters in one hemifield and green letters in the opposite hemifield. Additionally, some trials used a "relevant distractor", a letter that matched the opposite side's target color. In Experiment 1, eight letters appeared (four per hemifield) simultaneously for a brief amount of time and then were masked. Relevant distractors increased the error rate and resulted in a greater number of distractor intrusions than irrelevant distractors. Similar results were observed in Experiment 2, in which red and green targets were presented in two rapid serial visual presentation streams. Relevant distractors were found to produce an attentional blink similar in magnitude to an actual target. The results of both experiments suggest that letters matching either attentional control setting were selected by attention and were processed as if they were targets, providing strong evidence that both attentional control settings were applied globally, rather than being constrained to a particular location. PMID:26220268
Maximizing Social Model Principles in Residential Recovery Settings
Polcin, Douglas; Mericle, Amy; Howell, Jason; Sheridan, Dave; Christensen, Jeff
2014-01-01
Peer support is integral to a variety of approaches to alcohol and drug problems. However, there is limited information about the best ways to facilitate it. The “social model” approach developed in California offers useful suggestions for facilitating peer support in residential recovery settings. Key principles include using 12-step or other mutual-help group strategies to create and facilitate a recovery environment, involving program participants in decision making and facility governance, using personal recovery experience as a way to help others, and emphasizing recovery as an interaction between the individual and their environment. Although limited in number, studies have shown favorable outcomes for social model programs. Knowledge about social model recovery and how to use it to facilitate peer support in residential recovery homes varies among providers. This article presents specific, practical suggestions for enhancing social model principles in ways that facilitate peer support in a range of recovery residences. PMID:25364996
Speeding up Growth: Selection for Mass-Independent Maximal Metabolic Rate Alters Growth Rates.
Downs, Cynthia J; Brown, Jessi L; Wone, Bernard W M; Donovan, Edward R; Hayes, Jack P
2016-03-01
Investigations into relationships between life-history traits, such as growth rate and energy metabolism, typically focus on basal metabolic rate (BMR). In contrast, investigators rarely examine maximal metabolic rate (MMR) as a relevant metric of energy metabolism, even though it indicates the maximal capacity to metabolize energy aerobically, and hence it might also be important in trade-offs. We studied the relationship between energy metabolism and growth in mice (Mus musculus domesticus Linnaeus) selected for high mass-independent metabolic rates. Selection for high mass-independent MMR increased maximal growth rate, increased body mass at 20 weeks of age, and generally altered growth patterns in both male and female mice. In contrast, there was little evidence that the correlated response in mass-adjusted BMR altered growth patterns. The relationship between mass-adjusted MMR and growth rate indicates that MMR is an important mediator of life histories. Studies investigating associations between energy metabolism and life histories should consider MMR because it is potentially as important in understanding life history as BMR. PMID:26913943
NASA Astrophysics Data System (ADS)
Mahjoub, Dhia; Matula, David W.
The domatic partition problem seeks to maximize the partitioning of the nodes of the network into disjoint dominating sets. These sets represent a series of virtual backbones for wireless sensor networks to be activated successively, resulting in more balanced energy consumption and increased network robustness. In this study, we address the domatic partition problem in random geometric graphs by investigating several vertex coloring algorithms, both topology- and geometry-aware, color-adaptive and randomized. Graph coloring produces color classes, with each class representing an independent set of vertices. The disjoint maximal independent sets constitute a collection of disjoint dominating sets that offer good network coverage. Furthermore, if we relax the full domination constraint, then we obtain a partitioning of the network into disjoint dominating and nearly-dominating sets of nearly equal size, providing better redundancy and a near-perfect node coverage yield. In addition, these independent sets can be the basis for clustering a very large sensor network with minimal overlap between the clusters, leading to increased efficiency in routing, wireless transmission scheduling and data aggregation. We also observe that in dense random deployments, certain coloring algorithms yield a packing of the nodes into independent sets, each of which is relatively close to the perfect placement in the triangular lattice.
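The coloring-to-partition idea can be sketched with a simple sequential greedy coloring, a stand-in for the topology- and geometry-aware algorithms the study compares; every color class is by construction an independent set, and the large early classes are the candidates for (near-)dominating sets. Names below are our own:

```python
def greedy_color_classes(adj):
    """Greedy coloring in order of decreasing degree (Welsh-Powell
    style). Returns the color classes; each class is an independent
    set, and the early classes tend to be the largest."""
    color = {}
    for v in sorted(adj, key=lambda v: len(adj[v]), reverse=True):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:        # smallest color unused by neighbors
            c += 1
        color[v] = c
    classes = {}
    for v, c in color.items():
        classes.setdefault(c, set()).add(v)
    return [classes[c] for c in sorted(classes)]
```

Whether a given class actually dominates the graph still has to be checked separately, which is where the relaxation to nearly-dominating sets mentioned above comes in.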
On small set of one-way LOCC indistinguishability of maximally entangled states
NASA Astrophysics Data System (ADS)
Wang, Yan-Ling; Li, Mao-Sheng; Zheng, Zhu-Jun; Fei, Shao-Ming
2016-04-01
In this paper, we study the one-way local operations and classical communication (LOCC) distinguishability problem. In C^d ⊗ C^d with d ≥ 4, we construct a set of 3⌈√d⌉ − 1 one-way-LOCC-indistinguishable maximally entangled states which are generalized Bell states. Moreover, we show that there are four maximally entangled states which cannot be perfectly distinguished by one-way LOCC measurements for any dimension d ≥ 4.
Agreement Measure Comparisons between Two Independent Sets of Raters.
ERIC Educational Resources Information Center
Berry, Kenneth J.; Mielke, Paul W., Jr.
1997-01-01
Describes a FORTRAN software program that calculates the probability of an observed difference between agreement measures obtained from two independent sets of raters. An example illustrates the use of the DIFFER program in evaluating undergraduate essays. (Author/SLD)
ERIC Educational Resources Information Center
Green, Robert A.
The doctoral thesis, three-fourths of which consists of appendixes, describes the development and implementation of procedures to maximize the individualized instruction time of speech, hearing, and visually handicapped students in a public school itinerant special education setting in Pennsylvania. A brief review of the Education for All…
Influence maximization in social networks under an independent cascade-based model
NASA Astrophysics Data System (ADS)
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. As more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. The corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, indicating better performance compared with the baseline methods.
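The diffusion process underlying this family of models is the standard independent cascade; a single Monte Carlo run can be sketched as below. The opinion-balancing layer of IMIC-OC is not reproduced here, and the function and parameter names are our own:

```python
import random

def independent_cascade(adj, seeds, p, seed=0):
    """One Monte Carlo run of the independent cascade model: every
    newly activated node gets exactly one chance to activate each
    currently inactive neighbor, succeeding with probability p."""
    rng = random.Random(seed)
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt          # only fresh activations try to spread
    return active
```

Averaging `len(active)` over many runs estimates the expected influence of a seed set, which is the quantity influence-maximization methods compare.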
Optimal quench for distance-independent entanglement and maximal block entropy
NASA Astrophysics Data System (ADS)
Alkurtass, Bedoor; Banchi, Leonardo; Bose, Sougato
2014-10-01
We optimize a quantum walk of multiple fermions following a quench in a spin chain to generate near-ideal resources for quantum networking. We first prove a useful theorem mapping the correlations evolved from specific quenches to the apparently unrelated problem of quantum state transfer between distinct spins. This mapping is then exploited to optimize the dynamics and produce large amounts of entanglement distributed in very special ways. Two applications are considered: the simultaneous generation of many Bell states between pairs of distant spins (maximal block entropy) and high entanglement between the ends of an arbitrarily long chain (distance-independent entanglement). Thanks to the generality of the result, we study its implementation in different experimental setups using present technology: nuclear magnetic resonance, ion traps, and ultracold atoms in optical lattices.
State-independent contextuality sets for a qutrit
NASA Astrophysics Data System (ADS)
Xu, Zhen-Peng; Chen, Jing-Ling; Su, Hong-Yi
2015-09-01
We present a generalized set of complex rays for a qutrit in terms of the parameter q = e^{i2π/k}, a k-th root of unity. Remarkably, when k = 2, 3, the set reduces to two well-known state-independent contextuality (SIC) sets: the Yu-Oh set and the Bengtsson-Blanchfield-Cabello set. Based on the Ramanathan-Horodecki criterion and the violation of a noncontextuality inequality, we have proven that the sets with k = 3m and k = 4 are SIC sets, while the set with k = 5 is not. Our generalized set of rays will theoretically enrich the study of SIC proofs, and stimulate novel applications to quantum information processing.
Existence of independent [1, 2]-sets in caterpillars
NASA Astrophysics Data System (ADS)
Santoso, Eko Budi; Marcelo, Reginaldo M.
2016-02-01
Given a graph G, a subset S ⊆ V(G) is an independent [1, 2]-set if no two vertices in S are adjacent and for every vertex ν ∈ V(G)∖S, 1 ≤ |N(ν) ∩ S| ≤ 2; that is, every vertex ν ∈ V(G)∖S is adjacent to at least one but not more than two vertices in S. In this paper, we discuss the existence of independent [1, 2]-sets in a family of trees called caterpillars.
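The definition translates directly into a checker; names here are our own. For the path 0-1-2-3, which is a caterpillar, {0, 3} is an independent [1, 2]-set:

```python
def is_independent_12_set(adj, S):
    """Check both conditions: S is independent, and every vertex
    outside S has at least one and at most two neighbors in S."""
    S = set(S)
    independent = all(adj[u].isdisjoint(S) for u in S)
    dominated = all(1 <= len(adj[v] & S) <= 2
                    for v in adj if v not in S)
    return independent and dominated

# path 0-1-2-3 (a caterpillar)
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
```

The existence question the paper studies is whether any S passing this check exists for a given caterpillar, which is non-trivial precisely because the two conditions pull in opposite directions.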
Shortest Paths between Shortest Paths and Independent Sets
NASA Astrophysics Data System (ADS)
Kamiński, Marcin; Medvedev, Paul; Milanič, Martin
We study problems of reconfiguration of shortest paths in graphs. We prove that the shortest reconfiguration sequence can be exponential in the size of the graph and that it is NP-hard to compute the shortest reconfiguration sequence even when we know that the sequence has polynomial length. Moreover, we also study reconfiguration of independent sets in three different models and analyze relationships between these models, observing that shortest path reconfiguration is a special case of independent set reconfiguration in perfect graphs, under any of the three models. Finally, we give polynomial results for restricted classes of graphs (even-hole-free and P4-free graphs).
Balance between noise and information flow maximizes set complexity of network dynamics.
Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti
2013-01-01
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with the use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
Sartor, Francesco; Vernillo, Gianluca; de Morree, Helma M; Bonomi, Alberto G; La Torre, Antonio; Kubis, Hans-Peter; Veicsteinas, Arsenio
2013-09-01
Assessment of the functional capacity of the cardiovascular system is essential in sports medicine. For athletes, the maximal oxygen uptake (VO2max) provides valuable information about their aerobic power. In the clinical setting, VO2max provides important diagnostic and prognostic information in several clinical populations, such as patients with coronary artery disease or heart failure. Likewise, VO2max assessment can be very important to evaluate fitness in asymptomatic adults. Although direct determination of VO2max is the most accurate method, it requires a maximal level of exertion, which brings a higher risk of adverse events in individuals with an intermediate to high risk of cardiovascular problems. Estimation of VO2max during submaximal exercise testing can offer a precious alternative. Over the past decades, many protocols have been developed for this purpose. The present review gives an overview of these submaximal protocols and aims to facilitate appropriate test selection in sports, clinical, and home settings. Several factors must be considered when selecting a protocol: (i) the population being tested and its specific needs in terms of safety, supervision, and accuracy and repeatability of the VO2max estimation; (ii) the parameters upon which the prediction is based (e.g. heart rate, power output, rating of perceived exertion [RPE]), as well as the need for additional clinically relevant parameters (e.g. blood pressure, ECG); (iii) the appropriate test modality, which should meet the above-mentioned requirements, be in line with the functional mobility of the target population, and depend on the available equipment. In the sports setting, high repeatability is crucial to track training-induced seasonal changes. In the clinical setting, special attention must be paid to the test modality, because multiple physiological parameters often need to be measured during test execution. When estimating VO2max, one has
Carlson, Christopher S.; Eberle, Michael A.; Rieder, Mark J.; Yi, Qian; Kruglyak, Leonid; Nickerson, Deborah A.
2004-01-01
Common genetic polymorphisms may explain a portion of the heritable risk for common diseases. Within candidate genes, the number of common polymorphisms is finite, but direct assay of all existing common polymorphisms is inefficient, because genotypes at many of these sites are strongly correlated. Thus, it is not necessary to assay all common variants if the patterns of allelic association between common variants can be described. We have developed an algorithm to select the maximally informative set of common single-nucleotide polymorphisms (tagSNPs) to assay in candidate-gene association studies, such that all known common polymorphisms either are directly assayed or exceed a threshold level of association with a tagSNP. The algorithm is based on the r2 linkage disequilibrium (LD) statistic, because r2 is directly related to statistical power to detect disease associations with unassayed sites. We show that, at a relatively stringent r2 threshold (r2>0.8), the LD-selected tagSNPs resolve >80% of all haplotypes across a set of 100 candidate genes, regardless of recombination, and tag specific haplotypes and clades of related haplotypes in nonrecombinant regions. Thus, if the patterns of common variation are described for a candidate gene, analysis of the tagSNP set can comprehensively interrogate for main effects from common functional variation. We demonstrate that, although common variation tends to be shared between populations, tagSNPs should be selected separately for populations with different ancestries. PMID:14681826
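The selection step can be sketched as a greedy binning over the pairwise r2 matrix: repeatedly pick the SNP that tags the most still-uncovered SNPs at the chosen threshold, then drop its whole bin. This is a simplified stand-in for the authors' algorithm, with our own names:

```python
def select_tagsnps(r2, snps, threshold=0.8):
    """Greedy LD binning: repeatedly pick the SNP that tags (has
    r^2 >= threshold with) the most still-uncovered SNPs, then
    remove its whole bin. r2[a][b] is the pairwise LD statistic."""
    uncovered, tags = set(snps), []
    while uncovered:
        best = max(sorted(uncovered),          # sorted() for determinism
                   key=lambda s: sum(t == s or r2[s][t] >= threshold
                                     for t in uncovered))
        tags.append(best)
        uncovered -= {t for t in uncovered
                      if t == best or r2[best][t] >= threshold}
    return tags
```

Every non-tag SNP is then guaranteed to exceed the r2 threshold with at least one selected tagSNP, which is the coverage property the abstract describes.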
Beyond Maximum Independent Set: An Extended Model for Point-Feature Label Placement
NASA Astrophysics Data System (ADS)
Haunert, Jan-Henrik; Wolff, Alexander
2016-06-01
Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in the map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label overlaps) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, interdependences between labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate for randomly generated instances.
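The base selection problem, at most one label per feature, no overlapping pair, maximum total weight, can be made concrete with a tiny exhaustive sketch. The paper instead formulates and solves an extended model as an ILP; all names below are our own:

```python
import itertools

def best_labeling(candidates, conflicts):
    """Exhaustive search for the basic weighted labeling model:
    pick at most one candidate per feature, with no conflicting
    (overlapping) pair, maximizing total weight.
    candidates: {feature: [(label, weight), ...]}
    conflicts:  set of frozensets {label, label} that overlap."""
    options = [opts + [None] for opts in candidates.values()]
    best, best_w = [], 0.0
    for choice in itertools.product(*options):   # None = feature unlabeled
        chosen = [c for c in choice if c is not None]
        labels = [lab for lab, _ in chosen]
        if any(frozenset(pair) in conflicts
               for pair in itertools.combinations(labels, 2)):
            continue
        w = sum(wt for _, wt in chosen)
        if w > best_w:
            best, best_w = labels, w
    return best, best_w
```

The exponential loop over label choices is exactly what the ILP formulation avoids in practice, while the conflict constraint is what makes the chosen labels an independent set in the label-conflict graph.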
Cometti, Carole; Deley, Gaelle; Babault, Nicolas
2011-01-01
The present study investigated the effects of between-set interventions on neuromuscular function of the knee extensors during six sets of 10 isokinetic (120°·s⁻¹) maximal concentric contractions separated by three minutes. Twelve healthy men (age: 23.9 ± 2.4 yrs) were tested for four different between-set recovery conditions applied during two minutes: passive recovery, active recovery (cycling), electromyostimulation and stretching, in a randomized, crossover design. Before, during and at the end of the isokinetic session, torque and thigh muscle electromyographic activity were measured during maximal voluntary contractions and electrically evoked doublets. Activation level was calculated using the twitch interpolation technique. While quadriceps electromyographic activity and activation level were significantly decreased at the end of the isokinetic session (-5.5 ± 14.2% and -2.7 ± 4.8%; p < 0.05), significant decreases in maximal voluntary contractions and doublets were observed after the third set (respectively -0.8 ± 12.1% and -5.9 ± 9.9%; p < 0.05). Whatever the recovery modality applied, torque was back to initial values after each recovery period. The present results showed that fatigue appeared progressively during the isokinetic session, with peripheral alterations occurring first, followed by central ones. Recovery interventions between sets did not modify the fatigue time course as compared with passive recovery. It appears that the interval between sets (3 min) was long enough to provide recovery regardless of the interventions. Key points: (i) allowing three minutes of recovery between sets of 10 maximal concentric contractions would help the subjects to recover from the peripheral fatigue induced by each set and therefore to start each new set with a high intensity; (ii) during this type of session, with three minutes between sets, passive recovery is sufficient; there is no need to apply complicated recovery interventions. PMID:24149550
Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P
2015-01-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question. PMID:25604947
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Identifying them experimentally is an expensive and time-consuming process, and computational methods that allocate putative members to this family on the basis of sequence similarity are largely ineffective because of the low similarity among family members. Consequently, machine learning methods, which use underlying sequence- or structure-derived features as input, become a viable alternative for their prediction. Ideally, any machine-learning-based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse input instances drawn from different regions of the entire input space. Furthermore, prediction performance can be improved by balancing the training set, since imbalanced data sets tend to bias predictions toward the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability, without classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns, and built a diversified and balanced training set by selecting an equal number of patterns from each cluster. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest (which produced greater sensitivity) and a K-nearest-neighbour classifier (which produced greater specificity) to achieve enhanced predictive performance.
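The cluster-then-sample-equally step described in the abstract can be sketched as follows. This is a minimal illustration with a toy NumPy k-means; the function names, the deterministic center seeding, and the blob data are our own assumptions, not the paper's implementation:

```python
import numpy as np

def kmeans_labels(X, k, iters=50):
    """Tiny k-means (Lloyd's algorithm) returning a cluster label per row.
    Centers are seeded from evenly spaced rows -- adequate for this sketch."""
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

def balanced_diverse_indices(X, k, per_cluster, seed=0):
    """Pick an equal number of training patterns from each k-means cluster,
    giving a training set that covers diverse regions of the input space."""
    rng = np.random.default_rng(seed)
    labels = kmeans_labels(X, k)
    picks = []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        n = min(per_cluster, len(members))
        picks.extend(rng.choice(members, size=n, replace=False))
    return np.array(picks)

# Toy feature matrix: an imbalanced pair of well-separated blobs (30 vs. 10).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (30, 2)), rng.normal(5, 0.1, (10, 2))])
idx = balanced_diverse_indices(X, k=2, per_cluster=5)
```

Despite the 3:1 class imbalance in the raw data, the selected training set draws the same number of patterns from each cluster.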
Takahashi, Kei-ichiro; Takigawa, Ichigaku; Mamitsuka, Hiroshi
2013-01-01
Detecting biclusters from expression data is useful, since biclusters are sets of genes coexpressed under only a subset of the given experimental conditions. We present software called SiBIC that, from a given expression dataset, first exhaustively enumerates biclusters, then merges them into largely independent biclusters, and finally uses these to generate gene set networks, in which the gene set assigned to each node contains coexpressed genes. We evaluated each step of this procedure: 1) the biological and statistical significance of the generated biclusters, 2) the biological quality of the merged biclusters, and 3) the biological significance of the gene set networks. We emphasize that gene set networks, whose nodes are gene sets rather than genes, can be more compact than conventional gene networks and are therefore more comprehensible. SiBIC is available at http://utrecht.kuicr.kyoto-u.ac.jp:8080/miami/faces/index.jsp. PMID:24386124
Assessing material flows in urban systems: an approach to maximize the use of incomplete data sets.
Espinosa, G; Otterpohl, R
2014-01-01
Data scarcity and uncertainty are the main limiting factors for an integral evaluation of the urban water and wastewater management system (WWMS) in developing countries. The present research shows an approach that uses incomplete data sets to analyse the flows of water and nitrogen and to make an integral evaluation of the WWMS in a case-study city. By means of data validation and model adaptations, the use of literature values is kept to the minimum possible, and the current trends in water consumption and pollution in the city are identified. The material flows were calculated as central values with a defined confidence range and met the selected plausibility criteria. Thus, the first essential step needed to identify the challenges and opportunities of future improvement strategies for the city's WWMS was accomplished. PMID:25259505
Brodsky, Stanley J.; Wu, Xing-Gang; /SLAC /Chongqing U.
2012-02-16
A key problem in making precise perturbative QCD predictions is setting the proper renormalization scale of the running coupling. The extended renormalization group equations, which express the invariance of physical observables under both renormalization-scale and scheme-parameter transformations, provide a convenient way to estimate the scale and scheme dependence of a physical process. In this paper, we present a solution of the scale equation of the extended renormalization group equations at the four-loop level. Using the principle of maximum conformality (PMC)/Brodsky-Lepage-Mackenzie (BLM) scale-setting method, all non-conformal β_i terms in the perturbative expansion series can be summed into the running coupling, and the resulting scale-fixed predictions are independent of the renormalization scheme. Different schemes lead to different effective PMC/BLM scales, but the final results are scheme independent. Conversely, from the requirement of scheme independence one can not only obtain scheme-independent commensurate scale relations among different observables, but also determine the scale displacements among the PMC/BLM scales derived under different schemes. In principle, the PMC/BLM scales can be fixed order by order, and as a useful reference we present a systematic and scheme-independent procedure for setting PMC/BLM scales up to NNLO. An explicit application to the scale setting of R_{e+e-}(Q) up to four loops is presented. Using the world average α_s^{MS-bar}(M_Z) = 0.1184 ± 0.0007, we obtain the asymptotic scale for the 't Hooft scheme associated with the MS-bar scheme, Λ_{MS-bar}^{'tH} = 245^{+9}_{-10} MeV, and the asymptotic scale for the conventional MS-bar scheme, Λ_{MS-bar} = 213^{+19}_{-8} MeV.
Grignon, Jessica S; Ledikwe, Jenny H; Makati, Ditsapelo; Nyangah, Robert; Sento, Baraedi W; Semo, Bazghina-Werq
2014-01-01
To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs. PMID:24876798
Martín, René San; Appelbaum, Lawrence G.; Pearson, John M.; Huettel, Scott A.; Woldorff, Marty G.
2013-01-01
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals’ behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the fronto-central feedback-related negativity (FRN) and two P300 (P3) subcomponents: the fronto-central P3a and the parietal P3b. We found that, across participants, gain-maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best possible gains). Conversely, loss-minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain-maximization and loss-minimization are linked to individual differences in rapid neural responses to monetary outcomes. PMID:23595758
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. The iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match; the new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained, fitting each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load-iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example discussed in the paper illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set for a six-component balance.
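The augmentation trick described above can be illustrated with a small least-squares sketch. All numbers here are hypothetical (two simulated gage outputs, two load variables, and temperature as the extra independent variable); the point is only that appending the extra independent variable to the dependent side equalizes the variable counts without perturbing the gage fits:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
loads = rng.normal(size=(n, 2))   # two balance loads (hypothetical units)
temp = rng.normal(size=(n, 1))    # temperature: the extra independent variable
X = np.hstack([np.ones((n, 1)), loads, temp])  # regressors, with intercept

# Two simulated gage outputs depending on loads and (weakly) on temperature.
gages = loads @ np.array([[1.0, 0.2], [0.3, 0.9]]) \
        + 0.05 * temp + rng.normal(scale=0.01, size=(n, 2))

# Three independent variables but only two gage outputs: augment the
# dependent side with temperature itself so the variable counts match.
Y = np.hstack([gages, temp])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Because least squares solves each output column independently, the added column reproduces temperature exactly (coefficients 0, 0, 0, 1), and the gage-output fits are identical to those obtained without the augmentation.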
NASA Astrophysics Data System (ADS)
Kaloshin, Vadim; Saprykina, Maria
2012-11-01
The famous ergodic hypothesis suggests that for a typical Hamiltonian on a typical energy surface nearly all trajectories are dense. KAM theory disproves it. Ehrenfest (The Conceptual Foundations of the Statistical Approach in Mechanics. Ithaca, NY: Cornell University Press, 1959) and Birkhoff (Collected Math Papers. Vol 2, New York: Dover, pp 462-465, 1968) stated the quasi-ergodic hypothesis claiming that a typical Hamiltonian on a typical energy surface has a dense orbit. This question is wide open. Herman (Proceedings of the International Congress of Mathematicians, Vol II (Berlin, 1998). Doc Math 1998, Extra Vol II, Berlin: Int Math Union, pp 797-808, 1998) proposed to look for an example of a Hamiltonian near H_0(I) = ⟨I, I⟩/2 with a dense orbit on the unit energy surface. In this paper we construct a Hamiltonian H_0(I) + εH_1(θ, I, ε) which has an orbit dense in a set of maximal Hausdorff dimension equal to 5 on the unit energy surface.
Pal, Karoly F.; Vertesi, Tamas
2010-08-15
The I_3322 inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I_3322 inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing the quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I_3322 inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
Maximally nonlocal theories cannot be maximally random.
de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-24
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to studying how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on to show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but they also place fundamental restrictions on device-independent protocols based on the no-signaling principle. PMID:25955039
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information from the combination of data sets. The procedure is of significant value both to existing data sets and to those produced by future astrophysics missions consisting of two or more detectors: it allows instrument developers to optimize each detector's design parameters through simulation studies, in order to design and build complementary detectors that maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured by the reduction of the statistical errors (standard deviations) of the spectral information when the multiple data sets are used in concert, compared with the errors when the data sets are considered separately, as well as by the reduction of any biases, arising from poor statistics in one or more of the individual data sets, when the data sets are combined.
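As a toy illustration of the variance reduction gained by combining data sets (our own minimal example, not the paper's multi-instrument formalism): for a common parameter measured by two detectors with independent Gaussian errors, the ML estimate is the inverse-variance-weighted mean, whose error is smaller than that of either measurement alone.

```python
import numpy as np

def combine_ml(estimates, sigmas):
    """Maximum-likelihood combination of independent Gaussian measurements
    of the same quantity: inverse-variance weighted mean and its error."""
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    est = float(np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w))
    sigma = float(np.sqrt(1.0 / np.sum(w)))
    return est, sigma

# Two hypothetical spectral-index measurements from different instruments.
est, sigma = combine_ml([2.70, 2.90], [0.10, 0.20])
```

The combined standard deviation is below the smaller of the two individual errors, which is the quantitative benefit the abstract describes.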
Holocene sea level variations on the basis of integration of independent data sets
Sahagian, D.; Berkman, P. (Dept. of Geological Sciences and Byrd Polar Research Center)
1992-01-01
Variations in sea level have occurred throughout earth history at a wide variety of time scales. Sea level researchers have attacked the problem of measuring these changes through a variety of approaches, each relevant only to the time scale in question, and usually only to the specific locality from which a specific type of data is derived. There is a plethora of data types that can be, and have been, used locally for the measurement of Holocene sea level variations. The problem of merging different data sets to construct a global eustatic sea level curve for the Holocene, however, has not previously been adequately addressed. The authors direct their efforts to that end. Numerous studies have been published regarding Holocene sea level changes, involving exposed fossil reef elevations, elevations of tidal deltas, elevations and depths of intertidal peat deposits, caves, tree rings, ice cores, moraines, eolian dune ridges, marine-cut terrace elevations, marine carbonate species, tide gauges, and lake level variations. Each of these data sets is based on a particular set of assumptions and is valid for a specific set of environments. In order to obtain the most accurate possible sea level curve for the Holocene, these data sets must be merged so that local and other influences can be filtered out of each data set. Since each data set involves very different measurements, each is scaled to define the sensitivity of the proxy measurement parameter to sea level, including error bounds; this effectively determines the temporal and spatial resolution of each data set. The level of independence of the data sets is also quantified, in order to rule out the possibility of a common non-eustatic factor affecting more than one variety of data. The resulting Holocene sea level curve is considered to be independent of other factors affecting the proxy data and is taken to represent the relation between global ocean water and basin volumes.
Composite alignment media for the measurement of independent sets of NMR residual dipolar couplings.
Ruan, Ke; Tolman, Joel R
2005-11-01
The measurement of independent sets of NMR residual dipolar couplings (RDCs) in multiple alignment media can provide a detailed view of biomolecular structure and dynamics, yet remains experimentally challenging. It is demonstrated here that independent sets of RDCs can be measured for ubiquitin using just a single alignment medium composed of aligned bacteriophage Pf1 particles embedded in a strained polyacrylamide gel matrix. Using this composite medium, molecular alignment can be modulated by varying the angle between the directors of ordering for the Pf1 and strained gel matrix, or by varying the ionic strength or concentration of the Pf1 particles. This approach offers significant advantages in that greater experimental control can be exercised over the acquisition of multi-alignment RDC data while a homogeneous chemical environment is maintained across all of the measured RDC data. PMID:16248635
GreedyMAX-type Algorithms for the Maximum Independent Set Problem
NASA Astrophysics Data System (ADS)
Borowiecki, Piotr; Göring, Frank
The maximum independent set problem for a simple graph G = (V,E) is to find a largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard, and it is also hard to approximate. In this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of a graph G, while P(G) = max_{v ∈ V(G)} p(v) is called the potential of G. For any graph P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G; moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex gives closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms, having the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound 1/(Δ + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least Σ_{v ∈ V(G)} (p(v)+1)^{-1}, which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
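For reference, the classical GreedyMAX heuristic that the abstract generalizes can be sketched as follows (plain GreedyMAX only; the paper's potential-based variants are not reproduced here). GreedyMAX repeatedly deletes a vertex of maximum remaining degree; once no edges remain, the surviving vertices form an independent set. The final extension pass, which makes the set maximal, is our own safeguard rather than part of the textbook algorithm:

```python
def greedy_max_independent_set(adj):
    """GreedyMAX sketch: adj maps each vertex to an iterable of its
    neighbours in a simple undirected graph."""
    work = {v: set(ns) for v, ns in adj.items()}
    removed = []
    while any(work[v] for v in work):              # while edges remain
        v = max(work, key=lambda u: len(work[u]))  # max current degree
        removed.append(v)
        for u in work[v]:
            work[u].discard(v)
        del work[v]
    indep = set(work)                  # edgeless remainder is independent
    for v in reversed(removed):        # extend greedily to a maximal set
        if not (set(adj[v]) & indep):
            indep.add(v)
    return indep

# Example: the path 0-1-2-3-4.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
result = greedy_max_independent_set(path)
```

On the path above the heuristic recovers an independent set of the optimal size 3, consistent with the Caro-Wei lower bound Σ 1/(d(v)+1) = 2/2 + 3/3 = 2 that the paper's potential-based bound strengthens.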
ERIC Educational Resources Information Center
Trapp, Georgina; Giles-Corti, Billie; Martin, Karen; Timperio, Anna; Villanueva, Karen
2012-01-01
Background: Schools are an ideal setting in which to involve children in research. Yet for investigators wishing to work in these settings, there are few method papers providing insights into working efficiently in this setting. Objective: The aim of this paper is to describe the five strategies used to increase response rates, data quality and…
Scudese, Estevão; Willardson, Jeffrey M; Simão, Roberto; Senna, Gilmar; de Salles, Belmiro F; Miranda, Humberto
2015-11-01
The purpose of this study was to compare different rest intervals between sets on repetition consistency and ratings of perceived exertion (RPE) during consecutive bench press sets with an absolute 3RM (3 repetition maximum) load. Sixteen trained men (23.75 ± 4.21 years; 74.63 ± 5.36 kg; 175 ± 4.64 cm; bench press relative strength: 1.44 ± 0.19 kg/kg of body mass) attended 4 randomly ordered sessions during which 5 consecutive sets of the bench press were performed with an absolute 3RM load and 1, 2, 3, or 5 minutes of rest interval between sets. The results indicated that significantly greater bench press repetitions were completed with 2, 3, and 5 minutes vs. 1-minute rest between sets (p ≤ 0.05); no significant differences were noted between the 2, 3, and 5 minutes rest conditions. For the 1-minute rest condition, performance reductions (relative to the first set) were observed commencing with the second set; whereas for the other conditions (2, 3, and 5 minutes rest), performance reductions were not evident until the third and fourth sets. The RPE values before each of the successive sets were significantly greater, commencing with the second set for the 1-minute vs. the 3 and 5 minutes rest conditions. Significant increases were also evident in RPE immediately after each set between the 1 and 5 minutes rest conditions from the second through fifth sets. These findings indicate that when utilizing an absolute 3RM load for the bench press, practitioners may prescribe a time-efficient minimum of 2 minutes rest between sets without significant impairments in repetition performance. However, lower perceived exertion levels may necessitate prescription of a minimum of 3 minutes rest between sets. PMID:24045632
Cell Wall Invertase Promotes Fruit Set under Heat Stress by Suppressing ROS-Independent Cell Death.
Liu, Yong-Hua; Offler, Christina E; Ruan, Yong-Ling
2016-09-01
Reduced cell wall invertase (CWIN) activity has been shown to be associated with poor seed and fruit set under abiotic stress. Here, we examined whether genetically increasing native CWIN activity would sustain fruit set under long-term moderate heat stress (LMHS), an important factor limiting crop production, by using transgenic tomato (Solanum lycopersicum) with its CWIN inhibitor gene silenced, focusing on ovaries and fruits at 2 d before and after pollination, respectively. We found that the increase of CWIN activity suppressed LMHS-induced programmed cell death in fruits. Surprisingly, measurement of the contents of H2O2 and malondialdehyde and the activities of a cohort of antioxidant enzymes revealed that the CWIN-mediated inhibition of programmed cell death is exerted in a reactive oxygen species-independent manner. Elevation of CWIN activity sustained Suc import into fruits and increased the activities of hexokinase and fructokinase in the ovaries in response to LMHS. Compared to the wild type, the CWIN-elevated transgenic plants exhibited higher transcript levels of the heat shock protein genes Hsp90 and Hsp100 in ovaries and HspII17.6 in fruits under LMHS, which corresponded to a lower transcript level of the negative auxin responsive factor IAA9 but higher expression of the auxin biosynthesis gene ToFZY6 in fruits at 2 d after pollination. Collectively, the data indicate that CWIN enhances fruit set under LMHS through suppression of programmed cell death in a reactive oxygen species-independent manner that could involve enhanced Suc import and catabolism, HSP expression, and auxin response and biosynthesis. PMID:27462084
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Spee, C.; Kraus, B.
2016-01-01
Entanglement is the resource to overcome the restriction of operations to local operations assisted by classical communication (LOCC). The maximally entangled set (MES) of states is the minimal set of n -partite pure states with the property that any truly n -partite entangled pure state can be obtained deterministically via LOCC from some state in this set. Hence, this set contains the most useful states for applications. In this work, we characterize the MES for generic three-qutrit states. Moreover, we analyze which generic three-qutrit states are reachable (and convertible) under LOCC transformations. To this end, we study reachability via separable operations (SEP), a class of operations that is strictly larger than LOCC. Interestingly, we identify a family of pure states that can be obtained deterministically via SEP but not via LOCC. This gives an affirmative answer to the question of whether there is a difference between SEP and LOCC for transformations among pure states.
Lee, Wei-Ning; Qian, Zhen; Tosti, Christina L.; Brown, Truman R.; Metaxas, Dimitris N.; Konofagou, Elisa E.
2014-01-01
Myocardial Elastography (ME), a radio-frequency (RF) based speckle tracking technique, was employed in order to image the entire two-dimensional (2D) transmural deformation field in full view, and validated against tagged Magnetic Resonance Imaging (tMRI) in normal as well as reperfused (i.e., treated myocardial infarction (MI)) human left ventricles. RF ultrasound and tMRI frames were acquired at the papillary muscle level in 2D short-axis (SA) views at nominal frame rates of 136 (fps; real time) and 33 fps (electrocardiogram (ECG)-gated), respectively. In ultrasound, in-plane, 2D (lateral and axial) incremental displacements were iteratively estimated using one-dimensional (1D) cross-correlation and recorrelation techniques in a 2D search with a 1D matching kernel. In tMRI, cardiac motion was estimated by a template-matching algorithm on a 2D grid-shaped mesh. In both ME and tMRI, cumulative 2D displacements were estimated and then used to estimate 2D Lagrangian finite systolic strains, from which polar (i.e., radial and circumferential) strains, namely angle-independent measures, were further obtained through coordinate transformation. Principal strains, which are angle-independent and less centroid-dependent than polar strains, were also computed and imaged based on the 2D finite strains with a previously established strategy. Both qualitatively and quantitatively, angle-independent ME is shown to be capable of 1) estimating myocardial deformation in good agreement with tMRI estimates in a clinical setting and of 2) differentiating abnormal from normal myocardium in a full left-ventricular view. Finally, the principal strains are suggested to be an alternative diagnostic tool of detecting cardiac disease with the characteristics of their reduced centroid dependence. PMID:18952364
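The core of RF speckle tracking, estimating displacement by locating the cross-correlation peak between signals acquired at two times, can be sketched in one dimension. This is a toy analogue with synthetic noise signals and integer lags only, not the paper's 2D iterative recorrelation pipeline:

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer sample lag between two 1-D RF-like signals
    by locating the peak of their normalized cross-correlation."""
    ref = (ref - ref.mean()) / ref.std()
    moved = (moved - moved.mean()) / moved.std()
    corr = np.correlate(moved, ref, mode="full")
    # In 'full' mode, index len(ref)-1 corresponds to zero lag.
    return int(np.argmax(corr)) - (len(ref) - 1)

rng = np.random.default_rng(0)
signal = rng.normal(size=256)     # synthetic RF speckle pattern
shifted = np.roll(signal, 7)      # simulate a 7-sample tissue displacement
```

In practice, sub-sample accuracy is obtained by interpolating around the correlation peak, and the 2D method in the paper applies the same idea with a 1D matching kernel in a 2D search.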
ERIC Educational Resources Information Center
Hume, Kara; Plavnick, Joshua; Odom, Samuel L.
2012-01-01
Strategies that promote the independent demonstration of skills across educational settings are critical for improving the accessibility of general education settings for students with ASD. This research assessed the impact of an individual work system on the accuracy of task completion and level of adult prompting across educational setting.…
A Method Defining a Limited Set of Character-Strings with Maximal Coverage of a Sample of Text.
ERIC Educational Resources Information Center
Hultgren, Jan; Larsson, Rolf
This is a progress report on a project attempting to design a system of compacting text for storage appropriate to disc oriented demand searching. After noting a number of previously designed methods of compression, it offers a tentative solution which couples a dictionary of most frequent character-strings with a set of variable-length codes. The…
Wang, Ning; Braun, Edward L; Kimball, Rebecca T
2012-02-01
Although many phylogenetic studies have focused on developing hypotheses about relationships, advances in data collection and computation have increased the feasibility of collecting large independent data sets to rigorously test controversial hypotheses or carefully assess artifacts that may be misleading. One such relationship in need of independent evaluation is the position of Passeriformes (perching birds) in avian phylogeny. This order comprises more than half of all extant birds, and it includes one of the most important avian model systems (the zebra finch). Recent large-scale studies using morphology and mitochondrial and nuclear sequence data have generated very different hypotheses about the sister group of Passeriformes, all of which conflict with an older hypothesis generated using DNA-DNA hybridization. We used novel data from 30 nuclear loci, primarily introns, for 28 taxa to evaluate five major a priori hypotheses regarding the phylogenetic position of Passeriformes. Although previous studies have suggested that nuclear introns are ideal for the resolution of ancient avian relationships, introns have also been criticized because of the potential for alignment ambiguities and the loss of signal due to saturation. To examine these issues, we generated multiple alignments using several alignment programs, varying alignment parameters, and using guide trees that reflected the different a priori hypotheses. Although different alignments and analyses yielded slightly different results, our analyses excluded all but one of the five a priori hypotheses. In many cases, the passerines were sister to the Psittaciformes (parrots), and both taxa were members of a larger clade that includes Falconidae (falcons) and Cariamidae (seriemas). However, the position of Coliiformes (mousebirds) was highly unstable in our analyses of the 30 loci and represented the primary source of incongruence among analyses; mousebirds were united with passerines or parrots in some analyses.
NASA Astrophysics Data System (ADS)
Douthett, Elwood (Jack) Moser, Jr.
1999-10-01
Cyclic configurations of white and black sites, together with convex (concave) functions used to weight path length, are investigated. The weights of the white set and black set are the sums of the weights of the paths connecting the white sites and black sites, respectively, and the weight between sets is the sum of the weights of the paths that connect sites opposite in color. It is shown that when the weights of all configurations of a fixed number of white and a fixed number of black sites are compared, minimum (maximum) weight of a white set, minimum (maximum) weight of a black set, and maximum (minimum) weight between sets occur simultaneously. Such configurations are called maximally even configurations. Similarly, the configurations whose weights are the opposite extremes occur simultaneously and are called minimally even configurations. Algorithms that generate these configurations are constructed and applied to the one-dimensional antiferromagnetic spin-1/2 Ising model. Next, the goodness of continued fractions as applied to musical intervals (frequency ratios and their base-2 logarithms) is explored. It is shown that, for the intermediate convergents between two consecutive principal convergents of an irrational number, the first half of the intermediate convergents are poorer approximations than the preceding principal convergent while the second half are better approximations; the goodness of a middle intermediate convergent can only be determined by calculation. These convergents are used to determine which equal-tempered systems have intervals that most closely approximate the musical fifth (p_n/q_n ≈ log_2(3/2)). The goodness of exponentiated convergents (2^{p_n/q_n} ≈ 3/2) is also investigated. It is shown that, with the exception of a middle convergent, the goodness of the exponential form agrees with that of its logarithmic counterpart. As in the case of the logarithmic form, the goodness of a middle intermediate convergent in the exponential form can only be determined by calculation.
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently, a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored, due to the significant variance inflation they produce in the enrichment scores, and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods. PMID:23070592
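The variance-inflation argument can be checked with a short simulation (an illustration of the statistics, not the authors' benchmark; the gene count and correlation value are made up). Under an equicorrelated one-factor model with average pairwise correlation rho, the variance of a gene-set mean is (1 + (n-1)*rho)/n, not the 1/n that an independence-assuming one-sample t-test presumes:

```python
import random
import statistics

def simulate_set_mean_variance(n_genes=50, rho=0.2, n_trials=20000, seed=1):
    """Empirical variance of the mean of n equicorrelated standard-normal
    gene scores, generated with a shared-factor model:
    x_i = sqrt(rho)*z + sqrt(1-rho)*e_i."""
    rng = random.Random(seed)
    a, b = rho ** 0.5, (1.0 - rho) ** 0.5
    means = []
    for _ in range(n_trials):
        z = rng.gauss(0.0, 1.0)  # shared factor inducing correlation
        genes = [a * z + b * rng.gauss(0.0, 1.0) for _ in range(n_genes)]
        means.append(sum(genes) / n_genes)
    return statistics.variance(means)

n, rho = 50, 0.2
empirical = simulate_set_mean_variance(n, rho)
independent = 1.0 / n                    # what a naive t-test assumes
theoretical = (1.0 + (n - 1) * rho) / n  # with correlation: ~11x larger
print(empirical, independent, theoretical)
```

With n = 50 and rho = 0.2 the true variance of the set mean is 0.216 rather than 0.02, which is the roughly ten-fold inflation that makes independence-based enrichment p-values anti-conservative.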
Mangle, Lisa; Phillips, Paula; Pitts, Mark; Laver-Bradbury, Cathy
2014-12-01
Legislative changes that came into effect in the UK in April 2012 gave nurse independent prescribers (NIPs) the power to prescribe schedule 2-5 controlled drugs. Therefore, suitably qualified UK nurses can now independently prescribe any drug for any medical condition within their clinical competence. The potential benefits of independent nurse prescribing include improved access to medications and more efficient use of skills within the National Health Service workforce. This review explores the published literature (to July 2013) to investigate whether the predicted benefits of NIPs in mental health settings can be supported by empirical evidence, with a specific focus on nurse-led management of patients with attention-deficit/hyperactivity disorder (ADHD). The most common pharmacological treatments for ADHD are controlled drugs. Therefore, the 2012 legislative changes allow nurse-led ADHD services to offer holistic packages of care for patients. Evidence suggests that independent prescribing by UK nurses is safe, clinically appropriate and associated with high levels of patient satisfaction. The quality of the nurse-patient relationship and nurses' ability to provide flexible follow-up services suggests that nurse-led ADHD services are well positioned to enhance the outcomes for patients and their parents/carers. However, the empirical evidence available to support the value of NIPs in mental health settings is limited. There is a need for additional high-quality data to verify scientifically the value of nurse-delivered ADHD care. This evidence will be invaluable in supporting the growth of nurse-led ADHD services and for those who support greater remuneration for the expanded role of NIPs. PMID:24744052
10 Questions about Independent Reading
ERIC Educational Resources Information Center
Truby, Dana
2012-01-01
Teachers know that establishing a robust independent reading program takes more than giving kids a little quiet time after lunch. But how do they set up a program that will maximize their students' gains? Teachers have to know their students' reading levels inside and out, help them find just-right books, and continue to guide them during…
A set of ligation-independent expression vectors for co-expression of proteins in Escherichia coli.
Chanda, Pranab K; Edris, Wade A; Kennedy, Jeffrey D
2006-05-01
A set of ligation-independent expression vectors has been developed for co-expression of proteins in Escherichia coli. These vectors contain a strong T7 promoter, different drug-resistance genes, and origins of DNA replication from different incompatibility groups, allowing combinations of these plasmids to be stably maintained together. In addition, these plasmids also contain the lacI gene, a transcriptional terminator, and a 3' polyhistidine (6x His) affinity tag (H6) for easy purification of target proteins. All of these vectors contain an identical transportable cassette flanked by suitable restriction enzyme cleavage sites for easy cloning and shuttling among different vectors. This cassette incorporates a ligation-independent cloning (LIC) site for LIC manipulations, an optimal ribosome binding site for efficient protein translation, and a 6x His affinity tag for protein purification. Therefore, any E. coli expression vector of choice can be easily converted to a LIC-type expression vector by shuttling the cassette using the restriction enzyme cleavage sites at the ends. We have demonstrated the expression capabilities of these vectors by co-expressing three bacterial proteins (dsbA, dsbG, and Trx) and two mammalian proteins (KChIP1 and Kv4.3). We further show that co-expressed KChIP1/Kv4.3 forms soluble protein complexes that can be purified for further studies. PMID:16325426
Resources and energetics determined dinosaur maximal size
McNab, Brian K.
2009-01-01
Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of 2 species having the same energy income, the species that uses the most energy for mass-independent maintenance of necessity has the smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which, if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass of less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600
Samus, QM; Johnston, D; Black, BS; Hess, E; Lyman, C; Vavilikolanu, A; Pollutra, J; Leoutsakos, J-M; Gitlin, LN; Rabins, PV; Lyketsos, CG
2014-01-01
Objectives To assess whether a dementia care coordination intervention delays time to transition from home and reduces unmet needs in elders with memory disorders. Design 18-month randomized controlled trial of 303 community-living elders. Setting: 28 postal code areas of Baltimore, MD. Participants Age 70+, with a cognitive disorder, community-living, English-speaking, and having a study partner available. Intervention 18-month care coordination intervention to systematically identify and address dementia-related care needs through individualized care planning; referral and linkage to services; provision of dementia education and skill building strategies; and care monitoring by an interdisciplinary team. Measurements Primary outcomes were time to transfer from home and total percent of unmet care needs at 18 months. Results Intervention participants had a significant delay in time to all-cause transition from home and the adjusted hazard of leaving the home was decreased by 37% (HR = 0.63, 95% CI 0.42 to 0.94) compared to control participants. While there was no significant group difference in reduction of total percent of unmet needs from baseline to 18 months, the intervention group had significant reductions in the proportion of unmet needs in safety and legal/advance care domains relative to controls. Intervention participants had a significant improvement in self-reported quality of life (QOL) relative to control participants. No group differences were found in proxy-rated QOL, neuropsychiatric symptoms, or depression. Conclusions A home-based dementia care coordination intervention delivered by non-clinical community workers trained and overseen by geriatric clinicians led to delays in transition from home, reduced unmet needs, and improved self-reported QOL. PMID:24502822
Ansari, Elnaz Saberi; Eslahchi, Changiz; Pezeshk, Hamid; Sadeghi, Mehdi
2014-09-01
Decomposition of structural domains is an essential task in classifying protein structures, predicting protein function, and many other proteomics problems. As the number of known protein structures in the PDB grows exponentially, the need for accurate automatic domain decomposition methods becomes more essential. In this article, we introduce a bottom-up algorithm for assigning protein domains using a graph theoretical approach. This algorithm is based on a center-based clustering approach. For constructing initial clusters, members of an independent dominating set for the graph representation of a protein are considered as the centers. A distance matrix is then defined for these clusters. To obtain final domains, these clusters are merged using the compactness principle of domains and a method similar to the neighbor-joining algorithm, subject to some thresholds. The thresholds are computed using a training set consisting of 50 protein chains. The algorithm is implemented in C++ and is named ProDomAs. To assess the performance of ProDomAs, its results are compared with those of seven automatic methods on five publicly available benchmarks. The results show that ProDomAs outperforms the other methods on these benchmarks. The performance of ProDomAs is also evaluated against 6342 chains obtained from ASTRAL SCOP 1.71. ProDomAs is freely available at http://www.bioinf.cs.ipm.ir/software/prodomas. PMID:24596179
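ProDomAs itself is C++ and its clustering details are in the paper; purely as a hedged illustration of the seeding step, a greedy sweep yields a maximal independent set, which is automatically an independent dominating set (every vertex is either chosen or adjacent to a chosen vertex):

```python
def independent_dominating_set(adj):
    """Greedy maximal independent set over a graph given as a dict
    mapping each vertex to the set of its neighbours. Any maximal
    independent set is also an independent dominating set."""
    chosen, covered = set(), set()
    # visit higher-degree vertices first so early picks dominate many nodes
    for v in sorted(adj, key=lambda u: -len(adj[u])):
        if v not in covered:
            chosen.add(v)          # v is not adjacent to any chosen vertex
            covered.add(v)
            covered.update(adj[v])  # v now dominates its neighbours
    return chosen

# toy graph: the path 0-1-2-3-4 (not a real protein contact graph)
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
S = independent_dominating_set(adj)
print(S)  # picks {1, 3} on this path graph
```

The degree-first ordering is one plausible heuristic for getting few, central cluster seeds; the paper does not specify its vertex ordering, so treat that choice as an assumption.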
MAZZETTI, SCOTT A.; WOLFF, CHRISTOPHER; COLLINS, BRITTANY; KOLANKOWSKI, MICHAEL T.; WILKERSON, BRITTANY; OVERSTREET, MATTHEW; GRUBE, TROY
2011-01-01
With resistance exercise, greater intensity typically elicits increased energy expenditure, but heavier loads require that the lifter perform more sets of fewer repetitions, which alters the kilograms lifted per set. Thus, the effect of exercise intensity on energy expenditure has yielded varying results, especially with explosive resistance exercise. This study was designed to examine the effect of exercise intensity and kilograms/set on energy expenditure during explosive resistance exercise. Ten resistance-trained men (22±3.6 years; 84±6.4 kg, 180±5.1 cm, and 13±3.8 %fat) performed squat and bench press protocols once/week using different exercise intensities including 48% (LIGHT-48), 60% (MODERATE-60), and 72% of 1-repetition-maximum (1-RM) (HEAVY-72), plus a no-exercise protocol (CONTROL). To examine the effects of kilograms/set, an additional protocol using 72% of 1-RM was performed (HEAVY-72MATCHED) with kilograms/set matched with LIGHT-48 and MODERATE-60. LIGHT-48 was 4 sets of 10 repetitions (4×10); MODERATE-60 4×8; HEAVY-72 5×5; and HEAVY-72MATCHED 4×6.5. Eccentric and concentric repetition speeds, ranges of motion, rest intervals, and total kilograms were identical between protocols. Expired air was collected continuously throughout each protocol using a metabolic cart, [blood lactate] was measured with a portable analyzer, and bench press peak power was measured. Rates of energy expenditure were significantly greater (p≤0.05) with LIGHT-48 and HEAVY-72MATCHED than HEAVY-72 during squat (7.3±0.7; 6.9±0.6 > 6.1±0.7 kcal/min), bench press (4.8±0.3; 4.7±0.3 > 4.0±0.4 kcal/min), and +5 min after (3.7±0.1; 3.7±0.2 > 3.3±0.3 kcal/min), but there were no significant differences in total kcal among protocols. Therefore, exercise intensity may not affect energy expenditure with explosive contractions, but light loads (~50% of 1-RM) may be preferred because of higher rates of energy expenditure, and since heavier loading requires more sets with lower
Sabree, Zakee L; Hansen, Allison K; Moran, Nancy A
2012-01-01
Starting in 2003, numerous studies using culture-independent methodologies to characterize the gut microbiota of honey bees have retrieved a consistent and distinctive set of eight bacterial species, based on near identity of the 16S rRNA gene sequences. A recent study [Mattila HR, Rios D, Walker-Sperling VE, Roeselers G, Newton ILG (2012) Characterization of the active microbiotas associated with honey bees reveals healthier and broader communities when colonies are genetically diverse. PLoS ONE 7(3): e32962], using pyrosequencing of the V1-V2 hypervariable region of the 16S rRNA gene, reported finding entirely novel bacterial species in honey bee guts, and used taxonomic assignments from these reads to predict metabolic activities based on known metabolisms of cultivable species. To better understand this discrepancy, we analyzed the Mattila et al. pyrotag dataset. In contrast to the conclusions of Mattila et al., we found that the large majority of pyrotag sequences belonged to clusters for which representative sequences were identical to sequences from previously identified core species of the bee microbiota. On average, they represent 95% of the bacteria in each worker bee in the Mattila et al. dataset, a slightly lower value than that found in other studies. Some colonies contain small proportions of other bacteria, mostly species of Enterobacteriaceae. Reanalysis of the Mattila et al. dataset also did not support a relationship between abundances of Bifidobacterium and of putative pathogens or a significant difference in gut communities between colonies from queens that were singly or multiply mated. Additionally, consistent with previous studies, the dataset supports the occurrence of considerable strain variation within core species, even within single colonies. The roles of these bacteria within bees, or the implications of the strain variation, are not yet clear. PMID:22829932
Pelletier, Alexandra; Sunthara, Gajen; Gujral, Nitin; Mittal, Vandna; Bourgeois, Fabienne C
2016-01-01
understanding what features should be built into the app. Phase 3 involved deployment of TaskList on a clinical floor at BCH. Lastly, Phase 4 gathered the lessons learned from the pilot to refine the guideline. Results Fourteen practical recommendations were identified to create the BCH Mobile Application Development Guideline to safeguard custom applications in hospital BYOD settings. The recommendations were grouped into four categories: (1) authentication and authorization, (2) data management, (3) safeguarding app environment, and (4) remote enforcement. Following the guideline, the TaskList app was developed and then was piloted with an inpatient ward team. Conclusions The Mobile Application Development guideline was created and used in the development of TaskList. The guideline is intended for use by developers when addressing integration with hospital information systems, deploying apps in BYOD health care settings, and meeting compliance standards, such as Health Insurance Portability and Accountability Act (HIPAA) regulations. PMID:27169345
ERIC Educational Resources Information Center
Velastegui, Pamela J.
2013-01-01
This hypothesis-generating case study investigates the naturally emerging roles of technology brokers and technology leaders in three independent schools in New York involving 92 school educators. A multiple and mixed method design utilizing Social Network Analysis (SNA) and fuzzy set Qualitative Comparative Analysis (FSQCA) involved gathering…
ERIC Educational Resources Information Center
Sireci, Stephen G.
Whether item response theory (IRT) is useful to the small-scale testing practitioner is examined. The stability of IRT item parameters is evaluated with respect to the classical item parameters (i.e., p-values, biserials) obtained from the same data set. Previous research investigating the effect of sample size on IRT parameter estimation has…
Řezáč, Jan; de la Lande, Aurélien
2015-02-10
Separation of the energetic contribution of charge transfer to interaction energy in noncovalent complexes would provide important insight into the mechanisms of the interaction. However, the calculation of charge-transfer energy is not an easy task. It is not a physically well-defined term, and the results might depend on how it is described in practice. Commonly, the charge transfer is defined in terms of molecular orbitals; in this framework, however, the charge transfer vanishes as the basis set size increases toward the complete basis set limit. This can be avoided by defining the charge transfer in terms of the spatial extent of the electron densities of the interacting molecules, but the schemes used so far do not reflect the actual electronic structure of each particular system and thus are not reliable. We propose a spatial partitioning of the system, which is based on a charge transfer-free reference state, namely superimposition of electron densities of the noninteracting fragments. We show that this method, employing constrained DFT for the calculation of the charge-transfer energy, yields reliable results and is robust with respect to the strength of the charge transfer, the basis set size, and the DFT functional used. Because it is based on DFT, the method is applicable to rather large systems. PMID:26580910
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (predicted area - reference area) and relative error [(predicted area - reference area)/reference area] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean W.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Wei, Jun; Patel, Smita
2015-03-01
We have developed a computer-aided detection (CAD) system for assisting radiologists in detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images. The CAD system includes stages of pulmonary vessel segmentation, prescreening of PE candidates and false positive (FP) reduction to identify suspicious PEs. The system was trained with 59 CTPA PE cases collected retrospectively from our patient files (UM set) with IRB approval. Five feature groups containing 139 features that characterized the intensity texture, gradient, intensity homogeneity, shape, and topology of PE candidates were initially extracted. Stepwise feature selection guided by simplex optimization was used to select effective features for FP reduction. A linear discriminant analysis (LDA) classifier was formulated to differentiate true PEs from FPs. The purpose of this study is to evaluate the performance of our CAD system using an independent test set of CTPA cases. The test set consists of 50 PE cases from the PIOPED II data set collected by multiple institutions with access permission. A total of 537 PEs were manually marked by experienced thoracic radiologists as reference standard for the test set. The detection performance was evaluated by free-response receiver operating characteristic (FROC) analysis. The FP classifier obtained a test Az value of 0.847 and the FROC analysis indicated that the CAD system achieved an overall sensitivity of 80% at 8.6 FPs/case for the PIOPED test set.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise: both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Maximal combustion temperature estimation
NASA Astrophysics Data System (ADS)
Golodova, E.; Shchepakina, E.
2006-12-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.
Sun, Guangyan; Zhou, Zhipeng; Liu, Xiao; Gai, Kexin; Liu, Qingqing; Cha, Joonseok; Kaleri, Farah Naz; Wang, Ying; He, Qun
2016-05-20
The circadian system in Neurospora is based on the transcriptional/translational feedback loops and rhythmic frequency (frq) transcription requires the WHITE COLLAR (WC) complex. Our previous paper has shown that frq could be transcribed in a WC-independent pathway in a strain lacking the histone H3K36 methyltransferase, SET-2 (su(var)3-9-enhancer-of-zeste-trithorax-2) (1), but the mechanism was unclear. Here we disclose that loss of histone H3K36 methylation, due to either deletion of SET-2 or H3K36R mutation, results in arrhythmic frq transcription and loss of overt rhythmicity. Histone acetylation at frq locus increases in set-2(KO) mutant. Consistent with these results, loss of H3K36 methylation readers, histone deacetylase RPD-3 (reduced potassium dependence 3) or EAF-3 (essential SAS-related acetyltransferase-associated factor 3), also leads to hyperacetylation of histone at frq locus and WC-independent frq expression, suggesting that proper chromatin modification at frq locus is required for circadian clock operation. Furthermore, a mutant strain with three amino acid substitutions (histone H3 lysine 9, 14, and 18 to glutamine) was generated to mimic the strain with hyperacetylation state of histone H3. H3K9QK14QK18Q mutant exhibits the same defective clock phenotype as rpd-3(KO) mutant. Our results support a scenario in which H3K36 methylation is required to establish a permissive chromatin state for circadian frq transcription by maintaining proper acetylation status at frq locus. PMID:27002152
NASA Astrophysics Data System (ADS)
Last, Isidore; Baer, Michael
1992-01-01
Recently we introduced a time-independent approach to treat reactive collisions employing the negative imaginary absorbing potentials and L² basis sets. The application of these potentials led to the formulation of a method whereby only one arrangement channel has to be considered in a given calculation. In the present work we further extend this approach. (a) We show how this method is capable of yielding reactive state-to-state S-matrix elements. (In the previous versions of this method, these could not be obtained.) (b) We show that by employing contracted vibrational adiabatic and translational Gaussian functions, the number of algebraic equations to be solved within this approach is significantly reduced (by a factor of four).
Bradshaw, P J; Ko, D T; Newman, A M; Donovan, L R
2006-01-01
Objective To determine the validity of the GRACE (Global Registry of Acute Coronary Events) prediction model for death six months after discharge in all forms of acute coronary syndrome in an independent dataset of a community based cohort of patients with acute myocardial infarction (AMI). Design Independent validation study based on clinical data collected retrospectively for a clinical trial in a community based population and record linkage to administrative databases. Setting Study conducted among patients from the EFFECT (enhanced feedback for effective cardiac treatment) study from Ontario, Canada. Patients Randomly selected men and women hospitalised for AMI between 1999 and 2001. Main outcome measure Discriminatory capacity and calibration of the GRACE prediction model for death within six months of hospital discharge in the contemporaneous EFFECT AMI study population. Results Post‐discharge crude mortality at six months for the EFFECT study patients with AMI was 7.0%. The discriminatory capacity of the GRACE model was good overall (C statistic 0.80) and for patients with ST segment elevation AMI (STEMI) (0.81) and non‐STEMI (0.78). Observed and predicted deaths corresponded well in each stratum of risk at six months, although the risk was underestimated by up to 30% in the higher range of scores among patients with non‐STEMI. Conclusions In an independent validation the GRACE risk model had good discriminatory capacity for predicting post‐discharge death at six months and was generally well calibrated, suggesting that it is suitable for clinical use in general populations. PMID:16387810
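The C statistic reported here is the probability that a randomly chosen patient who died received a higher predicted risk than a randomly chosen patient who survived; it can be computed directly from risk scores (the scores below are hypothetical, not EFFECT data):

```python
def c_statistic(scores_events, scores_nonevents):
    """Concordance (C) statistic for a binary outcome: the fraction of
    event/non-event pairs in which the event case has the higher risk
    score. Ties count as half-concordant. Equivalent to ROC AUC."""
    pairs = concordant = 0
    for e in scores_events:
        for n in scores_nonevents:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs

# hypothetical 6-month risk scores, not data from the study
died = [0.9, 0.8, 0.55]
survived = [0.6, 0.4, 0.3, 0.2]
print(c_statistic(died, survived))  # 11/12 ~ 0.917
```

A value of 0.5 means no discrimination and 1.0 perfect ranking, so the GRACE model's 0.80 indicates good discriminatory capacity; calibration (observed vs. predicted deaths per risk stratum) is assessed separately.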
Maximizing Classroom Participation.
ERIC Educational Resources Information Center
Englander, Karen
2001-01-01
Discusses how to maximize classroom participation in the English-as-a-Second-or-Foreign-Language classroom, and provides a classroom discussion method that is based on real-life problem solving. (Author/VWL)
Polarity Related Influence Maximization in Signed Social Networks
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
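For readers unfamiliar with the baseline that IC-P extends, here is a minimal sketch of greedy influence maximization under the standard (unsigned) Independent Cascade model, with Monte-Carlo spread estimation; the toy graph and activation probability are illustrative only, and the polarity bookkeeping of IC-P is omitted:

```python
import random

def ic_spread(graph, seeds, p=0.1, trials=500, seed=7):
    """Monte-Carlo estimate of expected spread under the Independent
    Cascade model: each newly activated node gets one chance to
    activate each inactive out-neighbour with probability p."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        active = set(seeds)
        frontier = list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, ()):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.1):
    """Greedy seed selection; because spread is monotone and submodular,
    this achieves the classic (1 - 1/e) approximation guarantee."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda v: ic_spread(graph, seeds + [v], p))
        seeds.append(best)
    return seeds

# small directed toy graph given as adjacency lists
g = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [6], 4: [7], 5: [7], 6: [7]}
print(greedy_seeds(g, 2))
```

Node 0 is chosen first because its three out-edges give it the largest estimated spread; IC-P would additionally track whether each activation carries positive or negative polarity.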
On the Relationship between Maximal Reliability and Maximal Validity of Linear Composites
ERIC Educational Resources Information Center
Penev, Spiridon; Raykov, Tenko
2006-01-01
A linear combination of a set of measures is often sought as an overall score summarizing subject performance. The weights in this composite can be selected to maximize its reliability or to maximize its validity, and the optimal choice of weights is in general not the same for these two optimality criteria. We explore several relationships…
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.
2010-01-01
Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Google is shaking out to be the leading Web search engine, with recent research from Nielsen NetRatings reporting about 40 percent of all U.S. households using the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.
Maximal Outboxes of Quadrilaterals
ERIC Educational Resources Information Center
Zhao, Dongsheng
2011-01-01
An outbox of a quadrilateral is a rectangle such that each vertex of the given quadrilateral lies on one side of the rectangle and different vertices lie on different sides. We first investigate those quadrilaterals whose every outbox is a square. Next, we consider the maximal outboxes of rectangles and those quadrilaterals with perpendicular…
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-12-01
Multi-axis differential optical absorption spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterize their properties. In a recent study Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves with which different "sky conditions" (e.g., clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long-term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been adapted to the characteristics of the Wuxi instrument, and extended towards smaller solar zenith angles (SZA). Moreover, a method for the determination and correction of instrumental degradation is developed to avoid artificial trends of the cloud classification results. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby Aerosol Robotic Network (AERONET) station and from two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments, visibility derived from a visibility meter and various cloud parameters from different satellite instruments (MODIS, the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment (GOME-2)). Here it should be noted that no quantitative comparison between the MAX-DOAS results and the independent data sets is possible, because (a) not exactly the same quantities are measured, and (b) the spatial and temporal sampling is quite different. Thus our comparison is performed in a semi-quantitative way: the MAX-DOAS cloud classification results are studied as a function of the external quantities. The most important findings from these comparisons are as follows: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective aerosol optical depth (AOD) ranges obtained by AERONET and MODIS
Generation and Transmission Maximization Model
Energy Science and Technology Software Center (ESTSC)
2001-04-05
GTMax was developed to study complex marketing and system operational issues facing electric utility power systems. The model maximizes the value of the electric system, taking into account not only a single system's limited energy and transmission resources but also firm contracts, independent power producer (IPP) agreements, and bulk power transaction opportunities on the spot market. GTMax maximizes net revenues of power systems by finding a solution that increases income while keeping expenses at a minimum. It does this while ensuring that market transactions and system operations are within the physical and institutional limitations of the power system. When multiple systems are simulated, GTMax identifies utilities that can successfully compete on the market by tracking hourly energy transactions, costs, and revenues. Some limitations that are modeled are power plant seasonal capabilities and terms specified in firm and IPP contracts. GTMax also considers detailed operational limitations such as power plant ramp rates and hydropower reservoir constraints.
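The net-revenue maximization described above can be illustrated with a deliberately tiny merit-order dispatch sketch. This is not GTMax itself, which solves a much richer network model with contracts and hourly transactions; the plant list, the single aggregate transmission limit, and the function names are hypothetical.

```python
def dispatch(plants, price, transmission_limit):
    """Toy net-revenue maximization: dispatch generation in merit order
    (cheapest plant first, i.e. highest margin first) subject to each
    plant's capacity and one aggregate transmission limit.
    plants: list of (name, capacity_MW, cost_per_MWh)."""
    remaining = transmission_limit
    schedule, revenue = {}, 0.0
    # Cheapest plants first; skip plants that would lose money.
    for name, cap, cost in sorted(plants, key=lambda p: p[2]):
        margin = price - cost
        if margin <= 0 or remaining <= 0:
            schedule[name] = 0.0
            continue
        gen = min(cap, remaining)
        schedule[name] = gen
        revenue += gen * margin
        remaining -= gen
    return schedule, revenue
```

With a 120 MW transmission limit and a 60 $/MWh spot price, a cheap hydro plant runs at full capacity, a gas plant fills the remaining headroom, and an expensive peaker stays off.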
Infrared Maximally Abelian Gauge
Mendes, Tereza; Cucchieri, Attilio; Mihara, Antonio
2007-02-27
The confinement scenario in Maximally Abelian gauge (MAG) is based on the concepts of Abelian dominance and of dual superconductivity. Recently, several groups pointed out the possible existence in MAG of ghost and gluon condensates with mass dimension 2, which in turn should influence the infrared behavior of ghost and gluon propagators. We present preliminary results for the first lattice numerical study of the ghost propagator and of ghost condensation for pure SU(2) theory in the MAG.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
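The core idea, treating the positive function to be maximized as a probability density so that larger values appear more often, can be illustrated with an ordinary Metropolis random walk. This sketch is only an analogy: the report's method uses Madelung-equation dynamics, not Metropolis sampling, and all names and parameters here are assumptions.

```python
import math, random

def sample_maximize(f, x0, steps=5000, step_size=0.5, seed=1):
    """Draw samples from a density proportional to a positive function
    f, so high-f regions are visited most often, and record the best
    point seen. A plain Metropolis walk stands in for the report's
    quantum-classical dynamics."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        y = x + rng.gauss(0.0, step_size)
        fy = f(y)
        # Accept uphill moves always, downhill moves with ratio fy/fx.
        if fy >= fx or rng.random() < fy / fx:
            x, fx = y, fy
            if fx > best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```

For a Gaussian bump centered at x = 3, the walk concentrates near the mode and the recorded best point lands close to it.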
ERIC Educational Resources Information Center
Wells, Ruth Herman
This document is one of eight in a series of guides designed to help teach and counsel troubled youth. This document presents 20 lessons on the social skills necessary to live independently. It includes four lessons designed to help students accurately evaluate their readiness for independent living. Other lessons teach the basic steps for…
NASA Astrophysics Data System (ADS)
Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.
2003-11-01
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z = 0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM = 0.25 +0.07/-0.06 (statistical) ± 0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ = 0.75 +0.06/-0.07 (statistical) ± 0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w = -1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w = -1.05 +0.15/-0.20 (statistical) ± 0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ > 0) > 0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution. Based in part on
NASA Technical Reports Server (NTRS)
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.
Energy Band Calculations for Maximally Even Superlattices
NASA Astrophysics Data System (ADS)
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
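The piano-keyboard analogy can be made concrete: the standard floor construction from the theory of maximally even sets generates such well/barrier distributions. This sketch only builds the distribution; it does not reproduce the FE-DVR band calculation, and the function names are illustrative.

```python
def maximally_even(k, n):
    """Positions of k 'wells' spread maximally evenly among n sites,
    via the floor construction J(i) = floor(i * n / k). For k = 5,
    n = 12 this reproduces (a rotation of) the black-key pattern on a
    piano keyboard."""
    return sorted(i * n // k for i in range(k))

def circular_gaps(positions, n):
    """Gaps between consecutive occupied sites, wrapping around: a
    maximally even set uses only two gap sizes differing by one."""
    ps = sorted(positions)
    return [(ps[(i + 1) % len(ps)] - ps[i]) % n for i in range(len(ps))]
```

For five wells among twelve sites the construction yields positions {0, 2, 4, 7, 9}, whose circular gaps are three 2s and two 3s, exactly the black-key spacing pattern.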
Maximally spaced projection sequencing in electron paramagnetic resonance imaging
Redler, Gage; Epel, Boris; Halpern, Howard J.
2015-01-01
Electron paramagnetic resonance imaging (EPRI) provides 3D images of absolute oxygen concentration (pO2) in vivo with excellent spatial and pO2 resolution. When investigating such physiologic parameters in living animals, the situation is inherently dynamic. Improvements in temporal resolution and experimental versatility are necessary to properly study such a system. Uniformly distributed projections result in efficient use of data for image reconstruction. This has dictated current methods such as equal-solid-angle (ESA) spacing of projections. However, acquisition sequencing must still be optimized to achieve uniformity throughout imaging. An object-independent method for uniform acquisition of projections, using the ESA uniform distribution for the final set of projections, is presented. Each successive projection maximizes the distance in the gradient space between itself and prior projections. This maximally spaced projection sequencing (MSPS) method improves image quality for intermediate images reconstructed from incomplete projection sets, enabling useful real-time reconstruction. This method also provides improved experimental versatility, reduced artifacts, and the ability to adjust temporal resolution post factum to best fit the data and its application. The MSPS method in EPRI provides the improvements necessary to more appropriately study a dynamic system. PMID:26185490
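The greedy idea behind MSPS can be sketched as follows: keep the final ESA set of projections fixed, but reorder acquisition so that each successive projection maximizes its minimum distance to all previously acquired ones. This is a simplified illustration assuming angular distance between unit gradient directions; the authors' exact metric and implementation may differ.

```python
import math

def angular_distance(u, v):
    """Angle between two unit gradient directions; projections along
    opposite directions are equivalent, hence the abs()."""
    dot = abs(sum(a * b for a, b in zip(u, v)))
    return math.acos(min(1.0, dot))

def msps_order(directions):
    """Greedy maximally-spaced sequencing: start from the first
    direction, then repeatedly acquire the remaining direction whose
    minimum distance to all already-acquired ones is largest, so every
    intermediate projection set is close to uniformly distributed."""
    remaining = list(directions)
    order = [remaining.pop(0)]
    while remaining:
        best = max(remaining,
                   key=lambda d: min(angular_distance(d, o) for o in order))
        order.append(best)
        remaining.remove(best)
    return order
```

Starting from the x-axis in a small 2D example, the greedy rule acquires the perpendicular direction second, which is what makes early partial reconstructions usable.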
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Maximizing your teaching moment
... have assessed the patient's needs and selected the education materials and methods you will use, you will need to: Set up a good learning environment. This may include things such as adjusting the ...
Origin of constrained maximal CP violation in flavor symmetry
NASA Astrophysics Data System (ADS)
He, Hong-Jian; Rodejohann, Werner; Xu, Xun-Jie
2015-12-01
Current data from neutrino oscillation experiments are in good agreement with δ = -π/2 and θ23 = π/4 under the standard parametrization of the mixing matrix. We define the notion of "constrained maximal CP violation" (CMCPV) for predicting these features and study their origin in flavor symmetry. We derive the parametrization-independent solution of CMCPV and give a set of equivalent definitions for it. We further present a theorem on how CMCPV can be realized. This theorem takes advantage of residual symmetries in the neutrino and charged-lepton mass matrices, and states that, up to a few minor exceptions, (|δ|, θ23) = (π/2, π/4) is generated when those symmetries are real. The often-considered μ-τ reflection symmetry, as well as specific discrete subgroups of O(3), is a special case of our theorem.
Maximizing Brightness in Photoinjectors
Limborg-Deprey, C.; Tomizawa, H.; /JAERI-RIKEN, Hyogo
2011-11-30
If the laser pulse driving photoinjectors could be arbitrarily shaped, the emittance growth induced by space charge effects could be totally compensated for. In particular, for RF guns, the photo-electron distribution leaving the cathode should be close to a uniform distribution contained in a 3D-ellipsoid contour. For photo-cathodes which have very fast emission times, and assuming a perfectly uniform emitting surface, this could be achieved by shaping the laser in a pulse of constant fluence limited in space by a 3D-ellipsoid contour. Simulations show that in such conditions, with the standard linear emittance compensation, the emittance at the end of the photoinjector beamline approaches the minimum value imposed by the cathode emittance. We explore how the emittance and the brightness can be optimized for an RF-gun-based photoinjector depending on the peak current requirements. Brightness, which is expressed as the ratio of peak current over the product of the two transverse emittances, seems to be maximized for small charges. Numerical simulations also show that for a very high charge per bunch (10 nC), emittances as small as 2 mm-mrad could be reached by using 3D-ellipsoidal laser pulses in an S-band gun. The production of 3D-ellipsoidal pulses is very challenging but seems worth the effort. We briefly discuss some of the present ideas and difficulties of achieving such pulses. Techniques available to produce those ideal laser pulse shapes are also discussed.
A Method for Maximizing the Internal Consistency Coefficient Alpha.
ERIC Educational Resources Information Center
Pepin, Michel
This paper presents three different ways of computing the internal consistency coefficient alpha for the same set of data. The main objective of the paper is the illustration of a method for maximizing coefficient alpha. The maximization of alpha can be achieved with the aid of a principal component analysis. The relation between alpha max. and the…
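The principal-component route to maximizing alpha can be illustrated with a minimal pure-Python sketch. This is not the paper's procedure verbatim: the first-principal-component weights are the alpha-maximizing weights only under assumptions such as standardized (or comparably scaled) items, and the toy data and function names are hypothetical.

```python
def covariance(items):
    """Covariance matrix of item-score columns.
    items: list of subjects, each a list of k item scores."""
    n, k = len(items), len(items[0])
    means = [sum(row[j] for row in items) / n for j in range(k)]
    return [[sum((row[a] - means[a]) * (row[b] - means[b])
                 for row in items) / (n - 1)
             for b in range(k)] for a in range(k)]

def cronbach_alpha(items, weights=None):
    """Coefficient alpha of the (optionally weighted) composite:
    alpha = k/(k-1) * (1 - sum of item variances / composite variance)."""
    k = len(items[0])
    w = weights or [1.0] * k
    cov = covariance(items)
    item_var = sum(w[j] ** 2 * cov[j][j] for j in range(k))
    total_var = sum(w[a] * w[b] * cov[a][b]
                    for a in range(k) for b in range(k))
    return k / (k - 1) * (1 - item_var / total_var)

def principal_weights(cov, iters=200):
    """First-principal-component weights by power iteration."""
    k = len(cov)
    w = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[a][b] * w[b] for b in range(k)) for a in range(k)]
        norm = max(abs(x) for x in w)
        w = [x / norm for x in w]
    return w
```

On a small dataset with two strongly correlated items and one noisier item, down-weighting the noisy item via the first principal component raises alpha above its unit-weight value.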
Maximal switchability of centralized networks
NASA Astrophysics Data System (ADS)
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of Ns weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Hamiltonian formalism and path entropy maximization
NASA Astrophysics Data System (ADS)
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
Curiel, José Antonio; de Las Rivas, Blanca; Mancheño, José Miguel; Muñoz, Rosario
2011-03-01
A family of restriction enzyme- and ligation-independent cloning vectors has been developed for producing recombinant His-tagged fusion proteins in Escherichia coli. These are based on the pURI2 and pURI3 expression vectors, which have been previously used for the successful production of recombinant proteins at the milligram scale. The newly designed vectors combine two different promoters (lpp(p)-5 and T7 RNA polymerase Ø10), two different endoprotease recognition sites for His₆-tag removal (enterokinase and tobacco etch virus), different antibiotic selectable markers (ampicillin and erythromycin resistance), and different placements of the His₆-tag (N- and C-terminus). A single gene can be cloned and further expressed in the eight pURI vectors by using six nucleotide primers, avoiding the restriction enzyme and ligation steps. A unique NotI site was introduced to facilitate the selection of the recombinant plasmid. As a case study, the new vectors have been used to clone the gene coding for the phenolic acid decarboxylase from Lactobacillus plantarum. Interestingly, the obtained results revealed markedly different production levels of the target protein, emphasizing the relevance of the cloning strategy on soluble protein production yield. Efficient purification and tag removal steps showed that the affinity tag and the protease cleavage sites functioned properly. The novel family of pURI vectors designed for parallel cloning is a useful and versatile tool for the production and purification of a protein of interest. PMID:21055470
Excap: Maximization of Haplotypic Diversity of Linked Markers
Kahles, André; Sarqume, Fahad; Savolainen, Peter; Arvestad, Lars
2013-01-01
Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth first search combined with a branch-and-bound approach. Since the worst case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it to two forensic caseworks using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy. PMID:24244403
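The depth-first search with branch-and-bound can be sketched as follows, taking the number of distinct haplotypes induced by a marker subset as a simple stand-in for haplotypic diversity. The paper's exact objective, data-reduction techniques, and heuristic are not reproduced here, and all names are illustrative. The pruning rule rests on monotonicity: adding markers can only refine the haplotype partition.

```python
def distinct_haplotypes(samples, markers):
    """Number of distinct haplotypes induced by the chosen markers.
    samples: list of sequences; markers: tuple of position indices."""
    return len({tuple(s[m] for m in markers) for s in samples})

def best_marker_subset(samples, n_markers, k):
    """DFS with branch-and-bound for the most informative k-marker
    subset: if even the full remaining marker set cannot beat the
    incumbent, the whole branch is pruned."""
    best = {"markers": (), "count": 0}

    def dfs(start, chosen):
        if len(chosen) == k:
            count = distinct_haplotypes(samples, chosen)
            if count > best["count"]:
                best["markers"], best["count"] = chosen, count
            return
        # Upper bound: diversity using all remaining markers as well.
        upper = distinct_haplotypes(
            samples, chosen + tuple(range(start, n_markers)))
        if upper <= best["count"]:
            return  # prune this branch
        for m in range(start, n_markers):
            if n_markers - m < k - len(chosen):
                break  # not enough markers left to reach size k
            dfs(m + 1, chosen + (m,))

    dfs(0, ())
    return best["markers"], best["count"]
```

On five toy three-position sequences, the search picks the two positions that split the samples into the most haplotype classes and prunes the dominated branches.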
2010-01-01
Background The genus Neisseria contains two important yet very different pathogens, N. meningitidis and N. gonorrhoeae, in addition to non-pathogenic species, of which N. lactamica is the best characterized. Genomic comparisons of these three bacteria will provide insights into the mechanisms and evolution of pathogenesis in this group of organisms, which are applicable to understanding these processes more generally. Results Non-pathogenic N. lactamica exhibits very similar population structure and levels of diversity to the meningococcus, whilst gonococci are essentially recent descendents of a single clone. All three species share a common core gene set estimated to comprise around 1190 CDSs, corresponding to about 60% of the genome. However, some of the nucleotide sequence diversity within this core genome is particular to each group, indicating that cross-species recombination is rare in this shared core gene set. Other than the meningococcal cps region, which encodes the polysaccharide capsule, relatively few members of the large accessory gene pool are exclusive to one species group, and cross-species recombination within this accessory genome is frequent. Conclusion The three Neisseria species groups represent coherent biological and genetic groupings which appear to be maintained by low rates of inter-species horizontal genetic exchange within the core genome. There is extensive evidence for exchange among positively selected genes and the accessory genome and some evidence of hitch-hiking of housekeeping genes with other loci. It is not possible to define a 'pathogenome' for this group of organisms and the disease causing phenotypes are therefore likely to be complex, polygenic, and different among the various disease-associated phenotypes observed. PMID:21092259
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-05-01
Multi-Axis-Differential Optical Absorption Spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterise their properties. In a recent study Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves with which different "sky conditions" (e.g. clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been modified, in particular in order to account for smaller solar zenith angles (SZA). Instrumental degradation is accounted for to avoid artificial trends of the cloud classification. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby AERONET station and from MODIS, visibility derived from a visibility meter; and various cloud parameters from different satellite instruments (MODIS, OMI, and GOME-2). The most important findings from these comparisons are: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective AOD ranges obtained by AERONET and MODIS, (2) the observed dependences of MAX-DOAS results on cloud optical thickness and effective cloud fraction from satellite indicate that the cloud classification scheme is sensitive to cloud (optical) properties, (3) separation of cloudy scenes by cloud pressure shows that the MAX-DOAS cloud classification scheme is also capable of detecting high clouds, (4) some clear sky conditions, especially with high aerosol load, classified from MAX-DOAS observations corresponding to the optically thin and low clouds derived by satellite observations probably indicate that the satellite cloud products contain valuable information on aerosols.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2009-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.80 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
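The trend-detection power analysis described above can be illustrated with a minimal Monte Carlo sketch. The lognormal CPE model, the variance-component values, and the pooled OLS test below are illustrative assumptions, not the hierarchical-model estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def trend_power(n_sites=50, n_years=10, annual_decline=0.10,
                sd_site=0.5, sd_year=0.3, sd_resid=0.4, n_sim=300):
    """Fraction of simulations in which a declining trend in log(CPE)
    is detected (one-sided test at alpha = 0.05, normal approximation)."""
    slope_true = np.log(1.0 - annual_decline)  # per-year change in log CPE
    t = np.arange(n_years, dtype=float)
    detected = 0
    for _ in range(n_sim):
        site = rng.normal(0.0, sd_site, size=(n_sites, 1))   # site effects
        year = rng.normal(0.0, sd_year, size=(1, n_years))   # shared year effects
        resid = rng.normal(0.0, sd_resid, size=(n_sites, n_years))
        log_cpe = 3.0 + slope_true * t + site + year + resid
        # Pool all site-year observations and fit OLS of log CPE on year.
        x = np.tile(t, n_sites)
        y = log_cpe.ravel()
        xc = x - x.mean()
        slope = (xc @ (y - y.mean())) / (xc @ xc)
        resid_hat = y - y.mean() - slope * xc
        se = np.sqrt(resid_hat @ resid_hat / (len(y) - 2) / (xc @ xc))
        if slope / se < -1.645:  # one-sided z test for a declining trend
            detected += 1
    return detected / n_sim
```

Comparing, say, `trend_power(n_years=5)` with `trend_power(n_years=20)` shows power growing with survey duration. Note that the pooled OLS test ignores the year effects shared across sites (pseudo-replication), which is precisely why the authors advocate hierarchical models for partitioning spatial and temporal variance.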
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-03-01
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169
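The kind of correction algorithm described can be sketched as a simple slope/offset recalibration of analyzer readings against reference-chemistry values. The linear form and all numbers below are illustrative assumptions, not the published algorithm for the Unity SpectraStar:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic example: reference-method fat values (g/dL) and hypothetically
# biased analyzer readings for the same milk samples.
fat_ref = rng.uniform(2.0, 6.0, size=40)
fat_nir = 0.85 * fat_ref + 0.40 + rng.normal(0.0, 0.05, size=40)

# Fit the linear correction mapping analyzer readings to reference values.
slope, intercept = np.polyfit(fat_nir, fat_ref, 1)

def correct(reading):
    """Apply the fitted slope/offset correction to an analyzer reading."""
    return slope * reading + intercept
```

In practice such a correction would be fitted on one set of samples and validated on an independent set (here, unpasteurized vs. pasteurized milk), which is what the study's comparison against chemical reference methods accomplishes.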
Hohman, Timothy J; Bush, William S; Jiang, Lan; Brown-Gentry, Kristin D; Torstenson, Eric S; Dudek, Scott M; Mukherjee, Shubhabrata; Naj, Adam; Kunkle, Brian W; Ritchie, Marylyn D; Martin, Eden R; Schellenberg, Gerard D; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Haines, Jonathan L; Thornton-Wells, Tricia A
2016-02-01
Late-onset Alzheimer disease (AD) has a complex genetic etiology, involving locus heterogeneity, polygenic inheritance, and gene-gene interactions; however, the investigation of interactions in recent genome-wide association studies has been limited. We used a biological knowledge-driven approach to evaluate gene-gene interactions for consistency across 13 data sets from the Alzheimer Disease Genetics Consortium. Fifteen single nucleotide polymorphism (SNP)-SNP pairs within 3 gene-gene combinations were identified: SIRT1 × ABCB1, PSAP × PEBP4, and GRIN2B × ADRA1A. In addition, we extend a previously identified interaction from an endophenotype analysis between RYR3 × CACNA1C. Finally, post hoc gene expression analyses of the implicated SNPs further implicate SIRT1 and ABCB1, and implicate CDH23 which was most recently identified as an AD risk locus in an epigenetic analysis of AD. The observed interactions in this article highlight ways in which genotypic variation related to disease may depend on the genetic context in which it occurs. Further, our results highlight the utility of evaluating genetic interactions to explain additional variance in AD risk and identify novel molecular mechanisms of AD pathogenesis. PMID:26827652
ERIC Educational Resources Information Center
Lange, L. H.
1974-01-01
Five different methods for determining the maximizing condition for x(a - x) are presented. Included are the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
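A sketch of one such calculus-free argument, via the arithmetic-geometric mean inequality (whether this is among the five methods in the article is not stated here):

```latex
% For nonnegative u, v: \sqrt{uv} \le (u+v)/2, with equality iff u = v.
% Taking u = x and v = a - x (both nonnegative for 0 \le x \le a):
\[
x(a-x) \;\le\; \left(\frac{x + (a-x)}{2}\right)^{2} \;=\; \frac{a^{2}}{4},
\]
% with equality exactly when x = a - x, i.e. at x = a/2.
```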
NASA Astrophysics Data System (ADS)
Salvio, Alberto; Staub, Florian; Strumia, Alessandro; Urbano, Alfredo
2016-03-01
Motivated by the 750 GeV diphoton excess found at the LHC, we compute the maximal width into γγ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
All maximally entangling unitary operators
Cohen, Scott M.
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.
Maximizing the usefulness of hypnosis in forensic investigative settings.
Hibler, Neil S; Scheflin, Alan W
2012-07-01
This is an article written for mental health professionals interested in using investigative hypnosis with law enforcement agencies in the effort to enhance the memory of witnesses and victims. Discussion focuses on how to work with law enforcement agencies so as to control for factors that can interfere with recall. Specifics include what police need to know about how to conduct case review, to prepare interviewees, to conduct interviews, and what to do with the results. Case examples are used to illustrate applications of this guidance in actual investigations. PMID:22913226
Maximally polarized states for quantum light fields
Sanchez-Soto, Luis L.; Yustas, Eulogio C.; Bjoerk, Gunnar; Klimov, Andrei B.
2007-10-15
The degree of polarization of a quantum field can be defined as its distance to an appropriate set of states. When we take unpolarized states as this reference set, the states optimizing this degree for a fixed average number of photons N present a fairly symmetric, parabolic photon statistic, with a variance scaling as N². Although no standard optical process yields such a statistic, we show that, to an excellent approximation, a highly squeezed vacuum can be taken as maximally polarized. We also consider the distance of a field to the set of its SU(2) transforms, finding that certain linear superpositions of SU(2) coherent states make this degree unity.
Are Independent Probes Truly Independent?
ERIC Educational Resources Information Center
Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene
2009-01-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…
Basic principles of maximizing dental office productivity.
Mamoun, John
2012-01-01
To maximize office productivity, dentists should focus on performing tasks that only they can perform and not spend office hours performing tasks that can be delegated to non-dentist personnel. An important element of maximizing productivity is to arrange the schedule so that multiple patients are seated simultaneously in different operatories. Doing so allows the dentist to work on one patient in one operatory without needing to wait for local anesthetic to take effect on another patient in another operatory, or for assistants to perform tasks (such as cleaning up, taking radiographs, performing prophylaxis, or transporting and preparing equipment and supplies) in other operatories. Another way to improve productivity is to structure procedures so that fewer steps are needed to set up and implement them. In addition, during procedures, four-handed dental passing methods can be used to provide the dentist with supplies or equipment when needed. This article reviews basic principles of maximizing dental office productivity, based on the author's observations of business logistics used by various dental offices. PMID:22414506
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... systems advocacy--to maximize the leadership, empowerment, independence and productivity of individuals with significant disabilities and to promote...
Optimizing Population Variability to Maximize Benefit
Izu, Leighton T.; Bányász, Tamás; Chen-Izu, Ye
2015-01-01
Variability is inherent in any population, regardless of whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the “right mix” of individuals in a population. PMID:26650247
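The binary-decision model can be sketched as a short simulation of a "binary ruler": each individual answers yes/no depending on whether the stimulus exceeds its private threshold, and the population estimate is the fraction of yes answers. The Gaussian threshold population and the mean-squared-error benefit function below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

rng = np.random.default_rng(0)

def benefit(sigma, n_individuals=2000, n_stimuli=201):
    """Negative mean-squared error of the population estimate of a
    stimulus in [0, 1], when individual thresholds are drawn from
    N(0.5, sigma). Higher is better."""
    thresholds = rng.normal(0.5, sigma, size=n_individuals)
    stimuli = np.linspace(0.0, 1.0, n_stimuli)
    # Fraction of individuals answering "yes" (threshold below stimulus).
    estimates = (thresholds[None, :] < stimuli[:, None]).mean(axis=1)
    return -np.mean((estimates - stimuli) ** 2)
```

Evaluating `benefit` over a range of `sigma` exhibits the interior optimum claimed in the abstract: with near-zero variance every individual answers identically (a step function), with very large variance the estimate is nearly constant, and a moderate spread of thresholds minimizes the error.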
Simple conditions constraining the set of quantum correlations
NASA Astrophysics Data System (ADS)
de Vicente, Julio I.
2015-09-01
The characterization of the set of quantum correlations in Bell scenarios is a problem of paramount importance for both the foundations of quantum mechanics and quantum information processing in the device-independent scenario. However, a clear-cut (physical or mathematical) characterization of this set remains elusive and many of its properties are still unknown. We provide here simple and general analytical conditions that are necessary for an arbitrary bipartite behavior to be quantum. Although the conditions are not sufficient, we illustrate the strength and nontriviality of these conditions with a few examples. Moreover, we provide several applications of this result: we prove a quantitative separation of the quantum set from extremal nonlocal no-signaling behaviors in several general scenarios, we provide a relation to obtain Tsirelson bounds for arbitrary Bell inequalities and a construction of Bell expressions whose maximal quantum value is attained by a maximally entangled state of any given dimension.
Factors affecting maximal acid secretion
Desai, H. G.
1969-01-01
The mechanisms by which different factors affect the maximal acid secretion of the stomach are discussed with particular reference to nationality, sex, age, body weight or lean body mass, procedural details, mode of calculation, the nature, dose and route of administration of a stimulus, the synergistic action of another stimulus, drugs, hormones, electrolyte levels, anaemia or deficiency of the iron-dependent enzyme system, vagal continuity and parietal cell mass. PMID:4898322
RG flows of Quantum Einstein Gravity on maximally symmetric spaces
NASA Astrophysics Data System (ADS)
Demmel, Maximilian; Saueressig, Frank; Zanusso, Omar
2014-06-01
We use the Wetterich equation to study the renormalization group flow of f(R)-gravity in a three-dimensional, conformally reduced setting. Building on the exact heat kernel for maximally symmetric spaces, we obtain a partial differential equation which captures the scale-dependence of f(R) for positive and, for the first time, negative scalar curvature. The effects of different background topologies are studied in detail and it is shown that they affect the gravitational RG flow in a way that is not visible in finite-dimensional truncations. Thus, while featuring local background independence, the functional renormalization group equation is sensitive to the topological properties of the background. The detailed analytical and numerical analysis of the partial differential equation reveals two globally well-defined fixed functionals with at most a finite number of relevant deformations. Their properties are remarkably similar to two of the fixed points identified within the R²-truncation of full Quantum Einstein Gravity. As a byproduct, we obtain a nice illustration of how the functional renormalization group realizes the "integrating out" of fluctuation modes on the three-sphere.
NASA Astrophysics Data System (ADS)
Fraser, Gordon
2009-01-01
In his kind review of my biography of the Nobel laureate Abdus Salam (December 2008 pp45-46), John W Moffat wrongly claims that Salam had "independently thought of the idea of parity violation in weak interactions".
Sensitivity to conversational maxims in deaf and hearing children.
Surian, Luca; Tedoldi, Mariantonia; Siegal, Michael
2010-09-01
We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics. PMID:19719886
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks, robustness with respect to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that the algebraic connectivity is a good measure for network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight routes addition or deletion is first formulated. Three methods to optimize and analyze the network algebraic connectivity are proposed. The Modified Greedy Perturbation Algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near optimal solution with longer running time. The relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find the feasible solution. The simulation results present the trade-off among the three methods. The case study on the two air transportation networks of Virgin America and Southwest Airlines shows that the developed methods can be applied in real-world large-scale networks. The algebraic connectivity maximization problem is extended by adding the leg number constraint, which reflects the traveler's tolerance for the total number of connecting stops. The Binary Semi-Definite Programming (BSDP) with cutting plane method provides the optimal solution. The tabu search and 2-opt search heuristics can find the optimal solution in small scale networks and the near optimal solution in large scale networks. The third algebraic connectivity maximization problem with operating cost constraint is formulated. When the total operating cost budget is given, the number of the edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the
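A minimal sketch of the quantity being optimized (algebraic connectivity, the second-smallest eigenvalue of the graph Laplacian) together with a one-step greedy edge addition. This is a simplification for illustration only, not the MGP, WTS, or SDP methods developed in the work:

```python
import numpy as np
from itertools import combinations

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def greedy_add_edge(adj):
    """Exhaustively try every non-edge and return the single addition
    that maximizes algebraic connectivity (one greedy step)."""
    n = adj.shape[0]
    best, best_edge = -np.inf, None
    for i, j in combinations(range(n), 2):
        if adj[i, j] == 0:
            trial = adj.copy()
            trial[i, j] = trial[j, i] = 1
            lam2 = algebraic_connectivity(trial)
            if lam2 > best:
                best, best_edge = lam2, (i, j)
    return best_edge, best
```

For example, the three-node path 0-1-2 has algebraic connectivity 1; the only possible addition, edge (0, 2), closes the triangle and raises it to 3. The exhaustive scan is O(n²) eigen-decompositions per step, which is why the text develops faster perturbation-based and SDP-relaxation methods for large networks.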
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
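The core idea, Monte Carlo maximization of a cross-validated accuracy, can be sketched in a toy form using leave-one-out 1-nearest-neighbour accuracy and random label swaps. This is an illustrative reduction of the idea, not the published KODAMA procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

def loo_1nn_accuracy(X, labels):
    """Leave-one-out 1-nearest-neighbour accuracy of a labelling."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # exclude each point itself
    nearest = d.argmin(axis=1)
    return float((labels[nearest] == labels).mean())

def accuracy_maximization(X, labels, n_steps=500):
    """Monte Carlo maximization of cross-validated accuracy: propose
    swapping the labels of two random points (class sizes stay fixed)
    and keep the swap if accuracy does not decrease."""
    labels = labels.copy()
    acc = loo_1nn_accuracy(X, labels)
    for _ in range(n_steps):
        i, j = rng.choice(len(labels), size=2, replace=False)
        if labels[i] == labels[j]:
            continue
        labels[i], labels[j] = labels[j], labels[i]
        new_acc = loo_1nn_accuracy(X, labels)
        if new_acc >= acc:
            acc = new_acc
        else:
            labels[i], labels[j] = labels[j], labels[i]  # revert the swap
    return labels, acc

# Demo: two well-separated clusters, starting from a shuffled labelling.
X = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
               rng.normal(5.0, 0.3, (10, 2))])
init = rng.permutation(np.array([0] * 10 + [1] * 10))
final_labels, final_acc = accuracy_maximization(X, init)
```

Starting from an uninformative labelling, hill-climbing on the cross-validated accuracy recovers a labelling aligned with the cluster structure, which is the sense in which the classifier "discovers" the local topology of the data.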
Maximally coherent mixed states: Complementarity between maximal coherence and mixedness
NASA Astrophysics Data System (ADS)
Singh, Uttam; Bera, Manabendra Nath; Dhar, Himadri Shekhar; Pati, Arun Kumar
2015-05-01
Quantum coherence is a key element in topical research on quantum resource theories and a primary facilitator for design and implementation of quantum technologies. However, the resourcefulness of quantum coherence is severely restricted by environmental noise, which is indicated by the loss of information in a quantum system, measured in terms of its purity. In this work, we derive the limits imposed by the mixedness of a quantum system on the amount of quantum coherence that it can possess. We obtain an analytical trade-off between the two quantities that upper-bounds the maximum quantum coherence for a fixed mixedness in a system. This gives rise to a class of quantum states, "maximally coherent mixed states," whose coherence cannot be increased further under any purity-preserving operation. For the above class of states, quantum coherence and mixedness satisfy a complementarity relation, which is crucial to understand the interplay between a resource and noise in open quantum systems.
Maximal energy extraction under discrete diffusive exchange
Hay, M. J.; Schiff, J.; Fisch, N. J.
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
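As a toy version of the linear-programming viewpoint: if arbitrary doubly stochastic rearrangements of the state densities were allowed (a larger set than the diffusively reachable states studied in the text, so this yields only an upper bound), Birkhoff's theorem reduces the linear program to permutations, and the optimum pairs the largest densities with the lowest energies:

```python
import numpy as np

def max_extractable_energy(densities, energies):
    """Upper bound on extractable energy under arbitrary doubly
    stochastic rearrangement of state densities: since the plasma
    energy is linear in the densities, the optimum over the Birkhoff
    polytope is attained at a permutation, namely sorting densities
    in decreasing order against energies in increasing order."""
    d = np.asarray(densities, dtype=float)
    e = np.asarray(energies, dtype=float)
    e_init = d @ e
    e_min = np.sort(d)[::-1] @ np.sort(e)
    return e_init - e_min
```

For instance, densities [1, 2, 3] on energy levels [1, 2, 3] hold energy 14, while the best rearrangement holds 10, so at most 4 units are extractable; an already-optimal arrangement yields 0. The diffusive-exchange constraint of the paper restricts the reachable rearrangements further, which is where the full linear-programming treatment comes in.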
Mixtures of maximally entangled pure states
NASA Astrophysics Data System (ADS)
Flores, M. M.; Galapon, E. A.
2016-09-01
We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
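For two qubits the claims can be checked directly with the Peres-Horodecki (partial transpose) criterion, which is exact in dimension 2×2. The specific Bell-state mixtures below are illustrative examples, not the paper's general construction:

```python
import numpy as np

def bell_state(k):
    """The four two-qubit Bell states as real 4-vectors."""
    s = np.zeros(4)
    if k == 0: s[[0, 3]] = [1, 1]    # |00> + |11>
    if k == 1: s[[0, 3]] = [1, -1]   # |00> - |11>
    if k == 2: s[[1, 2]] = [1, 1]    # |01> + |10>
    if k == 3: s[[1, 2]] = [1, -1]   # |01> - |10>
    return s / np.sqrt(2)

def bell_mixture(weights):
    """Density matrix of a weighted mixture of the four Bell states."""
    rho = np.zeros((4, 4))
    for k, w in enumerate(weights):
        v = bell_state(k)
        rho += w * np.outer(v, v)
    return rho

def is_entangled(rho):
    """Peres-Horodecki criterion (exact for two qubits): rho is
    entangled iff its partial transpose has a negative eigenvalue."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return bool(np.linalg.eigvalsh(pt).min() < -1e-12)
```

Consistent with the abstract's observation about equal weighting: an unequal mixture of two Bell states (weights 0.7 and 0.3) remains entangled, while the equally weighted mixture of the same two states is separable.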
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake.
Convertino, V A
1997-02-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake
NASA Technical Reports Server (NTRS)
Convertino, V. A.
1997-01-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
ERIC Educational Resources Information Center
Kaplan, Suzanne; Wilson, Gordon
1978-01-01
The Independent Human Studies program at Schoolcraft College offers an alternative method of earning academic credits. Students delineate an area of study, pose research questions, gather resources, synthesize the information, state the thesis, choose the method of presentation, set schedules, and take responsibility for meeting deadlines. (MB)
Maximal acceleration and radiative processes
NASA Astrophysics Data System (ADS)
Papini, Giorgio
2015-08-01
We derive the radiation characteristics of an accelerated, charged particle in a model due to Caianiello in which the proper acceleration of a particle of mass m has the upper limit 𝒜_m = 2mc³/ℏ. We find two power laws, one applicable to lower accelerations, the other more suitable for accelerations closer to 𝒜_m and to the related physical singularity in the Ricci scalar. Geometrical constraints and power spectra are also discussed. By comparing the power laws due to the maximal acceleration (MA) with that for particles in gravitational fields, we find that the model of Caianiello allows, in principle, the use of charged particles as tools to distinguish inertial from gravitational fields locally.
Lighting spectrum to maximize colorfulness.
Masuda, Osamu; Nascimento, Sérgio M C
2012-02-01
The spectrum of modern illumination can be computationally tailored considering the visual effects of lighting. We investigated the spectral profiles of the white illumination maximizing the theoretical limits of the perceivable object colors. A large number of metamers with various degrees of smoothness were generated on and around the Planckian locus, and the volume in the CIELAB space of the optimal colors for each metamer was calculated. The optimal spectrum was found at the color temperature of around 5.7×10³ K, had three peaks at both ends of the visible band and at around 510 nm, and was 25% better than daylight and 35% better than Thornton's prime color lamp. PMID:22297368
NASA Astrophysics Data System (ADS)
Annan, James; Hargreaves, Julia
2016-04-01
In order to perform any Bayesian processing of a model ensemble, we need a prior over the ensemble members. In the case of multimodel ensembles such as CMIP, the historical approach of "model democracy" (i.e. equal weight for all models in the sample) is no longer credible (if it ever was) due to model duplication and inbreeding. The question of "model independence" is central to the question of prior weights. However, although this question has been repeatedly raised, it has not yet been satisfactorily addressed. Here I will discuss the issue of independence and present a theoretical foundation for understanding and analysing the ensemble in this context. I will also present some simple examples showing how these ideas may be applied and developed.
Varieties of maximal line subbundles
NASA Astrophysics Data System (ADS)
Oxbury, W. M.
2000-07-01
The point of this note is to make an observation concerning the variety M(E) parametrizing line subbundles of maximal degree in a generic stable vector bundle E over an algebraic curve C. M(E) is smooth and projective and its dimension is known in terms of the rank and degree of E and the genus of C (see Section 1). Our observation (Theorem 3·1) is that it has exactly the Chern numbers of an étale cover of the symmetric product S^δC, where δ = dim M(E). This suggests looking for a natural map M(E) → S^δC; however, it is not clear what such a map should be. Indeed, we exhibit an example in which M(E) is connected and deforms non-trivially with E, while there are only finitely many isomorphism classes of étale cover of the symmetric product. This shows that for a general deformation in the family M(E) cannot be such a cover (see Section 4). One may conjecture that M(E) is always connected. This would follow from ampleness of a certain Picard-type bundle on the Jacobian and there seems to be some evidence for expecting this, though we do not pursue this question here. Note that by forgetting the inclusion of a maximal line subbundle in E we get a natural map from M(E) to the Jacobian whose image W(E) is analogous to the classical (Brill-Noether) varieties of special line bundles. (In this sense M(E) is precisely a generalization of the symmetric products of C.) In Section 2 we give some results on W(E) which generalise standard Brill-Noether properties. These are due largely to Laumon, to whom the author is grateful for the reference [9].
Reif, Maria M.; Huenenberger, Philippe H.
2011-04-14
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and treatment of electrostatic interactions used during these simulations. However, as shown recently [M. A. Kastenholz and P. H. Huenenberger, J. Chem. Phys. 124, 224501 (2006); M. M. Reif and P. H. Huenenberger, J. Chem. Phys. 134, 144103 (2010)], the application of appropriate correction terms permits one to obtain methodology-independent results. The corrected values are then exclusively characteristic of the underlying molecular model including in particular the ion-solvent van der Waals interaction parameters, determining the effective ion size and the magnitude of its dispersion interactions. In the present study, the comparison of calculated (corrected) hydration free energies with experimental data (along with the consideration of ionic polarizabilities) is used to calibrate new sets of ion-solvent van der Waals (Lennard-Jones) interaction parameters for the alkali (Li⁺, Na⁺, K⁺, Rb⁺, Cs⁺) and halide (F⁻, Cl⁻, Br⁻, I⁻) ions along with either the SPC or the SPC/E water models. The experimental dataset is defined by conventional single-ion hydration free energies [Tissandier et al., J. Phys. Chem. A 102, 7787 (1998); Fawcett, J. Phys. Chem. B 103, 11181] along with three plausible choices for the (experimentally elusive) value of the absolute (intrinsic) hydration free energy of the proton, namely, ΔG⊖hyd[H⁺] = −1100, −1075, or −1050 kJ mol⁻¹, resulting in three sets L, M, and H for the SPC water model and three sets L_E, M_E, and H_E for the SPC/E water model (alternative sets can easily be interpolated to intermediate ΔG⊖hyd[H⁺] values). The residual sensitivity of the calculated (corrected) hydration free energies on the volume-pressure boundary conditions and on the effective
Larsen, Filip J; Weitzberg, Eddie; Lundberg, Jon O; Ekblom, Björn
2010-01-15
The anion nitrate, abundant in our diet, has recently emerged as a major pool of nitric oxide (NO) synthase-independent NO production. Nitrate is reduced stepwise in vivo to nitrite and then NO and possibly other bioactive nitrogen oxides. This reductive pathway is enhanced during low oxygen tension and acidosis. A recent study shows a reduction in oxygen consumption during submaximal exercise attributable to dietary nitrate. We went on to study the effects of dietary nitrate on various physiological and biochemical parameters during maximal exercise. Nine healthy, nonsmoking volunteers (age 30±2.3 years, VO2max 3.72±0.33 L/min) participated in this study, which had a randomized, double-blind crossover design. Subjects received dietary supplementation with sodium nitrate (0.1 mmol/kg/day) or placebo (NaCl) for 2 days before the test. This dose corresponds to the amount found in 100-300 g of a nitrate-rich vegetable such as spinach or beetroot. The maximal exercise tests consisted of incremental exercise to exhaustion with combined arm and leg cranking on two separate ergometers. Dietary nitrate reduced VO2max from 3.72±0.33 to 3.62±0.31 L/min, P<0.05. Despite the reduction in VO2max, time to exhaustion tended to increase after nitrate supplementation (524±31 vs 563±30 s, P=0.13). There was a correlation between the change in time to exhaustion and the change in VO2max (R²=0.47, P=0.04). A moderate dietary dose of nitrate significantly reduces VO2max during maximal exercise using a large active muscle mass. This reduction occurred with a trend toward increased time to exhaustion, implying that two separate mechanisms are involved: one that reduces VO2max and another that improves the energetic function of the working muscles. PMID:19913611
Maximal dinucleotide and trinucleotide circular codes.
Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe
2016-01-21
We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code with fewer than 6 elements (the maximum size of dinucleotide circular codes). On the other hand, a computer calculation shows that there are maximal trinucleotide circular codes with fewer than 20 elements (the maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18 and 19 elements, and no maximal trinucleotide circular code with fewer than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined. PMID:26382231
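The circularity property underlying these counts can be checked mechanically. The sketch below uses a graph criterion from the trinucleotide circular code literature (a set of trinucleotides is circular if and only if an associated directed graph is acyclic); the function names and toy examples are our own illustration, not the paper's code.

```python
# Sketch: test whether a set of trinucleotides is a circular code via the
# graph criterion (code circular <=> associated directed graph is acyclic).
# For each trinucleotide N1N2N3, add edges N1 -> N2N3 and N1N2 -> N3.

def code_graph(code):
    """Build the directed graph associated with a trinucleotide code."""
    edges = {}
    for w in code:
        assert len(w) == 3
        edges.setdefault(w[0], set()).add(w[1:])   # N1 -> N2N3
        edges.setdefault(w[:2], set()).add(w[2])   # N1N2 -> N3
    return edges

def is_acyclic(edges):
    """Depth-first search for a directed cycle (three-color DFS)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}
    def visit(u):
        color[u] = GRAY
        for v in edges.get(u, ()):
            c = color.get(v, WHITE)
            if c == GRAY or (c == WHITE and not visit(v)):
                return False          # back edge found: cycle
        color[u] = BLACK
        return True
    return all(visit(u) for u in list(edges) if color.get(u, WHITE) == WHITE)

def is_circular(code):
    return is_acyclic(code_graph(code))

print(is_circular({"ACG"}))   # True: a single non-periodic word
print(is_circular({"AAA"}))   # False: periodic word gives the cycle A -> AA -> A
```

The periodic word AAA fails because its graph contains both A → AA and AA → A, closing a cycle, which matches the fact that no circular code can contain a periodic trinucleotide.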
Many parameter sets in a multicompartment model oscillator are robust to temperature perturbations.
Caplan, Jonathan S; Williams, Alex H; Marder, Eve
2014-04-01
Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca(2+) buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators. PMID:24695714
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Maximizing the optical network capacity
Bayvel, Polina; Maher, Robert; Liga, Gabriele; Shevchenko, Nikita A.; Lavery, Domaniç; Killey, Robert I.
2016-01-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
Maximal Oxygen Intake and Maximal Work Performance of Active College Women.
ERIC Educational Resources Information Center
Higgs, Susanne L.
Maximal oxygen intake and associated physiological variables were measured during strenuous exercise on women subjects (N=20 physical education majors). Following assessment of maximal oxygen intake, all subjects underwent a performance test at the work level which had elicited their maximal oxygen intake. Mean maximal oxygen intake was 41.32…
Moving multiple sinks through wireless sensor networks for lifetime maximization.
Petrioli, Chiara; Carosi, Alessio; Basagni, Stefano; Phillips, Cynthia Ann
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with a high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules with network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them
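To make the configuration-based LP concrete, here is a hedged toy instance (three sensors, two fixed candidate sink configurations, invented energy numbers): choose how long to dwell in each configuration so that no sensor exceeds its battery budget, maximizing total lifetime. The paper's actual LP additionally generates configurations on demand through the p-median separation oracle, which this sketch omits.

```python
# Toy version of the lifetime LP: dwell times t_k per sink configuration,
# maximize sum(t_k) subject to each node's energy budget.
import numpy as np
from scipy.optimize import linprog

# e[i][k]: energy node i burns per unit time under sink configuration k
e = np.array([[3.0, 1.0],
              [1.0, 3.0],
              [2.0, 2.0]])
E = np.array([6.0, 6.0, 6.0])   # per-node battery budgets

# maximize t1 + t2  <=>  minimize -(t1 + t2)  s.t.  e @ t <= E, t >= 0
res = linprog(c=[-1.0, -1.0], A_ub=e, b_ub=E, bounds=[(0, None)] * 2)
print(res.x, -res.fun)          # dwell times per configuration, total lifetime
```

In this instance the optimum splits time evenly (1.5 units in each configuration, lifetime 3.0): either configuration alone drains its heavily loaded node after 2 units, so alternating sinks balances the relay burden, which is exactly the intuition the paper formalizes.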
NASA Technical Reports Server (NTRS)
2005-01-01
This is the Spirit 'Independence' panorama, acquired on martian days, or sols, 536 to 543 (July 6 to 13, 2005), from a position in the 'Columbia Hills' near the summit of 'Husband Hill.' The summit of 'Husband Hill' is the peak near the right side of this panorama and is about 100 meters (328 feet) away from the rover and about 30 meters (98 feet) higher in elevation. The rocky outcrops downhill and on the left side of this mosaic include 'Larry's Lookout' and 'Cumberland Ridge,' which Spirit explored in April, May, and June of 2005.
The panorama spans 360 degrees and consists of 108 individual images, each acquired with five filters of the rover's panoramic camera. The approximate true color of the mosaic was generated using the camera's 750-, 530-, and 480-nanometer filters. During the 8 martian days, or sols, that it took to acquire this image, the lighting varied considerably, partly because of imaging at different times of sol, and partly because of small sol-to-sol variations in the dustiness of the atmosphere. These slight changes produced some image seams and rock shadows. These seams have been eliminated from the sky portion of the mosaic to better simulate the vista a person standing on Mars would see. However, it is often not possible or practical to smooth out such seams for regions of rock, soil, rover tracks or solar panels. Such is the nature of acquiring and assembling large panoramas from the rovers.
Predicted maximal heart rate for upper body exercise testing.
Hill, M; Talbot, C; Price, M
2016-03-01
Age-predicted maximal heart rate (HRmax) equations are commonly used for prescribing exercise regimens, as criteria for achieving maximal exertion, and for diagnostic exercise testing. Despite the growing popularity of upper body exercise in both healthy and clinical settings, no recommendations are available for exercise modes using the smaller upper body muscle mass. The purpose of this study was to determine how well commonly used age-adjusted prediction equations for HRmax estimate actual HRmax for upper body exercise in healthy young and older adults. A total of 30 young (age: 20 ± 2 years, height: 171.9 ± 32.8 cm, mass: 77.7 ± 12.6 kg) and 20 elderly adults (age: 66 ± 6 years, height: 162 ± 8.1 cm, mass: 65.3 ± 12.3 kg) undertook maximal incremental exercise tests on a conventional arm crank ergometer. Age-adjusted maximal heart rate was calculated using prediction equations based on leg exercise and compared with measured HRmax data for the arms. Maximal HR for arm exercise was significantly overpredicted compared with age-adjusted prediction equations in both young and older adults. Subtracting 10-20 beats·min⁻¹ from conventional prediction equations provides a reasonable estimate of HRmax for upper body exercise in healthy older and younger adults. PMID:25319169
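As a numeric illustration of the recommendation, the snippet below applies two standard leg-based prediction equations (the classical 220 − age and Tanaka's 208 − 0.7 × age) and the study's suggested 10-20 beats·min⁻¹ downward adjustment for arm cranking; the 15-beat midpoint is our choice, not a value from the paper.

```python
# Age-predicted HRmax for leg exercise, plus the arm-exercise adjustment
# suggested by the study (subtract 10-20 beats/min; midpoint used here).

def hrmax_legs_classic(age):
    """Classical age-predicted maximum: 220 - age (beats/min)."""
    return 220 - age

def hrmax_legs_tanaka(age):
    """Tanaka et al. equation: 208 - 0.7 * age (beats/min)."""
    return 208 - 0.7 * age

def hrmax_arms(age, adjustment=15):
    """Upper-body estimate: leg prediction minus ~10-20 beats/min."""
    return hrmax_legs_classic(age) - adjustment

print(hrmax_legs_classic(20), hrmax_arms(20))            # 200 185
print(round(hrmax_legs_tanaka(66), 1), hrmax_arms(66))   # 161.8 139
```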
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
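For orientation, here is a minimal dart-throwing sketch of the kind of maximal Poisson-disk packing the framework starts from (unit square, fixed radius): a naive quadratic rejection loop that only approximates maximality after many attempts, with none of the spatial acceleration or the relocate/inject/eject operations the paper adds.

```python
# Naive dart throwing toward a maximal Poisson-disk packing in the unit
# square: accept a candidate only if it is at least r from every accepted
# center. Many attempts approximate (but do not guarantee) maximality.
import random, math

def mps(r, attempts=20000, seed=1):
    random.seed(seed)
    pts = []
    for _ in range(attempts):
        p = (random.random(), random.random())
        if all(math.dist(p, q) >= r for q in pts):  # conflict-free check
            pts.append(p)
    return pts

pts = mps(r=0.1)
print(len(pts), "disks accepted")
```

Every accepted pair respects the minimum separation distance r, which is the "conflict-free" invariant the density-tuning operations must preserve while adding or removing disks.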
Does mental exertion alter maximal muscle activation?
Rozand, Vianney; Pageaux, Benjamin; Marcora, Samuele M.; Papaxanthis, Charalambos; Lepers, Romuald
2014-01-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed in a randomized order three separate mental exertion conditions lasting 27 min each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with 10 intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 min). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors. PMID:25309404
Comparison of Hardy-Littlewood and dyadic maximal functions on spaces of homogeneous type
NASA Astrophysics Data System (ADS)
Aimar, Hugo; Bernardis, Ana; Iaffei, Bibiana
2005-12-01
We obtain a comparison of the level sets for two maximal functions on a space of homogeneous type: the Hardy-Littlewood maximal function of mean values over balls and the dyadic maximal function of mean values over the dyadic sets introduced by M. Christ in [M. Christ, A T(b) theorem with remarks on analytic capacity and the Cauchy integral, Colloq. Math. 60/61 (1990) 601-628]. As applications to the theory of Ap weights on this setting, we compare the standard and the dyadic Muckenhoupt classes and we give an alternative proof of reverse Hölder type inequalities.
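For reference, the two maximal operators being compared can be written, in standard notation on a space of homogeneous type (X, d, μ), as follows, where the supremum in the second operator runs over Christ's family of dyadic sets:

```latex
% Hardy-Littlewood maximal function: averages over balls containing x
Mf(x) = \sup_{B \ni x} \frac{1}{\mu(B)} \int_B |f| \, d\mu
% Dyadic maximal function: averages over Christ's dyadic sets containing x
M_d f(x) = \sup_{\substack{Q \in \mathcal{D} \\ Q \ni x}} \frac{1}{\mu(Q)} \int_Q |f| \, d\mu
```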
Inflation in maximal gauged supergravities
Kodama, Hideo; Nozawa, Masato
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in maximal gauged supergravity. In this paper, we address this problem in the framework of the recently discovered one-parameter deformation of the SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes values in the sum of the 36 and 36′ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic form for the SO(3)×SO(3)-invariant subsectors of the SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of the 6-dimensional scalar fields in this sector near the Dall'Agata-Inverso de Sitter critical point, at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall'Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639 ± 0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-to-scalar ratio predicted by this model is around 10⁻³ and is close to the value in the Starobinsky model.
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
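The least-squares step described above can be sketched as follows on synthetic data (the real tool works from ground ephemeris nodal data inside an Excel spreadsheet; the orbit rate, sample spacing, and noise level here are invented for illustration):

```python
# Sketch of the EMMIE fitting idea: least-squares fit of inferred mean
# anomaly vs. time; the slope of the fit is the mean-motion estimate.
import numpy as np

n_true = 2 * np.pi / 86164.0          # rad/s: ~one revolution per sidereal day
t = np.linspace(0, 5 * 86400, 50)     # five days of samples
rng = np.random.default_rng(0)
M = n_true * t + rng.normal(0, 1e-4, t.size)   # noisy inferred mean anomaly

n_est, M0 = np.polyfit(t, M, 1)       # degree-1 fit: slope = mean motion
print(n_est)
```

The fitted slope can then be perturbed by hand, replotting predicted downtrack position difference each time, which is the manual iteration loop the abstract describes.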
Specificity of a Maximal Step Exercise Test
ERIC Educational Resources Information Center
Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.
2007-01-01
To adhere to the principle of "exercise specificity," exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO₂) when tested using an activity specific, maximal step…
The maximal affinity of ligands
Kuntz, I. D.; Chen, K.; Sharp, K. A.; Kollman, P. A.
1999-01-01
We explore the question of what are the best ligands for macromolecular targets. A survey of experimental data on a large number of the strongest-binding ligands indicates that the free energy of binding increases with the number of nonhydrogen atoms with an initial slope of ≈−1.5 kcal/mol (1 cal = 4.18 J) per atom. For ligands that contain more than 15 nonhydrogen atoms, the free energy of binding increases very little with relative molecular mass. This nonlinearity is largely ascribed to nonthermodynamic factors. An analysis of the dominant interactions suggests that van der Waals interactions and hydrophobic effects provide a reasonable basis for understanding binding affinities across the entire set of ligands. Interesting outliers that bind unusually strongly on a per atom basis include metal ions, covalently attached ligands, and a few well known complexes such as biotin–avidin. PMID:10468550
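A deliberately crude numeric caricature of the reported trend (roughly −1.5 kcal/mol per heavy atom up to about 15 atoms, then nearly flat) can make the saturation concrete; the hard cutoff below is our simplification of the survey's smooth curve, not a fitted model.

```python
# Back-of-envelope version of the survey's trend: binding free energy gains
# ~1.5 kcal/mol per nonhydrogen (heavy) atom up to ~15 atoms, then plateaus.

def max_affinity_kcal(n_heavy_atoms, slope=-1.5, plateau_atoms=15):
    """Caricatured best-case binding free energy (kcal/mol)."""
    return slope * min(n_heavy_atoms, plateau_atoms)

print(max_affinity_kcal(10))   # -15.0
print(max_affinity_kcal(30))   # -22.5: no further gain beyond ~15 atoms
```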
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. PMID:24530825
Independent task Fourier filters
NASA Astrophysics Data System (ADS)
Caulfield, H. John
2001-11-01
Since the early 1960s, a major part of optical computing systems has been Fourier pattern recognition, which takes advantage of high-speed filter changes to enable powerful nonlinear discrimination in 'real time.' Because each filter has a task quite independent of the tasks of the other filters, they can be applied and evaluated in parallel or, in a simple approach I describe, in sequence very rapidly. Thus I use the name ITFF (independent task Fourier filter). These filters can also break very complex discrimination tasks into easily handled parts, so the wonderful space-invariance properties of Fourier filtering need not be sacrificed to achieve high discrimination and good generalizability even for ultracomplex discrimination problems. The training procedure proceeds sequentially, as the task for a given filter is defined a posteriori by declaring it to be the discrimination of particular members of set A from all members of set B with sufficient margin. That is, we set the threshold to achieve the desired margin and note the A members discriminated by that threshold. Discriminating those A members from all members of B becomes the task of that filter. Those A members are then removed from set A, so no other filter will be asked to perform that already accomplished task.
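The sequential training loop described above can be sketched as follows; the scalar scores stand in for correlation-filter outputs, and all names and numbers are illustrative rather than taken from the paper.

```python
# Sketch of one step of sequential ITFF training: pick a threshold that
# clears every B sample by a margin, capture whichever A samples exceed it,
# and retire those A samples so later filters handle the rest.

def train_itff_step(scores_A, scores_B, margin=0.1):
    """scores_*: dict sample -> filter response for the current filter."""
    threshold = max(scores_B.values()) + margin   # clear all of B by `margin`
    captured = {a for a, s in scores_A.items() if s >= threshold}
    return threshold, captured

A = {"a1": 0.9, "a2": 0.55, "a3": 0.8}
B = {"b1": 0.4, "b2": 0.5}

thr, done = train_itff_step(A, B)
print(sorted(done))   # ['a1', 'a3']; a2 is left for the next filter
```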
The effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake
Oh, Deuk-Ja; Hong, Hyeon-Ok; Lee, Bo-Ae
2016-01-01
The purpose of this study is to investigate the effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake. To achieve the purpose of the study, a total of 30 subjects were selected, including 15 people who performed continued regular exercises and 15 people as the control group. With regard to data processing, the IBM SPSS Statistics ver. 21.0 was used to calculate the mean and standard deviation. The difference of mean change between groups was verified through an independent t-test. As a result, there were significant differences in resting heart rate, maximal heart rate, maximal systolic blood pressure, and maximal oxygen uptake. However, the maximal systolic blood pressure was found to be an exercise-induced high blood pressure. Thus, it is thought that a risk diagnosis for it through a regular exercise stress test is necessary. PMID:26933659
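The group comparison described (mean and standard deviation, then an independent t-test between exercisers and controls) can be reproduced in outline with SciPy; the resting-heart-rate values below are synthetic stand-ins, not the study's data.

```python
# Independent two-sample t-test on synthetic resting heart rates,
# mirroring the exerciser-vs-control comparison in the abstract.
import numpy as np
from scipy import stats

exercisers = np.array([52, 55, 50, 58, 54, 53, 51, 56, 49, 57, 52, 55, 53, 50, 54])
controls   = np.array([68, 72, 65, 70, 74, 69, 66, 71, 73, 67, 70, 68, 72, 69, 71])

t, p = stats.ttest_ind(exercisers, controls)   # independent t-test
print(f"mean+/-SD: {exercisers.mean():.1f}+/-{exercisers.std(ddof=1):.1f} "
      f"vs {controls.mean():.1f}+/-{controls.std(ddof=1):.1f}, "
      f"t={t:.2f}, p={p:.2g}")
```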
Maximizing your return on people.
Bassi, Laurie; McMurrer, Daniel
2007-03-01
Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture. PMID:17348175
Matching, maximizing, and hill-climbing
Hinson, John M.; Staddon, J. E. R.
1983-01-01
In simple situations, animals consistently choose the better of two alternatives. On concurrent variable-interval variable-interval and variable-interval variable-ratio schedules, they approximately match aggregate choice and reinforcement ratios. The matching law attempts to explain the latter result but does not address the former. Hill-climbing rules such as momentary maximizing can account for both. We show that momentary maximizing constrains molar choice to approximate matching; that molar choice covaries with pigeons' momentary-maximizing estimate; and that the “generalized matching law” follows from almost any hill-climbing rule. PMID:16812350
Are all maximally entangled states pure?
NASA Astrophysics Data System (ADS)
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
MAXIM Pathfinder x-ray interferometry mission
NASA Astrophysics Data System (ADS)
Gendreau, Keith C.; Cash, Webster C.; Shipley, Ann F.; White, Nicholas
2003-03-01
The MAXIM Pathfinder (MP) mission is under study as a scientific and technical stepping stone for the full MAXIM X-ray interferometry mission. While full MAXIM will resolve the event horizons of black holes with 0.1 microarcsecond imaging, MP will address scientific and technical issues as a 100 microarcsecond imager with some capabilities to resolve microarcsecond structure. We will present the primary science goals of MP, which include resolving stellar coronae and distinguishing between jets and accretion disks in AGN. This paper will also present the baseline design of MP and overview the challenging technical requirements and solutions for formation flying, target acquisition, and metrology.
Maximal hypersurfaces in asymptotically stationary spacetimes
NASA Astrophysics Data System (ADS)
Chrusciel, Piotr T.; Wald, Robert M.
1992-12-01
The purpose of this work is to extend the results on the existence of maximal hypersurfaces to encompass some situations considered by other authors. The existence of maximal hypersurfaces in asymptotically stationary spacetimes is proven. Existence of maximal surfaces and of foliations by maximal hypersurfaces is proven in two classes of asymptotically flat spacetimes which possess a one-parameter group of isometries whose orbits are timelike 'near infinity'. The first class consists of strongly causal asymptotically flat spacetimes which contain no 'black hole or white hole' (but may contain 'ergoregions' where the Killing orbits fail to be timelike). The second class of spacetimes possesses a black hole and a white hole, with the black and white hole horizons intersecting in a compact 2-surface S.
Gaussian maximally multipartite-entangled states
Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano
2009-12-15
We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n<=7.
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits. PMID:25899152
AUC-Maximizing Ensembles through Metalearning
LeDell, Erin; van der Laan, Mark J.; Peterson, Maya
2016-01-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
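The ensemble-weight search described above can be illustrated with a toy metalearner. This is a hedged sketch, not the paper's Super Learner implementation: the data, the three base-learner score columns, and the choice of Nelder-Mead are all invented for the example; only `roc_auc_score` (scikit-learn) and `minimize` (SciPy) are real library calls.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Toy data: binary labels and cross-validated scores from 3 hypothetical base learners.
y = rng.integers(0, 2, size=500)
preds = np.column_stack([
    y * 0.6 + rng.normal(0, 0.5, 500),   # informative learner
    y * 0.3 + rng.normal(0, 0.5, 500),   # weaker learner
    rng.normal(0, 1, 500),               # pure-noise learner
])

def neg_auc(w):
    """Negative AUC of a convex combination of base-learner scores."""
    w = np.abs(w) / np.abs(w).sum()      # project weights onto the simplex
    return -roc_auc_score(y, preds @ w)

# Derivative-free search, since AUC is a non-smooth function of the weights.
res = minimize(neg_auc, x0=np.ones(3) / 3, method="Nelder-Mead")
w_opt = np.abs(res.x) / np.abs(res.x).sum()
print("weights:", w_opt.round(3), "ensemble AUC:", round(-res.fun, 3))
```

The simplex projection inside the objective keeps the combination a proper weighted average, mirroring the convex-combination constraint typical of Super Learner metalearning.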
NASA Astrophysics Data System (ADS)
Jois, Manjunath Holaykoppa Nanjunda
The conventional Influence Maximization problem is the problem of finding such a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with the constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning the probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. Such a variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all the possible teams is a hard task considering that the feasible team count grows exponentially with the network size. In order to address this challenge, we develop a column generation based approach to solve the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only such teams which can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and perform computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve the otherwise hard to solve influence maximization problems.
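For context, the classic single-team baseline underlying such models is greedy influence maximization under an independent-cascade spread model. The sketch below shows only that baseline, not the authors' multi-team column-generation LP, and the toy graph and propagation probability are invented.

```python
import random

def simulate_ic(graph, seeds, p=0.1, rng=random.Random(0)):
    """One independent-cascade run; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_seeds(graph, k, trials=200, p=0.1):
    """Greedily add the node with the largest Monte Carlo expected spread."""
    seeds = []
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    for _ in range(k):
        best, best_gain = None, -1.0
        for cand in nodes - set(seeds):
            gain = sum(simulate_ic(graph, seeds + [cand], p)
                       for _ in range(trials)) / trials
            if gain > best_gain:
                best, best_gain = cand, gain
        seeds.append(best)
    return seeds

# A small invented network given as adjacency lists.
g = {0: [1, 2, 3], 1: [4], 2: [4, 5], 3: [6], 4: [7], 5: [7], 6: [7]}
print(greedy_seeds(g, 2))
```

The column-generation approach in the paper replaces this single greedy team with many candidate teams priced into an LP, but the Monte Carlo spread evaluation above is the common computational core.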
Tseng, Kuo-Wei; Tseng, Wei-Chin; Lin, Ming-Ju; Chen, Hsin-Lian; Nosaka, Kazunori; Chen, Trevor C
2016-01-01
This study investigated whether maximal voluntary isometric contractions (MVIC) performed before maximal eccentric contractions (MaxEC) would attenuate muscle damage of the knee extensors. Untrained men were assigned to an experimental group that performed 6 sets of 10 MVIC at 90° knee flexion 2 weeks before 6 sets of 10 MaxEC or to a control group that performed MaxEC only (n = 13/group). Changes in muscle damage markers were assessed before to 5 days after each exercise. Small but significant changes in maximal voluntary concentric contraction torque, range of motion (ROM) and plasma creatine kinase (CK) activity were evident at immediately to 2 days post-MVIC (p < 0.05), but other variables (e.g. thigh girth, myoglobin concentration, B-mode echo intensity) did not change significantly. Changes in all variables after MaxEC were smaller (p < 0.05) by 45% (soreness)-67% (CK) for the experimental than the control group. These results suggest that MVIC conferred a potent protective effect against MaxEC-induced muscle damage. PMID:27366814
Associations of maximal strength and muscular endurance with cardiovascular risk factors.
Vaara, J P; Fogelholm, M; Vasankari, T; Santtila, M; Häkkinen, K; Kyröläinen, H
2014-04-01
The aim was to study the associations of maximal strength and muscular endurance with single and clustered cardiovascular risk factors. Muscular endurance, maximal strength, cardiorespiratory fitness and waist circumference were measured in 686 young men (25±5 years). Cardiovascular risk factors (plasma glucose, serum high- and low-density lipoprotein cholesterol, triglycerides, blood pressure) were determined. The risk factors were transformed to z-scores, and their mean formed a clustered cardiovascular risk factor score. Muscular endurance was inversely associated with triglycerides, s-LDL-cholesterol, glucose and blood pressure (β=-0.09 to -0.23, p<0.05), and positively with s-HDL cholesterol (β=0.17, p<0.001) independent of cardiorespiratory fitness. Muscular endurance was negatively associated with the clustered cardiovascular risk factor independent of cardiorespiratory fitness (β=-0.26, p<0.05), whereas maximal strength was not associated with any of the cardiovascular risk factors or the clustered cardiovascular risk factor independent of cardiorespiratory fitness. Furthermore, cardiorespiratory fitness was inversely associated with triglycerides, s-LDL-cholesterol and the clustered cardiovascular risk factor (β=-0.14 to -0.24, p<0.005), as well as positively with s-HDL cholesterol (β=0.11, p<0.05) independent of muscular fitness. This cross-sectional study demonstrated that in young men muscular endurance and cardiorespiratory fitness were independently associated with the clustering of cardiovascular risk factors, whereas maximal strength was not. PMID:24022567
Independence Generalizing Monotone and Boolean Independences
NASA Astrophysics Data System (ADS)
Hasebe, Takahiro
2011-01-01
We define conditionally monotone independence in two states which interpolates monotone and Boolean ones. This independence is associative, and therefore leads to a natural probability theory in a non-commutative algebra.
Maximal Holevo Quantity Based on Weak Measurements
Wang, Yao-Kun; Fei, Shao-Ming; Wang, Zhi-Xi; Cao, Jun-Peng; Fan, Heng
2015-01-01
The Holevo bound is a keystone in many applications of quantum information theory. We propose the “maximal Holevo quantity for weak measurements” as a generalization of the maximal Holevo quantity, which is defined via optimal projective measurements. Weak measurements are called for when only weak measurements can be performed, for example because the system is macroscopic, or when one intentionally uses them so that the disturbance on the measured system can be controlled, for example in quantum key distribution protocols. We systematically evaluate the maximal Holevo quantity for weak measurements for Bell-diagonal states and find a series of results. Furthermore, we find that weak measurements can be realized by noise and projective measurements. PMID:26090962
Caffeine, maximal power output and fatigue.
Williams, J H; Signorile, J F; Barnes, W S; Henrich, T W
1988-01-01
The purpose of this investigation was to determine the effects of caffeine ingestion on maximal power output and fatigue during short term, high intensity exercise. Nine adult males performed 15 s maximal exercise bouts 60 min after ingestion of caffeine (7 mg.kg-1) or placebo. Exercise bouts were carried out on a modified cycle ergometer which allowed power output to be computed for each one-half pedal stroke via microcomputer. Peak power output under caffeine conditions was not significantly different from that obtained following placebo ingestion. Similarly, time to peak power, total work, power fatigue index and power fatigue rate did not differ significantly between caffeine and placebo conditions. These results suggest that caffeine ingestion does not increase one's maximal ability to generate power. Further, caffeine does not alter the rate or magnitude of fatigue during high intensity, dynamic exercise. PMID:3228680
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
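A minimal toy version of the greedy rule above (fixate next where residual uncertainty is smallest) might look as follows. The 1-D uncertainty map, the Gaussian eccentricity fall-off, and all constants are assumptions for illustration, not the authors' model.

```python
import numpy as np

# Toy 1-D "stimulus": per-location uncertainty (arbitrary units).
uncertainty = np.array([0.1, 0.8, 0.2, 0.9, 0.7, 0.1, 0.6])
positions = np.arange(len(uncertainty))

def residual(unc, fix):
    """Uncertainty left after fixating `fix`: the fraction of uncertainty
    removed falls off with eccentricity from the fovea (Gaussian profile)."""
    resolution = np.exp(-0.5 * (positions - fix) ** 2 / 1.5)
    return unc * (1 - resolution)

fixations = []
unc = uncertainty.copy()
for _ in range(3):
    # Greedy information maximization: pick the fixation that minimizes
    # the total uncertainty remaining afterwards.
    best = min(positions, key=lambda f: residual(unc, f).sum())
    fixations.append(int(best))
    unc = residual(unc, best)

print(fixations)
```

Each iteration re-evaluates every candidate fixation against the current uncertainty map, mirroring the sequential (one-fixation-ahead) character of the model.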
Optimum array design to maximize Fisher information for bearing estimation.
Tuladhar, Saurav R; Buck, John R
2011-11-01
Source bearing estimation is a common application of linear sensor arrays. The Cramer-Rao bound (CRB) sets a lower bound on the achievable mean square error (MSE) of any unbiased bearing estimate. In the spatially white noise case, the CRB is minimized by placing half of the sensors at each end of the array. However, many realistic ocean environments have a mixture of both white noise and spatially correlated noise. In shallow water environments, the correlated ambient noise can be modeled as cylindrically isotropic. This research designs a fixed aperture linear array to maximize the bearing Fisher information (FI) under these noise conditions. The FI is the inverse of the CRB, so maximizing the FI minimizes the CRB. The elements of the optimum array are located closer to the array ends than uniform spacing, but are not as extreme as in the white noise case. The optimum array results from a trade off between maximizing the array bearing sensitivity and minimizing output noise power variation over the bearing. Depending on the source bearing, the resulting improvement in MSE performance of the optimized array over a uniform array is equivalent to a gain of 2-5 dB in input signal-to-noise ratio. PMID:22087908
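In the spatially white noise case mentioned above, the bearing Fisher information of a linear array is proportional to the second moment of the sensor positions about the array centroid, which is why placing half the sensors at each end maximizes FI there. A quick numeric check (proportionality constant omitted; aperture and element count invented):

```python
import numpy as np

def bearing_fi_white(positions):
    """Bearing Fisher information in spatially white noise, up to a constant:
    the second moment of sensor positions about the array centroid."""
    x = np.asarray(positions, dtype=float)
    return float(np.sum((x - x.mean()) ** 2))

aperture, n = 10.0, 8
uniform = np.linspace(0.0, aperture, n)                       # evenly spaced
split = np.array([0.0] * (n // 2) + [aperture] * (n // 2))    # half at each end

print(bearing_fi_white(uniform), bearing_fi_white(split))
```

With correlated (e.g., cylindrically isotropic) noise the optimum shifts inward from the ends, as the abstract describes, because the output noise power then also varies with the element placement.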
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image-matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image-matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation and 2) they fail to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on a maximal-likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database. PMID:25163062
Quantum-state reconstruction by maximizing likelihood and entropy.
Teo, Yong Siah; Zhu, Huangjun; Englert, Berthold-Georg; Řeháček, Jaroslav; Hradil, Zdeněk
2011-07-01
Quantum-state reconstruction on a finite number of copies of a quantum system with informationally incomplete measurements, as a rule, does not yield a unique result. We derive a reconstruction scheme where both the likelihood and the von Neumann entropy functionals are maximized in order to systematically select the most-likely estimator with the largest entropy, that is, the least-bias estimator, consistent with a given set of measurement data. This is equivalent to the joint consideration of our partial knowledge and ignorance about the ensemble to reconstruct its identity. An interesting structure of such estimators will also be explored. PMID:21797584
Flux maximization techniques for compton backscatter depth profilometry.
Lawson, L
1993-01-01
Resolution in x-ray backscatter imaging has often been hampered by low fluxes. But, for a given set of resolution requirements and geometric constraints, it is possible to define a maximization problem in the geometric parameters for which the solution is the maximum flux possible in those circumstances. In this way, resolution in noncritical directions can be traded for improved resolution in a desired direction. Making this the thickness, or surface normal direction, makes practicable the depth profiling of layered structures. Such techniques were applied to the problem of imaging the layered structure of corroding aircraft sheet metal joints using Compton backscatter. PMID:21307450
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018
Maximizing the Spectacle of Water Fountains
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
A Model of College Tuition Maximization
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Maximal aerobic exercise following prolonged sleep deprivation.
Goodman, J; Radomski, M; Hart, L; Plyley, M; Shephard, R J
1989-12-01
The effect of 60 h without sleep upon maximal oxygen intake was examined in 12 young women, using a cycle ergometer protocol. The arousal of the subjects was maintained by requiring the performance of a sequence of cognitive tasks throughout the experimental period. Well-defined oxygen intake plateaus were obtained both before and after sleep deprivation, and no change of maximal oxygen intake was observed immediately following sleep deprivation. The endurance time for exhausting exercise also remained unchanged, as did such markers of aerobic performance as peak exercise ventilation, peak heart rate, peak respiratory gas exchange ratio, and peak blood lactate. However, as in an earlier study of sleep deprivation with male subjects (in which a decrease of treadmill maximal oxygen intake was observed), the formula of Dill and Costill (4) indicated the development of a substantial (11.6%) increase of estimated plasma volume percentage with corresponding decreases in hematocrit and red cell count. Possible factors sustaining maximal oxygen intake under the conditions of the present experiment include (1) maintained arousal of the subjects with no decrease in peak exercise ventilation or the related respiratory work and (2) use of a cycle ergometer rather than a treadmill test with possible concurrent differences in the impact of hematocrit levels and plasma volume expansion upon peak cardiac output and thus oxygen delivery to the working muscles. PMID:2628360
Does evolution lead to maximizing behavior?
Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen
2015-07-01
A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences). PMID:26082379
How to Generate Good Profit Maximization Problems
ERIC Educational Resources Information Center
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
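The urn lottery can be simulated directly. Below is a sketch under one simple mixing rule (pick a marble at random and move it to a uniformly random urn — an assumption, since the abstract does not spell out the lottery's exact rules): the marble distribution approaches uniform, where the entropy is maximal.

```python
import math
import random
from collections import Counter

rng = random.Random(1)
n_urns, n_marbles, steps = 5, 1000, 20000

# Start far from equilibrium: every marble in urn 0 (zero entropy).
urn_of = [0] * n_marbles

for _ in range(steps):
    m = rng.randrange(n_marbles)        # draw a marble uniformly at random...
    urn_of[m] = rng.randrange(n_urns)   # ...and move it to a random urn

counts = Counter(urn_of)
probs = [counts[u] / n_marbles for u in range(n_urns)]
H = -sum(p * math.log(p) for p in probs if p > 0)
print(sorted(counts.values()), round(H, 4), round(math.log(n_urns), 4))
```

After many moves the per-urn counts fluctuate around n_marbles/n_urns and the empirical entropy H sits just below its maximum ln(n_urns), which is the equilibrium-as-entropy-maximization point the thought experiment illustrates.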
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-04-15
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Faculty Salaries and the Maximization of Prestige
ERIC Educational Resources Information Center
Melguizo, Tatiana; Strober, Myra H.
2007-01-01
Through the lens of the emerging economic theory of higher education, we look at the relationship between salary and prestige. Starting from the premise that academic institutions seek to maximize prestige, we hypothesize that monetary rewards are higher for faculty activities that confer prestige. We use data from the 1999 National Study of…
Maximizing Resource Utilization in Video Streaming Systems
ERIC Educational Resources Information Center
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Maximizing the Phytonutrient Content of Potatoes
Technology Transfer Automated Retrieval System (TEKTRAN)
We are exploring to what extent the rich genetic diversity of potatoes can be used to maximize the nutritional potential of potatoes. Metabolic profiling is being used to screen potatoes for genotypes with elevated amounts of vitamins and phytonutrients. Substantial differences in phytonutrients am...
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
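The power calculations described in the case study can be mimicked with a simulation, assuming lognormal CPE and illustrative variance components (the `sd_year` and `sd_site` values below are invented, not the Lake Erie estimates), so the numbers will not reproduce the paper's figures — only the qualitative pattern that power to detect a small proportional decline grows slowly with survey duration.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(42)

def trend_power(decline, years, n_sites=50, sd_year=0.2, sd_site=0.5,
                n_sims=500, alpha=0.05):
    """Fraction of simulated surveys in which regressing annual mean
    log-CPE on year detects the imposed proportional annual decline."""
    slope_true = np.log(1.0 - decline)
    t = np.arange(years)
    hits = 0
    for _ in range(n_sims):
        # Year-level process error plus site-level sampling noise.
        year_mean = slope_true * t + rng.normal(0, sd_year, years)
        obs = year_mean[:, None] + rng.normal(0, sd_site, (years, n_sites))
        fit = linregress(t, obs.mean(axis=1))
        if fit.pvalue < alpha and fit.slope < 0:
            hits += 1
    return hits / n_sims

p10, p22 = trend_power(0.05, 10), trend_power(0.05, 22)
print("power at 10 yr:", p10, " power at 22 yr:", p22)
```

A fuller treatment would fit the hierarchical model the authors describe rather than a simple regression of annual means, but the simulation structure (impose a trend, add variance components, count detections) is the same.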
NASA Astrophysics Data System (ADS)
Wagner, T.; Beirle, S.; Brauers, T.; Deutschmann, T.; Frieß, U.; Hak, C.; Halla, J. D.; Heue, K. P.; Junkermann, W.; Li, X.; Platt, U.; Pundt-Gruber, I.
2011-12-01
We present aerosol and trace gas profiles derived from MAX-DOAS observations. Our inversion scheme is based on simple profile parameterisations used as input for an atmospheric radiative transfer model (forward model). From a least squares fit of the forward model to the MAX-DOAS measurements, two profile parameters are retrieved including integrated quantities (aerosol optical depth or trace gas vertical column density), and parameters describing the height and shape of the respective profiles. From these results, the aerosol extinction and trace gas mixing ratios can also be calculated. We apply the profile inversion to MAX-DOAS observations during a measurement campaign in Milano, Italy, September 2003, which allowed simultaneous observations from three telescopes (directed to north, west, south). Profile inversions for aerosols and trace gases were possible on 23 days. Especially in the middle of the campaign (17-20 September 2003), enhanced values of aerosol optical depth and NO2 and HCHO mixing ratios were found. The retrieved layer heights were typically similar for HCHO and aerosols. For NO2, lower layer heights were found, which increased during the day. The MAX-DOAS inversion results are compared to independent measurements: (1) aerosol optical depth measured at an AERONET station at Ispra; (2) near-surface NO2 and HCHO (formaldehyde) mixing ratios measured by long path DOAS and Hantzsch instruments at Bresso; (3) vertical profiles of HCHO and aerosols measured by an ultra light aircraft. Depending on the viewing direction, the aerosol optical depths from MAX-DOAS are either smaller or larger than those from AERONET observations. Similar comparison results are found for the MAX-DOAS NO2 mixing ratios versus long path DOAS measurements. In contrast, the MAX-DOAS HCHO mixing ratios are generally higher than those from long path DOAS or Hantzsch instruments. The comparison of the HCHO and aerosol profiles from the aircraft showed reasonable agreement with
2012-03-16
Independent Assessments: DOE's Systems Integrator convenes independent technical reviews to gauge progress toward meeting specific technical targets and to provide technical information necessary for key decisions.
ERIC Educational Resources Information Center
Raykov, Tenko; Penev, Spiridon
2006-01-01
Unlike a substantial part of reliability literature in the past, this article is concerned with weighted combinations of a given set of congeneric measures with uncorrelated errors. The relationship between maximal coefficient alpha and maximal reliability for such composites is initially dealt with, and it is shown that the former is a lower…
A New Algorithm to Optimize Maximal Information Coefficient
Chen, Yuan; Zeng, Ying; Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much less than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
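The idea of a chi-square stopping rule for grid refinement can be sketched as follows. This is an illustrative stand-in, not the ChiMIC algorithm itself: it simply doubles the number of x-bins until the gain in the chi-square statistic is insignificant for the added degrees of freedom, then reports the empirical mutual information on the final grid.

```python
import numpy as np
from scipy.stats import chi2

def _contingency(x, y, kx, ky):
    """Counts on a kx-by-ky grid built from equal-frequency bin edges."""
    xe = np.quantile(x, np.linspace(0, 1, kx + 1))
    ye = np.quantile(y, np.linspace(0, 1, ky + 1))
    xi = np.clip(np.searchsorted(xe, x, side="right") - 1, 0, kx - 1)
    yi = np.clip(np.searchsorted(ye, y, side="right") - 1, 0, ky - 1)
    table = np.zeros((kx, ky))
    np.add.at(table, (xi, yi), 1)
    return table

def _chi2_stat(table):
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    mask = expected > 0
    return float(((table - expected) ** 2 / expected)[mask].sum())

def chi2_guided_mi(x, y, ky=4, max_kx=64, alpha=0.05):
    """Grow the x-partition until the chi-square gain is insignificant.

    An illustrative analogue of a chi-square stopping rule; the real
    ChiMIC grid optimization differs in detail.
    """
    kx = 2
    stat = _chi2_stat(_contingency(x, y, kx, ky))
    while 2 * kx <= max_kx:
        new_stat = _chi2_stat(_contingency(x, y, 2 * kx, ky))
        extra_dof = kx * (ky - 1)                   # degrees of freedom added by doubling
        if chi2.sf(max(new_stat - stat, 0.0), extra_dof) > alpha:
            break                                   # refinement no longer pays
        kx, stat = 2 * kx, new_stat
    table = _contingency(x, y, kx, ky)
    p = table / table.sum()
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    nz = p > 0
    mi = float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
    return kx, mi
```

On independent data the refinement stops almost immediately, keeping the spurious MI small, while a noisy functional relationship keeps justifying finer grids.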
Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments
NASA Technical Reports Server (NTRS)
Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)
2002-01-01
A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator which conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institut Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstroms. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
Maximal violation of Bell inequalities by position measurements
Kiukas, J.; Werner, R. F.
2010-07-15
We show that it is possible to find maximal violations of the Clauser-Horne-Shimony-Holt (CHSH) Bell inequality using only position measurements on a pair of entangled nonrelativistic free particles. The device settings required in the CHSH inequality are realized by choosing one of two times at which position is measured. For different assignments of the '+' outcome to positions, namely, to an interval, to a half-line, or to a periodic set, we determine violations of the inequalities and states where they are attained. These results have consequences for the hidden variable theories of Bohm and Nelson, in which the two-time correlations between distant particle trajectories have a joint distribution and hence cannot violate any Bell inequality.
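For reference, the CHSH combination and its maximal quantum value can be checked with the standard singlet-state correlations. The angles below are the usual spin-measurement choices, not the paper's position-at-two-times settings, so this is only a generic illustration of a maximal violation.

```python
import numpy as np

def chsh(a, a2, b, b2):
    """CHSH combination S for singlet correlations E(x, y) = -cos(x - y)."""
    E = lambda x, y: -np.cos(x - y)
    return E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

# Standard angle choices that saturate the Tsirelson bound |S| = 2*sqrt(2)
S = chsh(0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4)
```

Any local hidden variable model obeys |S| <= 2, so |S| = 2*sqrt(2) is the maximal quantum violation referred to in the abstract.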
Karbowski, Jan
2015-10-01
The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites, each take about 1/3 of cortical space, spines and glia/astrocytes each occupy about (1/3)^2, and capillaries around (1/3)^4. Moreover, data analysis across species reveals that these fractions are roughly brain size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of a local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements and are treated within a single mathematical framework. Next, various forms of the wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called "spine economy maximization" is proposed and investigated, which is associated with maximization of spine proportion in the cortex per spine size and yields equally good but more robust results. Additionally, a combination of wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if the spine economy component dominates. However, such a combined meta-principle yields much better results than the constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails. In sum, these results suggest that for the
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold. PMID:26066138
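The price of anarchy itself is easiest to see in the classic two-link Pigou network, which is a textbook illustration of the concept rather than the paper's lattice model with randomly placed links: one congestible link with latency l1(x) = x and one incongestible link with constant latency l2(x) = 1 carry a unit of total demand.

```python
import numpy as np

def pigou_poa():
    """Price of anarchy for the two-link Pigou network with unit demand.

    Selfish users all take the congestible link (Nash cost 1); a planner
    splits traffic to minimize total latency (optimal cost 3/4).
    """
    x = np.linspace(0.0, 1.0, 200001)        # flow routed on the congestible link
    social_cost = x * x + (1.0 - x) * 1.0    # total latency experienced by all traffic
    optimal = social_cost.min()              # minimized at x = 1/2, cost 3/4
    nash = 1.0                               # everyone on link 1: latency 1 for all
    return nash / optimal
```

The resulting ratio, 4/3, is the well-known worst case for linear latencies; the paper's contribution is showing where in a random network such congestion-driven suboptimality peaks.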
An updated version of wannier90: A tool for obtaining maximally-localised Wannier functions
NASA Astrophysics Data System (ADS)
Mostofi, Arash A.; Yates, Jonathan R.; Pizzi, Giovanni; Lee, Young-Su; Souza, Ivo; Vanderbilt, David; Marzari, Nicola
2014-08-01
wannier90 is a program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands. The formalism works by minimising the total spread of the MLWFs in real space. This is done in the space of unitary matrices that describe rotations of the Bloch bands at each k-point. As a result, wannier90 is independent of the basis set used in the underlying calculation to obtain the Bloch states. Therefore, it may be interfaced straightforwardly to any electronic structure code. The locality of MLWFs can be exploited to compute band-structure, density of states and Fermi surfaces at modest computational cost. Furthermore, wannier90 is able to output MLWFs for visualisation and other post-processing purposes. Wannier functions are already used in a wide variety of applications. These include analysis of chemical bonding in real space; calculation of dielectric properties via the modern theory of polarisation; and as an accurate and minimal basis set in the construction of model Hamiltonians for large-scale systems, in linear-scaling quantum Monte Carlo calculations, and for efficient computation of material properties, such as the anomalous Hall coefficient. We present here an updated version of wannier90, wannier90 2.0, including minor bug fixes and parallel (MPI) execution for band-structure interpolation and the calculation of properties such as density of states, Berry curvature and orbital magnetisation. wannier90 is freely available under the GNU General Public License from http://www.wannier.org/.
Maximal element theorems in product FC-spaces and generalized games
NASA Astrophysics Data System (ADS)
Ding, Xie Ping
2005-05-01
Let I be a finite or infinite index set, X be a topological space and (Yi, {φNi})i∈I be a family of finitely continuous topological spaces (in short, FC-spaces). For each i∈I, let Ai be a set-valued mapping. Some existence theorems of maximal elements for the family {Ai}i∈I are established under a noncompact setting of FC-spaces. As applications, some equilibrium existence theorems for generalized games with fuzzy constraint correspondences are proved in noncompact FC-spaces. These theorems improve, unify and generalize many important results in recent literature.
Tian, Guojing; Wu, Xia; Cao, Ya; Gao, Fei; Wen, Qiaoyan
2016-01-01
It is known that there exist two locally operational settings, local operations with one-way and two-way classical communication. And recently, some sets of maximally entangled states have been built in specific dimensional quantum systems, which can be locally distinguished only with two-way classical communication. In this paper, we show the existence of such sets is general, through constructing such sets in all the remaining quantum systems. Specifically, such sets including p or n maximally entangled states will be built in the quantum system of (np − 1) ⊗ (np − 1) with n ≥ 3 and p being a prime number, which completes the picture that such sets do exist in every possible dimensional quantum system. PMID:27440087
Nondecoupling of maximal supergravity from the superstring.
Green, Michael B; Ooguri, Hirosi; Schwarz, John H
2007-07-27
We consider the conditions necessary for obtaining perturbative maximal supergravity in d dimensions as a decoupling limit of type II superstring theory compactified on a (10-d) torus. For dimensions d=2 and d=3, it is possible to define a limit in which the only finite-mass states are the 256 massless states of maximal supergravity. However, in dimensions d ≥ 4, there are infinite towers of additional massless and finite-mass states. These correspond to Kaluza-Klein charges, wound strings, Kaluza-Klein monopoles, or branes wrapping around cycles of the toroidal extra dimensions. We conclude that perturbative supergravity cannot be decoupled from string theory in dimensions d ≥ 4. In particular, we conjecture that pure N=8 supergravity in four dimensions is in the Swampland. PMID:17678349
Maximal CP violation in flavor neutrino masses
NASA Astrophysics Data System (ADS)
Kitabayashi, Teruyuki; Yasuè, Masaki
2016-03-01
Since flavor neutrino masses Mμμ,ττ,μτ can be expressed in terms of Mee,eμ,eτ, mutual dependence among Mμμ,ττ,μτ is derived by imposing some constraints on Mee,eμ,eτ. For appropriately imposed constraints on Mee,eμ,eτ giving rise to both maximal CP violation and the maximal atmospheric neutrino mixing, we show various specific textures of neutrino mass matrices including the texture with Mττ = Mμμ* derived as the simplest solution to the constraint of Mττ − Mμμ = imaginary, which is required by the constraint of Meμ cos θ23 − Meτ sin θ23 = real for cos 2θ23 = 0. It is found that Majorana CP violation depends on the phase of Mee.
Maximal temperature in a simple thermodynamical system
NASA Astrophysics Data System (ADS)
Dai, De-Chang; Stojkovic, Dejan
2016-06-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of the full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come at a distance shorter than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
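A schematic order-of-magnitude version of this argument (not the paper's calculation, which fixes the factor of about 3) equates the Schwarzschild diameter set by the thermal energy with the thermal de Broglie wavelength:

```latex
E \sim k_B T, \qquad
d_S = \frac{4 G E}{c^4}, \qquad
\lambda \sim \frac{\hbar c}{k_B T}.
```

Black-hole formation dominates when $d_S \sim \lambda$:

```latex
\frac{4 G\, k_B T}{c^4} \sim \frac{\hbar c}{k_B T}
\quad\Longrightarrow\quad
k_B T_{\max} \sim \tfrac{1}{2}\sqrt{\frac{\hbar c^5}{G}}
= \tfrac{1}{2}\, k_B T_{\mathrm{Planck}},
```

i.e., the maximal temperature sits at an O(1) fraction of the Planck temperature, consistent in spirit with the abstract's critical temperature about three times below it.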
Experimental implementation of maximally synchronizable networks
NASA Astrophysics Data System (ADS)
Sevilla-Escoboza, R.; Buldú, J. M.; Boccaletti, S.; Papo, D.; Hwang, D.-U.; Huerta-Cuellar, G.; Gutiérrez, R.
2016-04-01
Maximally synchronizable networks (MSNs) are acyclic directed networks that maximize synchronizability. In this paper, we investigate the feasibility of transforming networks of coupled oscillators into their corresponding MSNs. By tuning the weights of any given network so as to reach the lowest possible eigenratio λN /λ2, the synchronized state is guaranteed to be maintained across the longest possible range of coupling strengths. We check the robustness of the resulting MSNs with an experimental implementation of a network of nonlinear electronic oscillators and study the propagation of the synchronization errors through the network. Importantly, a method to study the effects of topological uncertainties on the synchronizability is proposed and explored both theoretically and experimentally.
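The diagnostic being minimized, the Laplacian eigenratio λN/λ2, is straightforward to compute for any weighted undirected graph. The sketch below only evaluates the eigenratio; it does not perform the MSN transformation itself.

```python
import numpy as np

def eigenratio(adj):
    """Laplacian eigenratio lambda_N / lambda_2 of an undirected weighted graph.

    In the master-stability framework, lower ratios mean the synchronized
    state survives over a wider range of coupling strengths.
    """
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj
    ev = np.linalg.eigvalsh(lap)        # ascending; ev[0] ~ 0 for a connected graph
    return ev[-1] / ev[1]

def ring(n):
    """Cycle graph C_n as an adjacency matrix."""
    a = np.zeros((n, n))
    for i in range(n):
        a[i, (i + 1) % n] = a[(i + 1) % n, i] = 1.0
    return a

complete = np.ones((8, 8)) - np.eye(8)   # K_8: all Laplacian eigenvalues equal 8
```

The complete graph attains the ideal ratio of 1, while a ring of the same size is markedly harder to synchronize.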
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
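As a simpler reference point for linearized relative motion, the two-body Clohessy-Wiltshire (Hill) equations describe motion relative to a circular orbit with mean motion n; the paper's n-body linearization generalizes this idea. The sketch below integrates the CW equations with a basic RK4 stepper; the parameter values are illustrative, not MAXIM requirements.

```python
import numpy as np

def cw_deriv(state, n):
    """Clohessy-Wiltshire equations: x'' = 3n^2 x + 2n y', y'' = -2n x',
    z'' = -n^2 z, for relative motion about a circular two-body orbit."""
    x, y, z, vx, vy, vz = state
    ax = 3.0 * n * n * x + 2.0 * n * vy
    ay = -2.0 * n * vx
    az = -n * n * z
    return np.array([vx, vy, vz, ax, ay, az])

def rk4(state, n, dt, steps):
    """Fixed-step fourth-order Runge-Kutta integration of the CW equations."""
    for _ in range(steps):
        k1 = cw_deriv(state, n)
        k2 = cw_deriv(state + 0.5 * dt * k1, n)
        k3 = cw_deriv(state + 0.5 * dt * k2, n)
        k4 = cw_deriv(state + dt * k3, n)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return state
```

The out-of-plane component decouples into a harmonic oscillator, z(t) = z0 cos(nt) for zero initial out-of-plane velocity, which gives a convenient analytic check on the integrator.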
Revenue maximization in survivable WDM networks
NASA Astrophysics Data System (ADS)
Sridharan, Murari; Somani, Arun K.
2000-09-01
Service availability is an indispensable requirement for many current and future applications over the Internet and hence has to be addressed as part of the optical QoS service model. Network service providers can offer varying classes of service based on the choice of protection employed, which can range from full protection to no protection. Based on the service classes, traffic in the network falls into one of three classes: full protection, no protection, and best-effort. The network typically relies on the best-effort traffic for maximizing revenue. We consider two variations on the best-effort class: (1) all connections are accepted and the network tries to protect as many as possible, and (2) a mix of protected and unprotected connections where the goal is to maximize revenue. In this paper, we present a mathematical formulation, which captures service differentiation based on lightpath protection, for revenue maximization in wavelength-routed backbone networks. Our approach also captures the service disruption aspect in the problem formulation, as there may be a penalty for disrupting currently working connections.
Maximal acceleration is non-rotating
NASA Astrophysics Data System (ADS)
Page, Don N.
1998-06-01
In a stationary axisymmetric spacetime, the angular velocity of a stationary observer whose acceleration vector is Fermi-Walker transported is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer. The converse is also true if the spacetime is symmetric under reversing both t and φ together. Thus a congruence of non-rotating acceleration worldlines (NAW) is equivalent to a stationary congruence accelerating locally extremely (SCALE). These congruences are defined completely locally, unlike the case of zero angular momentum observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a stationary congruence accelerating maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulae for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry axis, and in a slowly rotating gravitational field, including the far-field limit, where the SCAM is shown to be counter-rotating relative to infinity. These formulae are evaluated in particular detail for the Kerr-Newman metric. Various other congruences are also defined, such as a stationary congruence rotating at minimum (SCRAM), and stationary worldlines accelerating radially maximally (SWARM), both of which coincide with a SCAM on an equatorial plane of reflection symmetry. Applications are also made to the gravitational fields of maximally rotating stars, the Sun and the Solar System.
The “Independent Components” of Natural Scenes are Edge Filters
BELL, ANTHONY J.; SEJNOWSKI, TERRENCE J.
2010-01-01
It has previously been suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and it has been reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that a new unsupervised learning algorithm based on information maximization, a nonlinear “infomax” network, when applied to an ensemble of natural scenes produces sets of visual filters that are localized and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximization network. In addition, the outputs of these filters are as independent as possible, since this infomax network performs Independent Components Analysis or ICA, for sparse (super-gaussian) component distributions. We compare the resulting ICA filters and their associated basis functions, with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form a natural, information-theoretic coordinate system for natural images. PMID:9425547
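The infomax learning rule referred to above has a compact batch form. The sketch below uses the Bell-Sejnowski natural-gradient update with a logistic nonlinearity, which suits the sparse (super-gaussian) component distributions the abstract mentions; it is a minimal two-source demonstration, not the image-patch pipeline of the paper.

```python
import numpy as np
from scipy.special import expit

def infomax_ica(x, lr=0.01, n_iter=500):
    """Bell-Sejnowski infomax ICA, natural-gradient batch update:
    W <- W + lr * (I + (1 - 2*g(Wx)) (Wx)^T / T) W,  g = logistic sigmoid."""
    d, T = x.shape
    W = np.eye(d)
    I = np.eye(d)
    for _ in range(n_iter):
        u = W @ x                      # current source estimates
        y = expit(u)                   # nonlinearity g(u)
        W = W + lr * (I + (1.0 - 2.0 * y) @ u.T / T) @ W
    return W
```

On a linear mixture of independent sparse sources, the learned unmixing matrix recovers the sources up to permutation and scale.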
Maximal violation of tight Bell inequalities for maximal high-dimensional entanglement
Lee, Seung-Woo; Jaksch, Dieter
2009-07-15
We propose a Bell inequality for high-dimensional bipartite systems obtained by binning local measurement outcomes and show that it is tight. We find a binning method for even d-dimensional measurement outcomes for which this Bell inequality is maximally violated by maximally entangled states. Furthermore, we demonstrate that the Bell inequality is applicable to continuous variable systems and yields strong violations for two-mode squeezed states.
ERIC Educational Resources Information Center
Giorgis, Cyndi; Johnson, Nancy J.
2002-01-01
Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)
A Method for Evaluating Tuning Functions of Single Neurons based on Mutual Information Maximization
NASA Astrophysics Data System (ADS)
Brostek, Lukas; Eggert, Thomas; Ono, Seiji; Mustari, Michael J.; Büttner, Ulrich; Glasauer, Stefan
2011-03-01
We introduce a novel approach for evaluating neuronal tuning functions, which can be expressed by the conditional probability of observing a spike given any combination of independent variables. This probability can be estimated from experimentally available data. By maximizing the mutual information between the probability distribution of the spike occurrence and that of the variables, the dependence of the spike on the input variables is maximized as well. We used this method to analyze the dependence of neuronal activity in cortical area MSTd on signals related to movement of the eye and retinal image movement.
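The quantity being maximized can be illustrated with a plug-in estimate of the mutual information between a binned stimulus variable and a binary spike train. This is a bare-bones sketch of the information measure, not the authors' estimation procedure, and the sigmoid tuning curve in the usage example is purely synthetic.

```python
import numpy as np

def spike_mi(stim, spikes, n_bins=12):
    """Mutual information (nats) between a binned stimulus variable and a
    binary spike indicator, via the empirical joint distribution."""
    edges = np.quantile(stim, np.linspace(0, 1, n_bins + 1))
    b = np.clip(np.searchsorted(edges, stim, side="right") - 1, 0, n_bins - 1)
    joint = np.zeros((n_bins, 2))
    np.add.at(joint, (b, spikes.astype(int)), 1)   # counts of (bin, spike) pairs
    p = joint / joint.sum()
    pb = p.sum(axis=1, keepdims=True)              # stimulus-bin marginal
    ps = p.sum(axis=0, keepdims=True)              # spike marginal
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (pb @ ps)[nz])).sum())
```

A variable the neuron is tuned to yields substantially higher MI than a variable the spikes ignore, which is exactly what lets MI ranking select the relevant inputs.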
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends the traditional model of dental care delivery, allowing entrepreneurial practitioners to make decisions in an autonomous setting, is changing but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides. PMID:25941139
Independent Schools - Independent Thinking - Independent Art: Testing Assumptions.
ERIC Educational Resources Information Center
Carnes, Virginia
This study consists of a review of selected educational reform issues from the past 10 years that deal with changing attitudes towards art and art instruction in the context of independent private sector schools. The major focus of the study is in visual arts and examines various programs and initiatives with an art focus. Programs include…
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in the desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame. PMID:12416921
Coloring random graphs and maximizing local diversity.
Bounkong, S; van Mourik, J; Saad, D
2006-11-01
We study a variation of the graph coloring problem on random graphs of finite average connectivity. Given the number of colors, we aim to maximize the number of different colors at neighboring vertices (i.e., one edge distance) of any vertex. Two efficient algorithms, belief propagation and Walksat, are adapted to carry out this task. We present experimental results based on two types of random graphs for different system sizes and identify the critical value of the connectivity for the algorithms to find a perfect solution. The problem and the suggested algorithms have practical relevance since various applications, such as distributed storage, can be mapped onto this problem. PMID:17280022
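The objective can be made concrete with a toy greedy local search: repeatedly recolor single vertices whenever the move increases the total number of distinct colors seen across all neighborhoods. This is a simplified stand-in for the belief propagation and Walksat solvers used in the paper; the graph encoding and scoring are illustrative:

```python
import random

def neighborhood_diversity(adj, colors):
    """Total number of distinct colors seen across all neighborhoods."""
    return sum(len({colors[u] for u in adj[v]}) for v in adj)

def greedy_diversify(adj, k, sweeps=20, seed=0):
    """Greedy local search: recolor one vertex at a time whenever the
    move increases the total neighborhood diversity."""
    rng = random.Random(seed)
    colors = {v: rng.randrange(k) for v in adj}
    for _ in range(sweeps):
        improved = False
        for v in adj:
            best, best_score = colors[v], neighborhood_diversity(adj, colors)
            for c in range(k):
                colors[v] = c
                s = neighborhood_diversity(adj, colors)
                if s > best_score:
                    best, best_score = c, s
                    improved = True
            colors[v] = best
        if not improved:  # local optimum reached
            break
    return colors
```

On a 4-cycle with two colors, the search reaches the optimum in which every vertex sees both colors among its neighbors.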
Using molecular biology to maximize concurrent training.
Baar, Keith
2014-11-01
Very few sports use only endurance or strength. Outside of running long distances on a flat surface and power-lifting, practically all sports require some combination of endurance and strength. Endurance and strength can be developed simultaneously to some degree. However, the development of a high level of endurance seems to prohibit the development or maintenance of muscle mass and strength. This interaction between endurance and strength is called the concurrent training effect. This review specifically defines the concurrent training effect, discusses the potential molecular mechanisms underlying this effect, and proposes strategies to maximize strength and endurance in the high-level athlete. PMID:25355186
Electromagnetically induced grating with maximal atomic coherence
Carvalho, Silvania A.; Araujo, Luis E. E. de
2011-10-15
We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-Λ atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam and the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.
Maximizing Information Diffusion in the Cyber-physical Integrated Network.
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-01-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks. PMID:26569254
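The selection step can be illustrated with a plain greedy connected dominating set, ignoring the link weights that DMPID additionally accounts for. This is a sketch under the assumption of a connected, unweighted graph; the actual DMPID algorithm is distributed and weight-aware:

```python
def greedy_cds(adj):
    """Greedy connected dominating set: start from the highest-degree
    node and repeatedly add the fringe node that dominates the most
    still-undominated vertices. Assumes a connected graph."""
    start = max(adj, key=lambda v: len(adj[v]))
    cds = {start}
    dominated = {start} | set(adj[start])
    while dominated != set(adj):
        # Fringe: neighbors of the current set, so the set stays connected
        fringe = {u for v in cds for u in adj[v]} - cds
        best = max(fringe, key=lambda u: len((set(adj[u]) | {u}) - dominated))
        cds.add(best)
        dominated |= set(adj[best]) | {best}
    return cds
```

Every node is then either a forwarding point or adjacent to one, which is the property the selected forwarding set must satisfy.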
Whose Entropy: A Maximal Entropy Analysis of Phosphorylation Signaling
NASA Astrophysics Data System (ADS)
Remacle, F.; Graeber, T. G.; Levine, R. D.
2011-07-01
High throughput experiments, characteristic of studies in systems biology, produce large output data sets, often at different time points, under a variety of related conditions, or for different patients. In several recent papers the data are modeled by using a distribution of maximal information-theoretic entropy. We pose the question "whose entropy?", meaning: how do we select the variables whose distribution should be compared to that of maximal entropy? The point is that different choices can lead to different answers. Due to the technological advances that allow for the system-wide measurement of hundreds to thousands of events from biological samples, addressing this question is now part of the analysis of systems biology datasets. The analysis of the extent of phosphorylation in reference to the transformation potency of Bcr-Abl fusion oncogene mutants is used as a biological example. The approach taken seeks to use entropy not simply as a statistical measure of dispersion but as a physical, thermodynamic, state function. This highlights the dilemma of which variables describe the state of the signaling network. Is it Boolean, spin-like variables that specify whether a particular phosphorylation site is or is not actually phosphorylated? Or does the actual extent of phosphorylation matter? Last but not least is the possibility that in a signaling network some few specific phosphorylation sites are the key to the signal transduction even though these sites are not at any time abundantly phosphorylated in an absolute sense.
Maximally Entangled States of a Two-Qubit System
NASA Astrophysics Data System (ADS)
Singh, Manu P.; Rajput, B. S.
2013-12-01
Entanglement has been explored as one of the key resources required for quantum computation. The functional dependence of the entanglement measures on spin correlation functions has been established, the correspondence between the evolution of maximally entangled states (MES) of a two-qubit system and the representation of the SU(2) group has been worked out, and the evolution of MES under a rotating magnetic field has been investigated. Necessary and sufficient conditions for a general two-qubit state to be a maximally entangled state have been obtained, and a new set of MES constituting a very powerful and reliable eigenbasis (different from magic bases) of two-qubit systems has been constructed. In terms of the MES constituting this basis, Bell states have been generated and all the qubits of the two-qubit system have been obtained. It has been shown that a MES corresponds to a point on the SO(3) sphere and that an evolution of a MES corresponds to a trajectory connecting two points on this sphere. Analysing the evolution of MES under a rotating magnetic field, it has been demonstrated that a rotating magnetic field is equivalent to a three-dimensional rotation in real space, leading to the evolution of a MES.
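For a two-qubit pure state, maximal entanglement is equivalent to the reduced density matrix of either qubit being the maximally mixed state I/2, which gives a quick numerical check. This is a generic sketch, not the construction used in the paper:

```python
import numpy as np

def is_maximally_entangled(state, tol=1e-12):
    """A two-qubit pure state |psi> = sum_ij c_ij |i>|j> is maximally
    entangled iff the reduced density matrix of qubit A equals I/2."""
    psi = np.asarray(state, dtype=complex).reshape(2, 2)
    rho_a = psi @ psi.conj().T  # partial trace over qubit B
    return bool(np.allclose(rho_a, np.eye(2) / 2, atol=tol))
```

The four Bell states pass this test, while any product state fails it.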
Forms and algebras in (half-)maximal supergravity theories
NASA Astrophysics Data System (ADS)
Howe, Paul; Palmkvist, Jakob
2015-05-01
The forms in D-dimensional (half-)maximal supergravity theories are discussed for 3 ≤ D ≤ 11. Superspace methods are used to derive consistent sets of Bianchi identities for all forms of all degrees, and to show that they are soluble and fully compatible with supersymmetry. The Bianchi identities determine Lie superalgebras that can be extended to Borcherds superalgebras of a special type. It is shown that any Borcherds superalgebra of this type gives the same form spectrum, up to an arbitrary degree, as an associated Kac-Moody algebra. For maximal supergravity up to D-form potentials, this is the very extended Kac-Moody algebra E11. It is also shown how gauging can be carried out in a simple fashion by deforming the Bianchi identities by means of a new algebraic element related to the embedding tensor. In this case the appropriate extension of the form algebra is a truncated version of the so-called tensor hierarchy algebra.
Fredriksson, Albin; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Conditional independence in quantum many-body systems
NASA Astrophysics Data System (ADS)
Kim, Isaac Hyun
In this thesis, I will discuss how information-theoretic arguments can be used to produce sharp bounds in the studies of quantum many-body systems. The main advantage of this approach, as opposed to the conventional field-theoretic argument, is that it depends very little on the precise form of the Hamiltonian. The main idea behind this thesis lies in a number of results concerning the structure of quantum states that are conditionally independent. Depending on the application, some of these statements are generalized to quantum states that are approximately conditionally independent. These structures can be readily used in the studies of gapped quantum many-body systems, especially for the ones in two spatial dimensions. A number of rigorous results are derived, including (i) a universal upper bound for the maximal number of topologically protected states that is expressed in terms of the topological entanglement entropy, (ii) a first-order perturbation bound for the topological entanglement entropy that decays superpolynomially with the size of the subsystem, and (iii) a correlation bound between an arbitrary local operator and a topological operator constructed from a set of local reduced density matrices. I also introduce exactly solvable models supported on a three-dimensional lattice that can be used as a reliable quantum memory.
Distinguishing maximally entangled states by one-way local operations and classical communication
NASA Astrophysics Data System (ADS)
Zhang, Zhi-Chao; Feng, Ke-Qin; Gao, Fei; Wen, Qiao-Yan
2015-01-01
In this paper, we mainly study the local indistinguishability of mutually orthogonal bipartite maximally entangled states. We construct sets of fewer than d orthogonal maximally entangled states that cannot be distinguished by one-way local operations and classical communication (LOCC) in the Hilbert space of d ⊗ d. The proof, based on the Fourier transform of an additive group, is very simple but quite effective. Simultaneously, our results give a general unified upper bound for the minimum number of one-way LOCC indistinguishable maximally entangled states. This improves previous results, which only showed sets of N ≥ d - 2 such states. Finally, our results also show that previous conjectures in Zhang et al. [Z.-C. Zhang, Q.-Y. Wen, F. Gao, G.-J. Tian, and T.-Q. Cao, Quant. Info. Proc. 13, 795 (2014), 10.1007/s11128-013-0691-9] are indeed correct.
NASA Astrophysics Data System (ADS)
Adhikari, Dhruba R.; Kartsatos, Athanassios G.
2008-12-01
Let X be a real reflexive Banach space with dual X*. Let L: X ⊇ D(L) → X* be densely defined, linear and maximal monotone. Let T: X ⊇ D(T) → 2^{X*}, with 0 ∈ D(T) and 0 ∈ T(0), be strongly quasibounded and maximal monotone, and C: X ⊇ D(C) → X* bounded, demicontinuous and of type (S+) with respect to D(L). A new topological degree theory has been developed for the sum L+T+C. This degree theory is an extension of the Berkovits-Mustonen theory (for T = 0) and an improvement of the work of Addou and Mermri (for T: X → 2^{X*} bounded). Unbounded maximal monotone operators with 0 ∈ int D(T) are strongly quasibounded and may be used with the new degree theory.
Maximizing strain in miniaturized dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert
2015-04-01
We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found for example in AMI's haptic feedback devices, or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anisotropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently, smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, with half the displacement that can be obtained with a dead load. We extend our fabrication process to show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.
Factors affecting maximal momentary grip strength.
Martin, S; Neale, G; Elia, M
1985-03-01
Maximal voluntary grip strength has been measured in normal adults aged 18-70 years (17 f, 18 m) and compared with other indices of body muscle mass. Grip strength (dominant side) was directly proportional to creatinine excretion (r = 0.81); to forearm muscle area (r = 0.73); to upper arm muscle area (r = 0.71) and to lean body mass (r = 0.65). Grip strength relative to forearm muscle area decreased with age. The study of a subgroup of normal subjects revealed a small but significant postural and circadian effect on grip strength. The effect on maximal voluntary grip strength of sedatives in elderly subjects undergoing routine endoscopy (n = 6), and of acute infections in otherwise healthy individuals (n = 6), severe illness in patients requiring intensive care (n = 6), chronic renal failure (n = 7) and anorexia nervosa (n = 6) has been assessed. Intravenous diazepam and Buscopan produced a 50 per cent reduction in grip strength, which returned to normal within the next 2-3 h. Acute infections reduced grip strength by a mean of 35 per cent and severe illness in patients in intensive care by 60 per cent. In patients with chronic renal failure grip strength was 80-85 per cent of that predicted from forearm muscle area (P < 0.05). In anorectic patients the values were appropriate for their forearm muscle area. Nevertheless, nutritional rehabilitation of one anorectic patient did not lead to a consistent improvement in grip strength. PMID:3926728
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that match their foraging habitats. By using high-speed imaging and a spreading power law, we find that the glue viscosity varies over five orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10^5-10^6 cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulating polymer-salt interactions to control viscosity can provide a simple mechanism for designing humidity-responsive smart adhesives. PMID:26513350
Robust estimation by expectation maximization algorithm
NASA Astrophysics Data System (ADS)
Koch, Karl Rudolf
2013-02-01
A mixture of normal distributions is assumed for the observations of a linear model. The first component of the mixture represents the measurements without gross errors, while each of the remaining components gives the distribution for an outlier. Missing data are introduced to deliver the information as to which observation belongs to which component. The unknown location parameters and the unknown scale parameter of the linear model are estimated by the EM algorithm, which is applied iteratively. The E (expectation) step of the algorithm determines the expected value of the likelihood function given the observations and the current estimate of the unknown parameters, while the M (maximization) step computes new estimates by maximizing the expectation of the likelihood function. In comparison to Huber's M-estimation, the EM algorithm not only identifies outliers by assigning small weights to large residuals but also estimates the outliers themselves. They can be corrected by the parameters of the linear model, freed from the distortions caused by gross errors. Monte Carlo methods with random variates from the normal distribution then give expectations, variances, covariances and confidence regions for functions of the parameters estimated by taking care of the outliers. The method is demonstrated by the analysis of measurements with gross errors from a laser scanner.
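The two-component scheme described above can be sketched for the one-dimensional case: an inlier normal component plus a broad outlier component sharing the same location, with a robust (median/MAD) initialization. This is an illustrative simplification of the paper's linear-model setting; the component weight, scale ratio, and iteration count are assumptions:

```python
import math
from statistics import median

def robust_mean_em(xs, outlier_scale=10.0, w_out=0.1, iters=50):
    """EM for a two-component normal mixture: inliers ~ N(mu, sigma^2),
    outliers ~ N(mu, (outlier_scale*sigma)^2) with prior weight w_out.
    Returns robust estimates (mu, sigma)."""
    mu = median(xs)
    sigma = max(1.4826 * median(abs(x - mu) for x in xs), 1e-9)
    for _ in range(iters):
        # E step: posterior probability that each observation is an inlier
        s_out = outlier_scale * sigma
        w = []
        for x in xs:
            p_in = (1.0 - w_out) * math.exp(-0.5 * ((x - mu) / sigma) ** 2) / sigma
            p_out = w_out * math.exp(-0.5 * ((x - mu) / s_out) ** 2) / s_out
            # Both densities can underflow far in the tails; treat as outlier
            w.append(p_in / (p_in + p_out) if p_in + p_out > 0.0 else 0.0)
        # M step: inlier-weighted location and scale
        tot = sum(w)
        mu = sum(wi * x for wi, x in zip(w, xs)) / tot
        var = sum(wi * (x - mu) ** 2 for wi, x in zip(w, xs)) / tot
        sigma = max(math.sqrt(var), 1e-9)
    return mu, sigma
```

Unlike a plain mean, the estimate is driven by the inlier component, so a single gross error is effectively ignored.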
Maximal lactate steady state in Judo
de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio
2014-01-01
Summary. Background: The purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo-specific incremental test (JSIT) for aerobic demand evaluation. Methods: To test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7 cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental judo-specific test to assess the RCT and performed a 30-minute MLSS test, with both tests mimicking the Uchi-komi drills. Results: The intensity at RCT measured on the JSIT was not significantly different from the MLSS (p = 0.40). In addition, a high and significant correlation between MLSS and RCT was observed (r = 0.90, p = 0.002), as well as a high agreement. Conclusions: RCT measured during the JSIT is a valid procedure to measure aerobic demand, respecting the ecological validity of judo. PMID:25332923
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
Steps to Independent Living Series.
ERIC Educational Resources Information Center
Lobb, Nancy
This set of six activity books and a teacher's guide is designed to help students from eighth grade to adulthood with special needs to learn independent living skills. The activity books have a reading level of 2.5 and address: (1) "How to Get Well When You're Sick or Hurt," including how to take a temperature, see a doctor, and use medicines…
Calculating dispersion interactions using maximally localized Wannier functions.
Andrinopoulos, Lampros; Hine, Nicholas D M; Mostofi, Arash A
2011-10-21
We investigate a recently developed approach [P. L. Silvestrelli, Phys. Rev. Lett. 100, 053002 (2008); J. Phys. Chem. A 113, 5224 (2009)] that uses maximally localized Wannier functions to evaluate the van der Waals contribution to the total energy of a system calculated with density-functional theory. We test it on a set of atomic and molecular dimers of increasing complexity (argon, methane, ethene, benzene, phthalocyanine, and copper phthalocyanine) and demonstrate that the method, as originally proposed, has a number of shortcomings that hamper its predictive power. In order to overcome these problems, we have developed and implemented a number of improvements to the method and show that these modifications give rise to calculated binding energies and equilibrium geometries that are in closer agreement to results of quantum-chemical coupled-cluster calculations. PMID:22029295
ERIC Educational Resources Information Center
Elkins, Aaron J.
1977-01-01
The author questions the extent to which educators have relied on "relevance" and learner participation in objective-setting in the past decade. He describes a useful approach to learner-oriented evaluation in which content relevance was not judged by participants until after they had been exposed to it. (MF)
Alkner, Björn A; Berg, Hans E; Kozlovskaya, Inessa; Sayenko, Dimitri; Tesch, Per A
2003-09-01
The efficacy of a resistance exercise paradigm, using a gravity-independent flywheel principle, was examined in four men subjected to 110 days of confinement (simulation of flight of international crew on space station; SFINCSS-99). Subjects performed six upper- and lower-body exercises (calf raise, squat, back extension, seated row, lateral shoulder raise, biceps curl) 2-3 times weekly during the confinement. The exercise regimen consisted of four sets of ten repetitions of each exercise at estimated 80-100% of maximal effort. Work was measured and recorded in each exercise session. Maximal voluntary isometric force in the calf press, squat and back extension, was assessed at three different joint angles before and after confinement. Overall, the training load (work) increased in all subjects (range 16-108%) over the course of the intervention. Maximal voluntary isometric force was unchanged following confinement. Although the perceived level of strain and comfort varied between exercises and among individuals, the results of the present study suggest this resistance exercise regimen is effective in maintaining or even increasing performance and maximal force output during long-term confinement. These findings should be considered in the design of resistance exercise hardware and prescriptions to be employed on the International Space Station. PMID:12783231
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design.
Zhang, Shaoqiang; Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. An efficient clustering algorithm is therefore desired for grouping motifs that belong to the same groups, separating motifs that belong to different groups, and discarding spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed, using maximal cliques and sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so it can be a useful complement of the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html. PMID:27487245
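The core enumeration step, finding maximal cliques in the motif-similarity graph, is classically done with the Bron-Kerbosch algorithm, which CLIMP parallelizes and post-processes. A minimal serial sketch (the adjacency encoding is illustrative, not CLIMP's actual code):

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques (no pivoting;
    fine for small similarity graphs). adj maps vertex -> set of neighbors."""
    cliques = []

    def expand(r, p, x):
        # r: current clique; p: candidates; x: already-processed vertices
        if not p and not x:
            cliques.append(frozenset(r))
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}

    expand(set(), set(adj), set())
    return cliques
```

Motifs sharing a maximal clique are then merged into one cluster; a parallel version partitions the candidate set across workers.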
Hofer, Scott M; Piccinin, Andrea M
2009-06-01
Replication of research findings across independent longitudinal studies is essential for a cumulative and innovative developmental science. Meta-analysis of longitudinal studies is often limited by the amount of published information on particular research questions, the complexity of longitudinal designs and the sophistication of analyses, and practical limits on full reporting of results. In many cases, cross-study differences in sample composition and measurements impede or lessen the utility of pooled data analysis. A collaborative, coordinated analysis approach can provide a broad foundation for cumulating scientific knowledge by facilitating efficient analysis of multiple studies in ways that maximize comparability of results and permit evaluation of study differences. The goal of such an approach is to maximize opportunities for replication and extension of findings across longitudinal studies through open access to analysis scripts and output for published results, permitting modification, evaluation, and extension of alternative statistical models and application to additional data sets. Drawing on the cognitive aging literature as an example, the authors articulate some of the challenges of meta-analytic and pooled-data approaches and introduce a coordinated analysis approach as an important avenue for maximizing the comparability, replication, and extension of results from longitudinal studies. PMID:19485626
Heart Rate Recovery Is Impaired After Maximal Exercise Testing in Children with Sickle Cell Anemia
Alvarado, Anthony M.; Ward, Kendra M.; Muntz, Devin S.; Thompson, Alexis A.; Rodeghier, Mark; Fernhall, Bo; Liem, Robert I.
2014-01-01
Objective To examine heart rate recovery (HRR) as an indicator of autonomic nervous system (ANS) dysfunction following maximal exercise testing in children and young adults with sickle cell anemia (SCA). Study design Recovery phase heart rate (HR) in the first 5 minutes following maximal exercise testing in 60 subjects with SCA and 30 matched controls without SCA was assessed. The difference between maximal HR and HR at both 1-minute (ΔHR1min) and 2-minute (ΔHR2min) recovery was our primary outcome. Results Compared with controls, subjects with SCA demonstrated significantly smaller mean ΔHR1min (23 bpm, 95% CI [20, 26] vs. 32 bpm, 95% CI [26, 37], p = 0.006) and ΔHR2min (39 bpm, 95% CI [36, 43] vs. 48 bpm, 95% CI [42, 53], p = 0.011). Subjects with SCA also showed smaller mean changes in HR from peak HR to 1 minute, from 1 minute to 2 minutes and from 2 through 5 minutes of recovery by repeated measures testing. In a multivariable regression model, older age was independently associated with smaller ΔHR1min in subjects with SCA. Cardiopulmonary fitness and hydroxyurea use, however, were not independent predictors of ΔHR1min. Conclusions Children with SCA demonstrate impaired HRR following maximal exercise. Reduced post-exercise HRR in SCA suggests impaired parasympathetic function, which may become progressively worse with age, in this population. PMID:25477159
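The primary outcome above is a simple difference between peak heart rate and recovery heart rate at fixed time points. A minimal sketch of that computation (function name and example values are illustrative, not taken from the study):

```python
def heart_rate_recovery(peak_hr, recovery_hrs):
    """Compute HR recovery deltas from the peak exercise HR and a list of
    per-minute recovery heart rates (index 0 = 1 minute post-exercise)."""
    return [peak_hr - hr for hr in recovery_hrs]

# Illustrative values chosen to land near the reported SCA group means:
deltas = heart_rate_recovery(190, [167, 151])
# deltas[0] plays the role of ΔHR1min, deltas[1] of ΔHR2min
# → [23, 39]
```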
Seizures and Teens: Maximizing Health and Safety
ERIC Educational Resources Information Center
Sundstrom, Diane
2007-01-01
The job of parents and caregivers is to help their children become happy, healthy, and productive members of society, balancing the desire to protect their children against the children's need to become independent young adults. This can be a struggle for parents of teens with seizures, given the many challenges these teens may face. Teenagers…
Dispatch Scheduling to Maximize Exoplanet Detection
NASA Astrophysics Data System (ADS)
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
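The target-selection step described above — scoring candidate stars with a weighting function and dispatching the best one — can be sketched as follows. The specific weight components and target fields here are assumptions for illustration, not the MINERVA implementation:

```python
import math

def weight(target, now):
    """Score a candidate target; higher is better. Components (illustrative):
    cadence overdue-ness, altitude above the horizon, and brightness."""
    overdue = (now - target["last_obs"]) / target["cadence"]   # >1 means overdue
    altitude = max(target["altitude_deg"], 0.0) / 90.0         # 0..1
    brightness = 10 ** (-0.4 * target["vmag"])                 # flux-like term
    return overdue * altitude * (1 + math.log10(1 + brightness))

def dispatch(targets, now):
    """Pick the highest-weighted target currently observable (hypothetical
    20-degree altitude limit); returns None if nothing qualifies."""
    observable = [t for t in targets if t["altitude_deg"] > 20]
    return max(observable, key=lambda t: weight(t, now), default=None)

targets = [
    {"name": "HD 1", "last_obs": 0.0, "cadence": 1.0, "altitude_deg": 55, "vmag": 5.0},
    {"name": "HD 2", "last_obs": 0.5, "cadence": 1.0, "altitude_deg": 70, "vmag": 4.0},
]
best = dispatch(targets, now=1.0)  # HD 1 wins: it is the more overdue target
```

Re-evaluating the weights at each dispatch decision is what makes the scheduler dynamic: cloud cover or a missed observation simply changes the scores at the next call.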
Characterizing maximally singular phase-space distributions
NASA Astrophysics Data System (ADS)
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
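For reference, the Glauber-Sudarshan representation discussed above expands the density operator diagonally over coherent states:

```latex
\hat{\rho} = \int d^2\alpha \, P(\alpha) \, |\alpha\rangle\langle\alpha| ,
```

where the state is called nonclassical exactly when $P(\alpha)$ cannot be interpreted as a classical probability density, i.e., when it takes negative values or is more singular than a delta function.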
SETS. Set Equation Transformation System
Worrel, R.B.
1992-01-13
SETS is used for symbolic manipulation of Boolean equations, particularly the reduction of equations by the application of Boolean identities. It is a flexible and efficient tool for performing probabilistic risk analysis (PRA), vital area analysis, and common cause analysis. The equation manipulation capabilities of SETS can also be used to analyze noncoherent fault trees and determine prime implicants of Boolean functions, to verify circuit design implementation, to determine minimum cost fire protection requirements for nuclear reactor plants, to obtain solutions to combinatorial optimization problems with Boolean constraints, and to determine the susceptibility of a facility to unauthorized access through nullification of sensors in its protection system.
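The identity-based reduction SETS performs can be illustrated with one such identity, absorption ($X + XY = X$), applied to a sum-of-products expression. This is a generic sketch of the technique, not the SETS tool itself:

```python
def absorb(terms):
    """Reduce a sum-of-products Boolean expression by absorption:
    X + X*Y = X, i.e. drop any product term that is a strict superset
    of another term. Each term is a set of literals, e.g. {'a', 'b'}
    meaning (a AND b); the list of terms is an OR of products."""
    terms = [frozenset(t) for t in terms]
    kept = [t for t in terms if not any(other < t for other in terms)]
    # De-duplicate repeated terms (idempotence: X + X = X).
    return sorted(set(kept), key=sorted)

# a + a*b + b*c  reduces to  a + b*c, since a*b is absorbed by a:
reduced = absorb([{"a"}, {"a", "b"}, {"b", "c"}])
# → [frozenset({'a'}), frozenset({'b', 'c'})]
```

Repeatedly applying such identities until no term can be dropped is the core loop of cut-set reduction; finding *prime implicants*, as SETS does for noncoherent fault trees, additionally requires consensus-style identities beyond absorption.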
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…