Statistical mechanics of maximal independent sets
NASA Astrophysics Data System (ADS)
Dall'Asta, Luca; Pin, Paolo; Ramezanpour, Abolfazl
2009-12-01
The graph theoretic concept of maximal independent set arises in several practical problems in computer science as well as in game theory. A maximal independent set is defined by the set of occupied nodes that satisfy some packing and covering constraints. It is known that finding minimum- and maximum-density maximal independent sets are hard optimization problems. In this paper, we use the cavity method of statistical physics and Monte Carlo simulations to study the corresponding constraint satisfaction problem on random graphs. We obtain the entropy of maximal independent sets within the replica symmetric and one-step replica symmetry breaking frameworks, shedding light on the metric structure of the landscape of solutions and suggesting a class of possible algorithms. This is of particular relevance for the application to the study of strategic interactions in social and economic networks, where maximal independent sets correspond to pure Nash equilibria of a graphical game of public goods allocation.
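The packing and covering constraints that define a maximal independent set can be checked directly; a minimal sketch (the adjacency-dict graph representation is an illustrative choice):

```python
def is_maximal_independent_set(adj, s):
    """Check the two constraints defining a maximal independent set.

    adj: dict mapping each node to the set of its neighbors.
    s:   candidate set of occupied nodes.
    """
    # Packing: no two occupied nodes are adjacent.
    if any(v in s for u in s for v in adj[u]):
        return False
    # Covering: every unoccupied node has an occupied neighbor,
    # so no further node can be added.
    return all(adj[u] & s for u in adj if u not in s)

# 5-cycle: {0, 2} is maximal; {0} is independent but not maximal.
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
```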
Utilizing Maximal Independent Sets as Dominating Sets in Scale-Free Networks
NASA Astrophysics Data System (ADS)
Derzsy, N.; Molnar, F., Jr.; Szymanski, B. K.; Korniss, G.
Dominating sets provide a key solution to various critical problems in networked systems, such as detecting, monitoring, or controlling the behavior of nodes. Motivated by graph theory literature [Erdos, Israel J. Math. 4, 233 (1966)], we studied maximal independent sets (MIS) as dominating sets in scale-free networks. We investigated the scaling behavior of the size of MIS in artificial scale-free networks with respect to multiple topological properties (size, average degree, power-law exponent, assortativity), evaluated its resilience to network damage resulting from random failure or targeted attack [Molnar et al., Sci. Rep. 5, 8321 (2015)], and compared its efficiency to previously proposed dominating set selection strategies. We showed that, despite its small set size, MIS provides very high resilience against network damage. Using extensive numerical analysis on both synthetic and real-world (social, biological, technological) network samples, we demonstrate that our method effectively satisfies four essential requirements of dominating sets for their practical applicability on large-scale real-world systems: 1.) small set size, 2.) minimal network information required for their construction scheme, 3.) fast and easy computational implementation, and 4.) resiliency to network damage. Supported by DARPA, DTRA, and NSF.
Simple neural-like p systems for maximal independent set selection.
Xu, Lei; Jeavons, Peter
2013-06-01
Membrane systems (P systems) are distributed computing models inspired by living cells where a collection of processors jointly achieves a computing task. The problem of maximal independent set (MIS) selection in a graph is to choose a set of nonadjacent nodes to which no further nodes can be added. In this letter, we design a class of simple neural-like P systems to solve the MIS selection problem efficiently in a distributed way. This new class of systems possesses two features that are attractive for both distributed computing and membrane computing: first, the individual processors do not need any information about the overall size of the graph; second, they communicate using only one-bit messages.
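The distributed flavour of MIS selection can be illustrated with a Luby-style randomized sketch (this is not the P-system construction itself, which communicates with one-bit messages): in each round every active node draws a random number, joins the MIS if its draw is a local minimum among active neighbours, and winners' neighbours drop out.

```python
import random

def luby_mis(adj, rng):
    """Randomized MIS selection in synchronous rounds (Luby-style sketch)."""
    active = set(adj)
    mis = set()
    while active:
        draw = {u: rng.random() for u in active}
        # A node joins the MIS if it holds a local minimum among active neighbours.
        winners = {u for u in active
                   if all(draw[u] < draw[v] for v in adj[u] if v in active)}
        mis |= winners
        # Winners and their neighbours leave the computation.
        active -= winners | {v for u in winners for v in adj[u]}
    return mis

path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
result = luby_mis(path4, random.Random(0))
```

Each round removes at least the global minimum draw and its neighbours, so the loop always terminates.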
1987-08-11
Technical Report No. 50, "Fibonacci and Nonadjacent Numbers: On the Characterization of Fibonacci Numbers as Maximal Independent Sets of..." (U), S. El-Basil, Georgia Univ. Athens, Dept. of Chemistry, 11 Aug 1987. Keywords: graph theory, Fibonacci numbers, nonadjacent numbers, king patterns.
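The Fibonacci connection in the title has a compact computational face: the independent sets of a path on n nodes (equivalently, binary strings with no two adjacent ones) are counted by a Fibonacci number. A sketch:

```python
def count_independent_sets_path(n):
    """Count all independent sets (including the empty set) of a path on n nodes.

    empty/occ: counts of independent sets of the first i nodes with node i
    unoccupied/occupied; the total follows the Fibonacci recurrence.
    """
    empty, occ = 1, 0  # zero nodes: just the empty set
    for _ in range(n):
        empty, occ = empty + occ, empty
    return empty + occ

def brute_force(n):
    """Exhaustive check over bitmasks with no two adjacent ones."""
    return sum(all(not (s >> i & 1 and s >> (i + 1) & 1) for i in range(n - 1))
               for s in range(1 << n))
```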
Maximally Entangled Set of Multipartite Quantum States
NASA Astrophysics Data System (ADS)
de Vicente, J. I.; Spee, C.; Kraus, B.
2013-09-01
Entanglement is a resource in quantum information theory when state manipulation is restricted to local operations assisted by classical communication (LOCC). It is therefore of paramount importance to decide which LOCC transformations are possible and, particularly, which states are maximally useful under this restriction. While the bipartite maximally entangled state is well known (it is the only state that cannot be obtained from any other and, at the same time, it can be transformed to any other by LOCC), no such state exists in the multipartite case. In order to cope with this fact, we introduce here the notion of the maximally entangled set (MES) of n-partite states. This is the set of states which are maximally useful under LOCC manipulation; i.e., any state outside of this set can be obtained via LOCC from one of the states within the set and no state in the set can be obtained from any other state via LOCC. We determine the MES for states of three and four qubits and provide a simple characterization for them. In both cases, infinitely many states are required. However, while the MES is of measure zero for 3-qubit states, almost all 4-qubit states are in the MES. This is because, in contrast to the 3-qubit case, deterministic LOCC transformations are almost never possible among fully entangled four-partite states. We determine the measure-zero subset of the MES of LOCC convertible states. This is the only relevant class of states for entanglement manipulation.
Quantum state space as a maximal consistent set
NASA Astrophysics Data System (ADS)
Tabia, Gelo Noel
2012-02-01
Measurement statistics in quantum theory are obtained from the Born rule and the uniqueness of the probability measure it assigns through quantum states is guaranteed by Gleason's theorem. Thus, a possible systematic way of exploring the geometry of quantum state space expresses quantum states in terms of outcome probabilities of a symmetric informationally complete measurement. This specific choice for representing quantum states is motivated by how the associated probability space provides a natural venue for characterizing the set of quantum states as a geometric construct called a maximal consistent set. We define the conditions for consistency and maximality of a set, provide some examples of maximal consistent sets and attempt to deduce the steps for building up a maximal consistent set of probability distributions equivalent to Hilbert space. In particular, we demonstrate how the reconstruction procedure works for qutrits and observe how it reveals an elegant underlying symmetry among five SIC-POVMs and a complete set of mutually unbiased bases, known in finite affine geometry as the Hesse configuration.
Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance
NASA Astrophysics Data System (ADS)
Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu
Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction of patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets; that is, on how patterns are distributed in the feature space. As one reason, we have pointed out that ICA features are obtained by increasing only their independence, even when class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to examine how maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. From the results, we show that better recognition accuracy is obtained using our proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by maximizing the Mahalanobis distance alone.
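The class-separation term that SICA maximizes is a Mahalanobis distance between features of different classes; a self-contained two-dimensional sketch with a shared covariance (pure Python to stay dependency-free; the pooled-covariance reading is an assumption):

```python
def mahalanobis_sq(mu_a, mu_b, cov):
    """Squared Mahalanobis distance (mu_a - mu_b)^T cov^-1 (mu_a - mu_b)
    for 2-D features with a shared 2x2 covariance matrix."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]  # 2x2 matrix inverse
    dx = [mu_a[0] - mu_b[0], mu_a[1] - mu_b[1]]
    return sum(dx[i] * inv[i][j] * dx[j] for i in range(2) for j in range(2))
```

With the identity covariance this reduces to the squared Euclidean distance; anisotropic covariance down-weights separation along high-variance directions.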
On revenue maximization for selling multiple independently distributed items
Li, Xinye; Yao, Andrew Chi-Chih
2013-01-01
Consider the revenue-maximizing problem in which a single seller wants to sell k different items to a single buyer, who has independently distributed values for the items with additive valuation. The k = 1 case was completely resolved by Myerson’s classical work in 1981, whereas for larger k the problem has been the subject of much research effort ever since. Recently, Hart and Nisan analyzed two simple mechanisms: selling the items separately, or selling them as a single bundle. They showed that selling separately guarantees at least a c/log^2 k fraction of the optimal revenue; and for identically distributed items, bundling yields at least a c/log k fraction of the optimal revenue. In this paper, we prove that selling separately guarantees at least a c/log k fraction of the optimal revenue, whereas for identically distributed items, bundling yields at least a constant fraction of the optimal revenue. These bounds are tight (up to a constant factor), settling the open questions raised by Hart and Nisan. The results are valid for arbitrary probability distributions without restrictions. Our results also have implications on other interesting issues, such as monotonicity and randomization of selling mechanisms. PMID:23798394
On revenue maximization for selling multiple independently distributed items.
Li, Xinye; Yao, Andrew Chi-Chih
2013-07-09
Consider the revenue-maximizing problem in which a single seller wants to sell k different items to a single buyer, who has independently distributed values for the items with additive valuation. The k = 1 case was completely resolved by Myerson's classical work in 1981, whereas for larger k the problem has been the subject of much research efforts ever since. Recently, Hart and Nisan analyzed two simple mechanisms: selling the items separately, or selling them as a single bundle. They showed that selling separately guarantees at least a c/log^2 k fraction of the optimal revenue; and for identically distributed items, bundling yields at least a c/log k fraction of the optimal revenue. In this paper, we prove that selling separately guarantees at least c/log k fraction of the optimal revenue, whereas for identically distributed items, bundling yields at least a constant fraction of the optimal revenue. These bounds are tight (up to a constant factor), settling the open questions raised by Hart and Nisan. The results are valid for arbitrary probability distributions without restrictions. Our results also have implications on other interesting issues, such as monotonicity and randomization of selling mechanisms.
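For intuition about the separate-sale versus bundling comparison, a Monte Carlo sketch for k = 2 items with i.i.d. uniform values (the distribution, price grid, and sample size are illustrative choices, not the paper's analysis):

```python
import random

def best_posted_price_revenue(values, prices):
    """Empirical revenue of the best posted price: max over p of p * P(value >= p)."""
    n = len(values)
    return max(p * sum(v >= p for v in values) / n for p in prices)

rng = random.Random(42)
n = 20_000
v1 = [rng.random() for _ in range(n)]  # i.i.d. uniform [0, 1) item values
v2 = [rng.random() for _ in range(n)]
grid = [i / 100 for i in range(5, 200, 5)]  # candidate posted prices

# Selling separately: price each item on its own and add the revenues.
separate = best_posted_price_revenue(v1, grid) + best_posted_price_revenue(v2, grid)
# Bundling: a single price for the additive value of both items.
bundle = best_posted_price_revenue([a + b for a, b in zip(v1, v2)], grid)
```

For two i.i.d. uniform items the separate revenue is about 0.50 (price 0.5 per item), while the optimal bundle price (around 0.82) yields roughly 0.54, so bundling comes out ahead in this particular instance.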
Maximizing the Independence of Deaf-Blind Teenagers.
ERIC Educational Resources Information Center
Venn, J. J.; Wadler, F.
1990-01-01
The Independent Living Project for Deaf/Blind Youth emphasized the teaching of home management, personal management, social/emotional skills, work skills, and communication skills to increase low-functioning teenagers' autonomy. The project included an independent living apartment in which a video monitoring system was used for indirect…
The maximally entangled set of 4-qubit states
Spee, C.; Kraus, B.; Vicente, J. I. de
2016-05-15
Entanglement is a resource to overcome the natural restriction of operations used for state manipulation to Local Operations assisted by Classical Communication (LOCC). Hence, a bipartite maximally entangled state is a state which can be transformed deterministically into any other state via LOCC. In the multipartite setting no such state exists. Instead, a whole set, the Maximally Entangled Set (MES) of states, which we recently introduced, is required. This set has the property that any state outside of it can be obtained via LOCC from one of the states within the set, while no state in the set can be obtained from any other state via LOCC. Recently, we studied LOCC transformations among pure multipartite states and derived the MES for three and generic four qubit states. Here, we consider the non-generic four qubit states and analyze their properties regarding local transformations. As even the most coarse-grained classification of four qubit states, due to Stochastic LOCC (SLOCC), is much richer than in the case of three qubits, the investigation of possible LOCC transformations is correspondingly more difficult. We prove that most SLOCC classes show behavior similar to the generic states; however, we also identify three classes with very distinct properties. The first consists of the GHZ and W class, where any state can be transformed into some other state non-trivially. In particular, there exists no isolation. On the other hand, there also exist classes where all states are isolated. Last but not least, we identify an additional class of states whose transformation properties differ drastically from all the other classes. Although the possibility of transforming states into local-unitary inequivalent states by LOCC turns out to be very rare, we identify those states (with the exception of the latter class) which are in the MES and those which can be obtained (transformed) non-trivially from (into) other states.
The maximally entangled set of 4-qubit states
NASA Astrophysics Data System (ADS)
Spee, C.; de Vicente, J. I.; Kraus, B.
2016-05-01
Entanglement is a resource to overcome the natural restriction of operations used for state manipulation to Local Operations assisted by Classical Communication (LOCC). Hence, a bipartite maximally entangled state is a state which can be transformed deterministically into any other state via LOCC. In the multipartite setting no such state exists. Instead, a whole set, the Maximally Entangled Set (MES) of states, which we recently introduced, is required. This set has the property that any state outside of it can be obtained via LOCC from one of the states within the set, while no state in the set can be obtained from any other state via LOCC. Recently, we studied LOCC transformations among pure multipartite states and derived the MES for three and generic four qubit states. Here, we consider the non-generic four qubit states and analyze their properties regarding local transformations. As even the most coarse-grained classification of four qubit states, due to Stochastic LOCC (SLOCC), is much richer than in the case of three qubits, the investigation of possible LOCC transformations is correspondingly more difficult. We prove that most SLOCC classes show behavior similar to the generic states; however, we also identify three classes with very distinct properties. The first consists of the GHZ and W class, where any state can be transformed into some other state non-trivially. In particular, there exists no isolation. On the other hand, there also exist classes where all states are isolated. Last but not least, we identify an additional class of states whose transformation properties differ drastically from all the other classes. Although the possibility of transforming states into local-unitary inequivalent states by LOCC turns out to be very rare, we identify those states (with the exception of the latter class) which are in the MES and those which can be obtained (transformed) non-trivially from (into) other states.
Finding the maximal membership in a fuzzy set of an element from another fuzzy set
NASA Astrophysics Data System (ADS)
Yager, Ronald R.
2010-11-01
The problem of finding the maximal membership grade in a fuzzy set of an element from another fuzzy set is an important class of optimisation problem, manifested in the real world by situations in which we try to find the optimal financial satisfaction we can get from a socially responsible investment. Here, we provide a solution to this problem. We then look at the proposed solution for fuzzy sets with various types of membership grades: ordinal, interval-valued and intuitionistic.
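On a discrete universe, the quantity studied here can be read as the supremum of the pointwise minimum of the two membership functions; a minimal sketch for ordinary (type-1) membership grades, where the possibility-style reading and the example labels are assumptions:

```python
def maximal_membership(a, b):
    """sup_x min(A(x), B(x)) over a discrete universe.

    a, b: dicts mapping elements to membership grades in [0, 1];
    a is the target fuzzy set, b the fuzzy set the element comes from.
    """
    universe = set(a) | set(b)
    return max(min(a.get(x, 0.0), b.get(x, 0.0)) for x in universe)

# Hypothetical example: financial satisfaction vs. socially responsible options.
satisfaction = {"low": 0.2, "mid": 0.7, "high": 1.0}
responsible = {"low": 0.9, "mid": 0.8, "high": 0.3}
```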
Maximum independent set on diluted triangular lattices
NASA Astrophysics Data System (ADS)
Fay, C. W., IV; Liu, J. W.; Duxbury, P. M.
2006-05-01
Core percolation and maximum independent set on random graphs have recently been characterized using the methods of statistical physics. Here we present a statistical physics study of these problems on bond diluted triangular lattices. Core percolation critical behavior is found to be consistent with the standard percolation values, though there are strong finite size effects. A transfer matrix method is developed and applied to find accurate values of the density and degeneracy of the maximum independent set on lattices of limited width but large length. An extrapolation of these results to the infinite lattice limit yields high precision results, which are tabulated. These results are compared to results found using both vertex based and edge based local probability recursion algorithms, which have proven useful in the analysis of hard computational problems, such as the satisfiability problem.
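The transfer-matrix idea of propagating a small amount of state along one direction of the lattice reduces, in the simplest one-dimensional case, to a recursion along a path; a sketch computing the size and degeneracy of the maximum independent set there (strip lattices track the occupation of a whole row of sites instead of a single one):

```python
def max_independent_set_path(n):
    """Size and degeneracy of the maximum independent set of a path on n nodes.

    State = occupation of the current node; each step combines (size, count)
    pairs, adding counts only when the maximal sizes tie.
    """
    def merge(a, b):  # combine two (size, count) alternatives
        if a[0] != b[0]:
            return max(a, b)
        return (a[0], a[1] + b[1])

    free, occ = (0, 1), (1, 1)  # one node: unoccupied / occupied
    for _ in range(n - 1):
        free, occ = merge(free, occ), (free[0] + 1, free[1])
    return merge(free, occ)
```

For a 4-node path the maximum independent sets are {0,2}, {0,3}, {1,3}: size 2, degeneracy 3.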
Counting independent sets using the Bethe approximation
Chertkov, Michael; Chandrasekaran, V; Gamarnik, D; Shah, D; Shin, J
2009-01-01
The authors consider the problem of counting the number of independent sets, or the partition function of a hard-core model, in a graph. The problem in general is computationally hard (#P-hard). They study the quality of the approximation provided by the Bethe free energy. Belief propagation (BP) is a message-passing algorithm that can be used to compute fixed points of the Bethe approximation; however, BP is not always guaranteed to converge. As the first result, they propose a simple message-passing algorithm that converges to a BP fixed point for any graph. They find that their algorithm converges to within a multiplicative error 1 + ε of a fixed point in O(n^2 ε^-4 log^3(n ε^-1)) iterations for any bounded-degree graph of n nodes. In a nutshell, the algorithm can be thought of as a modification of BP with 'time-varying' message passing. Next, they analyze the resulting error in the number of independent sets provided by such a fixed point of the Bethe approximation. Using the loop calculus approach recently developed by Chertkov and Chernyak, they establish that for any bounded-degree graph with large enough girth, the error is O(n^-γ) for some γ > 0. As an application, they find that for a random 3-regular graph, the Bethe approximation of the log-partition function (the log of the number of independent sets) is within o(1) of the correct log-partition function - this is quite surprising, as previous physics-based predictions were expecting an error of o(n). In sum, their results provide a systematic way to find Bethe fixed points for any graph quickly and allow for estimating the error in the Bethe approximation using novel combinatorial techniques.
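As a concrete illustration of the objects involved, a minimal sketch of the Bethe estimate for the hard-core partition function at activity λ = 1 (plain synchronous BP rather than the authors' convergent variant; on a tree the estimate is exact, which the small examples exploit):

```python
import math

def bethe_log_independent_sets(adj, iters=100, lam=1.0):
    """Bethe approximation to log(number of independent sets) of a graph.

    adj: dict node -> list of neighbours.
    Message m[(i, j)]: probability that i is unoccupied with j removed;
    the hard-core update is m(i->j) = 1 / (1 + lam * prod_k m(k->i)), k != j.
    """
    edges = [(i, j) for i in adj for j in adj[i]]
    m = {e: 0.5 for e in edges}
    for _ in range(iters):  # synchronous BP sweeps
        m = {(i, j): 1.0 / (1.0 + lam * math.prod(m[(k, i)]
             for k in adj[i] if k != j)) for (i, j) in edges}
    # Bethe free energy: node terms minus edge terms.
    log_z = sum(math.log(1.0 + lam * math.prod(m[(k, i)] for k in adj[i]))
                for i in adj)
    log_z -= sum(math.log(1.0 - (1.0 - m[(i, j)]) * (1.0 - m[(j, i)]))
                 for (i, j) in edges if i < j)
    return log_z

# Trees, where the Bethe approximation is exact:
k2 = {0: [1], 1: [0]}                           # one edge: 3 independent sets
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # 4-node path: 8 independent sets
```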
An inability to set independent attentional control settings by hemifield.
Becker, Mark W; Ravizza, Susan M; Peltier, Chad
2015-11-01
Recent evidence suggests that people can simultaneously activate attentional control settings for two distinct colors. However, it is unclear whether both attentional control settings must operate globally across the visual field or whether each can be constrained to a particular spatial location. Using two different paradigms, we investigated participants' ability to apply independent color attentional control settings to distinct regions of space. In both experiments, participants were told to identify red letters in one hemifield and green letters in the opposite hemifield. Additionally, some trials used a "relevant distractor" - a letter that matched the opposite side's target color. In Experiment 1, eight letters (four per hemifield) appeared simultaneously for a brief time and then were masked. Relevant distractors increased the error rate and resulted in a greater number of distractor intrusions than irrelevant distractors. Similar results were observed in Experiment 2, in which red and green targets were presented in two rapid serial visual presentation streams. Relevant distractors were found to produce an attentional blink similar in magnitude to an actual target. The results of both experiments suggest that letters matching either attentional control setting were selected by attention and processed as if they were targets, providing strong evidence that both attentional control settings were applied globally rather than being constrained to a particular location.
Cardiac vagal index does not explain age-independent maximal heart rate.
Duarte, C V; Araujo, C G
2013-06-01
Cardiac vagal tone (CVT), a key determinant of resting heart rate (HR), is progressively withdrawn with incremental exercise and nearly abolished at maximal effort. While maximal HR decreases with age, there remains a large interindividual variability of results for any given age. In the present study, we hypothesized that CVT does not contribute to age-independent maximal HR. Data were obtained from 1 000 healthy subjects (719 men; 39±14 years old) who were not taking medications affecting CVT or maximal HR and who performed a clinically normal and truly maximal cardiopulmonary exercise test. CVT was estimated using the cardiac vagal index (CVI), a dimensionless ratio obtained by dividing two cardiac cycle durations (end of exercise and pre-exercise), reflecting the HR increase during a 4-s unloaded cycling test (a vagally mediated response). Maximal HR was expressed as % of that predicted by age (208 - 0.7 × age (years)). Linear regression analyses identified that CVI can explain only 1% of the variability in % age-predicted maximal HR, with a high standard error of estimate (~6.3%), indicating the absence of a true physiological cause-effect relationship. In conclusion, the influence of CVI on % age-predicted maximal HR is null in healthy subjects, suggesting distinct physiological mechanisms and a potentially complementary clinical role for these exercise-related variables. © Georg Thieme Verlag KG Stuttgart · New York.
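The two quantities in this analysis are simple ratios; a sketch in which the direction of the CVI ratio (pre-exercise cycle duration over end-of-exercise duration, so that vagally mediated HR acceleration gives CVI > 1) is an assumption read off the description above:

```python
def cardiac_vagal_index(rr_pre_ms, rr_end_ms):
    """CVI: ratio of pre-exercise to end-of-exercise cardiac cycle duration
    in the 4-s exercise test (ratio direction assumed, see text)."""
    return rr_pre_ms / rr_end_ms

def pct_age_predicted_max_hr(measured_hr, age_years):
    """Measured maximal HR as % of the age-predicted value, 208 - 0.7 * age."""
    return 100.0 * measured_hr / (208.0 - 0.7 * age_years)
```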
Buttelli, Adriana Cristine Koch; Pinto, Stephanie Santana; Schoenell, Maira Cristina Wolf; Almada, Bruna Pereira; Camargo, Liliana Kologeski; de Oliveira Conceição, Matheus; Kruel, Luiz Fernando Martins
2015-09-29
The aim of this study was to compare the effects of single-set vs. multiple-set water-based resistance training on maximal dynamic strength in young men. Twenty-one physically active young men were randomly allocated into 2 groups: a single-set group (SS, n=10) and a multiple-sets group (MS, n=11). The single-set program consisted of only 1 set of 30 s, whereas the multiple-sets program comprised 3 sets of 30 s (the rest interval between sets was 1 min 30 s). All the water-based resistance exercises were performed at maximal effort, and both groups trained twice a week for 10 weeks. Upper-body (bilateral elbow flexors and bilateral elbow extensors, peck deck and inverse peck deck) as well as lower-body (bilateral knee flexors and unilateral knee extensors) one-repetition maximum tests (1RM) were used to assess changes in muscle strength. The training-related effects were assessed using repeated-measures two-way ANOVA (α=5%). Both the SS and MS groups increased upper- and lower-body 1RM, with no differences between groups. Therefore, these data show that maximal dynamic strength significantly increases in young men after 10 weeks of training in an aquatic environment, and that the improvement in strength levels is independent of the number of sets performed.
Predicting maximal aerobic speed through set distance time-trials.
Bellenger, Clint R; Fuller, Joel T; Nelson, Maximillian J; Hartland, Micheal; Buckley, Jonathan D; Debenedictis, Thomas A
2015-12-01
Knowledge of aerobic performance capacity allows for the optimisation of training programs in aerobically dominant sports. Maximal aerobic speed (MAS) is a measure of aerobic performance; however, the time and personnel demands of establishing MAS are considerable. This study aimed to determine whether time-trials (TT), which are shorter and less onerous than traditional MAS protocols, may be used to predict MAS. 28 Australian Rules football players completed a test of MAS, followed by TTs of six different distances in random order, each separated by at least 48 h. Half of the participants completed TT distances of 1200, 1600 and 2000 m, and the others completed distances of 1400, 1800 and 2200 m. Average speed for the 1200 and 1400 m TTs were greater than MAS (P < 0.01). Average speed for 1600, 1800, 2000 and 2200 m TTs were not different from MAS (P > 0.08). Average speed for all TT distances correlated with MAS (r = 0.69-0.84; P < 0.02), but there was a negative association between the difference in average TT speed and MAS with increasing TT distance (r = -0.79; P < 0.01). Average TT speed over the 2000 m distance exhibited the best agreement with MAS. MAS may be predicted from the average speed during a TT for any distance between 1200 and 2200 m, with 2000 m being optimal. Performance of a TT may provide a simple alternative to traditional MAS testing.
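The study's practical recommendation reduces to computing average speed over a fixed distance; a trivial sketch:

```python
def predicted_mas(distance_m, tt_seconds):
    """Predict maximal aerobic speed (m/s) as the average speed of a
    set-distance time-trial; 2000 m showed the best agreement with MAS."""
    return distance_m / tt_seconds

# e.g. a hypothetical 2000-m trial completed in 7 min 30 s
speed = predicted_mas(2000.0, 450.0)
```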
Maximizing Social Model Principles in Residential Recovery Settings
Polcin, Douglas; Mericle, Amy; Howell, Jason; Sheridan, Dave; Christensen, Jeff
2014-01-01
Peer support is integral to a variety of approaches to alcohol and drug problems. However, there is limited information about the best ways to facilitate it. The “social model” approach developed in California offers useful suggestions for facilitating peer support in residential recovery settings. Key principles include using 12-step or other mutual-help group strategies to create and facilitate a recovery environment, involving program participants in decision making and facility governance, using personal recovery experience as a way to help others, and emphasizing recovery as an interaction between the individual and their environment. Although limited in number, studies have shown favorable outcomes for social model programs. Knowledge about social model recovery and how to use it to facilitate peer support in residential recovery homes varies among providers. This article presents specific, practical suggestions for enhancing social model principles in ways that facilitate peer support in a range of recovery residences. PMID:25364996
Maddox, W Todd; Bohil, Corey J
2004-01-01
Observers completed perceptual categorization tasks that included 25 base-rate/payoff conditions constructed from the factorial combination of five base-rate ratios (1:3, 1:2, 1:1, 2:1, and 3:1) with five payoff ratios (1:3, 1:2, 1:1, 2:1, and 3:1). This large database allowed an initial comparison of the competition between reward and accuracy maximization (COBRA) hypothesis with a competition between reward maximization and probability matching (COBRM) hypothesis, and an extensive and critical comparison of the flat-maxima hypothesis with the independence assumption of the optimal classifier. Model-based instantiations of the COBRA and COBRM hypotheses provided good accounts of the data, but there was a consistent advantage for the COBRM instantiation early in learning and for the COBRA instantiation later in learning. This pattern held in the present study and in a reanalysis of Bohil and Maddox (2003). Strong support was obtained for the flat-maxima hypothesis over the independence assumption, especially as the observers gained experience with the task. Model parameters indicated that observers' reward-maximizing decision criterion rapidly approaches the optimal value and that more weight is placed on accuracy maximization in separate base-rate/payoff conditions than in simultaneous base-rate/payoff conditions. The superiority of the flat-maxima hypothesis suggests that violations of the independence assumption are to be expected, and are well captured by the flat-maxima hypothesis, with no need for any additional assumptions.
Speeding up Growth: Selection for Mass-Independent Maximal Metabolic Rate Alters Growth Rates.
Downs, Cynthia J; Brown, Jessi L; Wone, Bernard W M; Donovan, Edward R; Hayes, Jack P
2016-03-01
Investigations into relationships between life-history traits, such as growth rate and energy metabolism, typically focus on basal metabolic rate (BMR). In contrast, investigators rarely examine maximal metabolic rate (MMR) as a relevant metric of energy metabolism, even though it indicates the maximal capacity to metabolize energy aerobically, and hence it might also be important in trade-offs. We studied the relationship between energy metabolism and growth in mice (Mus musculus domesticus Linnaeus) selected for high mass-independent metabolic rates. Selection for high mass-independent MMR increased maximal growth rate, increased body mass at 20 weeks of age, and generally altered growth patterns in both male and female mice. In contrast, there was little evidence that the correlated response in mass-adjusted BMR altered growth patterns. The relationship between mass-adjusted MMR and growth rate indicates that MMR is an important mediator of life histories. Studies investigating associations between energy metabolism and life histories should consider MMR because it is potentially as important in understanding life history as BMR.
Evaluation of true maximal oxygen uptake based on a novel set of standardized criteria.
Midgley, Adrian W; Carroll, Sean; Marchant, David; McNaughton, Lars R; Siegler, Jason
2009-04-01
In this study, criteria are used to identify whether a subject has elicited maximal oxygen uptake. We evaluated the validity of traditional maximal oxygen uptake criteria and propose a novel set of criteria. Twenty athletes completed a maximal oxygen uptake test, consisting of an incremental phase and a subsequent supramaximal phase to exhaustion (verification phase). Traditional and novel maximal oxygen uptake criteria were evaluated. The novel criteria were: (1) an oxygen uptake plateau, defined as a difference between modelled and actual maximal oxygen uptake of >50% of the regression slope of the individual oxygen uptake-workrate relationship; (2) as in the first criterion, but for maximal verification oxygen uptake; and (3) a difference of
Influence maximization in social networks under an independent cascade-based model
NASA Astrophysics Data System (ADS)
Wang, Qiyao; Jin, Yuehui; Lin, Zhen; Cheng, Shiduan; Yang, Tan
2016-02-01
The rapid growth of online social networks is important for viral marketing. Influence maximization refers to the process of finding influential users who maximize the spread of information or product adoption. An independent cascade-based model for influence maximization, called IMIC-OC, was proposed to calculate positive influence. We assumed that influential users spread positive opinions. At the beginning, users held positive or negative opinions as their initial opinions. As more users became involved in the discussions, users balanced their own opinions against those of their neighbors. The number of users who did not change their positive opinions was used to determine positive influence. The corresponding influential users who had maximum positive influence were then obtained. Experiments were conducted on three real networks, namely, Facebook, HEP-PH and Epinions, to calculate maximum positive influence based on the IMIC-OC model and two other baseline methods. The proposed model resulted in larger positive influence, thus indicating better performance compared with the baseline methods.
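A Monte Carlo sketch of spread estimation under the basic independent cascade model (the IMIC-OC opinion dynamics are richer than this; the uniform activation probability p is an illustrative parameter):

```python
import random

def cascade_spread(adj, seeds, p, rng):
    """One independent-cascade run: each newly active node gets a single
    chance to activate each inactive neighbour with probability p."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(adj, seeds, p, runs=1000, seed=0):
    """Average cascade size over repeated simulations."""
    rng = random.Random(seed)
    return sum(cascade_spread(adj, seeds, p, rng) for _ in range(runs)) / runs

path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

A greedy seed selector would call expected_spread repeatedly, adding the node with the largest marginal gain at each step.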
State-independent contextuality sets for a qutrit
NASA Astrophysics Data System (ADS)
Xu, Zhen-Peng; Chen, Jing-Ling; Su, Hong-Yi
2015-09-01
We present a generalized set of complex rays for a qutrit in terms of the parameter q = e^{i2π/k}, a kth root of unity. Remarkably, when k = 2, 3, the set reduces to two well-known state-independent contextuality (SIC) sets: the Yu-Oh set and the Bengtsson-Blanchfield-Cabello set. Based on the Ramanathan-Horodecki criterion and the violation of a noncontextuality inequality, we have proven that the sets with k = 3m and k = 4 are SIC sets, while the set with k = 5 is not. Our generalized set of rays will enrich the theoretical study of SIC proofs and stimulate novel applications to quantum information processing.
Maximal lactate steady-state independent of recovery period during intermittent protocol.
Barbosa, Luis F; de Souza, Mariana R; Caritá, Renato A C; Caputo, Fabrizio; Denadai, Benedito S; Greco, Camila C
2011-12-01
Barbosa, LF, de Souza, MR, Corrêa Caritá, RA, Caputo, F, Denadai, BS, and Greco, CC. Maximal lactate steady-state independent of recovery period during intermittent protocol. J Strength Cond Res 25(12): 3385-3390, 2011. The purpose of this study was to analyze the effect of the measurement time for blood lactate concentration ([La]) determination on [La] (maximal lactate steady state [MLSS]) and workload (MLSS workload during intermittent protocols [MLSSwi]) at maximal lactate steady state determined using intermittent protocols. Nineteen trained male cyclists were divided into 2 groups for the determination of MLSSwi using passive (VO2max = 58.1 ± 3.5 ml·kg⁻¹·min⁻¹; N = 9) or active recovery (VO2max = 60.3 ± 9.0 ml·kg⁻¹·min⁻¹; N = 10). They performed the following tests, on different days, on a cycle ergometer: (a) an incremental test until exhaustion to determine VO2max and (b) 30-minute intermittent constant-workload tests (7 × 4 and 1 × 2 minutes, with 2-minute recovery) to determine MLSSwi and MLSS. Each group performed the intermittent tests with passive or active recovery. The MLSSwi was defined as the highest workload at which [La] increased by no more than 1 mmol·L⁻¹ between minutes 10 and 30 (T1) or minutes 14 and 44 (T2) of the protocol. The MLSS (passive: T1: 5.89 ± 1.41 vs. T2: 5.61 ± 1.78 mmol·L⁻¹) and MLSSwi (passive: T1: 294.5 ± 31.8 vs. T2: 294.7 ± 32.2 W; active: T1: 304.6 ± 23.0 vs. T2: 300.5 ± 23.9 W) were similar for both criteria. However, MLSS was lower in T2 (4.91 ± 1.91 mmol·L⁻¹) than in T1 (5.62 ± 1.83 mmol·L⁻¹) using active recovery. We can conclude that MLSSwi (passive and active conditions) was unchanged whether recovery periods were considered (T1) or not (T2) in the interpretation of [La] kinetics. In contrast, MLSS was lower when the active recovery periods were considered (T2). Thus, shorter intermittent protocols (i.e., T1) to determine MLSSwi may optimize the time of the aerobic capacity evaluation of well
Extension of zero-dimensional hyperbolic sets to locally maximal ones
Anosov, Dmitry V
2010-09-02
It is proved that in any neighbourhood of a zero-dimensional hyperbolic set F (hyperbolic sets are assumed to be compact) there is a locally maximal set F₁ containing F. The proof uses several already known or simple results, whose statements are given as separate assertions. The main theorem is compared with known related results, whose statements are also presented. (For example, it is known that the existence of F₁ is not guaranteed for F of positive dimension.) Bibliography: 7 titles.
Existence of independent [1, 2]-sets in caterpillars
NASA Astrophysics Data System (ADS)
Santoso, Eko Budi; Marcelo, Reginaldo M.
2016-02-01
Given a graph G, a subset S ⊆ V(G) is an independent [1, 2]-set if no two vertices in S are adjacent and, for every vertex ν ∈ V(G)∖S, 1 ≤ |N(ν) ∩ S| ≤ 2; that is, every vertex ν ∈ V(G)∖S is adjacent to at least one but not more than two vertices in S. In this paper, we discuss the existence of independent [1, 2]-sets in a family of trees called caterpillars.
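The definition above is straightforward to verify directly. A minimal sketch of a checker, with a hypothetical caterpillar (here the path P4) as input:

```python
def is_independent_12_set(graph, S):
    """Check whether S is an independent [1, 2]-set of graph.

    graph: dict vertex -> set of neighbours (undirected).
    S must be independent (no two members adjacent) and every vertex
    outside S must have 1 or 2 neighbours in S.
    """
    S = set(S)
    # Independence: no member of S may be adjacent to another member.
    if any(graph[u] & S for u in S):
        return False
    # [1,2]-domination: every outside vertex sees 1 or 2 members of S.
    return all(1 <= len(graph[v] & S) <= 2 for v in graph if v not in S)

# A small caterpillar: the path 0-1-2-3.
cat = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(is_independent_12_set(cat, {0, 3}))  # -> True
print(is_independent_12_set(cat, {0, 1}))  # -> False: 0 and 1 are adjacent
```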
Fletcher, Gareth; Eves, Frank F; Glover, Elisa I; Robinson, Scott L; Vernooij, Carlijn A; Thompson, Janice L; Wallis, Gareth A
2017-04-01
Background: Substantial interindividual variability exists in the maximal rate of fat oxidation (MFO) during exercise, with potential implications for metabolic health. Although the diet can affect the metabolic response to exercise, the contribution of a self-selected diet to the interindividual variability in the MFO requires further clarification. Objective: We sought to identify whether recent, self-selected dietary intake independently predicts the MFO in healthy men and women. Design: The MFO and maximal oxygen uptake (V̇O2max) were determined with the use of indirect calorimetry in 305 healthy volunteers [150 men and 155 women; mean ± SD age: 25 ± 6 y; body mass index (BMI; in kg/m²): 23 ± 2]. Dual-energy X-ray absorptiometry was used to assess body composition, with the self-reported physical activity level (SRPAL) and dietary intake determined in the 4 d before exercise testing. To minimize potential confounding with typically observed sex-related differences (e.g., body composition), predictor variables were mean-centered by sex. In the analyses, hierarchical multiple linear regressions were used to quantify each variable's influence on the MFO. Results: The mean absolute MFO was 0.55 ± 0.19 g/min (range: 0.19-1.13 g/min). A total of 44.4% of the interindividual variability in the MFO was explained by the V̇O2max, sex, and SRPAL, with dietary carbohydrate (negative association with the MFO) and fat intake (positive association) associated with an additional 3.2% of the variance. When expressed relative to fat-free mass (FFM), the MFO was 10.8 ± 3.2 mg·kg FFM⁻¹·min⁻¹ (range: 3.5-20.7 mg·kg FFM⁻¹·min⁻¹), with 16.6% of the variability explained by the V̇O2max, sex, and SRPAL; dietary carbohydrate and fat intakes together explained an additional 2.6% of the variability. Biological sex was an independent determinant of the MFO, with women showing a higher MFO [men: 10.3 ± 3
Gauge origin independence in finite basis sets and perturbation theory
NASA Astrophysics Data System (ADS)
Sørensen, Lasse Kragh; Lindh, Roland; Lundberg, Marcus
2017-09-01
We show that origin independence of the oscillator strengths in finite basis sets is possible in any gauge, contrary to what is stated in the literature. This is proved from a discussion of the consequences in perturbation theory when the exact eigenfunctions and eigenvalues of the zeroth-order Hamiltonian H0 cannot be found. We demonstrate that the erroneous conclusion about the lack of gauge-origin independence in the length gauge stems from not transforming the magnetic terms in the multipole expansion, leading to the use of a mixed gauge. Numerical examples of exact origin dependence are shown.
Balance between Noise and Information Flow Maximizes Set Complexity of Network Dynamics
Mäki-Marttunen, Tuomo; Kesseli, Juha; Nykter, Matti
2013-01-01
Boolean networks have been used as a discrete model for several biological systems, including metabolic and genetic regulatory networks. Due to their simplicity they offer a firm foundation for generic studies of physical systems. In this work we show, using a measure of context-dependent information, set complexity, that prior to reaching an attractor, random Boolean networks pass through a transient state characterized by high complexity. We justify this finding with the use of another measure of complexity, namely, the statistical complexity. We show that the networks can be tuned to the regime of maximal complexity by adding a suitable amount of noise to the deterministic Boolean dynamics. In fact, we show that for networks with Poisson degree distributions, all networks ranging from subcritical to slightly supercritical can be tuned with noise to reach maximal set complexity in their dynamics. For networks with a fixed number of inputs this is true for near-to-critical networks. This increase in complexity is obtained at the expense of disruption in information flow. For a large ensemble of networks showing maximal complexity, there exists a balance between noise and contracting dynamics in the state space. In networks that are close to critical, the intrinsic noise required for the tuning is smaller and thus also has the smallest effect in terms of the information processing in the system. Our results suggest that the maximization of complexity near the state transition might be a more general phenomenon in physical systems, and that noise present in a system may in fact be useful in retaining the system in a state with high information content. PMID:23516395
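The noisy synchronous update underlying such models can be sketched in a few lines; the three-node XOR network and the noise parameter below are illustrative assumptions (no complexity measure is computed here):

```python
import random

def step(state, funcs, inputs, noise, rng):
    """One synchronous update of a Boolean network; each node's output is
    flipped independently with probability `noise` (intrinsic noise)."""
    new = []
    for f, ins in zip(funcs, inputs):
        bit = f[tuple(state[j] for j in ins)]
        if rng.random() < noise:
            bit ^= 1  # noisy flip
        new.append(bit)
    return tuple(new)

# Illustrative three-node network: every node XORs the other two.
inputs = [(1, 2), (0, 2), (0, 1)]
xor = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}
funcs = [xor, xor, xor]

rng = random.Random(1)
state = (1, 0, 0)
for _ in range(5):
    state = step(state, funcs, inputs, 0.0, rng)  # noiseless dynamics
print(state)  # -> (0, 1, 1), a fixed point of the noiseless network
```

Raising `noise` above zero keeps the trajectory perturbed away from the attractor, which is the regime the paper tunes to maximize set complexity.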
San Martín, René; Appelbaum, Lawrence G; Pearson, John M; Huettel, Scott A; Woldorff, Marty G
2013-04-17
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals' behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the frontocentral feedback-related negativity (FRN) and two P300 (P3) subcomponents, the frontocentral P3a and the parietal P3b. We found that, across participants, gain maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best gains). Conversely, loss minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain maximization and loss minimization are linked to individual differences in rapid neural responses to monetary outcomes.
Sartor, Francesco; Vernillo, Gianluca; de Morree, Helma M; Bonomi, Alberto G; La Torre, Antonio; Kubis, Hans-Peter; Veicsteinas, Arsenio
2013-09-01
Assessment of the functional capacity of the cardiovascular system is essential in sports medicine. For athletes, the maximal oxygen uptake (VO2max) provides valuable information about their aerobic power. In the clinical setting, the VO2max provides important diagnostic and prognostic information in several clinical populations, such as patients with coronary artery disease or heart failure. Likewise, VO2max assessment can be very important to evaluate fitness in asymptomatic adults. Although direct determination of VO2max is the most accurate method, it requires a maximal level of exertion, which brings a higher risk of adverse events in individuals with an intermediate to high risk of cardiovascular problems. Estimation of VO2max during submaximal exercise testing can offer a valuable alternative. Over the past decades, many protocols have been developed for this purpose. The present review gives an overview of these submaximal protocols and aims to facilitate appropriate test selection in sports, clinical, and home settings. Several factors must be considered when selecting a protocol: (i) the population being tested and its specific needs in terms of safety, supervision, and accuracy and repeatability of the VO2max estimation; (ii) the parameters upon which the prediction is based (e.g., heart rate, power output, rating of perceived exertion [RPE]), as well as the need for additional clinically relevant parameters (e.g., blood pressure, ECG); (iii) the appropriate test modality, which should meet the above-mentioned requirements, be in line with the functional mobility of the target population, and depend on the available equipment. In the sports setting, high repeatability is crucial to track training-induced seasonal changes. In the clinical setting, special attention must be paid to the test modality, because multiple physiological parameters often need to be measured during test execution. When estimating VO2max, one has
PMCR-Miner: parallel maximal confident association rules miner algorithm for microarray data set.
Zakaria, Wael; Kotb, Yasser; Ghaleb, Fayed F M
2015-01-01
The MCR-Miner algorithm is aimed at mining all maximal high-confidence association rules from the microarray up/down-expressed genes data set. This paper introduces two new algorithms: IMCR-Miner and PMCR-Miner. The IMCR-Miner algorithm is an extension of the MCR-Miner algorithm with some improvements. These improvements implement a novel way to store the samples of each gene in a list of unsigned integers in order to benefit from bitwise operations. In addition, the IMCR-Miner algorithm overcomes the drawbacks faced by the MCR-Miner algorithm by setting some restrictions to avoid repeated comparisons. The PMCR-Miner algorithm is a parallel version of the newly proposed IMCR-Miner algorithm. The PMCR-Miner algorithm is based on shared-memory systems and task parallelism, where no time is needed for sharing and combining data between processors. The experimental results on real microarray data sets show that the PMCR-Miner algorithm is more efficient and scalable than its counterparts.
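The bitwise storage idea mentioned above can be illustrated briefly: pack each gene's per-sample up/down indicators into an integer, then count rule support with a single AND. The gene vectors below are invented for illustration and are not from the paper's data:

```python
def to_mask(samples):
    """Pack a list of 0/1 sample indicators into one integer bitmask."""
    mask = 0
    for i, s in enumerate(samples):
        if s:
            mask |= 1 << i
    return mask

# Hypothetical up-regulation indicators for two genes over 8 samples.
gene_a = to_mask([1, 1, 0, 1, 0, 1, 1, 0])
gene_b = to_mask([1, 0, 0, 1, 0, 1, 0, 0])

# Support of the rule A -> B via bitwise AND;
# confidence = joint support / antecedent support.
joint = bin(gene_a & gene_b).count("1")
conf = joint / bin(gene_a).count("1")
print(conf)  # -> 0.6: 3 of the 5 samples with A also have B
```

Counting set bits on machine words is far cheaper than scanning per-sample lists, which is the advantage IMCR-Miner exploits.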
Beyond Maximum Independent Set: An Extended Model for Point-Feature Label Placement
NASA Astrophysics Data System (ADS)
Haunert, Jan-Henrik; Wolff, Alexander
2016-06-01
Map labeling is a classical problem of cartography that has frequently been approached by combinatorial optimization. Given a set of features in the map and for each feature a set of label candidates, a common problem is to select an independent set of labels (that is, a labeling without label-label overlaps) that contains as many labels as possible and at most one label for each feature. To obtain solutions of high cartographic quality, the labels can be weighted and one can maximize the total weight (rather than the number) of the selected labels. We argue, however, that when maximizing the weight of the labeling, interdependences between labels are insufficiently addressed. Furthermore, in a maximum-weight labeling, the labels tend to be densely packed and thus the map background can be occluded too much. We propose extensions of an existing model to overcome these limitations. Since even without our extensions the problem is NP-hard, we cannot hope for an efficient exact algorithm for the problem. Therefore, we present a formalization of our model as an integer linear program (ILP). This allows us to compute optimal solutions in reasonable time, which we demonstrate for randomly generated instances.
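For very small instances, the base selection problem described above (at most one label per feature, no label-label overlaps, maximum total weight) can be brute-forced; a hedged sketch with an invented three-label instance, standing in for the paper's ILP on realistic inputs:

```python
from itertools import combinations

def max_weight_labeling(weights, conflicts, feature_of):
    """Brute-force the best conflict-free label selection with at most one
    label per feature. Exponential time: only viable for tiny instances.

    weights: label -> weight; conflicts: set of frozenset label pairs that
    overlap; feature_of: label -> the map feature it annotates.
    """
    labels = list(weights)
    best, best_w = set(), 0.0
    for r in range(len(labels) + 1):
        for subset in combinations(labels, r):
            feats = [feature_of[l] for l in subset]
            if len(feats) != len(set(feats)):
                continue  # two candidate labels for the same feature
            if any(frozenset(p) in conflicts for p in combinations(subset, 2)):
                continue  # overlapping labels
            w = sum(weights[l] for l in subset)
            if w > best_w:
                best, best_w = set(subset), w
    return best, best_w

# Toy instance (hypothetical): two features A and B, three candidate labels.
weights = {"a1": 2.0, "a2": 1.5, "b1": 1.0}
feature_of = {"a1": "A", "a2": "A", "b1": "B"}
conflicts = {frozenset({"a1", "b1"})}  # a1 overlaps b1
print(max_weight_labeling(weights, conflicts, feature_of))  # -> ({'a2', 'b1'}, 2.5)
```

Here the heavier label a1 loses out because choosing it blocks b1, which is exactly the kind of interdependence the extended model addresses explicitly.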
Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P
2015-04-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question.
Wone, B W M; Madsen, P; Donovan, E R; Labocha, M K; Sears, M W; Downs, C J; Sorensen, D A; Hayes, J P
2015-01-01
Metabolic rates are correlated with many aspects of ecology, but how selection on different aspects of metabolic rates affects their mutual evolution is poorly understood. Using laboratory mice, we artificially selected for high maximal mass-independent metabolic rate (MMR) without direct selection on mass-independent basal metabolic rate (BMR). Then we tested for responses to selection in MMR and correlated responses to selection in BMR. In other lines, we antagonistically selected for mice with a combination of high mass-independent MMR and low mass-independent BMR. All selection protocols and data analyses included body mass as a covariate, so effects of selection on the metabolic rates are mass adjusted (that is, independent of effects of body mass). The selection lasted eight generations. Compared with controls, MMR was significantly higher (11.2%) in lines selected for increased MMR, and BMR was slightly, but not significantly, higher (2.5%). Compared with controls, MMR was significantly higher (5.3%) in antagonistically selected lines, and BMR was slightly, but not significantly, lower (4.2%). Analysis of breeding values revealed no positive genetic trend for elevated BMR in high-MMR lines. A weak positive genetic correlation was detected between MMR and BMR. That weak positive genetic correlation supports the aerobic capacity model for the evolution of endothermy in the sense that it fails to falsify a key model assumption. Overall, the results suggest that at least in these mice there is significant capacity for independent evolution of metabolic traits. Whether that is true in the ancestral animals that evolved endothermy remains an important but unanswered question. PMID:25604947
Linear scaling calculation of maximally localized Wannier functions with atomic basis set.
Xiang, H J; Li, Zhenyu; Liang, W Z; Yang, Jinlong; Hou, J G; Zhu, Qingshi
2006-06-21
We have developed a linear scaling algorithm for calculating maximally localized Wannier functions (MLWFs) using an atomic orbital basis. An O(N) ground-state calculation is carried out to obtain the density matrix (DM). Through a projection of the DM onto atomic orbitals and a subsequent O(N) orthogonalization, we obtain initial orthogonal localized orbitals. These orbitals can be maximally localized in linear scaling by simple Jacobi sweeps. Our O(N) method is validated by applying it to a water molecule and wurtzite ZnO. The linear scaling behavior of the new method is demonstrated by computing the MLWFs of boron nitride nanotubes.
Maximizing Accommodations for Learning Disabled Students in the Regular Classroom Setting.
ERIC Educational Resources Information Center
Ribich, Frank M.; Debenham, Adaria Ruey
For a mainstreaming program to be effective, students should be able to experience success in a regular classroom with the enhanced self-image that accompanies a positive experience. Maximizing classroom accommodations for learning disabled students can help improve these students' access to educational opportunities and increase the probability…
Conformational freedom of proteins and the maximal probability of sets of orientations
NASA Astrophysics Data System (ADS)
Sgheri, Luca
2010-03-01
We study the inverse problem of determining the relative orientations of the moving C- and N-terminal domains in a flexible protein from measurements of its mean magnetic susceptibility tensor χ̄. The latter is an integral average of rotations of the corresponding magnetic susceptibility tensor χ. The largest fraction of time that the two terminals can stay in a given orientation while still producing the χ̄ measurements is the maximal probability of that orientation. We extend this definition to any measurable subset of the rotation group. This extension permits a quantitative assessment of the results when the generating distribution is either continuous or discrete. We establish some properties of the maximal probability and present some numerical experiments.
Maximizing lipocalin prediction through balanced and diversified training set and decision fusion.
Nath, Abhigyan; Subbiah, Karthikeyan
2015-12-01
Lipocalins are short in sequence length and perform several important biological functions. These proteins have less than 20% sequence similarity among paralogs. Identifying them experimentally is an expensive and time-consuming process. The computational methods based on sequence similarity for allocating putative members to this family are also largely ineffective because of the low sequence similarity among the members of this family. Consequently, machine learning methods become a viable alternative for their prediction, using the underlying sequence/structurally derived features as the input. Ideally, any machine learning based prediction method must be trained with all possible variations in the input feature vector (all the sub-class input patterns) to achieve perfect learning. Near-perfect learning can be achieved by training the model with diverse types of input instances belonging to different regions of the entire input space. Furthermore, the prediction performance can be improved by balancing the training set, as imbalanced data sets tend to bias the prediction towards the majority class and its sub-classes. This paper aims to achieve (i) high generalization ability without any classification bias, through diversified and balanced training sets, and (ii) enhanced prediction accuracy, by combining the results of individual classifiers with an appropriate fusion scheme. Instead of creating the training set randomly, we first used the unsupervised K-means clustering algorithm to create diversified clusters of input patterns and created the diversified and balanced training set by selecting an equal number of patterns from each of these clusters. Finally, a probability-based classifier fusion scheme was applied to a boosted random forest algorithm (which produced greater sensitivity) and the K-nearest-neighbour algorithm (which produced greater specificity) to achieve the enhanced predictive performance.
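The cluster-then-balance construction can be sketched with a toy 1-D K-means; this is a stand-in under simplifying assumptions (1-D features, deterministic initialisation), not the authors' implementation:

```python
import random

def kmeans_1d(points, k, iters=20):
    """Tiny 1-D Lloyd's k-means with a deterministic, evenly spaced
    initialisation over the sorted data."""
    centers = sorted(points)[:: max(1, len(points) // k)][:k]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        # Recompute centers; empty clusters keep their old center.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def balanced_sample(clusters, n_per_cluster, seed=1):
    """Draw the same number of training patterns from every cluster."""
    rng = random.Random(seed)
    return [p for c in clusters for p in rng.sample(c, min(n_per_cluster, len(c)))]

# Hypothetical 1-D "feature" values with three obvious groups.
points = [0.1, 0.2, 0.15, 5.0, 5.2, 5.1, 9.8, 10.0, 10.1]
clusters = kmeans_1d(points, 3)
train = balanced_sample(clusters, 2)
print(len(train))  # -> 6: two patterns drawn from each of the three clusters
```

Sampling equally from each cluster yields a training set that covers all regions of the input space without over-representing any one of them.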
Design of irregular screen sets that generate maximally smooth halftone patterns
NASA Astrophysics Data System (ADS)
Jumabayeva, Altyngul; Chen, Yi-Ting; Frank, Tal; Ulichney, Robert; Allebach, Jan
2015-01-01
With the emergence of high-end digital printing technologies, it is of interest to analyze the nature and causes of image graininess in order to understand the factors that prevent high-end digital presses from achieving the same print quality as commercial offset presses. In this paper, we report on a study to understand the relationship between image graininess and halftone technology. With high-end digital printing technology, irregular screens can be considered since they can achieve a better approximation to the screen sets used for commercial offset presses. This is due to the fact that the elements of the periodicity matrix of an irregular screen are rational numbers, rather than integers, which would be the case for a regular screen. To understand how image graininess relates to the halftoning technology, we recently performed a Fourier-based analysis of regular and irregular periodic, clustered-dot halftone textures. From the analytical results, we showed that irregular halftone textures generate new frequency components near the spectrum origin; and that these frequency components are low enough to be visible to the human viewer, and to be perceived as a lack of smoothness. In this paper, given a set of target irrational screen periodicity matrices, we describe a process, based on this Fourier analysis, for finding the best realizable screen set. We demonstrate the efficacy of our method with a number of experimental results.
Takahashi, Kei-ichiro; Takigawa, Ichigaku; Mamitsuka, Hiroshi
2013-01-01
Detecting biclusters from expression data is useful, since biclusters are sets of genes coexpressed under only part of all given experimental conditions. We present a software tool called SiBIC, which, from a given expression dataset, first exhaustively enumerates biclusters, then merges them into largely independent biclusters, and finally uses these to generate gene set networks, in which the gene set assigned to a node contains coexpressed genes. We evaluated each step of this procedure: 1) the biological and statistical significance of the generated biclusters, 2) the biological quality of the merged biclusters, and 3) the biological significance of the gene set networks. We emphasize that gene set networks, in which nodes are not genes but gene sets, can be more compact than usual gene networks, meaning that gene set networks are more comprehensible. SiBIC is available at http://utrecht.kuicr.kyoto-u.ac.jp:8080/miami/faces/index.jsp.
Kim, Yongbok; Trombetta, Mark G.
2011-04-15
Purpose: An objective method was proposed and compared with a manual selection method to determine planner-independent skin and rib maximal dose in balloon-based high dose rate (HDR) brachytherapy planning. Methods: The maximal dose to skin and rib was objectively extracted from a dose volume histogram (DVH) of skin and rib volumes. A virtual skin volume was produced by expanding the skin surface in three dimensions (3D) external to the breast with a certain thickness in the planning computed tomography (CT) images. Therefore, the maximal dose to this volume occurs on the skin surface, the same as with a conventional manual selection method. The rib was also delineated in the planning CT images and its maximal dose was extracted from its DVH. The absolute (Abdiff = |Dmax(Man) − Dmax(DVH)|) and relative (Rediff[%] = 100 × |Dmax(Man) − Dmax(DVH)| / Dmax(DVH)) maximal skin and rib dose differences between the manual selection method (Dmax(Man)) and the objective method (Dmax(DVH)) were measured for 50 balloon-based HDR (25 MammoSite and 25 Contura) patients. Results: The average ± standard deviation of the maximal dose difference was 1.67% ± 1.69% of the prescribed dose (PD). No statistical difference was observed between MammoSite and Contura patients for either the Abdiff or the Rediff[%] values. However, a statistically significant difference (p value < 0.0001) was observed in the maximal rib dose difference compared with the maximal skin dose difference for both Abdiff (2.30% ± 1.71% vs. 1.05% ± 1.43%) and Rediff[%] (2.32% ± 1.79% vs. 1.21% ± 1.41%). In general, the rib has a more irregular contour and is located more proximally to the balloon for the 50 HDR patients. Due to the inverse square law factor, a larger dose difference was observed in the higher dose range (Dmax > 90%) compared with the lower dose range (Dmax < 90%): 2.16% ± 1.93% vs. 1.19% ± 1.25%, with a p value of 0.0049. However, the Rediff[%] analysis eliminated the
Grignon, Jessica S; Ledikwe, Jenny H; Makati, Ditsapelo; Nyangah, Robert; Sento, Baraedi W; Semo, Bazghina-Werq
2014-01-01
To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs.
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
San Martín, René; Appelbaum, Lawrence G.; Pearson, John M.; Huettel, Scott A.; Woldorff, Marty G.
2013-01-01
Success in many decision-making scenarios depends on the ability to maximize gains and minimize losses. Even if an agent knows which cues lead to gains and which lead to losses, that agent could still make choices yielding suboptimal rewards. Here, by analyzing event-related potentials (ERPs) recorded in humans during a probabilistic gambling task, we show that individuals’ behavioral tendencies to maximize gains and to minimize losses are associated with their ERP responses to the receipt of those gains and losses, respectively. We focused our analyses on ERP signals that predict behavioral adjustment: the fronto-central feedback-related negativity (FRN) and two P300 (P3) subcomponents: the fronto-central P3a and the parietal P3b. We found that, across participants, gain-maximization was predicted by differences in amplitude of the P3b for suboptimal versus optimal gains (i.e., P3b amplitude difference between the least good and the best possible gains). Conversely, loss-minimization was predicted by differences in the P3b amplitude to suboptimal versus optimal losses (i.e., difference between the worst and the least bad losses). Finally, we observed that the P3a and P3b, but not the FRN, predicted behavioral adjustment on subsequent trials, suggesting a specific adaptive mechanism by which prior experience may alter ensuing behavior. These findings indicate that individual differences in gain-maximization and loss-minimization are linked to individual differences in rapid neural responses to monetary outcomes. PMID:23595758
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation: it simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
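The augmentation trick described in the abstract can be illustrated with ordinary least squares. The sketch below is a toy linear model, not the actual iterative balance-calibration code: with two gage outputs but three independent variables (two loads plus temperature), temperature is reused as a third, trivial dependent variable so the counts of independents and dependents match. All dimensions and coefficients are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy calibration set: 2 loads (N1, N2) plus temperature T as independent
# variables, but only 2 measured gage outputs (names are illustrative).
m = 100
loads = rng.uniform(-1.0, 1.0, size=(m, 2))
T = rng.uniform(20.0, 40.0, size=(m, 1))
X = np.hstack([loads, T])            # 3 independent variables

true_C = np.array([[1.0, 0.2, 0.01],
                   [0.1, 0.9, -0.02]])
outputs = X @ true_C.T + 1e-4 * rng.normal(size=(m, 2))  # 2 gage outputs

# 3 independents vs 2 dependents: a square iteration scheme cannot be
# formed.  Following the idea in the abstract, reuse temperature as an
# additional (trivial) dependent variable so the counts match.
Y_aug = np.hstack([outputs, T])      # now 3 dependents

# Fit each dependent as a linear function of all independents.
A = np.hstack([X, np.ones((m, 1))])  # add intercept column
coef, *_ = np.linalg.lstsq(A, Y_aug, rcond=None)

# The temperature "equation" recovers T exactly (coefficient 1 on T and 0
# elsewhere), so augmenting does not change the fitted gage-output models.
print(np.round(coef[:, 2], 3))       # column for the added dependent
```

Because the added dependent variable is fitted perfectly by itself, the regression coefficients for the real gage outputs are untouched, which is what lets the square data-reduction matrix be assembled without distorting the calibration.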
NASA Astrophysics Data System (ADS)
Takahashi, Jun; Takabe, Satoshi; Hukushima, Koji
2017-07-01
A recently proposed exact algorithm for the maximum independent set problem is analyzed. The typical running time is improved exponentially in some parameter regions compared to simple binary search. Furthermore, the algorithm overcomes the core transition point, where the conventional leaf removal algorithm fails, and works up to the replica symmetry breaking (RSB) transition point. This suggests that a leaf removal core itself is not enough for typical hardness in the random maximum independent set problem, providing further evidence for RSB being the obstacle for algorithms in general.
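The leaf-removal procedure referred to above has a standard textbook formulation, sketched here; this is the generic heuristic, not the specific exact algorithm analyzed in the paper. It repeatedly places a degree-1 vertex in the independent set and deletes it together with its neighbor, stalling exactly when a "core" (a subgraph of minimum degree at least 2) remains.

```python
def leaf_removal_mis(adj):
    """Leaf removal (generic textbook form, shown as a sketch):
    repeatedly put a degree-1 vertex into the independent set and delete
    it together with its unique neighbor."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    indep = set()
    while True:
        leaf = next((v for v, ns in adj.items() if len(ns) == 1), None)
        if leaf is None:
            break
        indep.add(leaf)
        nbr = next(iter(adj[leaf]))
        for u in (leaf, nbr):                    # delete leaf and neighbor
            for w in adj.pop(u):
                adj.get(w, set()).discard(u)
    # Isolated leftovers can always join; a leftover core with edges is
    # where this heuristic (and typical-case tractability) stops.
    indep |= {v for v, ns in adj.items() if not ns}
    return indep

# Path 0-1-2-3: leaf removal finds a maximum independent set, {0, 2}.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(leaf_removal_mis(path))
```

On graphs whose core is empty the procedure is optimal, which is why the emergence of a leaf-removal core has been taken as a marker of hardness; the abstract's point is that the core alone does not explain typical hardness, since the analyzed algorithm keeps working past the core transition up to the RSB transition.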
Maximally nonlocal theories cannot be maximally random.
de la Torre, Gonzalo; Hoban, Matty J; Dhara, Chirag; Prettico, Giuseppe; Acín, Antonio
2015-04-24
Correlations that violate a Bell inequality are said to be nonlocal; i.e., they do not admit a local and deterministic explanation. Great effort has been devoted to studying how the amount of nonlocality (as measured by a Bell inequality violation) serves to quantify the amount of randomness present in observed correlations. In this work we reverse this research program and ask what the randomness certification capabilities of a theory tell us about the nonlocality of that theory. We find that, contrary to initial intuition, maximal randomness certification cannot occur in maximally nonlocal theories. We go on to show that quantum theory, in contrast, permits certification of maximal randomness in all dichotomic scenarios. We hence pose the question of whether quantum theory is optimal for randomness; i.e., is it the most nonlocal theory that allows maximal randomness certification? We answer this question in the negative by identifying a larger-than-quantum set of correlations capable of this feat. Not only are these results relevant to understanding quantum mechanics' fundamental features, but they also put fundamental restrictions on device-independent protocols based on the no-signaling principle.
Validation of outlier loci through replication in independent data sets: a test on Arabis alpina.
Buehler, Dominique; Holderegger, Rolf; Brodbeck, Sabine; Schnyder, Elvira; Gugerli, Felix
2014-11-01
Outlier detection and environmental association analysis are common methods to search for loci or genomic regions exhibiting signals of adaptation to environmental factors. However, a validation of outlier loci and corresponding allele distribution models through functional molecular biology or transplant/common garden experiments is rarely carried out. Here, we employ another method for validation, namely testing outlier loci in specifically designed, independent data sets. Previously, an outlier locus associated with three different habitat types had been detected in Arabis alpina. For the independent validation data set, we sampled 30 populations occurring in these three habitat types across five biogeographic regions of the Swiss Alps. The allele distribution model found in the original study could not be validated in the independent test data set: The outlier locus was no longer indicative of habitat-mediated selection. We propose several potential causes of this failure of validation, of which unaccounted genetic structure and technical issues in the original data set used to detect the outlier locus were most probable. Thus, our study shows that validating outlier loci and allele distribution models in independent data sets is a helpful tool in ecological genomics which, in the case of positive validation, adds confidence to outlier loci and their association with environmental factors or, in the case of failure of validation, helps to explain inconsistencies.
Pal, Karoly F.; Vertesi, Tamas
2010-08-15
The I3322 inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I3322 inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing the quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I3322 inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
NASA Technical Reports Server (NTRS)
Howell, Leonard W., Jr.; Six, N. Frank (Technical Monitor)
2002-01-01
The Maximum Likelihood (ML) statistical theory required to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments is developed in this paper. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral information based on the combination of data sets. The procedure is of significant value to both existing data sets and those to be produced by future astrophysics missions consisting of two or more detectors, by allowing instrument developers to optimize each detector's design parameters through simulation studies in order to design and build complementary detectors that will maximize the precision with which the science objectives may be obtained. The benefits of this ML theory and its application are measured in terms of the reduction of the statistical errors (standard deviations) of the spectra information when the multiple data sets are used in concert, as compared to the statistical errors of the spectra information when the data sets are considered separately, as well as any biases resulting from poor statistics in one or more of the individual data sets that might be reduced when the data sets are combined.
Scope of physician procedures independently billed by mid-level providers in the office setting.
Coldiron, Brett; Ratnarathorn, Mondhipa
2014-11-01
Mid-level providers (nurse practitioners and physician assistants) were originally envisioned to provide primary care services in underserved areas. This study details the current scope of independent procedural billing to Medicare of difficult, invasive, and surgical procedures by medical mid-level providers. To understand the scope of independent billing to Medicare for procedures performed by mid-level providers in an outpatient office setting for a calendar year. Analyses of the 2012 Medicare Physician/Supplier Procedure Summary Master File, which reflects fee-for-service claims that were paid by Medicare, for Current Procedural Terminology procedures independently billed by mid-level providers. Outpatient office setting among health care providers. The scope of independent billing to Medicare for procedures performed by mid-level providers. In 2012, nurse practitioners and physician assistants billed independently for more than 4 million procedures at our cutoff of 5000 paid claims per procedure. Most (54.8%) of these procedures were performed in the specialty area of dermatology. The findings of this study are relevant to the safety and quality of care. Recently, the shortage of primary care clinicians has prompted discussion of widening the scope of practice for mid-level providers. It would be prudent to temper this widening by recognizing that mid-level providers are not limited to primary care and may perform procedures for which they have no formal training.
Holocene sea level variations on the basis of integration of independent data sets
Sahagian, D.; Berkman, P. (Dept. of Geological Sciences and Byrd Polar Research Center)
1992-01-01
Variations in sea level through earth history have occurred at a wide variety of time scales. Sea level researchers have attacked the problem of measuring these sea level changes through a variety of approaches, each relevant only to the time scale in question, and usually only to the specific locality from which a specific type of data is derived. There is a plethora of different data types that can be and have been used (locally) for the measurement of Holocene sea level variations. The problem of merging different data sets for the purpose of constructing a global eustatic sea level curve for the Holocene has not previously been adequately addressed; the authors direct their efforts to that end. Numerous studies have been published regarding Holocene sea level changes. These have involved exposed fossil reef elevations, elevations of tidal deltas, elevations of intertidal peat deposits, caves, tree rings, ice cores, moraines, eolian dune ridges, marine-cut terrace elevations, marine carbonate species, tide gauges, and lake level variations. Each of these data sets is based on a particular set of assumptions and is valid for a specific set of environments. In order to obtain the most accurate possible sea level curve for the Holocene, these data sets must be merged so that local and other influences can be filtered out of each data set. Since each data set involves very different measurements, each is scaled in order to define the sensitivity of the proxy measurement parameter to sea level, including error bounds. This effectively determines the temporal and spatial resolution of each data set. The level of independence of the data sets is also quantified, in order to rule out the possibility of a common non-eustatic factor affecting more than one variety of data. The resulting Holocene sea level curve is considered to be independent of other factors affecting the proxy data, and is taken to represent the relation between global ocean water and basin volumes.
ERIC Educational Resources Information Center
Stephenson, Margaret E.
2000-01-01
Discusses the four planes of development and the periods of creation and crystallization within each plane. Identifies the type of independence that should be achieved by the end of the first two planes of development. Maintains that it is through individual work on the environment that one achieves independence. (KB)
Parallel group independent component analysis for massive fMRI data sets
Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H.; Pekar, James J.; Lindquist, Martin A.; Eloyan, Ani; Caffo, Brian S.
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively. PMID:28278208
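The two-stage structure described above can be sketched with off-the-shelf tools. The toy below is a schematic stand-in for PGICA, not the authors' R implementation: stage one reduces each subject independently with PCA (the step that parallelizes across subjects), and stage two runs a single ICA on the concatenated reductions. The data, dimensions, and the use of scikit-learn are all illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(2)

# Toy stand-in for rs-fMRI: 4 "subjects", each a (time x voxel) matrix
# mixing the same two non-Gaussian spatial sources (sizes illustrative).
t, v, k = 120, 300, 2
sources = rng.laplace(size=(k, v))
subjects = [rng.normal(size=(t, k)) @ sources + 0.1 * rng.normal(size=(t, v))
            for _ in range(4)]

# Stage 1 -- embarrassingly parallel across subjects: reduce each subject
# to its top-k principal directions in voxel space.
reduced = [PCA(n_components=k).fit(s).components_ for s in subjects]

# Stage 2: one group ICA on the stacked per-subject reductions.
group = np.vstack(reduced)                       # shape (4*k, v)
ica = FastICA(n_components=k, random_state=0)
maps = ica.fit_transform(group.T).T              # k estimated spatial maps

# Each recovered map should match one true source up to sign and scale.
corr = np.abs(np.corrcoef(np.vstack([maps, sources]))[:k, k:])
print(np.round(corr.max(axis=1), 2))
```

The design choice that makes this scale is that stage one touches each subject's data exactly once and in isolation, so only the small reduced matrices ever need to be gathered on one machine for the group decomposition.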
GreedyMAX-type Algorithms for the Maximum Independent Set Problem
NASA Astrophysics Data System (ADS)
Borowiecki, Piotr; Göring, Frank
The maximum independent set problem for a simple graph G = (V, E) is to find the largest subset of pairwise nonadjacent vertices. The problem is known to be NP-hard and it is also hard to approximate. Within this article we introduce a non-negative integer-valued function p defined on the vertex set V(G), called a potential function of the graph G, while P(G) = max_{v ∈ V(G)} p(v) is called the potential of G. For any graph P(G) ≤ Δ(G), where Δ(G) is the maximum degree of G; moreover, Δ(G) - P(G) may be arbitrarily large. The potential of a vertex gives a closer insight into the properties of its neighborhood, which leads to the definition of the family of GreedyMAX-type algorithms, having the classical GreedyMAX algorithm as their origin. We establish a lower bound of 1/(P(G) + 1) for the performance ratio of GreedyMAX-type algorithms, which compares favorably with the bound 1/(Δ(G) + 1) known to hold for GreedyMAX. The cardinality of an independent set generated by any GreedyMAX-type algorithm is at least Σ_{v ∈ V(G)} 1/(p(v) + 1), which strengthens the bounds of Turán and Caro-Wei stated in terms of vertex degrees.
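For concreteness, here is the classical GreedyMAX heuristic that the article generalizes, in one common formulation (this sketch is mine, not the authors' code): repeatedly delete a vertex of maximum degree until no edges remain; the surviving vertices form an independent set. The Caro-Wei quantity Σ 1/(d(v)+1) is the classical degree-based lower bound mentioned at the end of the abstract.

```python
def greedy_max_independent_set(adj):
    """GreedyMAX heuristic (one common formulation, used as a sketch):
    repeatedly delete a vertex of maximum degree; the vertices that
    survive form an independent set."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    while any(adj.values()):                     # while edges remain
        v = max(adj, key=lambda u: len(adj[u]))  # vertex of max degree
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return set(adj)

def caro_wei_bound(adj):
    # Classical degree-based lower bound on the independence number.
    return sum(1.0 / (len(ns) + 1) for ns in adj.values())

# 5-cycle example: independence number is 2; Caro-Wei gives 5/3.
c5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
S = greedy_max_independent_set(c5)
assert all(u not in c5[v] for u in S for v in S if u != v)  # independence
print(S, caro_wei_bound(c5))
```

A GreedyMAX-type variant in the article's sense would break degree ties (and choose deletion order) using the potential p(v) instead of the raw degree, which is how the stronger Σ 1/(p(v)+1) guarantee arises.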
Generalizations of the subject-independent feature set for music-induced emotion recognition.
Lin, Yuan-Pin; Chen, Jyh-Horng; Duann, Jeng-Ren; Lin, Chin-Teng; Jung, Tzyy-Ping
2011-01-01
Electroencephalogram (EEG)-based emotion recognition has been an intensely growing field. Yet how to achieve acceptable accuracy in a practical system with as few electrodes as possible has received less attention. This study evaluates a set of subject-independent features, based on differential power asymmetry of symmetric electrode pairs [1], with emphasis on its applicability to subject variability in the music-induced emotion classification problem. The results of this study validate the feasibility of using subject-independent EEG features to classify four emotional states with acceptable accuracy at second-scale temporal resolution. These features could be generalized across subjects to detect emotion induced by music excerpts not limited to the music database that was used to derive the emotion-specific features.
ERIC Educational Resources Information Center
Trapp, Georgina; Giles-Corti, Billie; Martin, Karen; Timperio, Anna; Villanueva, Karen
2012-01-01
Background: Schools are an ideal setting in which to involve children in research. Yet for investigators wishing to work in these settings, there are few method papers providing insights into working efficiently in this setting. Objective: The aim of this paper is to describe the five strategies used to increase response rates, data quality and…
InfVis--platform-independent visual data mining of multidimensional chemical data sets.
Oellien, Frank; Ihlenfeldt, Wolf-Dietrich; Gasteiger, Johann
2005-01-01
The tremendous increase of chemical data sets, both in size and number, and the simultaneous desire to speed up the drug discovery process have resulted in an increasing need for a new generation of computational tools that assist in the extraction of information from data and allow for rapid and in-depth data mining. During recent years, visual data mining has become an important tool within the life sciences and drug discovery area, with the potential to keep data analysis from turning into a bottleneck. In this paper, we present InfVis, a platform-independent visual data mining tool for the visualization, exploration, and analysis of multivariate data sets, aimed at chemists, who usually have little experience with classical data mining tools. InfVis represents multidimensional data sets by using intuitive 3D glyph information visualization techniques. Interactive and dynamic tools such as dynamic query devices allow real-time, interactive data set manipulations and support the user in the identification of relationships and patterns. InfVis has been implemented in Java and Java3D and can be run on a broad range of platforms and operating systems. It can also be embedded as an applet in Web-based interfaces. We present examples detailing the analysis of a reaction database that demonstrate how InfVis assists chemists in identifying and extracting hidden information.
Scudese, Estevão; Willardson, Jeffrey M; Simão, Roberto; Senna, Gilmar; de Salles, Belmiro F; Miranda, Humberto
2015-11-01
The purpose of this study was to compare different rest intervals between sets on repetition consistency and ratings of perceived exertion (RPE) during consecutive bench press sets with an absolute 3RM (3 repetition maximum) load. Sixteen trained men (23.75 ± 4.21 years; 74.63 ± 5.36 kg; 175 ± 4.64 cm; bench press relative strength: 1.44 ± 0.19 kg/kg of body mass) attended 4 randomly ordered sessions during which 5 consecutive sets of the bench press were performed with an absolute 3RM load and 1, 2, 3, or 5 minutes of rest interval between sets. The results indicated that significantly greater bench press repetitions were completed with 2, 3, and 5 minutes vs. 1-minute rest between sets (p ≤ 0.05); no significant differences were noted between the 2, 3, and 5 minutes rest conditions. For the 1-minute rest condition, performance reductions (relative to the first set) were observed commencing with the second set; whereas for the other conditions (2, 3, and 5 minutes rest), performance reductions were not evident until the third and fourth sets. The RPE values before each of the successive sets were significantly greater, commencing with the second set for the 1-minute vs. the 3 and 5 minutes rest conditions. Significant increases were also evident in RPE immediately after each set between the 1 and 5 minutes rest conditions from the second through fifth sets. These findings indicate that when utilizing an absolute 3RM load for the bench press, practitioners may prescribe a time-efficient minimum of 2 minutes rest between sets without significant impairments in repetition performance. However, lower perceived exertion levels may necessitate prescription of a minimum of 3 minutes rest between sets.
Barbosa, Valmir C.
2010-01-01
Background Given an undirected graph, we consider the two problems of combinatorial optimization, which ask that its chromatic and independence numbers be found. Although both problems are NP-hard, when either one is solved on the incrementally denser graphs of a random sequence, at certain critical values of the number of edges, it happens that the transition to the next value causes optimal solutions to be obtainable substantially more easily than right before it. Methodology/Principal Findings We introduce the notion of a network's conduciveness, a probabilistically interpretable measure of how the network's structure allows it to be conducive to roaming agents, in certain conditions, from one portion of the network to another. We demonstrate that the performance jumps of graph coloring and independent sets at the critical-value transitions in the number of edges can be understood by resorting to the network that represents the solution space of the problems for each graph and examining its conduciveness between the non-optimal solutions and the optimal ones. Right past each transition, this network becomes strikingly more conducive in the direction of the optimal solutions than it was just before it, while at the same time becoming less conducive in the opposite direction. Conclusions/Significance Network conduciveness provides a useful conceptual framework for explaining the performance jumps associated with graph coloring and independent sets. We believe it may also become instrumental in helping clarify further issues related to NP-hardness that remain poorly understood. Additionally, it may become useful also in other areas in which network theory has a role to play. PMID:20628597
1983-05-15
which no two modules test each other, and the number of faulty modules is small. In this paper, we show that the implied faulty sets of one-step … The set of test outcomes, i.e., an outcome a_ij for each (i, j) in T, is called a syndrome. The diagnosis problem consists in partitioning S into the set G of non-faulty
EEG-based recognition of video-induced emotions: selecting subject-independent feature set.
Kortelainen, Jukka; Seppänen, Tapio
2013-01-01
Emotions are fundamental to everyday life, affecting our communication, learning, perception, and decision making. Including emotions in human-computer interaction (HCI) could be seen as a significant step forward, offering great potential for developing advanced future technologies. Because the electrical activity of the brain is affected by emotions, the electroencephalogram (EEG) offers an interesting channel for improving HCI. In this paper, the selection of a subject-independent feature set for EEG-based emotion recognition is studied. We investigate the effect of different feature sets in classifying a person's arousal and valence while watching videos with emotional content. The classification performance is optimized by applying a sequential forward floating search algorithm for feature selection. The best classification rate (65.1% for arousal and 63.0% for valence) is obtained with a feature set containing power spectral features from the frequency band of 1-32 Hz. The proposed approach substantially improves the classification rate reported in the literature. In the future, further analysis of the video-induced EEG changes, including the topographical differences in the spectral features, is needed.
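The kind of feature both EEG abstracts build on, band-limited power spectral features and their asymmetry across a symmetric electrode pair, can be sketched as follows. This toy uses synthetic signals; the sampling rate, band edges, and the difference-asymmetry construction are illustrative conventions, not the exact feature set of either study.

```python
import numpy as np
from scipy.signal import welch

fs = 128                      # sampling rate in Hz (illustrative)
rng = np.random.default_rng(3)

# Two symmetric "electrodes" of fake EEG, 4 s each; a 10 Hz (alpha)
# rhythm is stronger on the left channel in this toy example.
t = np.arange(4 * fs) / fs
left = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)
right = 0.3 * np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 32)}

def band_powers(x):
    # Welch PSD with 1 Hz resolution, summed within each band.
    f, pxx = welch(x, fs=fs, nperseg=fs)
    return {name: pxx[(f >= lo) & (f < hi)].sum()
            for name, (lo, hi) in bands.items()}

bp_l, bp_r = band_powers(left), band_powers(right)
# Differential asymmetry feature for the pair (one common construction).
dasm_alpha = bp_l["alpha"] - bp_r["alpha"]
print(round(dasm_alpha, 3))
```

Computing such band powers per channel (or per symmetric pair) and concatenating them is what yields the 1-32 Hz spectral feature vectors that the feature-selection procedures in these studies operate on.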
NASA Astrophysics Data System (ADS)
Hebenstreit, M.; Spee, C.; Kraus, B.
2016-01-01
Entanglement is the resource to overcome the restriction of operations to local operations assisted by classical communication (LOCC). The maximally entangled set (MES) of states is the minimal set of n -partite pure states with the property that any truly n -partite entangled pure state can be obtained deterministically via LOCC from some state in this set. Hence, this set contains the most useful states for applications. In this work, we characterize the MES for generic three-qutrit states. Moreover, we analyze which generic three-qutrit states are reachable (and convertible) under LOCC transformations. To this end, we study reachability via separable operations (SEP), a class of operations that is strictly larger than LOCC. Interestingly, we identify a family of pure states that can be obtained deterministically via SEP but not via LOCC. This gives an affirmative answer to the question of whether there is a difference between SEP and LOCC for transformations among pure states.
Lunt, Helen; Draper, Nick; Marshall, Helen C.; Logan, Florence J.; Hamlin, Michael J.; Shearman, Jeremy P.; Cotter, James D.; Kimber, Nicholas E.; Blackwell, Gavin; Frampton, Christopher M. A.
2014-01-01
Background In research clinic settings, overweight adults undertaking HIIT (high-intensity interval training) improve their fitness as effectively as those undertaking conventional walking programs but can do so within a shorter time spent exercising. We undertook a randomized controlled feasibility (pilot) study aimed at extending HIIT into a real-world setting by recruiting overweight/obese, inactive adults into a group-based activity program, held in a community park. Methods Participants were allocated into one of three groups. The two interventions, aerobic interval training and maximal volitional interval training, were compared with an active control group undertaking walking-based exercise. Supervised group sessions (36 per intervention) were held outdoors. Cardiorespiratory fitness was measured using VO2max (maximal oxygen uptake, results expressed in ml/min/kg), before and after the 12-week interventions. Results On ITT (intention-to-treat) analyses, baseline (N = 49) and exit (N = 39) VO2max was 25.3±4.5 and 25.3±3.9, respectively. Participant allocation and baseline/exit VO2max by group was as follows: aerobic interval training N = 16, 24.2±4.8/25.6±4.8; maximal volitional interval training N = 16, 25.0±2.8/25.2±3.4; walking N = 17, 26.5±5.3/25.2±3.6. The post-intervention change in VO2max was +1.01 in the aerobic interval training, −0.06 in the maximal volitional interval training and −1.03 in the walking subgroups. The aerobic interval training subgroup increased VO2max compared to walking (p = 0.03). The actual (observed, rather than prescribed) time spent exercising (minutes per week, ITT analysis) was 74 for aerobic interval training, 45 for maximal volitional interval training and 116 for walking (p = 0.001). On descriptive analysis, the walking subgroup had the fewest adverse events. Conclusions In contrast to earlier studies, the improvement in cardiorespiratory fitness in a cohort of overweight
ERIC Educational Resources Information Center
Hume, Kara; Plavnick, Joshua; Odom, Samuel L.
2012-01-01
Strategies that promote the independent demonstration of skills across educational settings are critical for improving the accessibility of general education settings for students with ASD. This research assessed the impact of an individual work system on the accuracy of task completion and level of adult prompting across educational settings.…
A Method Defining a Limited Set of Character-Strings with Maximal Coverage of a Sample of Text.
ERIC Educational Resources Information Center
Hultgren, Jan; Larsson, Rolf
This is a progress report on a project attempting to design a system of compacting text for storage appropriate to disc oriented demand searching. After noting a number of previously designed methods of compression, it offers a tentative solution which couples a dictionary of most frequent character-strings with a set of variable-length codes. The…
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N; Kazerooni, Ella A
2009-08-01
The authors are developing a computer-aided detection system for pulmonary emboli (PE) in computed tomographic pulmonary angiography (CTPA) scans. The pulmonary vessel tree is extracted using a 3D expectation-maximization segmentation method based on the analysis of eigenvalues of Hessian matrices at multiple scales. A parallel multiprescreening method is applied to the segmented vessels to identify volumes of interest (VOIs) that contain suspicious PE. A linear discriminant analysis (LDA) classifier with feature selection is designed to reduce false positives (FPs). Features that characterize the contrast, gray level, and size of PE are extracted as input predictor variables to the LDA classifier. With IRB approval, 59 CTPA PE cases were collected retrospectively from the patient files (UM cases). With access permission, 69 CTPA PE cases were randomly selected from the data set of the prospective investigation of pulmonary embolism diagnosis (PIOPED) II clinical trial. Extensive lung parenchymal or pleural diseases were present in 22/59 UM and 26/69 PIOPED cases. Experienced thoracic radiologists manually marked 595 and 800 PE as the reference standards in the UM and PIOPED data sets, respectively. PE occlusion of arteries ranged from 5% to 100%, with PE located from the main pulmonary artery to the subsegmental artery levels. Of the 595 PE identified in the UM cases, 245 and 350 PE were located in the subsegmental arteries and the more proximal arteries, respectively. The detection performance was assessed by free-response ROC (FROC) analysis. The FROC analysis indicated that the PE detection system could achieve an overall sensitivity of 80% at 18.9 FPs/case for the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases was 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases. The detection performance depended on the arterial level where the PE was located and on the
Independent validation of the Pain Management Plan in a multi-disciplinary pain team setting
Quinlan, Joanna; Hughes, Richard; Laird, David
2016-01-01
Context/background: The Pain Management Plan (PP) is a brief cognitive behavioural therapy (CBT) self-management programme for people living with persistent pain that can be individually facilitated or provided in a group setting. Evidence of PP efficacy has been reported previously by the pain centres involved in its development. Objectives: To provide a fully independent evaluation of the PP and compare the findings with those reported by Cole et al. Methods: The PP programme was delivered by the County Durham Pain Team (Co. Durham PT) as outlined in training sessions led by Cole et al. Pre- and post-programme quantitative and patient-experience measures were repeated, with reliable and clinically significant change determined and compared to the original evaluation. Results: Of the 69 participants who completed the programme, 33% achieved reliable change and 20% clinically significant change on the Pain Self-Efficacy Questionnaire (PSEQ). Across the Brief Pain Inventory (BPI) interference domains, between 11% and 22% of participants achieved clinically significant change. There were high levels of positive patient feedback, with 25% of participants scoring 100% satisfaction. The mean participant satisfaction across the population was 88%. Conclusion: The results from this evaluation validate those reported by Cole et al. The programme demonstrates clinically significant improvement in pain and health functioning and high patient appreciation. Both evaluations emphasise the potential of this programme as an early intervention delivered within a stratified care pain pathway. This approach could optimise the use of finite resources and improve wider access to pain management. PMID:27867506
Unextendible maximally entangled bases
Bravyi, Sergei; Smolin, John A.
2011-10-15
We introduce the notion of the unextendible maximally entangled basis (UMEB), a set of orthonormal maximally entangled states in C^d ⊗ C^d consisting of fewer than d^2 vectors which have no additional maximally entangled vectors orthogonal to all of them. We prove that UMEBs do not exist for d=2 and give explicit constructions for a six-member UMEB with d=3 and a 12-member UMEB with d=4.
Rincent, R.; Laloë, D.; Nicolas, S.; Altmann, T.; Brunel, D.; Revilla, P.; Rodríguez, V.M.; Moreno-Gonzalez, J.; Melchinger, A.; Bauer, E.; Schoen, C-C.; Meyer, N.; Giauffret, C.; Bauland, C.; Jamin, P.; Laborde, J.; Monod, H.; Flament, P.; Charcosset, A.; Moreau, L.
2012-01-01
Genomic selection refers to the use of genotypic information for predicting breeding values of selection candidates. A prediction formula is calibrated with the genotypes and phenotypes of reference individuals constituting the calibration set. The size and the composition of this set are essential parameters affecting the prediction reliabilities. The objective of this study was to maximize reliabilities by optimizing the calibration set. Different criteria based on the diversity or on the prediction error variance (PEV) derived from the realized additive relationship matrix–best linear unbiased predictions model (RA–BLUP) were used to select the reference individuals. For the latter, we considered the mean of the PEV of the contrasts between each selection candidate and the mean of the population (PEVmean) and the mean of the expected reliabilities of the same contrasts (CDmean). These criteria were tested with phenotypic data collected on two diversity panels of maize (Zea mays L.) genotyped with a 50k SNPs array. In the two panels, samples chosen based on CDmean gave higher reliabilities than random samples for various calibration set sizes. CDmean also appeared superior to PEVmean, which can be explained by the fact that it takes into account the reduction of variance due to the relatedness between individuals. Selected samples were close to optimality for a wide range of trait heritabilities, which suggests that the strategy presented here can efficiently sample subsets in panels of inbred lines. A script to optimize reference samples based on CDmean is available on request. PMID:22865733
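The calibration-set optimization described above can be sketched as greedy forward selection over a realized relationship matrix. The reliability proxy below (expected squared accuracy of prediction under a bare GBLUP-style model, with a hypothetical shrinkage parameter `lam`) is an illustrative stand-in for the paper's exact PEV/CDmean formulas, not a reimplementation of them:

```python
import numpy as np

def mean_cd_proxy(A, cal, lam=1.0):
    """Rough proxy for CDmean: mean expected squared accuracy of
    predicting every individual from the calibration set `cal`.
    For set S, the accuracy of predicting i is approximated by
    a_iS (A_SS + lam*I)^-1 a_Si / a_ii (fixed effects ignored)."""
    S = np.asarray(cal)
    AsS = A[:, S]
    G = np.linalg.inv(A[np.ix_(S, S)] + lam * np.eye(len(S)))
    cd = np.einsum('is,st,it->i', AsS, G, AsS) / np.diag(A)
    return cd.mean()

def greedy_calibration(A, size, lam=1.0):
    """Forward-select `size` reference individuals maximizing the proxy."""
    chosen, remaining = [], list(range(A.shape[0]))
    for _ in range(size):
        best = max(remaining, key=lambda j: mean_cd_proxy(A, chosen + [j], lam))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

Because the proxy accounts for relatedness within the candidate set through A_SS, it tends to spread the chosen references across the panel rather than picking close relatives, which mirrors the reported advantage of CDmean over random sampling.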
Hébert-Losier, Kim; Holmberg, Hans-Christer
2014-03-01
Sex and age are reported to influence the maximal dynamometric performance of major muscle groups, including the ankle plantar flexors. Knee flexion (KF) also affects plantar flexion function, which underlies the use of 0° and 45° of KF for clinical assessment of the gastrocnemius and soleus, respectively. The influence of KF, sex, and age on dynamometric indicators of plantar flexion fatigue was examined in 28 males and 28 females recruited in 2 age groups (older and younger than 40 years). Each subject performed 50 maximal concentric isokinetic plantar flexions at 60° per second with 0° and 45° of KF. Maximal voluntary isometric contractions were determined before and after the isokinetic trials, and maximal, minimal, and normalized linear slopes of peak power were computed during testing. Main effects of and 2-way interactions between KF, sex, age, and order of testing were explored using mixed-effect models and stepwise regressions. At 0° and 45°, the fatigue indicators in younger and older individuals were similar and not influenced by testing order. However, peak isokinetic power and isometric torque declined to greater extents in males than in females; moreover, KF exerted greater impacts on absolute plantar flexion performance and on the maximal-to-minimal reduction in isokinetic power in males. Because KF had no pronounced effect on fatigue indicators, this test may be used over time with no major concern regarding the exact knee angle. Our findings indicate that sex, rather than age, should be considered when interpreting dynamometric indicators of fatigue from repeated maximal concentric isokinetic plantar flexions, for example, when establishing normative values or comparing outcomes.
Testing Bell's inequality with cosmic photons: closing the setting-independence loophole.
Gallicchio, Jason; Friedman, Andrew S; Kaiser, David I
2014-03-21
We propose a practical scheme to use photons from causally disconnected cosmic sources to set the detectors in an experimental test of Bell's inequality. In current experiments, with settings determined by quantum random number generators, only a small amount of correlation between detector settings and local hidden variables, established less than a millisecond before each experiment, would suffice to mimic the predictions of quantum mechanics. By setting the detectors using pairs of quasars or patches of the cosmic microwave background, observed violations of Bell's inequality would require any such coordination to have existed for billions of years, an improvement of 20 orders of magnitude.
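The Bell test being instrumented here is typically the CHSH inequality: any local-hidden-variable model bounds the CHSH combination by 2, while quantum mechanics reaches 2√2. The quantum prediction the detector settings probe can be computed directly; the sketch below assumes the polarization-singlet correlation E(a, b) = -cos 2(a - b) (for spin measurements the angle convention differs by the factor of 2):

```python
import numpy as np

def E(a, b):
    """Quantum correlation for polarization analyzers at angles a, b (radians)
    on a singlet pair: E(a, b) = -cos(2*(a - b))."""
    return -np.cos(2 * (a - b))

def chsh(a, a2, b, b2):
    """CHSH combination |E(a,b) - E(a,b') + E(a',b) + E(a',b')|;
    <= 2 for any local-hidden-variable model."""
    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

# Standard optimal settings: 0 and pi/4 for Alice; pi/8 and 3*pi/8 for Bob.
S = chsh(0, np.pi / 4, np.pi / 8, 3 * np.pi / 8)
```

At these settings S equals 2√2 ≈ 2.83, the Tsirelson bound; the cosmic-photon scheme concerns how the four setting choices are made, not the value of S itself.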
ERIC Educational Resources Information Center
Jones, Rebecca
2010-01-01
Two information literacy skills pilot projects are being undertaken at Malvern St James School (MSJ) with Year 6 and Year 9 pupils during 2009-10. The projects encourage the development of independent learning skills, with pupils planning, managing and executing both the research and practical elements of their project. Each pupil sets their own…
The limitations of simple gene set enrichment analysis assuming gene independence.
Tamayo, Pablo; Steinhardt, George; Liberzon, Arthur; Mesirov, Jill P
2016-02-01
Since its first publication in 2003, the Gene Set Enrichment Analysis method, based on the Kolmogorov-Smirnov statistic, has been heavily used, modified, and also questioned. Recently a simplified approach using a one-sample t-test score to assess enrichment and ignoring gene-gene correlations was proposed by Irizarry et al. 2009 as a serious contender. The argument criticizes Gene Set Enrichment Analysis's nonparametric nature and its use of an empirical null distribution as unnecessary and hard to compute. We refute these claims by careful consideration of the assumptions of the simplified method and its results, including a comparison with Gene Set Enrichment Analysis on a large benchmark set of 50 datasets. Our results provide strong empirical evidence that gene-gene correlations cannot be ignored due to the significant variance inflation they produce in the enrichment scores and should be taken into account when estimating gene set enrichment significance. In addition, we discuss the challenges that the complex correlation structure and multi-modality of gene sets pose more generally for gene set enrichment methods.
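The variance-inflation point can be checked with a quick simulation: for m equicorrelated gene scores with pairwise correlation ρ and variance σ², the variance of the set mean is (σ²/m)(1 + (m - 1)ρ), not the σ²/m that an independence-assuming t-test uses. A sketch with illustrative parameter values:

```python
import numpy as np

m, rho, sigma2, reps = 50, 0.2, 1.0, 20000
rng = np.random.default_rng(1)

# Equicorrelated gene scores: z_i = sqrt(rho)*f + sqrt(1-rho)*e_i,
# sharing a common factor f, so Var(z_i) = 1 and Corr(z_i, z_j) = rho.
f = rng.normal(size=reps)
e = rng.normal(size=(reps, m))
scores = np.sqrt(rho) * f[:, None] + np.sqrt(1 - rho) * e

set_means = scores.mean(axis=1)
empirical = set_means.var()                       # simulated variance
independent = sigma2 / m                          # naive t-test assumption
theoretical = sigma2 / m * (1 + (m - 1) * rho)    # correct variance
```

With m = 50 and ρ = 0.2 the inflation factor is 1 + 49·0.2 = 10.8, so the naive null distribution is far too narrow and enrichment p-values become badly anti-conservative.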
Lorenz, David R.; Cam, Hugh P.
2014-01-01
Histone modifiers are critical regulators of chromatin-based processes in eukaryotes. The histone methyltransferase Set1, a component of the Set1C/COMPASS complex, catalyzes the methylation at lysine 4 of histone H3 (H3K4me), a hallmark of euchromatin. Here, we show that the fission yeast Schizosaccharomyces pombe Set1 utilizes distinct domain modules to regulate disparate classes of repetitive elements associated with euchromatin and heterochromatin via H3K4me-dependent and -independent pathways. Set1 employs its RNA-binding RRM2 and catalytic SET domains to repress Tf2 retrotransposons and pericentromeric repeats while relying on its H3K4me function to maintain transcriptional repression at the silent mating type (mat) locus and subtelomeric regions. These repressive functions of Set1 correlate with the requirement of Set1C components to maintain repression at the mat locus and subtelomeres while dispensing Set1C in repressing Tf2s and pericentromeric repeats. We show that the contributions of several Set1C subunits to the states of H3K4me diverge considerably from those of Saccharomyces cerevisiae orthologs. Moreover, unlike S. cerevisiae, the regulation of Set1 protein level is not coupled to the status of H3K4me or histone H2B ubiquitination by the HULC complex. Intriguingly, we uncover a genome organization role for Set1C and H3K4me in mediating the clustering of Tf2s into Tf bodies by antagonizing the acetyltransferase Mst1-mediated H3K4 acetylation. Our study provides unexpected insights into the regulatory intricacies of a highly conserved chromatin-modifying complex with diverse roles in genome control. PMID:25356590
Mikheyeva, Irina V; Grady, Patrick J R; Tamburini, Fiona B; Lorenz, David R; Cam, Hugh P
2014-10-01
Histone modifiers are critical regulators of chromatin-based processes in eukaryotes. The histone methyltransferase Set1, a component of the Set1C/COMPASS complex, catalyzes the methylation at lysine 4 of histone H3 (H3K4me), a hallmark of euchromatin. Here, we show that the fission yeast Schizosaccharomyces pombe Set1 utilizes distinct domain modules to regulate disparate classes of repetitive elements associated with euchromatin and heterochromatin via H3K4me-dependent and -independent pathways. Set1 employs its RNA-binding RRM2 and catalytic SET domains to repress Tf2 retrotransposons and pericentromeric repeats while relying on its H3K4me function to maintain transcriptional repression at the silent mating type (mat) locus and subtelomeric regions. These repressive functions of Set1 correlate with the requirement of Set1C components to maintain repression at the mat locus and subtelomeres while dispensing Set1C in repressing Tf2s and pericentromeric repeats. We show that the contributions of several Set1C subunits to the states of H3K4me diverge considerably from those of Saccharomyces cerevisiae orthologs. Moreover, unlike S. cerevisiae, the regulation of Set1 protein level is not coupled to the status of H3K4me or histone H2B ubiquitination by the HULC complex. Intriguingly, we uncover a genome organization role for Set1C and H3K4me in mediating the clustering of Tf2s into Tf bodies by antagonizing the acetyltransferase Mst1-mediated H3K4 acetylation. Our study provides unexpected insights into the regulatory intricacies of a highly conserved chromatin-modifying complex with diverse roles in genome control.
TLR-4-dependent and -independent mechanisms of fetal brain injury in the setting of preterm birth.
Breen, Kelsey; Brown, Amy; Burd, Irina; Chai, Jinghua; Friedman, Alexander; Elovitz, Michal A
2012-08-01
In this study, we sought to assess how essential activation of toll-like receptor 4 (TLR-4) is to fetal brain injury from intrauterine inflammation. Both wild-type and TLR-4 mutant fetal central nervous system cells were exposed to inflammation using lipopolysaccharide in vivo or in vitro. Inflammation could not induce neuronal injury in the absence of glial cells, in either wild-type or TLR-4 mutant neurons. However, injured neurons could induce injury in other neurons regardless of TLR-4 competency. Our results indicate that initiation of neuronal injury is a TLR-4-dependent event, while propagation is a TLR-4-independent event.
1981-11-01
Whitney provided a set of axioms for a structure commonly called a matroid. Matroid theory (see [Tutte, 1971], [Lawler, 1976]) has applications to a wide...applicable in this case. On the other hand, there is no known efficient (polynomial-time) algorithm for constructing cliques of size 2 log n with...intersection. The problem of constructing a maximal independent set in the intersection of k matroids has a polynomial-time (in |E|) algorithm [Lawler, 197
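In the plain graph case, the report's central object is easy to obtain: a maximal (inclusion-wise) independent set can be built greedily in linear time; it is the maximum-cardinality version that is NP-hard. A minimal sketch on an adjacency-dict graph:

```python
def greedy_mis(adj):
    """Greedy maximal independent set: scan vertices and keep any vertex
    with no previously kept neighbor. `adj` maps vertex -> set of
    neighbors. The result is maximal (no vertex can be added without
    breaking independence) but not necessarily maximum-cardinality --
    finding a maximum independent set is NP-hard."""
    mis, blocked = set(), set()
    for v in adj:
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v]
    return mis
```

The matroid-intersection setting generalizes this: the same greedy pattern works when "independent" is tested against each matroid's independence oracle instead of a neighbor set.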
Baba, Mika; Maeda, Isseki; Morita, Tatsuya; Hisanaga, Takayuki; Ishihara, Tatsuhiko; Iwashita, Tomoyuki; Kaneishi, Keisuke; Kawagoe, Shohei; Kuriyama, Toshiyuki; Maeda, Takashi; Mori, Ichiro; Nakajima, Nobuhisa; Nishi, Tomohiro; Sakurai, Hiroki; Shimoyama, Satofumi; Shinjo, Takuya; Shirayama, Hiroto; Yamada, Takeshi; Ono, Shigeki; Ozawa, Taketoshi; Yamamoto, Ryo; Tsuneto, Satoru
2015-05-01
Accurate prognostic information in palliative care settings is needed for patients to make decisions and set goals and priorities. The Prognosis in Palliative Care Study (PiPS) predictor models were presented in 2011, but have not yet been fully validated by other research teams. The primary aim of this study is to examine the accuracy and to validate the modified PiPS (using physician-proxy ratings of mental status instead of patient interviews) in three palliative care settings, namely palliative care units, hospital-based palliative care teams, and home-based palliative care services. This multicenter prospective cohort study was conducted in 58 palliative care services including 16 palliative care units, 19 hospital-based palliative care teams, and 23 home-based palliative care services in Japan from September 2012 through April 2014. A total of 2426 subjects were recruited. For reasons including lack of follow-up and missing variables (primarily blood examination data), we obtained analyzable data from 2212 and 1257 patients for the modified PiPS-A and PiPS-B, respectively. In all palliative care settings, both the modified PiPS-A and PiPS-B identified three risk groups with different survival rates (P<0.001). The absolute agreement ranged from 56% to 60% in the PiPS-A model and 60% to 62% in the PiPS-B model. The modified PiPS was successfully validated and can be useful in palliative care units, hospital-based palliative care teams, and home-based palliative care services.
A set of ligation-independent in vitro translation vectors for eukaryotic protein production
Bardóczy, Viola; Géczi, Viktória; Sawasaki, Tatsuya; Endo, Yaeta; Mészáros, Tamás
2008-01-01
Background The last decade has brought the renaissance of protein studies and accelerated the development of high-throughput methods in all aspects of proteomics. Presently, most protein synthesis systems exploit the capacity of living cells to translate proteins, but their application is limited by several factors. A more flexible alternative protein production method is cell-free in vitro protein translation. Currently available in vitro translation systems are suitable for high-throughput robotic protein production, fulfilling the requirements of proteomics studies. The wheat germ extract-based in vitro translation system is likely the most promising method, since numerous eukaryotic proteins can be cost-efficiently synthesized in their native folded form. Although currently available vectors for wheat embryo in vitro translation systems ensure high productivity, they do not meet the requirements of state-of-the-art proteomics. Target genes have to be inserted using restriction endonucleases and the plasmids do not encode cleavable affinity purification tags. Results We designed four ligation-independent cloning (LIC) vectors for wheat germ extract-based in vitro protein translation. In these constructs, the RNA transcription is driven by T7 or SP6 phage polymerase and two TEV protease-cleavable affinity tags can be added to aid protein purification. To evaluate our improved vectors, a plant mitogen-activated protein kinase was cloned into all four constructs. Purification of this eukaryotic protein kinase demonstrated that all constructs functioned as intended: insertion of the PCR fragment by LIC worked efficiently, affinity purification of the translated proteins by GST-Sepharose or MagneHis particles resulted in high-purity kinase, and the affinity tags could be efficiently removed under different reaction conditions. Furthermore, high in vitro kinase activity testified to the proper folding of the purified protein. Conclusion Four newly designed in vitro translation
Independent evaluation plan for radiac set AN/VDR-1() Operational Test IIA (OT IIA)
Not Available
1980-02-01
The AN/VDR-1() is being developed in response to a DA approved Qualitative Materiel Requirement (QMR) dated 3 March 1971. The radiac system must provide a means of conducting both dismounted and vehicular radiological surveys and for performing radiological monitoring of personnel and equipment. The system will replace both the IM-174/PD and IM-174A/PD radiacmeters and may replace the AN/PDR-27() radiac set. This system is not envisioned for use as an aerial survey meter, since the AN/ADR-6 is currently under development for that specific task. The system will be operated by the individual soldier. A driver should be able to operate it during vehicular radiological surveys. The system will be a TOE issue item to Army units. The equipment will not normally be pooled at higher echelons, except as maintenance floats. The basis of issue will be one system per platoon, company headquarters and subunit requiring a capability to detect low or high level contamination (e.g., medical section). The system will be operated in various climatic and weather conditions. The system will provide the commander with data concerning gamma dose rates in areas contaminated by fallout, neutron-induced gamma activity or radiological agents. This data will assist in the planning of tactical operations and medical monitoring of radiological casualties.
NASA Astrophysics Data System (ADS)
Li, Zipeng; Chen, Jinglong; Zi, Yanyang; Pan, Jun
2017-02-01
As one of the most critical components of a high-speed locomotive, the wheel set bearing has attracted increasing attention for fault identification in recent years. However, non-stationary vibration signals with modulation phenomena and heavy background noise make it difficult to extract the hidden weak fault features. Variational Mode Decomposition (VMD), which can adaptively and non-recursively decompose a non-stationary signal into several Intrinsic Mode Functions, provides a feasible tool. However, heavy background noise seriously affects the setting of the mode number, which may lead to information loss or over-decomposition. In this paper, an independence-oriented VMD method via correlation analysis is proposed to adaptively extract weak and compound fault features of wheel set bearings. To overcome the information-loss problem, the appropriate mode number is determined by the criterion of approximate complete reconstruction. Then, similar modes are combined according to the similarity of their envelopes to solve the over-decomposition problem. Finally, three applications to wheel set bearing faults of high-speed locomotives verify the effectiveness of the proposed method compared with the original VMD, EMD, and EEMD methods.
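The mode-merging step (combining modes whose envelopes are similar) can be sketched with a Hilbert-transform envelope and a pairwise correlation threshold. This is a hedged illustration of the over-decomposition fix, not the authors' exact similarity criterion; the threshold value is a free parameter:

```python
import numpy as np
from scipy.signal import hilbert

def merge_similar_modes(modes, threshold=0.8):
    """Merge decomposed modes whose Hilbert envelopes are strongly
    correlated -- a sketch of combining over-decomposed modes, assuming
    a simple pairwise Pearson-correlation criterion on the envelopes.
    `modes` is an array of shape (n_modes, n_samples)."""
    env = np.abs(hilbert(modes, axis=1))  # instantaneous amplitude per mode
    merged, used = [], set()
    for i in range(len(modes)):
        if i in used:
            continue
        group = modes[i].copy()
        for j in range(i + 1, len(modes)):
            if j not in used and np.corrcoef(env[i], env[j])[0, 1] > threshold:
                group += modes[j]      # fold the similar mode back in
                used.add(j)
        merged.append(group)
    return np.array(merged)
```

Two modes carrying the same amplitude modulation at different carrier frequencies (the typical over-decomposition signature) are folded back into one component, while modes with distinct envelopes are kept separate.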
2016-01-01
Reduced cell wall invertase (CWIN) activity has been shown to be associated with poor seed and fruit set under abiotic stress. Here, we examined whether genetically increasing native CWIN activity would sustain fruit set under long-term moderate heat stress (LMHS), an important factor limiting crop production, by using transgenic tomato (Solanum lycopersicum) with its CWIN inhibitor gene silenced and focusing on ovaries and fruits at 2 d before and after pollination, respectively. We found that the increase of CWIN activity suppressed LMHS-induced programmed cell death in fruits. Surprisingly, measurement of the contents of H2O2 and malondialdehyde and the activities of a cohort of antioxidant enzymes revealed that the CWIN-mediated inhibition on programmed cell death is exerted in a reactive oxygen species-independent manner. Elevation of CWIN activity sustained Suc import into fruits and increased activities of hexokinase and fructokinase in the ovaries in response to LMHS. Compared to the wild type, the CWIN-elevated transgenic plants exhibited higher transcript levels of heat shock protein genes Hsp90 and Hsp100 in ovaries and HspII17.6 in fruits under LMHS, which corresponded to a lower transcript level of a negative auxin responsive factor IAA9 but a higher expression of the auxin biosynthesis gene ToFZY6 in fruits at 2 d after pollination. Collectively, the data indicate that CWIN enhances fruit set under LMHS through suppression of programmed cell death in a reactive oxygen species-independent manner that could involve enhanced Suc import and catabolism, HSP expression, and auxin response and biosynthesis. PMID:27462084
Resources and energetics determined dinosaur maximal size
McNab, Brian K.
2009-01-01
Some dinosaurs reached masses that were ≈8 times those of the largest, ecologically equivalent terrestrial mammals. The factors most responsible for setting the maximal body size of vertebrates are resource quality and quantity, as modified by the mobility of the consumer, and the vertebrate's rate of energy expenditure. If the food intake of the largest herbivorous mammals defines the maximal rate at which plant resources can be consumed in terrestrial environments and if that limit applied to dinosaurs, then the large size of sauropods occurred because they expended energy in the field at rates extrapolated from those of varanid lizards, which are ≈22% of the rates in mammals and 3.6 times the rates of other lizards of equal size. Of two species having the same energy income, the one that uses more energy for mass-independent maintenance necessarily has the smaller size. The larger mass found in some marine mammals reflects a greater resource abundance in marine environments. The presumptively low energy expenditures of dinosaurs potentially permitted Mesozoic communities to support dinosaur biomasses that were up to 5 times those found in mammalian herbivores in Africa today. The maximal size of predatory theropods was ≈8 tons, which, if it reflected the maximal capacity to consume vertebrates in terrestrial environments, corresponds in predatory mammals to a maximal mass of less than a ton, which is what is observed. Some coelurosaurs may have evolved endothermy in association with the evolution of feathered insulation and a small mass. PMID:19581600
Ansari, Elnaz Saberi; Eslahchi, Changiz; Pezeshk, Hamid; Sadeghi, Mehdi
2014-09-01
Decomposition of structural domains is an essential task in classifying protein structures, predicting protein function, and many other proteomics problems. As the number of known protein structures in the PDB grows exponentially, the need for accurate automatic domain decomposition methods becomes more essential. In this article, we introduce a bottom-up algorithm for assigning protein domains using a graph theoretical approach. This algorithm is based on a center-based clustering approach. For constructing initial clusters, members of an independent dominating set for the graph representation of a protein are considered as the centers. A distance matrix is then defined for these clusters. To obtain final domains, these clusters are merged using the compactness principle of domains and a method similar to the neighbor-joining algorithm, subject to some thresholds. The thresholds are computed using a training set consisting of 50 protein chains. The algorithm is implemented in C++ and is named ProDomAs. To assess the performance of ProDomAs, its results are compared with seven automatic methods against five publicly available benchmarks. The results show that ProDomAs outperforms the other methods applied to the mentioned benchmarks. The performance of ProDomAs is also evaluated against 6342 chains obtained from ASTRAL SCOP 1.71. ProDomAs is freely available at http://www.bioinf.cs.ipm.ir/software/prodomas. © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Wei, Jun; Cascade, Philip N.; Kazerooni, Ella A.
2009-02-01
Computed tomographic pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided diagnosis (CAD) system for assisting radiologists in detection of pulmonary embolism in CTPA images. The pulmonary vessel tree is extracted based on the analysis of eigenvalues of Hessian matrices at multiple scales, followed by 3D hierarchical EM segmentation. A prescreening method is designed to identify suspicious PE candidates along the extracted vessels. A linear discriminant analysis (LDA) classifier with feature selection is then used to reduce false positives (FPs). Two data sets of 59 and 69 CTPA PE cases were randomly selected from patient files at the University of Michigan (UM) and the PIOPED II study, respectively, and used as independent training and test sets. The PEs that were identified by three experienced thoracic radiologists were used as the gold standard. The detection performance of the CAD system was assessed by free-response receiver operating characteristic analysis. The results indicated that our PE detection system can achieve a sensitivity of 80% at 18.9 FPs/case on the PIOPED cases when the LDA classifier was trained with the UM cases. The test sensitivity with the UM cases was 80% at 22.6 FPs/case when the LDA classifier was trained with the PIOPED cases.
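The LDA-based false-positive reduction stage can be illustrated with a minimal Fisher discriminant on synthetic candidate features. This is a sketch under stated assumptions, not the paper's trained classifier: the feature distributions, class separation, and midpoint threshold are invented, and the stepwise feature selection is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical candidate features (e.g., intensity, gradient, shape scores)
# for 100 false positives (class 0) and 100 true PEs (class 1).
X0 = rng.normal(0.0, 1.0, (100, 3))
X1 = rng.normal(2.0, 1.0, (100, 3))


def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: direction w maximizing
    between-class over within-class scatter, plus a midpoint threshold."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0.T) + np.cov(X1.T)        # within-class scatter
    w = np.linalg.solve(Sw, mu1 - mu0)      # discriminant direction
    thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, thr


w, thr = fisher_lda(X0, X1)
pred = np.concatenate([X0 @ w > thr, X1 @ w > thr])
labels = np.r_[np.zeros(100), np.ones(100)].astype(bool)
acc = (pred == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

Candidates scoring above the threshold would be kept as suspicious PEs; in practice the operating point is swept to trade sensitivity against FPs/case, as in the FROC analysis above.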
Independent Component Analysis by Entropy Maximization (INFOMAX)
2007-06-01
This work examines two types of signals: audio signals and simple communications signals (polar non-return-to-zero signals). The Infomax method is implemented by performing gradient ascent in MATLAB, focusing specifically on small numbers of these two signal types.
Snijder, E J; Horzinek, M C; Spaan, W J
1990-01-01
By using poly(A)-selected RNA from Berne virus (BEV)-infected embryonic mule skin cells as a template, cDNA was prepared and cloned in plasmid pUC9. Recombinants covering a contiguous sequence of about 10 kilobases were identified. Northern (RNA) blot hybridizations with various restriction fragments from these clones showed that the five BEV mRNAs formed a 3'-coterminal nested set. Sequence analysis revealed the presence of four complete open reading frames of 4743, 699, 426, and 480 nucleotides, with initiation codons coinciding with the 5' ends of BEV RNAs 2 through 5, respectively. By using primer extension analysis and oligonucleotide hybridizations, RNA 5 was found to be contiguous on the consensus sequence. The transcription of BEV mRNAs was studied by means of UV mapping. BEV RNAs 1, 2, and 3 were shown to be transcribed independently, which is also likely--although not rigorously proven--for RNAs 4 and 5. Upstream of the AUG codon of each open reading frame a conserved sequence pattern was observed which is postulated to function as a core promoter sequence in subgenomic RNA transcription. In the area surrounding the core promoter region of the two most abundant subgenomic BEV RNAs, a number of homologous sequence motifs were identified. PMID:2293666
2012-01-01
Background The optimal setting and content of primary health care rehabilitation of older people is not known. Our aim was to study independence, institutionalization, death and treatment costs 18 months after primary care rehabilitation of older people in two different settings. Methods Eighteen-month follow-up of an open, prospective study comparing the outcome of multi-disciplinary rehabilitation of older people in a structured and intensive Primary care dedicated inpatient rehabilitation (PCDIR, n=202) versus a less structured and less intensive Primary care nursing home rehabilitation (PCNHR, n=100). Participants: 302 patients, disabled from stroke, hip-fracture, osteoarthritis and other chronic diseases, aged ≥65 years, assessed to have a rehabilitation potential and being referred from general hospital or own residence. Outcome measures: Primary: Independence, assessed by Sunnaas ADL Index(SI). Secondary: Hospital and short-term nursing home length of stay (LOS); institutionalization, measured by institutional residence rate; death; and costs of rehabilitation and care. Statistical tests: T-tests, Correlation tests, Pearson’s χ2, ANCOVA, Regression and Kaplan-Meier analyses. Results Overall SI scores were 26.1 (SD 7.2) compared to 27.0 (SD 5.7) at the end of rehabilitation, a statistically, but not clinically significant reduction (p=0.003, 95%CI(0.3-1.5)). The PCDIR patients scored 2.2 points higher in SI than the PCNHR patients, adjusted for age, gender, baseline MMSE and SI scores (p=0.003, 95%CI(0.8-3.7)). Out of 49 patients staying >28 days in short-term nursing homes, PCNHR-patients stayed significantly longer than PCDIR-patients (mean difference 104.9 days, 95%CI(0.28-209.6), p=0.05). The institutionalization increased in PCNHR (from 12% to 28%, p=0.001), but not in PCDIR (from 16.9% to 19.3%, p=0.45). The overall one year mortality rate was 9.6%. Average costs were substantially higher for PCNHR versus PCDIR. The difference per patient was 3528€ for
Parker, Sarah J.; Rost, Hannes; Rosenberger, George; Collins, Ben C.; Malmström, Lars; Amodei, Dario; Venkatraman, Vidya; Raedschelders, Koen; Van Eyk, Jennifer E.; Aebersold, Ruedi
2015-01-01
Accurate knowledge of retention time (RT) in liquid chromatography-based mass spectrometry data facilitates peptide identification, quantification, and multiplexing in targeted and discovery-based workflows. Retention time prediction is particularly important for peptide analysis in emerging data-independent acquisition (DIA) experiments such as SWATH-MS. The indexed RT approach, iRT, uses synthetic spiked-in peptide standards (SiRT) to set RT to a unit-less scale, allowing for normalization of peptide RT between different samples and chromatographic set-ups. The obligatory use of SiRTs can be costly and complicates comparisons and data integration if standards are not included in every sample. Reliance on SiRTs also prevents the inclusion of archived mass spectrometry data for generation of the peptide assay libraries central to targeted DIA-MS data analysis. We have identified a set of peptide sequences that are conserved across most eukaryotic species, termed Common internal Retention Time standards (CiRT). In a series of tests to support the appropriateness of the CiRT-based method, we show: (1) the CiRT peptides normalized RT in human, yeast, and mouse cell lysate derived peptide assay libraries and enabled merging of archived libraries for expanded DIA-MS quantitative applications; (2) CiRTs predicted RT in SWATH-MS data within a 2-min margin of error for the majority of peptides; and (3) normalization of RT using the CiRT peptides enabled the accurate SWATH-MS-based quantification of 340 synthetic isotopically labeled peptides that were spiked into either human or yeast cell lysate. To automate and facilitate the use of these CiRT peptide lists or other custom user-defined internal RT reference peptides in DIA workflows, an algorithm was designed to automatically select a high-quality subset of datapoints for robust linear alignment of RT for use. Implementations of this algorithm are available for the OpenSWATH and Skyline platforms. Thus, CiRT peptides can
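The RT-normalization step described above, selecting a high-quality subset of anchor datapoints for a robust linear alignment, can be sketched as an iteratively trimmed least-squares fit. The anchor values, noise levels, and k·σ residual cut are illustrative assumptions, not the OpenSWATH/Skyline implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical anchors: library iRT values (unit-less) vs. observed RT (min),
# with one mis-identified anchor acting as an outlier.
irt = np.linspace(0, 100, 40)
rt = 0.6 * irt + 12.0 + rng.normal(0, 0.2, 40)
rt[5] += 15.0                                   # outlier anchor


def robust_rt_fit(irt, rt, n_iter=3, k=3.0):
    """Fit rt ≈ slope*irt + intercept, iteratively discarding anchors
    whose residual exceeds k standard deviations of the kept residuals."""
    keep = np.ones(len(irt), dtype=bool)
    for _ in range(n_iter):
        slope, intercept = np.polyfit(irt[keep], rt[keep], 1)
        resid = rt - (slope * irt + intercept)
        keep = np.abs(resid) < k * resid[keep].std()
    return slope, intercept


s, b = robust_rt_fit(irt, rt)
print(f"slope≈{s:.3f}, intercept≈{b:.2f} min")
```

With the fitted line, any peptide's library RT maps onto the run's chromatographic scale, which is what allows merging libraries built without spiked-in standards.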
Li, Chih-Ying; Romero, Sergio; Simpson, Annie N; Bonilha, Heather S; Simpson, Kit N; Hong, Ickpyo; Velozo, Craig A
2017-07-26
To improve the practical use of the short forms (SFs) developed from the item bank, we compared the measurement precision of the 4- and 8-item SFs generated from a motor item bank composed of the Functional Independence Measure (FIM™) and the Minimum Data Set (MDS). The FIM-MDS motor item bank allowed scores generated from different instruments to be co-calibrated. The 4- and 8-item SFs were developed based on Rasch analysis procedures. This paper compared person strata, ceiling/floor effects, and test standard error (SE) plots for each administration form, and examined 95% confidence interval (CI) error bands of anchored person measures with the corresponding SFs. We used 0.3 SE as a criterion to reflect a reliability level of 0.90. SETTING: Veterans' inpatient rehabilitation facilities and community living centers. PARTICIPANTS: 2500 Veterans who had both FIM and MDS data within 6 days during years 2008 through 2010. MAIN OUTCOME MEASURE(S): 4- and 8-item SFs of FIM, MDS, and the FIM-MDS motor item bank. Six SFs were generated with 4 and 8 items across a range of difficulty levels from the FIM-MDS motor item bank. The three 8-item SFs all had higher correlations with the item bank (r=0.82∼0.95), higher person strata, and less test error than the corresponding 4-item SFs (r=0.80∼0.90). The three 4-item SFs did not meet the criterion of SE less than 0.3 for any theta values. 8-item short forms could improve the clinical use of an item bank composed of existing instruments across the continuum of care in Veterans. We also found that the number of items, not test specificity, determines the precision of the instrument. Copyright © 2017. Published by Elsevier Inc.
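The 0.3 SE criterion follows from the standard relation between measurement error and reliability, reliability = 1 − SEM²/SD². A one-line check, assuming person measures scaled to unit variance (an assumption of this sketch, not stated in the abstract):

```python
import math


def se_for_reliability(reliability, person_sd=1.0):
    """SEM implied by a target reliability:
    reliability = 1 - SEM**2 / SD**2  =>  SEM = SD * sqrt(1 - reliability)."""
    return person_sd * math.sqrt(1.0 - reliability)


print(round(se_for_reliability(0.90), 2))  # → 0.32, i.e. roughly the 0.3 SE cut
```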
MAZZETTI, SCOTT A.; WOLFF, CHRISTOPHER; COLLINS, BRITTANY; KOLANKOWSKI, MICHAEL T.; WILKERSON, BRITTANY; OVERSTREET, MATTHEW; GRUBE, TROY
2011-01-01
With resistance exercise, greater intensity typically elicits increased energy expenditure, but heavier loads require that the lifter perform more sets of fewer repetitions, which alters the kilograms lifted per set. Thus, studies of the effect of exercise-intensity on energy expenditure have yielded varying results, especially with explosive resistance exercise. This study was designed to examine the effect of exercise-intensity and kilograms/set on energy expenditure during explosive resistance exercise. Ten resistance-trained men (22±3.6 years; 84±6.4 kg, 180±5.1 cm, and 13±3.8 %fat) performed squat and bench press protocols once/week using different exercise-intensities including 48% (LIGHT-48), 60% (MODERATE-60), and 72% of 1-repetition-maximum (1-RM) (HEAVY-72), plus a no-exercise protocol (CONTROL). To examine the effects of kilograms/set, an additional protocol using 72% of 1-RM was performed (HEAVY-72MATCHED) with kilograms/set matched with LIGHT-48 and MODERATE-60. LIGHT-48 was 4 sets of 10 repetitions (4×10); MODERATE-60 4×8; HEAVY-72 5×5; and HEAVY-72MATCHED 4×6.5. Eccentric and concentric repetition speeds, ranges-of-motion, rest-intervals, and total kilograms were identical between protocols. Expired air was collected continuously throughout each protocol using a metabolic cart, [blood lactate] using a portable analyzer, and bench press peak power were measured. Rates of energy expenditure were significantly greater (p≤0.05) with LIGHT-48 and HEAVY-72MATCHED than HEAVY-72 during squat (7.3±0.7; 6.9±0.6 > 6.1±0.7 kcal/min), bench press (4.8±0.3; 4.7±0.3 > 4.0±0.4 kcal/min), and +5min after (3.7±0.1; 3.7±0.2 > 3.3±0.3 kcal/min), but there were no significant differences in total kcal among protocols. Therefore, exercise-intensity may not affect energy expenditure with explosive contractions, but light loads (~50% of 1-RM) may be preferred because of higher rates of energy expenditure, and since heavier loading requires more sets with lower
Sze, Christie C; Cao, Kaixiang; Collings, Clayton K; Marshall, Stacy A; Rendleman, Emily J; Ozark, Patrick A; Chen, Fei Xavier; Morgan, Marc A; Wang, Lu; Shilatifard, Ali
2017-09-22
Of the six members of the COMPASS (complex of proteins associated with Set1) family of histone H3 Lys4 (H3K4) methyltransferases identified in mammals, Set1A has been shown to be essential for early embryonic development and the maintenance of embryonic stem cell (ESC) self-renewal. Like its familial relatives, Set1A possesses a catalytic SET domain responsible for histone H3K4 methylation. Whether H3K4 methylation by Set1A/COMPASS is required for ESC maintenance and during differentiation has not yet been addressed. Here, we generated ESCs harboring the deletion of the SET domain of Set1A (Set1A(ΔSET)); surprisingly, the Set1A SET domain is dispensable for ESC proliferation and self-renewal. The removal of the Set1A SET domain does not diminish bulk H3K4 methylation in ESCs; instead, only a subset of genomic loci exhibited reduction in H3K4me3 in Set1A(ΔSET) cells, suggesting a role for Set1A independent of its catalytic domain in ESC self-renewal. However, Set1A(ΔSET) ESCs are unable to undergo normal differentiation, indicating the importance of Set1A-dependent H3K4 methylation during differentiation. Our data also indicate that during differentiation, Set1A but not Mll2 functions as the H3K4 methylase on bivalent genes and is required for their expression, supporting a model for transcriptional switch between Mll2 and Set1A during the self-renewing-to-differentiation transition. Together, our study implicates a critical role for Set1A catalytic methyltransferase activity in regulating ESC differentiation but not self-renewal and suggests the existence of context-specific H3K4 methylation that regulates transcriptional outputs during ESC pluripotency. © 2017 Sze et al.; Published by Cold Spring Harbor Laboratory Press.
Gopal, Ajay K; Pro, Barbara; Connors, Joseph M; Younes, Anas; Engert, Andreas; Shustov, Andrei R; Chi, Xuedong; Larsen, Emily K; Kennedy, Dana A; Sievers, Eric L
2016-10-01
Independent central review of clinical imaging remains the standard for oncology clinical trials with registration potential. A limited independent central review strategy has been proposed for solid tumor trials based on concordance between central and local evaluation of response. Concordance between independent central review and local evaluation of response in hematological malignancies is not known. We retrospectively evaluated concordance between prospectively performed central and local assessments of response using the Revised Response Criteria for Malignant Lymphoma across two international, open-label, single-arm, registration studies of brentuximab vedotin in patients with relapsed or refractory Hodgkin lymphoma (N = 102) or systemic anaplastic large-cell lymphoma (N = 58). Overall objective response rates were similar between assessors for both the trial in Hodgkin lymphoma (75% independent central review, 72% local evaluation) and the trial in anaplastic large-cell lymphoma (86% independent central review, 83% local evaluation). Patient-specific objective response concordance was also substantial (Hodgkin lymphoma: kappa = 0.68; anaplastic large-cell lymphoma: kappa = 0.74). Median progression-free survival was similar between assessors for patients with anaplastic large-cell lymphoma (14.3 months by independent central review (95% confidence interval: 6.9, -); 14.5 months by local evaluation (95% confidence interval: 9.4, -)), but longer by local evaluation in patients with Hodgkin lymphoma (5.8 months by independent central review (95% confidence interval: 5.0, 9.0); 9.0 months by local evaluation (95% confidence interval: 7.1, 12.0)). Median duration of response was longer by local evaluation in both malignancies, which was primarily attributable to earlier computed tomography and positron emission tomography-based scoring of progression by independent central review. A limited independent review audit strategy for clinical
Pelletier, Alexandra; Sunthara, Gajen; Gujral, Nitin; Mittal, Vandna; Bourgeois, Fabienne C
2016-01-01
understanding what features should be built into the app. Phase 3 involved deployment of TaskList on a clinical floor at BCH. Lastly, Phase 4 gathered the lessons learned from the pilot to refine the guideline. Results Fourteen practical recommendations were identified to create the BCH Mobile Application Development Guideline to safeguard custom applications in hospital BYOD settings. The recommendations were grouped into four categories: (1) authentication and authorization, (2) data management, (3) safeguarding app environment, and (4) remote enforcement. Following the guideline, the TaskList app was developed and then was piloted with an inpatient ward team. Conclusions The Mobile Application Development guideline was created and used in the development of TaskList. The guideline is intended for use by developers when addressing integration with hospital information systems, deploying apps in BYOD health care settings, and meeting compliance standards, such as Health Insurance Portability and Accountability Act (HIPAA) regulations. PMID:27169345
Al Ayubi, Soleh U; Pelletier, Alexandra; Sunthara, Gajen; Gujral, Nitin; Mittal, Vandna; Bourgeois, Fabienne C
2016-05-11
built into the app. Phase 3 involved deployment of TaskList on a clinical floor at BCH. Lastly, Phase 4 gathered the lessons learned from the pilot to refine the guideline. Fourteen practical recommendations were identified to create the BCH Mobile Application Development Guideline to safeguard custom applications in hospital BYOD settings. The recommendations were grouped into four categories: (1) authentication and authorization, (2) data management, (3) safeguarding app environment, and (4) remote enforcement. Following the guideline, the TaskList app was developed and then was piloted with an inpatient ward team. The Mobile Application Development guideline was created and used in the development of TaskList. The guideline is intended for use by developers when addressing integration with hospital information systems, deploying apps in BYOD health care settings, and meeting compliance standards, such as Health Insurance Portability and Accountability Act (HIPAA) regulations.
ERIC Educational Resources Information Center
Velastegui, Pamela J.
2013-01-01
This hypothesis-generating case study investigates the naturally emerging roles of technology brokers and technology leaders in three independent schools in New York involving 92 school educators. A multiple and mixed method design utilizing Social Network Analysis (SNA) and fuzzy set Qualitative Comparative Analysis (FSQCA) involved gathering…
Taeroe, Anders; Mustapha, Walid Fayez; Stupak, Inge; Raulund-Rasmussen, Karsten
2017-03-25
Forests' potential to mitigate carbon emissions to the atmosphere is heavily debated, and a key question is whether forests left unmanaged to store carbon in biomass and soil provide larger carbon emission reductions than forests kept under management for production of wood that can substitute fossil fuels and fossil fuel intensive materials. We defined a modelling framework for calculation of the carbon pools and fluxes along the forest energy and wood product supply chains over 200 years for three forest management alternatives (FMA): 1) a traditionally managed European beech forest, as a business-as-usual case, 2) an energy poplar plantation, and 3) a set-aside forest left unmanaged for long-term storage of carbon. We calculated the cumulative net carbon emissions (CCE) and carbon parity times (CPT) of the managed forests relative to the unmanaged forest. Energy poplar generally had the lowest CCE when using coal as the reference fossil fuel. With natural gas as the reference fossil fuel, the CCE of the business-as-usual and the energy poplar alternatives were nearly equal, with the unmanaged forest having the highest CCE after 40 years. CPTs ranged from 0 to 156 years, depending on the applied model assumptions. CCE and CPT were especially sensitive to the reference fossil fuel, material alternatives to wood, forest growth rates for the three FMAs, and energy conversion efficiencies. Assumptions about the long-term steady-state levels of carbon stored in the unmanaged forest had a limited effect on CCE after 200 years. Analyses also showed that CPT was not a robust measure for ranking of carbon mitigation benefits.
NASA Technical Reports Server (NTRS)
Krage, Frederick J. (Inventor); Westmeyer, Paul A. (Inventor); Wertenberg, Russell F. (Inventor); Riegel, Jack F. (Inventor)
2017-01-01
An authentication procedure utilizes multiple independent sources of data to determine whether usage of a device, such as a desktop computer, is authorized. When a comparison indicates an anomaly from the baseline usage data, the system provides a notice that access to the first device is not authorized.
Maximal acyclic agreement forests.
Voorkamp, Josh
2014-10-01
Finding the hybridization number of a pair or set of trees, [Formula: see text], is a well-studied problem in phylogenetics and is equivalent to finding a maximum acyclic agreement forest (MAAF) for [Formula: see text]. This article defines a new type of acyclic agreement forest called a maximal acyclic agreement forest (mAAF). The property for which mAAFs are "simplest" is more general and could be considered more biologically relevant than the corresponding property for MAAFs, and the set of MAAFs for any [Formula: see text] is a subset of the set of mAAFs for [Formula: see text]. This article also presents two new algorithms; one finds a mAAF for any [Formula: see text] in polynomial time and the other is an exhaustive search that finds all mAAFs for some [Formula: see text], which is also a new approach to finding the hybridization number when applied to a pair of trees. The exhaustive search algorithm is applied to a real world data set, and the findings are compared to previous results.
Iannario, Maria; Lang, Joseph B
2016-11-10
A new testing approach is described for improving statistical tests of independence in sets of tables stratified on one or more relevant factors, in the case of categorical (nominal or ordinal) variables. Common tests of independence that exploit the ordinality of one of the variables use a restricted-alternative approach. A different, relaxed-null method is presented. Specifically, the M-moment score tests and the correlation tests are introduced. Using multinomial-Poisson homogeneous modeling theory, it is shown that these tests are computationally and conceptually simple, and simulation results suggest that they can perform better than other common tests of conditional independence. To illustrate, the proposed tests are used to better understand human papillomavirus type-specific infection by exploring the intention to vaccinate. Copyright © 2016 John Wiley & Sons, Ltd.
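For contrast with the relaxed-null tests the paper proposes, a conventional test of conditional independence across strata is the Cochran-Mantel-Haenszel statistic, which can be computed directly. The 2×2 tables below are hypothetical (e.g., intention to vaccinate × infection status, stratified by age group), and the continuity correction is omitted.

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical stratified 2x2 contingency tables.
tables = [
    np.array([[30, 10], [12, 28]]),
    np.array([[22, 18], [15, 25]]),
]


def cmh_statistic(tables):
    """Cochran-Mantel-Haenszel test of conditional independence
    (no continuity correction): sums (a - E[a]) and hypergeometric
    variances over strata, then refers to chi-square with 1 df."""
    num, var = 0.0, 0.0
    for t in tables:
        n = t.sum()
        r1, r2 = t.sum(axis=1)
        c1, c2 = t.sum(axis=0)
        num += t[0, 0] - r1 * c1 / n          # observed minus expected count
        var += r1 * r2 * c1 * c2 / (n**2 * (n - 1))
    stat = num**2 / var
    return stat, chi2.sf(stat, df=1)


stat, p = cmh_statistic(tables)
print(f"CMH stat = {stat:.2f}, p = {p:.2g}")
```

The strongly associated tables above give a statistic well beyond the 3.84 critical value at the 5% level; the paper's M-moment and correlation tests target the same null with a relaxed alternative.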
Jones, J.W.; Jarnagin, T.
2009-01-01
Given the relatively high cost of mapping impervious surfaces at regional scales, substantial effort is being expended in the development of moderate-resolution, satellite-based methods for estimating impervious surface area (ISA). To rigorously assess the accuracy of these data products, high quality, independently derived validation data are needed. High-resolution data were collected across a gradient of development within the Mid-Atlantic region to assess the accuracy of National Land Cover Data (NLCD) Landsat-based ISA estimates. Absolute error (satellite-predicted area - "reference area") and relative error [(satellite-predicted area - "reference area")/"reference area"] were calculated for each of 240 sample regions that are each more than 15 Landsat pixels on a side. The ability to compile and examine ancillary data in a geographic information system environment provided for evaluation of both validation and NLCD data and afforded efficient exploration of observed errors. In a minority of cases, errors could be explained by temporal discontinuities between the date of satellite image capture and validation source data in rapidly changing places. In others, errors were created by vegetation cover over impervious surfaces and by other factors that bias the satellite processing algorithms. On average in the Mid-Atlantic region, the NLCD product underestimates ISA by approximately 5%. While the error range varies between 2 and 8%, this underestimation occurs regardless of development intensity. Through such analyses the errors, strengths, and weaknesses of particular satellite products can be explored to suggest appropriate uses for regional, satellite-based data in rapidly developing areas of environmental significance. © 2009 ASCE.
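The two error measures can be stated as a small sketch; the area values are hypothetical, not from the study's sample regions.

```python
def impervious_errors(predicted_area, reference_area):
    """Absolute and relative error of a satellite ISA estimate for one
    sample region (areas in any consistent unit, e.g. hectares)."""
    absolute = predicted_area - reference_area
    relative = absolute / reference_area
    return absolute, relative


# Hypothetical region: satellite predicts 95 ha impervious, reference says 100 ha,
# i.e. a 5% underestimate under this sign convention.
a, r = impervious_errors(95.0, 100.0)
print(a, r)  # → -5.0 -0.05
```

A negative relative error thus corresponds to the underestimation the study reports for the NLCD product.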
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Chughtai, Aamer; Kuriakose, Jean W.; Kazerooni, Ella A.; Hadjiiski, Lubomir M.; Wei, Jun; Patel, Smita
2015-03-01
We have developed a computer-aided detection (CAD) system for assisting radiologists in detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images. The CAD system includes stages of pulmonary vessel segmentation, prescreening of PE candidates and false positive (FP) reduction to identify suspicious PEs. The system was trained with 59 CTPA PE cases collected retrospectively from our patient files (UM set) with IRB approval. Five feature groups containing 139 features that characterized the intensity texture, gradient, intensity homogeneity, shape, and topology of PE candidates were initially extracted. Stepwise feature selection guided by simplex optimization was used to select effective features for FP reduction. A linear discriminant analysis (LDA) classifier was formulated to differentiate true PEs from FPs. The purpose of this study is to evaluate the performance of our CAD system using an independent test set of CTPA cases. The test set consists of 50 PE cases from the PIOPED II data set collected by multiple institutions with access permission. A total of 537 PEs were manually marked by experienced thoracic radiologists as the reference standard for the test set. The detection performance was evaluated by free-response receiver operating characteristic (FROC) analysis. The FP classifier obtained a test Az value of 0.847 and the FROC analysis indicated that the CAD system achieved an overall sensitivity of 80% at 8.6 FPs/case for the PIOPED test set.
Gray, Kathy; Romboli, Joan E
2013-08-01
The purposes of this study were to describe the low-density lipoprotein cholesterol (LDL-C) control rate of patients with type-2 diabetes mellitus (DM2) treated by nurse practitioner (NP) providers, and to describe any significant differences in the population at the LDL-C goal of <100 mg/dL and patients not at goal. Demographic data were collected from a retrospective chart review of patients with (DM2) who were treated in two primary care NP practice settings in New Hampshire where physician collaboration is not required. Data regarding smoking history, lifestyle, comorbidities, and antilipid and antidiabetes medication were collected. Physiological measurements included the body mass index (BMI), hemoglobin A1c (A1c), and the LDL-C. Patients with DM2 treated by NP providers were at goal with respect to the LDL-C in 71% of the cases in this study. Statin therapy was prescribed in 60.5% of the cases. Lifestyle management was recommended 92.6% of the time. NPs prescribe appropriate medications, order and monitor laboratory values, in addition to providing education regarding lifestyle changes to patients with chronic diseases. Reported outcomes achieved by NP providers validate them as evidenced-based providers of quality care for patients with complex diseases such as DM2. ©2012 The Author(s) ©2012 American Association of Nurse Practitioners.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Japp, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise-both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Maximally Expressive Task Modeling
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Maxwell, Theresa G. (Technical Monitor)
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiment activities for the Space Station. The equipment used in these experiments is some of the most complex hardware ever developed by mankind, the information sought by these experiments is at the cutting edge of scientific endeavor, and the procedures for executing the experiments are intricate and exacting. Scheduling is made more difficult by a scarcity of space station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling space station experiment operations calls for a "maximally expressive" modeling schema. Modeling even the simplest of activities cannot be automated; no sensor can be attached to a piece of equipment that can discern how to use that piece of equipment; no camera can quantify how to operate a piece of equipment. Modeling is a human enterprise, both an art and a science. The modeling schema should allow the models to flow from the keyboard of the user as easily as works of literature flowed from the pen of Shakespeare. The Ground Systems Department at the Marshall Space Flight Center has embarked on an effort to develop a new scheduling engine that is highlighted by a maximally expressive modeling schema. This schema, presented in this paper, is a synergy of technological advances and domain-specific innovations.
Nearly maximally predictive features and their dimensions
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-05-01
Scientific explanation often requires inferring maximally predictive features from a given data set. Unfortunately, the collection of minimal maximally predictive features for most stochastic processes is uncountably infinite. In such cases, one compromises and instead seeks nearly maximally predictive features. Here, we derive upper bounds on the rates at which the number and the coding cost of nearly maximally predictive features scale with desired predictive power. The rates are determined by the fractal dimensions of a process' mixed-state distribution. These results, in turn, show how widely used finite-order Markov models can fail as predictors and that mixed-state predictive features can offer a substantial improvement.
Simulating (log^c n)-wise Independence in NC
1989-05-01
A hypergraph is d-uniform if every edge has d elements. Kleitman and Alon, Babai, and Itai [KI, ABI] define the large d-partite subhypergraph problem as follows... References: [ABI] Alon, N., L. Babai, A. Itai, "A Fast and Simple Randomized Parallel Algorithm for the Maximal Independent Set Problem", Journal
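The randomized parallel MIS algorithm cited in the snippet above has a much simpler sequential counterpart, sketched here as an illustration (the adjacency-set representation and function name are our own, not from the report):

```python
def maximal_independent_set(adj):
    """Greedy maximal independent set: scan vertices in order, keeping each
    vertex unless one of its neighbors has already been kept. This is a
    sequential stand-in for the randomized parallel algorithm of Alon,
    Babai, and Itai, not that algorithm itself."""
    chosen, blocked = set(), set()
    for v in sorted(adj):
        if v not in blocked:
            chosen.add(v)
            blocked.update(adj[v])
    return chosen

# Path graph 0-1-2-3: vertices 0 and 2 form a maximal independent set
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(maximal_independent_set(adj)))  # [0, 2]
```

The result is both independent (no two chosen vertices are adjacent) and maximal (every unchosen vertex has a chosen neighbor).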
Polarity Related Influence Maximization in Signed Social Networks
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied, motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods. PMID:25061986
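The greedy strategy the abstract invokes (monotone, submodular spread implies a 1-1/e guarantee) can be sketched for the ordinary unsigned Independent Cascade model. This is an illustrative Monte Carlo version, not the authors' IC-P implementation; the graph encoding and function names are assumptions:

```python
import random

def simulate_ic(graph, seeds, rng):
    """One Independent Cascade run; returns the number of activated nodes.
    graph maps node -> list of (neighbor, activation_probability)."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v, p in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, rng, runs=200):
    """Monte Carlo estimate of the expected spread of a seed set."""
    return sum(simulate_ic(graph, seeds, rng) for _ in range(runs)) / runs

def greedy_seeds(graph, k, runs=200, seed=0):
    """Greedy seed selection; (1 - 1/e)-approximate when the spread
    function is monotone and submodular, as the paper proves for IC-P."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for edges in graph.values() for v, _ in edges}
    chosen = []
    for _ in range(k):
        best = max((n for n in nodes if n not in chosen),
                   key=lambda n: expected_spread(graph, chosen + [n], rng, runs))
        chosen.append(best)
    return chosen

# Deterministic toy graph: 'a' activates 'b' and 'c' with probability 1
toy = {'a': [('b', 1.0), ('c', 1.0)], 'b': [], 'c': []}
print(greedy_seeds(toy, 1))  # ['a']
```

With all activation probabilities at 1.0 the simulation is deterministic, so the greedy step reliably picks the node with the largest reachable set.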
Maximization, learning, and economic behavior
Erev, Ido; Roth, Alvin E.
2014-01-01
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182
Bao, Stephen S; Kapellusch, Jay M; Garg, Arun; Silverstein, Barbara A; Harris-Adamson, Carisa; Burt, Susan E; Dale, Ann Marie; Evanoff, Bradley A; Gerr, Frederic E; Hegmann, Kurt T; Merlino, Linda A; Thiese, Matthew S; Rempel, David M
2015-02-01
Six research groups independently conducted prospective studies of carpal tunnel syndrome (CTS) incidence in 54 US workplaces in 10 US States. Physical exposure variables were collected by all research groups at the individual worker level. Data from these research groups were pooled to increase the exposure spectrum and statistical power. This paper provides a detailed description of the characteristics of the pooled physical exposure variables and the source data information from the individual research studies. Physical exposure data were inspected and prepared by each of the individual research studies according to detailed instructions provided by an exposure subcommittee of the research consortium. Descriptive analyses were performed on the pooled physical exposure data set. Correlation analyses were performed among exposure variables estimating similar exposure aspects. At baseline, there were a total of 3010 participants in the pooled physical exposure data set. Overall, the pooled data meaningfully increased the spectra of most exposure variables. The increased spectra were due to the wider range in exposure data of different jobs provided by the research studies. The correlations between variables estimating similar exposure aspects showed different patterns among data provided by the research studies. The increased spectra of the physical exposure variables among the data pooled likely improved the possibility of detecting potential associations between these physical exposure variables and CTS incidence. It is also recognised that methods need to be developed for general use by all researchers for standardisation of physical exposure variable definition, data collection, processing and reduction. Published by the BMJ Publishing Group Limited.
ERIC Educational Resources Information Center
Branzburg, Jeffrey
2004-01-01
Google is emerging as the leading Web search engine: recent research from Nielsen NetRatings reports that about 40 percent of all U.S. households used the tool at least once in January 2004. This brief article discusses how teachers and students can maximize their use of Google.
Bell Inequalities Tailored to Maximally Entangled States
NASA Astrophysics Data System (ADS)
Salavrakos, Alexia; Augusiak, Remigiusz; Tura, Jordi; Wittek, Peter; Acín, Antonio; Pironio, Stefano
2017-07-01
Bell inequalities have traditionally been used to demonstrate that quantum theory is nonlocal, in the sense that there exist correlations generated from composite quantum states that cannot be explained by means of local hidden variables. With the advent of device-independent quantum information protocols, Bell inequalities have gained an additional role as certificates of relevant quantum properties. In this work, we consider the problem of designing Bell inequalities that are tailored to detect maximally entangled states. We introduce a class of Bell inequalities valid for an arbitrary number of measurements and results, derive analytically their tight classical, nonsignaling, and quantum bounds and prove that the latter is attained by maximally entangled states. Our inequalities can therefore find an application in device-independent protocols requiring maximally entangled states.
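For the simplest member of this family (two measurements with two outcomes per party, i.e. the CHSH inequality), the quantum bound attained by a maximally entangled state can be checked numerically. The sketch below assumes the standard X-Z-plane measurement parametrization and is not the general inequality class introduced in the paper:

```python
import numpy as np

Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

def obs(theta):
    """Spin observable in the X-Z plane at angle theta (eigenvalues +/-1)."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2)
phi = np.array([1, 0, 0, 1], dtype=float) / np.sqrt(2)

def corr(a, b):
    """E(a,b) = <Phi+| A(a) tensor B(b) |Phi+>; equals cos(a - b) for |Phi+>."""
    return phi @ np.kron(obs(a), obs(b)) @ phi

# CHSH combination with the optimal angles for |Phi+>
S = (corr(0, np.pi / 4) + corr(0, -np.pi / 4)
     + corr(np.pi / 2, np.pi / 4) - corr(np.pi / 2, -np.pi / 4))
print(S)  # ≈ 2.8284, the Tsirelson bound 2*sqrt(2), above the classical bound 2
```

The classical (local hidden variable) bound is 2; the maximally entangled state violates it up to 2√2, which is the sense in which such inequalities certify maximal entanglement in the two-setting case.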
Froeschke, John T.; Stunz, Gregory W.; Sterba-Boatwright, Blair; Wildhaber, Mark L.
2010-01-01
Using a long-term fisheries-independent data set, we tested the 'shark nursery area concept' proposed by Heupel et al. (2007) with the suggested working assumptions that a shark nursery habitat would: (1) have an abundance of immature sharks greater than the mean abundance across all habitats where they occur; (2) be used by sharks repeatedly through time (years); and (3) see immature sharks remaining within the habitat for extended periods of time. We tested this concept using young-of-the-year (age 0) and juvenile (age 1+ yr) bull sharks Carcharhinus leucas from gill-net surveys conducted in Texas bays from 1976 to 2006 to estimate the potential nursery function of 9 coastal bays. Of the 9 bay systems considered as potential nursery habitat, only Matagorda Bay satisfied all 3 criteria for young-of-the-year bull sharks. Both Matagorda and San Antonio Bays met the criteria for juvenile bull sharks. Through these analyses we examined the utility of this approach for characterizing nursery areas and we also describe some practical considerations, such as the influence of the temporal or spatial scales considered when applying the nursery role concept to shark populations.
NASA Astrophysics Data System (ADS)
Wang, Y.; Penning de Vries, M.; Xie, P. H.; Beirle, S.; Dörner, S.; Remmers, J.; Li, A.; Wagner, T.
2015-12-01
Multi-axis differential optical absorption spectroscopy (MAX-DOAS) observations of trace gases can be strongly influenced by clouds and aerosols. Thus it is important to identify clouds and characterize their properties. In a recent study Wagner et al. (2014) developed a cloud classification scheme based on the MAX-DOAS measurements themselves with which different "sky conditions" (e.g., clear sky, continuous clouds, broken clouds) can be distinguished. Here we apply this scheme to long-term MAX-DOAS measurements from 2011 to 2013 in Wuxi, China (31.57° N, 120.31° E). The original algorithm has been adapted to the characteristics of the Wuxi instrument, and extended towards smaller solar zenith angles (SZA). Moreover, a method for the determination and correction of instrumental degradation is developed to avoid artificial trends of the cloud classification results. We compared the results of the MAX-DOAS cloud classification scheme to several independent measurements: aerosol optical depth from a nearby Aerosol Robotic Network (AERONET) station and from two Moderate Resolution Imaging Spectroradiometer (MODIS) instruments, visibility derived from a visibility meter and various cloud parameters from different satellite instruments (MODIS, the Ozone Monitoring Instrument (OMI) and the Global Ozone Monitoring Experiment (GOME-2)). Here it should be noted that no quantitative comparison between the MAX-DOAS results and the independent data sets is possible, because (a) not exactly the same quantities are measured, and (b) the spatial and temporal sampling is quite different. Thus our comparison is performed in a semi-quantitative way: the MAX-DOAS cloud classification results are studied as a function of the external quantities. The most important findings from these comparisons are as follows: (1) most cases characterized as clear sky with low or high aerosol load were associated with the respective aerosol optical depth (AOD) ranges obtained by AERONET and MODIS
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function will appear with higher probability. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the traveling salesman problem (TSP).
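The core idea, treating the positive objective as an unnormalized probability density so that samples concentrate near its global maximum, can be illustrated classically. The Metropolis sampler below is our own stand-in for exposition, not the report's quantum-classical hybrid dynamics:

```python
import math
import random

def sample_argmax(f, x0=0.0, steps=5000, step_size=0.5, seed=1):
    """Metropolis sampling with the positive function f as an unnormalized
    density (symmetric Gaussian proposals); returns the best point visited.
    Classical illustration of 'solutions are attracted to high-f regions'."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    for _ in range(steps):
        y = x + rng.gauss(0, step_size)
        fy = f(y)
        # Accept uphill moves always, downhill moves with probability fy/fx
        if fy >= fx or rng.random() < fy / fx:
            x, fx = y, fy
            if fx > best_f:
                best_x, best_f = x, fx
    return best_x

# Positive, integrable objective with its global maximum at x = 2
f = lambda x: math.exp(-(x - 2.0) ** 2)
print(sample_argmax(f))  # a point near the global maximum x = 2
```

Because the chain spends most of its time where f is large, the running best converges quickly to the neighborhood of the maximizer, mirroring the "larger values appear with higher probability" principle in the abstract.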
Maximizing Complementary Quantities by Projective Measurements
NASA Astrophysics Data System (ADS)
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, even though we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
NASA Astrophysics Data System (ADS)
Knop, R. A.; Aldering, G.; Amanullah, R.; Astier, P.; Blanc, G.; Burns, M. S.; Conley, A.; Deustua, S. E.; Doi, M.; Ellis, R.; Fabbro, S.; Folatelli, G.; Fruchter, A. S.; Garavini, G.; Garmond, S.; Garton, K.; Gibbons, R.; Goldhaber, G.; Goobar, A.; Groom, D. E.; Hardin, D.; Hook, I.; Howell, D. A.; Kim, A. G.; Lee, B. C.; Lidman, C.; Mendez, J.; Nobili, S.; Nugent, P. E.; Pain, R.; Panagia, N.; Pennypacker, C. R.; Perlmutter, S.; Quimby, R.; Raux, J.; Regnault, N.; Ruiz-Lapuente, P.; Sainton, G.; Schaefer, B.; Schahmaneche, K.; Smith, E.; Spadafora, A. L.; Stanishev, V.; Sullivan, M.; Walton, N. A.; Wang, L.; Wood-Vasey, W. M.; Yasuda, N.
2003-11-01
We report measurements of ΩM, ΩΛ, and w from 11 supernovae (SNe) at z=0.36-0.86 with high-quality light curves measured using WFPC2 on the Hubble Space Telescope (HST). This is an independent set of high-redshift SNe that confirms previous SN evidence for an accelerating universe. The high-quality light curves available from photometry on WFPC2 make it possible for these 11 SNe alone to provide measurements of the cosmological parameters comparable in statistical weight to the previous results. Combined with earlier Supernova Cosmology Project data, the new SNe yield a measurement of the mass density ΩM = 0.25 (+0.07/-0.06) (statistical) ± 0.04 (identified systematics), or equivalently, a cosmological constant of ΩΛ = 0.75 (+0.06/-0.07) (statistical) ± 0.04 (identified systematics), under the assumptions of a flat universe and that the dark energy equation-of-state parameter has a constant value w=-1. When the SN results are combined with independent flat-universe measurements of ΩM from cosmic microwave background and galaxy redshift distortion data, they provide a measurement of w = -1.05 (+0.15/-0.20) (statistical) ± 0.09 (identified systematic), if w is assumed to be constant in time. In addition to high-precision light-curve measurements, the new data offer greatly improved color measurements of the high-redshift SNe and hence improved host galaxy extinction estimates. These extinction measurements show no anomalous negative E(B-V) at high redshift. The precision of the measurements is such that it is possible to perform a host galaxy extinction correction directly for individual SNe without any assumptions or priors on the parent E(B-V) distribution. Our cosmological fits using full extinction corrections confirm that dark energy is required with P(ΩΛ>0)>0.99, a result consistent with previous and current SN analyses that rely on the identification of a low-extinction subset or prior assumptions concerning the intrinsic extinction distribution. Based in part on
Energy Band Calculations for Maximally Even Superlattices
NASA Astrophysics Data System (ADS)
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
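The maximally even distribution itself is easy to compute with the Clough-Douthett "J-function" from the maximally even set literature; the sketch below, with hypothetical names, places k wells among n lattice sites and reproduces the piano-keyboard pattern mentioned above (up to rotation):

```python
def maximally_even(n, k):
    """Positions of k elements among n slots, maximally evenly spaced:
    the Clough-Douthett J-function, position_i = floor(i * n / k)."""
    return [(i * n) // k for i in range(k)]

# 5 'black keys' among 12 semitones: a rotation of the piano's black-key set
print(maximally_even(12, 5))  # [0, 2, 4, 7, 9]

# 7 'white keys' among 12: the diatonic (white-key) pattern
print(maximally_even(12, 7))  # [0, 1, 3, 5, 6, 8, 10]
```

Interpreting the returned indices as well positions and the remaining sites as barriers gives the well/barrier layout of a maximally even superlattice for any (n, k).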
The Winning Edge: Maximizing Success in College.
ERIC Educational Resources Information Center
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
NASA Technical Reports Server (NTRS)
Jaap, John; Davis, Elizabeth; Richardson, Lea
2004-01-01
Planning and scheduling systems organize tasks into a timeline or schedule. Tasks are logically grouped into containers called models. Models are a collection of related tasks, along with their dependencies and requirements, that when met will produce the desired result. One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed; the information sought is at the cutting edge of scientific endeavor; and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a maximally expressive modeling schema.
Maximally natural supersymmetry.
Dimopoulos, Savas; Howe, Kiel; March-Russell, John
2014-09-12
We consider 4D weak scale theories arising from 5D supersymmetric (SUSY) theories with maximal Scherk-Schwarz breaking at a Kaluza-Klein scale of several TeV. Many of the problems of conventional SUSY are avoided. Apart from 3rd family sfermions the SUSY spectrum is heavy, with only ∼50% tuning at a gluino mass of ∼2 TeV and a stop mass of ∼650 GeV. A single Higgs doublet acquires a vacuum expectation value, so the physical Higgs boson is automatically standard-model-like. A new U(1)' interaction raises m_h to 126 GeV. For minimal tuning the associated Z', as well as the 3rd family sfermions, must be accessible to LHC13. A gravitational wave signal consistent with hints from BICEP2 is possible if inflation occurs when the extra dimensions are small.
Maximally spaced projection sequencing in electron paramagnetic resonance imaging
Redler, Gage; Epel, Boris; Halpern, Howard J.
2015-01-01
Electron paramagnetic resonance imaging (EPRI) provides 3D images of absolute oxygen concentration (pO2) in vivo with excellent spatial and pO2 resolution. When investigating such physiologic parameters in living animals, the situation is inherently dynamic. Improvements in temporal resolution and experimental versatility are necessary to properly study such a system. Uniformly distributed projections result in efficient use of data for image reconstruction. This has dictated current methods such as equal-solid-angle (ESA) spacing of projections. However, acquisition sequencing must still be optimized to achieve uniformity throughout imaging. An object-independent method for uniform acquisition of projections, using the ESA uniform distribution for the final set of projections, is presented. Each successive projection maximizes the distance in the gradient space between itself and prior projections. This maximally spaced projection sequencing (MSPS) method improves image quality for intermediate images reconstructed from incomplete projection sets, enabling useful real-time reconstruction. This method also provides improved experimental versatility, reduced artifacts, and the ability to adjust temporal resolution post factum to best fit the data and its application. The MSPS method in EPRI provides the improvements necessary to more appropriately study a dynamic system. PMID:26185490
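The sequencing rule described, where each new projection maximizes its distance to all previously acquired ones, is a greedy farthest-point ordering. Below is a hedged sketch in plain Euclidean coordinates; the paper's gradient-space metric and equal-solid-angle projection set are not reproduced, and the names are our own:

```python
import math

def maximally_spaced_order(points, start=0):
    """Greedy farthest-point ordering: each next index maximizes its minimum
    Euclidean distance to all points already taken. Illustrative stand-in
    for MSPS-style acquisition sequencing over a fixed projection set."""
    order = [start]
    remaining = set(range(len(points))) - {start}
    while remaining:
        nxt = max(remaining,
                  key=lambda i: min(math.dist(points[i], points[j])
                                    for j in order))
        order.append(nxt)
        remaining.remove(nxt)
    return order

# Five points on a line: acquisition jumps to the far end, then the middle
pts = [(0.0,), (0.25,), (0.5,), (0.75,), (1.0,)]
print(maximally_spaced_order(pts))  # starts [0, 4, 2, ...]
```

Any prefix of the resulting order is itself roughly uniform, which is exactly the property that lets intermediate images be reconstructed usefully from incomplete projection sets.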
ERIC Educational Resources Information Center
National Council on Disability, Washington, DC.
The National Council on Disability (NCD) held a National Summit on Disability Policy on April 27-29, 1996 at which 300 grassroots disability leaders gathered to discuss how to achieve independence in the next decade. Following an analysis of disability demographics and disability rights and culture, disability policy is assessed in 11 areas:…
Maximizing relationship possibilities: relational maximization in romantic relationships.
Mikkelson, Alan C; Pauley, Perry M
2013-01-01
Using Rusbult's (1980) investment model and Schwartz's (2000) conceptualization of decision maximization, we sought to understand how an individual's propensity to maximize his or her decisions factored into investment, satisfaction, and awareness of alternatives in romantic relationships. In study one, 275 participants currently involved in romantic relationships completed measures of maximization, satisfaction, investment size, quality of alternatives, and commitment. In study two, 343 participants were surveyed as part of the creation of a scale of relational maximization. Results from both studies revealed that the tendency to maximize (in general and in relationships specifically) was negatively correlated with satisfaction, investment, and commitment, and positively correlated with quality of alternatives. Furthermore, we found that satisfaction and investments mediated the relationship between maximization and relationship commitment.
GOLDENBERG, Shira M.; CHETTIAR, Jill; SIMO, Annick; SILVERMAN, Jay G.; STRATHDEE, Steffanie A.; MONTANER, Julio; SHANNON, Kate
2014-01-01
Objectives To explore factors associated with early sex work initiation, and model the independent effect of early initiation on HIV infection and prostitution arrests among adult sex workers (SWs). Design Baseline data (2010–2011) were drawn from a cohort of SWs who exchanged sex for money within the last month and were recruited through time-location sampling in Vancouver, Canada. Analyses were restricted to adults ≥18 years old. Methods SWs completed a questionnaire and HIV/STI testing. Using multivariate logistic regression, we identified associations with early sex work initiation (<18 years old) and constructed confounder models examining the independent effect of early initiation on HIV and prostitution arrests among adult SWs. Results Of 508 SWs, 193 (38.0%) reported early sex work initiation, with 78.53% primarily street-involved SWs and 21.46% off-street SWs. HIV prevalence was 11.22%, which was 19.69% among early initiates. Early initiates were more likely to be Canadian-born (Adjusted Odds Ratio (AOR): 6.8, 95% Confidence Interval (CI): 2.42–19.02), inject drugs (AOR: 1.6, 95%CI: 1.0–2.5), and to have worked for a manager (AOR: 2.22, 95%CI: 1.3–3.6) or been coerced into sex work (AOR: 2.3, 95%CI: 1.14–4.44). Early initiation retained an independent effect on increased risk of HIV infection (AOR: 2.5, 95% CI: 1.3–3.2) and prostitution arrests (AOR: 2.0, 95%CI: 1.3–3.2). Conclusions Adolescent sex work initiation is concentrated among marginalized, drug and street-involved SWs. Early initiation holds an independent increased effect on HIV infection and criminalization of adult SWs. Findings suggest the need for evidence-based approaches to reduce harm among adult and youth SWs. PMID:23982660
COPD: maximization of bronchodilation.
Nardini, Stefano; Camiciottoli, Gianna; Locicero, Salvatore; Maselli, Rosario; Pasqua, Franco; Passalacqua, Giovanni; Pela, Riccardo; Pesci, Alberto; Sebastiani, Alfredo; Vatrella, Alessandro
2014-01-01
The most recent guidelines define COPD in a multidimensional way, nevertheless the diagnosis is still linked to the limitation of airflow, usually measured by the reduction in the FEV1/FVC ratio below 70%. However, the severity of obstruction is not directly correlated to symptoms or to invalidity determined by COPD. Thus, besides respiratory function, COPD should be evaluated based on symptoms, frequency and severity of exacerbations, patient's functional status and health related quality of life (HRQoL). Therapy is mainly aimed at increasing exercise tolerance and reducing dyspnea, with improvement of daily activities and HRQoL. This can be accomplished by a drug-induced reduction of pulmonary hyperinflation and exacerbations frequency and severity. All guidelines recommend bronchodilators as baseline therapy for all stages of COPD, and long-acting inhaled bronchodilators, both beta-2 agonist (LABA) and antimuscarinic (LAMA) drugs, are the most effective in regular treatment in the clinically stable phase. The effectiveness of bronchodilators should be evaluated in terms of functional (relief of bronchial obstruction and pulmonary hyperinflation), symptomatic (exercise tolerance and HRQoL), and clinical improvement (reduction in number or severity of exacerbations), while the absence of a spirometric response is not a reason for interrupting treatment, if there is subjective improvement in symptoms. Because LABA and LAMA act via different mechanisms of action, when administered in combination they can exert additional effects, thus optimizing (i.e. maximizing) sustained bronchodilation in COPD patients with severe airflow limitation, who cannot benefit (or can get only partial benefit) by therapy with a single bronchodilator. Recently, a fixed combination of ultra LABA/LAMA (indacaterol/glycopyrronium) has shown that it is possible to get a stable and persistent bronchodilation, which can help in avoiding undesirable fluctuations of bronchial calibre.
Maximal cuts in arbitrary dimension
NASA Astrophysics Data System (ADS)
Bosma, Jorrit; Sogaard, Mads; Zhang, Yang
2017-08-01
We develop a systematic procedure for computing maximal unitarity cuts of multiloop Feynman integrals in arbitrary dimension. Our approach is based on the Baikov representation, in which the structure of the cuts is particularly simple. We examine several planar and nonplanar integral topologies and demonstrate that the maximal cut inherits the integration-by-parts (IBP) identities and dimension-shift identities satisfied by the uncut integral. Furthermore, for the examples we calculated, we find that the maximal cut functions from the different allowed regions form the Wronskian matrix of the differential equations on the maximal cut.
Maximal switchability of centralized networks
NASA Astrophysics Data System (ADS)
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number Ns of weakly connected satellites, a property that we call n/Ns-centrality. If the hub dynamics is slow, the long-time network dynamics is completely determined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Hamiltonian formalism and path entropy maximization
NASA Astrophysics Data System (ADS)
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
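For reference, the Langevin/Fokker-Planck pair invoked above takes the standard one-dimensional form (a textbook statement under the usual assumptions, not the paper's specific derivation):

```latex
% Langevin dynamics with drift A(x,t) and constant diffusion D:
dx = A(x,t)\,dt + \sqrt{2D}\,dW_t ,
% and the associated Fokker--Planck equation for the density P(x,t):
\frac{\partial P}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[A(x,t)\,P\bigr]
  + D\,\frac{\partial^{2} P}{\partial x^{2}} .
```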
Silva-Santiago, Evangelina; Rivera-Mulia, Juan Carlos; Aranda-Anzaldo, Armando
2017-08-01
In metazoans, nuclear DNA is organized during interphase in negatively supercoiled loops anchored to a compartment or substructure known as the nuclear matrix (NM). The interactions between DNA and the NM are of higher affinity than those between DNA and chromatin proteins, since the latter do not withstand the procedures for extracting the NM. The structural DNA-NM interactions constitute a set of topological relationships that define a nuclear higher order structure (NHOS), although there are further higher-order levels of organization within the nucleus. So far, the evidence derived from studies with primary hepatocytes and naïve B lymphocytes indicates that the NHOS is cell-type specific at both the local and the large-scale level, and so it has been suggested that the NHOS is primarily determined by structural and thermodynamic constraints. We carried out a comparative characterization of the NHOS of postmitotic cortical neurons against that of hepatocytes and naïve B lymphocytes. Our results indicate that the NHOS of neurons is completely different, at both the large scale and the local level, from that observed in hepatocytes or naïve B lymphocytes, confirming that the set of structural DNA-NM interactions is cell-type specific and supporting the notion that the structural constraints that impinge on chromosomal DNA and the NM are more important for determining the NHOS than functional constraints related to replication and/or transcription. J. Cell. Biochem. 118: 2151-2160, 2017. © 2016 Wiley Periodicals, Inc.
2010-01-01
Background The genus Neisseria contains two important yet very different pathogens, N. meningitidis and N. gonorrhoeae, in addition to non-pathogenic species, of which N. lactamica is the best characterized. Genomic comparisons of these three bacteria will provide insights into the mechanisms and evolution of pathogenesis in this group of organisms, which are applicable to understanding these processes more generally. Results Non-pathogenic N. lactamica exhibits very similar population structure and levels of diversity to the meningococcus, whilst gonococci are essentially recent descendants of a single clone. All three species share a common core gene set estimated to comprise around 1190 CDSs, corresponding to about 60% of the genome. However, some of the nucleotide sequence diversity within this core genome is particular to each group, indicating that cross-species recombination is rare in this shared core gene set. Other than the meningococcal cps region, which encodes the polysaccharide capsule, relatively few members of the large accessory gene pool are exclusive to one species group, and cross-species recombination within this accessory genome is frequent. Conclusion The three Neisseria species groups represent coherent biological and genetic groupings which appear to be maintained by low rates of inter-species horizontal genetic exchange within the core genome. There is extensive evidence for exchange among positively selected genes and the accessory genome and some evidence of hitch-hiking of housekeeping genes with other loci. It is not possible to define a 'pathogenome' for this group of organisms and the disease-causing phenotypes are therefore likely to be complex, polygenic, and different among the various disease-associated phenotypes observed. PMID:21092259
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2009-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
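The kind of power calculation described can be sketched with a simplified Monte Carlo (i.i.d. lognormal site noise only, ignoring the hierarchical spatial/temporal variance components the paper estimates; all function and parameter names are illustrative):

```python
import math
import random

def trend_power(annual_decline, n_years, n_sites,
                cv=0.5, n_sims=400, alpha=0.05, seed=1):
    """Monte Carlo power to detect a log-linear decline in mean CPE.
    Simplified sketch: log-CPE = t * log(1 - decline) + Normal(0, cv)
    per site-year; significance tested with a normal approximation to
    the OLS slope t-test at level alpha (two-sided)."""
    rng = random.Random(seed)
    z_crit = 1.959964  # normal approximation to the critical value
    slope_true = math.log(1.0 - annual_decline)
    years = [t for t in range(n_years) for _ in range(n_sites)]
    t_bar = sum(years) / len(years)
    sxx = sum((t - t_bar) ** 2 for t in years)
    rejections = 0
    for _ in range(n_sims):
        y = [slope_true * t + rng.gauss(0.0, cv) for t in years]
        y_bar = sum(y) / len(y)
        slope = sum((t - t_bar) * (yi - y_bar)
                    for t, yi in zip(years, y)) / sxx
        intercept = y_bar - slope * t_bar
        sse = sum((yi - intercept - slope * t) ** 2
                  for t, yi in zip(years, y))
        se = math.sqrt(sse / (len(y) - 2) / sxx)
        if abs(slope / se) > z_crit:
            rejections += 1
    return rejections / n_sims

# A strong decline is detected almost surely; with no decline the
# rejection rate stays near the nominal alpha.
p_strong = trend_power(0.20, 10, 50)
p_null = trend_power(0.0, 10, 50)
```

Varying `n_years` and `n_sites` in such a sketch reproduces the qualitative point of the abstract: short series and small declines give low power.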
Jacob, Christian P; Nguyen, Thuy Trang; Dempfle, Astrid; Heine, Monika; Windemuth-Kieselbach, Christine; Baumann, Katarina; Jacob, Florian; Prechtl, Julian; Wittlich, Maike; Herrmann, Martin J; Gross-Lesch, Silke; Lesch, Klaus-Peter; Reif, Andreas
2010-06-01
While an interactive effect of genes with adverse life events is increasingly appreciated in current concepts of depression etiology, no data are presently available on interactions between genetic and environmental (G x E) factors with respect to personality and related disorders. The present study therefore aimed to detect main effects as well as interactions of serotonergic candidate genes (coding for the serotonin transporter, 5-HTT; the serotonin autoreceptor, HTR1A; and the enzyme which synthesizes serotonin in the brain, TPH2) with the burden of life events (#LE) in two independent samples consisting of 183 patients suffering from personality disorders and 123 patients suffering from adult attention deficit/hyperactivity disorder (aADHD). Simple analyses ignoring possible G x E interactions revealed no evidence for associations of either #LE or of the considered polymorphisms in 5-HTT and TPH2. Only the G allele of HTR1A rs6295 seemed to increase the risk of emotional-dramatic cluster B personality disorders (p = 0.019, in the personality disorder sample) and to decrease the risk of anxious-fearful cluster C personality disorders (p = 0.016, in the aADHD sample). We extended the initial simple model by taking a G x E interaction term into account, since this approach may better fit the data indicating that the effect of a gene is modified by stressful life events or, vice versa, that stressful life events only have an effect in the presence of a susceptibility genotype. By doing so, we observed nominal evidence for G x E effects as well as main effects of 5-HTT-LPR and the TPH2 SNP rs4570625 on the occurrence of personality disorders. Further replication studies, however, are necessary to validate the apparent complexity of G x E interactions in disorders of human personality.
Power Converters Maximize Outputs Of Solar Cell Strings
NASA Technical Reports Server (NTRS)
Frederick, Martin E.; Jermakian, Joel B.
1993-01-01
Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. Main points of invention are: single controller used to control and optimize any number of "dumb" tracker units and strings independently; power maximized out of converters; and controller in system is microprocessor.
Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph
2016-02-26
Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
Hohman, Timothy J; Bush, William S; Jiang, Lan; Brown-Gentry, Kristin D; Torstenson, Eric S; Dudek, Scott M; Mukherjee, Shubhabrata; Naj, Adam; Kunkle, Brian W; Ritchie, Marylyn D; Martin, Eden R; Schellenberg, Gerard D; Mayeux, Richard; Farrer, Lindsay A; Pericak-Vance, Margaret A; Haines, Jonathan L; Thornton-Wells, Tricia A
2016-02-01
Late-onset Alzheimer disease (AD) has a complex genetic etiology, involving locus heterogeneity, polygenic inheritance, and gene-gene interactions; however, the investigation of interactions in recent genome-wide association studies has been limited. We used a biological knowledge-driven approach to evaluate gene-gene interactions for consistency across 13 data sets from the Alzheimer Disease Genetics Consortium. Fifteen single nucleotide polymorphism (SNP)-SNP pairs within 3 gene-gene combinations were identified: SIRT1 × ABCB1, PSAP × PEBP4, and GRIN2B × ADRA1A. In addition, we extend a previously identified interaction from an endophenotype analysis between RYR3 × CACNA1C. Finally, post hoc gene expression analyses of the implicated SNPs further implicate SIRT1 and ABCB1, and implicate CDH23 which was most recently identified as an AD risk locus in an epigenetic analysis of AD. The observed interactions in this article highlight ways in which genotypic variation related to disease may depend on the genetic context in which it occurs. Further, our results highlight the utility of evaluating genetic interactions to explain additional variance in AD risk and identify novel molecular mechanisms of AD pathogenesis.
The Negative Consequences of Maximizing in Friendship Selection.
Newman, David B; Schug, Joanna; Yuki, Masaki; Yamada, Junko; Nezlek, John B
2017-02-27
Previous studies have shown that the maximizing orientation, reflecting a motivation to select the best option among a given set of choices, is associated with various negative psychological outcomes. In the present studies, we examined whether these relationships extend to friendship selection and how the number of options for friends moderated these effects. Across 5 studies, maximizing in selecting friends was negatively related to life satisfaction, positive affect, and self-esteem, and was positively related to negative affect and regret. In Study 1, a maximizing in selecting friends scale was created, and regret mediated the relationships between maximizing and well-being. In a naturalistic setting in Studies 2a and 2b, the tendency to maximize among those who participated in the fraternity and sorority recruitment process was negatively related to satisfaction with their selection, and positively related to regret and negative affect. In Study 3, daily levels of maximizing were negatively related to daily well-being, and these relationships were mediated by daily regret. In Study 4, we extended the findings to samples from the U.S. and Japan. When participants who tended to maximize were faced with many choices, operationalized as the daily number of friends met (Study 3) and relational mobility (Study 4), the opportunities to regret a decision increased and further diminished well-being. These findings imply that, paradoxically, attempting to maximize when selecting potential friends is detrimental to one's well-being.
Modularity maximization using completely positive programming
NASA Astrophysics Data System (ADS)
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems in social network analysis. In this paper, a novel method of Modularity Maximization (MM) for community detection is presented, which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum-modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank-minimization approach is proposed. The performance of the proposed method is evaluated on several real-world data sets commonly used as community-detection benchmarks. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
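Newman's modularity, the objective being maximized, is Q = (1/2m) Σij [Aij − ki kj/(2m)] δ(ci, cj). A minimal pure-Python sketch of the objective itself (not of the ADAL/CPP solver; names are illustrative):

```python
def modularity(adj, communities):
    """Newman's modularity Q for an undirected graph given as a dense
    adjacency matrix and a partition into vertex sets."""
    degree = [sum(row) for row in adj]
    two_m = sum(degree)  # twice the number of edges
    label = {v: c for c, members in enumerate(communities) for v in members}
    q = 0.0
    for i in range(len(adj)):
        for j in range(len(adj)):
            if label[i] == label[j]:
                q += adj[i][j] - degree[i] * degree[j] / two_m
    return q / two_m

# Two triangles joined by a single bridge edge.
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
q_split = modularity(adj, [{0, 1, 2}, {3, 4, 5}])  # natural split
q_lump = modularity(adj, [{0, 1, 2, 3, 4, 5}])     # trivial partition -> 0
```

The trivial one-community partition always scores Q = 0, which is why maximizing Q rewards partitions denser than chance within communities.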
Maximizing the usefulness of hypnosis in forensic investigative settings.
Hibler, Neil S; Scheflin, Alan W
2012-07-01
This is an article written for mental health professionals interested in using investigative hypnosis with law enforcement agencies in the effort to enhance the memory of witnesses and victims. Discussion focuses on how to work with law enforcement agencies so as to control for factors that can interfere with recall. Specifics include what police need to know about how to conduct case review, to prepare interviewees, to conduct interviews, and what to do with the results. Case examples are used to illustrate applications of this guidance in actual investigations.
Are Independent Probes Truly Independent?
ERIC Educational Resources Information Center
Camp, Gino; Pecher, Diane; Schmidt, Henk G.; Zeelenberg, Rene
2009-01-01
The independent cue technique has been developed to test traditional interference theories against inhibition theories of forgetting. In the present study, the authors tested the critical criterion for the independence of independent cues: Studied cues not presented during test (and unrelated to test cues) should not contribute to the retrieval…
Takahashi, Yoh Hei; Lee, Jung Shin; Swanson, Selene K; Saraf, Anita; Florens, Laurence; Washburn, Michael P; Trievel, Raymond C; Shilatifard, Ali
2009-07-01
The multiprotein complex Set1/COMPASS is the founding member of the histone H3 lysine 4 (H3K4) methyltransferases, whose human homologs include the MLL and hSet1 complexes. COMPASS can mono-, di-, and trimethylate H3K4, but transitioning to di- and trimethylation requires prior H2B monoubiquitination followed by recruitment of the Cps35 (Swd2) subunit of COMPASS. Another subunit, Cps40 (Spp1), interacts directly with Set1 and is only required for transitioning to trimethylation. To investigate how Set1 and the COMPASS subunits establish the methylation states of H3K4, we generated a homology model of the catalytic domain of Saccharomyces cerevisiae Set1 and identified several key residues within the Set1 catalytic pocket that are capable of regulating COMPASS's activity. We show that Tyr1052, a putative Phe/Tyr switch of Set1, plays an essential role in the regulation of H3K4 trimethylation by COMPASS and that its mutation to phenylalanine (Y1052F) suppresses the effect of Cps40 loss on H3K4 trimethylation levels, suggesting that Tyr1052 functions together with Cps40. However, the loss of H2B monoubiquitination is not suppressed by this mutation, while Cps40 is stably assembled in COMPASS on chromatin, demonstrating that Tyr1052- and Cps40-mediated H3K4 trimethylation takes place following, and independently of, H2B monoubiquitination. Our studies provide a molecular basis for the way in which H3K4 trimethylation is regulated by Tyr1052 and the Cps40 subunit of COMPASS.
Limitations to maximal oxygen uptake.
Sutton, J R
1992-02-01
An increase in exercise capacity depends on the magnitude of the increase in maximum aerobic capacity. Central and peripheral factors may limit oxygen uptake. Central oxygen delivery depends on cardiac output and maximal arterial oxygen content. Peripheral extraction of the delivered oxygen is expressed as the arteriovenous oxygen difference (a-vO2 difference). With increasing intensities of exercise, the respiratory system may become limiting in some trained individuals. Most studies have shown a higher stroke volume in maximal as well as submaximal exercise in trained vs untrained individuals. A variety of peripheral factors determine vascular tone. Maximal oxygen uptake depends on all components of the oxygen-transporting system, but stroke volume appears to be the prime determinant in the trained subject. At maximum exercise the capacity of the muscle capillary network is never reached.
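The central delivery and peripheral extraction described above combine in the Fick principle, VO2 = Q × (CaO2 − CvO2). A minimal sketch with illustrative (not measured) values:

```python
def fick_vo2(cardiac_output_l_min, cao2_ml_per_l, cvo2_ml_per_l):
    """Fick principle: oxygen uptake (mL/min) equals cardiac output
    times the arteriovenous oxygen content difference."""
    return cardiac_output_l_min * (cao2_ml_per_l - cvo2_ml_per_l)

# Illustrative elite-athlete values at maximal exercise: Q = 25 L/min,
# CaO2 = 200 mL O2/L, CvO2 = 40 mL O2/L.
vo2_max = fick_vo2(25.0, 200.0, 40.0)  # mL O2/min
```

The relation makes the abstract's point explicit: raising stroke volume (and hence Q) or widening extraction each raises VO2max.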
Approximation Algorithms for Free-Label Maximization
NASA Astrophysics Data System (ADS)
de Berg, Mark; Gerrits, Dirk H. P.
Inspired by air traffic control and other applications where moving objects have to be labeled, we consider the following (static) point labeling problem: given a set P of n points in the plane and labels that are unit squares, place a label with each point in P in such a way that the number of free labels (labels not intersecting any other label) is maximized. We develop efficient constant-factor approximation algorithms for this problem, as well as PTASs, for various label-placement models.
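The objective can be made concrete with a brute-force sketch in the common 4-position model, where each unit-square label touches its point at one of its four corners (exponential enumeration for tiny instances only; the paper's approximation algorithms are far more efficient, and its placement model may differ):

```python
from itertools import product

def max_free_labels(points):
    """Maximum number of free labels (labels overlapping no other
    label) over all placements, 4-position model, unit-square labels."""
    # Offsets of the square's lower-left corner relative to its point.
    offsets = [(0.0, 0.0), (-1.0, 0.0), (0.0, -1.0), (-1.0, -1.0)]
    best = 0
    for choice in product(offsets, repeat=len(points)):
        corners = [(px + ox, py + oy)
                   for (px, py), (ox, oy) in zip(points, choice)]
        # Two open unit squares overlap iff both coordinate gaps are < 1.
        free = sum(
            all(abs(xi - xj) >= 1 or abs(yi - yj) >= 1
                for j, (xj, yj) in enumerate(corners) if j != i)
            for i, (xi, yi) in enumerate(corners)
        )
        best = max(best, free)
    return best

# Two nearby points can still both be free by choosing opposite corners.
n_free = max_free_labels([(0, 0), (0.5, 0), (3, 0)])
```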
On maximal parabolic regularity for non-autonomous parabolic operators
NASA Astrophysics Data System (ADS)
Disser, Karoline; ter Elst, A. F. M.; Rehberg, Joachim
2017-02-01
We consider linear inhomogeneous non-autonomous parabolic problems associated to sesquilinear forms, with discontinuous dependence on time. We show that for these problems the property of maximal parabolic regularity can be extrapolated to time integrability exponents r ≠ 2. This allows us to prove maximal parabolic Lr-regularity for discontinuous non-autonomous second-order divergence-form operators in very general geometric settings and to prove existence results for related quasilinear equations.
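For orientation, maximal parabolic L^r-regularity for the non-autonomous Cauchy problem can be stated as follows (the standard definition, sketched under the usual Banach-space assumptions; the paper's precise setting may differ):

```latex
% Non-autonomous Cauchy problem on J = (0,T) in a Banach space X:
u'(t) + A(t)\,u(t) = f(t), \qquad u(0) = 0 .
% Maximal parabolic L^r-regularity: for every f \in L^r(J;X) there is a
% unique solution u such that both terms lie in L^r(J;X) separately:
u \in W^{1,r}(J;X), \qquad A(\cdot)\,u(\cdot) \in L^r(J;X).
```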
Chamaebatiaria millefolium (Torr.) Maxim.: fernbush
Nancy L. Shaw; Emerenciana G. Hurd
2008-01-01
Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...
Maximizing Pharmacy's Contribution to Society.
ERIC Educational Resources Information Center
Marston, Robert Q.
1978-01-01
It is argued that the role of colleges in the effort to maximize pharmacy's contribution to society requires an emphasis on research in the pharmaceutical sciences, in the clinical use of drugs, and in the socioeconomic aspects of drug therapy. This will produce more qualified pharmacists and greater credibility for the profession. (JMD)
Maximizing Human Learning and Performance.
ERIC Educational Resources Information Center
Fletcher, Jerry L.
1978-01-01
Stating that national educational policy increasingly involves the minimum competencies mentality, the author discusses his proposal to investigate the outer limits of human educability, addressing five steps toward creating educational programs to maximize human educability: master patterns, personal patterns, stages of development, educational…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-20
... systems advocacy--to maximize the leadership, empowerment, independence and productivity of individuals..., empowerment, independence and productivity of individuals with significant disabilities and to promote...
NASA Astrophysics Data System (ADS)
Fraser, Gordon
2009-01-01
In his kind review of my biography of the Nobel laureate Abdus Salam (December 2008 pp45-46), John W Moffat wrongly claims that Salam had "independently thought of the idea of parity violation in weak interactions".
Computing Maximally Supersymmetric Scattering Amplitudes
NASA Astrophysics Data System (ADS)
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang-Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
Optimizing Population Variability to Maximize Benefit.
Izu, Leighton T; Bányász, Tamás; Chen-Izu, Ye
2015-01-01
Variability is inherent in any population, regardless of whether the population comprises humans, plants, biological cells, or manufactured parts. Is the variability beneficial, detrimental, or inconsequential? This question is of fundamental importance in manufacturing, agriculture, and bioengineering. This question has no simple categorical answer because research shows that variability in a population can have both beneficial and detrimental effects. Here we ask whether there is a certain level of variability that can maximize benefit to the population as a whole. We answer this question by using a model composed of a population of individuals who independently make binary decisions; individuals vary in making a yes or no decision, and the aggregated effect of these decisions on the population is quantified by a benefit function (e.g. accuracy of the measurement using binary rulers, aggregate income of a town of farmers). Here we show that an optimal variance exists for maximizing the population benefit function; this optimal variance quantifies what is often called the "right mix" of individuals in a population.
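The binary-decision model can be illustrated with a toy simulation (our own sketch, not the authors' exact benefit function; all names and parameter values are illustrative): each individual answers yes or no by comparing a stimulus to a private threshold, the population decodes the stimulus from the yes-fraction, and an intermediate threshold spread yields the smallest estimation error.

```python
import random
from statistics import NormalDist

def estimation_error(sigma, n_agents=200, n_trials=300, seed=0):
    """Mean squared error when a population of binary deciders is used
    to estimate a stimulus s in [0, 1].  Each agent answers yes iff s
    exceeds its private threshold t_i ~ Normal(0.5, sigma); the decoder
    inverts the expected yes-fraction Phi((s - 0.5) / sigma)."""
    rng = random.Random(seed)
    std_normal = NormalDist()
    sse = 0.0
    for _ in range(n_trials):
        s = rng.random()
        yes = sum(s > rng.gauss(0.5, sigma) for _ in range(n_agents))
        # Clip the observed fraction away from 0 and 1 so inv_cdf is finite.
        frac = min(max(yes / n_agents, 0.5 / n_agents), 1 - 0.5 / n_agents)
        s_hat = 0.5 + sigma * std_normal.inv_cdf(frac)
        sse += (s - s_hat) ** 2
    return sse / n_trials

# Too little diversity (everyone identical) or too much (answers nearly
# random) both hurt; an intermediate spread of thresholds works best.
errors = {sigma: estimation_error(sigma) for sigma in (0.01, 0.3, 5.0)}
```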
Left atrial strain after maximal exercise in competitive waterpolo players.
Santoro, Amato; Alvino, Federico; Antonelli, Giovanni; Molle, Roberta; Mondillo, Sergio
2016-03-01
Left atrial (LA) function is a determinant of left ventricular (LV) filling. It carries out three main functions: reservoir, conduit, and contractile. The aim of this study was to evaluate the role of the LA and its deformation properties in LV filling at rest (R) and immediately after maximal exercise (ME) using speckle tracking echocardiography. The enrolled population comprised 23 water polo athletes who performed a ME of six repeats of 100 m freestyle swim sets. At ME, peak atrial longitudinal strain was reduced, but all strain rate (SR) parameters increased: the positive SR peak at the reservoir phase, the negative SR peak at rapid ventricular filling (SRep), and the negative SR peak at late ventricular filling (SRlp), which corresponds to the atrial contraction phase. We observed a parallel increase in the pulsed Doppler E and A waves and in SRep and SRlp; in particular, at ME the A wave and SRlp increased more than the E wave and SRep, respectively. SRlp was related to ejection fraction (EF) (r = -0.47; p < 0.01). In multivariate analysis, SRlp was an independent predictor of EF (β: -0.47; p = 0.016). The increased sympathetic tone results in increased late diastolic LV filling, with augmented atrial contractility and a decrease in diastolic filling time. During exercise, LV filling was probably optimized by an enhanced and rapid LA conduit phase and by vigorous atrial contraction during late LV filling.
Sensitivity to conversational maxims in deaf and hearing children.
Surian, Luca; Tedoldi, Mariantonia; Siegal, Michael
2010-09-01
We investigated whether access to a sign language affects the development of pragmatic competence in three groups of deaf children aged 6 to 11 years: native signers from deaf families receiving bimodal/bilingual instruction, native signers from deaf families receiving oralist instruction and late signers from hearing families receiving oralist instruction. The performance of these children was compared to a group of hearing children aged 6 to 7 years on a test designed to assess sensitivity to violations of conversational maxims. Native signers with bimodal/bilingual instruction were as able as the hearing children to detect violations that concern truthfulness (Maxim of Quality) and relevance (Maxim of Relation). On items involving these maxims, they outperformed both the late signers and native signers attending oralist schools. These results dovetail with previous findings on mindreading in deaf children and underscore the role of early conversational experience and instructional setting in the development of pragmatics.
ERIC Educational Resources Information Center
Nathanson, Jeanne H., Ed.
1994-01-01
This issue of "OSERS" addresses the subject of independent living of individuals with disabilities. The issue includes a message from Judith E. Heumann, the Assistant Secretary of the Office of Special Education and Rehabilitative Services (OSERS), and 10 papers. Papers have the following titles and authors: "Changes in the…
NASA Astrophysics Data System (ADS)
Reif, Maria M.; Hünenberger, Philippe H.
2011-04-01
The raw single-ion solvation free energies computed from atomistic (explicit-solvent) simulations are extremely sensitive to the boundary conditions and to the treatment of electrostatic interactions used during these simulations. However, as shown recently [M. A. Kastenholz and P. H. Hünenberger, J. Chem. Phys. 124, 224501 (2006); M. M. Reif and P. H. Hünenberger, J. Chem. Phys. 134, 144103 (2010)], the application of appropriate correction terms permits obtaining methodology-independent results. The corrected values are then exclusively characteristic of the underlying molecular model, including in particular the ion-solvent van der Waals interaction parameters, which determine the effective ion size and the magnitude of its dispersion interactions. In the present study, the comparison of calculated (corrected) hydration free energies with experimental data (along with the consideration of ionic polarizabilities) is used to calibrate new sets of ion-solvent van der Waals (Lennard-Jones) interaction parameters for the alkali (Li+, Na+, K+, Rb+, Cs+) and halide (F-, Cl-, Br-, I-) ions along with either the SPC or the SPC/E water model. The experimental dataset is defined by conventional single-ion hydration free energies [Tissandier et al., J. Phys. Chem. A 102, 7787 (1998), 10.1021/jp982638r; Fawcett, J. Phys. Chem. B 103, 11181] along with three plausible choices for the (experimentally elusive) value of the absolute (intrinsic) hydration free energy of the proton, namely, ΔG°hyd[H+] = -1100, -1075, or -1050 kJ mol^-1, resulting in three sets L, M, and H for the SPC water model and three sets LE, ME, and HE for the SPC/E water model (alternative sets can easily be interpolated to intermediate ΔG°hyd[H+] values). The residual sensitivity of the calculated (corrected) hydration free energies on the volume-pressure boundary conditions and on the effective ionic radius entering into the calculation of the correction terms is
Multivariate residues and maximal unitarity
NASA Astrophysics Data System (ADS)
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Maximizing algebraic connectivity in air transportation networks
NASA Astrophysics Data System (ADS)
Wei, Peng
In air transportation networks, the robustness of a network with respect to node and link failures is a key design factor. An experiment based on a real air transportation network is performed to show that algebraic connectivity is a good measure of network robustness. Three optimization problems of algebraic connectivity maximization are then formulated in order to find the most robust network design under different constraints. The algebraic connectivity maximization problem with flight route addition or deletion is formulated first. Three methods to optimize and analyze the network's algebraic connectivity are proposed. The Modified Greedy Perturbation algorithm (MGP) provides a sub-optimal solution in a fast iterative manner. The Weighted Tabu Search (WTS) is designed to offer a near-optimal solution at the cost of longer running time. Relaxed semi-definite programming (SDP) is used to set a performance upper bound, and three rounding techniques are discussed to find feasible solutions. The simulation results present the trade-offs among the three methods. Case studies on the air transportation networks of Virgin America and Southwest Airlines show that the developed methods can be applied to large-scale real-world networks. The algebraic connectivity maximization problem is then extended by adding a leg-number constraint, which accounts for travelers' tolerance for the total number of connecting stops. Binary semi-definite programming (BSDP) with a cutting-plane method provides the optimal solution. The tabu search and 2-opt search heuristics find the optimal solution in small-scale networks and near-optimal solutions in large-scale networks. The third algebraic connectivity maximization problem, with an operating cost constraint, is also formulated. When the total operating cost budget is given, the number of edges to be added is not fixed. Each edge weight needs to be calculated instead of being pre-determined. It is illustrated that the edge addition and the
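The quantity being maximized here, the algebraic connectivity, is the second-smallest eigenvalue of the graph Laplacian, and a single greedy edge-addition step in the spirit of the MGP heuristic can be sketched in a few lines. This is an illustrative sketch, not the dissertation's implementation:

```python
import numpy as np
from itertools import combinations

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian L = D - A."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def greedy_best_edge(adj):
    """One step of a greedy heuristic: among all absent edges, add the one
    that maximizes the resulting algebraic connectivity."""
    n = adj.shape[0]
    best_edge, best_lam = None, -np.inf
    for i, j in combinations(range(n), 2):
        if adj[i, j] == 0:
            trial = adj.copy()
            trial[i, j] = trial[j, i] = 1
            lam = algebraic_connectivity(trial)
            if lam > best_lam:
                best_edge, best_lam = (i, j), lam
    return best_edge, best_lam

# Path 0-1-2-3: closing it into the 4-cycle (edge 0-3) is the single best
# addition, raising the algebraic connectivity to 2.
path = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    path[i, j] = path[j, i] = 1
edge, lam = greedy_best_edge(path)
```

The full-eigendecomposition trial of every candidate edge is O(n^3) per candidate; the MGP method described above avoids this by perturbative updates, which is what makes it fast on airline-scale networks.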
Knowledge discovery by accuracy maximization
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-01-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821
NASA Astrophysics Data System (ADS)
Annan, James; Hargreaves, Julia
2016-04-01
In order to perform any Bayesian processing of a model ensemble, we need a prior over the ensemble members. In the case of multimodel ensembles such as CMIP, the historical approach of "model democracy" (i.e., equal weight for all models in the sample) is no longer credible (if it ever was) due to model duplication and inbreeding. The question of "model independence" is central to the question of prior weights. However, although this question has been repeatedly raised, it has not yet been satisfactorily addressed. Here I will discuss the issue of independence and present a theoretical foundation for understanding and analysing the ensemble in this context. I will also present some simple examples showing how these ideas may be applied and developed.
Mixtures of maximally entangled pure states
Flores, M. M.; Galapon, E. A.
2016-09-15
We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
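The two-qubit case of this result can be checked numerically with the Peres-Horodecki (PPT) criterion, which for 2×2 systems is necessary and sufficient for entanglement: mixing the two Bell states |Φ±⟩ with unequal weights yields a negative partial-transpose eigenvalue (entangled), while the equally weighted mixture does not (separable). This is my own illustration of the abstract's claim, not code from the paper:

```python
import numpy as np

def bell_phi(sign):
    """|Phi±> = (|00> ± |11>)/sqrt(2) as a 4-component state vector."""
    v = np.zeros(4)
    v[0], v[3] = 1.0, sign
    return v / np.sqrt(2.0)

def min_ppt_eigenvalue(rho):
    """Smallest eigenvalue of the partial transpose over the second qubit.
    For two qubits, a negative value is equivalent to entanglement
    (Peres-Horodecki criterion)."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(np.linalg.eigvalsh(pt)[0])

def mix(p):
    """The mixture p |Phi+><Phi+| + (1 - p) |Phi-><Phi-|."""
    plus, minus = bell_phi(+1), bell_phi(-1)
    return p * np.outer(plus, plus) + (1 - p) * np.outer(minus, minus)

unequal = min_ppt_eigenvalue(mix(0.7))  # unequal weights: entangled
equal = min_ppt_eigenvalue(mix(0.5))    # equal weights of 2 = dim states: separable
```

Here the number of mixed states equals the dimension (two qubit Bell states), so, consistent with the abstract, entanglement survives exactly when the weights are unequal.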
Maximal energy extraction under discrete diffusive exchange
Hay, M. J.; Schiff, J.; Fisch, N. J.
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
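Because the plasma energy is a linear functional of the state densities, the extraction problem reduces to a small linear program. As a minimal illustration (my own construction, not the paper's formulation, and relaxing the strictly diffusive constraints to all doubly stochastic mixings), minimizing the final energy over the Birkhoff polytope pairs the largest densities with the lowest energy levels:

```python
import numpy as np
from scipy.optimize import linprog

# Energy levels and initial state densities (illustrative values).
energies = np.array([1.0, 2.0, 3.0])
densities = np.array([0.1, 0.3, 0.6])
n = len(energies)

# Variables: entries d[i, j] of a doubly stochastic matrix D, flattened
# row-major. Final densities are D @ densities, so the final energy is
#   sum_i energies[i] * sum_j d[i, j] * densities[j].
c = np.outer(energies, densities).ravel()

# Row sums and column sums of D must each equal 1.
A_eq = np.zeros((2 * n, n * n))
for k in range(n):
    A_eq[k, k * n:(k + 1) * n] = 1.0   # row k sums to 1
    A_eq[n + k, k::n] = 1.0            # column k sums to 1
b_eq = np.ones(2 * n)

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, 1.0))
initial_energy = float(energies @ densities)
extracted = initial_energy - res.fun   # maximal extractable energy
```

The LP optimum sits at a vertex of the Birkhoff polytope, i.e. a permutation, here the one sorting densities against energies; the genuinely diffusive case studied in the paper further restricts the reachable set, so this relaxation gives an upper bound.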
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake
NASA Technical Reports Server (NTRS)
Convertino, V. A.
1997-01-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting the reduction is independent of any disease state. The magnitude of the reduction in VO2max depends on the duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily through decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (%Δ) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. The arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However, reductions in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular function induced by prolonged confinement to bed rest contribute to the diminution of maximal oxygen uptake.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
Hardy-Littlewood maximal operator in generalized grand Lebesgue spaces
NASA Astrophysics Data System (ADS)
Umarkhadzhiev, Salaudin M.
2014-12-01
We obtain sufficient conditions and necessary conditions for the maximal operator to be bounded in the generalized grand Lebesgue space on an open set Ω ⊂ R^n which is not necessarily bounded. The sufficient conditions coincide with the necessary conditions, for instance, in the case where Ω is bounded and the standard definition of the grand space is used.
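For reference, the operator in question is the classical Hardy-Littlewood maximal operator; a standard textbook definition (not quoted from the paper) is:

```latex
% Hardy--Littlewood maximal operator on an open set \Omega \subset \mathbb{R}^n;
% restricting the averages to \Omega is one common convention.
Mf(x) \;=\; \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)\cap\Omega} |f(y)|\,\mathrm{d}y,
\qquad x \in \Omega .
```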
General form of a cooperative gradual maximal covering location problem
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh
2017-07-01
Cooperative and gradual covering are two recent approaches to developing covering location models. In this paper, a cooperative maximal covering location-allocation problem (CMCLAP) is developed. In addition, both cooperative and gradual covering concepts are applied to the maximal covering location problem simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, called the general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model on similar data sets shows that the proposed model covers more demand and acts more efficiently. Sensitivity analyses are performed to show the effect of the related parameters and the model's validity. Simulated annealing (SA) and tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.
Natural selection maximizes Fisher information.
Frank, S A
2009-02-01
In biology, information flows from the environment to the genome by the process of natural selection. However, it has not been clear precisely what sort of information metric properly describes natural selection. Here, I show that Fisher information arises as the intrinsic metric of natural selection and evolutionary dynamics. Maximizing the amount of Fisher information about the environment captured by the population leads to Fisher's fundamental theorem of natural selection, the most profound statement about how natural selection influences evolutionary dynamics. I also show a relation between Fisher information and Shannon information (entropy) that may help to unify the correspondence between information and dynamics. Finally, I discuss possible connections between the fundamental role of Fisher information in statistics, biology and other fields of science.
Remarks on the information entropy maximization method and extended thermodynamics
NASA Astrophysics Data System (ADS)
Eu, Byung Chan
1998-04-01
The information entropy maximization method was applied by Jou et al. [J. Phys. A 17, 2799 (1984)] to heat conduction in the past. Advancing this method one step further, Nettleton [J. Chem. Phys. 106, 10311 (1997)] combined it with a projection operator technique to derive a set of evolution equations for macroscopic variables from the Liouville equation for a simple liquid, and claimed that the method provides a statistical-mechanical basis for a theory of irreversible processes and, in particular, of extended thermodynamics that is consistent with the laws of thermodynamics. In this paper, this line of information entropy maximization is analyzed from the viewpoint of the laws of thermodynamics.
NASA Technical Reports Server (NTRS)
2005-01-01
This is the Spirit 'Independence' panorama, acquired on martian days, or sols, 536 to 543 (July 6 to 13, 2005), from a position in the 'Columbia Hills' near the summit of 'Husband Hill.' The summit of 'Husband Hill' is the peak near the right side of this panorama and is about 100 meters (328 feet) away from the rover and about 30 meters (98 feet) higher in elevation. The rocky outcrops downhill and on the left side of this mosaic include 'Larry's Lookout' and 'Cumberland Ridge,' which Spirit explored in April, May, and June of 2005.
The panorama spans 360 degrees and consists of 108 individual images, each acquired with five filters of the rover's panoramic camera. The approximate true color of the mosaic was generated using the camera's 750-, 530-, and 480-nanometer filters. During the 8 martian days, or sols, that it took to acquire this image, the lighting varied considerably, partly because of imaging at different times of sol, and partly because of small sol-to-sol variations in the dustiness of the atmosphere. These slight changes produced some image seams and rock shadows. These seams have been eliminated from the sky portion of the mosaic to better simulate the vista a person standing on Mars would see. However, it is often not possible or practical to smooth out such seams for regions of rock, soil, rover tracks or solar panels. Such is the nature of acquiring and assembling large panoramas from the rovers.
Dimension independence in exterior algebra.
Hawrylycz, M
1995-03-14
The identities between homogeneous expressions in rank 1 vectors and rank n - 1 covectors in a Grassmann-Cayley algebra of rank n, in which one set occurs multilinearly, are shown to represent a set of dimension-independent identities. The theorem yields an infinite set of nontrivial geometric identities from a given identity.
Maximal dinucleotide and trinucleotide circular codes.
Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe
2016-01-21
We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code having strictly fewer than 6 elements (the maximum size of dinucleotide circular codes). On the other hand, a computer calculation shows that there are maximal trinucleotide circular codes with fewer than 20 elements (the maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18, and 19 elements, and no maximal trinucleotide circular code has fewer than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined.
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
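The central result, that reliability grows with the number of irreversible states, has a simple quantitative face: the completion time of an irreversible linear chain with equal rates is a sum of independent exponential dwell times (an Erlang distribution), so its coefficient of variation is 1/sqrt(n). A small Monte Carlo check of this scaling (my own illustration, not the paper's numerics):

```python
import numpy as np

rng = np.random.default_rng(1)

def signal_cv(n_states, rate=1.0, n_samples=20000):
    """Coefficient of variation (std/mean) of the completion time of an
    irreversible linear chain: the total time is a sum of n_states
    independent exponential dwell times, i.e. Erlang(n_states, rate)."""
    dwell = rng.exponential(1.0 / rate, size=(n_samples, n_states))
    total = dwell.sum(axis=1)
    return total.std() / total.mean()

# Analytically CV = 1/sqrt(n): adding states sharpens the timing, which is
# why, absent energy costs, an infinite chain would be perfectly reliable.
cv4, cv100 = signal_cv(4), signal_cv(100)
```

The 1/sqrt(n) decay is exactly why the energy cost of maintaining irreversible transitions, which grows with n, is what caps the chain length in the paper's optimization.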
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo (Stanford U., Phys. Dept.; SLAC)
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Maximizing the optical network capacity
Bayvel, Polina; Maher, Robert; Liga, Gabriele; Shevchenko, Nikita A.; Lavery, Domaniç; Killey, Robert I.
2016-01-01
Most of the digital data transmitted are carried by optical fibres, forming the great part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. PMID:26809572
A maximally supersymmetric Kondo model
NASA Astrophysics Data System (ADS)
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo
2012-10-01
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N=4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N=4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Viral quasispecies assembly via maximal clique enumeration.
Töpfer, Armin; Marschall, Tobias; Bull, Rowena A; Luciani, Fabio; Schönhuth, Alexander; Beerenwinkel, Niko
2014-03-01
Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the structure of a viral quasispecies from next-generation sequencing data as obtained from bulk sequencing of mixed virus samples. We develop a statistical model for paired-end reads accounting for mutations, insertions, and deletions. Using an iterative maximal clique enumeration approach, read pairs are assembled into haplotypes of increasing length, eventually enabling global haplotype assembly. The performance of our quasispecies assembly method is assessed on simulated data for varying population characteristics and sequencing technology parameters. Owing to its paired-end handling, HaploClique compares favorably to state-of-the-art haplotype inference methods. It can reconstruct error-free full-length haplotypes from low coverage samples and detect large insertions and deletions at low frequencies. We applied HaploClique to sequencing data derived from a clinical hepatitis C virus population of an infected patient and discovered a novel deletion of length 357±167 bp that was validated by two independent long-read sequencing experiments. HaploClique is available at https://github.com/armintoepfer/haploclique. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
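HaploClique's core step is maximal clique enumeration over a read-compatibility graph. A minimal sketch of the classic Bron-Kerbosch algorithm on an invented toy graph (this illustrates the enumeration primitive only, not HaploClique's optimized iterative variant or its statistical read-pair model):

```python
def bron_kerbosch(R, P, X, adj, out):
    """Classic Bron-Kerbosch recursion: emit R as a maximal clique when it
    can no longer be extended (P empty) and no superset was already
    explored (X empty)."""
    if not P and not X:
        out.append(set(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# Toy read-compatibility graph (invented 4-read example): an edge joins
# read pairs whose alignments could come from the same haplotype.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(map(sorted, cliques)))   # [[0, 1, 2], [2, 3]]
```

Each maximal clique groups reads that are mutually consistent, so reads in one clique can be merged into a longer candidate haplotype and the process iterated, in the spirit of the assembly loop described above.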
Maximally symmetric stabilizer MUBs in even prime-power dimensions
NASA Astrophysics Data System (ADS)
Carmeli, Claudio; Schultz, Jussi; Toigo, Alessandro
2017-03-01
One way to construct a maximal set of mutually unbiased bases (MUBs) in a prime-power dimensional Hilbert space is by means of finite phase-space methods. MUBs obtained in this way are covariant with respect to some subgroup of the group of all affine symplectic phase-space transformations. However, this construction is not canonical: as a consequence, many different choices of covariance subgroups are possible. In particular, when the Hilbert space is 2n dimensional, it is known that covariance with respect to the full group of affine symplectic phase-space transformations can never be achieved. Here we show that in this case there exist two essentially different choices of maximal subgroups admitting covariant MUBs. For both of them, we explicitly construct a family of 2n covariant MUBs. We thus prove that, contrary to the odd dimensional case, maximally covariant MUBs are very far from being unique in even prime-power dimensions.
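The defining property of mutual unbiasedness can be checked directly. A minimal sketch in the smallest even prime-power dimension, d = 2, using the standard Pauli eigenbases (this illustrates the definition only, not the paper's covariant phase-space construction):

```python
import math

# The three standard qubit MUBs: eigenbases of the Pauli Z, X and Y
# operators (d = 2 is the smallest even prime-power dimension).
s = 1 / math.sqrt(2)
bases = [
    [[1, 0], [0, 1]],               # Z eigenbasis
    [[s, s], [s, -s]],              # X eigenbasis
    [[s, 1j * s], [s, -1j * s]],    # Y eigenbasis
]

def unbiased(b1, b2, d=2, tol=1e-12):
    """Two orthonormal bases are mutually unbiased iff every cross
    overlap satisfies |<e_i|f_j>|^2 = 1/d."""
    for e in b1:
        for f in b2:
            ip = sum(x.conjugate() * y for x, y in zip(e, f))
            if abs(abs(ip) ** 2 - 1 / d) > tol:
                return False
    return True

assert all(unbiased(bases[i], bases[j])
           for i in range(3) for j in range(i + 1, 3))
```

A maximal set in dimension d = 2^n has d + 1 such pairwise unbiased bases; the check above is the invariant any covariant construction must satisfy.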
Many Parameter Sets in a Multicompartment Model Oscillator Are Robust to Temperature Perturbations
Caplan, Jonathan S.; Williams, Alex H.
2014-01-01
Neurons in cold-blooded animals remarkably maintain their function over a wide range of temperatures, even though the rates of many cellular processes increase twofold, threefold, or many-fold for each 10°C increase in temperature. Moreover, the kinetics of ion channels, maximal conductances, and Ca2+ buffering each have independent temperature sensitivities, suggesting that the balance of biological parameters can be disturbed by even modest temperature changes. In stomatogastric ganglia of the crab Cancer borealis, the duty cycle of the bursting pacemaker kernel is highly robust between 7 and 23°C (Rinberg et al., 2013). We examined how this might be achieved in a detailed conductance-based model in which exponential temperature sensitivities were given by Q10 parameters. We assessed the temperature robustness of this model across 125,000 random sets of Q10 parameters. To examine how robustness might be achieved across a variable population of animals, we repeated this analysis across six sets of maximal conductance parameters that produced similar activity at 11°C. Many permissible combinations of maximal conductance and Q10 parameters were found over broad regions of parameter space and relatively few correlations among Q10s were observed across successful parameter sets. A significant portion of Q10 sets worked for at least 3 of the 6 maximal conductance sets (∼11.1%). Nonetheless, no Q10 set produced robust function across all six maximal conductance sets, suggesting that maximal conductance parameters critically contribute to temperature robustness. Overall, these results provide insight into principles of temperature robustness in neuronal oscillators. PMID:24695714
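The exponential Q10 temperature scaling used in such conductance-based models reduces to a one-line rule. A minimal sketch with invented rates at the study's 11 °C reference temperature:

```python
def q10_scale(rate_ref, q10, temp_c, temp_ref_c=11.0):
    """Scale a reference rate to temperature temp_c via the Q10 rule:
    rate(T) = rate(T_ref) * Q10 ** ((T - T_ref) / 10)."""
    return rate_ref * q10 ** ((temp_c - temp_ref_c) / 10.0)

# A gating rate with Q10 = 2 doubles over each 10 deg C rise
# (illustrative numbers, not parameters from the paper).
r11 = 100.0                       # reference rate at 11 deg C
r21 = q10_scale(r11, 2.0, 21.0)   # 200.0
r7 = q10_scale(r11, 2.0, 7.0)     # ~75.8: slower at the cold end
```

Because every channel kinetic, maximal conductance, and buffering process carries its own Q10, a model samples one such factor per process, which is why a random Q10 set can easily unbalance the oscillator.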
Moving multiple sinks through wireless sensor networks for lifetime maximization.
Petrioli, Chiara; Carosi, Alessio; Basagni, Stefano; Phillips, Cynthia Ann
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules achieving network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them
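The upper-bound idea can be illustrated on a toy instance: choose dwell times in each sink configuration to maximize total lifetime subject to per-node energy budgets. This tiny two-variable LP (solved by vertex enumeration; all rates and budgets invented) omits the paper's movement constraints and p-median separation oracle:

```python
from itertools import combinations

def solve_lifetime_lp(energy_rate, budget):
    """Maximize lifetime t0 + t1 over dwell times in two sink
    configurations, subject to energy_rate @ t <= budget and t >= 0.
    A bounded LP optimum lies at a vertex of the feasible polygon, so
    enumerating pairwise constraint intersections suffices here."""
    lines = [(a0, a1, b) for (a0, a1), b in zip(energy_rate, budget)]
    lines += [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]   # the axes t0 = 0, t1 = 0
    best = 0.0
    for (a0, a1, b), (c0, c1, d) in combinations(lines, 2):
        det = a0 * c1 - a1 * c0
        if abs(det) < 1e-12:
            continue                               # parallel, no vertex
        t0 = (b * c1 - a1 * d) / det
        t1 = (a0 * d - b * c0) / det
        feasible = t0 >= -1e-9 and t1 >= -1e-9 and all(
            e0 * t0 + e1 * t1 <= bb + 1e-9
            for (e0, e1), bb in zip(energy_rate, budget))
        if feasible:
            best = max(best, t0 + t1)
    return best

# Two nodes, two sink configurations; each node relays heavily in one
# configuration (rates in J/hour, budgets in J -- invented numbers).
rates = [[4.0, 1.0], [1.0, 4.0]]
budgets = [100.0, 100.0]
print(solve_lifetime_lp(rates, budgets))              # 40.0: alternate sinks
print(min(b / r[0] for r, b in zip(rates, budgets)))  # 25.0: one static sink
```

Even this toy instance shows why moving the sink helps: splitting time between the two configurations balances the relay burden and extends lifetime from 25 to 40 hours.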
Predicted maximal heart rate for upper body exercise testing.
Hill, M; Talbot, C; Price, M
2016-03-01
Age-predicted maximal heart rate (HRmax) equations are commonly used for the purpose of prescribing exercise regimens, as criteria for achieving maximal exertion and for diagnostic exercise testing. Despite the growing popularity of upper body exercise in both healthy and clinical settings, no recommendations are available for exercise modes using the smaller upper body muscle mass. The purpose of this study was to determine how well commonly used age-adjusted prediction equations for HRmax estimate actual HRmax for upper body exercise in healthy young and older adults. A total of 30 young (age: 20 ± 2 years, height: 171·9 ± 32·8 cm, mass: 77·7 ± 12·6 kg) and 20 elderly adults (age: 66 ± 6 years, height: 162 ± 8·1 cm, mass: 65·3 ± 12·3 kg) undertook maximal incremental exercise tests on a conventional arm crank ergometer. Age-adjusted maximal heart rate was calculated using prediction equations based on leg exercise and compared with measured HRmax data for the arms. Maximal HR for arm exercise was significantly overpredicted compared with age-adjusted prediction equations in both young and older adults. Subtracting 10-20 beats·min⁻¹ from conventional prediction equations provides a reasonable estimate of HRmax for upper body exercise in healthy older and younger adults. © 2014 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
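The study's recommendation reduces to a one-line adjustment. A sketch assuming the common 220 − age leg-exercise equation (the abstract does not name a specific equation, so that choice is an assumption for illustration):

```python
def predicted_hrmax_arms(age, adjustment=15):
    """Estimated maximal heart rate (beats/min) for arm-crank exercise:
    start from a leg-exercise prediction (220 - age assumed here; the
    study tested several published equations) and subtract the
    recommended 10-20 beats/min for the smaller upper body muscle mass."""
    if not 10 <= adjustment <= 20:
        raise ValueError("study supports adjustments of 10-20 beats/min")
    return (220 - age) - adjustment

print(predicted_hrmax_arms(20))      # 185: a 20-year-old, mid adjustment
print(predicted_hrmax_arms(66, 20))  # 134: an older adult, full adjustment
```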
Disk Density Tuning of a Maximal Random Packing
Ebeida, Mohamed S.; Rushdi, Ahmad A.; Awad, Muhammad A.; Mahmoud, Ahmed H.; Yan, Dong-Ming; English, Shawn A.; Owens, John D.; Bajaj, Chandrajit L.; Mitchell, Scott A.
2016-01-01
We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations. PMID:27563162
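The framework's input is any maximal random packing, such as an MPS. A naive dart-throwing sketch of the conflict-free property only (simplified: a true MPS implementation must additionally certify coverage maximality, i.e. that no uncovered gap remains):

```python
import math, random

def dart_throwing(r, attempts=20000, seed=0):
    """Accept uniformly random points in the unit square whenever they
    are at least r from every previously accepted point, producing a
    conflict-free disk packing with constant radius r."""
    rng = random.Random(seed)
    pts = []
    for _ in range(attempts):
        p = (rng.random(), rng.random())
        if all(math.dist(p, q) >= r for q in pts):
            pts.append(p)
    return pts

pts = dart_throwing(0.1)
# Conflict-free: no two accepted disk centers are closer than r.
assert all(math.dist(p, q) >= 0.1
           for i, p in enumerate(pts) for q in pts[i + 1:])
```

The paper's relocate/inject/eject operations then push the density of such a packing up or down one disk at a time while preserving exactly this separation invariant (and the coverage invariant omitted here).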
Assessment of maximal handgrip strength: how many attempts are needed?
Reijnierse, Esmee M.; de Jong, Nynke; Trappenburg, Marijke C.; Blauw, Gerard Jan; Butler‐Browne, Gillian; Gapeyeva, Helena; Hogrel, Jean‐Yves; McPhee, Jamie S.; Narici, Marco V.; Sipilä, Sarianna; Stenroth, Lauri; van Lummel, Rob C.; Pijnappels, Mirjam; Meskers, Carel G.M.
2017-01-01
Background Handgrip strength (HGS) is used to identify individuals with low muscle strength (dynapenia). The influence of the number of attempts on maximal HGS is not yet known and may differ depending on age and health status. This study aimed to assess how many attempts of HGS are required to obtain maximal HGS. Methods Three cohorts (939 individuals) differing in age and health status were included. HGS was assessed three times and explored as a continuous and a dichotomous variable. Paired t‐tests, intraclass correlation coefficients (ICC) and Bland–Altman analysis were used to test reproducibility of HGS. The number of individuals with misclassified dynapenia at attempts 1 and 2 with respect to attempt 3 was assessed. Results The same pattern was observed in all three cohorts. Maximal HGS at attempts 1 and 2 was higher than at attempt 3 on the population level (P < 0.001 for all three cohorts). ICC values between all attempts were above 0.8, indicating moderate to high reproducibility. Bland–Altman analysis showed that 41.0 to 58.9% of individuals had the highest HGS at attempt 2 and 12.4 to 37.2% at attempt 3. The percentage of individuals with a maximal HGS above the gender‐specific cut‐off value at attempt 3 compared with attempts 1 and 2 ranged from 0 to 50.0%, with a higher percentage of misclassification in middle‐aged and older populations. Conclusions Maximal HGS is dependent on the number of attempts, independent of age and health status. To assess maximal HGS, at least three attempts are needed if HGS is considered a continuous variable. If HGS is considered a dichotomous variable, two attempts are sufficient to assess dynapenia in younger populations; misclassification should be taken into account in middle‐aged and older populations. PMID:28150387
NASA Astrophysics Data System (ADS)
Nomoto, Hideki; Katahira, Masafumi; Fukatsu, Tsutomu; Okabe, Hideki; Yamanaka, Kohji
2005-12-01
This paper describes the IV&V (Independent Verification & Validation) methodology applied to the Rendezvous Flight Software (RVFS) of the H-IIA Transfer Vehicle (HTV). HTV is a cargo transportation vehicle to the International Space Station (ISS). RVFS not only controls the HTV's flight sequence autonomously, but is also deployed with two-fault-tolerant FDIR (Fault Detection, Isolation and Recovery) functionality. Since software such as RVFS is critical for safe and successful HTV operations, exhaustive IV&V is being conducted. RVFS is required to prevent the HTV from colliding with the ISS. The biggest challenge is thoroughness of the verification: because of the complicated software algorithms needed for fully autonomous rendezvous with the ISS, the required number of test cases can easily grow beyond what is realistic for conventional verification methodologies. The IV&V team developed a verification environment in which 1) a formal specification model was built from the detailed software design specification, and 2) C source code and test sequences were generated and executed from the model automatically. One of the main efforts of this activity was to raise the fidelity of the model and the quality of the generated code sufficiently; this is discussed in the modeling, model checking, and code generation technology sections. The other issue to be resolved was how to generate exhaustive test cases for a model that takes continuous input values (a so-called hybrid model). Conventional random test case generation or boundary condition generation does not assure the sufficiency or validity of the test cases, because the combinations of inputs become theoretically infinite for this kind of model [1]. To resolve this problem, the IV&V team developed a unique algorithm that generates a finite number of test cases satisfying the full-scale test requirement; this is discussed in the testing section. The generated code was put through 550 billion full-path test cases. We succeeded to
Neuromuscular fatigue during dynamic maximal strength and hypertrophic resistance loadings.
Walker, Simon; Davis, Lisa; Avela, Janne; Häkkinen, Keijo
2012-06-01
The purpose of this study was to compare the acute neuromuscular fatigue induced by dynamic maximal strength and hypertrophic loadings, which are known to cause different adaptations underlying strength gain during training. Thirteen healthy, untrained males performed two leg press loadings, one week apart, consisting of 15 sets of 1 repetition maximum (MAX) and 5 sets of 10 repetition maximums (HYP). Concentric load and muscle activity (electromyography (EMG) amplitude and median frequency) were assessed throughout each set. Additionally, maximal bilateral isometric force and muscle activity were assessed pre-, mid-, and up to 30 min post-loading. Concentric load during MAX was decreased after set 10 (P<0.05), while the load was maintained throughout HYP. Both loadings caused large reductions in maximal isometric force (MAX=-30±6.4% vs. HYP=-48±9.7%, P<0.001). The decreased concentric and isometric strength during MAX loading was accompanied by reduced EMG amplitude (P<0.05). Conversely, hypertrophic loading caused decreased median frequency only during isometric contractions (P<0.01). During concentric contractions, EMG amplitude increased and median frequency decreased in HYP (P<0.01). Our results indicate reduced neural drive during MAX loading and more complex changes in muscle activity during HYP loading. Copyright © 2011 Elsevier Ltd. All rights reserved.
Inflation in maximal gauged supergravities
Kodama, Hideo; Nozawa, Masato
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of the recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36′ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of the SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of the 6-dimensional scalar fields in this sector near the Dall'Agata-Inverso de Sitter critical point, at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall'Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-to-scalar ratio predicted by this model is around 10^-3 and is close to the value in the Starobinsky model.
Maximally Expressive Modeling of Operations Tasks
NASA Technical Reports Server (NTRS)
Jaap, John; Richardson, Lea; Davis, Elizabeth
2002-01-01
Planning and scheduling systems organize "tasks" into a timeline or schedule. The tasks are defined within the scheduling system in logical containers called models. The dictionary might define a model of this type as "a system of things and relations satisfying a set of rules that, when applied to the things and relations, produce certainty about the tasks that are being modeled." One challenging domain for a planning and scheduling system is the operation of on-board experiments for the International Space Station. In these experiments, the equipment used is among the most complex hardware ever developed, the information sought is at the cutting edge of scientific endeavor, and the procedures are intricate and exacting. Scheduling is made more difficult by a scarcity of station resources. The models to be fed into the scheduler must describe both the complexity of the experiments and procedures (to ensure a valid schedule) and the flexibilities of the procedures and the equipment (to effectively utilize available resources). Clearly, scheduling International Space Station experiment operations calls for a "maximally expressive" modeling schema.
Inverting Monotonic Nonlinearities by Entropy Maximization
Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.
2016-01-01
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such kinds of mixtures of random variables are found in source separation and Wiener system inversion problems, for example. The importance of our proposed method is based on the fact that it permits to decouple the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based either in a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that gives a guarantee for the MaxEnt method to succeed compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions outperforming other methods in terms of the obtained Signal to Noise Ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability for compensating nonlinearities, MaxEnt is very robust, i.e. showing small variability in the results. PMID:27780261
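The Gaussianization baseline that MaxEnt generalizes can be sketched in a few lines: push each observation through the empirical CDF and then the inverse standard-normal CDF. This is an illustrative sketch only (the paper's MaxEnt algorithm maximizes entropy instead, dropping the Gaussian-source assumption, and uses polynomial or neural-network parameterizations):

```python
import statistics

def gaussianize(observed):
    """Undo a strictly monotonic distortion up to an affine map: the
    empirical CDF of f(s) equals that of s, so composing it with the
    inverse standard-normal CDF makes the output approximately Gaussian.
    Assumes distinct sample values for the rank map."""
    n = len(observed)
    rank = {v: i for i, v in enumerate(sorted(observed))}
    nd = statistics.NormalDist()
    # (rank + 0.5) / n keeps probabilities strictly inside (0, 1).
    return [nd.inv_cdf((rank[v] + 0.5) / n) for v in observed]

# A monotonic cubic distortion of a hidden source signal.
source = [-1.5, -0.5, 0.0, 0.5, 1.5]
observed = [s ** 3 for s in source]
restored = gaussianize(observed)
assert restored == sorted(restored)      # source ordering preserved
assert abs(restored[2]) < 1e-12          # the median maps to 0
```

Because any monotonic nonlinearity is removed by this composition, the remaining linear part (the separation matrix or deconvolution filter) can then be handled by a standard linear algorithm, which is the decoupling the abstract describes.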
Maximally localized Wannier functions: Theory and applications
NASA Astrophysics Data System (ADS)
Marzari, Nicola; Mostofi, Arash A.; Yates, Jonathan R.; Souza, Ivo; Vanderbilt, David
2012-10-01
The electronic ground state of a periodic system is usually described in terms of extended Bloch orbitals, but an alternative representation in terms of localized “Wannier functions” was introduced by Gregory Wannier in 1937. The connection between the Bloch and Wannier representations is realized by families of transformations in a continuous space of unitary matrices, carrying a large degree of arbitrariness. Since 1997, methods have been developed that allow one to iteratively transform the extended Bloch orbitals of a first-principles calculation into a unique set of maximally localized Wannier functions, accomplishing the solid-state equivalent of constructing localized molecular orbitals, or “Boys orbitals” as previously known from the chemistry literature. These developments are reviewed here, and a survey of the applications of these methods is presented. This latter includes a description of their use in analyzing the nature of chemical bonding, or as a local probe of phenomena related to electric polarization and orbital magnetization. Wannier interpolation schemes are also reviewed, by which quantities computed on a coarse reciprocal-space mesh can be used to interpolate onto much finer meshes at low cost, and applications in which Wannier functions are used as efficient basis functions are discussed. Finally the construction and use of Wannier functions outside the context of electronic-structure theory is presented, for cases that include phonon excitations, photonic crystals, and cold-atom optical lattices.
Reflection quasilattices and the maximal quasilattice
NASA Astrophysics Data System (ADS)
Boyle, Latham; Steinhardt, Paul J.
2016-08-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those of Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d > 2, and for all those with quadratic irrational scale factors in d = 2.
Exactly maximally convergent multipoint Padé approximants
NASA Astrophysics Data System (ADS)
Kovacheva, Ralitza K.
2016-12-01
Given a regular compact set E in the complex plane C, a unit measure µ supported by ∂E, a triangular point set β := {β_{n,k}, k = 1, …, n}_{n=1}^{∞} with β ⊂ ∂E, and a function f holomorphic on E, let π_{n,m}^{β,f} be the associated multipoint β-Padé approximant of order (n, m). We show that if the sequence π_{n,m}^{β,f}, n ∈ Λ, with m fixed, converges exactly maximally to f relative to the measure µ, then the points β_{n,k} are uniformly distributed on ∂E with respect to µ as n ∈ Λ. Furthermore, a result about the zero behavior of the exactly maximally convergent sequence Λ is provided, under the condition that Λ is "dense enough."
Maximizing TDRS Command Load Lifetime
NASA Technical Reports Server (NTRS)
Brown, Aaron J.
2002-01-01
was therefore the key to achieving this goal. This goal was eventually realized through development of an Excel spreadsheet tool called EMMIE (Excel Mean Motion Interactive Estimation). EMMIE utilizes ground ephemeris nodal data to perform a least-squares fit to inferred mean anomaly as a function of time, thus generating an initial estimate for mean motion. This mean motion in turn drives a plot of estimated downtrack position difference versus time. The user can then manually iterate the mean motion, and determine an optimal value that will maximize command load lifetime. Once this optimal value is determined, the mean motion initially calculated by the command builder tool is overwritten with the new optimal value, and the command load is built for uplink to ISS. EMMIE also provides the capability for command load lifetime to be tracked through multiple TDRS ephemeris updates. Using EMMIE, TDRS command load lifetimes of approximately 30 days have been achieved.
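The core of the fit described above is ordinary least squares on inferred mean anomaly versus time. A minimal sketch with invented nodal data (EMMIE itself is an Excel tool; this is not its code, just the estimation step it performs):

```python
def fit_mean_motion(times_hr, mean_anomaly_deg):
    """Least-squares slope of mean anomaly vs. time, giving an estimate
    of the orbit's mean motion (deg/hour). Closed-form ordinary least
    squares; no libraries needed."""
    n = len(times_hr)
    t_bar = sum(times_hr) / n
    m_bar = sum(mean_anomaly_deg) / n
    num = sum((t - t_bar) * (m - m_bar)
              for t, m in zip(times_hr, mean_anomaly_deg))
    den = sum((t - t_bar) ** 2 for t in times_hr)
    return num / den

# Invented nodal data: an orbit sweeping ~15 deg/hour, with small
# "measurement" scatter in the inferred mean anomaly.
times = [0.0, 1.0, 2.0, 3.0, 4.0]
anomaly = [0.0, 15.01, 29.98, 45.03, 59.99]
n_est = fit_mean_motion(times, anomaly)   # close to 15 deg/hour
```

The fitted slope is the initial mean-motion estimate; the tool then lets the user perturb it manually and watch the resulting downtrack position error grow, picking the value that keeps the command load valid longest.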
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Diurnal Variations in Maximal Oxygen Uptake.
ERIC Educational Resources Information Center
McClellan, Powell D.
A study attempted to determine if diurnal (daily cyclical) variations were present during maximal exercise. The subjects' (30 female undergraduate physical education majors) oxygen consumption and heart rates were monitored while they walked on a treadmill on which the grade was raised every minute. Each subject was tested for maximal oxygen…
Specificity of a Maximal Step Exercise Test
ERIC Educational Resources Information Center
Darby, Lynn A.; Marsh, Jennifer L.; Shewokis, Patricia A.; Pohlman, Roberta L.
2007-01-01
To adhere to the principle of "exercise specificity" exercise testing should be completed using the same physical activity that is performed during exercise training. The present study was designed to assess whether aerobic step exercisers have a greater maximal oxygen consumption (max VO sub 2) when tested using an activity specific, maximal step…
Dynamically Disordered Quantum Walk as a Maximal Entanglement Generator
NASA Astrophysics Data System (ADS)
Vieira, Rafael; Amorim, Edgard P. M.; Rigolin, Gustavo
2013-11-01
We show that the entanglement between the internal (spin) and external (position) degrees of freedom of a qubit in a random (dynamically disordered) one-dimensional discrete time quantum random walk (QRW) achieves its maximal possible value asymptotically in the number of steps, outperforming the entanglement attained by using ordered QRW. The disorder is modeled by introducing an extra random aspect to QRW, a classical coin that randomly dictates which quantum coin drives the system’s time evolution. We also show that maximal entanglement is achieved independently of the initial state of the walker, study the number of steps the system must move to be within a small fixed neighborhood of its asymptotic limit, and propose two experiments where these ideas can be tested.
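The disordered-walk dynamics are easy to simulate directly. Below is a self-contained toy sketch (our own construction; the specific coin pair, initial state, and seed are arbitrary choices) that applies a randomly chosen quantum coin at each step and computes the spin-position entanglement entropy:

```python
import numpy as np

rng = np.random.default_rng(1)
steps = 100
N = steps + 1
psi = np.zeros((2 * N + 1, 2), dtype=complex)     # psi[position, coin state]
psi[N] = [1 / np.sqrt(2), 1j / np.sqrt(2)]        # a generic initial coin state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard coin
F = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)     # Fourier (Kempe) coin

for _ in range(steps):
    C = H if rng.random() < 0.5 else F            # classical coin picks the quantum coin
    psi = psi @ C.T                               # coin operation on the spin
    shifted = np.zeros_like(psi)
    shifted[1:, 0] = psi[:-1, 0]                  # spin-up amplitude moves right
    shifted[:-1, 1] = psi[1:, 1]                  # spin-down amplitude moves left
    psi = shifted

rho = psi.conj().T @ psi                          # reduced 2x2 coin density matrix
evals = np.linalg.eigvalsh(rho).clip(1e-12, 1.0)
entropy = float(-(evals * np.log2(evals)).sum())  # entanglement entropy (bits)
```

For an ordered walk (always `H`) the entropy typically saturates below 1; with dynamic disorder it approaches the maximal value 1 as the number of steps grows, which is the paper's central claim.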
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
Larsen, Filip J; Weitzberg, Eddie; Lundberg, Jon O; Ekblom, Björn
2010-01-15
The anion nitrate, abundant in our diet, has recently emerged as a major pool of nitric oxide (NO) synthase-independent NO production. Nitrate is reduced stepwise in vivo to nitrite and then NO and possibly other bioactive nitrogen oxides. This reductive pathway is enhanced during low oxygen tension and acidosis. A recent study shows a reduction in oxygen consumption during submaximal exercise attributable to dietary nitrate. We went on to study the effects of dietary nitrate on various physiological and biochemical parameters during maximal exercise. Nine healthy, nonsmoking volunteers (age 30+/-2.3 years, VO(2max) 3.72+/-0.33 L/min) participated in this study, which had a randomized, double-blind crossover design. Subjects received dietary supplementation with sodium nitrate (0.1 mmol/kg/day) or placebo (NaCl) for 2 days before the test. This dose corresponds to the amount found in 100-300 g of a nitrate-rich vegetable such as spinach or beetroot. The maximal exercise tests consisted of an incremental exercise to exhaustion with combined arm and leg cranking on two separate ergometers. Dietary nitrate reduced VO(2max) from 3.72+/-0.33 to 3.62+/-0.31 L/min, P<0.05. Despite the reduction in VO(2max), the time to exhaustion tended to increase after nitrate supplementation (524+/-31 vs 563+/-30 s, P=0.13). There was a correlation between the change in time to exhaustion and the change in VO(2max) (R(2)=0.47, P=0.04). A moderate dietary dose of nitrate significantly reduces VO(2max) during maximal exercise using a large active muscle mass. This reduction occurred with a trend toward increased time to exhaustion, implying that two separate mechanisms are involved: one that reduces VO(2max) and another that improves the energetic function of the working muscles. Copyright 2009 Elsevier Inc. All rights reserved.
Statistical mechanics of influence maximization with thermal noise
NASA Astrophysics Data System (ADS)
Lynn, Christopher W.; Lee, Daniel D.
2017-03-01
The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
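The general algorithm can be illustrated with a small mean-field sketch (our own construction under simplifying assumptions, not the authors' exact code): magnetizations come from the self-consistency m_i = tanh(beta * ((J m)_i + b_i)), the susceptibility-based gradient is approximated by finite differences, and the budget constraint is enforced by projecting onto the scaled simplex {b >= 0, sum(b) = B}:

```python
import numpy as np

def mean_field_m(J, b, beta, iters=200):
    """Self-consistent mean-field magnetizations m_i = tanh(beta*((J m)_i + b_i))."""
    m = np.zeros(len(b))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + b))
    return m

def project_to_budget(b, budget):
    """Euclidean projection onto the scaled simplex {b >= 0, sum(b) = budget}."""
    u = np.sort(b)[::-1]
    css = np.cumsum(u) - budget
    k = np.nonzero(u - css / (np.arange(len(b)) + 1) > 0)[0][-1]
    theta = css[k] / (k + 1.0)
    return np.maximum(b - theta, 0.0)

def optimal_fields(J, budget, beta, steps=100, lr=0.5, eps=1e-4):
    """Projected gradient ascent on total magnetization sum_i m_i(b)."""
    n = J.shape[0]
    b = np.full(n, budget / n)
    for _ in range(steps):
        M0 = mean_field_m(J, b, beta).sum()
        grad = np.empty(n)        # finite-difference susceptibility dM/db_i
        for i in range(n):
            e = np.zeros(n)
            e[i] = eps
            grad[i] = (mean_field_m(J, b + e, beta).sum() - M0) / eps
        b = project_to_budget(b + lr * grad, budget)
    return b

# Hub-and-spoke network: node 0 coupled to nodes 1..4
J = np.zeros((5, 5))
J[0, 1:] = J[1:, 0] = 1.0
b_hot = optimal_fields(J, budget=1.0, beta=0.3)   # high temperature (small beta)
```

On this hub-and-spoke example, the high-temperature optimum concentrates the field budget on the hub, consistent with the analytic result quoted above.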
Maximizing your return on people.
Bassi, Laurie; McMurrer, Daniel
2007-03-01
Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture.
The effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake.
Oh, Deuk-Ja; Hong, Hyeon-Ok; Lee, Bo-Ae
2016-02-01
The purpose of this study is to investigate the effects of strenuous exercises on resting heart rate, blood pressure, and maximal oxygen uptake. To achieve the purpose of the study, a total of 30 subjects were selected, including 15 people who performed continued regular exercises and 15 people as the control group. With regard to data processing, IBM SPSS Statistics ver. 21.0 was used to calculate the mean and standard deviation. The difference of mean change between groups was verified through an independent t-test. As a result, there were significant differences in resting heart rate, maximal heart rate, maximal systolic blood pressure, and maximal oxygen uptake. However, the maximal systolic blood pressure values indicated exercise-induced high blood pressure. Thus, regular screening for this risk through an exercise stress test appears necessary.
wannier90: A tool for obtaining maximally-localised Wannier functions
NASA Astrophysics Data System (ADS)
Mostofi, Arash A.; Yates, Jonathan R.; Lee, Young-Su; Souza, Ivo; Vanderbilt, David; Marzari, Nicola
2008-05-01
We present wannier90, a program for calculating maximally-localised Wannier functions (MLWF) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands. The formalism works by minimising the total spread of the MLWF in real space. This is done in the space of unitary matrices that describe rotations of the Bloch bands at each k-point. As a result, wannier90 is independent of the basis set used in the underlying calculation to obtain the Bloch states. Therefore, it may be interfaced straightforwardly to any electronic structure code. The locality of MLWF can be exploited to compute band-structure, density of states and Fermi surfaces at modest computational cost. Furthermore, wannier90 is able to output MLWF for visualisation and other post-processing purposes. Wannier functions are already used in a wide variety of applications. These include analysis of chemical bonding in real space; calculation of dielectric properties via the modern theory of polarisation; and as an accurate and minimal basis set in the construction of model Hamiltonians for large-scale systems, in linear-scaling quantum Monte Carlo calculations, and for efficient computation of material properties, such as the anomalous Hall coefficient. wannier90 is freely available under the GNU General Public License from http://www.wannier.org/.
Program summary
Program title: wannier90
Catalogue identifier: AEAK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 556 495
No. of bytes in distributed program, including test data, etc.: 5 709 419
Distribution format: tar.gz
Programming language: Fortran 90, perl
Computer: any architecture with a Fortran 90 compiler
Operating system: Linux, Windows, Solaris, AIX, Tru64
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report is developed a quick, systematic method of finding the maximal points of any regular truth function in terms of its arithmetic invariants. (Author)
Construction of maximally localized Wannier functions
NASA Astrophysics Data System (ADS)
Zhu, Junbo; Chen, Zhu; Wu, Biao
2017-10-01
We present a general method for constructing maximally localized Wannier functions. It consists of three steps: (i) picking a localized trial wave function, (ii) performing a full band projection, and (iii) orthonormalizing with the Löwdin method. Our method is capable of producing maximally localized Wannier functions without further minimization, and it can be applied straightforwardly to random potentials without using supercells. The effectiveness of our method is demonstrated for both simple bands and composite bands.
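The three steps translate almost line-for-line into code. The sketch below (our own minimal illustration, with a random model subspace standing in for actual Bloch bands) projects localized trial functions onto a band subspace and orthonormalizes them with the Löwdin method:

```python
import numpy as np

def lowdin(phi):
    """Löwdin (symmetric) orthonormalization of the column vectors in phi."""
    S = phi.conj().T @ phi                     # overlap matrix
    evals, evecs = np.linalg.eigh(S)
    S_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.conj().T
    return phi @ S_inv_sqrt

def project_and_orthonormalize(bands, trials):
    """Project trial orbitals onto the band subspace, then Löwdin-orthonormalize.

    bands:  (dim, nb) orthonormal vectors spanning the band subspace
    trials: (dim, nw) localized trial wave functions
    """
    P = bands @ bands.conj().T                 # projector onto the band subspace
    return lowdin(P @ trials)

rng = np.random.default_rng(0)
bands, _ = np.linalg.qr(rng.normal(size=(20, 5)))   # a 5-band model subspace
trials = np.eye(20)[:, :5]                          # delta-function trial orbitals
w = project_and_orthonormalize(bands, trials)
```

Because the Löwdin transform S^(-1/2) yields the orthonormal set closest to the projected trials, no further spread minimization is needed, which is the point of the method.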
Vacuum spacetimes that admit no maximal slice
NASA Astrophysics Data System (ADS)
Witt, Donald M.
1986-09-01
Every closed three-manifold occurs as a spacelike hypersurface of a vacuum spacetime. For most of these three-manifolds, however, the vacuum spacetimes that contain them have no maximal slice. For asymptotically flat manifolds there are no restrictions on which three-manifolds can occur obeying the local energy conditions ρ ≥ (J_a J^a)^(1/2), and the spacetimes that contain them in most cases have no maximal slice.
AUC-Maximizing Ensembles through Metalearning
LeDell, Erin; van der Laan, Mark J.; Peterson, Maya
2016-01-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree. PMID:27227721
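The idea can be illustrated with a tiny metalearner (our own sketch, not the authors' Super Learner implementation): compute the rank-based AUC, then search convex combinations of two base learners' predictions for the weight that maximizes it. A real implementation would maximize the cross-validated AUC with a nonlinear optimizer rather than a grid:

```python
import numpy as np

def auc(y, scores):
    """Rank-based AUC: probability a random positive outranks a random negative.

    Assumes untied scores; a production version would average tied ranks.
    """
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(np.sum(y))
    n_neg = len(y) - n_pos
    return (ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def auc_metalearner(Z, y, grid=101):
    """Choose convex weights over two base learners' predictions to maximize AUC."""
    best_w, best_auc = 0.0, -1.0
    for w in np.linspace(0.0, 1.0, grid):
        a = auc(y, w * Z[:, 0] + (1.0 - w) * Z[:, 1])
        if a > best_auc:
            best_w, best_auc = w, a
    return best_w, best_auc

y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
Z = np.column_stack([
    [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9],   # strong base learner
    [0.9, 0.1, 0.8, 0.2, 0.3, 0.7, 0.4, 0.6],   # weak base learner
])
w_best, auc_best = auc_metalearner(Z, y)
```

By construction, the metalearned ensemble's AUC is never below that of either base learner alone, which mirrors the paper's empirical finding.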
Prediction of Maximal Aerobic Capacity in Severely Burned Children
Porro, Laura; Rivero, Haidy G.; Gonzalez, Dante; Tan, Alai; Herndon, David N.; Suman, Oscar E.
2011-01-01
Introduction Maximal oxygen uptake (VO2 peak) is an indicator of cardiorespiratory fitness, but requires expensive equipment and a relatively high technical skill level. Purpose The aim of this study is to provide a formula for estimating VO2 peak in burned children, using information obtained without expensive equipment. Methods Children with ≥40% total body surface area burned (TBSA) underwent a modified Bruce treadmill test to assess VO2 peak at 6 months after injury. We recorded gender, age, %TBSA, %3rd degree burn, height, weight, treadmill time, maximal speed, maximal grade, and peak heart rate, and applied McHenry's select algorithm to extract important independent variables and robust multiple regression to establish prediction equations. Results 42 children, 7 to 17 years old, were tested. The robust multiple regression model provided the equation: VO2 = 10.33 - 0.62 * Age (years) + 1.88 * Treadmill Time (min) + 2.3 (gender; Females = 0, Males = 1). The correlation between measured and estimated VO2 peak was R = 0.80. We then validated the equation with a group of 33 burned children, which yielded a correlation between measured and estimated VO2 peak of R = 0.79. Conclusions Using only a treadmill and easily gathered information, VO2 peak can be estimated in children with burns. PMID:21316155
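The published equation transcribes directly into code; the only inputs are age, treadmill time, and gender, and the estimate carries the units of the measured VO2 peak:

```python
def estimate_vo2_peak(age_years, treadmill_time_min, male):
    """VO2 peak estimate from the robust-regression equation above.

    Gender is coded as in the paper: Females = 0, Males = 1.
    """
    return 10.33 - 0.62 * age_years + 1.88 * treadmill_time_min + 2.3 * (1 if male else 0)

# Example: a 12-year-old boy lasting 10 minutes on the modified Bruce protocol
v = estimate_vo2_peak(12, 10, male=True)
```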
Tseng, Kuo-Wei; Tseng, Wei-Chin; Lin, Ming-Ju; Chen, Hsin-Lian; Nosaka, Kazunori; Chen, Trevor C
2016-01-01
This study investigated whether maximal voluntary isometric contractions (MVIC) performed before maximal eccentric contractions (MaxEC) would attenuate muscle damage of the knee extensors. Untrained men were assigned to an experimental group that performed 6 sets of 10 MVIC at 90° knee flexion 2 weeks before 6 sets of 10 MaxEC or a control group that performed MaxEC only (n = 13/group). Changes in muscle damage markers were assessed from before to 5 days after each exercise. Small but significant changes in maximal voluntary concentric contraction torque, range of motion (ROM) and plasma creatine kinase (CK) activity were evident at immediately to 2 days post-MVIC (p < 0.05), but other variables (e.g. thigh girth, myoglobin concentration, B-mode echo intensity) did not change significantly. Changes in all variables after MaxEC were smaller (p < 0.05) by 45% (soreness)-67% (CK) for the experimental than the control group. These results suggest that MVIC conferred a potent protective effect against MaxEC-induced muscle damage.
Assessment of maximal handgrip strength: how many attempts are needed?
Reijnierse, Esmee M; de Jong, Nynke; Trappenburg, Marijke C; Blauw, Gerard Jan; Butler-Browne, Gillian; Gapeyeva, Helena; Hogrel, Jean-Yves; McPhee, Jamie S; Narici, Marco V; Sipilä, Sarianna; Stenroth, Lauri; van Lummel, Rob C; Pijnappels, Mirjam; Meskers, Carel G M; Maier, Andrea B
2017-06-01
Handgrip strength (HGS) is used to identify individuals with low muscle strength (dynapenia). The influence of the number of attempts on maximal HGS is not yet known and may differ depending on age and health status. This study aimed to assess how many attempts of HGS are required to obtain maximal HGS. Three cohorts (939 individuals) differing in age and health status were included. HGS was assessed three times and explored as a continuous and a dichotomous variable. Paired t-test, intraclass correlation coefficients (ICC) and Bland-Altman analysis were used to test reproducibility of HGS. The number of individuals with misclassified dynapenia at attempts 1 and 2 with respect to attempt 3 were assessed. Results showed the same pattern in all three cohorts. Maximal HGS at attempts 1 and 2 was higher than at attempt 3 on population level (P < 0.001 for all three cohorts). ICC values between all attempts were above 0.8, indicating moderate to high reproducibility. Bland-Altman analysis showed that 41.0 to 58.9% of individuals had the highest HGS at attempt 2 and 12.4 to 37.2% at attempt 3. The percentage of individuals with a maximal HGS above the gender-specific cut-off value at attempt 3 compared with attempts 1 and 2 ranged from 0 to 50.0%, with a higher percentage of misclassification in middle-aged and older populations. Maximal HGS is dependent on the number of attempts, independent of age and health status. To assess maximal HGS, at least three attempts are needed if HGS is considered to be a continuous variable. If HGS is considered as a discrete variable to assess dynapenia, two attempts are sufficient to assess dynapenia in younger populations. Misclassification should be taken into account in middle-aged and older populations. © 2017 The Authors. Journal of Cachexia, Sarcopenia and Muscle published by John Wiley & Sons Ltd on behalf of the Society on Sarcopenia, Cachexia and Wasting Disorders.
An information maximization model of eye movements
NASA Technical Reports Server (NTRS)
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
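A toy version of the greedy rule is easy to write down (our own simplified sketch, not the authors' model): keep a per-location uncertainty map, let each fixation contribute measurement precision that falls off with eccentricity, and always fixate where the expected information gain is largest:

```python
import numpy as np

def precision_profile(locs, fixation, peak=10.0, sigma=2.0):
    """Measurement precision contributed by a fixation, falling off with eccentricity."""
    return peak * np.exp(-((locs - fixation) ** 2) / (2 * sigma ** 2))

def info_gain(var, locs, fixation):
    """Expected Gaussian information gain (nats) of fixating at `fixation`."""
    return 0.5 * np.log(1.0 + precision_profile(locs, fixation) * var).sum()

locs = np.arange(40.0)
var = np.ones_like(locs)              # prior uncertainty about the stimulus
fixations = []
for _ in range(3):
    gains = [info_gain(var, locs, f) for f in locs]
    f = int(np.argmax(gains))         # fixate where uncertainty reduction is largest
    fixations.append(f)
    var = var / (1.0 + precision_profile(locs, f) * var)   # posterior variance
```

Even this 1-D caricature shows the qualitative behavior: the first fixation lands centrally, and later fixations move to still-uncertain regions rather than revisiting the previous location.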
[Chemical constituents from Salvia przewalskii Maxim].
Yang, Li-Xin; Li, Xing-Cui; Liu, Chao; Xiao, Lei; Qin, De-Hua; Chen, Ruo-Yun
2011-07-01
The investigation on Salvia przewalskii Maxim was carried out to find the relationship between the constituents and their pharmacological activities. The isolation and purification were performed by various chromatographies such as silica gel, Sephadex LH-20, RP-C18 column chromatography, etc. Further investigation on the fraction of the 95% ethanol extract of Salvia przewalskii Maxim yielded przewalskin Y-1 (1), anhydride of tanshinone-II A (2), sugiol (3), epicryptoacetalide (4), cryptoacetalide (5), arucadiol (6), 1-dehydromiltirone (7), miltirone (8), cryptotanshinone (9), tanshinone II A (10) and isotanshinone-I (11). Their structures were elucidated by spectral analysis such as NMR (Nuclear Magnetic Resonance) and MS (Mass Spectrometry). Compound 1 is a new compound. Compounds 4 and 5 are mirror isomers (1 : 3). Compounds 4, 5, 6, 8, 11 were isolated from Salvia przewalskii Maxim for the first time.
Lipidome determinants of maximal lifespan in mammals.
Bozek, Katarzyna; Khrameeva, Ekaterina E; Reznick, Jane; Omerbašić, Damir; Bennett, Nigel C; Lewin, Gary R; Azpurua, Jorge; Gorbunova, Vera; Seluanov, Andrei; Regnard, Pierrick; Wanert, Fanelie; Marchal, Julia; Pifferi, Fabien; Aujard, Fabienne; Liu, Zhen; Shi, Peng; Pääbo, Svante; Schroeder, Florian; Willmitzer, Lothar; Giavalisco, Patrick; Khaitovich, Philipp
2017-12-01
Maximal lifespan of mammalian species, even if closely related, may differ more than 10-fold; however, the nature of the mechanisms that determine this variability is unresolved. Here, we assess the relationship between maximal lifespan duration and concentrations of more than 20,000 lipid compounds, measured in 669 tissue samples from 6 tissues of 35 species representing three mammalian clades: primates, rodents and bats. We identify lipids associated with species' longevity across the three clades, uncoupled from other parameters, such as basal metabolic rate, body size, or body temperature. These lipids cluster in specific lipid classes and pathways, and enzymes linked to them display signatures of greater stabilizing selection in long-living species and cluster in functional groups related to signaling and protein-modification processes. These findings point towards the existence of defined molecular mechanisms underlying variation in maximal lifespan among mammals.
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21·7 ± 3·4 years; 24·0 ± 2·1 kg m(-2) ) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. From the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of the recovery, the heart rate recovery was calculated. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s) ) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate recovered 34·7 (±6·6) and 75·5 (±6·1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index had a slight increase, RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
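The recovery quantities used here are simple to compute from an R-R interval series. A minimal sketch (helper names and the example intervals are our own; intervals in ms):

```python
import numpy as np

def heart_rate_recovery(peak_hr, hr_at):
    """Absolute HR drop (bpm) from peak at a given recovery time point."""
    return peak_hr - hr_at

def sdnn(rr_ms):
    """SDNN: standard deviation of normal-to-normal R-R intervals (ms)."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms):
    """RMSSD: root mean square of successive R-R interval differences (ms)."""
    d = np.diff(rr_ms)
    return float(np.sqrt(np.mean(d ** 2)))

# Example 30-s window of R-R intervals (ms)
rr = np.array([800.0, 810.0, 790.0, 805.0, 795.0])
```

In the study, these two indexes are computed in successive 30-s windows of the 5-min recovery recording; RMSSD tracks vagal modulation, SDNN overall variability.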
Maximal strength, muscular endurance and inflammatory biomarkers in young adult men.
Vaara, J P; Vasankari, T; Fogelholm, M; Häkkinen, K; Santtila, M; Kyröläinen, H
2014-12-01
The aim was to study associations of maximal strength and muscular endurance with inflammatory biomarkers independent of cardiorespiratory fitness in those with and without abdominal obesity. 686 young healthy men participated (25±5 years). Maximal strength was measured via isometric testing using dynamometers to determine maximal strength index. Muscular endurance index consisted of push-ups, sit-ups and repeated squats. An indirect cycle ergometer test until exhaustion was used to estimate maximal aerobic capacity (VO2max). Participants were stratified according to those with (>102 cm) and those without abdominal obesity (<102 cm) based on waist circumference. Inflammatory factors (C-reactive protein, interleukin-6 and tumour necrosis factor alpha) were analysed from serum samples. Maximal strength and muscular endurance were inversely associated with IL-6 in those with (β=-0.49, -0.39, respectively) (p<0.05) and in those without abdominal obesity (β=-0.08, -0.14, respectively) (p<0.05) adjusted for smoking and cardiorespiratory fitness. After adjusting for smoking and cardiorespiratory fitness, maximal strength and muscular endurance were inversely associated with CRP only in those without abdominal obesity (β=-0.11, -0.26, respectively) (p<0.05). This cross-sectional study demonstrated that muscular fitness is inversely associated with C-reactive protein and IL-6 concentrations in young adult men independent of cardiorespiratory fitness.
Independence: a new reason for recommending regular exercise to your patients.
Shephard, Roy J
2009-04-01
There are many good reasons to advise regular and adequate physical activity: longevity seems extended by up to 2 years, and the risk of a wide range of chronic disorders is substantially reduced. However, from the viewpoint of the overall, quality-adjusted lifespan, perhaps the most important reason is the ability of physical activity to counter the relentless, age-related decrease in physical capacity (maximal aerobic power, muscle strength, flexibility, and balance). The case is detailed for maximal aerobic power, which deteriorates by about 5 mL/[kg·min] for each decade of adult life. Independence is generally at risk when the maximal oxygen intake has dropped to 18 mL/[kg·min] in men and 15 mL/[kg·min] in women. A sedentary person typically reaches this threshold between 80 and 85 years old. However, regular physical activity can augment maximal oxygen transport by 5 to 10 mL/[kg·min], setting back the need for institutional support by 10 to 20 years.
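The arithmetic behind the 80-85-year threshold follows directly from the quoted numbers and can be made explicit (helper name ours):

```python
def age_at_threshold(vo2max_now, age_now, threshold, decline_per_decade=5.0):
    """Age at which VO2max falls to `threshold`, declining ~5 mL/[kg*min] per decade."""
    return age_now + 10.0 * (vo2max_now - threshold) / decline_per_decade

# A sedentary man with 40 mL/[kg*min] at age 40 reaches the 18 mL/[kg*min]
# threshold at about age 84, matching the paper's 80-85-year window.
a = age_at_threshold(40.0, 40, 18.0)
```

Adding the 5 to 10 mL/[kg·min] gained through regular activity sets that age back by 10 to 20 years, exactly as stated above.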
Singularity structure of maximally supersymmetric scattering amplitudes.
Arkani-Hamed, Nima; Bourjaily, Jacob L; Cachazo, Freddy; Trnka, Jaroslav
2014-12-31
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity--properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in the across-pose scenario. To address this problem, many image matching-based methods are proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using a linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of a novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses even in a complex wild environment, i.e., the Labeled Faces in the Wild database.
Maximally reliable spatial filtering of steady state visual evoked potentials.
Dmochowski, Jacek P; Greaves, Alex S; Norcia, Anthony M
2015-04-01
Due to their high signal-to-noise ratio (SNR) and robustness to artifacts, steady state visual evoked potentials (SSVEPs) are a popular technique for studying neural processing in the human visual system. SSVEPs are conventionally analyzed at individual electrodes or linear combinations of electrodes which maximize some variant of the SNR. Here we exploit the fundamental assumption of evoked responses--reproducibility across trials--to develop a technique that extracts a small number of high SNR, maximally reliable SSVEP components. This novel spatial filtering method operates on an array of Fourier coefficients and projects the data into a low-dimensional space in which the trial-to-trial spectral covariance is maximized. When applied to two sample data sets, the resulting technique recovers physiologically plausible components (i.e., the recovered topographies match the lead fields of the underlying sources) while drastically reducing the dimensionality of the data (i.e., more than 90% of the trial-to-trial reliability is captured in the first four components). Moreover, the proposed technique achieves a higher SNR than that of the single-best electrode or the Principal Components. We provide a freely-available MATLAB implementation of the proposed technique, herein termed "Reliable Components Analysis".
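The core computation (without the Fourier-domain specialization) reduces to a generalized eigenvalue problem: maximize the covariance between distinct trials relative to the pooled within-trial covariance. The sketch below is our own minimal time-domain illustration, not the authors' released MATLAB implementation:

```python
import numpy as np

def reliable_components(X):
    """Spatial filters maximizing trial-to-trial reliability (RCA-style sketch).

    X: (trials, channels, samples). Maximizes w' Rxy w / w' Rpool w, where Rxy
    averages covariance between distinct trials and Rpool pools within-trial
    covariance, via whitening followed by a symmetric eigendecomposition.
    """
    n_tr, n_ch, n_s = X.shape
    Rxy = np.zeros((n_ch, n_ch))
    Rpool = np.zeros((n_ch, n_ch))
    for i in range(n_tr):
        Rpool += X[i] @ X[i].T
        for j in range(n_tr):
            if i != j:
                Rxy += X[i] @ X[j].T
    Rxy = (Rxy + Rxy.T) / 2
    evals, evecs = np.linalg.eigh(Rpool)
    Wh = evecs @ np.diag(evals ** -0.5) @ evecs.T      # whitener Rpool^(-1/2)
    d, V = np.linalg.eigh(Wh @ Rxy @ Wh)
    order = np.argsort(d)[::-1]
    return (Wh @ V)[:, order], d[order]

# Synthetic trials: one evoked source s repeated across trials plus noise
rng = np.random.default_rng(0)
n_tr, n_ch, n_s = 20, 4, 200
t = np.linspace(0, 2 * np.pi, n_s)
s = np.sin(3 * t)                                      # common evoked source
a = np.array([1.0, 0.5, -0.3, 0.2])                    # forward model (topography)
X = np.stack([np.outer(a, s) + 0.5 * rng.normal(size=(n_ch, n_s))
              for _ in range(n_tr)])
filters, scores = reliable_components(X)
component = filters[:, 0] @ X.mean(axis=0)             # most reliable component
```

On this synthetic data the leading filter recovers the common evoked source despite the per-trial noise, mirroring the dimensionality reduction described above.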
Maximally selected chi-square statistics for ordinal variables.
Boulesteix, Anne-Laure
2006-06-01
The association between a binary variable Y and a variable X having an at least ordinal measurement scale might be examined by selecting a cutpoint in the range of X and then performing an association test for the obtained 2 × 2 contingency table using the chi-square statistic. The distribution of the maximally selected chi-square statistic (i.e., the maximal chi-square statistic over all possible cutpoints) under the null hypothesis of no association between X and Y is different from the known chi-square distribution. In recent decades, this topic has been studied extensively for continuous X variables, but not for non-continuous variables of at least ordinal measurement scale (which include, e.g., classical ordinal or discretized continuous variables). In this paper, we suggest an exact method to determine the finite-sample distribution of maximally selected chi-square statistics in this context. This novel approach can be seen as a method to measure the association between a binary variable and variables having an at least ordinal scale of different types (ordinal, discretized continuous, etc.). As an illustration, the method is applied to a new data set describing pregnancy and birth for 811 babies.
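The cutpoint-selection procedure described above is easy to sketch. The following illustration (not the paper's exact method; the paper's contribution is the exact finite-sample null distribution, which this sketch does not compute) dichotomizes an ordinal X at every candidate cutpoint and keeps the largest 2 × 2 chi-square statistic:

```python
import numpy as np

def max_selected_chi2(x, y):
    """Maximal chi-square over all cutpoints of an ordinal x vs. binary y.

    For each cutpoint c, dichotomize x as (x <= c) and compute the
    2x2 chi-square statistic against y; return the maximum. Note this
    maximum does NOT follow the standard 1-df chi-square distribution.
    """
    x = np.asarray(x)
    y = np.asarray(y)
    n = len(x)
    best = 0.0
    for c in np.unique(x)[:-1]:          # the largest value gives a degenerate split
        a = np.sum((x <= c) & (y == 1))  # 2x2 cell counts
        b = np.sum((x <= c) & (y == 0))
        cc = np.sum((x > c) & (y == 1))
        d = np.sum((x > c) & (y == 0))
        num = n * (a * d - b * cc) ** 2
        den = (a + b) * (cc + d) * (a + cc) * (b + d)
        if den > 0:
            best = max(best, num / den)
    return best
```

For a perfectly associated toy sample such as x = [1, 1, 2, 2], y = [1, 1, 0, 0], the cutpoint at 1 yields the maximal statistic n = 4.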
Maximizing Team Performance: The Critical Role of the Nurse Leader.
Manges, Kirstin; Scott-Cawiezell, Jill; Ward, Marcia M
2017-01-01
Facilitating team development is challenging, yet critical for ongoing improvement across healthcare settings. The purpose of this exemplary case study is to examine the role of nurse leaders in facilitating the development of a high-performing Change Team in implementing a patient safety initiative (TeamSTEPPs) using the Tuckman Model of Group Development as a guiding framework. The case study is the synthesis of 2.5 years of critical access hospital key informant interviews (n = 50). Critical juncture points related to team development and key nurse leader actions are analyzed, suggesting that nurse leaders are essential to maximize clinical teams' performance. © 2016 Wiley Periodicals, Inc.
Mixed maximal and explosive strength training in recreational endurance runners.
Taipale, Ritva S; Mikkola, Jussi; Salo, Tiina; Hokka, Laura; Vesterinen, Ville; Kraemer, William J; Nummela, Ari; Häkkinen, Keijo
2014-03-01
Supervised periodized mixed maximal and explosive strength training added to endurance training in recreational endurance runners was examined during an 8-week intervention preceded by an 8-week preparatory strength training period. Thirty-four subjects (21-45 years) were divided into experimental groups: men (M, n = 9), women (W, n = 9), and control groups: men (MC, n = 7), women (WC, n = 9). The experimental groups performed mixed maximal and explosive exercises, whereas control subjects performed circuit training with body weight. Endurance training included running at an intensity below lactate threshold. Strength, power, endurance performance characteristics, and hormones were monitored throughout the study. Significance was set at p ≤ 0.05. Increases were observed in both experimental groups that were more systematic than in the control groups in explosive strength (12 and 13% in men and women, respectively), muscle activation, maximal strength (6 and 13%), and peak running speed (14.9 ± 1.2 to 15.6 ± 1.2 and 12.9 ± 0.9 to 13.5 ± 0.8 km·h(-1)). The control groups showed significant improvements in maximal and explosive strength, but peak running speed increased only in MC. Submaximal running characteristics (blood lactate and heart rate) improved in all groups. Serum hormones fluctuated significantly in men (testosterone) and in women (thyroid stimulating hormone) but returned to baseline by the end of the study. Mixed strength training combined with endurance training may be more effective than circuit training in recreational endurance runners to benefit overall fitness, which may be important for other adaptive processes and larger training loads associated with, e.g., marathon training.
Quantum correlations of two-qubit states with one maximally mixed marginal
NASA Astrophysics Data System (ADS)
Milne, Antony; Jennings, David; Jevtic, Sania; Rudolph, Terry
2014-08-01
We investigate the entanglement, CHSH nonlocality, fully entangled fraction, and symmetric extendibility of two-qubit states that have a single maximally mixed marginal. Within this set of states, the steering ellipsoid formalism has recently highlighted an interesting family of so-called maximally obese states. These are found to have extremal quantum correlation properties that are significant in the steering ellipsoid picture and for the study of two-qubit states in general.
Maximizing the Range of a Projectile.
ERIC Educational Resources Information Center
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
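For reference, the standard level-ground range formula R(θ) = v² sin(2θ)/g behind such problems can be checked numerically; the launch speed of 20 m/s below is an arbitrary example value:

```python
import math

def projectile_range(v, theta_deg, g=9.81):
    """Range of a projectile launched at speed v (m/s) and angle theta
    (degrees) over level ground, ignoring air resistance."""
    theta = math.radians(theta_deg)
    return v ** 2 * math.sin(2 * theta) / g

# Scanning integer angles confirms the textbook optimum of 45 degrees,
# since sin(2*theta) peaks at 2*theta = 90 degrees.
best = max(range(1, 90), key=lambda a: projectile_range(20.0, a))
```

The trigonometric argument mentioned in the abstract amounts to the same observation: sin(2θ) is maximized when 2θ = 90°.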
Ehrenfest's Lottery--Time and Entropy Maximization
ERIC Educational Resources Information Center
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
Maximizing Resource Utilization in Video Streaming Systems
ERIC Educational Resources Information Center
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Why Contextual Preference Reversals Maximize Expected Value
2016-01-01
Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391
Does evolution lead to maximizing behavior?
Lehmann, Laurent; Alger, Ingela; Weibull, Jörgen
2015-07-01
A long-standing question in biology and economics is whether individual organisms evolve to behave as if they were striving to maximize some goal function. We here formalize this "as if" question in a patch-structured population in which individuals obtain material payoffs from (perhaps very complex multimove) social interactions. These material payoffs determine personal fitness and, ultimately, invasion fitness. We ask whether individuals in uninvadable population states will appear to be maximizing conventional goal functions (with population-structure coefficients exogenous to the individual's behavior), when what is really being maximized is invasion fitness at the genetic level. We reach two broad conclusions. First, no simple and general individual-centered goal function emerges from the analysis. This stems from the fact that invasion fitness is a gene-centered multigenerational measure of evolutionary success. Second, when selection is weak, all multigenerational effects of selection can be summarized in a neutral type-distribution quantifying identity-by-descent between individuals within patches. Individuals then behave as if they were striving to maximize a weighted sum of material payoffs (own and others). At an uninvadable state it is as if individuals would freely choose their actions and play a Nash equilibrium of a game with a goal function that combines self-interest (own material payoff), group interest (group material payoff if everyone does the same), and local rivalry (material payoff differences). © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
Reserve design to maximize species persistence
Robert G. Haight; Laurel E. Travis
2008-01-01
We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...
DNA solution of the maximal clique problem.
Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A
1997-10-17
The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
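A conventional-computing analogue of the DNA procedure is to enumerate the pool of all vertex subsets and filter out non-cliques, keeping a largest survivor. This brute-force sketch (the four-vertex graph in the example is hypothetical) mirrors the generate-and-select structure but, unlike massively parallel DNA computing, runs in exponential time serially:

```python
from itertools import combinations

def maximal_clique(n, edges):
    """Exhaustive-search analogue of the DNA algorithm: generate every
    vertex subset (the 'pool'), filter out non-cliques (the 'selection'
    steps), and return a largest surviving subset."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):                      # largest subsets first
        for subset in combinations(range(n), k):
            # a clique requires every pair of its vertices to be adjacent
            if all(frozenset(p) in edge_set for p in combinations(subset, 2)):
                return subset                      # first hit at size k is maximum
    return ()
```

On a triangle with a pendant vertex, edges [(0,1), (0,2), (1,2), (2,3)], the search returns the triangle (0, 1, 2).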
How to Generate Good Profit Maximization Problems
ERIC Educational Resources Information Center
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-04-15
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We establish the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Maximizing the Spectacle of Water Fountains
ERIC Educational Resources Information Center
Simoson, Andrew J.
2009-01-01
For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
Maximizing the Effective Use of Formative Assessments
ERIC Educational Resources Information Center
Riddell, Nancy B.
2016-01-01
In the current age of accountability, teachers must be able to produce tangible evidence of students' concept mastery. This article focuses on implementation of formative assessments before, during, and after instruction in order to maximize teachers' ability to effectively monitor student achievement. Suggested strategies are included to help…
Maximizing the Motivated Mind for Emergent Giftedness.
ERIC Educational Resources Information Center
Rea, Dan
2001-01-01
This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…
Maximal dinucleotide comma-free codes.
Fimmel, Elena; Strüngmann, Lutz
2016-01-21
The problem of retrieval and maintenance of the correct reading frame plays a significant role in RNA transcription. Circular codes, and especially comma-free codes, can help to understand the underlying mechanisms of error detection in this process. In recent years much attention has been paid to the investigation of trinucleotide circular codes (see, for instance, Fimmel et al., 2014; Fimmel and Strüngmann, 2015a; Michel and Pirillo, 2012; Michel et al., 2012, 2008), while dinucleotide codes have been touched on only marginally, even though dinucleotides are associated with important biological functions. Recently, all maximal dinucleotide circular codes were classified (Fimmel et al., 2015; Michel and Pirillo, 2013). The present paper studies maximal dinucleotide comma-free codes and their close connection to maximal dinucleotide circular codes. We give a construction principle for such codes and provide a graphical representation that allows them to be visualized geometrically. Moreover, we compare the results for dinucleotide codes with the corresponding situation for trinucleotide maximal self-complementary C(3)-codes. Finally, the results obtained are discussed with respect to Crick's hypothesis about frame-shift-detecting codes without commas.
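The comma-free property for dinucleotide codes is simple to state operationally: in the concatenation ab|cd of any two codewords, the frame-shifted word bc must not itself be a codeword. A minimal checker, assuming codewords are two-letter strings over {A, C, G, T}:

```python
from itertools import product

def is_comma_free(code):
    """Check the comma-free property for a set of dinucleotide codewords.

    A dinucleotide code S is comma-free if no concatenation of two
    codewords ab|cd contains a codeword in the shifted reading frame,
    i.e. the word 'bc' must never lie in S.
    """
    return all(w1[1] + w2[0] not in code
               for w1, w2 in product(code, repeat=2))

# {'AC', 'TC'}: the shifted reads CA, CT, CA, CT all lie outside the set,
# so the code is comma-free; {'AC', 'CA'} fails because AC|AC reads CA.
```

Such a predicate makes it easy to enumerate small comma-free codes exhaustively, though the paper's construction principle is of course more informative than brute force.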
A Model of College Tuition Maximization
ERIC Educational Resources Information Center
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Understanding violations of Gricean maxims in preschoolers and adults
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed. PMID:26191018
2012-03-16
Independent Assessments: DOE's Systems Integrator convenes independent technical reviews to gauge progress toward meeting specific technical targets and to provide technical information necessary for key decisions.
Wagner, Tyler; Vandergoot, Christopher S.; Tyson, Jeff
2011-01-01
Fishery-independent (FI) surveys provide critical information used for the sustainable management and conservation of fish populations. Because fisheries management often requires the effects of management actions to be evaluated and detected within a relatively short time frame, it is important that research be directed toward FI survey evaluation, especially with respect to the ability to detect temporal trends. Using annual FI gill-net survey data for Lake Erie walleyes Sander vitreus collected from 1978 to 2006 as a case study, our goals were to (1) highlight the usefulness of hierarchical models for estimating spatial and temporal sources of variation in catch per effort (CPE); (2) demonstrate how the resulting variance estimates can be used to examine the statistical power to detect temporal trends in CPE in relation to sample size, duration of sampling, and decisions regarding what data are most appropriate for analysis; and (3) discuss recommendations for evaluating FI surveys and analyzing the resulting data to support fisheries management. This case study illustrated that the statistical power to detect temporal trends was low over relatively short sampling periods (e.g., 5–10 years) unless the annual decline in CPE reached 10–20%. For example, if 50 sites were sampled each year, a 10% annual decline in CPE would not be detected with more than 0.80 power until 15 years of sampling, and a 5% annual decline would not be detected with more than 0.8 power for approximately 22 years. Because the evaluation of FI surveys is essential for ensuring that trends in fish populations can be detected over management-relevant time periods, we suggest using a meta-analysis–type approach across systems to quantify sources of spatial and temporal variation. This approach can be used to evaluate and identify sampling designs that increase the ability of managers to make inferences about trends in fish stocks.
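The kind of power calculation discussed above can be approximated by Monte Carlo simulation. The sketch below is a simplified log-linear trend model with illustrative noise parameters, not the hierarchical model of the case study:

```python
import numpy as np

def trend_power(n_years, annual_decline, cv=0.5, n_sims=2000, seed=0):
    """Monte Carlo power to detect a log-linear decline in mean CPE.

    Illustrative sketch only: log CPE = log(100) + slope*year + noise,
    with the OLS slope tested by an approximate |t| > 2 criterion.
    """
    rng = np.random.default_rng(seed)
    years = np.arange(n_years, dtype=float)
    slope_true = np.log(1.0 - annual_decline)   # e.g. 10% decline -> log(0.9)
    detected = 0
    for _ in range(n_sims):
        y = np.log(100.0) + slope_true * years + rng.normal(0, cv, n_years)
        X = np.vstack([np.ones(n_years), years]).T
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        s2 = resid @ resid / (n_years - 2)      # residual variance
        se = np.sqrt(s2 / np.sum((years - years.mean()) ** 2))
        if abs(beta[1] / se) > 2.0:             # rough 5% two-sided criterion
            detected += 1
    return detected / n_sims
```

Consistent with the case study's message, even a 10% annual decline yields low power over a 5-year window but high power over 15-20 years in this toy model.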
Development of the Medical Maximizer-Minimizer Scale.
Scherer, Laura D; Caverly, Tanner J; Burke, James; Zikmund-Fisher, Brian J; Kullgren, Jeffrey T; Steinley, Douglas; McCarthy, Denis M; Roney, Meghan; Fagerlin, Angela
2016-11-01
Medical over- and underutilization are central problems that stand in the way of delivering optimal health care. As a result, one important question is how people decide to take action, versus not, when it comes to their health. The present article proposes and validates a new measure that captures the extent to which individuals are "medical maximizers" who are predisposed to seek health care even for minor problems, versus "medical minimizers" who prefer to avoid medical intervention unless it is necessary. Studies 1-3 recruited participants using Amazon's Mechanical Turk. Study 1 conducted exploratory factor analysis (EFA) to identify items relevant to the proposed construct. In Study 2 confirmatory factor analysis (CFA) was conducted on the identified items, as well as tests of internal, discriminant, and convergent validity. Study 3 examined test-retest reliability of the scale. Study 4 validated the scale in a non-Internet sample. EFA identified 10 items consistent with the proposed construct, and subsequent CFA showed that the 10 items were best understood with a bifactor model that assessed a single underlying construct consistent with medical maximizing-minimizing, with 3 of the 10 items cross-loading on another independent factor. The scale was distinct from hypochondriasis, distrust in medicine, health care access, and health status, and predicted self-reported health care utilization and a variety of treatment preferences. Individuals have general preferences to maximize versus minimize their use of health care, and these preferences are predictive of health care utilization and treatment preferences across a range of health care contexts. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Auctions with Dynamic Populations: Efficiency and Revenue Maximization
NASA Astrophysics Data System (ADS)
Said, Maher
We study a stochastic sequential allocation problem with a dynamic population of privately-informed buyers. We characterize the set of efficient allocation rules and show that a dynamic VCG mechanism is both efficient and periodic ex post incentive compatible; we also show that the revenue-maximizing direct mechanism is a pivot mechanism with a reserve price. We then consider sequential ascending auctions in this setting, both with and without a reserve price. We construct equilibrium bidding strategies in this indirect mechanism where bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes.
Optimal deployment of resources for maximizing impact in spreading processes.
Lokhov, Andrey Y; Saad, David
2017-09-26
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of "influential spreaders" for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. We show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples.
Rousanoglou, Elissavet N; Oskouei, Ali E; Herzog, Walter
2007-01-01
Mechanical properties of skeletal muscles are often studied for controlled, electrically induced, maximal, or supra-maximal contractions. However, many mechanical properties, such as the force-length relationship and force enhancement following active muscle stretching, are quite different for maximal and sub-maximal, or electrically induced and voluntary contractions. Force depression, the loss of force observed following active muscle shortening, has been observed and is well documented for electrically induced and maximal voluntary contractions. Since sub-maximal voluntary contractions are arguably the most important for everyday movement analysis and for biomechanical models of skeletal muscle function, it is important to study force depression properties under these conditions. Therefore, the purpose of this study was to examine force depression following sub-maximal, voluntary contractions. Sets of isometric reference and isometric-shortening-isometric test contractions at 30% of maximal voluntary effort were performed with the adductor pollicis muscle. All reference and test contractions were executed by controlling force or activation using a feedback system. Test contractions included adductor pollicis shortening over 10 degrees, 20 degrees, and 30 degrees of thumb adduction. Force depression was assessed by comparing the steady-state isometric forces (activation control) or average electromyograms (EMGs) (force control) following active muscle shortening with those obtained in the corresponding isometric reference contractions. Force was decreased by 20% and average EMG was increased by 18% in the shortening test contractions compared to the isometric reference contractions. Furthermore, force depression was increased with increasing shortening amplitudes, and the relative magnitudes of force depression were similar to those found in electrically stimulated and maximal contractions. We conclude from these results that force depression occurs in sub-maximal
NASA Astrophysics Data System (ADS)
Mara, Thierry A.; Fajraoui, Noura; Younes, Anis; Delay, Frederick
2015-02-01
We introduce the concept of maximal conditional posterior distribution (MCPD) to assess the uncertainty of model parameters in a Bayesian framework. Although Markov chain Monte Carlo (MCMC) methods are particularly suited for this task, they become challenging with highly parameterized nonlinear models. The MCPD represents the conditional probability distribution function of a given parameter given that the other parameters maximize the conditional posterior density function. Unlike MCMC, which accepts or rejects solutions sampled in the parameter space, the MCPD is calculated through several optimization processes. Model inversion using the MCPD algorithm is particularly useful for highly parameterized problems because the calculations are independent. Consequently, they can be evaluated simultaneously on a multi-core computer. In the present work, the MCPD approach is applied to invert a 2D stochastic groundwater flow problem in which the log-transmissivity field of the medium is inferred from scarce and noisy data. For this purpose, the stochastic field is expanded onto a set of orthogonal functions using a Karhunen-Loève (KL) transformation. Although the prior guess on the stochastic structure (covariance) of the transmissivity field is erroneous, the MCPD inference of the KL coefficients is able to extract relevant inverse solutions.
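The MCPD construction can be sketched as a profile-style maximization: for each fixed value of one parameter, the posterior is maximized over the others, and each such optimization is independent of the rest. A grid-search toy version follows (the two-parameter Gaussian log-posterior in the example is hypothetical, and grid search stands in for the paper's optimizer):

```python
import numpy as np

def mcpd(log_post, grid_i, grid_j):
    """Toy maximal conditional posterior distribution for parameter i:
    for each value t_i on grid_i, report the log-posterior maximized
    over parameter j on grid_j. Each grid point is an independent
    optimization, which is what makes the approach parallelizable."""
    return np.array([max(log_post(ti, tj) for tj in grid_j)
                     for ti in grid_i])
```

For a correlated Gaussian-like log-posterior, the resulting profile peaks at the posterior mode, as expected.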
Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments
NASA Technical Reports Server (NTRS)
Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)
2002-01-01
A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator what conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of Glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institute Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
Reproducibility of diurnal variation in sub-maximal swimming.
Martin, L; Thompson, K
2000-08-01
Swimming training is characterised by the use of early morning and evening training sessions. The purpose of the present study was to investigate if the physiological and kinematic responses to swimming a typical training set are affected by time of day. Seven male collegiate swimmers (age 22 +/- 4 years; height 1.8 +/- 0.1 m; mass 82.1 +/- 4.1 kg) completed a standardised 600 m warm up followed by a 10 x 100 m sub-maximal freestyle set twice a day (06:30-08:00 h and 16:30-20:00 h) on three separate days. Swimming speed was controlled precisely throughout (ratio limits of agreement ×/÷ 1.00) using a new pacing device (Aquapacer, Challenge and Response, Inverurie, Scotland). Oral temperature (To), heart rate (HR), minute ventilation (VE), oxygen uptake (VO2), carbon dioxide expired (VCO2), respiratory exchange ratio (RER), capillary blood lactate (Bla), and glucose (BGL) were measured at rest and post exercise. Stroke rate (SR) and HR were measured during the first nine 100 m repetitions while rating of perceived exertion (RPE) was measured immediately after each 100 m. Significant diurnal variation was found at rest in To, HR, and VO2 on all three days and for VE and VCO2 on two of the days (P<0.05). During the training set no diurnal variation was evident in HR and SR responses or repetition times although RPE values were higher in morning trials compared to evening trials on two of the three days (P < 0.05). Post-exercise significant diurnal variation was found for To and blood glucose for two of the three days (P < 0.05). Therefore, although diurnal variation is evident at rest, there is no subsequent effect on physiological and kinematic responses during a sub-maximal training set following a standardised warm-up.
Tian, Guojing; Wu, Xia; Cao, Ya; Gao, Fei; Wen, Qiaoyan
2016-07-21
It is known that there exist two locally operational settings: local operations with one-way classical communication, and with two-way classical communication. Recently, some sets of maximally entangled states have been constructed in specific dimensional quantum systems that can be locally distinguished only with two-way classical communication. In this paper, we show that the existence of such sets is general, by constructing them in all the remaining quantum systems. Specifically, sets of p or n maximally entangled states are built in the (np - 1) ⊗ (np - 1) quantum system with n ≥ 3 and p a prime number, which completes the picture that such sets exist in every possible dimensional quantum system.
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana
2016-10-20
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_{0} specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_{1/2}, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_{1/2} and generation independent. In this case, the input scalar masses m_{0} may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Price of anarchy is maximized at the percolation threshold.
Skinner, Brian
2015-05-01
When many independent users try to route traffic through a network, the flow can easily become suboptimal as a consequence of congestion of the most efficient paths. The degree of this suboptimality is quantified by the so-called price of anarchy (POA), but so far there are no general rules for when to expect a large POA in a random network. Here I address this question by introducing a simple model of flow through a network with randomly placed congestible and incongestible links. I show that the POA is maximized precisely when the fraction of congestible links matches the percolation threshold of the lattice. Both the POA and the total cost demonstrate critical scaling near the percolation threshold.
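The price-of-anarchy concept can be made concrete with the textbook Pigou network, a standard two-link warm-up example from the selfish-routing literature (this is an illustrative sketch, not the abstract's randomly diluted lattice model):

```python
# Illustrative sketch: price of anarchy (POA) in the two-link Pigou network.
# One link has constant latency 1; the other has load-dependent latency x,
# where x is the fraction of one unit of traffic using it.
def social_cost(x):
    # total cost = x * latency(x) + (1 - x) * 1
    return x * x + (1.0 - x) * 1.0

# Selfish routing: users switch to the congestible link while its latency
# x is below 1, so the equilibrium is x = 1.
equilibrium_cost = social_cost(1.0)

# Social optimum: minimize x^2 + (1 - x) over x in [0, 1] (grid search).
optimal_cost = min(social_cost(i / 1000.0) for i in range(1001))

poa = equilibrium_cost / optimal_cost
print(round(poa, 3))  # 1.333, the classic 4/3 bound for linear latencies
```

In the abstract's setting the congestible links are placed at random, and this ratio peaks when their density hits the percolation threshold.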
An updated version of wannier90: A tool for obtaining maximally-localised Wannier functions
NASA Astrophysics Data System (ADS)
Mostofi, Arash A.; Yates, Jonathan R.; Pizzi, Giovanni; Lee, Young-Su; Souza, Ivo; Vanderbilt, David; Marzari, Nicola
2014-08-01
wannier90 is a program for calculating maximally-localised Wannier functions (MLWFs) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands. The formalism works by minimising the total spread of the MLWFs in real space. This is done in the space of unitary matrices that describe rotations of the Bloch bands at each k-point. As a result, wannier90 is independent of the basis set used in the underlying calculation to obtain the Bloch states. Therefore, it may be interfaced straightforwardly to any electronic structure code. The locality of MLWFs can be exploited to compute band-structure, density of states and Fermi surfaces at modest computational cost. Furthermore, wannier90 is able to output MLWFs for visualisation and other post-processing purposes. Wannier functions are already used in a wide variety of applications. These include analysis of chemical bonding in real space; calculation of dielectric properties via the modern theory of polarisation; and as an accurate and minimal basis set in the construction of model Hamiltonians for large-scale systems, in linear-scaling quantum Monte Carlo calculations, and for efficient computation of material properties, such as the anomalous Hall coefficient. We present here an updated version of wannier90, wannier90 2.0, including minor bug fixes and parallel (MPI) execution for band-structure interpolation and the calculation of properties such as density of states, Berry curvature and orbital magnetisation. wannier90 is freely available under the GNU General Public License from http://www.wannier.org/.
Maximally entangled states of four nonbinary particles
NASA Astrophysics Data System (ADS)
Gaeta, Mario; Klimov, Andrei; Lawrence, Jay
2015-01-01
Systems of four nonbinary particles, with each particle having d ≥ 3 internal states, exhibit maximally entangled states that are inaccessible to four qubits. This breaks the pattern of two- and three-particle systems, in which the existing graph states are equally accessible to binary and nonbinary systems alike. We compare the entanglement properties of these special states (called P states) with those of the more familiar Greenberger-Horne-Zeilinger (GHZ) and cluster states accessible to qubits. The comparison includes familiar entanglement measures, the "steering" of states by projective measurements, and the probability that two such measurements, chosen at random, leave the remaining particles in a Bell state. These comparisons demonstrate not only that P-state entanglement is stronger than the other types but also that it is maximal in a well-defined sense. We prove that GHZ, cluster, and P states represent all possible entanglement classes of four-particle graph states with prime d ≥ 3.
Maximally discordant mixed states of two qubits
Galve, Fernando; Giorgi, Gian Luca; Zambrini, Roberta
2011-01-15
We study the relative strength of classical and quantum correlations, as measured by discord, for two-qubit states. Quantum correlations appear only in the presence of classical correlations, while the reverse is not always true. We identify the family of states that maximize the discord for a given value of the classical correlations and show that the largest attainable discord for mixed states is greater than for pure states. The difference between discord and entanglement is emphasized by the remarkable fact that these states do not maximize entanglement and are, in some cases, even separable. Finally, by random generation of density matrices uniformly distributed over the whole Hilbert space, we quantify the frequency of the appearance of quantum and classical correlations for different ranks.
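The abstract's survey relies on randomly generated two-qubit density matrices. A common concrete choice of "uniform" sampling is the Hilbert-Schmidt measure via the Ginibre construction; the measure used here is an assumption, since the abstract does not specify one:

```python
# Sketch: sampling two-qubit density matrices from the Hilbert-Schmidt
# measure (an assumption; the paper may use a different measure).
import numpy as np

def random_density_matrix(dim=4, seed=None):
    rng = np.random.default_rng(seed)
    # Complex Ginibre matrix G; rho = G G† / tr(G G†) is Hilbert-Schmidt uniform.
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho).real

rho = random_density_matrix(4, seed=0)
# Any valid state has unit trace and is positive semidefinite.
print(np.isclose(np.trace(rho).real, 1.0))
print(bool(np.all(np.linalg.eigvalsh(rho) >= -1e-12)))
```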
Nondecoupling of maximal supergravity from the superstring.
Green, Michael B; Ooguri, Hirosi; Schwarz, John H
2007-07-27
We consider the conditions necessary for obtaining perturbative maximal supergravity in d dimensions as a decoupling limit of type II superstring theory compactified on a (10-d) torus. For dimensions d=2 and d=3, it is possible to define a limit in which the only finite-mass states are the 256 massless states of maximal supergravity. However, in dimensions d ≥ 4, there are infinite towers of additional massless and finite-mass states. These correspond to Kaluza-Klein charges, wound strings, Kaluza-Klein monopoles, or branes wrapping around cycles of the toroidal extra dimensions. We conclude that perturbative supergravity cannot be decoupled from string theory in dimensions d ≥ 4. In particular, we conjecture that pure N=8 supergravity in four dimensions is in the Swampland.
Experimental implementation of maximally synchronizable networks
NASA Astrophysics Data System (ADS)
Sevilla-Escoboza, R.; Buldú, J. M.; Boccaletti, S.; Papo, D.; Hwang, D.-U.; Huerta-Cuellar, G.; Gutiérrez, R.
2016-04-01
Maximally synchronizable networks (MSNs) are acyclic directed networks that maximize synchronizability. In this paper, we investigate the feasibility of transforming networks of coupled oscillators into their corresponding MSNs. By tuning the weights of any given network so as to reach the lowest possible eigenratio λN /λ2, the synchronized state is guaranteed to be maintained across the longest possible range of coupling strengths. We check the robustness of the resulting MSNs with an experimental implementation of a network of nonlinear electronic oscillators and study the propagation of the synchronization errors through the network. Importantly, a method to study the effects of topological uncertainties on the synchronizability is proposed and explored both theoretically and experimentally.
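The quantity being minimized, the Laplacian eigenratio λN/λ2, is easy to compute for small graphs. A minimal sketch (toy graphs, not the paper's electronic-oscillator network):

```python
# Sketch: the synchronizability eigenratio lambda_N / lambda_2 of a graph
# Laplacian, the quantity the MSN construction drives as low as possible.
import numpy as np

def eigenratio(adj):
    lap = np.diag(adj.sum(axis=1)) - adj   # graph Laplacian L = D - A
    w = np.sort(np.linalg.eigvalsh(lap))   # ascending eigenvalues
    return w[-1] / w[1]                    # lambda_N / lambda_2

# Ring of 4 nodes vs. the complete graph K4: the complete graph
# (eigenratio 1) is the most synchronizable undirected example.
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]], dtype=float)
k4 = np.ones((4, 4)) - np.eye(4)
print(round(eigenratio(k4), 6))    # 1.0
print(round(eigenratio(ring), 6))  # 2.0
```

A lower eigenratio means the synchronized state survives over a wider range of coupling strengths.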
Maximal endurance time at VO2max.
Morton, R H; Billat, V
2000-08-01
There has been significant recent interest in the minimal running velocity which elicits VO2max. There also exists a maximal velocity, beyond which the subject becomes exhausted before VO2max is reached. Between these limits, there must be some velocity that permits maximum endurance at VO2max, and this parameter has also been of recent interest. This study was undertaken to model the system and investigate these parameters. We model the bioenergetic process based on a two-component (aerobic and anaerobic) energy system, a two-component (fast and slow) oxygen uptake system, and a linear control system for maximal attainable velocity resulting from declining anaerobic reserves as exercise proceeds. Ten male subjects each undertook four trials in random order, running until exhaustion at velocities corresponding to 90, 100, 120, and 140% of the minimum velocity estimated as being required to elicit their individual VO2max. The model development produces a skewed curve for endurance time at VO2max, with a single maximum. This curve has been successfully fitted to endurance data collected from all 10 subjects (R2 = 0.821, P < 0.001). For this group of subjects, the maximal endurance time at VO2max can be achieved running at a pace corresponding to 88% of the minimal velocity which elicits VO2max as measured in an incremental running test. Average maximal endurance at VO2max is predicted to be 603 s in a total endurance time of 1024 s at this velocity. Endurance time at VO2max can be realistically modeled by a curve that permits estimation of several parameters of interest, such as the minimal running velocity sufficient to elicit VO2max, and the velocity for which endurance at VO2max is the longest.
Profit Maximization Models for Exponential Decay Processes.
1980-08-01
assumptions could easily be analyzed in similar fashion. References: [1] Bensoussan, A., Hurst, E.G. and Näslund, B., Management Applications of Modern... Profit Maximization Models for Exponential Decay Processes, Technical Report, August 1980.
Maximal supersymmetry and B-mode targets
NASA Astrophysics Data System (ADS)
Kallosh, Renata; Linde, Andrei; Wrase, Timm; Yamada, Yusuke
2017-04-01
Extending the work of Ferrara and one of the authors [1], we present dynamical cosmological models of α-attractors with plateau potentials for 3α = 1, 2, 3, 4, 5, 6, 7. These models are motivated by geometric properties of maximally supersymmetric theories: M-theory, superstring theory, and maximal N = 8 supergravity. After a consistent truncation of maximal to minimal supersymmetry in a seven-disk geometry, we perform a two-step procedure: 1) we introduce a superpotential, which stabilizes the moduli of the seven-disk geometry in a supersymmetric minimum, 2) we add a cosmological sector with a nilpotent stabilizer, which breaks supersymmetry spontaneously and leads to a desirable class of cosmological attractor models. These models, with n_s consistent with observational data and with tensor-to-scalar ratio r ≈ 10⁻² - 10⁻³, provide natural targets for future B-mode searches. We relate the issue of stability of inflationary trajectories in these models to tessellations of a hyperbolic geometry.
Maximal respiratory pressures among adolescent swimmers.
Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M
2011-01-01
Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the strength of the respiratory muscles of Olympic junior swim team, at baseline and after a standard physical training; and to determine if there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61 %) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP(cmH2O) = M: 100.4 (± 26.5)/F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7)/F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3)/F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2)/F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify if MIP and MEP could be used as a marker of an athlete's performance.
Formation Control for the MAXIM Mission
NASA Technical Reports Server (NTRS)
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
Maximal acceleration is non-rotating
NASA Astrophysics Data System (ADS)
Page, Don N.
1998-06-01
In a stationary axisymmetric spacetime, the angular velocity of a stationary observer whose acceleration vector is Fermi-Walker transported is also the angular velocity that locally extremizes the magnitude of the acceleration of such an observer. The converse is also true if the spacetime is symmetric under reversing both t and φ together. Thus a congruence of non-rotating acceleration worldlines (NAW) is equivalent to a stationary congruence accelerating locally extremely (SCALE). These congruences are defined completely locally, unlike the case of zero angular momentum observers (ZAMOs), which requires knowledge around a symmetry axis. The SCALE subcase of a stationary congruence accelerating maximally (SCAM) is made up of stationary worldlines that may be considered to be locally most nearly at rest in a stationary axisymmetric gravitational field. Formulae for the angular velocity and other properties of the SCALEs are given explicitly on a generalization of an equatorial plane, infinitesimally near a symmetry axis, and in a slowly rotating gravitational field, including the far-field limit, where the SCAM is shown to be counter-rotating relative to infinity. These formulae are evaluated in particular detail for the Kerr-Newman metric. Various other congruences are also defined, such as a stationary congruence rotating at minimum (SCRAM), and stationary worldlines accelerating radially maximally (SWARM), both of which coincide with a SCAM on an equatorial plane of reflection symmetry. Applications are also made to the gravitational fields of maximally rotating stars, the Sun and the Solar System.
The “Independent Components” of Natural Scenes are Edge Filters
BELL, ANTHONY J.; SEJNOWSKI, TERRENCE J.
2010-01-01
It has previously been suggested that neurons with line and edge selectivities found in primary visual cortex of cats and monkeys form a sparse, distributed representation of natural scenes, and it has been reasoned that such responses should emerge from an unsupervised learning algorithm that attempts to find a factorial code of independent visual features. We show here that a new unsupervised learning algorithm based on information maximization, a nonlinear “infomax” network, when applied to an ensemble of natural scenes produces sets of visual filters that are localized and oriented. Some of these filters are Gabor-like and resemble those produced by the sparseness-maximization network. In addition, the outputs of these filters are as independent as possible, since this infomax network performs Independent Components Analysis or ICA, for sparse (super-gaussian) component distributions. We compare the resulting ICA filters and their associated basis functions, with other decorrelating filters produced by Principal Components Analysis (PCA) and zero-phase whitening filters (ZCA). The ICA filters have more sparsely distributed (kurtotic) outputs on natural scenes. They also resemble the receptive fields of simple cells in visual cortex, which suggests that these neurons form a natural, information-theoretic coordinate system for natural images. PMID:9425547
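The two decorrelating baselines the abstract compares ICA against, PCA whitening and zero-phase (ZCA) whitening, differ only by a rotation. A minimal sketch on random stand-in data (not natural-scene patches):

```python
# Sketch: PCA whitening vs. zero-phase (ZCA) whitening. Both produce unit
# covariance; ZCA's filter C^{-1/2} is symmetric ("zero-phase"), which is
# what distinguishes it from the PCA whitening filter.
import numpy as np

rng = np.random.default_rng(1)
# Random correlated data as a stand-in for image patches.
x = rng.normal(size=(5000, 3)) @ np.array([[2.0, 0.5, 0.0],
                                           [0.0, 1.0, 0.3],
                                           [0.0, 0.0, 0.5]])
c = np.cov(x.T)
evals, v = np.linalg.eigh(c)                 # eigendecomposition of covariance
pca_white = np.diag(evals ** -0.5) @ v.T     # PCA whitening filter
zca_white = v @ np.diag(evals ** -0.5) @ v.T # ZCA: rotate back, so symmetric

for w in (pca_white, zca_white):
    cw = np.cov((x @ w.T).T)                 # covariance after whitening
    print(bool(np.allclose(cw, np.eye(3), atol=1e-8)))
print(bool(np.allclose(zca_white, zca_white.T)))
```

ICA goes one step further than either filter: after whitening, it picks the rotation that makes the outputs maximally independent, which is where the localized, oriented edge filters emerge.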
General conditions for maximal violation of non-contextuality in discrete and continuous variables
NASA Astrophysics Data System (ADS)
Laversanne-Finot, A.; Ketterer, A.; Barros, M. R.; Walborn, S. P.; Coudreau, T.; Keller, A.; Milman, P.
2017-04-01
The contextuality of quantum mechanics can be shown by the violation of inequalities based on measurements of well chosen observables. An important property of such observables is that their expectation value can be expressed in terms of probabilities for obtaining two exclusive outcomes. Examples of such inequalities have been constructed using either observables with a dichotomic spectrum or using periodic functions obtained from displacement operators in phase space. Here we identify the general conditions on the spectral decomposition of observables demonstrating state independent contextuality of quantum mechanics. Our results not only unify existing strategies for maximal violation of state independent non-contextuality inequalities but also lead to new scenarios enabling such violations. Among the consequences of our results is the impossibility of having a state independent maximal violation of non-contextuality in the Peres–Mermin scenario with discrete observables of odd dimensions.
Liu, Ming-Yu; Tuzel, Oncel; Ramalingam, Srikumar; Chellappa, Rama
2014-01-01
We propose a new objective function for clustering. This objective function consists of two components: the entropy rate of a random walk on a graph and a balancing term. The entropy rate favors formation of compact and homogeneous clusters, while the balancing function encourages clusters with similar sizes and penalizes larger clusters that aggressively group samples. We present a novel graph construction for the graph associated with the data and show that this construction induces a matroid--a combinatorial structure that generalizes the concept of linear independence in vector spaces. The clustering result is given by the graph topology that maximizes the objective function under the matroid constraint. By exploiting the submodular and monotonic properties of the objective function, we develop an efficient greedy algorithm. Furthermore, we prove an approximation bound of (1/2) for the optimality of the greedy solution. We validate the proposed algorithm on various benchmarks and show its competitive performances with respect to popular clustering algorithms. We further apply it for the task of superpixel segmentation. Experiments on the Berkeley segmentation data set reveal its superior performances over the state-of-the-art superpixel segmentation algorithms in all the standard evaluation metrics.
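The (1/2) guarantee the abstract cites comes from greedily maximizing a monotone submodular objective under a matroid constraint. Below is a generic greedy sketch; the toy coverage objective and the always-true feasibility rule are stand-ins, not the paper's entropy-rate-plus-balancing function or its graph matroid:

```python
# Sketch: generic greedy maximization of a set function. At each step, add
# the feasible element with the largest marginal gain; for monotone
# submodular objectives under a matroid constraint this achieves at least
# half of the optimum.
def greedy(ground_set, objective, is_feasible, budget):
    chosen = []
    for _ in range(budget):
        best, best_gain = None, 0.0
        for e in ground_set:
            if e in chosen or not is_feasible(chosen + [e]):
                continue
            gain = objective(chosen + [e]) - objective(chosen)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element improves the objective
            break
        chosen.append(best)
    return chosen

# Toy coverage objective (submodular): value of a set of "sensors" is the
# number of distinct elements they cover. These data are illustrative only.
cover = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}, 4: {"d"}}
f = lambda s: len(set().union(*(cover[e] for e in s))) if s else 0
picked = greedy(list(cover), f, lambda s: True, budget=2)
print(sorted(picked))
```

Submodularity (diminishing marginal gains) is what lets a single pass of local choices carry a global guarantee.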
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
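The study's coordination measure rests on a PCA of six signals: fewer dominant components and a lower entropy of the explained-variance distribution indicate tighter coordination. A sketch on random stand-in data (the six signals, the entropy formula's base, and the Kaiser criterion for counting PCs are assumptions, not details given in the abstract):

```python
# Sketch: PCA-based coordination indices. Random data stand in for the six
# cardiorespiratory variables (FeO2, FeCO2, VE, SBP, DBP, HR).
import numpy as np

rng = np.random.default_rng(0)
signals = rng.normal(size=(120, 6))              # 120 time points, 6 variables
z = (signals - signals.mean(0)) / signals.std(0) # standardize each variable
eigvals = np.linalg.eigvalsh(np.cov(z.T))[::-1]  # descending eigenvalues

explained = eigvals / eigvals.sum()              # variance fractions
entropy = -np.sum(explained * np.log(explained)) # information entropy
n_pcs = int(np.sum(eigvals > 1.0))               # Kaiser criterion (assumption)
print(bool(entropy > 0 and 0 <= n_pcs <= 6))
```

In the study's terms, Test 2 showing more PCs, a smaller PC1 eigenvalue, and higher entropy than Test 1 is what is read as reduced cardiorespiratory coordination.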
An information-theoretic analysis of return maximization in reinforcement learning.
Iwata, Kazunori
2011-12-01
We present a general analysis of return maximization in reinforcement learning. This analysis does not require assumptions of Markovianity, stationarity, and ergodicity for the stochastic sequential decision processes of reinforcement learning. Instead, our analysis assumes the asymptotic equipartition property fundamental to information theory, providing a substantially different view from that in the literature. As our main results, we show that return maximization is achieved by the overlap of typical and best sequence sets, and we present a class of stochastic sequential decision processes with the necessary condition for return maximization. We also describe several examples of best sequences in terms of return maximization in the class of stochastic sequential decision processes, which satisfy the necessary condition.
Nagahara, Ryu; Mizutani, Mirai; Matsuo, Akifumi; Kanehisa, Hiroaki; Fukunaga, Tetsuo
2017-09-27
We aimed to clarify the mechanical determinants of sprinting performance during acceleration and maximal speed phases of a single sprint, using ground reaction forces (GRFs). While 18 male athletes performed a 60-m sprint, GRF was measured at every step over a 50-m distance from the start. Variables during the entire acceleration phase were approximated with a fourth-order polynomial. Subsequently, accelerations at 55%, 65%, 75%, 85%, and 95% of maximal speed, and running speed during the maximal speed phase were determined as sprinting performance variables. Ground reaction impulses and mean GRFs during the acceleration and maximal speed phases were selected as independent variables. Stepwise multiple regression analysis selected propulsive and braking impulses as contributors to acceleration at 55%-95% (β > 0.724) and 75%-95% (β > 0.176), respectively, of maximal speed. Moreover, mean vertical force was a contributor to maximal running speed (β = 0.481). The current results demonstrate that exerting a large propulsive force during the entire acceleration phase, suppressing braking force when approaching maximal speed, and producing a large vertical force during the maximal speed phase are essential for achieving greater acceleration and maintaining higher maximal speed, respectively.
Automatic sets and Delone sets
NASA Astrophysics Data System (ADS)
Barbé, A.; von Haeseler, F.
2004-04-01
Automatic sets D ⊂ ℤ^m are characterized by having a finite number of decimations. They are equivalently generated by fixed points of certain substitution systems, or by certain finite automata. As examples, two-dimensional versions of the Thue-Morse, Baum-Sweet, Rudin-Shapiro and paperfolding sequences are presented. We give a necessary and sufficient condition for an automatic set D ⊂ ℤ^m to be a Delone set in ℝ^m. The result is then extended to automatic sets that are defined as fixed points of certain substitutions. The morphology of automatic sets is discussed by means of examples.
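One standard two-dimensional Thue-Morse analogue is generated by a block substitution, as a sketch of the fixed-point construction the abstract mentions (the paper's precise substitution rules may differ):

```python
# Sketch: a 2D Thue-Morse array via block substitution. Each 0 expands to
# the 2x2 block [[0,1],[1,0]] and each 1 to its complement; iterating from
# a single 0 converges to the fixed point of the substitution.
def thue_morse_2d(iterations):
    grid = [[0]]
    for _ in range(iterations):
        new = []
        for row in grid:
            top, bottom = [], []
            for cell in row:
                top += [cell, 1 - cell]
                bottom += [1 - cell, cell]
            new.append(top)
            new.append(bottom)
        grid = new
    return grid

tm = thue_morse_2d(2)
for row in tm:
    print("".join(map(str, row)))
# 0110
# 1001
# 1001
# 0110
```

Equivalently, entry (i, j) is the parity of the total number of 1-bits in i and j, which is what makes the set automatic: membership is decided by a finite automaton reading digits.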
Merging Groups to Maximize Object Partition Comparison.
ERIC Educational Resources Information Center
Klastorin, T. D.
1980-01-01
The problem of objectively comparing two independently determined partitions of N objects or variables is discussed. A similarity measure based on the simple matching coefficient is defined and related to previously suggested measures. (Author/JKS)
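A simple matching coefficient over object pairs is the Rand index, which is in the spirit of the measure discussed; the paper's exact definition and normalization may differ. A minimal sketch:

```python
# Sketch: comparing two partitions by the fraction of object pairs on which
# they agree (both place the pair together, or both place it apart).
from itertools import combinations

def rand_index(p1, p2):
    # p1, p2 map each object to its cluster label.
    objects = list(p1)
    pairs = list(combinations(objects, 2))
    agree = 0
    for a, b in pairs:
        same1 = p1[a] == p1[b]
        same2 = p2[a] == p2[b]
        agree += same1 == same2
    return agree / len(pairs)

# Two illustrative partitions of four objects, differing on object "x".
p1 = {"w": 0, "x": 0, "y": 1, "z": 1}
p2 = {"w": 0, "x": 1, "y": 1, "z": 1}
print(rand_index(p1, p2))  # 0.5
```

Working at the level of pairs makes the comparison independent of how the cluster labels themselves are numbered in each partition.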
Cole, James R; Dodge, William W; Findley, John S; Young, Stephen K; Horn, Bruce D; Kalkwarf, Kenneth L; Martin, Max M; Winder, Ronald L
2015-05-01
This Point/Counterpoint article discusses the transformation of dental practice from the traditional solo/small-group (partnership) model of the 1900s to large Dental Support Organizations (DSO) that support affiliated dental practices by providing nonclinical functions such as, but not limited to, accounting, human resources, marketing, and legal and practice management. Many feel that DSO-managed group practices (DMGPs) with employed providers will become the setting in which the majority of oral health care will be delivered in the future. Viewpoint 1 asserts that the traditional dental practice patterns of the past are shifting as many younger dentists gravitate toward employed positions in large group practices or the public sector. Although educational debt is relevant in predicting graduates' practice choices, other variables such as gender, race, and work-life balance play critical roles as well. Societal characteristics demonstrated by aging Gen Xers and those in the Millennial generation blend seamlessly with the opportunities DMGPs offer their employees. Viewpoint 2 contends that the traditional model of dental care delivery, which allows entrepreneurial practitioners to make decisions in an autonomous setting, is changing, but not to the degree nor as rapidly as Viewpoint 1 professes. Millennials entering the dental profession, with characteristics universally attributed to their generation, see value in the independence and flexibility that a traditional practice allows. Although DMGPs provide dentists one option for practice, several alternative delivery models offer current dentists and future dental school graduates many of the advantages of DMGPs while allowing them to maintain the independence and freedom a traditional practice provides.
Erectile hydraulics: maximizing inflow while minimizing outflow.
Meldrum, David R; Burnett, Arthur L; Dorey, Grace; Esposito, Katherine; Ignarro, Louis J
2014-05-01
Penile rigidity depends on maximizing inflow while minimizing outflow. The aim of this review is to describe the principal factors and mechanisms involved. Erectile quality is the main outcome measure. Data from the pertinent literature were examined to inform our conclusions. Nitric oxide (NO) is the principal factor increasing blood flow into the penis. Penile engorgement and the pelvic floor muscles maintain an adequate erection by impeding outflow of blood by exerting pressure on the penile veins from within and from outside of the penile tunica. Extrinsic pressure by the pelvic floor muscles further raises intracavernosal pressure above maximum inflow pressure to achieve full penile rigidity. Aging and poor lifestyle choices are associated with metabolic impediments to NO production. Aging is also associated with fewer smooth muscle cells and increased fibrosis within the corpora cavernosa, preventing adequate penile engorgement and pressure on the penile veins. Those same penile structural changes occur rapidly following the penile nerve injury that accompanies even "nerve-sparing" radical prostatectomy and are largely prevented in animal models by early chronic use of a phosphodiesterase type 5 (PDE5) inhibitor. Pelvic floor muscles may also decrease in tone and bulk with age, and pelvic floor muscle exercises have been shown to improve erectile function to a similar degree compared with a PDE5 inhibitor in men with erectile dysfunction (ED). Because NO is critical for vascular health and ED is strongly associated with cardiovascular disease, maximal attention should be focused on measures known to increase vascular NO production, including the use of PDE5 inhibitors. Attention should also be paid to early, regular use of PDE5 inhibition to reduce the incidence of ED following penile nerve injury and to assuring normal function of the pelvic floor muscles. These approaches to maximizing erectile function are complementary rather than competitive, as they
Postactivation Potentiation Biases Maximal Isometric Strength Assessment
Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine whether PAP influences isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1 versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If not taken into account, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables. PMID:25133157
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
Electromagnetically induced grating with maximal atomic coherence
Carvalho, Silvania A.; Araujo, Luis E. E. de
2011-10-15
We describe theoretically an atomic diffraction grating that combines an electromagnetically induced grating with a coherence grating in a double-{Lambda} atomic system. With the atom in a condition of maximal coherence between its lower levels, the combined gratings simultaneously diffract both the incident probe beam as well as the signal beam generated through four-wave mixing. A special feature of the atomic grating is that it will diffract any beam resonantly tuned to any excited state of the atom accessible by a dipole transition from its ground state.
Maximizing results in reconstruction of cheek defects.
Mureau, Marc A M; Hofer, Stefan O P
2009-07-01
The face is exceedingly important, as it is the medium through which individuals interact with the rest of society. Reconstruction of cheek defects after trauma or surgery is a continuing challenge for surgeons who wish to reliably restore facial function and appearance. Important in aesthetic facial reconstruction are the aesthetic unit principles, by which the face can be divided in central facial units (nose, lips, eyelids) and peripheral facial units (cheeks, forehead, chin). This article summarizes established options for reconstruction of cheek defects and provides an overview of several modifications as well as tips and tricks to avoid complications and maximize aesthetic results.
Isolating a Cell Maximally Secreting Acetylcholinesterase
1985-04-01
The objective of this research is to isolate a cell line maximally secreting human acetylcholinesterase (AChE, acetylcholine hydrolase, EC 3.1.1.7). Of the cell lines screened for secretion of AChE, the A-204 human rhabdomyosarcoma muscle cell line (described previously) has proven to be the most consistent secretor of the enzyme.
Independent Component Analysis of Textures
NASA Technical Reports Server (NTRS)
Manduchi, Roberto; Portilla, Javier
2000-01-01
A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
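A minimal sketch of the idea, using scikit-learn's FastICA on a toy pair of linearly mixed, heavy-tailed "filter outputs"; the Laplacian sources and the mixing matrix are illustrative assumptions, not the paper's steerable-filter setup:

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)

# Two independent, non-Gaussian "filter responses" (Laplacian-distributed,
# as oriented-filter outputs on natural textures tend to be), mixed linearly.
S = rng.laplace(size=(5000, 2))
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix
X = S @ A.T

ica = FastICA(n_components=2, whiten="unit-variance", random_state=0)
S_ica = ica.fit_transform(X)            # maximally independent directions
S_pca = PCA(n_components=2).fit_transform(X)  # merely decorrelated directions

def excess_kurtosis(Z):
    # Non-Gaussianity proxy: high kurtosis marks the informative marginals.
    Z = (Z - Z.mean(0)) / Z.std(0)
    return (Z ** 4).mean(0) - 3.0

print(excess_kurtosis(S_ica).min(), excess_kurtosis(S_pca).min())
```

ICA recovers the heavy-tailed source marginals (excess kurtosis near 3, the Laplace value), while PCA's rotation mixes them and flattens the marginals toward Gaussian, which is why the product of ICA marginals better approximates the joint density.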
Independent EEG Sources Are Dipolar
Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott
2012-01-01
Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
Cormie, Prue; McGuigan, Michael R; Newton, Robert U
2011-02-01
This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and
Fredriksson, Albin; Hårdemark, Björn; Forsgren, Anders
2015-07-15
Purpose: This paper introduces a method that maximizes the probability of satisfying the clinical goals in intensity-modulated radiation therapy treatments subject to setup uncertainty. Methods: The authors perform robust optimization in which the clinical goals are constrained to be satisfied whenever the setup error falls within an uncertainty set. The shape of the uncertainty set is included as a variable in the optimization. The goal of the optimization is to modify the shape of the uncertainty set in order to maximize the probability that the setup error will fall within the modified set. Because the constraints enforce the clinical goals to be satisfied under all setup errors within the uncertainty set, this is equivalent to maximizing the probability of satisfying the clinical goals. This type of robust optimization is studied with respect to photon and proton therapy applied to a prostate case and compared to robust optimization using an a priori defined uncertainty set. Results: Slight reductions of the uncertainty sets resulted in plans that satisfied a larger number of clinical goals than optimization with respect to a priori defined uncertainty sets, both within the reduced uncertainty sets and within the a priori, nonreduced, uncertainty sets. For the prostate case, the plans taking reduced uncertainty sets into account satisfied 1.4 (photons) and 1.5 (protons) times as many clinical goals over the scenarios as the method taking a priori uncertainty sets into account. Conclusions: Reducing the uncertainty sets enabled the optimization to find better solutions with respect to the errors within the reduced as well as the nonreduced uncertainty sets and thereby achieve higher probability of satisfying the clinical goals. This shows that asking for a little less in the optimization sometimes leads to better overall plan quality.
Maximally Entangled States of a Two-Qubit System
NASA Astrophysics Data System (ADS)
Singh, Manu P.; Rajput, B. S.
2013-12-01
Entanglement has been explored as one of the key resources required for quantum computation. The functional dependence of the entanglement measures on spin correlation functions has been established, the correspondence between the evolution of maximally entangled states (MES) of a two-qubit system and the representation of the SU(2) group has been worked out, and the evolution of MES under a rotating magnetic field has been investigated. Necessary and sufficient conditions for a general two-qubit state to be a maximally entangled state (MES) have been obtained, and a new set of MES constituting a very powerful and reliable eigenbasis (different from magic bases) of two-qubit systems has been constructed. In terms of the MES constituting this basis, Bell states have been generated and all the qubits of the two-qubit system have been obtained. It has been shown that a MES corresponds to a point in the SO(3) sphere and that an evolution of MES corresponds to a trajectory connecting two points on this sphere. Analysing the evolution of MES under a rotating magnetic field, it has been demonstrated that a rotating magnetic field is equivalent to a three-dimensional rotation in real space leading to the evolution of a MES.
Maximizing Information Diffusion in the Cyber-physical Integrated Network.
Lu, Hongliang; Lv, Shaohe; Jiao, Xianlong; Wang, Xiaodong; Liu, Juan
2015-11-11
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated by their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks.
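The contrast the abstract draws (minimizing set size versus accounting for link weights) can be illustrated with a toy greedy selection that scores candidate forwarders by the success probabilities of their links. This is only a sketch of the idea, not the DMPID algorithm; the graph and its link probabilities are made up:

```python
# Toy weighted graph: adjacency map with per-link forwarding probabilities.
graph = {
    0: {1: 0.9, 2: 0.8},
    1: {0: 0.9, 3: 0.4},
    2: {0: 0.8, 3: 0.7, 4: 0.9},
    3: {1: 0.4, 2: 0.7, 4: 0.5},
    4: {2: 0.9, 3: 0.5},
}

def greedy_weighted_dominators(graph):
    """Greedily pick forwarders until every node is covered, preferring
    nodes whose links to still-uncovered neighbours are reliable."""
    uncovered = set(graph)
    chosen = []
    while uncovered:
        def score(v):
            # 1 point for covering itself, plus the summed link
            # probabilities toward still-uncovered neighbours.
            return (v in uncovered) + sum(
                p for u, p in graph[v].items() if u in uncovered)
        v = max(graph, key=score)
        chosen.append(v)
        uncovered.discard(v)
        uncovered -= set(graph[v])
    return chosen

doms = greedy_weighted_dominators(graph)
print(doms)
```

A size-only heuristic would score nodes by raw degree; weighting by link probability instead steers the selection toward forwarders whose transmissions are actually likely to succeed, which is the trade-off the paper balances.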
Maximal power outputs during the Wingate anaerobic test.
Patton, J F; Murphy, M M; Frederick, F A
1985-04-01
The purpose of this study was to determine the resistance loads which elicit maximal values of power output (PO) during performance of the Wingate test (WT). Nineteen male subjects (mean age, 25.1 yrs; mean VO2 max, 3.52 l/min) performed multiple WTs in a random order at resistances ranging from 3.23 to 6.76 joules/pedal rev/kg BW. Tests were carried out on a Monark cycle ergometer modified to permit instantaneous application of resistance. Revolutions were determined by a computer-interfaced frequency counter. The mean resistances eliciting the highest peak power (PP) and mean power (MP) outputs were 5.65 and 5.53 joules/pedal rev/kg BW, respectively (average of 5.59 joules/pedal rev/kg BW). Both PP and MP were significantly higher (15.5% and 13.0%, respectively) using a resistance load of 5.59 compared to the Wingate setting of 4.41 joules/pedal rev/kg BW. The test-retest reliability for PP and MP ranged between 0.91 and 0.93 at both resistance loads. Body weight and thigh volume did not significantly predict the individual resistances eliciting maximal POs. The data suggest that resistance be assigned according to the subject's BW, but that consideration be given to increasing the resistance from that presently used in various laboratories.
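The study's loading unit (joules per pedal revolution per kg body weight) converts to mechanical power as work per revolution times revolutions over elapsed time. A small sketch of that arithmetic; the subject mass and revolution count below are entirely hypothetical:

```python
def wingate_power(resistance_j_per_rev_per_kg, body_mass_kg, revs, seconds):
    """Mean power (W) over an interval: work per pedal revolution
    (resistance unit scaled by body mass) times revolutions, per second."""
    work_per_rev = resistance_j_per_rev_per_kg * body_mass_kg  # joules/rev
    return work_per_rev * revs / seconds

# A hypothetical 75 kg subject at the study's optimal load
# (5.59 J/pedal rev/kg BW) completing 11 revolutions in a 5-s window:
print(round(wingate_power(5.59, 75.0, 11, 5.0), 1))
```

The same function with the standard Wingate setting (4.41 J/pedal rev/kg BW) shows directly how the roughly 27% heavier optimal load raises the computed power for a given cadence.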
Maximally supersymmetric planar Yang-Mills amplitudes at five loops
Bern, Z.; Carrasco, J. J. M.; Johansson, H.; Kosower, D. A.
2007-12-15
We present an Ansatz for the planar five-loop four-point amplitude in maximally supersymmetric Yang-Mills theory in terms of loop integrals. This Ansatz exploits the recently observed correspondence between integrals with simple conformal properties and those found in the four-point amplitudes of the theory through four loops. We explain how to identify all such integrals systematically. We make use of generalized unitarity in both four and D dimensions to determine the coefficients of each of these integrals in the amplitude. Maximal cuts, in which we cut all propagators of a given integral, are an especially effective means for determining these coefficients. The set of integrals and coefficients determined here will be useful for computing the five-loop cusp anomalous dimension of the theory which is of interest for nontrivial checks of the AdS/CFT duality conjecture. It will also be useful for checking a conjecture that the amplitudes have an iterative structure allowing for their all-loop resummation, whose link to a recent string-side computation by Alday and Maldacena opens a new venue for quantitative AdS/CFT comparisons.
ERIC Educational Resources Information Center
Idaho Univ., Moscow.
This guide to independent study in Idaho begins with introductory information on the following aspects of independent study: the Independent Study in Idaho consortium, student eligibility, special needs, starting dates, registration, costs, textbooks and instructional materials, e-mail and faxing, refunds, choosing a course, time limits, speed…
Steps to Independent Living Series.
ERIC Educational Resources Information Center
Lobb, Nancy
This set of six activity books and a teacher's guide is designed to help students from eighth grade to adulthood with special needs to learn independent living skills. The activity books have a reading level of 2.5 and address: (1) "How to Get Well When You're Sick or Hurt," including how to take a temperature, see a doctor, and use medicines…
ERIC Educational Resources Information Center
Giorgis, Cyndi; Johnson, Nancy J.
2002-01-01
Presents annotations of approximately 30 titles grouped in text sets. Defines a text set as five to ten books on a particular topic or theme. Discusses books on the following topics: living creatures; pirates; physical appearance; natural disasters; and the Irish potato famine. (SG)
Maximal liquid bridges between horizontal cylinders
NASA Astrophysics Data System (ADS)
Cooray, Himantha; Huppert, Herbert E.; Neufeld, Jerome A.
2016-08-01
We investigate two-dimensional liquid bridges trapped between pairs of identical horizontal cylinders. The cylinders support forces owing to surface tension and hydrostatic pressure that balance the weight of the liquid. The shape of the liquid bridge is determined by analytically solving the nonlinear Laplace-Young equation. Parameters that maximize the trapping capacity (defined as the cross-sectional area of the liquid bridge) are then determined. The results show that these parameters can be approximated with simple relationships when the radius of the cylinders is small compared with the capillary length. For such small cylinders, liquid bridges with the largest cross-sectional area occur when the centre-to-centre distance between the cylinders is approximately twice the capillary length. The maximum trapping capacity for a pair of cylinders at a given separation is linearly related to the separation when it is small compared with the capillary length. The meniscus slope angle of the largest liquid bridge produced in this regime is also a linear function of the separation. We additionally derive approximate solutions for the profile of a liquid bridge, using the linearized Laplace-Young equation. These solutions analytically verify the above-mentioned relationships obtained for the maximization of the trapping capacity.
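For reference, the linearization the authors mention follows from the standard two-dimensional Laplace-Young balance between surface tension times curvature and hydrostatic pressure; a sketch of the small-slope form (generic symbols, not necessarily the paper's notation):

```latex
% Nonlinear 2D Laplace--Young equation for the interface height h(x):
\gamma \, \frac{h''(x)}{\left(1 + h'(x)^{2}\right)^{3/2}} = \rho g \, h(x)
% For small slopes, |h'(x)| \ll 1, this linearizes to
h''(x) = \frac{h(x)}{\ell_c^{2}},
\qquad
\ell_c = \sqrt{\frac{\gamma}{\rho g}}
% with exponentially decaying solutions h(x) \sim e^{-x/\ell_c}.
```

The decay length of these solutions is the capillary length, which is consistent with the finding that the optimal centre-to-centre cylinder spacing is about twice the capillary length.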
Maximal lactate steady state in Judo
de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio
2014-01-01
Background: the purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo-specific incremental test (JSIT) for aerobic demand evaluation. Methods: to test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold-standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7 cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental specific test for judo to assess the RCT and performed a 30-minute MLSS test, with both tests mimicking the UchiKomi drills. Results: the intensity at RCT measured on the JSIT was not significantly different from that at MLSS (p=0.40). In addition, a high and significant correlation between MLSS and RCT was observed (r=0.90, p=0.002), as well as high agreement. Conclusions: RCT measured during the JSIT is a valid procedure to measure aerobic demand, respecting the ecological validity of judo. PMID:25332923
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that match their foraging habitats. Using high-speed imaging and a spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10(5)-10(6) cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulating polymer-salt interactions to control viscosity can provide a simple mechanism for designing humidity-responsive smart adhesives.
Fractional stereo matching using expectation-maximization.
Xiong, Wei; Chung, Hin Shun; Jia, Jiaya
2009-03-01
In our fractional stereo matching problem, a foreground object with a fractional boundary is blended with a background scene using unknown transparencies. Due to the spatially varying disparities in different layers, one foreground pixel may be blended with different background pixels in stereo images, so the color constancy commonly assumed in traditional stereo matching no longer holds. To tackle this problem, in this paper, we introduce a probabilistic framework constraining the matching of pixel colors, disparities, and alpha values in different layers, and propose an automatic optimization method to solve a maximum a posteriori (MAP) problem using Expectation-Maximization (EM), given only a short-baseline stereo input image pair. Our method encodes the effect of background occlusion by layer blending without requiring a special detection process. The alpha computation process in our unified framework can be regarded as a new approach to natural image matting, which handles appropriately the situation when the background color is similar to that of the foreground object. We demonstrate the efficacy of our method by experimenting with challenging stereo images and making comparisons with state-of-the-art methods.
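As a reminder of the E-step/M-step alternation such MAP estimation builds on, here is a minimal, self-contained EM fit of a two-component 1-D Gaussian mixture. This is a generic illustration on synthetic data only, not the authors' stereo/matting model:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic data: two well-separated Gaussian clusters.
x = np.concatenate([rng.normal(-2.0, 0.5, 300), rng.normal(3.0, 0.8, 300)])

mu = np.array([-1.0, 1.0])    # initial means (deliberately rough)
sigma = np.array([1.0, 1.0])  # initial standard deviations
w = np.array([0.5, 0.5])      # initial mixing weights

for _ in range(50):
    # E-step: posterior responsibility of each component for each sample.
    dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the soft assignments.
    n = r.sum(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
    w = n / n.sum()

print(np.sort(mu))
```

The same alternation generalizes to the paper's setting, where the latent variables are per-pixel layer memberships and alphas rather than cluster assignments.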
FACTORS WHICH CONTROL MAXIMAL GROWTH OF BACTERIA
Sinclair, N. A.; Stokes, J. L.
1962-01-01
Sinclair, N. A. (Washington State University, Pullman) and J. L. Stokes. Factors which control maximal growth of bacteria. J. Bacteriol. 83:1147–1154. 1962.—In a chemically defined medium containing 1% glucose and 0.1% (NH4)2SO4, both of these compounds are virtually exhausted by the growth of Pseudomonas fluorescens. If these carbon, energy, and nitrogen sources are added back to the culture filtrate, maximal growth to the level of the original culture is obtained. This process can be repeated several times with the same results. Eventually, however, the supply of minerals in the culture limits growth. When the nutrient levels are raised to 3% glucose and 0.3% (NH4)2SO4, lack of oxygen and low pH limit growth before the supply of nutrients is exhausted. There is no evidence that specific autoinhibitory substances are produced either in chemically defined or complex nitrogenous media or that physical crowding of the cells limits growth. The results with Escherichia coli are similar to those with P. fluorescens. However, after a few growth cycles aerobically and after only one growth cycle anaerobically, inhibitory substances, probably organic acids, accumulate and limit growth. PMID:13913264
Maximizing strain in miniaturized dielectric elastomer actuators
NASA Astrophysics Data System (ADS)
Rosset, Samuel; Araromi, Oluwaseun; Shea, Herbert
2015-04-01
We present a theoretical model to optimise the unidirectional motion of a rigid object bonded to a miniaturized dielectric elastomer actuator (DEA), a configuration found, for example, in AMI's haptic feedback devices or in our tuneable RF phase shifter. Recent work has shown that unidirectional motion is maximized when the membrane is both anisotropically prestretched and subjected to a dead load in the direction of actuation. However, the use of dead weights for miniaturized devices is clearly highly impractical. Consequently, smaller devices use the membrane itself to generate the opposing force. Since the membrane covers the entire frame, one has the same prestretch condition in the active (actuated) and passive zones. Because the passive zone contracts when the active zone expands, it does not provide a constant restoring force, reducing the maximum achievable actuation strain. We have determined the optimal ratio between the size of the electrode (active zone) and the passive zone, as well as the optimal prestretch in both in-plane directions, in order to maximize the absolute displacement of the rigid object placed at the active/passive border. Our model and experiments show that the ideal active ratio is 50%, with a displacement half of what can be obtained with a dead load. We expand our fabrication process to also show how DEAs can be laser-post-processed to remove carefully chosen regions of the passive elastomer membrane, thereby increasing the actuation strain of the device.
Functional trait diversity maximizes ecosystem multifunctionality.
Gross, Nicolas; Le Bagousse-Pinguet, Yoann; Liancourt, Pierre; Berdugo, Miguel; Gotelli, Nicholas J; Maestre, Fernando T
2017-05-01
Understanding the relationship between biodiversity and ecosystem functioning has been a core ecological research topic over the last decades. Although a key hypothesis is that the diversity of functional traits determines ecosystem functioning, we do not know how much trait diversity is needed to maintain multiple ecosystem functions simultaneously (multifunctionality). Here, we uncovered a scaling relationship between the abundance distribution of two key plant functional traits (specific leaf area, maximum plant height) and multifunctionality in 124 dryland plant communities spread over all continents except Antarctica. For each trait, we found a strong empirical relationship between the skewness and the kurtosis of the trait distributions that cannot be explained by chance. This relationship predicted a strikingly high trait diversity within dryland plant communities, which was associated with a local maximization of multifunctionality. Skewness and kurtosis had a much stronger impact on multifunctionality than other important multifunctionality drivers such as species richness and aridity. The scaling relationship identified here quantifies how much trait diversity is required to maximize multifunctionality locally. Trait distributions can be used to predict the functional consequences of biodiversity loss in terrestrial ecosystems.
Maximal coherence in a generic basis
NASA Astrophysics Data System (ADS)
Yao, Yao; Dong, G. H.; Ge, Li; Li, Mo; Sun, C. P.
2016-12-01
Since quantum coherence is an undoubted characteristic trait of quantum physics, the quantification and application of quantum coherence have been long-standing central topics in quantum information science. Within the framework of the resource theory of quantum coherence proposed recently, a fiducial basis must be preselected for characterizing the quantum coherence in specific circumstances; that is, quantum coherence is a basis-dependent quantity. Therefore, a natural question arises: what are the maximum and minimum coherences contained in a given quantum state with respect to a generic basis? While the minimum case is trivial, it is not so intuitive to identify the basis in which the quantum coherence is maximal. Based on the relative entropy measure of coherence, we identify the particular basis in which the quantum coherence is maximal for a given state, where the Fourier matrix (or, more generally, complex Hadamard matrices) plays a critical role in determining the basis. Intriguingly, though we can prove that the basis associated with the Fourier matrix is a stationary point for optimizing the l1 norm of coherence, numerical simulation shows that it is not a globally optimal choice.
Associations of maximal strength and muscular endurance with cardiovascular risk factors.
Vaara, J P; Fogelholm, M; Vasankari, T; Santtila, M; Häkkinen, K; Kyröläinen, H
2014-04-01
The aim was to study the associations of maximal strength and muscular endurance with single and clustered cardiovascular risk factors. Muscular endurance, maximal strength, cardiorespiratory fitness and waist circumference were measured in 686 young men (25±5 years). Cardiovascular risk factors (plasma glucose, serum high- and low-density lipoprotein cholesterol, triglycerides, blood pressure) were determined. Each risk factor was transformed to a z-score, and the mean of the z-scores formed a clustered cardiovascular risk factor. Muscular endurance was inversely associated with triglycerides, s-LDL-cholesterol, glucose and blood pressure (β=-0.09 to -0.23, p<0.05), and positively with s-HDL-cholesterol (β=0.17, p<0.001), independent of cardiorespiratory fitness. Muscular endurance was negatively associated with the clustered cardiovascular risk factor independent of cardiorespiratory fitness (β=-0.26, p<0.05), whereas maximal strength was not associated with any of the cardiovascular risk factors or the clustered cardiovascular risk factor independent of cardiorespiratory fitness. Furthermore, cardiorespiratory fitness was inversely associated with triglycerides, s-LDL-cholesterol and the clustered cardiovascular risk factor (β=-0.14 to -0.24, p<0.005), and positively with s-HDL-cholesterol (β=0.11, p<0.05), independent of muscular fitness. This cross-sectional study demonstrated that in young men muscular endurance and cardiorespiratory fitness were independently associated with the clustering of cardiovascular risk factors, whereas maximal strength was not. © Georg Thieme Verlag KG Stuttgart · New York.
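The clustering step described above (z-scoring each risk factor, then averaging across factors per subject) can be sketched in a few lines. The factor names are illustrative, and note that in practice a protective factor such as HDL cholesterol would typically be sign-inverted before averaging:

```python
import statistics

def z_scores(values):
    """Standardize a list of measurements to zero mean, unit SD."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def clustered_risk(risk_factors):
    """Mean of per-factor z-scores for each subject.

    risk_factors: dict mapping factor name -> list of per-subject values.
    Returns one clustered cardiovascular risk score per subject.
    """
    standardized = [z_scores(vals) for vals in risk_factors.values()]
    # Each column of the transposed list holds one subject's z-scores.
    return [statistics.mean(col) for col in zip(*standardized)]
```

Subjects with consistently above-average values across factors receive positive clustered scores, below-average subjects negative ones.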
Beyond "utilitarianism": maximizing the clinical impact of moral judgment research.
Rosas, Alejandro; Koenigs, Michael
2014-01-01
The use of hypothetical moral dilemmas--which pit utilitarian considerations of welfare maximization against emotionally aversive "personal" harms--has become a widespread approach for studying the neuropsychological correlates of moral judgment in healthy subjects, as well as in clinical populations with social, cognitive, and affective deficits. In this article, we propose that a refinement of the standard stimulus set could provide an opportunity to more precisely identify the psychological factors underlying performance on this task, and thereby enhance the utility of this paradigm for clinical research. To test this proposal, we performed a re-analysis of previously published moral judgment data from two clinical populations: neurological patients with prefrontal brain damage and psychopathic criminals. The results provide intriguing preliminary support for further development of this assessment paradigm.
Ridge network detection in crumpled paper via graph density maximization.
Hsu, Chiou-Ting; Huang, Marvin
2012-10-01
Crumpled sheets of paper tend to exhibit a specific and complex structure, which physicists describe as ridge networks. Existing literature shows that automating ridge network detection in crumpled paper is very challenging because of this complex structure and measurement distortion. In this paper, we propose to model the ridge network as a weighted graph and formulate ridge network detection as an optimization problem in terms of graph density. First, we detect a set of graph nodes and then determine the edge weight between each pair of nodes to construct a complete graph. Next, we define a graph density criterion and formulate the detection problem as determining a subgraph with maximal graph density. Further, we propose to refine the density criterion by incorporating pairwise connectivity, to improve the connectivity of the detected ridge network. Our experimental results show that, with the density criterion, the proposed method effectively automates ridge network detection.
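The authors' criterion additionally includes a pairwise-connectivity term; as a minimal sketch of the plain weighted-density part only, Charikar-style greedy peeling (an assumed stand-in, not the paper's actual optimizer) repeatedly removes the weakest node and keeps the densest intermediate subgraph:

```python
def densest_subgraph(weights):
    """Greedy peeling heuristic for the maximum-density subgraph.

    weights: dict mapping frozenset({u, v}) -> edge weight.
    Density = (sum of edge weights) / (number of nodes).
    Returns (best_density, best_node_set).
    """
    nodes = set().union(*weights)
    deg = {u: 0.0 for u in nodes}
    for e, w in weights.items():
        for u in e:
            deg[u] += w
    edges = dict(weights)
    best = (sum(edges.values()) / len(nodes), set(nodes))
    while len(nodes) > 1:
        u = min(nodes, key=deg.get)          # peel the weakest node
        nodes.remove(u)
        for e in [e for e in edges if u in e]:
            w = edges.pop(e)
            for v in e:
                if v in nodes:
                    deg[v] -= w
        density = sum(edges.values()) / len(nodes)
        if density > best[0]:
            best = (density, set(nodes))
    return best
```

On a triangle of unit-weight edges with a weakly attached pendant node, peeling drops the pendant first and correctly reports the triangle as densest.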
Accurate and efficient maximal ball algorithm for pore network extraction
NASA Astrophysics Data System (ADS)
Arand, Frederick; Hesser, Jürgen
2017-04-01
The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
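The core MB idea can be illustrated on a binary grid: each pore cell's distance to the nearest solid defines an inscribed ball, and balls fully contained in a larger ball are discarded. The brute-force sketch below is conceptual only; the paper's contribution is precisely to do this accurately (subvoxel distance field) and efficiently, which this version does not attempt:

```python
from math import hypot

def distance_field(grid):
    """Exhaustive Euclidean distance from each pore cell (1) to the
    nearest solid cell (0). Returns {(i, j): radius}."""
    solids = [(i, j) for i, row in enumerate(grid)
              for j, v in enumerate(row) if v == 0]
    return {(i, j): min(hypot(i - si, j - sj) for si, sj in solids)
            for i, row in enumerate(grid)
            for j, v in enumerate(row) if v == 1}

def maximal_balls(dist):
    """Keep only balls not contained in a larger ball: ball A (center a,
    radius ra) lies inside ball B iff dist(a, b) + ra <= rb."""
    kept = []
    for c, r in sorted(dist.items(), key=lambda kv: -kv[1]):
        contained = any(hypot(c[0] - kc[0], c[1] - kc[1]) + r <= kr
                        for kc, kr in kept)
        if not contained:
            kept.append((c, r))
    return kept
```

On a 3x3 pore region surrounded by solid, the surviving balls are the large central ball plus the four corner balls; the edge-center balls are absorbed.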
Symmetry-adapted Wannier functions in the maximal localization procedure
NASA Astrophysics Data System (ADS)
Sakuma, R.
2013-06-01
A procedure to construct symmetry-adapted Wannier functions in the framework of the maximally localized Wannier function approach [Marzari and Vanderbilt, Phys. Rev. B 56, 12847 (1997); Souza, Marzari, and Vanderbilt, Phys. Rev. B 65, 035109 (2001)] is presented. In this scheme, the minimization of the spread functional of the Wannier functions is performed with constraints derived from symmetry properties of the specified set of Wannier functions and the Bloch functions used to construct them; therefore, one can obtain a solution that does not necessarily yield the global minimum of the spread functional. As a test of this approach, results for atom-centered Wannier functions in GaAs and Cu are presented.
Text-Independent, Open-Set Speaker Recognition
1996-03-01
w(k) = 1 + (L/2) sin(πk/L)   (1), where k (1 ≤ k ≤ L) is the index of the cepstral coefficients. 2.2.2.4 RASTA. Similar to liftering, the RelAtive SpecTrA (RASTA) process is a means of reducing the cepstral coefficients' sensitivity to noise [24][25]. Hermansky et al. [24] describe the RASTA process as the... frame spectral changes [24]. In addition to compensating for the time-varying channel bias, RASTA processing also removes the global mean of the
An Approach to Keeping Independent Colleges Independent.
ERIC Educational Resources Information Center
Northwest Area Foundation, St. Paul, Minn.
As a result of the financial difficulties faced by independent colleges in the northwestern United States, the Northwest Area Foundation in 1972 surveyed the administrations of 80 private colleges to get a profile of the colleges, a list of their current problems, and some indication of how the problems might be approached. The three top problems…
Native Nations of Quebec: Independence within Independence?
ERIC Educational Resources Information Center
Williams, Paul
1995-01-01
Aboriginal nations oppose the separation of Quebec from Canada because they favor confederations, multiple international boundaries present jurisdictional nightmares, federal programs might disappear, and Quebec's history of aggression against Aboriginal peoples plus the ethnic nature of its nationalism suggest an independent Quebec is a potential…
Necessary and Sufficient Condition for Quantum State-Independent Contextuality.
Cabello, Adán; Kleinmann, Matthias; Budroni, Costantino
2015-06-26
We solve the problem of whether a set of quantum tests reveals state-independent contextuality and use this result to identify the simplest set of the minimal dimension. We also show that identifying state-independent contextuality graphs [R. Ramanathan and P. Horodecki, Phys. Rev. Lett. 112, 040404 (2014)] is not sufficient for revealing state-independent contextuality.
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design
Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many comparative-genomics applications that identify over-represented segments. When numerous putative motifs are predicted from a collection of genome-wide data, their pairwise similarities can be represented as a large graph connecting the motifs. An efficient clustering algorithm is then needed to group motifs that belong together, separate motifs that do not, and discard spurious ones. In this work, we propose a new motif clustering algorithm, CLIMP, based on maximal cliques and accelerated by parallelization. Using a synthetic motif dataset from the JASPAR database, a set of putative motifs from a phylogenetic footprinting dataset, and a set of putative motifs from a ChIP dataset to compare the performance of CLIMP with that of two other high-performance algorithms, we find that CLIMP mostly outperforms both on all three datasets, so it can be a useful complement to the clustering procedures in genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html. PMID:27487245
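The clique-enumeration step at the heart of such a method can be illustrated with a plain Bron-Kerbosch recursion (a textbook algorithm; CLIMP's actual implementation, its parallelization, and its similarity-graph construction are not reproduced here):

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques.

    adj: dict mapping node -> set of neighbours (undirected graph).
    Yields each maximal clique as a frozenset of nodes.
    """
    def expand(r, p, x):
        # r: current clique; p: candidates; x: already-processed nodes.
        if not p and not x:
            yield frozenset(r)          # r cannot be extended: maximal
            return
        for v in list(p):
            yield from expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    yield from expand(set(), set(adj), set())
```

In a motif-clustering setting, each node would be a putative motif, each edge a sufficiently high similarity score, and each maximal clique a candidate motif cluster.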
With age a lower individual breathing reserve is associated with a higher maximal heart rate.
Burtscher, Martin; Gatterer, Hannes; Faulhaber, Martin; Burtscher, Johannes
2017-09-14
Maximal heart rate (HRmax) declines linearly with increasing age. Regular exercise training is thought to partly prevent this decline, whereas sex and habitual physical activity do not. High exercise capacity is associated with a high cardiac output (HR × stroke volume) and high ventilatory requirements. Given the close cardiorespiratory coupling, we hypothesized that the individual ventilatory response to maximal exercise might be associated with the age-related HRmax. Retrospective analyses were conducted on the results of 129 consecutively performed routine cardiopulmonary exercise tests. The study sample comprised healthy subjects of both sexes across a broad age range (20-86 years). Maximal values of power output, minute ventilation, oxygen uptake and heart rate were assessed by incremental cycle spiroergometry. Linear multivariate regression analysis revealed that, in addition to age, the individual breathing reserve at maximal exercise was independently predictive of HRmax. A lower breathing reserve, due to a high ventilatory demand and/or a low ventilatory capacity and more pronounced at higher age, was associated with higher HRmax. Age explained 72% of the observed variance in HRmax, which improved to 83% when the variable "breathing reserve" was entered. These findings indicate an independent association between the breathing reserve at maximal exercise and maximal heart rate, i.e., a low individual breathing reserve is associated with a higher age-related HRmax. A deeper understanding of this association will require investigation in a more physiological setting. Copyright © 2017 Elsevier B.V. All rights reserved.
Bell inequalities for multipartite qubit quantum systems and their maximal violation
NASA Astrophysics Data System (ADS)
Li, Ming; Fei, Shao-Ming
2012-11-01
We present a set of Bell inequalities for multiqubit quantum systems. These Bell inequalities are shown to be able to detect multiqubit entanglement better than previous Bell inequalities such as Werner-Wolf-Zukowski-Brukner ones. Computable formulas are presented for calculating the maximal violations of these Bell inequalities for any multiqubit states.
Independent component analysis in spiking neurons.
Savin, Cristina; Joshi, Prashant; Triesch, Jochen
2010-04-22
Although models based on independent component analysis (ICA) have been successful in explaining various properties of sensory coding in the cortex, it remains unclear how networks of spiking neurons using realistic plasticity rules can realize such computation. Here, we propose a biologically plausible mechanism for ICA-like learning with spiking neurons. Our model combines spike-timing dependent plasticity and synaptic scaling with an intrinsic plasticity rule that regulates neuronal excitability to maximize information transmission. We show that a stochastically spiking neuron learns one independent component for inputs encoded either as rates or using spike-spike correlations. Furthermore, different independent components can be recovered, when the activity of different neurons is decorrelated by adaptive lateral inhibition.
Classification with asymmetric label noise: Consistency and maximal denoising
Blanchard, Gilles; Flaska, Marek; Handy, Gregory; Pozzi, Sara; Scott, Clayton
2016-09-20
In many real-world classification problems, the labels of training examples are randomly corrupted. Most previous theoretical work on classification with label noise assumes that the two classes are separable, that the label noise is independent of the true class label, or that the noise proportions for each class are known. In this work, we give conditions that are necessary and sufficient for the true class-conditional distributions to be identifiable. These conditions are weaker than those analyzed previously, and allow for the classes to be nonseparable and the noise levels to be asymmetric and unknown. The conditions essentially state that a majority of the observed labels are correct and that the true class-conditional distributions are “mutually irreducible,” a concept we introduce that limits the similarity of the two distributions. For any label noise problem, there is a unique pair of true class-conditional distributions satisfying the proposed conditions, and we argue that this pair corresponds in a certain sense to maximal denoising of the observed distributions. Our results are facilitated by a connection to “mixture proportion estimation,” which is the problem of estimating the maximal proportion of one distribution that is present in another. We establish a novel rate of convergence result for mixture proportion estimation, and apply this to obtain consistency of a discrimination rule based on surrogate loss minimization. Experimental results on benchmark data and a nuclear particle classification problem demonstrate the efficacy of our approach. MSC 2010 subject classifications: Primary 62H30; secondary 68T10. Keywords and phrases: Classification, label noise, mixture proportion estimation, surrogate loss, consistency.
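The maximal mixture proportion invoked above has a simple closed form in the discrete case: if f = κh + (1-κ)g for some distribution g ≥ 0, then κ ≤ f(x)/h(x) at every outcome, so the maximal κ is the minimum of those ratios. A sketch of that population-level identity (the paper treats general distributions estimated from finite samples, which is the hard part):

```python
def max_mixture_proportion(f, h):
    """Maximal kappa such that f = kappa*h + (1-kappa)*g for some
    valid distribution g, with f and h given as dicts mapping
    discrete outcomes to probabilities."""
    kappa = min(f.get(x, 0.0) / p for x, p in h.items() if p > 0)
    return min(kappa, 1.0)
```

For example, f = {a: 0.2, b: 0.8} contains at most a 0.4 proportion of the uniform h = {a: 0.5, b: 0.5}, and any distribution contains proportion 1.0 of itself.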
Hwang, Won-Jeong; Kim, Jung-Hyun; Jeon, Seo-Hyun; Chung, Yijung
2015-01-01
[Purpose] This study aimed to examine the relationship between maximal lateral reaching distance on the affected side and weight shifting, using the Multi-directional Reach Test, in persons with stroke. [Subjects] Fifty-one chronic stroke participants were recruited from two rehabilitation hospitals. This study administered the Berg Balance Scale (BBS), Timed Up-and-Go (TUG), Trunk Impairment Scale (TIS), and Modified Barthel Index (MBI), and measured maximal reaching distances in different directions. [Results] The maximal lateral reaching distance on the affected side was correlated with the BBS (r=0.571), TUG (r=−0.478), TIS (r=0.561), and MBI scores (r=0.499), the lateral reaching distance on the non-affected side (r=0.785), the maximal backward reaching distance (r=0.723), and the maximal forward reaching distance (r=0.673). The maximal reaching distance on the affected side was also affected by that on the non-affected side, in addition to the maximal backward reaching distance and MBI score. The final model of the stepwise multiple regression explained 69.5% of the variance. [Conclusion] Maximal lateral reaching distance on the affected side, as determined by the Multi-directional Reach Test, is a good method of assessing functional performance in stroke patients. Data regarding maximal reaching distance on the non-affected side can be used to measure functional impairment on the affected side in clinical settings. PMID:26504275
Polycrystalline configurations that maximize electrical resistivity
NASA Astrophysics Data System (ADS)
Nesi, Vincenzo; Milton, Graeme W.
A lower bound on the effective conductivity tensor of polycrystalline aggregates formed from a single basic crystal of conductivity σ was recently established by Avellaneda, Cherkaev, Lurie and Milton. The bound holds for any basic crystal, but for isotropic aggregates of a uniaxial crystal, the bound is achieved by a sphere assemblage model of Schulgasser. This left open the question of attainability of the bound when the crystal is not uniaxial. The present work establishes that the bound is always attained by a rather large class of polycrystalline materials. These polycrystalline materials, with maximal electrical resistivity, are constructed by sequential lamination of the basic crystal and rotations of itself on widely separated length scales. The analysis is facilitated by introducing a tensor S = σ0(σ0I + σ)^-1, where σ0 > 0 is chosen so that Tr S = 1. This tensor S is related to the electric field in the optimal polycrystalline configurations.
Maximizing profitability in a hospital outpatient pharmacy.
Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A
1989-07-01
This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue, including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales, are described. Programs to maximize existing revenue sources, such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic, and increasing HMO prescription volumes, are discussed. A method used to reduce drug expenditures is also presented. By minimizing expenses and increasing revenues for the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.
Dispatch Scheduling to Maximize Exoplanet Detection
NASA Astrophysics Data System (ADS)
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
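A dispatch scheduler of the kind described reduces to "score every target now, observe the argmax". The weighting terms below (priority, overdue-ness relative to the desired cadence, an altitude cutoff) are illustrative assumptions, not MINERVA's actual merit function:

```python
import math

def dispatch(targets, now):
    """Pick the next observation target by a weighting function.

    targets: list of dicts with illustrative fields: 'priority',
    'last_obs' (time of last observation), 'cadence' (desired revisit
    interval), 'altitude' (degrees above horizon).
    Returns the observable target with the highest weight, or None.
    """
    def weight(t):
        if t["altitude"] < 20.0:        # below the horizon limit
            return -math.inf
        overdue = (now - t["last_obs"]) / t["cadence"]
        return t["priority"] * overdue  # favour high-priority, overdue targets
    best = max(targets, key=weight)
    return best if weight(best) > -math.inf else None
```

Because the weights are recomputed at each call, the schedule stays dynamic: a target that just set, or was just observed, naturally drops out of contention on the next dispatch.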
Multipartite maximally entangled states in symmetric scenarios
NASA Astrophysics Data System (ADS)
González-Guillén, Carlos E.
2012-08-01
We consider the class of (N+1)-partite states suitable for protocols where there is a powerful party, the authority, and the other N parties play the same role, namely, the state of their joint system lies in the symmetric Hilbert space. We show that, within this scenario, there is a "maximally entangled state" that can be transformed into any other state by a local operations and classical communication (LOCC) protocol. In addition, we show how to use the protocol efficiently, including the construction of the state, and discuss security issues for possible applications to cryptographic protocols. As an immediate consequence we recover a sequential protocol that implements the 1-to-N symmetric cloning.
Robust determination of maximally localized Wannier functions
NASA Astrophysics Data System (ADS)
Cancès, Éric; Levitt, Antoine; Panati, Gianluca; Stoltz, Gabriel
2017-02-01
We propose an algorithm to determine maximally localized Wannier functions (MLWFs). This algorithm, based on recent theoretical developments, does not require any physical input such as initial guesses for the Wannier functions, unlike popular schemes based on the projection method. We discuss how the projection method can fail on fine grids when the initial guesses are too far from MLWFs. We demonstrate that our algorithm is able to find localized Wannier functions through tests on two-dimensional systems, simplified models of semiconductors, and realistic DFT systems by interfacing with the wannier90 code. We also test our algorithm on the Haldane and Kane-Mele models to examine how it fails in the presence of topological obstructions.
Holographic equipartition and the maximization of entropy
NASA Astrophysics Data System (ADS)
Krishna, P. B.; Mathew, Titus K.
2017-09-01
The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf and N_bulk are the numbers of degrees of freedom on the horizon and in the bulk of the Universe, respectively. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
Characterizing maximally singular phase-space distributions
NASA Astrophysics Data System (ADS)
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states because of its singular behavior. In this work, we characterize the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this light, we reconsider methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. The treatment is supported with comprehensive examples and counterexamples.
Zagatto, A; Redkva, P; Loures, J; Kalva Filho, C; Franco, V; Kaminagakura, E; Papoti, M
2011-12-01
The aims of this study were: (i) to measure energy system contributions in the maximal anaerobic running test (MART); and (ii) to verify any correlation between MART and maximal accumulated oxygen deficit (MAOD). Eleven members of the armed forces were recruited for this study. Participants performed the MART and MAOD tests, both on a treadmill. MART consisted of intermittent exercise, 20 s of effort with 100 s of recovery after each effort period. Energy system contributions in MART were determined from excess post-exercise oxygen consumption, the lactate response, and oxygen uptake measurements. MAOD was determined from five submaximal intensities and one supramaximal intensity corresponding to 120% of maximal oxygen uptake intensity. Energy system contributions were 65.4±1.1% aerobic, 29.5±1.1% anaerobic alactic, and 5.1±0.5% anaerobic lactic over the whole test, while during the effort periods alone the anaerobic contribution was 73.5±1.0%. Maximal power in MART corresponded to 111.25±1.33 mL/kg/min but did not significantly correlate with MAOD (4.69±0.30 L and 70.85±4.73 mL/kg). We conclude that the anaerobic alactic system is the main energy system in MART efforts and that this test did not significantly correlate with MAOD.
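The bookkeeping behind the reported percentages is a normalization of the three energy-system components; the component energies themselves would come from oxygen uptake during exercise (aerobic), the fast EPOC component (anaerobic alactic), and net lactate accumulation (anaerobic lactic). A sketch with illustrative inputs:

```python
def energy_contributions(aerobic_kj, alactic_kj, lactic_kj):
    """Relative contribution of each energy system, as percentages.

    Inputs are the energy equivalents (kJ) attributed to each system;
    how those are measured is described in the abstract, not here.
    """
    total = aerobic_kj + alactic_kj + lactic_kj
    return {"aerobic": 100 * aerobic_kj / total,
            "alactic": 100 * alactic_kj / total,
            "lactic": 100 * lactic_kj / total}
```

Feeding in components proportioned like the study's whole-test averages reproduces the reported 65.4/29.5/5.1 split.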
ERIC Educational Resources Information Center
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
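The Gauss/Laplace parallel invoked here is the standard maximum-entropy calculation: fixing the variance yields the Gauss law, while fixing the mean absolute deviation yields the Laplace law. In the latter case one solves

```latex
\max_{p}\; -\int p(x)\,\ln p(x)\,\mathrm{d}x
\quad \text{subject to} \quad
\int p(x)\,\mathrm{d}x = 1,
\qquad
\int |x-\mu|\,p(x)\,\mathrm{d}x = b ,
```

and stationarity of the Lagrangian forces \(p(x) \propto e^{-\lambda |x-\mu|}\), i.e. the Laplace density \(p(x) = \tfrac{1}{2b}\, e^{-|x-\mu|/b}\); replacing the second constraint with a fixed variance \(\sigma^2\) yields the Gauss density instead.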
Innovative Conference Curriculum: Maximizing Learning and Professionalism
ERIC Educational Resources Information Center
Hyland, Nancy; Kranzow, Jeannine
2012-01-01
This action research study evaluated the potential of an innovative curriculum to move 73 graduate students toward professional development. The curriculum was grounded in the professional conference and utilized the motivation and expertise of conference presenters. This innovation required students to be more independent, act as a critical…
Seizures and Teens: Maximizing Health and Safety
ERIC Educational Resources Information Center
Sundstrom, Diane
2007-01-01
As parents and caregivers, their job is to help their children become happy, healthy, and productive members of society. They try to balance the desire to protect their children with their need to become independent young adults. This can be a struggle for parents of teens with seizures, since there are so many challenges they may face. Teenagers…
NASA Astrophysics Data System (ADS)
Shi, Ronghua; Lv, Geli; Wang, Yuan; Huang, Dazu; Guo, Ying
2013-02-01
An improved framework of quantum secret sharing (QSS) is designed, structurally based on the Chinese Remainder Theorem (CRT), via non-maximal entanglement analysis. In this CRT-based QSS, the secret is divided and then allotted to two or more sharers according to independent shadows obtained from the CRT in a finite field. The secret can be restored jointly by legal participants using partial non-maximal entanglement analysis in independent Hilbert spaces. The security is guaranteed by the secret dividing-and-recovering process based on the CRT, along with the entanglement channels established beforehand. This provides an alternative technique for secret transmission in complex quantum computation networks, where the CRT is conducted entirely among legal participants.
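The classical backbone of such a scheme, dividing a secret into residues under pairwise-coprime moduli and recovering it via the CRT, can be sketched in a few lines. This is a toy classical illustration only; the moduli, the secret, and the helper name `crt_recover` are made up, and the quantum entanglement layer described above is not modeled:

```python
from functools import reduce

def crt_recover(shadows, moduli):
    """Recombine CRT shadows into the secret (toy classical step only)."""
    M = reduce(lambda a, b: a * b, moduli)
    total = 0
    for s_i, m_i in zip(shadows, moduli):
        M_i = M // m_i
        # pow(M_i, -1, m_i) is the modular inverse (Python 3.8+).
        total += s_i * M_i * pow(M_i, -1, m_i)
    return total % M

# Dealer side: the secret must be smaller than the product of the
# pairwise-coprime moduli for recovery to be unique.
moduli = [7, 11, 13]              # pairwise coprime, product 1001
secret = 345
shadows = [secret % m for m in moduli]
recovered = crt_recover(shadows, moduli)
assert recovered == secret
```

Any subset of sharers whose moduli multiply to more than the secret can reconstruct it, which is what makes the CRT a natural dividing-and-recovering primitive.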
Complex attentional control settings.
Parrott, Stacey E; Levinthal, Brian R; Franconeri, Steven L
2010-12-01
The visual system prioritizes information through a variety of mechanisms, including "attentional control settings" that specify features (e.g., colour) that are relevant to current goals. Recent work shows that these control settings may be more complex than previously thought, such that participants can monitor for independent features at different locations (Adamo, Pun, Pratt, & Ferber, 2008). However, this result leaves unclear whether these control settings affect early attentional selection or later target processing. We dissociated between these possibilities in two ways. In Experiment 1, participants were asked to determine whether a target object, which was preceded by an uninformative cue, matched one of two target templates (e.g., a blue vertical object or a green horizontal object). Participants monitored for independent features in the same location, but in different objects, which should reduce the effectiveness of the control setting if it is due to early attentional selection, but not if it is due to later target processing. In Experiment 2, we removed the ability of the cue to prime the target identity, which makes the opposite prediction. Together, the results suggest that complex attentional control settings primarily affect later target identity processing, and not early attentional selection.
Survival Analysis in Amputees Based on Physical Independence Grade Achievement
Stineman, Margaret G.; Kurichi, Jibby E.; Kwong, Pui L.; Maislin, Greg; Reker, Dean M.; Bruce Vogel, W.; Prvu-Bettger, Janet A.; Bidelspach, Douglas E.; Bates, Barbara E.
2010-01-01
Background Survival implications of achieving different grades of physical independence after lower extremity amputation are unknown. Objectives To identify thresholds of physical independence achievement associated with improved 6-month survival and to identify and compare other risk factors after removing the influence of the grade achieved. Design Data were combined from 8 administrative databases. Grade was measured on the basis of 13 individual self-care and mobility activities measured at inpatient rehabilitation discharge. Setting Ninety-nine US Department of Veterans Affairs Medical Centers. Patients Retrospective longitudinal cohort study of 2616 veterans who underwent lower extremity amputation and subsequent inpatient rehabilitation between October 1, 2002, and September 30, 2004. Main Outcome Measure Cumulative 6-month survival after rehabilitation discharge. Results The 6-month survival rate (95% confidence interval [CI]) for those at grade 1 (total assistance) was 73.5% (70.5%-76.2%). The achievement of grade 2 (maximal assistance) led to the largest incremental improvement in prognosis, with survival increasing to 91.1% (95% CI, 85.6%-94.5%). In amputees who remained at grade 1, the 30-day hazard ratio for survival compared with grade 6 (independent) was 43.9 (95% CI, 10.8-278.2), sharply decreasing with time. Whereas metastatic cancer and hemodialysis remained significantly associated with reduced survival (both P ≤ .001), anatomical amputation level was not significant when rehabilitation discharge grade and other diagnostic conditions were considered. Conclusions Even a small improvement to grade 2 in the most severely impaired amputees resulted in better 6-month survival. Health care systems must plan appropriate interdisciplinary treatment strategies for both medical and functional issues after amputation. PMID:19528388
Romano, Raffaele; Loock, Peter van
2010-07-15
Quantum teleportation enables deterministic and faithful transmission of quantum states, provided a maximally entangled state is preshared between sender and receiver, and a one-way classical channel is available. Here, we prove that these resources are not only sufficient, but also necessary, for deterministically and faithfully sending quantum states through any fixed noisy channel of maximal rank, when a single use of the channel is admitted. In other words, for this family of channels, there are no other protocols, based on different (and possibly cheaper) sets of resources, capable of replacing quantum teleportation.
American Independence. Fifth Grade.
ERIC Educational Resources Information Center
Crosby, Annette
This fifth grade teaching unit covers early conflicts between the American colonies and Britain, battles of the American Revolutionary War, and the Declaration of Independence. Knowledge goals address the pre-revolutionary acts enforced by the British, the concepts of conflict and independence, and the major events and significant people from the…
Independence of Internal Auditors.
ERIC Educational Resources Information Center
Montondon, Lucille; Meixner, Wilda F.
1993-01-01
A survey of 288 college and university auditors investigated patterns in their appointment, reporting, and supervisory practices as indicators of independence and objectivity. Results indicate a weakness in the positioning of internal auditing within institutions, possibly compromising auditor independence. Because the auditing function is…
Fostering Musical Independence
ERIC Educational Resources Information Center
Shieh, Eric; Allsup, Randall Everett
2016-01-01
Musical independence has always been an essential aim of musical instruction. But this objective can refer to everything from high levels of musical expertise to more student choice in the classroom. While most conceptualizations of musical independence emphasize the demonstration of knowledge and skills within particular music traditions, this…
Karbowski, Jan
2015-01-01
The structure and quantitative composition of the cerebral cortex are interrelated with its computational capacity. Empirical data analyzed here indicate a certain hierarchy in local cortical composition. Specifically, neural wire, i.e., axons and dendrites, each take about 1/3 of cortical space, spines and glia/astrocytes each occupy about (1/3)², and capillaries around (1/3)⁴. Moreover, data analysis across species reveals that these fractions are roughly brain-size independent, which suggests that they could be in some sense optimal and thus important for brain function. Is there any principle that sets them in this invariant way? This study first builds a model of a local circuit in which neural wire, spines, astrocytes, and capillaries are mutually coupled elements treated within a single mathematical framework. Next, various forms of the wire minimization rule (wire length, surface area, volume, or conduction delays) are analyzed, of which only minimization of wire volume provides realistic results that are very close to the empirical cortical fractions. As an alternative, a new principle called “spine economy maximization” is proposed and investigated, which is associated with maximization of the spine proportion in the cortex per spine size and yields equally good but more robust results. Additionally, a combination of the wire cost and spine economy notions is considered as a meta-principle, and it is found that this proposition gives only marginally better results than either pure wire volume minimization or pure spine economy maximization, but only if the spine economy component dominates. However, such a combined meta-principle yields much better results than constraints related solely to minimization of wire length, wire surface area, and conduction delays. Interestingly, the type of spine size distribution also plays a role, and better agreement with the data is achieved for distributions with long tails. In sum, these results suggest that for the
Maximization Paradox: Result of Believing in an Objective Best.
Luan, Mo; Li, Hong
2017-05-01
The results from four studies provide reliable evidence of how beliefs in an objective best influence the decision process and subjective feelings. A belief in an objective best serves as the fundamental mechanism connecting the concept of maximizing and the maximization paradox (i.e., expending great effort but feeling bad when making decisions, Study 1), and randomly chosen decision makers operate similarly to maximizers once they are manipulated to believe that the best is objective (Studies 2A, 2B, and 3). In addition, the effect of a belief in an objective best on the maximization paradox is moderated by the presence of a dominant option (Study 3). The findings of this research contribute to the maximization literature by demonstrating that believing in an objective best leads to the maximization paradox.
Energy Efficiency Maximization of Practical Wireless Communication Systems
NASA Astrophysics Data System (ADS)
Eraslan, Eren
Energy consumption of modern wireless communication systems is rapidly growing due to the ever-increasing data demand and the advanced solutions employed to address this demand, such as multiple-input multiple-output (MIMO) and orthogonal frequency division multiplexing (OFDM) techniques. These MIMO systems are power hungry; however, they are capable of changing transmission parameters such as the number of spatial streams, number of transmitter/receiver antennas, modulation, code rate, and transmit power. They can thus choose the best mode out of possibly thousands of modes in order to optimize an objective function. This problem is referred to as the link adaptation problem. In this work, we focus on link adaptation for energy efficiency maximization, defined as choosing the optimal transmission mode to maximize the number of successfully transmitted bits per unit energy consumed by the link. We model the energy consumption and throughput performance of a MIMO-OFDM link and develop a practical link adaptation protocol, which senses the channel conditions and changes its transmission mode in real time. It turns out that the brute-force search, which is usually assumed in previous works, is prohibitively complex, especially when there are large numbers of transmit power levels to choose from. We analyze the relationship between energy efficiency and transmit power, and prove that the energy efficiency of a link is a single-peaked quasiconcave function of transmit power. This leads us to develop a low-complexity algorithm that finds a near-optimal transmit power and takes this dimension out of the search space. We further prune the search space by analyzing the singular value decomposition of the channel and excluding the modes that use a higher number of spatial streams than the channel can support. These algorithms and our novel formulations provide simpler computations and limit the search space to a much smaller set; hence
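The quasiconcavity result is what licenses a low-complexity search: a single-peaked function of transmit power can be maximized by ternary search instead of brute force over all power levels. A minimal sketch, assuming a hypothetical energy-efficiency curve (not the paper's MIMO-OFDM model):

```python
import math

def ternary_search_max(f, lo, hi, tol=1e-6):
    """Peak search for a single-peaked (quasiconcave) function on [lo, hi]."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if f(m1) < f(m2):
            lo = m1          # the peak lies to the right of m1
        else:
            hi = m2          # the peak lies to the left of m2
    return 0.5 * (lo + hi)

# Hypothetical energy-efficiency curve: Shannon-style throughput log(1 + p)
# over total consumed power p + p_circuit (illustrative, not the paper's model).
p_circuit = 1.0
energy_efficiency = lambda p: math.log(1.0 + p) / (p + p_circuit)

p_star = ternary_search_max(energy_efficiency, 1e-3, 100.0)
# For this particular curve the optimum is at p = e - 1.
```

Each iteration discards a third of the interval, so the cost is logarithmic in the required precision rather than linear in the number of candidate power levels.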
Optimal deployment of resources for maximizing impact in spreading processes
Lokhov, Andrey Y.; Saad, David
2017-09-12
The effective use of limited resources for controlling spreading processes on networks is of prime significance in diverse contexts, ranging from the identification of “influential spreaders” for maximizing information dissemination and targeted interventions in regulatory networks, to the development of mitigation policies for infectious diseases and financial contagion in economic systems. Solutions for these optimization tasks that are based purely on topological arguments are not fully satisfactory; in realistic settings, the problem is often characterized by heterogeneous interactions and requires interventions in a dynamic fashion over a finite time window via a restricted set of controllable nodes. The optimal distribution of available resources hence results from an interplay between network topology and spreading dynamics. Here, we show how these problems can be addressed as particular instances of a universal analytical framework based on a scalable dynamic message-passing approach and demonstrate the efficacy of the method on a variety of real-world examples.
Expectation-Maximization Binary Clustering for Behavioural Annotation
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis. PMID:27002631
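The EMC core that EMbC builds on is a plain EM loop for a Gaussian mixture, each iteration alternating a responsibility computation with closed-form parameter updates. The sketch below is a one-dimensional, two-component illustration with made-up data; EMbC's binary-delimiter variant and its R-package interface are not reproduced:

```python
import math
import random

def em_gmm_1d(data, iters=200):
    """EM for a two-component 1-D Gaussian mixture (minimal EMC sketch)."""
    mu = [min(data), max(data)]          # crude but serviceable initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2.0 * var[k]))
                 / math.sqrt(2.0 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: closed-form updates for weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

# Synthetic "behavioural" data: two well-separated movement regimes.
random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(6.0, 1.0) for _ in range(200)])
weights, means, variances = em_gmm_1d(data)
```

The closed-form M-step is what keeps the per-iteration cost low, which is the computational property the abstract emphasizes.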
A Bayesian optimization approach for wind farm power maximization
NASA Astrophysics Data System (ADS)
Park, Jinkyoo; Law, Kincho H.
2015-03-01
The objective of this study is to develop a model-free optimization algorithm to improve total wind farm power production in a cooperative game framework. Conventionally, for a given wind condition, an individual wind turbine maximizes its own power production without taking into consideration the conditions of other wind turbines. Under this greedy control strategy, the wake formed by an upstream wind turbine, due to the reduced wind speed and the increased turbulence intensity inside the wake, lowers the power production of the downstream wind turbines. To increase overall wind farm power production, researchers have proposed cooperative wind turbine control approaches that coordinate actions to mitigate wake interference among the wind turbines and thus increase total wind farm power production. This study explores the use of a data-driven optimization approach to identify the optimum coordinated control actions in real time using a limited amount of data. Specifically, we propose the Bayesian Ascent (BA) method, which combines the strengths of Bayesian optimization and trust-region optimization algorithms. Using Gaussian Process regression, BA requires only a small number of data points to model the complex target system. Furthermore, due to the use of a trust-region constraint on the sampling procedure, BA tends to increase the target value and converge toward the optimum. Simulation studies using analytical functions show that the BA method can achieve an almost monotone increase in the target value with rapid convergence. BA is also implemented and tested in a laboratory setting to maximize total power using two scaled wind turbine models.
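The Gaussian Process regression step at the heart of such a method can be sketched as follows. The RBF kernel, length scale `ell`, toy observations, and upper-confidence-bound acquisition below are illustrative assumptions, not the paper's BA algorithm (whose trust-region sampling is not shown):

```python
import numpy as np

def gp_posterior(X, y, Xs, ell=0.2, noise=1e-6):
    """GP regression posterior mean/std with a unit-variance RBF kernel."""
    k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
    K = k(X, X) + noise * np.eye(len(X))     # training covariance + jitter
    Ks = k(X, Xs)                            # cross-covariances, shape (n, m)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Five hypothetical (control setting, farm power) observations on [0, 1].
X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
y = np.array([0.1, 0.6, 0.9, 0.7, 0.2])
Xs = np.linspace(0.0, 1.0, 101)
mean, std = gp_posterior(X, y, Xs)

# Choose the next sampling point by an upper-confidence-bound criterion.
next_x = Xs[np.argmax(mean + 2.0 * std)]
```

Because the posterior interpolates the few observed points and quantifies uncertainty everywhere else, only a handful of samples is needed before the acquisition rule starts steering toward the optimum.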
Evolution of correlated multiplexity through stability maximization
NASA Astrophysics Data System (ADS)
Dwivedi, Sanjiv K.; Jalan, Sarika
2017-02-01
Investigating the relation between various structural patterns found in real-world networks and the stability of underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer depicting predator-prey relationships and symmetric couplings in the other depicting mutualistic (or competitive) relationships, based on stability maximization through the largest eigenvalue of the corresponding adjacency matrices. We find that there is an emergence of correlated multiplexity between the mirror nodes as the evolution progresses. Importantly, evolved values of the correlated multiplexity exhibit a dependence on the interlayer coupling strength. Additionally, the interlayer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide an analytical understanding of these findings by considering starlike networks representing both layers. The framework discussed here is useful for understanding principles governing the stability as well as the importance of various patterns in the underlying networks of real-world systems ranging from the brain to ecology which consist of multiple types of interaction behavior.
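Evolution schemes of this kind repeatedly score a candidate adjacency matrix by its largest eigenvalue, and power iteration is the standard way to compute that score. A minimal sketch (the 2×2 matrix is a made-up example, not a multiplex network):

```python
def largest_eigenvalue(A, iters=200):
    """Power iteration for the dominant eigenvalue magnitude of a square matrix."""
    n = len(A)
    v = [1.0] * n
    lam = 1.0
    for _ in range(iters):
        # Multiply, then renormalize by the largest component magnitude.
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

# Symmetric 2x2 example with eigenvalues 3 and 1; the dominant one is 3.
lam_max = largest_eigenvalue([[2.0, 1.0], [1.0, 2.0]])
```

In a stability-maximizing evolution loop, a mutation to the network would be accepted or rejected by comparing `largest_eigenvalue` before and after the change.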
Network channel allocation and revenue maximization
NASA Astrophysics Data System (ADS)
Hamalainen, Timo; Joutsensalo, Jyrki
2002-09-01
This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. The model is suitable for different kinds of networks, such as 4G networks (fast wireless access to a wired network), to support connections of given duration that require a certain quality of service. We study different types of network traffic mixed on a same communication link. A single link is considered as a bottleneck, and the goal is to find customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking, but the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class), and delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
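One explicit formula obeying the four monotonicity conditions (i)-(iv) can be written directly. The functional form and the scale parameter `a` below are illustrative assumptions, not necessarily the formula analyzed in the paper:

```python
def arrival_rate(offered_rate, price, load, delay, a=1.0):
    """A hypothetical arrival-rate law: increasing in the offered data rate,
    and decreasing in price, network load, and delay."""
    return a * offered_rate / ((1.0 + price) * (1.0 + load) * (1.0 + delay))

# Baseline operating point (arbitrary units).
base = arrival_rate(1.0, 1.0, 0.5, 0.1)
```

Any separable form with these four monotonicities would serve the same analytical role; the multiplicative one above simply makes each dependence easy to read off.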
Postural dynamics in maximal isometric ramp efforts.
Bouisset, Simon; Le Bozec, Serge; Ribreau, Christian
2002-09-01
A global biomechanical model of transient push efforts is proposed, with the aim of examining in greater depth the postural adjustments associated with voluntary efforts. In this context, the push effort is considered a perturbation of balance, and the other reaction forces a counter-perturbation necessary for the task to be performed efficiently. The subjects were asked to exert maximal horizontal two-handed isometric pushes on a dynamometric bar, as rapidly as possible. They were seated on a custom-designed device which measured global and partitive dynamic quantities. The results showed that the horizontal reaction forces and the horizontal displacement of the centre of pressure increased quasi-proportionally with the perturbation. In addition, it was established that vertical reaction forces increased at seat level whereas they decreased at foot level, resulting in minor vertical acceleration and displacement of the centre of gravity. In contrast, the anteroposterior reaction forces increased at both foot and seat levels. Based on a detailed examination of the various terms of the model, it is concluded that transient muscular effort induces dynamics of the postural chain. These observations support the view that there is a postural counter-perturbation associated with motor activity. More generally, the model helped to specify the effect of postural dynamic phenomena. It makes it possible to stress the importance of adherence at the contact level between the subject and the seat in the course of transient efforts.
Maximizing Exosome Colloidal Stability Following Electroporation
Hood, Joshua L.; Scott, Michael J.; Wickline, Samuel A.
2014-01-01
Development of exosome based semi-synthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5 nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label free means to enrich exogenously modified exosomes and introduces the potential for MRI driven theranostic exosome investigations in vivo. PMID:24333249
Maximal respiratory pressure in healthy Japanese children.
Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide
2017-03-01
[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3-12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3-6 (Group I: 68 subjects), 7-9 (Group II: 86 subjects), and 10-12 (Group III: 64 subjects) years. [Results] The values for respiratory muscle pressures in children increased significantly with age in both sexes, and were higher in boys than in girls. Correlation coefficients between maximal respiratory pressure and age, height, and weight were significant for each sex, ranging from 0.279 to 0.471. [Conclusion] In this study, we present pediatric respiratory muscle pressure reference values for each age. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, suggesting that respiratory muscle pressures vary with ethnicity.
Predicting maximal grip strength using hand circumference.
Li, Ke; Hewson, David J; Duchêne, Jacques; Hogrel, Jean-Yves
2010-12-01
The objective of this study was to analyze the correlations between anthropometric data and maximal grip strength (MGS) in order to establish a simple model to predict "normal" MGS. Randomized bilateral measurement of MGS was performed on a homogeneous population of 100 subjects. MGS was measured according to a standardized protocol with three dynamometers (Jamar, Myogrip and Martin Vigorimeter) for both dominant and non-dominant sides. Several anthropometric data were also measured: height; weight; hand, wrist and forearm circumference; hand and palm length. Among these data, hand circumference had the strongest correlation with MGS for all three dynamometers and for both hands (0.789 and 0.782 for Jamar; 0.829 and 0.824 for Myogrip; 0.663 and 0.730 for Vigorimeter). In addition, the only anthropometric variable systematically selected by a stepwise multiple linear regression analysis was also hand circumference. Based on this parameter alone, a predictive regression model presented good results (r(2) = 0.624 for Jamar; r(2) = 0.683 for Myogrip and r(2) = 0.473 for Vigorimeter; all adjusted r(2)). Moreover a single equation was predictive of MGS for both men and women and for both non-dominant and dominant hands. "Normal" MGS can be predicted using hand circumference alone.
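The single-predictor model the study converged on is an ordinary least-squares fit of MGS on hand circumference. A sketch with made-up data (the paper's actual coefficients are not reproduced here):

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

# Hypothetical (hand circumference in cm, maximal grip strength in kg) pairs.
circ = [18.0, 19.5, 21.0, 22.5, 24.0]
mgs = [24.0, 30.0, 37.0, 44.0, 49.0]
b0, b1 = fit_line(circ, mgs)

predicted = b0 + b1 * 20.0   # predicted "normal" MGS for a 20 cm hand
```

A notable finding of the study is that one such equation served both sexes and both hands, which is why a single-predictor line is worth sketching at all.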
Maximizing NGL recovery by refrigeration optimization
Baldonedo H., A.H.
1999-07-01
PDVSA Petroleo y Gas, S.A. operates two plants within its Lake Maracaibo facilities that extract natural gas liquids (NGL), using a system that combines mechanical refrigeration with natural-gasoline absorption. Each of these plants processes 420 MMscfd at 535 psig and 95°F coming from the compression plants PCTJ-2 and PCTJ-3, respectively. About 40 MMscfd of additional rich gas comes from the high-pressure system. Under present conditions these plants produce on the order of 16,800 and 23,800 b/d of NGL, respectively, with a propane recovery of approximately 75%, limited by the capacity of the refrigeration system. To optimize the operation and design of the refrigeration system and to maximize NGL recovery, a conceptual study evaluated the following aspects of the process: capacity of the refrigeration system, refrigeration requirements, identification of limitations, and evaluation of system improvements. Based on the results, it was concluded that relocating some condensers, refurbishing the main refrigeration-system turbines, and using HIGH FLUX piping in the auxiliary refrigeration system of the evaporators would raise propane recovery to 85%, with an additional production of 25,000 b/d of NGL and 15 MMscfd of ethane-rich gas.
Maximal and sub-maximal functional lifting performance at different platform heights.
Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J
2015-01-01
Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R(2) = 0.76 ± 0.18, p < 0.05) and MAWL (R(2) = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height based on lifting capacity at any of the three other heights, with a standard error of < 4.5 kg and < 2.0 kg for MLC and MAWL, respectively.
Action Now for Older Americans: Toward Independent Living.
ERIC Educational Resources Information Center
Thorson, James A., Ed.
This collection of conference papers, given by representatives of State, Federal, and voluntary agencies and by university faculty, discusses information and planning strategies aimed at maximizing independent living for the elderly. Introductory and welcoming remarks by James A. Thorson, Virginia Smith, and Frank Groschelle are included, along with…
ERIC Educational Resources Information Center
McCook, Byron Alexander
2009-01-01
Pennsylvania public school districts are largely funded through basic education subsidy for providing educational services for resident students and non-resident students who are placed in residential programs within the school district boundaries. Non-resident placements occur through, but are not limited to, adjudication proceedings, foster home…
Maximizing protection from use of oral cholera vaccines in developing country settings
Desai, Sachin N; Cravioto, Alejandro; Sur, Dipika; Kanungo, Suman
2014-01-01
When oral vaccines are administered to children in lower- and middle-income countries, they do not induce the same immune responses as they do in developed countries. Although not completely understood, reasons for this finding include maternal antibody interference, mucosal pathology secondary to infection, malnutrition, enteropathy, and previous exposure to the organism (or related organisms). Young children experience a high burden of cholera infection, which can lead to severe acute dehydrating diarrhea and substantial mortality and morbidity. Oral cholera vaccines show variations in their duration of protection and efficacy between children and adults. Evaluating innate and memory immune response is necessary to understand V. cholerae immunity and to improve current cholera vaccine candidates, especially in young children. Further research on the benefits of supplementary interventions and delivery schedules may also improve immunization strategies. PMID:24861554
Maximal entanglement concentration for a set of (n+1)-qubit states
NASA Astrophysics Data System (ADS)
Banerjee, Anindita; Shukla, Chitra; Pathak, Anirban
2015-12-01
We propose two schemes for concentration of (n+1)-qubit entangled states that can be written in the form $(\alpha|\varphi_0\rangle|0\rangle + \beta|\varphi_1\rangle|1\rangle)_{n+1}$, where $|\varphi_0\rangle$ and $|\varphi_1\rangle$ are mutually orthogonal n-qubit states. The importance of this general form is that entangled states such as Bell, cat, GHZ, GHZ-like, $|\Omega\rangle$, $|Q_5\rangle$, 4-qubit cluster states and specific states from the nine SLOCC-nonequivalent families of 4-qubit entangled states can be expressed in this form. The proposed entanglement concentration protocol (ECP) is based on local operations and classical communication (LOCC). It is shown that the maximum success probability for the ECP using the quantum nondemolition (QND) technique is $2\beta^2$ for (n+1)-qubit states of the prescribed form. It is shown that the proposed schemes can be implemented optically; it is further noted that they can also be implemented using quantum dot and microcavity systems.
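As a quick numerical sanity check of the quoted bound, the success probability 2β² can be evaluated for a normalized choice of α and β; the values below are illustrative assumptions, not from the paper, and follow the usual convention |α| ≥ |β| so that the bound stays ≤ 1.

```python
import numpy as np

# Illustrative amplitudes for alpha|phi_0>|0> + beta|phi_1>|1>.
alpha, beta = np.sqrt(0.7), np.sqrt(0.3)
assert np.isclose(alpha**2 + beta**2, 1.0)   # state normalisation

p_success = 2 * beta**2                      # quoted QND-based maximum
print(f"P(success) = {p_success:.2f}")       # 0.60 for this choice
```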
NASA Astrophysics Data System (ADS)
Yun, Hyejeong; Jung, Yeonkuk; Lee, Kyung Haeng; Song, Hyun Pa; Kim, Keehyuk; Jo, Cheorun
2012-08-01
Irradiation is an excellent method for improving the safety and functional properties of eggs. However, the internal quality of eggs can deteriorate due to a rapid decrease in Haugh units. In this study, response surface methodology (RSM) was used to determine the conditions that maintain quality while maximizing the safety and functional properties of eggs treated with a combination of irradiation and chitosan coating. The independent variables, irradiation dose (0-2 kGy) and chitosan coating concentration (0-2%), were coded at five levels (-2, -1, 0, 1, 2), and 10 experimental runs were set on the basis of a central composite design. The dependent variables, significant at a confidence level of 5%, were Haugh units, foaming ability, foam stability, and number of Salmonella typhimurium. The predicted maximum values of Haugh units and foaming ability were 82.7 (at 0.0006 kGy and 1.03% chitosan) and 62.2 mm (at 1.99 kGy and 0.86%), respectively. S. typhimurium inoculated on the egg surface was not detected after treatment with 1.86 kGy and 0.48%. By superimposing the four RSM surfaces with respect to freshness (Haugh units), functional properties (foaming capacity and foam stability), and reduction of S. typhimurium, the predicted optimum ranges for irradiation dose and chitosan concentration were 0.35-0.65 kGy and 0.25-0.85%, respectively, with predicted optimum values of 0.45 kGy and 0.525%. This methodology can be used to predict egg quality and safety when different combination treatments are applied.
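The central-composite fitting step can be sketched as follows. The design points mirror a two-factor CCD with coded levels from -2 to 2, as in the study, but the response values and the resulting optimum are invented for illustration; only the fitting and stationary-point procedure reflects standard RSM.

```python
import numpy as np

# Fit a second-order model y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2
# + b12*x1*x2 over coded factor levels. Responses are made-up "Haugh units".
x1 = np.array([-1, -1, 1, 1, -2, 2, 0, 0, 0, 0])   # coded irradiation dose
x2 = np.array([-1, 1, -1, 1, 0, 0, -2, 2, 0, 0])   # coded chitosan conc.
y = np.array([70, 74, 72, 75, 65, 68, 69, 73, 82, 81.5])

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X.astype(float), y, rcond=None)

# Stationary point of the fitted quadratic (candidate optimum, coded units).
b1, b2, b11, b22, b12 = coef[1:]
A = np.array([[2 * b11, b12], [b12, 2 * b22]])
opt = np.linalg.solve(A, -np.array([b1, b2]))
print("stationary point (coded units):", np.round(opt, 2))
```

Superimposing several such fitted surfaces, one per response, is what narrows the joint optimum region quoted in the abstract.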
Metabolomics of aerobic metabolism in mice selected for increased maximal metabolic rate
Wone, Bernard; Donovan, Edward R.; Hayes, Jack P.
2014-01-01
Maximal aerobic metabolic rate (MMR) is an important physiological and ecological variable that sets an upper limit to sustained, vigorous activity. How the oxygen cascade from the external environment to the mitochondria may affect MMR has been the subject of much interest, but little is known about the metabolic profiles that underpin variation in MMR. We tested how seven generations of artificial selection for high mass-independent MMR affected metabolite profiles of two skeletal muscles (gastrocnemius and plantaris) and the liver. MMR was 12.3% higher in mice selected for high MMR than in controls. Basal metabolic rate was 3.5% higher in selected mice than in controls. Artificial selection did not lead to detectable changes in the metabolic profiles of plantaris muscle, but in the liver amino acids and tricarboxylic acid cycle (TCA cycle) metabolites were lower in high-MMR mice than in controls. In gastrocnemius, amino acids and TCA cycle metabolites were higher in high-MMR mice than in controls, indicating elevated amino acid and energy metabolism. Moreover, in gastrocnemius free fatty acids and triacylglycerol fatty acids were lower in high-MMR mice than in controls. Because selection for high MMR was associated with changes in the resting metabolic profile of both liver and gastrocnemius, the results suggest a possible mechanistic link between resting metabolism and MMR. In addition, it is well established that diet and exercise affect the composition of fatty acids in muscle. The differences that we found between control lines and lines selected for high MMR demonstrate that the composition of fatty acids in muscle is also affected by genetic factors. PMID:21982590
Skeletal muscle vasodilatation during maximal exercise in health and disease
Calbet, Jose A L; Lundby, Carsten
2012-01-01
Maximal exercise vasodilatation results from the balance between vasoconstricting and vasodilating signals combined with the vascular reactivity to these signals. During maximal exercise with a small muscle mass the skeletal muscle vascular bed is fully vasodilated. During maximal whole body exercise, however, vasodilatation is restrained by the sympathetic system. This is necessary to avoid hypotension since the maximal vascular conductance of the musculature exceeds the maximal pumping capacity of the heart. Endurance training and high-intensity intermittent knee extension training increase the capacity for maximal exercise vasodilatation by 20–30%, mainly due to an enhanced vasodilatory capacity, as maximal exercise perfusion pressure changes little with training. The increase in maximal exercise vascular conductance is to a large extent explained by skeletal muscle hypertrophy and vascular remodelling. The vasodilatory capacity during maximal exercise is reduced or blunted with ageing, as well as in chronic heart failure patients and chronically hypoxic humans; reduced vasodilatory responsiveness and increased sympathetic activity (and probably, altered sympatholysis) are potential mechanisms accounting for this effect. Pharmacological counteraction of the sympathetic restraint may result in lower perfusion pressure and reduced oxygen extraction by the exercising muscles. However, at the same time fast inhibition of the chemoreflex in maximally exercising humans may result in increased vasodilatation, further confirming a restraining role of the sympathetic nervous system on exercise-induced vasodilatation. This is likely to be critical for the maintenance of blood pressure in exercising patients with a limited heart pump capacity. PMID:23027820
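The restraint argument rests on simple arithmetic: the flow demanded by fully vasodilated muscle would exceed what the heart can pump. A back-of-envelope sketch with commonly cited round numbers; all three figures below are illustrative assumptions, not values from this abstract.

```python
# Peak perfusion of ~2.5 L/kg/min comes from small-muscle-mass experiments;
# the active muscle mass and maximal cardiac output are rough round figures.
peak_perfusion = 2.5          # L blood per kg muscle per min
active_muscle_mass = 15.0     # kg active muscle in whole-body exercise
max_cardiac_output = 25.0     # L/min, trained adult

demand = peak_perfusion * active_muscle_mass   # unrestrained flow demand
print(f"demand = {demand:.1f} L/min vs pump capacity = {max_cardiac_output} L/min")
assert demand > max_cardiac_output  # hence sympathetic restraint is required
```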
Scheinker, Alexander; Baily, Scott; Young, Daniel; Kolski, Jeffrey S.; Prokop, Mark
2014-08-01
In this work, an implementation of a recently developed model-independent adaptive control scheme for tuning uncertain and time-varying systems is demonstrated on the Los Alamos linear particle accelerator. The main benefits of the algorithm are its simplicity, its ability to handle an arbitrary number of components without increased complexity, and its extreme robustness to measurement noise, a property which is both analytically proven and demonstrated in the experiments performed. We report on the application of this algorithm to the simultaneous tuning of two buncher radio frequency (RF) cavities in order to maximize beam acceptance into the accelerating electromagnetic field cavities of the machine, with the tuning based only on a noisy measurement of the surviving beam current downstream from the two bunching cavities. The algorithm automatically responds to arbitrary phase shifts of the cavity phases, re-tuning the cavity settings and maximizing beam acceptance. Because it is model independent, it can be utilized for continuous adaptation to time variation of a large system, such as that due to thermal drift or damage to components, in which case the remaining, functional components would be automatically re-tuned to compensate for the failing ones. We start by discussing the general model-independent adaptive scheme and how it may be digitally applied to a large class of multi-parameter uncertain systems, and then present our experimental results.
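The idea of tuning two settings from a single noisy scalar diagnostic can be sketched with a generic simultaneous-perturbation loop. This is a stand-in for, not a reproduction of, the authors' adaptive algorithm; the "machine" response, optimal settings, noise level, and gains are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)
ideal = np.array([0.3, -0.5])             # unknown optimal phase settings

def beam_current(p):
    # Hypothetical noisy diagnostic: peaks at the ideal settings.
    return 1.0 - np.sum((p - ideal) ** 2) + 0.005 * rng.standard_normal()

p = np.array([1.0, 1.0])                  # detuned starting point
c, gain = 0.1, 0.1                        # probe size, update gain

for _ in range(300):
    delta = rng.choice([-1.0, 1.0], size=2)           # random probe direction
    g = (beam_current(p + c * delta)
         - beam_current(p - c * delta)) / (2 * c) * delta
    p = p + gain * g                                  # climb the noisy estimate

print("recovered settings:", np.round(p, 2))
```

Like the scheme described above, the loop needs no model of the machine: it only probes the measured objective and drifts toward settings that increase it, so it would also re-track a slowly moving optimum.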
Heggelund, Jørn; Fimland, Marius S; Helgerud, Jan; Hoff, Jan
2013-06-01
This study compared maximal strength training (MST) with an equal training volume (kg × sets × repetitions) of conventional strength training (CON), primarily with regard to work economy and secondarily to one repetition maximum (1RM) and rate of force development (RFD) of single-leg knee extension. In an intra-individual design, one leg was randomized to knee-extension MST (4 or 5RM) and the other leg to CON (3 × 10RM), three times per week for 8 weeks. MST was performed with maximal concentric mobilization of force, while CON was performed at moderate velocity. Eight untrained or moderately trained men (26 ± 1 years) completed the study. The improvement in gross work economy was -0.10 ± 0.08 L min(-1) larger after MST (P = 0.011, between groups), i.e., the oxygen cost of work fell more after MST. From pre- to post-test, MST and CON improved net work economy by 31% (P < 0.001) and 18% (P = 0.01), respectively. Compared with CON, the improvements in 1RM and dynamic RFD were 13.7 ± 8.4 kg (P = 0.002) and 587 ± 679 N s(-1) (P = 0.044) larger after MST, whereas the difference in isometric RFD, 3,028 ± 3,674 N s(-1), was of borderline significance (P = 0.053). From pre- to post-test, MST improved 1RM and isometric RFD by 50% (P < 0.001) and 155% (P < 0.001), respectively, whereas CON improved them by 35% (P < 0.001) and 83% (P = 0.028), respectively. Anthropometric measures of quadriceps femoris muscle mass and peak oxygen uptake did not change. In conclusion, 8 weeks of MST was more effective than CON for improving work economy, 1RM and RFD in untrained and moderately trained men. The advantageous effect of MST on work economy could be due to larger improvements in 1RM and RFD.
Maximizing Experiential Learning for Student Success
ERIC Educational Resources Information Center
Coker, Jeffrey Scott; Porter, Desiree Jasmine
2015-01-01
Several years ago, Elon University set out to better understand experiential learning on campus. At the time, there was a pragmatic need to collect data that would inform revisions to the core curriculum, including an experiential-learning requirement (ELR) that had been in place since 1994. The question was whether it made sense to raise the…
Matching, Demand, Maximization, and Consumer Choice
ERIC Educational Resources Information Center
Wells, Victoria K.; Foxall, Gordon R.
2013-01-01
The use of behavioral economics and behavioral psychology in consumer choice has been limited. The current study extends the study of consumer behavior analysis, a synthesis between behavioral psychology, economics, and marketing, to a larger data set. This article presents the current work and results from the early analysis of the data. We…
Developing Independence in Decoding.
ERIC Educational Resources Information Center
Spiegel, Dixie Lee
1985-01-01
Proposes three guidelines to assist teachers in helping students become independent decoders: (1) teach the flexible use of a repertoire of strategies, (2) don't overnurture, and (3) provide a strong vocabulary development program. (FL)
NASA Technical Reports Server (NTRS)
1987-01-01
The work done on the Media Independent Interface (MII) Interface Control Document (ICD) program is described, and recommendations based on it are made. Explanations and rationale for the content of the ICD itself are presented.
Maximal Torque and Muscle Strength is Affected by Seat Distance from the Steering Wheel when Driving
Yoo, Kyung-Tae; An, Ho-Jung; Lee, Sun-Kyung; Choi, Jung-Hyun
2013-01-01
[Purpose] This research analyzed how seat distance and gender affect maximal torque and muscle strength when driving, to provide baseline data for the optimal driving posture. [Subjects and Methods] The subjects were 27 college students in their 20's, 15 males and 12 females. After arm length had been measured, the subjects sat in front of a steering wheel with the distance between the steering wheel and the seat set, in turn, at 50, 70, and 90% of their arm length, and the maximal torque and muscle strength were measured. [Results] Both maximal torque and muscle strength were greater in male subjects than in female subjects, whether they turned the steering wheel clockwise or counterclockwise, and the difference was statistically significant. Maximal torque was greatest when the seat distance was 50% of arm length, whether turning the steering wheel clockwise or counterclockwise, with statistically significant differences in maximal torque between the 50% seat distance and the 70 and 90% distances. Muscle strength, in contrast, was greatest at a seat distance of 70% of arm length. [Conclusion] We conclude that greater torque can be obtained when the steering wheel is nearer the seat, while greater muscle strength can be obtained when the seat distance from the steering wheel is 70% of the arm length. PMID:24259937
The rank-size scaling law and entropy-maximizing principle
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2012-02-01
The rank-size regularity known as Zipf's law is one of the scaling laws frequently observed in the natural living world and in social institutions. Many scientists have tried to derive the rank-size scaling relation through entropy-maximizing methods, but they have not been entirely successful. By introducing a pivotal constraint condition, I present here a set of new derivations based on the self-similar hierarchy of cities. First, I derive a pair of exponential laws by postulating local entropy maximizing. From these two exponential laws follows a general hierarchical scaling law, which implies the general form of Zipf's law. Second, I derive a special hierarchical scaling law with the exponent equal to 1 by postulating global entropy maximizing, and this implies the pure form of Zipf's law. The rank-size scaling law thus proves to be a special case of the hierarchical scaling law, and the derivation suggests a certain scaling range with the first or the last data point as an outlier. The entropy maximization of social systems differs from the notion of entropy increase in thermodynamics. For urban systems, entropy maximizing suggests the greatest equilibrium between equity for parts/individuals and efficiency of the whole.
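The pure form of Zipf's law (scaling exponent 1) is easy to illustrate numerically: the r-th largest item is about 1/r the size of the largest. The sketch below builds an exact rank-size sequence and recovers the exponent from a log-log fit; the values are synthetic, not city data.

```python
import numpy as np

# Pure Zipf sequence: size(rank r) = P1 / r, i.e. exponent q = 1.
P1 = 1_000_000.0
ranks = np.arange(1, 101)
sizes = P1 / ranks

# Recover the exponent as minus the slope of log(size) vs log(rank).
slope, intercept = np.polyfit(np.log(ranks), np.log(sizes), 1)
print(f"fitted exponent q = {-slope:.3f}")   # 1.000 for this exact sequence
```

With empirical data the fit is done over a restricted scaling range, consistent with the abstract's remark that the first or last data point may be an outlier.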
Maximal perfusion of skeletal muscle in man.
Andersen, P; Saltin, B
1985-01-01
Five subjects exercised with the knee extensor of one limb at work loads ranging from 10 to 60 W. Measurements of pulmonary oxygen uptake, heart rate, leg blood flow, blood pressure and femoral arterial-venous differences for oxygen and lactate were made between 5 and 10 min of the exercise. Flow in the femoral vein was measured using constant infusion of saline near 0 degrees C. Since a cuff was inflated just below the knee during the measurements and because the hamstrings were inactive, the measured flow represented primarily the perfusion of the knee extensors. Blood flow increased linearly with work load right up to an average value of 5.7 l min-1. Mean arterial pressure was unchanged up to a work load of 30 W, but increased thereafter from 100 to 130 mmHg. The femoral arterial-venous oxygen difference at maximum work averaged 14.6% (v/v), resulting in an oxygen uptake of 0.80 l min-1. With a mean estimated weight of the knee extensors of 2.30 kg the perfusion of maximally exercising skeletal muscle of man is thus in the order of 2.5 l kg-1 min-1, and the oxygen uptake 0.35 l kg-1 min-1. Limitations in the methods used previously to determine flow and/or the characteristics of the exercise model used may explain why earlier studies in man have failed to demonstrate the high perfusion of muscle reported here. It is concluded that muscle blood flow is closely related to the oxygen demand of the exercising muscles. The hyperaemia at low work intensities is due to vasodilatation, and an elevated mean arterial blood pressure only contributes to the linear increase in flow at high work rates. The magnitude of perfusion observed during intense exercise indicates that the vascular bed of skeletal muscle is not a limiting factor for oxygen transport. PMID:4057091
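The headline numbers follow from Fick-principle arithmetic: muscle oxygen uptake equals blood flow times the arterio-venous O2 difference, and the per-kilogram figures follow from the estimated muscle mass. A short check using the abstract's own inputs; the small gap between 5.7 × 0.146 ≈ 0.83 and the quoted 0.80 L min-1 reflects rounding in the abstract.

```python
flow = 5.7            # L blood / min, peak knee-extensor blood flow
avdiff = 0.146        # arterio-venous O2 difference, 14.6% (v/v)
muscle_mass = 2.30    # kg, estimated knee-extensor mass

vo2 = flow * avdiff                    # Fick: ~0.83 L O2/min (quoted as 0.80)
perfusion_per_kg = flow / muscle_mass  # ~2.5 L/kg/min, as quoted
vo2_per_kg = 0.80 / muscle_mass        # ~0.35 L/kg/min, as quoted
print(round(vo2, 2), round(perfusion_per_kg, 2), round(vo2_per_kg, 2))
```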
Expectation maximization applied to GMTI convoy tracking
NASA Astrophysics Data System (ADS)
Koch, Wolfgang
2002-08-01
Collectively moving ground targets are typical of a military ground situation and have to be treated as separate aggregated entities. For a long-range ground surveillance application with airborne GMTI radar, we address in particular the task of track maintenance for ground-moving convoys consisting of a small number of individual vehicles. In the proposed approach the identity of the individual vehicles within the convoy is no longer stressed. Their kinematical state vectors are rather treated as internal degrees of freedom characterizing the convoy, which is considered as a collective unit. In this context, the Expectation Maximization (EM) technique, originally developed for incomplete-data problems in statistical inference and first applied to tracking applications by Streit et al., seems to be a promising approach. We suggest embedding the EM algorithm into a more traditional Bayesian tracking framework for dealing with false or unwanted sensor returns. The proposed distinction between external and internal data-association conflicts (i.e., those among the convoy vehicles) should also enable the application of sequential track-extraction techniques introduced by van Keuk for aircraft formations, providing estimates of the number of individual convoy vehicles involved. Even with sophisticated signal processing methods (STAP: Space-Time Adaptive Processing), ground-moving vehicles can well be masked by the sensor-specific clutter notch (Doppler blinding). This physical phenomenon results in interfering fading effects, which can last over a longer series of sensor updates and therefore will seriously affect track quality unless properly handled. Moreover, for ground-moving convoys the phenomenon of Doppler blindness often superposes the effects induced by the finite resolution capability of the sensor. In many practical cases a separate modeling of resolution phenomena for convoy targets can therefore be omitted, provided the GMTI detection model is used
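The incomplete-data flavor of the approach can be illustrated with textbook Gaussian-mixture EM in one dimension, standing in for the convoy formulation: detections are the observed data, and which "vehicle" produced each detection is the missing datum. Everything below (positions, noise level, counts) is synthetic and much simpler than the paper's Bayesian embedding.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.array([0.0, 3.0])                       # true vehicle positions
data = np.concatenate([truth[0] + rng.normal(0, 0.5, 200),
                       truth[1] + rng.normal(0, 0.5, 200)])

mu = np.array([-1.0, 5.0])                         # poor initial guess
sigma = 0.5                                        # known measurement noise
for _ in range(50):
    # E-step: responsibility of each component for each detection.
    d2 = (data[:, None] - mu[None, :]) ** 2
    w = np.exp(-d2 / (2 * sigma**2))
    w /= w.sum(axis=1, keepdims=True)
    # M-step: re-estimate positions as responsibility-weighted means.
    mu = (w * data[:, None]).sum(axis=0) / w.sum(axis=0)

print("estimated positions:", np.round(mu, 1))
```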
Rare flavor processes in Maximally Natural Supersymmetry
NASA Astrophysics Data System (ADS)
García, Isabel García; March-Russell, John
2015-01-01
We study CP-conserving rare flavor-violating processes in the recently proposed theory of Maximally Natural Supersymmetry (MNSUSY). MNSUSY is an unusual supersymmetric (SUSY) extension of the Standard Model (SM) which, remarkably, is untuned at present LHC limits. It employs Scherk-Schwarz breaking of SUSY by boundary conditions upon compactifying an underlying 5-dimensional (5D) theory down to 4D, and is not well described by softly broken SUSY, with phenomenology much different from the Minimal Supersymmetric Standard Model (MSSM) and its variants. The usual CP-conserving SUSY-flavor problem is automatically solved in MNSUSY due to a residual almost-exact U(1)_R symmetry, naturally heavy and highly degenerate 1st- and 2nd-generation sfermions, and heavy gauginos and Higgsinos. Depending on the exact implementation of MNSUSY there exist important new sources of flavor violation involving gauge boson Kaluza-Klein (KK) excitations. The spatial localization properties of the matter multiplets, in particular the brane localization of the 3rd-generation states, imply that KK-parity is broken and tree-level contributions to flavor-changing neutral currents are present in general. Nevertheless, we show that simple variants of the basic MNSUSY model are safe from present flavor constraints arising from kaon and B-meson oscillations, the rare decays B_{s,d} → μ⁺μ⁻ and μ → ēee, and μ-e conversion in nuclei. We also briefly discuss some special features of radiative decays such as μ → eγ. Future experiments, especially those concerned with lepton flavor violation, should see deviations from SM predictions unless one of the MNSUSY variants with enhanced flavor symmetries is realized.
Is energy expenditure taken into account in human sub-maximal jumping?--A simulation study.
Vanrenterghem, Jos; Bobbert, Maarten F; Casius, L J Richard; De Clercq, Dirk
2008-02-01
This paper presents a simulation study that was conducted to investigate whether the stereotyped motion pattern observed in human sub-maximal jumping can be interpreted from the perspective of energy expenditure. Human sub-maximal vertical countermovement jumps were compared to jumps simulated with a forward dynamic musculo-skeletal model. This model consisted of four interconnected rigid segments, actuated by six Hill-type muscle actuators. The only independent input of the model was the stimulation of muscles as a function of time. This input was optimized using an objective function, in which targeting a specific sub-maximal height value was combined with minimizing the amount of muscle work produced. The characteristic changes in motion pattern observed in humans jumping to different target heights were reproduced by the model. As the target height was lowered, two major changes occurred in the motion pattern. First, the countermovement amplitude was reduced; this helped to save energy because of reduced dissipation and regeneration of energy in the contractile elements. Second, the contribution of rotation of the heavy proximal segments of the lower limbs to the vertical velocity of the centre of gravity at take-off was less; this helped to save energy because of reduced ineffective rotational energies at take-off. The simulations also revealed that, with the observed movement adaptations, muscle work was reduced through improved relative use of the muscle's elastic properties in sub-maximal jumping. According to the results of the simulations, the stereotyped motion pattern observed in sub-maximal jumping is consistent with the idea that in sub-maximal jumping, subjects are trying to achieve the targeted jump height with minimal energy expenditure.
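The energetic side of the argument can be grounded with elementary mechanics: the kinetic energy the muscles must supply at take-off grows linearly with target height, since v = √(2gh), so lower targets permit shallower countermovements and less muscle work. The body mass below is an illustrative assumption, not a parameter of the authors' musculo-skeletal model.

```python
import math

g, mass = 9.81, 75.0                       # m/s^2; kg (assumed body mass)

def takeoff_energy(jump_height):
    v = math.sqrt(2 * g * jump_height)     # take-off velocity for that height
    return 0.5 * mass * v**2               # kinetic energy at take-off (J)

for h in (0.20, 0.30, 0.40):               # sub-maximal to near-maximal (m)
    print(f"target {h:.2f} m -> {takeoff_energy(h):.0f} J at take-off")
```

This is only the vertical kinetic-energy floor; the simulations additionally account for dissipation in the countermovement and for ineffective rotational energy of the segments, which is where the savings described above arise.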
Bois, John P; Geske, Jeffrey B; Foley, Thomas A; Ommen, Steve R; Pellikka, Patricia A
2017-02-15
Left ventricular (LV) wall thickness is a prognostic marker in hypertrophic cardiomyopathy (HC). LV wall thickness ≥30 mm (massive hypertrophy) is independently associated with sudden cardiac death, and its presence is used to guide decision making for cardiac defibrillator implantation. We sought to determine whether measurements of maximal LV wall thickness differ between cardiac magnetic resonance imaging (MRI) and transthoracic echocardiography (TTE). We studied consecutive patients who had HC without previous septal ablation or myectomy and underwent both cardiac MRI and TTE at a single tertiary referral center. Reported maximal LV wall thickness was compared between the imaging techniques, and patients for whom at least 1 technique reported massive hypertrophy underwent subset analysis. In total, 618 patients were evaluated from January 1, 2003, to December 21, 2012 (mean [SD] age, 53 [15] years; 381 men [62%]). In 75 patients (12%), reported maximal LV wall thickness was identical between MRI and TTE. The median difference in reported maximal LV wall thickness between the techniques was 3 mm (maximum difference, 17 mm). Of the 63 patients with ≥1 technique measuring maximal LV wall thickness ≥30 mm, 44 patients (70%) had discrepant classification regarding massive hypertrophy: MRI identified massive hypertrophy in 52 patients (83%) and TTE in 30 patients (48%). Although guidelines recommend MRI or TTE to assess cardiac anatomy in HC, this study shows discrepancy between the techniques in reported maximal LV wall thickness. In conclusion, because this measure clinically affects prognosis and therapeutic decision making, efforts to resolve these discrepancies are critical.
Tillin, Neale A; Folland, Jonathan P
2014-02-01
To compare the effects of short-term maximal (MST) vs. explosive (EST) strength training on maximal and explosive force production, and assess the neural adaptations underpinning any training-specific functional changes. Male participants completed either MST (n = 9) or EST (n = 10) for 4 weeks. In training participants were instructed to: contract as fast and hard as possible for ~1 s (EST); or contract progressively up to 75% maximal voluntary force (MVF) and hold for 3 s (MST). Pre- and post-training measurements included recording MVF during maximal voluntary contractions and explosive force at 50-ms intervals from force onset during explosive contractions. Neuromuscular activation was assessed by recording EMG RMS amplitude, normalised to a maximal M-wave and averaged across the three superficial heads of the quadriceps, at MVF and between 0-50, 0-100 and 0-150 ms during the explosive contractions. Improvements in MVF were significantly greater (P < 0.001) following MST (+21 ± 12%) than EST (+11 ± 7%), which appeared due to a twofold greater increase in EMG at MVF following MST. In contrast, early phase explosive force (at 100 ms) increased following EST (+16 ± 14%), but not MST, resulting in a time × group interaction effect (P = 0.03), which appeared due to a greater increase in EMG during the early phase (first 50 ms) of explosive contractions following EST (P = 0.052). These results provide evidence for distinct neuromuscular adaptations after MST vs. EST that are specific to the training stimulus, and demonstrate the independent adaptability of maximal and explosive strength.
Maximize, minimize or target - optimization for a fitted response from a designed experiment
Anderson-Cook, Christine Michaela; Cao, Yongtao; Lu, Lu
2016-04-01
One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
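The three goals named above (maximize, minimize, hit a target) can be sketched against a fitted model. The quadratic below is hypothetical, not from any real experiment, and a dense grid over the coded design space stands in for a proper optimizer.

```python
import numpy as np

def y_hat(x):
    # Hypothetical fitted response model, one coded factor x in [-2, 2].
    return 60 + 8 * x - 3 * x**2

grid = np.linspace(-2, 2, 4001)
vals = y_hat(grid)

x_max = grid[np.argmax(vals)]               # goal 1: maximize the response
x_min = grid[np.argmin(vals)]               # goal 2: minimize the response
x_tgt = grid[np.argmin(np.abs(vals - 58))]  # goal 3: hit target y = 58
print(round(x_max, 2), round(x_min, 2), round(x_tgt, 2))
```

Note the minimizer lands on the design-space boundary (x = -2), a common outcome that matters when the suggested settings are carried into production.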
An Online Algorithm for Maximizing Submodular Functions
2007-12-20
0, achieving an approximation ratio of 1 − 1/e + ε for MAX k-COVERAGE is NP-hard [10]. Recently, Feige, Lovász, and Tetali [11] introduced MIN... Journal of the ACM, 45(4):634–652, 1998. [11] Uriel Feige, László Lovász, and Prasad Tetali. Approximating min sum set cover. Algorithmica, 40(4
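The greedy algorithm underlying such (1 − 1/e)-style guarantees for monotone submodular maximization can be sketched on MAX k-COVERAGE; the toy instance below is hypothetical, not from the paper:

```python
def greedy_max_coverage(sets, k):
    """Greedy (1 - 1/e)-approximation for MAX k-COVERAGE:
    repeatedly pick the set covering the most still-uncovered elements."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))
        if not sets[best] - covered:
            break  # no remaining set adds new elements
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Toy instance (hypothetical data): universe {1..6}, pick k = 2 sets
sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked, cov = greedy_max_coverage(sets, 2)
```

Coverage is monotone and submodular (adding a set helps less the more is already covered), which is exactly the structure the greedy guarantee relies on.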
Maximality-Based Structural Operational Semantics for Petri Nets
NASA Astrophysics Data System (ADS)
Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda
2009-03-01
The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which permits expressing true concurrency in a natural way, without splitting actions into their start and end events. This is achieved by giving a maximality-based structural operational semantics for the Place/Transition Petri net model in terms of maximality-based labeled transition system structures.
Crisafulli, Antonio; Tangianu, Flavio; Tocco, Filippo; Concu, Alberto; Mameli, Ombretta; Mulliri, Gabriele; Caria, Marcello A
2011-08-01
Brief episodes of nonlethal ischemia, commonly known as "ischemic preconditioning" (IP), are protective against cell injury induced by infarction. Moreover, muscle IP has been found capable of improving exercise performance. The aim of the study was to compare standard exercise performances carried out in normal conditions with those carried out following IP, achieved by brief muscle ischemia at rest (RIP) and after exercise (EIP). Seventeen physically active, healthy male subjects performed three incremental, randomly assigned maximal exercise tests on a cycle ergometer up to exhaustion. One was the reference (REF) test, whereas the others were performed after the RIP and EIP sessions. Total exercise time (TET), total work (TW), maximal power output (W(max)), oxygen uptake (VO(2max)), and pulmonary ventilation (VE(max)) were assessed. Furthermore, impedance cardiography was used to measure maximal heart rate (HR(max)), stroke volume (SV(max)), and cardiac output (CO(max)). A subgroup of volunteers (n = 10) performed all-out tests to assess their anaerobic capacity. We found that both RIP and EIP protocols increased TET, TW, W(max), VE(max), and HR(max) in a similar fashion with respect to the REF test. In particular, W(max) increased by ∼4% in both preconditioning procedures. However, preconditioning sessions failed to increase traditionally measured variables such as VO(2max), SV(max), CO(max), and anaerobic capacity. It was concluded that muscle IP improves performance without any difference between RIP and EIP procedures. The mechanism of this effect could be related to changes in fatigue perception.
Buscemi, Silvio; Canino, Baldassare; Batsis, John A; Buscemi, Chiara; Calandrino, Vincenzo; Mattina, Alessandro; Arnone, Mariangela; Caimi, Gregorio; Cerasola, Giovanni; Verga, Salvatore
2013-04-01
Aerobic capacity, as indicated by maximal oxygen uptake (VO2 max), has an important role in counteracting traditional cardiovascular risk factors and preventing cardiovascular morbidity and mortality. It is known that endothelial function, measured as flow-mediated dilation (FMD) of the brachial artery, is closely linked to atherogenesis and cardiovascular risk. However, the relationship between VO2 max and FMD has not been fully investigated, especially in healthy non-obese subjects. This preliminary study cross-sectionally investigated the relationship between VO2 max and FMD in 22 non-obese, healthy, sedentary male subjects. Dividing the cohort into two subgroups of 11 subjects each according to the median value of VO2 max, FMD was significantly lower in the subgroup with lower VO2 max (mean ± sem: 7.1 ± 0.7 vs. 9.5 ± 0.8%; P = 0.035). Absolute VO2 max (mL min(-1)) was significantly and independently correlated with body fat mass (r = -0.50; P = 0.018) and with FMD (r = 0.44; P = 0.039). This preliminary study suggests that maximal oxygen uptake is independently correlated with endothelial function in healthy non-obese adults. These results are also consistent with the possibility that improving maximal oxygen uptake may have a favorable effect on endothelial function, and vice versa.
Independent technical review, handbook
Not Available
1994-02-01
Purpose: Provide an independent engineering review of the major projects being funded by the Department of Energy, Office of Environmental Restoration and Waste Management. The independent engineering review will address whether the engineering practice is sufficiently developed to a point where a major project can be executed without significant technical problems. The independent review will focus on questions related to: (1) adequacy of development of the technical base of understanding; (2) status of development and availability of technology among the various alternatives; (3) status and availability of the industrial infrastructure to support project design, equipment fabrication, facility construction, and process and program/project operation; (4) adequacy of the design effort to provide a sound foundation to support execution of the project; (5) ability of the organization to fully integrate the system, and to direct, manage, and control the execution of a complex major project.
Homebirth and independent midwifery.
Harris, G
2000-07-01
Why do women choose to give birth at home, and midwives to work independently, in a culture that does little to support these options? This article looks at the reasons childbearing women and midwives make these choices and the barriers to achieving them. The safety of the homebirth option is supported with reference to analyses of mortality and morbidity. Homebirth practices and levels of success are compared in Australia and New Zealand (NZ), in particular, and in The Netherlands, England and America. The success and popularity of homebirths are analysed in terms of socio-economic status. The current situation and challenges of independent midwifery in Darwin are described.
D2-brane Chern-Simons theories: F -maximization = a-maximization
NASA Astrophysics Data System (ADS)
Fluder, Martin; Sparks, James
2016-01-01
We study a system of N D2-branes probing a generic Calabi-Yau three-fold singularity in the presence of a non-zero quantized Romans mass n. We argue that the low-energy effective N=2 Chern-Simons quiver gauge theory flows to a superconformal fixed point in the IR, and construct the dual AdS4 solution in massive IIA supergravity. We compute the free energy F of the gauge theory on S^3 using localization. In the large N limit we find F = c(nN)^{1/3} a^{2/3}, where c is a universal constant and a is the a-function of the "parent" four-dimensional N=1 theory on N D3-branes probing the same Calabi-Yau singularity. It follows that maximizing F over the space of admissible R-symmetries is equivalent to maximizing a for this class of theories. Moreover, we show that the gauge theory result precisely matches the holographic free energy of the supergravity solution, and provide a similar matching of the VEV of a BPS Wilson loop operator.
The generalized scheme-independent Crewther relation in QCD
Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; ...
2017-05-10
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively QCD-calculable process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton–nucleon scattering times the Adler function, defined from the cross section for electron–positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (Dns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (CBjp) at leading twist. A scheme-dependent ΔCSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both Dns and the inverse coefficient C_{Bjp}^{-1} have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, $\hat{\alpha}_d(Q) = \sum_{i\ge 1} \hat{\alpha}^i_{g_1}(Q_i)$, at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible.
The generalized scheme-independent Crewther relation in QCD
NASA Astrophysics Data System (ADS)
Shen, Jian-Ming; Wu, Xing-Gang; Ma, Yang; Brodsky, Stanley J.
2017-07-01
The Principle of Maximal Conformality (PMC) provides a systematic way to set the renormalization scales order-by-order for any perturbatively QCD-calculable process. The resulting predictions are independent of the choice of renormalization scheme, a requirement of renormalization group invariance. The Crewther relation, which was originally derived as a consequence of conformally invariant field theory, provides a remarkable connection between two observables when the β function vanishes: one can show that the product of the Bjorken sum rule for spin-dependent deep inelastic lepton-nucleon scattering times the Adler function, defined from the cross section for electron-positron annihilation into hadrons, has no pQCD radiative corrections. The "Generalized Crewther Relation" relates these two observables for physical QCD with nonzero β function; specifically, it connects the non-singlet Adler function (Dns) to the Bjorken sum rule coefficient for polarized deep-inelastic electron scattering (CBjp) at leading twist. A scheme-dependent ΔCSB term appears in the analysis in order to compensate for the conformal symmetry breaking (CSB) terms from perturbative QCD. In conventional analyses, this normally leads to unphysical dependence on both the choice of the renormalization scheme and the choice of the initial scale at any finite order. However, by applying PMC scale-setting, we can fix the scales of the QCD coupling unambiguously at every order of pQCD. The result is that both Dns and the inverse coefficient C_{Bjp}^{-1} have identical pQCD coefficients, which also exactly match the coefficients of the corresponding conformal theory. Thus one obtains a new generalized Crewther relation for QCD which connects two effective charges, $\hat{\alpha}_d(Q) = \sum_{i\ge 1} \hat{\alpha}^i_{g_1}(Q_i)$, at their respective physical scales. This identity is independent of the choice of the renormalization scheme at any finite order, and the dependence on the choice of the initial scale is negligible. Similar
Caring about Independent Lives
ERIC Educational Resources Information Center
Christensen, Karen
2010-01-01
With the rhetoric of independence, new cash for care systems were introduced in many developed welfare states at the end of the 20th century. These systems allow local authorities to pay people who are eligible for community care services directly, to enable them to employ their own careworkers. Despite the obvious importance of the careworker's…
Postcard from Independence, Mo.
ERIC Educational Resources Information Center
Archer, Jeff
2004-01-01
This article reports results showing that the Independence, Missouri, school district failed to meet almost every one of its improvement goals under the No Child Left Behind Act. The state accreditation system stresses improvement over past scores, while the federal law demands specified amounts of annual progress toward the ultimate goal of 100…
ERIC Educational Resources Information Center
Stewart, David
Maintaining the status quo, as well as the attitude toward cultural funding and development that it imposes on video, is detrimental to the formation of a thriving video network, and also out of step with the present social and political situation in Britain. Independent video has some quite specific advantages as a medium for cultural production…
ERIC Educational Resources Information Center
Tipping, Joyce
1978-01-01
Designed to help handicapped persons who have been living a sheltered existence develop independent living skills, this course is divided into two parts. The first part consists of a five-day apartment live-in experience, and the second concentrates on developing the learners' awareness of community resources and consumer skills. (BM)
Native American Independent Living.
ERIC Educational Resources Information Center
Clay, Julie Anna
1992-01-01
Examines features of independent living philosophy with regard to compatibility with Native American cultures, including definition or conceptualization of disability; self-advocacy; systems advocacy; peer counseling; and consumer control and involvement. Discusses an actualizing process as one method of resolving cultural conflicts and…
ERIC Educational Resources Information Center
Roha, Thomas Arden
1999-01-01
Foundations affiliated with public higher education institutions can avoid having to open records for public scrutiny, by having independent boards of directors, occupying leased office space or paying market value for university space, using only foundation personnel, retaining legal counsel, being forthcoming with information and use of public…
ERIC Educational Resources Information Center
James, H. Thomas
Independent schools that are of viable size, well managed, and strategically located to meet competition will survive and prosper past the current financial crisis. We live in a complex technological society with insatiable demands for knowledgeable people to keep it running. The future will be marked by the orderly selection of qualified people,…
Detrimental Relations of Maximization with Academic and Career Attitudes
ERIC Educational Resources Information Center
Dahling, Jason J.; Thompson, Mindi N.
2013-01-01
Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…
Effect of Age and Other Factors on Maximal Heart Rate.
ERIC Educational Resources Information Center
Londeree, Ben R.; Moeschberger, Melvin L.
1982-01-01
To reduce confusion regarding reported effects of age on maximal exercise heart rate, a comprehensive review of the relevant English literature was conducted. Data on maximal heart rate after bicycle, treadmill, and swimming exercise were analyzed with regard to physical fitness and to age, sex, and racial differences. (Authors/PP)
Preschoolers Can Recognize Violations of the Gricean Maxims
ERIC Educational Resources Information Center
Eskritt, Michelle; Whalen, Juanita; Lee, Kang
2008-01-01
Grice ("Syntax and semantics: Speech acts", 1975, pp. 41-58, Vol. 3) proposed that conversation is guided by a spirit of cooperation that involves adherence to several conversational maxims. Three types of maxims were explored in the current study: 1) Quality, to be truthful; 2) Relation, to say only what is relevant to a conversation; and 3)…
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
ERIC Educational Resources Information Center
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
On the predictability of chaotic systems with respect to maximally effective computation time
NASA Astrophysics Data System (ADS)
Xinquan, Gao; Guolin, Feng; Wenjie, Dong; Jifan, Chou
2003-04-01
Round-off error introduces uncertainty into numerical solutions. A computational uncertainty principle is explained and validated using chaotic systems, such as a climate model, the Rössler system, and a hyperchaotic system. Maximally effective computation time (MECT) and optimal stepsize (OS) are discussed and obtained via an optimal searching method. Solving nonlinear ordinary differential equations under the OS, the self-memorization equations of chaotic systems are set up, suggesting a new approach to numerical weather forecasting.
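One crude way to probe an effective computation time in the spirit of this abstract is to integrate a chaotic system at two step sizes and record when the numerical solutions separate. The Rössler parameters are standard; the tolerance, step sizes, and divergence criterion below are illustrative assumptions, not the paper's searching method:

```python
import numpy as np

def rossler(v, a=0.2, b=0.2, c=5.7):
    # Rössler system, one of the chaotic test systems mentioned above
    x, y, z = v
    return np.array([-y - z, x + a * y, b + z * (x - c)])

def rk4_step(f, v, h):
    k1 = f(v); k2 = f(v + 0.5 * h * k1)
    k3 = f(v + 0.5 * h * k2); k4 = f(v + h * k3)
    return v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def effective_time(h, tol=1.0, t_max=500.0):
    """Crude proxy for a maximally effective computation time: integrate the
    same initial state with stepsize h and h/2 and report the time at which
    the two numerical solutions separate by more than `tol`."""
    v1 = v2 = np.array([1.0, 1.0, 1.0])
    t = 0.0
    while t < t_max:
        v1 = rk4_step(rossler, v1, h)
        v2 = rk4_step(rossler, rk4_step(rossler, v2, h / 2), h / 2)
        t += h
        if np.linalg.norm(v1 - v2) > tol:
            return t
    return t_max
```

Because truncation error is amplified at the system's positive Lyapunov exponent, the two trajectories eventually disagree completely; past that time further computation carries no predictive information.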
Maximal zero textures in Linear and Inverse seesaw
NASA Astrophysics Data System (ADS)
Sinha, Roopam; Samanta, Rome; Ghosal, Ambar
2016-08-01
We investigate Linear and Inverse seesaw mechanisms with maximal zero textures of the constituent matrices, subject to the assumption of non-zero eigenvalues for the neutrino mass matrix mν and the charged lepton mass matrix me. If we restrict to a minimally parametrized non-singular 'me' (i.e., with the maximum number of zeros), only 6 textures of me are possible. A non-zero determinant of mν dictates six possible textures of the constituent matrices. Within this minimal approach, we ask which maximal zero textures are phenomenologically allowed. It turns out that Inverse seesaw leads to 7 allowed two-zero textures, while Linear seesaw leads to only one. In Inverse seesaw, we show that 2 is the maximum number of independent zeros that can be inserted into μS to obtain all 7 viable two-zero textures of mν. On the other hand, in the Linear seesaw mechanism, the minimal scheme allows a maximum of 5 zeros to be accommodated in 'm' so as to obtain viable effective neutrino mass matrices (mν). Interestingly, we find that our minimal approach in Inverse seesaw leads to a realization of all the phenomenologically allowed two-zero textures, whereas in Linear seesaw only one such texture is viable. Next, our numerical analysis shows that none of the two-zero textures gives rise to significant CP violation or δCP. Therefore, if δCP = π/2 is established, our minimal scheme may still be viable provided we allow a larger number of parameters in 'me'.
Comprehensibility maximization and humanly comprehensible representations
NASA Astrophysics Data System (ADS)
Kamimura, Ryotaro
2012-04-01
In this paper, we propose a new information-theoretic method to measure the comprehensibility of network configurations in competitive learning. Comprehensibility is supposed to be measured by information contained in components in competitive networks. Thus, the increase in information corresponds to the increase in comprehensibility of network configurations. One of the most important characteristics of the method is that parameters can be explicitly determined so as to produce a state where the different types of comprehensibility can be mutually increased. We applied the method to two problems, namely an artificial data set and the ionosphere data from the well-known machine learning database. In both problems, we showed that improved performance could be obtained in terms of all types of comprehensibility and quantization errors. For the topographic errors, we found that updating connection weights prevented them from increasing. Then, the optimal values of comprehensibility could be explicitly determined, and clearer class boundaries were generated.
A comparison between laddermill and treadmill maximal oxygen consumption.
Montoliu, M A; Gonzalez, V; Rodriguez, B; Palenciano, L
1997-01-01
Maximal O2 consumption (VO2max) is an index of the capacity for work over an 8 h workshift. Running on a treadmill is the most common method of eliciting it, because it is an easy, natural exercise, and also, by engaging large muscle masses, larger values are obtained than by other exercises. It has been claimed, however, that climbing a laddermill elicits a still higher VO2max, probably because more muscle mass is apparently engaged (legs + arms) than on the treadmill (legs only). However, no data in support of this claim have been presented. To see if differences exist, we conducted progressive tests to exhaustion on 44 active coal miners, on a laddermill (slant angle 75 degrees, vertical separation of rungs 25 cm) and on a treadmill set at a 5% gradient. The subjects' mean (range) age was 37.4 (31-47) years, height 174.3 (164-187) cm, body mass 82.2 (64-103) kg. Mean (range) VO2max on the laddermill was 2.83 (2.31-3.64) l x min(-1) and 2.98 (2.03-4.22) l x min(-1) on the treadmill (P < 0.01, Student's paired t-test). Mean (range) of maximal heart rate f(cmax) (beats x min(-1)) on the laddermill and on the treadmill were 181.0 (161-194) and 181.3 (162-195), respectively (NS). Laddermill:treadmill VO2max was negatively related to both treadmill VO2max x kg body mass(-1) (r = -0.410, P < 0.01) and body mass (r = -0.409, P < 0.01). Laddermill:treadmill f(cmax) was negatively related to treadmill VO2max x kg body mass(-1) (r = -0.367, P < 0.02) but not to body mass (r = -0.166, P = 0.28). Our data would suggest that for fitter subjects (VO2max > 2.6 l x min or VO2max kg body mass(-1) > 30 ml x min(-1) x kg(-1)) and/or higher body masses (> 70 kg), exercise on the laddermill is not dynamic enough to elicit a VO2max as high as on the treadmill. For such subjects, treadmill VO2max would overestimate exercise capacity for jobs requiring a fair amount of climbing ladders or ladder-like structures.
Absolutely Maximally Entangled States of Seven Qubits Do Not Exist.
Huber, Felix; Gühne, Otfried; Siewert, Jens
2017-05-19
Pure multiparticle quantum states are called absolutely maximally entangled if all reduced states obtained by tracing out at least half of the particles are maximally mixed. We provide a method to characterize these states for a general multiparticle system. With that, we prove that a seven-qubit state whose three-body marginals are all maximally mixed, or equivalently, a pure ((7,1,4))_{2} quantum error correcting code, does not exist. Furthermore, we obtain an upper limit on the possible number of maximally mixed three-body marginals and identify the state saturating the bound. This solves the seven-particle problem as the last open case concerning maximally entangled states of qubits.
Absolutely Maximally Entangled States of Seven Qubits Do Not Exist
NASA Astrophysics Data System (ADS)
Huber, Felix; Gühne, Otfried; Siewert, Jens
2017-05-01
Pure multiparticle quantum states are called absolutely maximally entangled if all reduced states obtained by tracing out at least half of the particles are maximally mixed. We provide a method to characterize these states for a general multiparticle system. With that, we prove that a seven-qubit state whose three-body marginals are all maximally mixed, or equivalently, a pure ((7,1,4))_2 quantum error correcting code, does not exist. Furthermore, we obtain an upper limit on the possible number of maximally mixed three-body marginals and identify the state saturating the bound. This solves the seven-particle problem as the last open case concerning maximally entangled states of qubits.
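The defining property, reduced states that are maximally mixed, is easy to check numerically. The sketch below (an illustrative helper, not the paper's characterization method) verifies it for the 1-body marginals of the 3-qubit GHZ state, a standard example where every reduction to half the particles, rounded down, is maximally mixed:

```python
import numpy as np

def reduced_state(psi, keep, n):
    """Reduced density matrix of an n-qubit pure state `psi` on the qubits
    listed in `keep`, obtained by tracing out all the others."""
    traced = [q for q in range(n) if q not in keep]
    t = psi.reshape([2] * n)
    rho = np.tensordot(t, t.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

# Three-qubit GHZ state (|000> + |111>)/sqrt(2): every 1-body marginal
# (i.e., every reduction to floor(3/2) = 1 particle) is maximally mixed.
ghz = np.zeros(8)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
all_mixed = all(
    np.allclose(reduced_state(ghz, [q], 3), np.eye(2) / 2) for q in range(3)
)
```

The paper's result is the negative counterpart of this check for seven qubits: no pure state passes it for all three-body marginals.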
Criticality Maximizes Complexity in Neural Tissue
Timme, Nicholas M.; Marshall, Najja J.; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M.
2016-01-01
The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity—an information theoretic measure—has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online. PMID:27729870
Criticality Maximizes Complexity in Neural Tissue.
Timme, Nicholas M; Marshall, Najja J; Bennett, Nicholas; Ripp, Monica; Lautzenhiser, Edward; Beggs, John M
2016-01-01
The analysis of neural systems leverages tools from many different fields. Drawing on techniques from the study of critical phenomena in statistical mechanics, several studies have reported signatures of criticality in neural systems, including power-law distributions, shape collapses, and optimized quantities under tuning. Independently, neural complexity-an information theoretic measure-has been introduced in an effort to quantify the strength of correlations across multiple scales in a neural system. This measure represents an important tool in complex systems research because it allows for the quantification of the complexity of a neural system. In this analysis, we studied the relationships between neural complexity and criticality in neural culture data. We analyzed neural avalanches in 435 recordings from dissociated hippocampal cultures produced from rats, as well as neural avalanches from a cortical branching model. We utilized recently developed maximum likelihood estimation power-law fitting methods that account for doubly truncated power-laws, an automated shape collapse algorithm, and neural complexity and branching ratio calculation methods that account for sub-sampling, all of which are implemented in the freely available Neural Complexity and Criticality MATLAB toolbox. We found evidence that neural systems operate at or near a critical point and that neural complexity is optimized in these neural systems at or near the critical point. Surprisingly, we found evidence that complexity in neural systems is dependent upon avalanche profiles and neuron firing rate, but not precise spiking relationships between neurons. In order to facilitate future research, we made all of the culture data utilized in this analysis freely available online.
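The maximum-likelihood power-law fitting mentioned above can be illustrated in its simplest continuous, untruncated form, a deliberate simplification of the doubly truncated fits used in the paper (synthetic data, not the culture recordings):

```python
import numpy as np

def powerlaw_mle(x, x_min):
    """Continuous maximum-likelihood estimate of a power-law exponent
    (the Hill estimator): alpha = 1 + n / sum(ln(x_i / x_min)) for
    samples x >= x_min."""
    x = np.asarray([v for v in x if v >= x_min], dtype=float)
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Synthetic check: inverse-transform sampling from p(x) ~ x^(-2.5), x >= 1,
# using the CCDF (x/x_min)^-(alpha-1)
rng = np.random.default_rng(0)
u = rng.random(50_000)
x = (1 - u) ** (-1 / 1.5)        # CDF inversion for alpha = 2.5, x_min = 1
alpha_hat = powerlaw_mle(x, 1.0)
```

Real avalanche-size data additionally require choosing x_min and handling the upper truncation imposed by finite system size, which is what the toolbox cited in the abstract does.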
Montero, David; Díaz-Cañestro, Candela
2016-05-01
The increase in maximal oxygen consumption (VO2max) with endurance training is associated with that of maximal cardiac output (Qmax), but not oxygen extraction, in young individuals. Whether such a relationship is altered with ageing remains unclear. Therefore, we sought systematically to review and determine the effect of endurance training on and the associations among VO2max, Qmax and arteriovenous oxygen difference at maximal exercise (Ca-vO2max) in healthy aged individuals. We conducted a systematic search of MEDLINE, Scopus and Web of Science, from their inceptions until May 2015 for articles assessing the effect of endurance training lasting 3 weeks or longer on VO2max and Qmax and/or Ca-vO2max in healthy middle-aged and/or older individuals (mean age ≥40 years). Meta-analyses were performed to determine the standardised mean difference (SMD) in VO2max, Qmax and Ca-vO2max between post and pre-training measurements. Subgroup and meta-regression analyses were used to evaluate the associations among SMDs and potential moderating factors. Sixteen studies were included after systematic review, comprising a total of 153 primarily untrained healthy middle-aged and older subjects (mean age 42-71 years). Endurance training programmes ranged from 8 to 52 weeks of duration. After data pooling, VO2max (SMD 0.89; P < 0.0001) and Qmax (SMD 0.61; P < 0.0001) were increased after endurance training; no heterogeneity among studies was detected. Ca-vO2max was only increased with endurance training interventions lasting more than 12 weeks (SMD 0.62; P = 0.001). In meta-regression, the SMD in Qmax was positively associated with the SMD in VO2max (B = 0.79, P = 0.04). The SMD in Ca-vO2max was not associated with the SMD in VO2max (B = 0.09, P = 0.84). The improvement in VO2max following endurance training is a linear function of Qmax, but not Ca-vO2max, through healthy ageing. © The European Society of Cardiology 2015.
Point set registration: coherent point drift.
Myronenko, Andriy; Song, Xubo
2010-12-01
Point set registration is a key component in many computer vision tasks. The goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. Multiple factors, including an unknown nonrigid spatial transformation, large dimensionality of point set, noise, and outliers, make the point set registration a challenging problem. We introduce a probabilistic method, called the Coherent Point Drift (CPD) algorithm, for both rigid and nonrigid point set registration. We consider the alignment of two point sets as a probability density estimation problem. We fit the Gaussian mixture model (GMM) centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. We force the GMM centroids to move coherently as a group to preserve the topological structure of the point sets. In the rigid case, we impose the coherence constraint by reparameterization of GMM centroid locations with rigid parameters and derive a closed form solution of the maximization step of the EM algorithm in arbitrary dimensions. In the nonrigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. We also introduce a fast algorithm that reduces the method computation complexity to linear. We test the CPD algorithm for both rigid and nonrigid transformations in the presence of noise, outliers, and missing points, where CPD shows accurate results and outperforms current state-of-the-art methods.
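The EM flavor of CPD can be conveyed by a deliberately stripped-down toy: translation-only, fixed bandwidth, no outlier term, and no coherence regularization, so this is an assumption-laden sketch of the idea, not the paper's algorithm:

```python
import numpy as np

def em_translation(X, Y, sigma2=1.0, iters=100):
    """Translation-only EM alignment in the spirit of CPD: the points of X
    (shifted by the current estimate t) act as GMM centroids for the data Y.
    Omits CPD's rigid/nonrigid transforms, outlier model, and sigma^2
    re-estimation."""
    t = np.zeros(X.shape[1])
    for _ in range(iters):
        diff = Y[:, None, :] - (X[None, :, :] + t)        # (M, N, D) pair offsets
        P = np.exp(-np.sum(diff ** 2, axis=2) / (2 * sigma2))
        P /= P.sum(axis=1, keepdims=True)                 # E-step responsibilities
        # M-step: closed-form weighted update of the translation
        t = np.sum(P[:, :, None] * (Y[:, None, :] - X[None, :, :]),
                   axis=(0, 1)) / P.sum()
    return t

# Hypothetical toy data: Y is X shifted by a known translation plus noise
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
true_t = np.array([1.0, -0.5])
Y = X + true_t + 0.01 * rng.normal(size=X.shape)
t_hat = em_translation(X, Y)
```

The full method replaces this M-step with a closed-form rigid update (or a regularized displacement field in the nonrigid case) so that the centroids move coherently.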
A Classroom Tariff-Setting Game
ERIC Educational Resources Information Center
Winchester, Niven
2006-01-01
The author outlines a classroom tariff-setting game that allows students to explore the consequences of import tariffs imposed by large countries (countries able to influence world prices). Groups of students represent countries, which are organized into trading pairs. Each group's objective is to maximize welfare by choosing an appropriate ad…
NASA Astrophysics Data System (ADS)
Sarder, Pinaki; Akers, Walter J.; Sudlow, Gail P.; Yazdanfar, Siavash; Achilefu, Samuel
2014-02-01
We report two methods for quantitatively determining maximal imaging depth from thick tissue images captured using all-near-infrared (NIR) multiphoton microscopy (MPM). All-NIR MPM is performed using 1550 nm laser excitation with NIR detection. This method enables imaging more than five-fold deeper in thick tissues than other NIR excitation microscopy methods. In this study, we show a correlation between the multiphoton signal along the depth of tissue samples and the shape of the corresponding empirical probability density function (pdf) of the photon counts. Histograms from this analysis become increasingly symmetric with imaging depth, and the distribution transitions toward the background distribution at greater depths. Motivated by these observations, we propose two independent methods for automatically determining the maximal imaging depth in all-NIR MPM images of thick tissues, the point at which the signal strength is expected to be weak and similar to the background. The first method takes the maximal imaging depth to be the deepest image plane where the ratio between the mean and median of the empirical photon-count pdf is outside the vicinity of 1. The second method takes it to be the deepest image plane where the squared distance between the empirical photon-count mean obtained from the object and the mean obtained from the background is greater than a threshold. We demonstrate the application of these methods to all-NIR MPM images of mouse kidney tissues to study maximal depth penetration in such tissues.
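Both criteria are simple enough to sketch directly. The following minimal version (function names, `tol`, and `thresh` are hypothetical choices, not the paper's calibrated values) scans image planes from shallow to deep and reports the deepest plane that still looks like signal under each rule:

```python
from statistics import mean, median

def maximal_depth_by_mean_median(planes, tol=0.1):
    """Method 1 sketch. planes: per-depth lists of photon counts, ordered
    shallow to deep. Returns the index of the deepest plane whose
    mean/median ratio differs from 1 by more than tol (i.e. the
    photon-count distribution is still skewed, so not yet background)."""
    deepest = None
    for z, counts in enumerate(planes):
        med = median(counts)
        if med == 0:
            continue
        if abs(mean(counts) / med - 1.0) > tol:
            deepest = z
    return deepest

def maximal_depth_by_background_distance(planes, background_mean, thresh):
    """Method 2 sketch: deepest plane whose mean photon count lies farther
    than thresh (in squared distance) from the background mean."""
    deepest = None
    for z, counts in enumerate(planes):
        if (mean(counts) - background_mean) ** 2 > thresh:
            deepest = z
    return deepest
```

On a toy stack whose two shallow planes are skewed and bright while the two deep planes match the background, both rules agree that plane 1 is the maximal imaging depth.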
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
...; Independent Contractor Registration and Identification ACTION: Notice. SUMMARY: The Department of Labor (DOL... ] request (ICR) titled, ``Independent Contractor Registration and Identification,'' to the Office of...@dol.gov . SUPPLEMENTARY INFORMATION: Regulations 30 CFR part 45, Independent Contractors, sets...
Berthon, P; Fellmann, N
2002-09-01
The maximal aerobic velocity concept, developed since the eighties, is defined either as the minimal velocity which elicits maximal oxygen consumption or as the "velocity associated with maximal oxygen consumption". Different methods for measuring maximal aerobic velocity on a treadmill in laboratory conditions have been elaborated, but all these specific protocols measure V(amax) either during a maximal oxygen consumption test or in association with such a test. An inaccurate method presents a certain number of problems in the subsequent use of the results, for example in the elaboration of training programs, in the study of repeatability or in the determination of individual limit time. This study analyzes 14 different methods to understand their interests and limits, with a view to proposing a general methodology for measuring V(amax). In brief, the test should be progressive and maximal, without any rest period, and of 17 to 20 min total duration. It should begin with a five min warm-up at 60-70% of the maximal aerobic power of the subjects. The starting speed should be fixed so that four or five steps have to be run. The duration of the steps should be three min, with a 1% slope and a speed increment of 1.5 km x h(-1) until complete exhaustion. The last steps could be reduced to two min with a 1 km x h(-1) increment. The maximal aerobic velocity is adjusted in relation to the duration of the last step.
Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H.; Medeiros, Felipe A.; Zangwill, Linda M.; Weinreb, Robert N.; Liebmann, Jeffrey M.; Girkin, Christopher A.; Bowd, Christopher
2016-01-01
Purpose To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM–progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. Methods GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Results Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. Conclusions GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Translational Relevance Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning. PMID:27152250
Maximal stochastic transport in the Lorenz equations
NASA Astrophysics Data System (ADS)
Agarwal, Sahil; Wettlaufer, J. S.
2016-01-01
We calculate the stochastic upper bounds for the Lorenz equations using an extension of the background method. In analogy with Rayleigh-Bénard convection the upper bounds are for heat transport versus Rayleigh number. As might be expected, the stochastic upper bounds are larger than the deterministic counterpart of Souza and Doering [1], but their variation with noise amplitude exhibits interesting behavior. Below the transition to chaotic dynamics the upper bounds increase monotonically with noise amplitude. However, in the chaotic regime this monotonicity depends on the number of realizations in the ensemble; at a particular Rayleigh number the bound may increase or decrease with noise amplitude. The origin of this behavior is the coupling between the noise and unstable periodic orbits, the degree of which depends on the degree to which the ensemble represents the ergodic set. This is confirmed by examining the close returns plots of the full solutions to the stochastic equations and the numerical convergence of the noise correlations. The numerical convergence of both the ensemble and time averages of the noise correlations is sufficiently slow that it is the limiting aspect of the realization of these bounds. Finally, we note that the full solutions of the stochastic equations demonstrate that the effect of noise is equivalent to the effect of chaos.
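The stochastic system studied here can be explored numerically with a standard Euler-Maruyama scheme. The sketch below is illustrative only: it integrates the Lorenz equations with additive noise and returns the time average of z as a crude stand-in for the transport quantity that the rigorous background-method bounds control; parameter values and the ensemble size are hypothetical:

```python
import math
import random

def lorenz_em(sigma=10.0, rho=28.0, beta=8.0 / 3.0, eps=0.5,
              dt=1e-3, steps=20000, seed=0):
    """Euler-Maruyama integration of the Lorenz equations with additive
    noise of amplitude eps on each component. Returns the time average
    of z over the run."""
    rng = random.Random(seed)
    x, y, z = 1.0, 1.0, 1.0
    sq = math.sqrt(dt)
    zsum = 0.0
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = (x + dx * dt + eps * sq * rng.gauss(0.0, 1.0),
                   y + dy * dt + eps * sq * rng.gauss(0.0, 1.0),
                   z + dz * dt + eps * sq * rng.gauss(0.0, 1.0))
        zsum += z
    return zsum / steps

# a small ensemble over noise realizations, as in the paper's averaging
ensemble = [lorenz_em(seed=s) for s in range(3)]
```

As the abstract notes, the slow convergence of ensemble and time averages is the limiting factor in practice, so a serious computation would use far longer runs and many more realizations than this sketch.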
Maximizing the Adjacent Possible in Automata Chemistries.
Hickinbotham, Simon; Clark, Edward; Nellis, Adam; Stepney, Susan; Clarke, Tim; Young, Peter
2016-01-01
Automata chemistries are good vehicles for experimentation in open-ended evolution, but they are by necessity complex systems whose low-level properties require careful design. To aid the process of designing automata chemistries, we develop an abstract model that classifies the features of a chemistry from a physical (bottom up) perspective and from a biological (top down) perspective. There are two levels: things that can evolve, and things that cannot. We equate the evolving level with biology and the non-evolving level with physics. We design our initial organisms in the biology, so they can evolve. We design the physics to facilitate evolvable biologies. This architecture leads to a set of design principles that should be observed when creating an instantiation of the architecture. These principles are Everything Evolves, Everything's Soft, and Everything Dies. To evaluate these ideas, we present experiments in the recently developed Stringmol automata chemistry. We examine the properties of Stringmol with respect to the principles, and so demonstrate the usefulness of the principles in designing automata chemistries.
Kettlebell swing training improves maximal and explosive strength.
Lake, Jason P; Lauder, Mike A
2012-08-01
The aim of this study was to establish the effect that kettlebell swing (KB) training had on measures of maximum (half squat-HS-1 repetition maximum [1RM]) and explosive (vertical jump height-VJH) strength. To put these effects into context, they were compared with the effects of jump squat power training (JS-known to improve 1RM and VJH). Twenty-one healthy men (age = 18-27 years, body mass = 72.58 ± 12.87 kg) who could perform a proficient HS were tested for their HS 1RM and VJH pre- and post-training. Subjects were randomly assigned to either a KB or JS training group after HS 1RM testing and trained twice a week. The KB group performed 12-minute bouts of KB exercise (12 rounds of 30-second exercise, 30-second rest with 12 kg if <70 kg or 16 kg if >70 kg). The JS group performed at least 4 sets of 3 JS with the load that maximized peak power; training volume was altered to accommodate different training loads and ranged from 4 sets of 3 with the heaviest load (60% 1RM) to 8 sets of 6 with the lightest load (0% 1RM). Maximum strength improved by 9.8% (HS 1RM: 165-181% body mass, p < 0.001) after the training intervention, and post hoc analysis revealed that there was no significant difference between the effect of KB and JS training (p = 0.56). Explosive strength improved by 19.8% (VJH: 20.6-24.3 cm) after the training intervention, and post hoc analysis revealed that the type of training did not significantly affect this either (p = 0.38). The results of this study clearly demonstrate that 6 weeks of biweekly KB training provides a stimulus that is sufficient to increase both maximum and explosive strength, offering a useful alternative to strength and conditioning professionals seeking variety for their athletes.
Agent independent task planning
NASA Technical Reports Server (NTRS)
Davis, William S.
1990-01-01
Agent-Independent Planning is a technique that allows the construction of activity plans without regard to the agent that will perform them. Once generated, a plan is then validated and translated into instructions for a particular agent, whether a robot, crewmember, or software-based control system. Because Space Station Freedom (SSF) is planned for orbital operations for approximately thirty years, it will almost certainly experience numerous enhancements and upgrades, including upgrades in robotic manipulators. Agent-Independent Planning provides the capability to construct plans for SSF operations, independent of specific robotic systems, by combining techniques of object oriented modeling, nonlinear planning and temporal logic. Since a plan is validated using the physical and functional models of a particular agent, new robotic systems can be developed and integrated with existing operations in a robust manner. This technique also provides the capability to generate plans for crewmembers with varying skill levels, and later apply these same plans to more sophisticated robotic manipulators made available by evolutions in technology.
International exploration by independents
Bertragne, R.G.
1992-04-01
Recent industry trends indicate that the smaller U.S. independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. Foreign finding costs per barrel usually are accepted to be substantially lower than domestic costs because of the large reserve potential of international plays. To get involved in overseas exploration, however, requires the explorationist to adapt to different cultural, financial, legal, operational, and political conditions. Generally, foreign exploration proceeds at a slower pace than domestic exploration because concessions are granted by a country's government, or are explored in partnership with a national oil company. First, the explorationist must prepare a mid- to long-term strategy, tailored to the goals and the financial capabilities of the company; next, is an ongoing evaluation of quality prospects in various sedimentary basins, and careful planning and conduct of the operations. To successfully explore overseas also requires the presence of a minimum number of explorationists and engineers thoroughly familiar with the various exploratory and operational aspects of foreign work. Ideally, these team members will have had a considerable amount of on-site experience in various countries and climates. Independents best suited for foreign expansion are those who have been financially successful in domestic exploration. When properly approached, foreign exploration is well within the reach of smaller U.S. independents, and presents essentially no greater risk than domestic exploration; however, the reward can be much larger and can catapult the company into the 'big leagues.'
International exploration by independents
Bertagne, R.G.
1991-03-01
Recent industry trends indicate that the smaller US independents are looking at foreign exploration opportunities as one of the alternatives for growth in the new age of exploration. It is usually accepted that foreign finding costs per barrel are substantially lower than domestic because of the large reserve potential of international plays. To get involved overseas requires, however, an adaptation to different cultural, financial, legal, operational, and political conditions. Generally foreign exploration proceeds at a slower pace than domestic because concessions are granted by the government, or are explored in partnership with the national oil company. First, a mid- to long-term strategy, tailored to the goals and the financial capabilities of the company, must be prepared; it must be followed by an ongoing evaluation of quality prospects in various sedimentary basins, and a careful planning and conduct of the operations. To successfully explore overseas also requires the presence on the team of a minimum number of explorationists and engineers thoroughly familiar with the various exploratory and operational aspects of foreign work, having had a considerable amount of onsite experience in various geographical and climatic environments. Independents that are best suited for foreign expansion are those that have been financially successful domestically, and have a good discovery track record. When properly approached foreign exploration is well within the reach of smaller US independents and presents essentially no greater risk than domestic exploration; the reward, however, can be much larger and can catapult the company into the big leagues.
Robust vehicle lateral stabilisation via set-based methods for uncertain piecewise affine systems
NASA Astrophysics Data System (ADS)
Palmieri, Giovanni; Barić, Miroslav; Glielmo, Luigi; Borrelli, Francesco
2012-06-01
The paper presents the design of a lateral stability controller for ground vehicles based on front steering and four-wheel independent braking. The control objective is to track yaw rate and lateral velocity reference signals while avoiding front and rear wheel traction force saturation. Control design is based on an approximate piecewise-affine nonlinear dynamical model of the vehicle. Vehicle longitudinal velocity and the driver's steering input are modelled as measured disturbances taking values in a compact set. A time-optimal control strategy which ensures convergence into a maximal robust control invariant (RCI) set is proposed. This paper presents the uncertain model, the RCI set computation, and the control algorithm. Experimental tests at high speed on ice with aggressive driver manoeuvres show the effectiveness of the proposed scheme.
Boccia, Gennaro; Dardanello, Davide; Tarperi, Cantor; Festa, Luca; La Torre, Antonio; Pellegrini, Barbara; Schena, Federico; Rainoldi, Alberto
2017-08-01
We examined whether the presence of fatigue induced by prolonged running influenced the time courses of force generating capacities throughout a series of intermittent rapid contractions. Thirteen male amateur runners performed a set of 15 intermittent isometric rapid contractions of the knee extensor muscles (3 s on/5 s off) the day before (PRE) and immediately after (POST) a half marathon. The maximal voluntary contraction force, rate of force development (RFDpeak), and their ratio (relative RFDpeak) were calculated. At POST, considering the first (out of 15) repetition, the maximal force and RFDpeak decreased (p<0.0001) to the same extent (by 22±6% and 24±22%, respectively), resulting in unchanged relative RFDpeak (p=0.6). Conversely, the decline of RFDpeak throughout the repetitions was more pronounced at POST (p=0.02), and thus the decline of relative RFDpeak was more pronounced (p=0.007) at POST (-25±13%) than at PRE (-3±13%). The main finding of this study was that the fatigue induced by a half marathon caused a more pronounced impairment of rapid compared with maximal force in the subsequent intermittent protocol. Thus, the fatigue-induced impairment in rapid muscle contractions may have a greater effect on repeated, rather than single, attempts at maximal force production. Copyright © 2017 Elsevier B.V. All rights reserved.
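The quantities compared above are easy to extract from a sampled force-time trace. A minimal sketch (hypothetical helper, finite-difference slope on a uniformly sampled trace; real studies typically filter the signal and use a sliding-window slope):

```python
def rfd_metrics(force, dt):
    """force: sampled force trace (N), dt: sampling interval (s).
    Returns (MVC force, peak rate of force development, and their
    ratio, the relative RFDpeak)."""
    mvc = max(force)
    # RFDpeak as the steepest rise between consecutive samples
    rfd_peak = max((force[i + 1] - force[i]) / dt
                   for i in range(len(force) - 1))
    return mvc, rfd_peak, rfd_peak / mvc
```

On a toy ramp-and-plateau trace sampled at 20 Hz, the peak slope occurs on the steepest rising segment, and dividing by MVC gives the relative RFDpeak used to separate rapid-force from maximal-force impairment.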
Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali
2014-06-24
Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability of detecting multipartite entanglement, quantum Fisher information (QFI) has recently received an intense attraction generally with entanglement in the focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity and vice versa.
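A stripped-down toy version of the quantities involved can be written for pure two-qubit states with real amplitudes (the paper itself treats general states and maximizes QFI over local rotations; here the QFI is evaluated for the fixed collective operator Jz only, and all names are illustrative):

```python
import math
import random

def concurrence(a, b, c, d):
    """Concurrence of the pure two-qubit state a|00> + b|01> + c|10> + d|11>
    with real amplitudes: C = 2|ad - bc|."""
    return 2 * abs(a * d - b * c)

def qfi_jz(a, b, c, d):
    """Quantum Fisher information of a pure state for the fixed generator
    Jz = (sigma_z x I + I x sigma_z)/2, i.e. F = 4 Var(Jz).
    Jz eigenvalues: |00> -> 1, |01>, |10> -> 0, |11> -> -1."""
    m1 = a * a - d * d  # <Jz>
    m2 = a * a + d * d  # <Jz^2>
    return 4 * (m2 - m1 * m1)

def random_state(rng):
    """A random real-amplitude pure two-qubit state."""
    v = [rng.gauss(0.0, 1.0) for _ in range(4)]
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]
```

The two Bell states (|00>+|11>)/√2 and (|01>+|10>)/√2 have equal concurrence but QFI of 4 and 0, respectively, for this fixed Jz, which is precisely why the paper maximizes QFI over local rotations of each qubit before comparing orderings.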
Muscle Damage following Maximal Eccentric Knee Extensions in Males and Females.
Hicks, K M; Onambélé, G L; Winwood, K; Morse, C I
2016-01-01
To investigate whether there is a sex difference in exercise induced muscle damage. Vastus Lateralis and patella tendon properties were measured in males and females using ultrasonography. During maximal voluntary eccentric knee extensions (12 reps x 6 sets), Vastus Lateralis fascicle lengthening and maximal voluntary eccentric knee extensions torque were recorded every 10° of knee joint angle (20-90°). Isometric torque, Creatine Kinase and muscle soreness were measured pre, post, 48, 96 and 168 hours post damage as markers of exercise induced muscle damage. Patella tendon stiffness and Vastus Lateralis fascicle lengthening were significantly higher in males compared to females (p<0.05). There was no sex difference in isometric torque loss and muscle soreness post exercise induced muscle damage (p>0.05). Creatine Kinase levels post exercise induced muscle damage were higher in males compared to females (p<0.05), and remained higher when maximal voluntary eccentric knee extension torque, relative to estimated quadriceps anatomical cross sectional area, was taken as a covariate (p<0.05). Based on isometric torque loss, there is no sex difference in exercise induced muscle damage. The higher Creatine Kinase in males could not be explained by differences in maximal voluntary eccentric knee extension torque, Vastus Lateralis fascicle lengthening and patella tendon stiffness. Further research is required to understand the significant sex differences in Creatine Kinase levels following exercise induced muscle damage.
Muscle Damage following Maximal Eccentric Knee Extensions in Males and Females
2016-01-01
Aim To investigate whether there is a sex difference in exercise induced muscle damage. Materials and Method Vastus Lateralis and patella tendon properties were measured in males and females using ultrasonography. During maximal voluntary eccentric knee extensions (12 reps x 6 sets), Vastus Lateralis fascicle lengthening and maximal voluntary eccentric knee extensions torque were recorded every 10° of knee joint angle (20–90°). Isometric torque, Creatine Kinase and muscle soreness were measured pre, post, 48, 96 and 168 hours post damage as markers of exercise induced muscle damage. Results Patella tendon stiffness and Vastus Lateralis fascicle lengthening were significantly higher in males compared to females (p<0.05). There was no sex difference in isometric torque loss and muscle soreness post exercise induced muscle damage (p>0.05). Creatine Kinase levels post exercise induced muscle damage were higher in males compared to females (p<0.05), and remained higher when maximal voluntary eccentric knee extension torque, relative to estimated quadriceps anatomical cross sectional area, was taken as a covariate (p<0.05). Conclusion Based on isometric torque loss, there is no sex difference in exercise induced muscle damage. The higher Creatine Kinase in males could not be explained by differences in maximal voluntary eccentric knee extension torque, Vastus Lateralis fascicle lengthening and patella tendon stiffness. Further research is required to understand the significant sex differences in Creatine Kinase levels following exercise induced muscle damage. PMID:26986066
Erol, Volkan; Ozaydin, Fatih; Altintas, Azmi Ali
2014-01-01
Entanglement has been studied extensively for unveiling the mysteries of non-classical correlations between quantum systems. In the bipartite case, there are well known measures for quantifying entanglement such as concurrence, relative entropy of entanglement (REE) and negativity, which cannot be increased via local operations. It was found that for sets of non-maximally entangled states of two qubits, comparing these entanglement measures may lead to different entanglement orderings of the states. On the other hand, although it is not an entanglement measure and not monotonic under local operations, due to its ability of detecting multipartite entanglement, quantum Fisher information (QFI) has recently received an intense attraction generally with entanglement in the focus. In this work, we revisit the state ordering problem of general two qubit states. Generating a thousand random quantum states and performing an optimization based on local general rotations of each qubit, we calculate the maximal QFI for each state. We analyze the maximized QFI in comparison with concurrence, REE and negativity and obtain new state orderings. We show that there are pairs of states having equal maximized QFI but different values for concurrence, REE and negativity and vice versa. PMID:24957694
A comparison of central aspects of fatigue in submaximal and maximal voluntary contractions.
Taylor, Janet L; Gandevia, Simon C
2008-02-01
Magnetic and electrical stimulation at different levels of the neuraxis show that supraspinal and spinal factors limit force production in maximal isometric efforts ("central fatigue"). In sustained maximal contractions, motoneurons become less responsive to synaptic input and descending drive becomes suboptimal. Exercise-induced activity in group III and IV muscle afferents acts supraspinally to limit motor cortical output but does not alter motor cortical responses to transcranial magnetic stimulation. "Central" and "peripheral" fatigue develop more slowly during submaximal exercise. In sustained submaximal contractions, central fatigue occurs in brief maximal efforts even with a weak ongoing contraction (<15% maximum). The presence of central fatigue when much of the available motor pathway is not engaged suggests that afferent inputs contribute to reduce voluntary activation. Small-diameter muscle afferents are likely to be activated by local activity even in sustained weak contractions. During such contractions, it is difficult to measure central fatigue, which is best demonstrated in maximal efforts. To show central fatigue in submaximal contractions, changes in motor unit firing and force output need to be characterized simultaneously. Increasing central drive recruits new motor units, but the way this occurs is likely to depend on properties of the motoneurons and the inputs they receive in the task. It is unclear whether such factors impair force production for a set level of descending drive and thus represent central fatigue. The best indication that central fatigue is important during submaximal tasks is the disproportionate increase in subjects' perceived effort when maintaining a low target force.
Brown, K.A.; Osbakken, M.; Boucher, C.A.; Strauss, H.W.; Pohost, G.M.; Okada, R.D.
1985-01-01
The incidence and causes of abnormal thallium-201 (TI-201) myocardial perfusion studies in the absence of significant coronary artery disease were examined. The study group consisted of 100 consecutive patients undergoing exercise TI-201 testing and coronary angiography who were found to have maximal coronary artery diameter narrowing of less than 50%. Maximal coronary stenosis ranged from 0 to 40%. The independent and relative influences of patient clinical, exercise and angiographic data were assessed by logistic regression analysis. Significant predictors of a positive stress TI-201 test result were: (1) percent maximal coronary stenosis (p less than 0.0005), (2) propranolol use (p less than 0.01), (3) interaction of propranolol use and percent maximal stenosis (p less than 0.005), and (4) stress-induced chest pain (p = 0.05). No other patient variable had a significant influence. Positive TI-201 test results were more common in patients with 21 to 40% maximal stenosis (59%) than in patients with 0 to 20% maximal stenosis (27%) (p less than 0.01). Among patients with 21 to 40% stenosis, a positive test response was more common when 85% of maximal predicted heart rate was achieved (75%) than when it was not (40%) (p less than 0.05). Of 16 nonapical perfusion defects seen in patients with 21 to 40% maximal stenosis, 14 were in the territory that corresponded with such a coronary stenosis. Patients taking propranolol were more likely to have a positive TI-201 test result (45%) than patients not taking propranolol (22%) (p less than 0.05).
Oxygen uptake in maximal effort constant rate and interval running.
Pratt, Daniel; O'Brien, Brendan J; Clark, Bradley
2013-01-01
This study investigated differences in the average VO2 of maximal effort interval running and maximal effort constant rate running at lactate threshold matched for time. The average VO2 and distance covered of 10 recreational male runners (VO2max: 4158 ± 390 mL · min(-1)) were compared between a maximal effort constant-rate run at lactate threshold (CRLT), a maximal effort interval run (INT) consisting of 2 min at VO2max speed with 2 minutes at 50% of VO2 repeated 5 times, and a run at the average speed sustained during the interval run (CR submax). Data are presented as mean and 95% confidence intervals. The average VO2 for INT, 3451 (3269-3633) mL · min(-1), 83% VO2max, was not significantly different to CRLT, 3464 (3285-3643) mL · min(-1), 84% VO2max, but both were significantly higher than CR submax, 3464 (3285-3643) mL · min(-1), 76% VO2max. The distance covered was significantly greater in CRLT, 4431 (4202-3731) metres, compared to INT and CR submax, 4070 (3831-4309) metres. The novel finding was that a 20-minute maximal effort constant rate run uses similar amounts of oxygen as a 20-minute maximal effort interval run despite the greater distance covered in the maximal effort constant-rate run.
Can monkeys make investments based on maximized pay-off?
Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard
2011-03-10
Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules according to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.
A taxonomic approach to communicating maxims in interstellar messages
NASA Astrophysics Data System (ADS)
Vakoch, Douglas A.
2011-02-01
Previous discussions of interstellar messages that could be sent to extraterrestrial intelligence have focused on descriptions of mathematics, science, and aspects of human culture and civilization. Although some of these depictions of humanity have implicitly referred to our aspirations, this has not clearly been separated from descriptions of our actions and attitudes as they are. In this paper, a methodology is developed for constructing interstellar messages that convey information about our aspirations, based on a taxonomy of maxims that provide guidance for living. Sixty-six such maxims were judged for degree of similarity to each other. Quantitative measures of the degree of similarity between all pairs of maxims were derived by aggregating similarity judgments across individual participants. These composite similarity ratings were subjected to a cluster analysis, which yielded a taxonomy that highlights perceived interrelationships between individual maxims and that identifies major classes of maxims. Such maxims can be encoded in interstellar messages through three-dimensional animation sequences conveying narratives that highlight interactions between individuals. In addition, verbal descriptions of these interactions in Basic English can be combined with these pictorial sequences to increase intelligibility. Online projects to collect messages, such as the SETI Institute's Earth Speaks and La Tierra Habla, can be used to solicit maxims from participants around the world.
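The abstract does not specify which clustering algorithm was used, so as a hedged illustration of the general procedure, here is a minimal average-linkage agglomerative clustering over a small, made-up dissimilarity matrix of the kind the aggregated similarity judgments would yield:

```python
def average_linkage(D, k):
    """Agglomerative clustering with average linkage on a symmetric
    dissimilarity matrix D (list of lists, zero diagonal); merge the
    two closest clusters until only k clusters remain."""
    clusters = [[i] for i in range(len(D))]

    def dist(A, B):
        # average pairwise dissimilarity between the two clusters
        return sum(D[i][j] for i in A for j in B) / (len(A) * len(B))

    while len(clusters) > k:
        best = None
        for p in range(len(clusters)):
            for q in range(p + 1, len(clusters)):
                d = dist(clusters[p], clusters[q])
                if best is None or d < best[0]:
                    best = (d, p, q)
        _, p, q = best
        clusters[p] = clusters[p] + clusters[q]
        del clusters[q]
    return clusters
```

With four hypothetical "maxims" whose within-pair dissimilarity is 0.1 and cross-pair dissimilarity is 0.9, cutting at two clusters recovers the two classes, which is the shape of result a taxonomy of maxim classes rests on.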
When Does Reward Maximization Lead to Matching Law?
Sakai, Yutaka; Fukai, Tomoki
2008-01-01
What kind of strategies subjects follow in various behavioral circumstances has been a central issue in decision making. In particular, which behavioral strategy, maximizing or matching, is more fundamental to an animal's decision behavior has been a matter of debate. Here, we prove that any algorithm to achieve the stationary condition for maximizing the average reward should lead to matching when it ignores the dependence of the expected outcome on the subject's past choices. We may term this strategy of partial reward maximization the “matching strategy”. Then, this strategy is applied to the case where the subject's decision system updates the information for making a decision. Such information includes the subject's past actions or sensory stimuli, and the internal storage of this information is often called “state variables”. We demonstrate that the matching strategy provides an easy way to maximize reward when combined with the exploration of the state variables that correctly represent the crucial information for reward maximization. Our results reveal for the first time how a strategy to achieve matching behavior is beneficial to reward maximization, providing a novel insight into the relationship between maximizing and matching. PMID:19030101
Student Learning and an Independent Study Course.
ERIC Educational Resources Information Center
Brew, Angela; McCormick, Bob
1979-01-01
Based on evaluation of an Open University independent study course used in a conventional setting at the University of Essex, this paper focuses on: the relationship of the student's choice of learning strategy with the requirements of the task; the student's ability to adapt his strategy; task restrictions; and the student's view of learning.…
Toward the Pursuit of Independence and Happiness.
ERIC Educational Resources Information Center
Lee, Benjamin
1995-01-01
A speech presented to Navajo high school seniors on the eve of graduation challenges students to take advantage of educational opportunities, to set goals, and to continually strive to achieve their plans. Discusses the speaker's own experiences as a disabled Native American pursuing a career in computers and achieving independence. (LP)
Maximizing efficiency on trauma surgeon rounds.
Ramaniuk, Aliaksandr; Dickson, Barbara J; Mahoney, Sean; O'Mara, Michael S
2017-01-01
Rounding by trauma surgeons is a complex multidisciplinary team-based process in the inpatient setting. Implementation of lean methodology aims to increase understanding of the value stream and eliminate nonvalue-added (NVA) components. We hypothesized that analysis of trauma rounds with education and intervention would improve surgeon efficacy. Level 1 trauma center with 4300 admissions per year. Average non-intensive care unit census was 55. Five full-time attending trauma surgeons were evaluated. Value-added (VA) and NVA components of rounding were identified. The components of each patient interaction during daily rounds were documented. Summary data were presented to the surgeons. An action plan of improvement was provided at group and individual interventions. Change plans were presented to the multidisciplinary team. Data were recollected 6 mo after intervention. The percent of interactions with NVA components decreased (16.0% to 10.7%, P = 0.0001). There was no change between the two periods in time of evaluation of individual patients (4.0 and 3.5 min, P = 0.43). Overall time to complete rounds did not change. There was a reduction in the number of interactions containing NVA components (odds ratio = 2.5). The trauma surgeons were able to reduce the NVA components of rounds. We did not see a decrease in rounding time or individual patient time. This implies that surgeons were able to reinvest freed time into patient care, or that the NVA components were somehow not increasing process time. Direct intervention for isolated improvements can be effective in the rounding process, and efforts should be focused upon improving the value of time spent rather than reducing time invested. Copyright © 2016 Elsevier Inc. All rights reserved.
Building hospital TQM teams: effective polarity analysis and maximization.
Hurst, J B
1996-09-01
Building and maintaining teams require careful attention to and maximization of such polar opposites ("polarities") as individual and team, directive and participatory leadership, task and process, and stability and change. Analyzing systematic elements of any polarity and listing blocks, supports, and flexible ways to maximize it will prevent the negative consequences that occur when treating a polarity like a solvable problem. Flexible, well-timed shifts from pole to pole result in the maximization of upside and minimization of downside consequences.
On the Ribosomal Density that Maximizes Protein Translation Rate
Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2016-01-01
During mRNA translation, several ribosomes attach to the same mRNA molecule simultaneously translating it into a protein. This pipelining increases the protein translation rate. A natural and important question is what ribosomal density maximizes the protein translation rate. Using mathematical models of ribosome flow along both linear and circular mRNA molecules we prove that typically the steady-state protein translation rate is maximized when the ribosomal density is one half of the maximal possible density. We discuss the implications of our results for endogenous genes under natural cellular conditions and also for synthetic biology. PMID:27861564
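The intuition behind the half-density result can be illustrated with a toy exclusion-process argument (this is a sketch, not the paper's full model): the current past a site is proportional to the chance the site is occupied times the chance the next site is free, J(ρ) = λρ(1 − ρ), which peaks at ρ = 1/2.

```python
# Toy illustration (not the paper's full model): if the steady-state
# current past a site scales as J(rho) = lam * rho * (1 - rho) --
# an occupied site times a free site ahead -- then J is maximized at
# rho = 1/2, i.e. half the maximal density.

def current(rho, lam=1.0):
    return lam * rho * (1.0 - rho)

# Scan densities and locate the maximizer numerically.
rhos = [i / 1000.0 for i in range(1001)]
best = max(rhos, key=current)
print(best)  # 0.5
```

The full models in the paper deal with inhomogeneous rates along the chain, but the same trade-off between occupancy and congestion drives the result.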
Opportunities to maximize value with integrated palliative care
Bergman, Jonathan; Laviana, Aaron A
2016-01-01
Palliative care involves aggressively addressing and treating psychosocial, spiritual, religious, and family concerns, as well as considering the overall psychosocial structures supporting a patient. The concept of integrated palliative care removes the either/or decision a patient needs to make: they need not decide if they want either aggressive chemotherapy from their oncologist or symptom-guided palliative care but rather they can be comanaged by several clinicians, including a palliative care clinician, to maximize the benefit to them. One common misconception about palliative care, and supportive care in general, is that it amounts to “doing nothing” or “giving up” on aggressive treatments for patients. Rather, palliative care involves very aggressive care, targeted at patient symptoms, quality-of-life, psychosocial needs, family needs, and others. Integrating palliative care into the care plan for individuals with advanced diseases does not necessarily imply that a patient must forego other treatment options, including those aimed at a cure, prolonging of life, or palliation. Implementing interventions to understand patient preferences and to ensure those preferences are addressed, including preferences related to palliative and supportive care, is vital in improving the patient-centeredness and value of surgical care. Given our aging population and the disproportionate cost of end-of-life care, this holds great hope in bending the cost curve of health care spending, ensuring patient-centeredness, and improving quality and value of care. Level 1 evidence supports this model, and it has been achieved in several settings; the next necessary step is to disseminate such models more broadly. PMID:27226721
Maximizing children's physical activity using the LET US Play principles.
Brazendale, Keith; Chandler, Jessica L; Beets, Michael W; Weaver, Robert G; Beighle, Aaron; Huberty, Jennifer L; Moore, Justin B
2015-07-01
Staff in settings that care for children struggle to implement standards designed to promote moderate-to-vigorous physical activity (MVPA), suggesting a need for effective strategies to maximize the amount of time children spend in MVPA during scheduled PA opportunities. The purpose of this study was to compare the MVPA children accumulate during commonly played games delivered in their traditional format versus games modified according to the LET US Play principles. Children (K-5th) participated in one-hour PA sessions delivered on non-consecutive days (summer 2014). Using a randomized, counterbalanced design, one of the six games was played for 20 min using either traditional rules or LET US Play, followed by the other strategy with a 10 min break in between. Physical activity was measured via accelerometry. Repeated-measures, mixed-effects regression models were used to estimate differences in percent of time spent sedentary and in MVPA. A total of 267 children (mean age 7.5 years, 43% female, 29% African American) participated in 50 one-hour activity sessions. Games incorporating LET US Play elicited more MVPA from both boys and girls compared to the same games with traditional rules. For boys and girls, the largest MVPA difference occurred during tag games (+20.3%). The largest reduction in the percent of time sedentary occurred during tag games (boys -27.7%, girls -32.4%). Overall, the percentage of children meeting 50% time in MVPA increased in four games (+18.7% to +53.1%). LET US Play led to greater accumulation of MVPA for boys and girls, and can increase the percent of children attaining the 50% of time in MVPA standard. Copyright © 2015 Elsevier Inc. All rights reserved.
Maximizing information exchange between complex networks
NASA Astrophysics Data System (ADS)
West, Bruce J.; Geneston, Elvis L.; Grigolini, Paolo
2008-10-01
Science is not merely the smooth progressive interaction of hypothesis, experiment and theory, although it sometimes has that form. More realistically the scientific study of any given complex phenomenon generates a number of explanations, from a variety of perspectives, that eventually requires synthesis to achieve a deep level of insight and understanding. One such synthesis has created the field of out-of-equilibrium statistical physics as applied to the understanding of complex dynamic networks. Over the past forty years the concept of complexity has undergone a metamorphosis. Complexity was originally seen as a consequence of memory in individual particle trajectories, in full agreement with a Hamiltonian picture of microscopic dynamics and, in principle, macroscopic dynamics could be derived from the microscopic Hamiltonian picture. The main difficulty in deriving macroscopic dynamics from microscopic dynamics is the need to take into account the actions of a very large number of components. The existence of events such as abrupt jumps, considered by the conventional continuous time random walk approach to describing complexity was never perceived as conflicting with the Hamiltonian view. Herein we review many of the reasons why this traditional Hamiltonian view of complexity is unsatisfactory. We show that as a result of technological advances, which make the observation of single elementary events possible, the definition of complexity has shifted from the conventional memory concept towards the action of non-Poisson renewal events. We show that the observation of crucial processes, such as the intermittent fluorescence of blinking quantum dots as well as the brain’s response to music, as monitored by a set of electrodes attached to the scalp, has forced investigators to go beyond the traditional concept of complexity and to establish closer contact with the nascent field of complex networks. Complex networks form one of the most challenging areas of
Independent Education in Bulgaria.
ERIC Educational Resources Information Center
Mason, Peter
The tradition of academic excellence in the arts, culture, mathematics, and science in Bulgaria that was set aside under communism remains the goal of the Bulgarian government with legislation designed to replace the ideological doctrine subordinating education under a totalitarian regime and to restore Bulgaria's historical tradition. The new law…
Carnot cycle at finite power: attainability of maximal efficiency.
Allahverdyan, Armen E; Hovhannisyan, Karen V; Melkikh, Alexey V; Gevorkian, Sasun G
2013-08-02
We want to understand whether and to what extent the maximal (Carnot) efficiency for heat engines can be reached at a finite power. To this end we generalize the Carnot cycle so that it is not restricted to slow processes. We show that for realistic (i.e., not purposefully designed) engine-bath interactions, the work-optimal engine performing the generalized cycle close to the maximal efficiency has a long cycle time and hence vanishing power. This aspect is shown to relate to the theory of computational complexity. A physical manifestation of the same effect is Levinthal's paradox in the protein folding problem. The resolution of this paradox for realistic proteins allows one to construct engines that can extract, at finite power, 40% of the maximally possible work while reaching 90% of the maximal efficiency. For purposefully designed engine-bath interactions, the Carnot efficiency is achievable at a large power.
Morning vs. evening maximal cycle power and technical swimming ability.
Deschodt, Veronique J; Arsac, Laurent M
2004-02-01
The aim of this study was to observe diurnal influences on maximal power and technical swimming ability at three different times (8 AM, 1 PM, and 6 PM). Prior to each test, tympanic temperature was taken. Maximal power was analyzed by cycle tests. Stroke length, stroke rate, hand pattern, and swimming velocity were recorded between the 20th and the 28th m of the 50-m freestyle. Temperature varied ±0.4 °C between morning and evening. Concomitantly, maximal power (+7%) and technical ability (+3% in stroke length, +5% in stroke rate, and changes in underwater hand coordinates) were greater in the evening. The present study confirms and specifies diurnal influences on all-out performances with regard to both maximal power and technical ability. Thus, when swimmers are called upon to perform at a high level in the morning, they should warm up extensively in order to "swamp" the diurnal effects of the morning.
Optimizing Air Force Depot Programming to Maximize Operational Capability
2014-03-27
[Report front matter; only table-of-contents fragments survive extraction: LINGO Component..., LINGO Code with Notional Data by Model, RAND Formulation to Maximize Operational Capability, Minimize Cost, Appendix B - Final LINGO Code by Model.]
Interpreting Negative Results in an Angle Maximization Problem.
ERIC Educational Resources Information Center
Duncan, David R.; Litwiller, Bonnie H.
1995-01-01
Presents a situation in which differential calculus is used with inverse trigonometric tangent functions to maximize an angle measure. A negative distance measure ultimately results, requiring a reconsideration of assumptions inherent in the initial figure. (Author/MKR)
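A classic instance of this kind of problem (illustrative here; not necessarily the article's exact figure) is maximizing a viewing angle: a picture hangs with its bottom a and its top b above eye level, and the viewing angle at horizontal distance x is θ(x) = arctan(b/x) − arctan(a/x). Setting dθ/dx = 0 gives x = √(ab), which a numerical scan confirms.

```python
import math

# Angle-maximization example with inverse tangent functions (an
# illustrative classic, not necessarily the authors' configuration):
# theta(x) = atan(b/x) - atan(a/x) is maximized at x = sqrt(a*b).

def viewing_angle(x, a=1.0, b=4.0):
    return math.atan(b / x) - math.atan(a / x)

xs = [i / 1000.0 for i in range(1, 10001)]  # 0.001 .. 10.0
x_star = max(xs, key=viewing_angle)
print(x_star, math.sqrt(1.0 * 4.0))  # both close to 2.0
```

Plugging a negative candidate distance into such a formula, as the article discusses, forces one to revisit the geometric assumptions in the original figure.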
Cliqueing--A Technique for Producing Maximally Connected Clusters
ERIC Educational Resources Information Center
Gerson, Gordon M.
1978-01-01
Explains a technique whereby a large data base may be automatically classified into maximally connected clusters called cliques. The data base used is a section of United States patents. (Author/ MBR)
Maximal Stationary Iterative Methods for the Solution of Operator Equations,
In the m-dimensional case, 2 ≤ m ≤ +∞, the author proves that interpolatory iteration is maximal for n = 0 in the class of iterations using values of the first s derivatives at n previous points. (Author)
Maximal slicing of D-dimensional spherically symmetric vacuum spacetime
Nakao, Ken-ichi; Abe, Hiroyuki; Yoshino, Hirotaka; Shibata, Masaru
2009-10-15
We study the foliation of a D-dimensional spherically symmetric black-hole spacetime with D ≥ 5 by two kinds of one-parameter families of maximal hypersurfaces: a reflection-symmetric foliation with respect to the wormhole slot and a stationary foliation that has an infinitely long trumpetlike shape. As in the four-dimensional case, the foliations by the maximal hypersurfaces avoid the singularity irrespective of the dimensionality. This indicates that the maximal slicing condition will be useful for simulating higher-dimensional black-hole spacetimes in numerical relativity. For the case of D=5, we present analytic solutions of the intrinsic metric, the extrinsic curvature, the lapse function, and the shift vector for the foliation by the stationary maximal hypersurfaces. These data will be useful for checking five-dimensional numerical-relativity codes based on the moving puncture approach.
Bayesian k-Means as a "maximization-expectation" algorithm.
Kurihara, Kenichi; Welling, Max
2009-04-01
We introduce a new class of "maximization-expectation" (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.
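The role reversal can be sketched in one dimension: hard-assign each point (maximization over the hidden variables) while integrating the cluster means out under a conjugate Gaussian prior (marginalization over parameters). The variances, prior, and data below are illustrative assumptions, not the paper's kd-tree implementation.

```python
import math

# Sketch of the "maximization-expectation" idea in 1-D: maximize over
# hidden assignments while marginalizing out the cluster means under a
# conjugate Gaussian prior (all variances/priors here are illustrative).
# Classical EM does the opposite: it averages over assignments and
# maximizes over parameters.

SIGMA2 = 1.0   # known observation variance (assumption)
TAU2 = 100.0   # prior variance of each cluster mean (assumption)

def log_predictive(x, cluster):
    """Log density of x under a cluster's posterior predictive,
    with the cluster mean integrated out: N(mu_n, s2_n + SIGMA2)."""
    n = len(cluster)
    prec = 1.0 / TAU2 + n / SIGMA2          # posterior precision of the mean
    mu_n = (sum(cluster) / SIGMA2) / prec   # posterior mean (prior mean 0)
    var = 1.0 / prec + SIGMA2               # predictive variance
    return -0.5 * math.log(2 * math.pi * var) - (x - mu_n) ** 2 / (2 * var)

def me_cluster(data, k, iters=20):
    assign = [i % k for i in range(len(data))]   # arbitrary start
    for _ in range(iters):
        for i, x in enumerate(data):
            # Hard "M-step" over hidden variables: assign x to the
            # cluster maximizing its marginal predictive density,
            # holding the other points' assignments fixed.
            others = [[data[j] for j in range(len(data))
                       if j != i and assign[j] == c] for c in range(k)]
            assign[i] = max(range(k), key=lambda c: log_predictive(x, others[c]))
    return assign

data = [0.0, 0.2, -0.1, 10.0, 9.8, 10.3]
print(me_cluster(data, 2))
```

Because the parameters are marginalized rather than point-estimated, model-structure quantities such as the number of clusters remain accessible, which is the property the paper exploits.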
Grammatical complexity of strange sets
NASA Astrophysics Data System (ADS)
Auerbach, Ditza; Procaccia, Itamar
1990-06-01
Chaotic dynamical systems can be organized around an underlying strange set, which is comprised of all the unstable periodic orbits. In this paper, we quantify the complexity of such an organization; this complexity addresses the difficulty of predicting the structure of the strange set from low-order data and is independent of the entropy and the algorithmic complexity. We refer to the new measure as the grammatical complexity. The notion is introduced, discussed, and illustrated in the context of simple dynamical systems. In addition, the grammatical complexity is generalized to include metric properties arising due to the nonuniform distribution of the invariant measure on the strange set.
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
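The compute-then-augment idea can be illustrated with a simplified sequential sketch (this is not the paper's parallel algorithm): seed with a spanning forest, which is trivially chordal, then sweep the leftover edges repeatedly, keeping any edge whose addition preserves chordality, until a full sweep adds nothing. Chordality is checked with the standard maximum-cardinality-search test.

```python
# Simplified sequential sketch of the augmentation idea (not the
# paper's parallel algorithm): seed with a spanning forest, then
# repeatedly add edges that keep the subgraph chordal, using an
# MCS-based chordality test.

def is_chordal(n, adj):
    """Chordality test: an MCS visit order, reversed, must be a
    perfect elimination ordering."""
    weight = [0] * n
    seen = [False] * n
    order = []
    for _ in range(n):
        v = max((u for u in range(n) if not seen[u]), key=lambda u: weight[u])
        seen[v] = True
        order.append(v)
        for u in adj[v]:
            if not seen[u]:
                weight[u] += 1
    pos = {v: i for i, v in enumerate(reversed(order))}
    for v in range(n):
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if later:
            u0 = min(later, key=lambda u: pos[u])
            if any(w != u0 and w not in adj[u0] for w in later):
                return False
    return True

def maximal_chordal_subgraph(n, edges):
    adj = [set() for _ in range(n)]
    kept, rest = [], []
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Seed with a spanning forest: every forest is chordal.
    for (u, v) in edges:
        if find(u) != find(v):
            parent[find(u)] = find(v)
            adj[u].add(v); adj[v].add(u)
            kept.append((u, v))
        else:
            rest.append((u, v))
    # Repeatedly sweep the leftover edges; stop when a sweep adds nothing,
    # which certifies that the kept subgraph is maximal.
    changed = True
    while changed:
        changed = False
        still = []
        for (u, v) in rest:
            adj[u].add(v); adj[v].add(u)
            if is_chordal(n, adj):
                kept.append((u, v))
                changed = True
            else:
                adj[u].remove(v); adj[v].remove(u)
                still.append((u, v))
        rest = still
    return kept

# A 4-cycle plus one chord is itself chordal, so all 5 edges survive.
print(len(maximal_chordal_subgraph(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])))  # 5
```

The repeated sweep matters: an edge rejected early (it would close a chordless 4-cycle) can become addable after a later edge supplies the chord, which is exactly why a single greedy pass is not guaranteed to yield a maximal subgraph.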
Maximally Permissive Composition of Actors in Ptolemy II
2013-03-20
Lohstroh, Marten (Electrical Engineering and Computer Sciences, University of California at...). This report addresses the problem of handling dynamic data in the statically typed, actor-oriented modeling environment called Ptolemy II. It explores the possibilities
[Maximal anaerobic capacity of man in a modified Wingate test].
Ushakov, B B; Chelnokova, E V
1992-01-01
We studied the possibility of using a 380B Siemens-Elema (Sweden) bicycle ergometer to determine the maximal anaerobic capacity of healthy subjects during a modified Wingate test. Exercise was performed under stable moment conditions, with calculation of braking resistance on the basis of the subjects' lean body mass. The values of total work performed and maximal power may be used for comparative evaluation of physical work capacity in participants of training and rehabilitation programs.
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
Effects of repeated bouts of squatting exercise on sub-maximal endurance running performance.
Burt, Dean; Lamb, Kevin; Nicholas, Ceri; Twist, Craig
2013-02-01
It is well established that exercise-induced muscle damage (EIMD) has a detrimental effect on endurance exercise performed in the days that follow. However, it is unknown whether such effects remain after a repeated bout of EIMD. Therefore, the purpose of this study was to examine the effects of repeated bouts of muscle-damaging exercise on sub-maximal running exercise. Nine male participants completed baseline measurements associated with a sub-maximal running bout at lactate turn point. These measurements were repeated 24-48 h after EIMD, comprising 100 squats (10 sets of 10 at 80% body mass). Two weeks later, when symptoms from the first bout of EIMD had dissipated, all procedures performed at baseline were repeated. Results revealed significant increases in muscle soreness and creatine kinase activity and decreases in peak knee extensor torque and vertical jump performance at 24-48 h after the initial bout of EIMD. However, after the repeated bout, symptoms of EIMD were reduced from baseline at 24-48 h. Significant increases in oxygen uptake (V̇O2), minute ventilation (V̇E), blood lactate ([BLa]), rating of perceived exertion (RPE), and stride frequency and decreases in stride length were observed during sub-maximal running at 24-48 h following the initial bout of EIMD. However, following the repeated bout of EIMD, V̇O2, V̇E, [BLa], RPE and stride pattern responses during sub-maximal running remained unchanged from baseline at all time points. These findings confirm that a single resistance session protects skeletal muscle against the detrimental effects of EIMD on sub-maximal running endurance exercise.
Poker, Gilad; Zarai, Yoram; Margaliot, Michael; Tuller, Tamir
2014-01-01
Translation is an important stage in gene expression. During this stage, macro-molecules called ribosomes travel along the mRNA strand linking amino acids together in a specific order to create a functioning protein. An important question, related to many biomedical disciplines, is how to maximize protein production. Indeed, translation is known to be one of the most energy-consuming processes in the cell, and it is natural to assume that evolution shaped this process so that it maximizes the protein production rate. If this is indeed so then one can estimate various parameters of the translation machinery by solving an appropriate mathematical optimization problem. The same problem also arises in the context of synthetic biology, namely, how to re-engineer heterologous genes in order to maximize their translation rate in a host organism. We consider the problem of maximizing the protein production rate using a computational model for translation–elongation called the ribosome flow model (RFM). This model describes the flow of the ribosomes along an mRNA chain of length n using a set of n first-order nonlinear ordinary differential equations. It also includes n + 1 positive parameters: the ribosomal initiation rate into the mRNA chain, and n elongation rates along the chain sites. We show that the steady-state translation rate in the RFM is a strictly concave function of its parameters. This means that the problem of maximizing the translation rate under a suitable constraint always admits a unique solution, and that this solution can be determined using highly efficient algorithms for solving convex optimization problems even for large values of n. Furthermore, our analysis shows that the optimal translation rate can be computed based only on the optimal initiation rate and the elongation rate of the codons near the beginning of the ORF. We discuss some applications of the theoretical results to synthetic biology, molecular evolution, and functional genomics. PMID
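The RFM's n coupled first-order ODEs can be integrated numerically to read off the steady-state production rate. The sketch below uses forward Euler on a small homogeneous chain with illustrative rates; for all rates equal to 1 and n = 3 the densities settle at (2/3, 1/2, 1/3) and the rate at 1/3, which the simulation reproduces.

```python
# Minimal numerical sketch of the ribosome flow model (RFM): n sites,
# occupancies x_i in [0, 1], initiation rate lam[0], elongation rates
# lam[1..n]. Rates here are illustrative. Integrate to steady state
# with forward Euler and read off the production rate lam[n] * x_n.

def rfm_steady_state(lam, n, dt=0.01, steps=100000):
    x = [0.0] * n
    for _ in range(steps):
        dx = [0.0] * n
        for i in range(n):
            inflow = lam[0] * (1 - x[0]) if i == 0 else lam[i] * x[i - 1] * (1 - x[i])
            outflow = lam[n] * x[n - 1] if i == n - 1 else lam[i + 1] * x[i] * (1 - x[i + 1])
            dx[i] = inflow - outflow
        x = [x[i] + dt * dx[i] for i in range(n)]
    return x, lam[n] * x[-1]

lam = [1.0, 1.0, 1.0, 1.0]          # initiation + 3 elongation rates
x, rate = rfm_steady_state(lam, 3)
# densities approach [2/3, 1/2, 1/3]; rate approaches 1/3
print([round(v, 3) for v in x], round(rate, 3))
```

The paper's concavity result means that maximizing this steady-state rate over the n + 1 rates, under a resource constraint, is a convex optimization problem with a unique solution.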
Oxygenation using tidal volume breathing after maximal exhalation.
Baraka, Anis S; Taha, Samar K; El-Khatib, Mohamad F; Massouh, Faraj M; Jabbour, Dima G; Alameddine, Mahmoud M
2003-11-01
We compared, in volunteers, the oxygenation achieved by tidal volume breathing (TVB) over a 3-min period after maximal exhalation with that achieved by TVB alone. Twenty-three healthy volunteers underwent the two breathing techniques in a randomized order. A circle absorber system with an oxygen flow of 10 L/min was used. The end-expiratory oxygen concentration (EEO2) was monitored at 15-s intervals up to 3 min. TVB after maximal exhalation produced EEO2 values of 68% ± 5%, 75% ± 5%, and 79% ± 4% at 30, 45, and 60 s, respectively, which were significantly larger (P < 0.05) than the corresponding values obtained with TVB alone (58% ± 5%, 66% ± 6%, and 71% ± 5%, respectively). In both techniques, the EEO2 increased exponentially, with time constants of 35 s during TVB after maximal exhalation versus 58 s during TVB without prior maximal exhalation. In conclusion, maximal exhalation before TVB can hasten preoxygenation by decreasing the nitrogen content of the functional residual capacity, with a consequent increase of EEO2 to approximately 70% in 30 s and 80% in 60 s. Oxygenation using maximal exhalation before tidal volume breathing produced a significantly faster increase in end-expiratory oxygen concentration than oxygenation with tidal volume breathing alone.
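The reported time constants imply the usual exponential wash-in: after time t, the fraction of the total EEO2 rise achieved is 1 − e^(−t/τ). A small sketch with the study's τ values (35 s vs. 58 s; the computation itself is just the standard first-order model, not taken from the paper) shows why prior maximal exhalation equilibrates faster.

```python
import math

# Exponential wash-in sketch for preoxygenation: the fraction of the
# final end-expiratory O2 rise achieved after time t with time
# constant tau is 1 - exp(-t / tau). The tau values are the reported
# 35 s (with prior maximal exhalation) and 58 s (without).

def fraction_equilibrated(t, tau):
    return 1.0 - math.exp(-t / tau)

for t in (30, 45, 60):
    print(t, round(fraction_equilibrated(t, 35), 2),
             round(fraction_equilibrated(t, 58), 2))
```

At 30 s the faster technique has completed roughly 58% of its rise versus about 40% for TVB alone, consistent with the measured EEO2 gap.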
Cary Potter on Independent Education
ERIC Educational Resources Information Center
Potter, Cary
1978-01-01
Cary Potter was President of the National Association of Independent Schools from 1964-1978. As he leaves NAIS he gives his views on education, on independence, on the independent school, on public responsibility, on choice in a free society, on educational change, and on the need for collective action by independent schools. (Author/RK)
Sensation Seeking and Field Independence.
ERIC Educational Resources Information Center
Baker, A. Harvey
1988-01-01
Examined relationship between sensation seeking and field independence using three indices of field independence and combining the data in an Overall Index. The three field independence instruments and the Sensation Seeking Scale were administered to 103 college students. Positive relationship between sensation seeking and field independence was…
Myth or Truth: Independence Day.
ERIC Educational Resources Information Center
Gardner, Traci
Most Americans think of the Fourth of July as Independence Day, but is it really the day the U.S. declared and celebrated independence? By exploring myths and truths surrounding Independence Day, this lesson asks students to think critically about commonly believed stories regarding the beginning of the Revolutionary War and the Independence Day…