Sample records for linear combination rule

  1. Intelligent Distributed Systems

    DTIC Science & Technology

    2015-10-23

    periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as
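
    Below is a minimal sketch (not the paper's algorithm) of a periodic gossip sweep with a convex-combination rule: when nodes i and j gossip, each moves toward the other's value by a weight w, with w = 0.5 recovering standard averaging. The ring size, edge ordering, and weight are placeholders.

    ```python
    import numpy as np

    def gossip_period(x, edges, w=0.5):
        """One period of pairwise gossips with convex-combination weight w."""
        x = x.copy()
        for i, j in edges:
            xi, xj = x[i], x[j]
            x[i] = (1 - w) * xi + w * xj   # each pair's sum (hence the global
            x[j] = (1 - w) * xj + w * xi   # average) is conserved for any w
        return x

    n = 6
    ring = [(i, (i + 1) % n) for i in range(n)]   # one gossip per ring edge
    x = np.random.rand(n)
    mean = x.mean()
    for _ in range(200):
        x = gossip_period(x, ring, w=0.5)
    print(np.allclose(x, mean))                    # consensus on the average
    ```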

  2. Application of local linearization and the transonic equivalence rule to the flow about slender analytic bodies at Mach numbers near 1.0

    NASA Technical Reports Server (NTRS)

    Tyson, R. W.; Muraca, R. J.

    1975-01-01

    The local linearization method for axisymmetric flow is combined with the transonic equivalence rule to calculate pressure distribution on slender bodies at free-stream Mach numbers from 0.8 to 1.2. This is an approximate solution to the transonic flow problem which yields results applicable during the preliminary design stages of a configuration development. The method can be used to determine the aerodynamic loads on parabolic arc bodies having either circular or elliptical cross sections. It is particularly useful in predicting pressure distributions and normal force distributions along the body at small angles of attack. The equations discussed may be extended to include wing-body combinations.

  3. Simultaneous Optimization of Decisions Using a Linear Utility Function.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1990-01-01

    An approach is presented to simultaneously optimize decision rules for combinations of elementary decisions through a framework derived from Bayesian decision theory. The developed linear utility model for selection-mastery decisions was applied to a sample of 43 first year medical students to illustrate the procedure. (SLD)

  4. Hyper-heuristic Evolution of Dispatching Rules: A Comparison of Rule Representations.

    PubMed

    Branke, Jürgen; Hildebrandt, Torsten; Scholz-Reiter, Bernd

    2015-01-01

    Dispatching rules are frequently used for real-time, online scheduling in complex manufacturing systems. Design of such rules is usually done by experts in a time-consuming trial-and-error process. Recently, evolutionary algorithms have been proposed to automate the design process. There are several possibilities to represent rules for this hyper-heuristic search. Because the representation determines the search neighborhood and the complexity of the rules that can be evolved, a suitable choice of representation is key for a successful evolutionary algorithm. In this paper, we empirically compare three different representations, both numeric and symbolic, for automated rule design: a linear combination of attributes, a representation based on artificial neural networks, and a tree representation. Using appropriate evolutionary algorithms (CMA-ES for the neural network and linear representations, genetic programming for the tree representation), we empirically investigate the suitability of each representation in a dynamic stochastic job shop scenario. We also examine the robustness of the evolved dispatching rules against variations in the underlying job shop scenario, and visualize what the rules do, in order to get an intuitive understanding of their inner workings. Results indicate that the tree representation using an improved version of genetic programming gives the best results if many candidate rules can be evaluated, closely followed by the neural network representation that already leads to good results for small to moderate computational budgets. The linear representation is found to be competitive only for extremely small computational budgets.

  5. A Bayesian model averaging method for the derivation of reservoir operating rules

    NASA Astrophysics Data System (ADS)

    Zhang, Jingwen; Liu, Pan; Wang, Hao; Lei, Xiaohui; Zhou, Yanlai

    2015-09-01

    Because the intrinsic dynamics among optimal decision making, inflow processes and reservoir characteristics are complex, functional forms of reservoir operating rules are always determined subjectively. As a result, the uncertainty involved in selecting the form and/or model of reservoir operating rules must be analyzed and evaluated. In this study, we analyze the uncertainty of reservoir operating rules using the Bayesian model averaging (BMA) model. Three popular operating rules, namely piecewise linear regression, surface fitting and a least-squares support vector machine, are established based on the optimal deterministic reservoir operation. These individual models provide three-member decisions for the BMA combination, enabling the 90% release interval to be estimated by Markov chain Monte Carlo simulation. A case study of the Baise reservoir in China shows that: (1) the optimal deterministic reservoir operation, superior to any reservoir operating rules, is used as the sample set from which the rules are derived; (2) the least-squares support vector machine model is more effective than both piecewise linear regression and surface fitting; (3) BMA outperforms any individual model of operating rules based on the optimal trajectories. It is revealed that the proposed model can reduce the uncertainty of operating rules, which is of great potential benefit in evaluating the confidence interval of decisions.
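
    As a hedged illustration of the combination step, the sketch below mixes three rule models' release predictions with assumed posterior weights and reads a 90% release interval off the mixture by sampling; the paper's individual models and its MCMC procedure are not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.array([0.2, 0.5, 0.3])          # assumed posterior model weights
    mu = np.array([420.0, 450.0, 440.0])   # each model's predicted release
    sigma = np.array([25.0, 15.0, 20.0])   # each model's error std (assumed normal)

    # BMA point forecast: weighted mean of the member predictions
    print("BMA mean release:", w @ mu)

    # Sample the mixture to estimate the 90% release interval
    k = rng.choice(3, size=100_000, p=w)
    draws = rng.normal(mu[k], sigma[k])
    print("90% interval:", np.percentile(draws, [5, 95]))
    ```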

  6. A system for aerodynamic design and analysis of supersonic aircraft. Part 4: Test cases

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1980-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Representative test cases and associated program output are presented.

  7. Two heads are better than one, but how much? Evidence that people's use of causal integration rules does not always conform to normative standards.

    PubMed

    Vadillo, Miguel A; Ortega-Castro, Nerea; Barberia, Itxaso; Baker, A G

    2014-01-01

    Many theories of causal learning and causal induction differ in their assumptions about how people combine the causal impact of several causes presented in compound. Some theories propose that when several causes are present, their joint causal impact is equal to the linear sum of the individual impact of each cause. However, some recent theories propose that the causal impact of several causes needs to be combined by means of a noisy-OR integration rule. In other words, the probability of the effect given several causes would be equal to the sum of the probability of the effect given each cause in isolation minus the overlap between those probabilities. In the present series of experiments, participants were given information about the causal impact of several causes and then they were asked what compounds of those causes they would prefer to use if they wanted to produce the effect. The results of these experiments suggest that participants actually use a variety of strategies, including not only the linear and the noisy-OR integration rules, but also averaging the impact of several causes.
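
    The two integration rules contrasted here are easy to state concretely; in the snippet below, p1 and p2 are illustrative probabilities of the effect given each cause alone.

    ```python
    p1, p2 = 0.6, 0.7

    linear_sum = p1 + p2             # linear rule (can exceed 1)
    noisy_or   = p1 + p2 - p1 * p2   # noisy-OR: subtract the overlap
    average    = (p1 + p2) / 2       # averaging, also observed in the data

    print(linear_sum, noisy_or, average)   # 1.3, 0.88, 0.65
    ```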

  8. Inter-synaptic learning of combination rules in a cortical network model

    PubMed Central

    Lavigne, Frédéric; Avnaïm, Francis; Dumercy, Laurent

    2014-01-01

    Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. The IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli triggers XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network. PMID:25221529

  9. Constructing Compact Takagi-Sugeno Rule Systems: Identification of Complex Interactions in Epidemiological Data

    PubMed Central

    Zhou, Shang-Ming; Lyons, Ronan A.; Brophy, Sinead; Gravenor, Mike B.

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However, the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data. PMID:23272108
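
    A minimal zero-order Takagi-Sugeno system conveys the rule mechanics: Gaussian antecedent memberships, a constant consequent per rule, and a firing-strength-weighted output. The rule parameters below are invented, and the paper's R, L, and ω ranking statistics are not reproduced.

    ```python
    import numpy as np

    def ts_predict(x, centers, widths, consequents):
        # firing strength of each rule = product of its antecedent memberships
        memb = np.exp(-((x[None, :] - centers) ** 2) / (2 * widths ** 2))
        strength = memb.prod(axis=1)
        return strength @ consequents / strength.sum()

    centers = np.array([[0.2, 0.3], [0.8, 0.7]])   # two rules, two inputs
    widths = np.full((2, 2), 0.3)
    consequents = np.array([1.0, 5.0])             # "then y = c_r"
    print(ts_predict(np.array([0.25, 0.35]), centers, widths, consequents))
    ```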

  10. Constructing compact Takagi-Sugeno rule systems: identification of complex interactions in epidemiological data.

    PubMed

    Zhou, Shang-Ming; Lyons, Ronan A; Brophy, Sinead; Gravenor, Mike B

    2012-01-01

    The Takagi-Sugeno (TS) fuzzy rule system is a widely used data mining technique, and is of particular use in the identification of non-linear interactions between variables. However, the number of rules increases dramatically when applied to high dimensional data sets (the curse of dimensionality). Few robust methods are available to identify important rules while removing redundant ones, and this results in limited applicability in fields such as epidemiology or bioinformatics where the interaction of many variables must be considered. Here, we develop a new parsimonious TS rule system. We propose three statistics: R, L, and ω-values, to rank the importance of each TS rule, and a forward selection procedure to construct a final model. We use our method to predict how key components of childhood deprivation combine to influence educational achievement outcome. We show that a parsimonious TS model can be constructed, based on a small subset of rules, that provides an accurate description of the relationship between deprivation indices and educational outcomes. The selected rules shed light on the synergistic relationships between the variables, and reveal that the effect of targeting specific domains of deprivation is crucially dependent on the state of the other domains. Policy decisions need to incorporate these interactions, and deprivation indices should not be considered in isolation. The TS rule system provides a basis for such decision making, and has wide applicability for the identification of non-linear interactions in complex biomedical data.

  11. A computational system for aerodynamic design and analysis of supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1976-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. Schematics of the program structure and the individual overlays and subroutines are described.

  12. Aerodynamic design and analysis system for supersonic aircraft. Part 1: General description and theoretical development

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.

    1975-01-01

    An integrated system of computer programs has been developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This part presents a general description of the system and describes the theoretical methods used.

  13. A computational system for aerodynamic design and analysis of supersonic aircraft. Part 2: User's manual

    NASA Technical Reports Server (NTRS)

    Middleton, W. D.; Lundry, J. L.; Coleman, R. G.

    1976-01-01

    An integrated system of computer programs was developed for the design and analysis of supersonic configurations. The system uses linearized theory methods for the calculation of surface pressures and supersonic area rule concepts in combination with linearized theory for calculation of aerodynamic force coefficients. Interactive graphics are optional at the user's request. This user's manual contains a description of the system, an explanation of its usage, the input definition, and example output.

  14. Calculative techniques for transonic flows about certain classes of wing body combinations

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1972-01-01

    Procedures based on the method of local linearization and transonic equivalence rule were developed for predicting properties of transonic flows about certain classes of wing-body combinations. The procedures are applicable to transonic flows with free stream Mach number in the ranges near one, below the lower critical and above the upper critical. Theoretical results are presented for surface and flow field pressure distributions for both lifting and nonlifting situations.

  15. Optimal Hedging Rule for Reservoir Refill Operation

    NASA Astrophysics Data System (ADS)

    Wan, W.; Zhao, J.; Lund, J. R.; Zhao, T.; Lei, X.; Wang, H.

    2015-12-01

    This paper develops an optimal reservoir Refill Hedging Rule (RHR) for combined water supply and flood operation using mathematical analysis. A two-stage model is developed to formulate the trade-off between operations for conservation benefit and flood damage in the reservoir refill season. Based on the probability distribution of the maximum refill water availability at the end of the second stage, three zones are characterized according to the relationship among storage capacity, expected storage buffer (ESB), and maximum safety excess discharge (MSED). The Karush-Kuhn-Tucker conditions of the model show that the optimality of the refill operation involves making the expected marginal loss of conservation benefit from unfilling (i.e., ending storage of refill period less than storage capacity) as nearly equal to the expected marginal flood damage from levee overtopping downstream as possible while maintaining all constraints. This principle follows and combines the hedging rules for water supply and flood management. A RHR curve is drawn analogously to water supply hedging and flood hedging rules, showing the trade-off between the two objectives. The release decision result has a linear relationship with the current water availability, implying the linearity of RHR for a wide range of water conservation functions (linear, concave, or convex). A demonstration case shows the impacts of the governing factors. Larger downstream flood conveyance capacity and empty reservoir capacity allow a smaller current release, so more water can be conserved. Economic indicators of conservation benefit and flood damage compete with each other over release: the greater the economic importance of flood damage, the more water should be released in the current stage, and vice versa. Below a critical value, improving forecasts yields less water release, but an opposing effect occurs beyond this critical value. Finally, the Danjiangkou Reservoir case study shows that the RHR together with a rolling horizon decision approach can lead to a gradual dynamic refilling, indicating its potential for practical use.
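
    The optimality principle can be rendered as a toy balancing computation: pick the release at which the expected marginal conservation loss equals the expected marginal flood damage. Both marginal curves below are invented placeholders, used only to show the balancing step.

    ```python
    from scipy.optimize import brentq

    def marginal_conservation_loss(r):      # rises as more water is released
        return 0.02 * r

    def expected_marginal_flood_damage(r):  # falls as more storage is freed
        return 3.0 / (1.0 + 0.05 * r)

    r_star = brentq(lambda r: marginal_conservation_loss(r)
                    - expected_marginal_flood_damage(r), 0.0, 500.0)
    print(r_star)   # release where the two marginal values balance
    ```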

  16. The linear combination of vectors implies the existence of the cross and dot products

    NASA Astrophysics Data System (ADS)

    Pujol, Jose

    2018-07-01

    Given two vectors u and v, their cross product u × v is a vector perpendicular to u and v. The motivation for this property, however, is never addressed. Here we show that the existence of the cross and dot products and the perpendicularity property follow from the concept of linear combination, which does not involve products of vectors. For our proof we consider the plane generated by a linear combination of u and v. When looking for the coefficients in the linear combination required to reach a desired point on the plane, the solution involves the existence of a normal vector n = u × v. Our results have a bearing on the history of vector analysis, as a product similar to the cross product but without the perpendicularity requirement existed at the same time. These competing products originate in the work of two major nineteenth-century mathematicians, W. Hamilton and H. Grassmann. These historical aspects are discussed in some detail here. We also address certain aspects of the teaching of u × v to undergraduate students, which is known to carry some difficulties. This includes the algebraic and geometric definitions of u × v, the rule for the direction of u × v, and the pseudovectorial nature of u × v.
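
    A small numeric check of the claim, assuming nothing beyond numpy: the normal n = u x v is perpendicular to u and v, coefficients of in-plane points are recovered exactly, and the least-squares residual of an off-plane point lies along n.

    ```python
    import numpy as np

    u = np.array([1.0, 2.0, 0.5])
    v = np.array([-1.0, 0.5, 2.0])
    n = np.cross(u, v)

    print(np.dot(n, u), np.dot(n, v))   # both 0: perpendicularity

    a, b = 2.0, -3.0
    p = a * u + b * v                    # a point in the plane
    coeffs, *_ = np.linalg.lstsq(np.stack([u, v], axis=1), p, rcond=None)
    print(coeffs)                        # [2., -3.]: coefficients recovered

    q = p + 4.0 * n                      # leave the plane along the normal
    coeffs_q, *_ = np.linalg.lstsq(np.stack([u, v], axis=1), q, rcond=None)
    residual = q - np.stack([u, v], axis=1) @ coeffs_q
    print(np.cross(residual, n))         # ~0: residual is parallel to n
    ```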

  17. Developing Novel Reservoir Rule Curves Using Seasonal Inflow Projections

    NASA Astrophysics Data System (ADS)

    Tseng, Hsin-yi; Tung, Ching-pin

    2015-04-01

    Due to significant seasonal rainfall variations, reservoirs and their flexible operational rules are indispensable to Taiwan. Furthermore, with the intensifying impacts of climate change on extreme weather, the frequency of droughts in Taiwan has been increasing in recent years. Drought is a creeping phenomenon; its slow onset makes it difficult to detect at an early stage and delays the best decisions on allocating water. For these reasons, novel reservoir rule curves using projected seasonal streamflow are proposed in this study, which can potentially reduce the adverse effects of drought. This study is dedicated to establishing new rule curves which consider both current available storage and anticipated monthly inflows with a lead time of two months, to reduce the risk of water shortage. The monthly inflows are projected based on the seasonal climate forecasts from the Central Weather Bureau (CWB), with a weather generation model used to produce daily weather data for the hydrological component of the GWLF model. To incorporate future monthly inflow projections into rule curves, this study designs a decision flow index which is a linear combination of current available storage and inflow projections with a lead time of two months. By optimizing the linear coefficients of the decision flow index, the shape of the rule curves and the percentage of water supplied in each zone, the best rule curves to decrease water shortage risk and impacts can be developed. The Shimen Reservoir in northern Taiwan is used as a case study to demonstrate the proposed method. Existing rule curves (M5 curves) of the Shimen Reservoir are compared with two cases of new rule curves, including hindcast simulations and historic seasonal forecasts. The results show the new rule curves can decrease the total water shortage ratio and, in addition, can allocate shortage amounts to preceding months to avoid extreme shortage events. Even though some uncertainties in historic forecasts would result in unnecessary discounts of water supply, the new curves still perform better than the M5 curves during droughts.
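
    A hedged sketch of the decision flow index follows: a linear combination of current storage and the two projected monthly inflows, with the coefficients and zone thresholds as placeholders for the quantities the study optimizes.

    ```python
    def decision_flow_index(storage, inflow_month1, inflow_month2,
                            a=1.0, b1=0.6, b2=0.3):
        # linear combination of current storage and two months of projected inflow
        return a * storage + b1 * inflow_month1 + b2 * inflow_month2

    # Supply is then discounted according to the rule-curve zone the index falls in
    index = decision_flow_index(storage=120.0, inflow_month1=35.0, inflow_month2=20.0)
    zones = [(90.0, 0.5), (140.0, 0.8)]   # (upper bound, supply ratio); else 1.0
    ratio = next((r for bound, r in zones if index < bound), 1.0)
    print(index, ratio)
    ```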

  18. On nonstationarity-related errors in modal combination rules of the response spectrum method

    NASA Astrophysics Data System (ADS)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for practicing engineers, modal combination rules play a central role in peak response estimation. Most of the available modal combination rules are however based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for the estimation of the extents to which nonstationarity affects the modal and total system responses, when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of the complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant, when the strong-motion duration of the excitation is too small compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
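
    For reference, the CQC combination itself is compact; the sketch below uses Der Kiureghian's cross-modal correlation coefficient for equal damping in all modes, with invented modal peaks and frequencies.

    ```python
    import numpy as np

    def cqc(peaks, freqs, zeta=0.05):
        """Peak response via CQC: double sum over modal peaks r_i with rho_ij."""
        w = np.asarray(freqs, dtype=float)
        beta = w[None, :] / w[:, None]                 # frequency ratios
        num = 8 * zeta**2 * (1 + beta) * beta**1.5
        den = (1 - beta**2) ** 2 + 4 * zeta**2 * beta * (1 + beta) ** 2
        rho = num / den                                # rho_ii = 1 on the diagonal
        r = np.asarray(peaks, dtype=float)
        return np.sqrt(r @ rho @ r)

    print(cqc(peaks=[1.0, 0.6, 0.3], freqs=[2.0, 2.2, 5.0]))
    ```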

  19. Learning accurate and interpretable models based on regularized random forests regression

    PubMed Central

    2014-01-01

    Background Many biology-related research works combine data from multiple sources in an effort to understand the underlying problems. It is important to find and interpret the most important information from these sources. Thus it will be beneficial to have an effective algorithm that can simultaneously extract decision rules and select critical features for good interpretation while preserving the prediction performance. Methods In this study, we focus on regression problems for biological data where target outcomes are continuous. In general, models constructed from linear regression approaches are relatively easy to interpret. However, many practical biological applications are nonlinear in essence, where we can hardly find a direct linear relationship between input and output. Nonlinear regression techniques can reveal nonlinear relationships in data, but are generally hard for humans to interpret. We propose a rule-based regression algorithm that uses 1-norm regularized random forests. The proposed approach simultaneously extracts a small number of rules from generated random forests and eliminates unimportant features. Results We tested the approach on some biological data sets. The proposed approach is able to construct a significantly smaller set of regression rules using a subset of attributes while achieving prediction performance comparable to that of random forests regression. Conclusion It demonstrates high potential in aiding prediction and interpretation of nonlinear relationships of the subject being studied. PMID:25350120
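
    A loose sketch of the general recipe (RuleFit-style, not the paper's exact algorithm): encode forest leaves as binary rule features, then let a 1-norm (Lasso) penalty keep only a few rules. The data, forest size, and penalty weight are placeholders.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 8))
    y = X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(200)   # nonlinear target

    forest = RandomForestRegressor(n_estimators=20, max_depth=3, random_state=0)
    forest.fit(X, y)

    # one-hot encode which leaf each sample falls into, across all trees
    leaves = forest.apply(X)                                  # (n_samples, n_trees)
    rule_features = np.hstack([leaves[:, t:t + 1] == np.unique(leaves[:, t])
                               for t in range(leaves.shape[1])]).astype(float)

    lasso = Lasso(alpha=0.01, max_iter=10_000).fit(rule_features, y)
    print("rules kept:", np.sum(lasso.coef_ != 0), "of", rule_features.shape[1])
    ```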

  20. Refining Linear Fuzzy Rules by Reinforcement Learning

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.; Khedkar, Pratap S.; Malkani, Anil

    1996-01-01

    Linear fuzzy rules are increasingly being used in the development of fuzzy logic systems. Radial basis functions have also been used in the antecedents of the rules for clustering in product space, which can automatically generate a set of linear fuzzy rules from an input/output data set. Manual methods are usually used in refining these rules. This paper presents a method for refining the parameters of these rules using reinforcement learning, which can be applied in domains where supervised input-output data is not available and reinforcements are received only after a long sequence of actions. This is shown for a generalization of radial basis functions. The formation of fuzzy rules from data and their automatic refinement is an important step in extending the application of reinforcement learning methods to domains where only limited input-output data is available.

  1. Consensus for linear multi-agent system with intermittent information transmissions using the time-scale theory

    NASA Astrophysics Data System (ADS)

    Taousser, Fatima; Defoort, Michael; Djemai, Mohamed

    2016-01-01

    This paper investigates the consensus problem for a linear multi-agent system with fixed communication topology in the presence of intermittent communication, using the time-scale theory. Since each agent can only obtain relative local information intermittently, the proposed consensus algorithm is based on a discontinuous local interaction rule. The interaction among agents happens at a disjoint set of continuous-time intervals. The closed-loop multi-agent system can be represented using mixed linear continuous-time and linear discrete-time models due to intermittent information transmissions. The time-scale theory provides a powerful tool to combine continuous-time and discrete-time cases and study the consensus protocol under a unified framework. Using this theory, some conditions are derived to achieve exponential consensus under intermittent information transmissions. Simulations are performed to validate the theoretical results.
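
    The mixed continuous/discrete behavior can be mimicked in a few lines: consensus dynamics run only inside communication windows and the states hold in between; the graph, window pattern, and step size below are placeholders.

    ```python
    import numpy as np

    # Laplacian of a path graph on 3 agents (fixed topology)
    L = np.array([[ 1, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)
    x = np.array([1.0, 5.0, -2.0])
    dt = 0.01
    for k in range(200):
        if (k // 20) % 2 == 0:    # communication window: agents interact
            x = x - dt * L @ x    # continuous-time consensus, Euler-discretized
        # otherwise: no information exchange, states hold
    print(x, x.mean())            # converging toward the initial average 4/3
    ```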

  2. Bright and dark singlet excitons via linear and two-photon spectroscopy in monolayer transition metal dichalcogenides

    DOE PAGES

    Berkelbach, Timothy C.; Hybertsen, Mark S.; Reichmann, David R.

    2015-08-10

    We discuss the linear and two-photon spectroscopic selection rules for spin-singlet excitons in monolayer transition-metal dichalcogenides. Our microscopic formalism combines a fully k-dependent few-orbital band structure with a many-body Bethe-Salpeter equation treatment of the electron-hole interaction, using a model dielectric function. We show analytically and numerically that the single-particle, valley-dependent selection rules are preserved in the presence of excitonic effects. Furthermore, we definitively demonstrate that the bright (one-photon allowed) excitons have s-type azimuthal symmetry and that dark p-type excitons can be probed via two-photon spectroscopy. Thus, the screened Coulomb interaction in these materials substantially deviates from the 1/ε₀r form; this breaks the “accidental” angular momentum degeneracy in the exciton spectrum, such that the 2p exciton has a lower energy than the 2s exciton by at least 50 meV. We compare our calculated two-photon absorption spectra to recent experimental measurements.

  3. Spin structure of the neutron (³He) and the Bjoerken sum rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meziani, Z.E.

    1994-12-01

    A first measurement of the longitudinal asymmetry of deep-inelastic scattering of polarized electrons from a polarized ³He target at energies ranging from 19 to 26 GeV has been performed at the Stanford Linear Accelerator Center (SLAC). The spin-structure function of the neutron g₁ⁿ has been extracted from the measured asymmetries. The Quark Parton Model (QPM) interpretation of the nucleon spin-structure function is examined in light of the new results. A test of the Ellis-Jaffe sum rule (E-J) on the neutron is performed at high momentum transfer and found to be satisfied. Furthermore, combining the proton results of the European Muon Collaboration (EMC) and the neutron results of E-142, the Bjoerken sum rule test is carried out at high Q², where higher-order Perturbative Quantum Chromodynamics (PQCD) corrections and higher-twist corrections are smaller. The sum rule is saturated to within one standard deviation.

  4. Practical implementation of the double linear damage rule and damage curve approach for treating cumulative fatigue damage

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1980-01-01

    Simple procedures are presented for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is provided for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are provided for determining the two phases of life. The procedure involves two steps, each similar to the conventional application of the commonly used linear damage rule. When the sum of cycle ratios based on phase 1 lives reaches unity, phase 1 is presumed complete, and further loadings are summed as cycle ratios on phase 2 lives. When the phase 2 sum reaches unity, failure is presumed to occur. No other physical properties or material constants than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons of both methods are discussed.
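
    The two-step bookkeeping lends itself to a short sketch; the Phase I and Phase II lives below are invented inputs standing in for the paper's analytical expressions.

    ```python
    def double_linear_damage(loading, phase1_life, phase2_life):
        """loading: list of (load_level, cycles); lives: dicts per load level."""
        d1 = d2 = 0.0
        for level, cycles in loading:
            if d1 < 1.0:
                # consume cycles against Phase I lives first
                need = (1.0 - d1) * phase1_life[level]
                used = min(cycles, need)
                d1 += used / phase1_life[level]
                cycles -= used
            if cycles > 0:
                # remaining cycles count against Phase II lives
                d2 += cycles / phase2_life[level]
                if d2 >= 1.0:
                    return level, d2   # failure during this event
        return None, d2                # no failure; current Phase II damage

    n1 = {"high": 2_000, "low": 50_000}    # assumed Phase I lives
    n2 = {"high": 8_000, "low": 150_000}   # assumed Phase II lives
    print(double_linear_damage([("high", 1_500), ("low", 40_000)], n1, n2))
    ```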

  5. Practical implementation of the double linear damage rule and damage curve approach for treating cumulative fatigue damage

    NASA Technical Reports Server (NTRS)

    Manson, S. S.; Halford, G. R.

    1981-01-01

    Simple procedures are given for treating cumulative fatigue damage under complex loading history using either the damage curve concept or the double linear damage rule. A single equation is given for use with the damage curve approach; each loading event providing a fraction of damage until failure is presumed to occur when the damage sum becomes unity. For the double linear damage rule, analytical expressions are given for determining the two phases of life. The procedure comprises two steps, each similar to the conventional application of the commonly used linear damage rule. Once the sum of cycle ratios based on Phase I lives reaches unity, Phase I is presumed complete, and further loadings are summed as cycle ratios based on Phase II lives. When the Phase II sum attains unity, failure is presumed to occur. It is noted that no physical properties or material constants other than those normally used in a conventional linear damage rule analysis are required for application of either of the two cumulative damage methods described. Illustrations and comparisons are discussed for both methods.

  6. Relative Stabilities and Reactivities of Isolated Versus Conjugated Alkenes: Reconciliation Via a Molecular Orbital Approach

    NASA Astrophysics Data System (ADS)

    Sotiriou-Leventis, Chariklia; Hanna, Samir B.; Leventis, Nicholas

    1996-04-01

    The well-accepted practice of generating a pair of molecular orbitals, one of lower energy and another of higher energy than the original pair of overlapping atomic orbitals, and the concept of a particle in a one-dimensional box are implemented in a simplified, nonmathematical method that explains the relative stabilities and reactivities of alkenes with conjugated versus isolated double bonds. In this method, Hückel-type MOs of higher polyenes are constructed by the energy rules of linear combination of atomic orbitals. One additional rule is obeyed: bonding molecular orbitals overlap only with bonding molecular orbitals, and antibonding molecular orbitals overlap only with antibonding molecular orbitals.
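
    The stabilization being rationalized can be checked with a few lines of Hückel theory (taking alpha = 0, beta = -1): the occupied pi energies of conjugated butadiene lie below those of two isolated ethylene double bonds.

    ```python
    import numpy as np

    def huckel_energy(n_atoms, bonds, n_electrons):
        h = np.zeros((n_atoms, n_atoms))
        for i, j in bonds:
            h[i, j] = h[j, i] = -1.0           # beta for each bonded pair
        levels = np.sort(np.linalg.eigvalsh(h))
        # fill the lowest orbitals, two electrons each
        return 2 * levels[: n_electrons // 2].sum()

    butadiene = huckel_energy(4, [(0, 1), (1, 2), (2, 3)], 4)
    two_ethylenes = 2 * huckel_energy(2, [(0, 1)], 2)
    print(butadiene, two_ethylenes)   # ~ -4.472 vs -4.0: conjugation stabilizes
    ```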

  7. Combined Prediction Model of Death Toll for Road Traffic Accidents Based on Independent and Dependent Variables

    PubMed Central

    Zhong-xiang, Feng; Shi-sheng, Lu; Wei-hua, Zhang; Nan-nan, Zhang

    2014-01-01

    In order to build a combined model which can capture the variation pattern of death toll data for road traffic accidents, reflect the influence of multiple factors on traffic accidents, and improve prediction accuracy, the Verhulst model was built based on the road traffic accident death toll in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a combined prediction model with weight coefficients. The Shapley value method was applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the number of deaths from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data characteristics but also quantify the degree of influence of each factor on the death toll, and had high accuracy as well as strong practicability. PMID:25610454

  8. Combined prediction model of death toll for road traffic accidents based on independent and dependent variables.

    PubMed

    Feng, Zhong-xiang; Lu, Shi-sheng; Zhang, Wei-hua; Zhang, Nan-nan

    2014-01-01

    In order to build a combined model which can capture the variation pattern of death toll data for road traffic accidents, reflect the influence of multiple factors on traffic accidents, and improve prediction accuracy, the Verhulst model was built based on the road traffic accident death toll in China from 2002 to 2011; car ownership, population, GDP, highway freight volume, highway passenger transportation volume, and highway mileage were chosen as the factors for a multivariate linear regression model of the death toll. The two models were then combined into a combined prediction model with weight coefficients. The Shapley value method was applied to calculate the weight coefficients by assessing each model's contribution. Finally, the combined model was used to recalculate the number of deaths from 2002 to 2011, and the combined model was compared with the Verhulst and multivariate linear regression models. The results showed that the new model could not only characterize the death toll data characteristics but also quantify the degree of influence of each factor on the death toll, and had high accuracy as well as strong practicability.

  9. Atmospheric Downscaling using Genetic Programming

    NASA Astrophysics Data System (ADS)

    Zerenner, Tanja; Venema, Victor; Simmer, Clemens

    2013-04-01

    Coupling models for the different components of the Soil-Vegetation-Atmosphere system requires up- and downscaling procedures. The subject of our work is the downscaling scheme used to derive high-resolution forcing data for land-surface and subsurface models from coarser atmospheric model output. The current downscaling scheme [Schomburg et al. 2010, 2012] combines a bi-quadratic spline interpolation, deterministic rules and autoregressive noise. For the development of the scheme, training and validation data sets were created by carrying out high-resolution runs of the atmospheric model. The deterministic rules in this scheme are partly based on known physical relations and partly determined by an automated search for linear relationships between the high-resolution fields of the atmospheric model output and high-resolution data on surface characteristics. Up to now, deterministic rules are available for downscaling surface pressure and, partially, depending on the prevailing weather conditions, for near-surface temperature and radiation. The aim of our work is to improve those rules and to find deterministic rules for the remaining variables that require downscaling, e.g. precipitation or near-surface specific humidity. To accomplish that, we broaden the search by allowing for interdependencies between different atmospheric parameters, non-linear relations, and non-local and time-lagged relations. To cope with the vast number of possible solutions, we use genetic programming, a machine learning method based on the principles of natural evolution. We are currently working with GPLAB, a genetic programming toolbox for Matlab. At first we tested the GP system's ability to retrieve the known physical rule for downscaling surface pressure, i.e. the hydrostatic equation, from our training data. We found this to be a simple task for the GP system. Furthermore, we improved the accuracy and efficiency of the GP solution by implementing constant variation and optimization as genetic operators. Next we worked on an improvement of the downscaling rule for the two-meter temperature. We added an if-function with four input arguments to the function set. Since this was shown to increase bloat, we additionally modified our fitness function by including penalty terms for both the size of the solutions and the number of intron nodes, i.e. program parts that are never evaluated. Starting from the known downscaling rule for the two-meter temperature, which linearly exploits the orography anomalies allowed or disallowed by a certain temperature gradient, our GP system has been able to find an improvement. The rule produced by the GP clearly shows better performance concerning the reproduced small-scale variability.
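
    The modified fitness described above reduces to a small formula; the sketch below adds placeholder penalty weights for solution size and intron count to the prediction error.

    ```python
    import numpy as np

    def fitness(predictions, targets, n_nodes, n_intron_nodes,
                lambda_size=1e-3, lambda_intron=1e-2):
        # prediction error plus parsimony penalties (weights are placeholders)
        rmse = np.sqrt(np.mean((predictions - targets) ** 2))
        return rmse + lambda_size * n_nodes + lambda_intron * n_intron_nodes

    print(fitness(np.array([1.0, 2.0]), np.array([1.1, 1.9]),
                  n_nodes=25, n_intron_nodes=3))
    ```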

  10. Origami rules for the construction of localized eigenstates of the Hubbard model in decorated lattices

    NASA Astrophysics Data System (ADS)

    Dias, R. G.; Gouveia, J. D.

    2015-11-01

    We present a method of construction of exact localized many-body eigenstates of the Hubbard model in decorated lattices, both for U = 0 and U → ∞. These states are localized with respect to both hole and particle movement. The starting point of the method is the construction of a plaquette or a set of plaquettes with a higher symmetry than that of the whole lattice. Using a simple set of rules, the tight-binding localized state in such a plaquette can be divided, folded and unfolded to new plaquette geometries. This set of rules is also valid for the construction of a localized state for one hole in the U → ∞ limit of the same plaquette, assuming a spin configuration which is a uniform linear combination of all possible permutations of the set of spins in the plaquette.

  11. Direct surface magnetometry with photoemission magnetic x-ray dichroism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tobin, J.G.; Goodman, K.W.; Schumann, F.O.

    1997-04-01

    Element-specific surface magnetometry remains a central goal of synchrotron radiation based studies of nanomagnetic structures. One appealing possibility is the combination of x-ray absorption dichroism measurements and the theoretical framework provided by the "sum rules." Unfortunately, sum rule analyses are hampered by several limitations including delocalization of the final state, multi-electronic phenomena and the presence of surface dipoles. An alternative experiment, Magnetic X-Ray Dichroism in Photoelectron Spectroscopy, holds out promise based upon its elemental specificity, surface sensitivity and high resolution. Computational simulations by Tamura et al. demonstrated the relationship between exchange and spin-orbit splittings and experimental data of linear and circular dichroisms. Now the authors have developed an analytical framework which allows for the direct extraction of core-level exchange splittings from circular and linear dichroic photoemission data. By extending a model initially proposed by Venus, it is possible to show a linear relation between normalized dichroism peaks in the experimental data and the underlying exchange splitting. Since it is reasonable to expect that exchange splittings and magnetic moments track together, this measurement thus becomes a powerful new tool for direct surface magnetometry, without recourse to time-consuming and difficult spectral simulations. The theoretical derivation will be supported by high-resolution linear and circular dichroism data collected at the Spectromicroscopy Facility of the Advanced Light Source.

  12. A simple attitude control of quadrotor helicopter based on Ziegler-Nichols rules for tuning PD parameters.

    PubMed

    He, ZeFang; Zhao, Long

    2014-01-01

    An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that the quadrotor tends to be unstable. This problem is caused by the narrow definition domain of attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller with PD parameters tuned by Ziegler-Nichols rules and acts on the quadrotor decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization item which converts a nonlinear system into a linear system. It can be seen from the simulation results that the attitude controller proposed in this paper is highly robust, and its control effect is better than that of the other two nonlinear controllers. The nonlinear parts of the other two nonlinear controllers are the same as the attitude controller proposed in this paper. The linear part involves a PID (proportional-integral-derivative) controller with the PID controller parameters tuned by Ziegler-Nichols rules and a PD controller with the PD controller parameters tuned by GA (genetic algorithms). Moreover, this attitude controller is simple and easy to implement.
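
    The Ziegler-Nichols step referenced here follows the classic table (PD row: Kp = 0.8 Ku, Td = Tu/8), given a measured ultimate gain Ku and oscillation period Tu; the values below are placeholders, not identified from a quadrotor.

    ```python
    def zn_pd(Ku, Tu):
        """Classic Ziegler-Nichols PD tuning from ultimate gain and period."""
        Kp = 0.8 * Ku
        Td = Tu / 8.0
        return Kp, Kp * Td        # proportional and derivative gains

    Kp, Kd = zn_pd(Ku=12.0, Tu=0.8)   # placeholder measurements
    print(Kp, Kd)                      # 9.6, 0.96
    ```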

  13. Discretely Conservative Finite-Difference Formulations for Nonlinear Conservation Laws in Split Form: Theory and Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Fisher, Travis C.; Carpenter, Mark H.; Nordstroem, Jan; Yamaleev, Nail K.; Swanson, R. Charles

    2011-01-01

    Simulations of nonlinear conservation laws that admit discontinuous solutions are typically restricted to discretizations of equations that are explicitly written in divergence form. This restriction is, however, unnecessary. Herein, linear combinations of divergence and product rule forms that have been discretized using diagonal-norm skew-symmetric summation-by-parts (SBP) operators are shown to satisfy the sufficient conditions of the Lax-Wendroff theorem and thus are appropriate for simulations of discontinuous physical phenomena. Furthermore, special treatments are not required at the points that are near physical boundaries (i.e., discrete conservation is achieved throughout the entire computational domain, including the boundaries). Examples are presented of a fourth-order, SBP finite-difference operator with second-order boundary closures. Sixth- and eighth-order constructions are derived and included in an appendix. Narrow-stencil difference operators for linear viscous terms are also derived; these guarantee the conservative form of the combined operator.
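
    The "linear combination of divergence and product rule forms" is, for Burgers' flux, the classical 2/3-1/3 split. The sketch below uses a periodic central (skew-symmetric) difference instead of the paper's SBP operators with boundary closures, so the discrete energy is conserved up to time-integration error.

    ```python
    import numpy as np

    n, dx, dt = 128, 2 * np.pi / 128, 1e-3
    x = np.arange(n) * dx
    u = np.sin(x) + 0.5

    def dxc(f):                        # periodic central difference (skew-symmetric)
        return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

    def rhs(u):
        # 2/3 divergence form + 1/3 product rule form of the Burgers flux
        return -(dxc(u * u) + u * dxc(u)) / 3.0

    def rk4(u, dt):
        k1 = rhs(u); k2 = rhs(u + dt / 2 * k1)
        k3 = rhs(u + dt / 2 * k2); k4 = rhs(u + dt * k3)
        return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    e0 = np.sum(u * u) * dx / 2
    for _ in range(500):               # integrate to t = 0.5, before the shock
        u = rk4(u, dt)
    print(e0, np.sum(u * u) * dx / 2)  # discrete energy is nearly unchanged
    ```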

  14. An efficient reliability algorithm for locating design point using the combination of importance sampling concepts and response surface method

    NASA Astrophysics Data System (ADS)

    Shayanfar, Mohsen Ali; Barkhordari, Mohammad Ali; Roudak, Mohammad Amin

    2017-06-01

    Monte Carlo simulation (MCS) is a useful tool for computing the probability of failure in reliability analysis. However, the large number of required random samples makes it time-consuming. The response surface method (RSM) is another common method in reliability analysis. Although RSM is widely used for its simplicity, it cannot be trusted in highly nonlinear problems due to its linear nature. In this paper, a new efficient algorithm employing the combination of importance sampling, as a class of MCS, and RSM is proposed. In the proposed algorithm, the analysis starts with importance sampling concepts, using a proposed two-step design-point updating rule. This part finishes after a small number of samples are generated. Then RSM starts to work using Bucher's experimental design, with the last design point and a proposed effective length as the center point and radius of Bucher's approach, respectively. Through illustrative numerical examples, the simplicity and efficiency of the proposed algorithm and the effectiveness of the proposed rules are shown.

  15. Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.

    PubMed

    Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi

    2013-01-01

    The abundance of gene expression microarray data has led to the development of machine learning algorithms applicable for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in terms of accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm based on a combination of fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. The fuzzy support vector machine, as a new classification model with high generalization power, robustness, and good interpretability, seems to be a promising tool for gene expression microarray classification.

  16. A Simple Attitude Control of Quadrotor Helicopter Based on Ziegler-Nichols Rules for Tuning PD Parameters

    PubMed Central

    He, ZeFang

    2014-01-01

    An attitude control strategy based on Ziegler-Nichols rules for tuning PD (proportional-derivative) parameters of quadrotor helicopters is presented to solve the problem that the quadrotor tends to be unstable. This problem is caused by the narrow definition domain of attitude angles of quadrotor helicopters. The proposed controller is nonlinear and consists of a linear part and a nonlinear part. The linear part is a PD controller with PD parameters tuned by Ziegler-Nichols rules and acts on the quadrotor decoupled linear system after feedback linearization; the nonlinear part is a feedback linearization item which converts a nonlinear system into a linear system. It can be seen from the simulation results that the attitude controller proposed in this paper is highly robust, and its control effect is better than that of the other two nonlinear controllers. The nonlinear parts of the other two nonlinear controllers are the same as the attitude controller proposed in this paper. The linear part involves a PID (proportional-integral-derivative) controller with the PID controller parameters tuned by Ziegler-Nichols rules and a PD controller with the PD controller parameters tuned by GA (genetic algorithms). Moreover, this attitude controller is simple and easy to implement. PMID:25614879

  17. Implementing a Bayes Filter in a Neural Circuit: The Case of Unknown Stimulus Dynamics.

    PubMed

    Sokoloski, Sacha

    2017-09-01

    In order to interact intelligently with objects in the world, animals must first transform neural population responses into estimates of the dynamic, unknown stimuli that caused them. The Bayesian solution to this problem is known as a Bayes filter, which applies Bayes' rule to combine population responses with the predictions of an internal model. The internal model of the Bayes filter is based on the true stimulus dynamics, and in this note, we present a method for training a theoretical neural circuit to approximately implement a Bayes filter when the stimulus dynamics are unknown. To do this we use the inferential properties of linear probabilistic population codes to compute Bayes' rule and train a neural network to compute approximate predictions by the method of maximum likelihood. In particular, we perform stochastic gradient descent on the negative log-likelihood of the neural network parameters with a novel approximation of the gradient. We demonstrate our methods on a finite-state, a linear, and a nonlinear filtering problem and show how the hidden layer of the neural network develops tuning curves consistent with findings in experimental neuroscience.
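
    In its simplest finite-state form, the filter is two lines: predict with the internal transition model, then apply Bayes' rule against the observation likelihood. The transition matrix and likelihood vectors below are invented; the note's neural-network approximation of the prediction step is not shown.

    ```python
    import numpy as np

    def bayes_filter_step(belief, transition, likelihood):
        predicted = transition @ belief            # internal-model prediction
        posterior = likelihood * predicted         # Bayes' rule (unnormalized)
        return posterior / posterior.sum()

    # Two hidden stimulus states, sticky dynamics, noisy observations
    T = np.array([[0.9, 0.2],
                  [0.1, 0.8]])                     # columns sum to 1
    belief = np.array([0.5, 0.5])
    for lik in [np.array([0.8, 0.3]), np.array([0.7, 0.4]), np.array([0.1, 0.9])]:
        belief = bayes_filter_step(belief, T, lik)
        print(belief)
    ```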

  18. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that, for mildly ill-posed problems and Gaussian noise, these conditions are satisfied almost surely, whereas in the severely ill-posed case and a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
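
    The quasi-optimality rule itself is easy to state: on a geometric grid of regularization parameters, pick the one minimizing the difference between consecutive regularized solutions. The sketch applies it to plain Tikhonov regularization on a toy ill-conditioned system; the linear-functional and aggregation refinements are omitted.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)   # ill-conditioned
    x_true = rng.standard_normal(12)
    y = A @ x_true + 1e-3 * rng.standard_normal(40)

    def tikhonov(alpha):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    alphas = np.geomspace(1e-12, 1e0, 60)
    xs = [tikhonov(a) for a in alphas]
    # quasi-optimality: minimize the jump between consecutive solutions
    diffs = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    k_star = int(np.argmin(diffs))
    print(alphas[k_star], np.linalg.norm(xs[k_star] - x_true))
    ```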

  19. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  20. A comparison of Heuristic method and Llewellyn’s rules for identification of redundant constraints

    NASA Astrophysics Data System (ADS)

    Estiningsih, Y.; Farikhin; Tjahjana, R. H.

    2018-03-01

    Modelling and solving practical optimization problems are important techniques in linear programming. Redundant constraints are considered for their effects on general linear programming problems. Identifying and removing redundant constraints avoids the calculations associated with them when solving a linear programming problem. Many methods have been proposed for identifying redundant constraints. This paper presents a comparison of the Heuristic method and Llewellyn's rules for the identification of redundant constraints.
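
    One deterministic baseline for such comparisons: constraint a_k.x <= b_k is redundant if maximizing a_k.x subject to the remaining constraints cannot exceed b_k. The sketch below applies this LP test to a toy system; Llewellyn's rules and the heuristic method themselves are cheaper screens and are not reproduced here.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    A = np.array([[1.0, 1.0],
                  [1.0, 0.0],
                  [0.0, 1.0],
                  [2.0, 2.0]])       # last row is implied by the first
    b = np.array([4.0, 3.0, 3.0, 10.0])

    for k in range(len(b)):
        rest = [i for i in range(len(b)) if i != k]
        # maximize a_k.x (minimize its negative) under the other constraints
        res = linprog(-A[k], A_ub=A[rest], b_ub=b[rest], bounds=[(0, None)] * 2)
        if res.status == 0 and -res.fun <= b[k] + 1e-9:
            print(f"constraint {k} is redundant")
    ```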

  1. Combination Rules for Morse-Based van der Waals Force Fields.

    PubMed

    Yang, Li; Sun, Lei; Deng, Wei-Qiao

    2018-02-15

    In traditional force fields (FFs), van der Waals interactions have been usually described by the Lennard-Jones potentials. Conventional combination rules for the parameters of van der Waals (VDW) cross-termed interactions were developed for the Lennard-Jones based FFs. Here, we report that the Morse potentials were a better function to describe VDW interactions calculated by highly precise quantum mechanics methods. A new set of combination rules was developed for Morse-based FFs, in which VDW interactions were described by Morse potentials. The new set of combination rules has been verified by comparing the second virial coefficients of 11 noble gas mixtures. For all of the mixed binaries considered in this work, the combination rules work very well and are superior to all three other existing sets of combination rules reported in the literature. We further used the Morse-based FF by using the combination rules to simulate the adsorption isotherms of CH₄ at 298 K in four covalent-organic frameworks (COFs). The overall agreement is great, which supports the further applications of this new set of combination rules in more realistic simulation systems.
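
    The sketch below shows how Morse cross terms are assembled from like-pair parameters. The geometric/arithmetic means used are generic placeholders, as are the noble-gas parameter values; the paper derives its own, different combination rules.

    ```python
    import numpy as np

    def morse(r, D, a, r0):
        """Morse well: depth D at r0, stiffness controlled by a."""
        return D * ((1 - np.exp(-a * (r - r0))) ** 2 - 1)

    # like-pair parameters (D in kcal/mol, a in 1/Angstrom, r0 in Angstrom), assumed
    Ar = dict(D=0.285, a=1.60, r0=3.87)
    Kr = dict(D=0.400, a=1.52, r0=4.14)

    # placeholder combination: geometric mean for depth, arithmetic for a and r0
    cross = dict(D=np.sqrt(Ar["D"] * Kr["D"]),
                 a=0.5 * (Ar["a"] + Kr["a"]),
                 r0=0.5 * (Ar["r0"] + Kr["r0"]))
    r = np.linspace(3.0, 8.0, 6)
    print(morse(r, **cross))   # Ar-Kr cross interaction energies
    ```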

  2. Modelling dynamics with context-free grammars

    NASA Astrophysics Data System (ADS)

    García-Huerta, Juan-M.; Jiménez-Hernández, Hugo; Herrera-Navarro, Ana-M.; Hernández-Díaz, Teresa; Terol-Villalobos, Ivan

    2014-03-01

    This article presents a strategy to model the dynamics performed by vehicles on a freeway. The proposal consists of encoding the movement as a set of finite states. A watershed-based segmentation is used to localize regions with a high probability of motion. Each state represents a proportion of a camera projection in a two-dimensional space, where each state is associated with a symbol, such that any combination of symbols is expressed as a language. From a sequence of symbols, a context-free grammar is inferred through a linear algorithm. This grammar represents a hierarchical view of common sequences observed in the scene. The most probable grammar rules express common rules associated with normal movement behavior. Less probable rules provide a way to quantify uncommon behaviors that may need more attention. Finally, any sequence of symbols that does not match the grammar rules may itself express uncommon (abnormal) behaviors. The grammar inference is built from several sequences of images taken from a freeway. The testing process uses the sequence of symbols emitted by the scenario, matching the grammar rules with common freeway behaviors. The detection of abnormal/normal behaviors is managed as the task of verifying whether any word generated by the scenario is recognized by the grammar.

  3. Magnetic susceptibility, artifact volume in MRI, and tensile properties of swaged Zr-Ag composites for biomedical applications.

    PubMed

    Imai, Haruki; Tanaka, Yoji; Nomura, Naoyuki; Doi, Hisashi; Tsutsumi, Yusuke; Ono, Takashi; Hanawa, Takao

    2017-02-01

    Zr-Ag composites were fabricated to decrease the magnetic susceptibility by compensating for the magnetic susceptibility of their components. Zr-Ag composites with different Zr-Ag ratios were swaged, and their magnetic susceptibility, artifact volume, and mechanical properties were evaluated by magnetic balance, three-dimensional (3-D) artifact rendering, and a tensile test, respectively. These properties were correlated with the volume fraction of Ag using the linear rule of mixtures. We successfully obtained swaged Zr-Ag composites up to a reduction ratio of 96% for Zr-4, 16, 36, 64Ag and 86% for Zr-81Ag. However, the volume fraction of Ag after swaging tended to be lower than that before swaging, especially for Ag-rich Zr-Ag composites. The magnetic susceptibility of the composites decreased linearly with increasing volume fraction of Ag. No artifact was estimated for Ag volume fractions in the range from 93.7% to 95.4% in three conditions. Young's modulus, ultimate tensile strength (UTS), and 0.2% yield strength of the Zr-Ag composites showed slightly lower values than those estimated using the linear rule of mixtures. The decrease in magnetic susceptibility of Zr and Ag by alloying or combining would contribute to the decrease of the Ag fraction, leading to the improvement of mechanical properties.
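
    The linear rule of mixtures used for the correlation is straightforward to state in code; the susceptibility values below are rough handbook-style numbers for illustration, not the paper's measurements:

        def rule_of_mixtures(prop_a, prop_b, vol_frac_b):
            """Linear rule of mixtures: composite property from component properties."""
            return (1.0 - vol_frac_b) * prop_a + vol_frac_b * prop_b

        # Illustrative volume magnetic susceptibilities (dimensionless SI):
        chi_Zr = 1.1e-4      # paramagnetic
        chi_Ag = -2.4e-5     # diamagnetic
        # Ag volume fraction at which the composite susceptibility crosses zero:
        v_zero = chi_Zr / (chi_Zr - chi_Ag)
        print(v_zero)        # ~0.82 with these illustrative numbers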

  4. Loading Deformation Characteristic Simulation Study of Engineering Vehicle Refurbished Tire

    NASA Astrophysics Data System (ADS)

    Qiang, Wang; Xiaojie, Qi; Zhao, Yang; Yunlong, Wang; Guotian, Wang; Degang, Lv

    2018-05-01

    The paper constructs geometric, mechanical, contact, and finite element analysis models of an engineering vehicle refurbished tire and presents a simulation study of its load-deformation behavior, comparing it with a new tire of the same type, under static ground-contact conditions. The analysis shows that the radial and lateral deformation of the refurbished tire follows rules close to those of the new tire, with deformation values slightly lower than those of the new tire. At constant inflation pressure, the radial deformation of the refurbished tire increased linearly with load; the lateral deformation changed linearly with load when the inflation pressure was low, but increased non-linearly when the inflation pressure was very high.

  5. Local Subspace Classifier with Transform-Invariance for Image Classification

    NASA Astrophysics Data System (ADS)

    Hotta, Seiji

    A family of linear subspace classifiers called the local subspace classifier (LSC) outperforms the k-nearest neighbor rule (kNN) and conventional subspace classifiers in handwritten digit classification. However, LSC suffers from very high sensitivity to image transformations because it uses projection and Euclidean distances for classification. In this paper, I present a combination of a local subspace classifier (LSC) and a tangent distance (TD) for improving the accuracy of handwritten digit recognition. In this classification rule, transform-invariance can be handled easily because tangent vectors can be used to approximate transformations. However, tangent vectors cannot be used for other types of images, such as color images. Hence, a kernel LSC (KLSC) is proposed for incorporating transform-invariance into LSC via kernel mapping. The performance of the proposed methods is verified by experiments on handwritten digit and color image classification.
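
    A minimal sketch of the baseline LSC decision rule (without the tangent distance or kernel extensions): project the query onto the affine hull of its k nearest neighbors from each class and pick the class with the smallest residual.

        import numpy as np

        def lsc_predict(x, X_train, y_train, k=5):
            """Local subspace classifier: classify by distance to the affine
            hull of the k nearest training points of each class."""
            best_label, best_dist = None, np.inf
            for label in np.unique(y_train):
                Xc = X_train[y_train == label]
                idx = np.argsort(np.linalg.norm(Xc - x, axis=1))[:k]
                N = Xc[idx]                    # k local prototypes
                mu = N.mean(axis=0)
                A = (N - mu).T                 # directions spanning the local subspace
                coef, *_ = np.linalg.lstsq(A, x - mu, rcond=None)
                dist = np.linalg.norm(x - mu - A @ coef)
                if dist < best_dist:
                    best_label, best_dist = label, dist
            return best_label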

  6. Optimal operating rules definition in complex water resource systems combining fuzzy logic, expert criteria and stochastic programming

    NASA Astrophysics Data System (ADS)

    Macian-Sorribes, Hector; Pulido-Velazquez, Manuel

    2016-04-01

    This contribution presents a methodology for defining optimal seasonal operating rules in multireservoir systems coupling expert criteria and stochastic optimization. Both sources of information are combined using fuzzy logic. The structure of the operating rules is defined based on expert criteria, via a joint expert-technician framework consisting of a series of meetings, workshops and surveys carried out between reservoir managers and modelers. As a result, the decision-making process used by managers can be assessed and expressed using fuzzy logic: fuzzy rule-based systems are employed to represent the operating rules and fuzzy regression procedures are used for forecasting future inflows. Once this is done, a stochastic optimization algorithm can be used to define optimal decisions and transform them into fuzzy rules. Finally, the optimal fuzzy rules and the inflow prediction scheme are combined into a Decision Support System for making seasonal forecasts and simulating the effect of different alternatives in response to the initial system state and the foreseen inflows. The approach presented has been applied to the Jucar River Basin (Spain). Reservoir managers explained how the system is operated, taking into account the reservoirs' states at the beginning of the irrigation season and the inflows previewed during that season. According to the information given by them, the Jucar River Basin operating policies were expressed via two fuzzy rule-based (FRB) systems that estimate the amount of water to be allocated to the users and how the reservoir storages should be balanced to guarantee those deliveries. A stochastic optimization model using Stochastic Dual Dynamic Programming (SDDP) was developed to define optimal decisions, which are transformed into optimal operating rules by embedding them into the two FRBs previously created. As a benchmark, historical records are used to develop alternative operating rules. A fuzzy linear regression procedure was employed to foresee future inflows depending on present and past hydrological and meteorological variables actually used by the reservoir managers to define likely inflow scenarios. A Decision Support System (DSS) was created coupling the FRB systems and the inflow prediction scheme in order to give the user a set of possible optimal releases in response to the reservoir states at the beginning of the irrigation season and the fuzzy inflow projections made using hydrological and meteorological information. The results show that the optimal DSS created using the FRB operating policies is able to increase the amount of water allocated to the users by 20 to 50 Mm3 per irrigation season with respect to the current policies. Consequently, the mechanism used to define optimal operating rules and transform them into a DSS is able to increase the water deliveries in the Jucar River Basin, combining expert criteria and optimization algorithms in an efficient way. This study has been partially supported by the IMPADAPT project (CGL2013-48424-C2-1-R) with Spanish MINECO (Ministerio de Economía y Competitividad) and FEDER funds. It also has received funding from the European Union's Horizon 2020 research and innovation programme under the IMPREX project (grant agreement no: 641.811).
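
    A toy illustration of the fuzzy-rule mechanics (the membership shapes, rule set and defuzzification here are hypothetical and far simpler than the Jucar FRB systems):

        def tri(x, a, b, c):
            """Triangular membership; a == b or b == c give shoulder (ramp) shapes."""
            left = 1.0 if b == a else (x - a) / (b - a)
            right = 1.0 if c == b else (c - x) / (c - b)
            return max(0.0, min(left, right))

        def release(storage, demand):
            """IF storage LOW THEN hedge (release 60% of demand);
            IF storage HIGH THEN release full demand; weighted-average defuzzification."""
            w_low = tri(storage, 0.0, 0.0, 0.6)    # storage normalized to [0, 1]
            w_high = tri(storage, 0.4, 1.0, 1.0)
            r_low, r_high = 0.6 * demand, 1.0 * demand
            return (w_low * r_low + w_high * r_high) / (w_low + w_high + 1e-12)

        print(release(0.5, 100.0))   # partial hedging between the two rules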

  7. Double Linear Damage Rule for Fatigue Analysis

    NASA Technical Reports Server (NTRS)

    Halford, G.; Manson, S.

    1985-01-01

    The Double Linear Damage Rule (DLDR) is a method for use by structural designers to determine fatigue-crack-initiation life when a structure is subjected to unsteady, variable-amplitude cyclic loadings. The method calculates, in advance of service, how many loading cycles can be imposed on a structural component before a macroscopic crack initiates. The approach can eventually be used in the design of high-performance systems and incorporated into design handbooks and codes.
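
    For orientation, the baseline single linear rule that the DLDR refines is Palmgren-Miner damage summation; a minimal sketch (the DLDR itself replaces the single sum with two sequential linear phases, one for each stage of crack development):

        def miner_damage(blocks):
            """Accumulated damage: sum of n_i / N_i over loading blocks, where n_i
            cycles are applied at a level whose constant-amplitude life is N_i.
            Failure is predicted when the sum reaches 1."""
            return sum(n / N for n, N in blocks)

        # 30,000 cycles at a level with life 100,000, then 10,000 at life 20,000:
        blocks = [(30_000, 100_000), (10_000, 20_000)]
        d = miner_damage(blocks)
        print(d, "failure predicted" if d >= 1.0 else "below failure criterion")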

  8. New fundamental parameters for attitude representation

    NASA Astrophysics Data System (ADS)

    Patera, Russell P.

    2017-08-01

    A new attitude parameter set is developed to clarify the geometry of combining finite rotations in a rotational sequence and of combining infinitesimal angular increments generated by angular rate. The resulting parameter set of six Pivot Parameters represents a rotation as a great circle arc on a unit sphere that can be located at any clocking location in the rotation plane. Two rotations are combined by linking their arcs at either of the two intersection points of the respective rotation planes. In a similar fashion, linking rotational increments produced by angular rate is used to derive the associated kinematical equations, which are linear and have no singularities. Included in this paper is the derivation of twelve Pivot Parameter elements that represent all twelve Euler Angle sequences, which enables efficient conversions between Pivot Parameters and any Euler Angle sequence. Applications of this new parameter set include the derivation of quaternions and the quaternion composition rule, as well as the derivation of the analytical solution to time-dependent coning motion. The relationships between Pivot Parameters and traditional parameter sets are included in this work. Pivot Parameters are well suited for a variety of aerospace applications due to their effective composition rule, singularity-free kinematic equations, efficient conversion to and from Euler Angle sequences, and the clarity of their geometrical foundation.
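
    The Pivot Parameters are the paper's own contribution, but the standard quaternion composition rule it re-derives can be sketched as follows (scalar-first convention assumed):

        import numpy as np

        def quat_mul(q, p):
            """Hamilton product; quaternions as (w, x, y, z), scalar part first."""
            w1, x1, y1, z1 = q
            w2, x2, y2, z2 = p
            return np.array([
                w1*w2 - x1*x2 - y1*y2 - z1*z2,
                w1*x2 + x1*w2 + y1*z2 - z1*y2,
                w1*y2 - x1*z2 + y1*w2 + z1*x2,
                w1*z2 + x1*y2 - y1*x2 + z1*w2,
            ])

        def axis_angle_quat(axis, angle):
            axis = np.asarray(axis, dtype=float)
            axis /= np.linalg.norm(axis)
            return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

        # Two successive rotations combine as a single quaternion product:
        q_total = quat_mul(axis_angle_quat([0, 0, 1], np.pi / 2),
                           axis_angle_quat([1, 0, 0], np.pi / 2))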

  9. Modeling somatic and dendritic spike mediated plasticity at the single neuron and network level.

    PubMed

    Bono, Jacopo; Clopath, Claudia

    2017-09-26

    Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show how memory retention during associative learning can be prolonged in networks of neurons by including dendrites. Synaptic plasticity is the neuronal mechanism underlying learning. Here the authors construct biophysical models of pyramidal neurons that reproduce observed plasticity gradients along the dendrite and show that dendritic-spike-dependent LTP, which is predominant in distal sections, can prolong memory retention.
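
    For context, a minimal pair-based STDP window of the kind point-neuron models typically assume (parameter values illustrative):

        import numpy as np

        def stdp_dw(dt, A_plus=0.01, A_minus=0.012, tau=20.0):
            """Pair-based STDP: dt = t_post - t_pre in ms. Pre-before-post
            potentiates, post-before-pre depresses, each with an exponential window."""
            return A_plus * np.exp(-dt / tau) if dt >= 0 else -A_minus * np.exp(dt / tau)

        print(stdp_dw(10.0), stdp_dw(-10.0))   # potentiation, then depression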

  10. Linear discriminant analysis with misallocation in training samples

    NASA Technical Reports Server (NTRS)

    Chhikara, R. (Principal Investigator); Mckeon, J.

    1982-01-01

    Linear discriminant analysis for a two-class case is studied in the presence of misallocation in training samples. A general approach to the modeling of misallocation is formulated, and the mean vectors and covariance matrices of the mixture distributions are derived. The asymptotic distribution of the discriminant boundary is obtained, and the asymptotic first two moments of the two types of error rate are given. Certain numerical results for the error rates are presented by considering the random and two non-random misallocation models. It is shown that when the allocation procedure for training samples is objectively formulated, the effect of misallocation on the error rates of the Bayes linear discriminant rule can almost be eliminated. If, however, this is not possible, the use of the Fisher rule may be preferred over the Bayes rule.
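
    A minimal sketch of the two-class rule being studied; with misallocated labels, the class means and pooled covariance below are estimated from contaminated samples, which is precisely the effect the paper quantifies (the implementation itself is generic, not the paper's):

        import numpy as np

        def fisher_lda(X0, X1):
            """Fisher discriminant direction and midpoint threshold for two classes."""
            m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
            S = np.cov(X0, rowvar=False) * (len(X0) - 1) \
              + np.cov(X1, rowvar=False) * (len(X1) - 1)
            Sw = S / (len(X0) + len(X1) - 2)       # pooled within-class covariance
            w = np.linalg.solve(Sw, m1 - m0)       # discriminant direction
            c = w @ (m0 + m1) / 2.0                # midpoint threshold
            return w, c

        # classify x as class 1 if w @ x > c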

  11. Plasmonic modes in nanowire dimers: A study based on the hydrodynamic Drude model including nonlocal and nonlinear effects

    NASA Astrophysics Data System (ADS)

    Moeferdt, Matthias; Kiel, Thomas; Sproll, Tobias; Intravaia, Francesco; Busch, Kurt

    2018-02-01

    A combined analytical and numerical study of the modes in two distinct plasmonic nanowire systems is presented. The computations are based on a discontinuous Galerkin time-domain approach, and a fully nonlinear and nonlocal hydrodynamic Drude model for the metal is utilized. In the linear regime, these computations demonstrate the strong influence of nonlocality on the field distributions as well as on the scattering and absorption spectra. Based on these results, second-harmonic-generation efficiencies are computed over a frequency range that covers all relevant modes of the linear spectra. In order to interpret the physical mechanisms that lead to corresponding field distributions, the associated linear quasielectrostatic problem is solved analytically via conformal transformation techniques. This provides an intuitive classification of the linear excitations of the systems that is then applied to the full Maxwell case. Based on this classification, group theory facilitates the determination of the selection rules for the efficient excitation of modes in both the linear and nonlinear regimes. This leads to significantly enhanced second-harmonic generation via judiciously exploiting the system symmetries. These results regarding the mode structure and second-harmonic generation are of direct relevance to other nanoantenna systems.

  12. Diffusive Public Goods and Coexistence of Cooperators and Cheaters on a 1D Lattice

    PubMed Central

    Scheuring, István

    2014-01-01

    Many populations of cells cooperate through the production of extracellular materials. These materials (enzymes, siderophores) spread by diffusion and can be used by both the cooperator and cheater (non-producer) cells. In this paper the problem of coexistence of cooperator and cheater cells is studied on a 1D lattice where cooperator cells produce a diffusive material which is beneficial to the individuals according to the local concentration of this public good. The reproduction success of a cell increases linearly with the benefit in the first model version and increases non-linearly (saturates) in the second version. Two types of update rules are considered: either the cooperative cell stops producing material before death (death-production-birth, DpB) or it produces the common material before it is selected to die (production-death-birth, pDB). The empty space is occupied by its neighbors according to their replication rates. Using analytical and numerical methods, I show that coexistence of the cooperator and cheater cells is possible, although atypical, in the linear version of this 1D model under either the DpB or the pDB update rule. While coexistence is impossible in the non-linear model with the pDB update rule, it is one of the typical behaviors in the non-linear model with the DpB update rule. PMID:25025985

  13. Non-Condon nonequilibrium Fermi’s golden rule rates from the linearized semiclassical method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xiang; Geva, Eitan

    2016-08-14

    The nonequilibrium Fermi’s golden rule describes the transition between a photoexcited bright donor electronic state and a dark acceptor electronic state, when the nuclear degrees of freedom start out in a nonequilibrium state. In a previous paper [X. Sun and E. Geva, J. Chem. Theory Comput. 12, 2926 (2016)], we proposed a new expression for the nonequilibrium Fermi’s golden rule within the framework of the linearized semiclassical approximation and based on the Condon approximation, according to which the electronic coupling between donor and acceptor is assumed constant. In this paper we propose a more general expression, which is applicable to the case of non-Condon electronic coupling. We test the accuracy of the new non-Condon nonequilibrium Fermi’s golden rule linearized semiclassical expression on a model where the donor and acceptor potential energy surfaces are parabolic and identical except for shifts in the equilibrium energy and geometry, and the coupling between them is linear in the nuclear coordinates. Since non-Condon effects may or may not give rise to conical intersections, both possibilities are examined by considering the following: (1) A modified Garg-Onuchic-Ambegaokar model for charge transfer in the condensed phase, where the donor-acceptor coupling is linear in the primary-mode coordinate, and for which non-Condon effects do not give rise to a conical intersection; (2) the linear vibronic coupling model for electronic transitions in gas phase molecules, where non-Condon effects give rise to conical intersections. We also present a comprehensive comparison between the linearized semiclassical expression and a progression of more approximate expressions, in both normal and inverted regions, and over a wide range of initial nonequilibrium states, temperatures, and frictions.

  14. Amplitudes for multiphoton quantum processes in linear optics

    NASA Astrophysics Data System (ADS)

    Urías, Jesús

    2011-07-01

    The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem for the expectation value, on any vector state, of products of linear operators, in general, is proved. We use it to extract the combinatorics of any multiphoton quantum processes in linear optics. The result is presented as a concise rule to write down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides an intuitive physical insight about quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.
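
    As context for the combinatorics involved, transition amplitudes for Fock-state inputs to a linear optical network are proportional to permanents of submatrices of the network's unitary; a minimal (exponential-time) permanent evaluation via Ryser's inclusion-exclusion formula, offered only as an illustration, is:

        from itertools import combinations
        import numpy as np

        def permanent(A):
            """Permanent of a square matrix via Ryser's formula."""
            n = A.shape[0]
            total = 0.0
            for r in range(1, n + 1):
                for cols in combinations(range(n), r):
                    total += (-1)**(n - r) * np.prod(A[:, cols].sum(axis=1))
            return total

        print(permanent(np.array([[1.0, 2.0], [3.0, 4.0]])))   # 1*4 + 2*3 = 10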

  15. Linearly polarized GHz magnetization dynamics of spin helix modes in the ferrimagnetic insulator Cu2OSeO3.

    PubMed

    Stasinopoulos, I; Weichselbaumer, S; Bauer, A; Waizner, J; Berger, H; Garst, M; Pfleiderer, C; Grundler, D

    2017-08-01

    Linear dichroism, the polarization-dependent absorption of electromagnetic waves, is routinely exploited in applications as diverse as structure determination of DNA or polarization filters in optical technologies. Here filamentary absorbers with a large length-to-width ratio are a prerequisite. For magnetization dynamics in the few-GHz frequency regime, strictly linear dichroism had not been observed for more than eight decades. Here, we show that the bulk chiral magnet Cu2OSeO3 exhibits linearly polarized magnetization dynamics at an unexpectedly small frequency of about 2 GHz at zero magnetic field. Unlike optical filters that are assembled from filamentary absorbers, the magnet is shown to provide linear polarization as a bulk material for an extremely wide range of length-to-width ratios. In addition, the polarization plane of a given mode can be switched by 90° via a small variation in width. Our findings shed new light on magnetization dynamics in that ferrimagnetic ordering combined with antisymmetric exchange interaction offers strictly linear polarization and cross-polarized modes for a broad spectrum of sample shapes at zero field. The discovery allows for novel design rules and optimization of microwave-to-magnon transduction in emerging microwave technologies.

  16. Computer simulation of two-dimensional unsteady flows in estuaries and embayments by the method of characteristics : basic theory and the formulation of the numerical method

    USGS Publications Warehouse

    Lai, Chintu

    1977-01-01

    Two-dimensional unsteady flows of homogeneous density in estuaries and embayments can be described by hyperbolic, quasi-linear partial differential equations involving three dependent and three independent variables. A linear combination of these equations leads to a parametric equation of characteristic form, which consists of two parts: total differentiation along the bicharacteristics and partial differentiation in space. For its numerical solution, the specified-time-interval scheme has been used. The unknown partial space-derivative terms can be eliminated first by suitable combinations of difference equations, converted from the corresponding differential forms and written along four selected bicharacteristics and a streamline. The other unknowns are thus made solvable from the known variables on the current time plane. The computation is carried to second-order accuracy by using the trapezoidal rule of integration. Means to handle complex boundary conditions are developed for practical application. Computer programs have been written and a mathematical model has been constructed for flow simulation. The favorable computer outputs suggest that further exploration and development of the model are worthwhile. (Woodard-USGS)

  17. Evaluation of the grand-canonical partition function using expanded Wang-Landau simulations. III. Impact of combining rules on mixtures properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desgranges, Caroline; Delhommelle, Jerome

    2014-03-14

    Combining rules, such as the Lorentz-Berthelot rules, are routinely used to calculate the thermodynamic properties of mixtures using molecular simulations. Here we extend the expanded Wang-Landau simulation approach to determine the impact of the combining rules on the value of the partition function of binary systems, and, in turn, on the phase coexistence and thermodynamics of these mixtures. We study various types of mixtures, ranging from systems of rare gases to biologically and technologically relevant mixtures, such as water-urea and water-carbon dioxide. Comparing the simulation results to the experimental data on mixtures of rare gases allows us to rank the performance of combining rules. We find that the widely used Lorentz-Berthelot rules exhibit the largest deviations from the experimental data, both for the bulk and at coexistence, while the Kong and Waldman-Hagler rules provide much better alternatives. In particular, in the case of aqueous solutions of urea, we show that the use of the Lorentz-Berthelot rules has a strong impact on the Gibbs free energy of the solute, overshooting the value predicted by the Waldman-Hagler rules by 7%. This result emphasizes the importance of the combining rule for the determination of hydration free energies using molecular simulations.
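
    For concreteness, the best- and worst-ranked rules mentioned above can be written down directly; the Lennard-Jones parameters in the example are rough literature-style values, not the paper's:

        import numpy as np

        def lorentz_berthelot(eps_i, sig_i, eps_j, sig_j):
            """Arithmetic mean for sigma, geometric mean for epsilon."""
            return np.sqrt(eps_i * eps_j), 0.5 * (sig_i + sig_j)

        def waldman_hagler(eps_i, sig_i, eps_j, sig_j):
            """Sixth-power mean for sigma with a compensating epsilon scaling."""
            s6 = 0.5 * (sig_i**6 + sig_j**6)
            eps = np.sqrt(eps_i * eps_j) * (sig_i**3 * sig_j**3) / s6
            return eps, s6**(1.0 / 6.0)

        # Illustrative parameters (epsilon/k in K, sigma in Angstrom):
        ne, xe = (36.0, 2.8), (220.0, 4.1)
        print(lorentz_berthelot(*ne, *xe))
        print(waldman_hagler(*ne, *xe))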

  18. Extending the Coyote emulator to dark energy models with standard w0-wa parametrization of the equation of state

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Casarini, L.; Bonometto, S.A.; Tessarotto, E.

    2016-08-01

    We discuss an extension of the Coyote emulator to predict non-linear matter power spectra of dark energy (DE) models with a scale-factor-dependent equation of state of the form w = w0 + (1 - a)wa. The extension is based on the mapping rule between non-linear spectra of DE models with constant equation of state and those with a time-varying one, originally introduced in ref. [40]. Using a series of N-body simulations we show that the spectral equivalence is accurate to sub-percent level across the same range of modes and redshift covered by the Coyote suite. Thus, the extended emulator provides a very efficient and accurate tool to predict non-linear power spectra for DE models with the w0-wa parametrization. According to the same criteria we have developed a numerical code that we have implemented in a dedicated module for the CAMB code, that can be used in combination with the Coyote Emulator in likelihood analyses of non-linear matter power spectrum measurements. All codes can be found at https://github.com/luciano-casarini/pkequal.

  19. COSMOLOGY OF CHAMELEONS WITH POWER-LAW COUPLINGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mota, David F.; Winther, Hans A.

    2011-05-20

    In chameleon field theories, a scalar field can couple to matter with gravitational strength and still evade local gravity constraints due to a combination of self-interactions and the couplings to matter. Originally, these theories were proposed with a constant coupling to matter; however, the chameleon mechanism also extends to the case where the coupling becomes field dependent. We study the cosmology of chameleon models with power-law couplings and power-law potentials. It is found that these generalized chameleons, when viable, have a background expansion very close to {Lambda}CDM, but can in some special cases enhance the growth of the linear perturbations at low redshifts. For the models we consider, it is found that this region of the parameter space is ruled out by local gravity constraints. Imposing a coupling to dark matter only, the local constraints are avoided, and it is possible to have observable signatures on the linear matter perturbations.

  20. Linear solvation energy relationships: "rule of thumb" for estimation of variable values

    USGS Publications Warehouse

    Hickey, James P.; Passino-Reader, Dora R.

    1991-01-01

    For the linear solvation energy relationship (LSER), values are listed for each of the variables (Vi/100, π*, βm, αm) for fundamental organic structures and functional groups. We give guidelines to estimate LSER variable values quickly for a vast array of possible organic compounds such as those found in the environment. The difficulty in generating these variables has greatly discouraged the application of this quantitative structure-activity relationship (QSAR) method. This paper presents the first compilation of molecular functional group values together with a utilitarian set of LSER variable estimation rules. The availability of these variable values and rules should facilitate widespread application of LSER for hazard evaluation of environmental contaminants.
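
    As a sketch of how these variables enter, a generic Kamlet-Taft-style LSER (the coefficient labels are the conventional ones, not values from the paper) takes the form

        \log SP = c + m\,(V_i/100) + s\,\pi^{*} + b\,\beta_m + a\,\alpha_m

    where SP is the solute property being correlated and c, m, s, b, a are fitted coefficients.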

  1. A Hierarchy of Proof Rules for Checking Differential Invariance of Algebraic Sets

    DTIC Science & Technology

    2014-11-01

    Khalil Ghorbal, Andrew Sogokon, André Platzer. CMU, November 2014.

  2. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    PubMed

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
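
    A minimal sketch of the shift rule, assuming the log2-based critical value usually associated with these run chart rules (the paper gives the exact derivation); the simulated-data check is purely illustrative:

        import numpy as np

        def longest_run(x):
            """Longest run of consecutive points on the same side of the median;
            points exactly on the median are skipped ("useful" observations)."""
            med = np.median(x)
            signs = [v > med for v in x if v != med]
            best = cur = 1
            for a, b in zip(signs, signs[1:]):
                cur = cur + 1 if a == b else 1
                best = max(best, cur)
            return best

        def shift_signal(x):
            """Signal a shift if the longest run exceeds round(log2(n) + 3),
            n = number of useful observations (assumed critical value)."""
            n = sum(v != np.median(x) for v in x)
            return longest_run(x) > round(np.log2(n) + 3)

        rng = np.random.default_rng(1)
        stable = rng.normal(0, 1, 24)
        shifted = np.concatenate([rng.normal(0, 1, 12), rng.normal(2, 1, 12)])
        print(shift_signal(stable), shift_signal(shifted))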

  3. Integrating the ECG power-line interference removal methods with rule-based system.

    PubMed

    Kumaravel, N; Senthil, A; Sridhar, K S; Nithiyanandam, N

    1995-01-01

    The power-line frequency interference in electrocardiographic signals is eliminated to enhance the signal characteristics for diagnosis. The power-line frequency normally varies +/- 1.5 Hz from its standard value of 50 Hz. In the present work, the performances of the linear FIR filter, the wave digital filter (WDF) and the adaptive filter are studied for power-line frequency variations from 48.5 to 51.5 Hz in steps of 0.5 Hz. The advantage of the LMS adaptive filter over fixed-frequency filters in removing power-line interference, even when the interference frequency varies by +/- 1.5 Hz from its normal value of 50 Hz, is clearly demonstrated. A novel method of integrating a rule-based system approach with the linear FIR filter, and also with the wave digital filter, is proposed. The performances of the rule-based FIR filter and the rule-based wave digital filter are compared with the LMS adaptive filter.
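
    A minimal sketch of the LMS cancellation idea, assuming a sine/cosine reference pair at the nominal mains frequency; the sampling rate, amplitudes and step size are illustrative:

        import numpy as np

        fs = 500.0                                   # sampling rate in Hz (assumed)
        t = np.arange(0, 10, 1 / fs)
        ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)      # stand-in for the ECG signal
        mains = 0.5 * np.sin(2 * np.pi * 50.4 * t)   # interference drifted to 50.4 Hz
        x = ecg + mains

        # Reference pair at the nominal 50 Hz; the LMS weights rotate slowly to
        # track the 0.4 Hz offset between the reference and the actual interference.
        ref = np.stack([np.sin(2 * np.pi * 50 * t),
                        np.cos(2 * np.pi * 50 * t)], axis=1)
        w = np.zeros(2)
        mu = 0.05                                    # adaptation step size
        clean = np.empty_like(x)
        for n in range(len(x)):
            y = ref[n] @ w                           # current interference estimate
            e = x[n] - y                             # error doubles as cleaned sample
            w += 2 * mu * e * ref[n]                 # LMS weight update
            clean[n] = e
        print(np.std(x[-500:]), np.std(clean[-500:]))   # interference largely removed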

  4. Axial and Torsional Load-Type Sequencing in Cumulative Fatigue: Low Amplitude Followed by High Amplitude Loading

    NASA Technical Reports Server (NTRS)

    Bonacuse, Peter J.; Kalluri, Sreeramesh

    2001-01-01

    The experiments described herein were performed to determine whether damage imposed by axial loading interacts with damage imposed by torsional loading. This paper is a follow-on to a study that investigated the effects of load-type sequencing on the cumulative fatigue behavior of a cobalt-base superalloy, Haynes 188, at 538 °C. Both the current and the previous study were used to test the applicability of cumulative fatigue damage models to conditions where damage is imposed by different loading modes. In the previous study, axial and torsional two-load-level cumulative fatigue experiments were conducted, in varied combinations, with the low-cycle fatigue (high amplitude loading) applied first. In the present study, the high-cycle fatigue (low amplitude loading) is applied initially. As in the previous study, four sequences (axial/axial, torsion/torsion, axial/torsion, and torsion/axial) of two-load-level cumulative fatigue experiments were performed. The amount of fatigue damage contributed by each of the imposed loads was estimated by both the Palmgren-Miner linear damage rule (LDR) and the non-linear damage curve approach (DCA). Life predictions for the various cumulative loading combinations are compared with experimental results.

  5. Combined Economic and Hydrologic Modeling to Support Collaborative Decision Making Processes

    NASA Astrophysics Data System (ADS)

    Sheer, D. P.

    2008-12-01

    For more than a decade, the core concept of the author's efforts in support of collaborative decision making has been a combination of hydrologic simulation and multi-objective optimization. The modeling has generally been used to support collaborative decision making processes. The OASIS model developed by HydroLogics Inc. solves a multi-objective optimization at each time step using a mixed integer linear program (MILP). The MILP can be configured to include any user-defined objective, including but not limited to economic objectives. For example, estimated marginal values of water for crops and M&I use were included in the objective function to drive trades in a model of the lower Rio Grande. The formulation of the MILP, its constraints and objectives, in any time step is conditional: it changes based on the value of state variables and dynamic external forcing functions, such as rainfall, hydrology, market prices, arrival of migratory fish, water temperature, etc. It therefore acts as a dynamic short-term multi-objective economic optimization for each time step. MILP is capable of solving a general problem that includes a very realistic representation of the physical system characteristics in addition to the normal multi-objective optimization objectives and constraints included in economic models. In all of these models, the short-term objective function is a surrogate for achieving long-term multi-objective results. The long-term performance of any alternative (especially including operating strategies) is evaluated by simulation. An operating rule is the combination of conditions, parameters, constraints and objectives used to determine the formulation of the short-term optimization in each time step. Heuristic wrappers for the simulation program have been developed to improve the parameters of an operating rule, and research is being initiated on a wrapper that will allow a genetic algorithm to improve the form of the rule (conditions, constraints, and short-term objectives) as well. In the models, operating rules represent different models of human behavior, and the objective of the modeling is to find rules for human behavior that perform well in terms of long-term human objectives. The conceptual model used to represent human behavior incorporates economic multi-objective optimization for surrogate objectives, and rules that set those objectives based on current conditions, accounting for uncertainty at least implicitly. The author asserts that real-world operating rules follow this form and have evolved because they have been perceived as successful in the past. Thus, the modeling efforts focus on human behavior in much the same way that economic models focus on human behavior. This paper illustrates the above concepts with real-world examples.

  6. Efficient Web Services Policy Combination

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh; Harman, Joseph G.

    2010-01-01

    Large-scale Web security systems usually involve cooperation between domains with non-identical policies. The network management and Web communication software used by the different organizations presents a stumbling block. Many of the tools used by the various divisions do not have the ability to communicate network management data with each other. At best, this means that manual human intervention into the communication protocols used at various network routers and endpoints is required. Developing practical, sound, and automated ways to compose policies to bridge these differences is a long-standing problem. One of the key subtleties is the need to deal with inconsistencies and defaults where one organization proposes a rule on a particular feature, and another has a different rule or expresses no rule. A general approach is to assign priorities to rules and observe the rules with the highest priorities when there are conflicts. The present methods have an inherent inefficiency, which heavily restricts their practical application. A new, efficient algorithm combines policies utilized for Web services. The method is based on an algorithm that allows an automatic and scalable composition of security policies between multiple organizations. It is based on defeasible policy composition, a promising approach for finding conflicts and resolving priorities between rules. In the general case, policy negotiation is an intractable problem. A promising method, suggested in the literature, is when policies are represented in defeasible logic, and composition is based on rules for non-monotonic inference. In this system, policy writers construct metapolicies describing both the policy that they wish to enforce and annotations describing their composition preferences. These annotations can indicate whether certain policy assertions are required by the policy writer or, if not, under what circumstances the policy writer is willing to compromise and allow other assertions to take precedence. Meta-policies are specified in defeasible logic, a computationally efficient non-monotonic logic developed to model human reasoning. One drawback of this method is that at one point the algorithm starts an exhaustive search of all subsets of the set of conclusions of a defeasible theory. Although the propositional defeasible logic has linear complexity, the set of conclusions here may be large, especially in real-life practical cases. This phenomenon leads to an inefficient exponential explosion of complexity. The current process of obtaining a Web security policy from the combination of two meta-policies consists of two steps. The first is generating a new meta-policy that is a composition of the input meta-policies, and the second is mapping the meta-policy onto a security policy. The new algorithm avoids the exhaustive search in the current algorithm, and provides a security policy that matches all requirements of the involved metapolicies.

  7. Rule-based support system for multiple UMLS semantic type assignments

    PubMed Central

    Geller, James; He, Zhe; Perl, Yehoshua; Morrey, C. Paul; Xu, Julia

    2012-01-01

    Background When new concepts are inserted into the UMLS, they are assigned one or several semantic types from the UMLS Semantic Network by the UMLS editors. However, not every combination of semantic types is permissible. It was observed that many concepts with rare combinations of semantic types have erroneous semantic type assignments or prohibited combinations of semantic types. The correction of such errors is resource-intensive. Objective We design a computational system to inform UMLS editors as to whether a specific combination of two, three, four, or five semantic types is permissible or prohibited or questionable. Methods We identify a set of inclusion and exclusion instructions in the UMLS Semantic Network documentation and derive corresponding rule-categories as well as rule-categories from the UMLS concept content. We then design an algorithm adviseEditor based on these rule-categories. The algorithm specifies rules for an editor how to proceed when considering a tuple (pair, triple, quadruple, quintuple) of semantic types to be assigned to a concept. Results Eight rule-categories were identified. A Web-based system was developed to implement the adviseEditor algorithm, which returns for an input combination of semantic types whether it is permitted, prohibited or (in a few cases) requires more research. The numbers of semantic type pairs assigned to each rule-category are reported. Interesting examples for each rule-category are illustrated. Cases of semantic type assignments that contradict rules are listed, including recently introduced ones. Conclusion The adviseEditor system implements explicit and implicit knowledge available in the UMLS in a system that informs UMLS editors about the permissibility of a desired combination of semantic types. Using adviseEditor might help accelerate the work of the UMLS editors and prevent erroneous semantic type assignments. PMID:23041716

  8. Mathematical programming models for the economic design and assessment of wind energy conversion systems

    NASA Astrophysics Data System (ADS)

    Reinert, K. A.

    The use of linear decision rules (LDR) and chance-constrained programming (CCP) to optimize the performance of wind energy conversion clusters coupled to storage systems is described. Storage is modelled by the LDR and output by CCP. The linear allocation rule and linear release rule prescribe the size of, and optimize, a storage facility with a bypass. Chance constraints are introduced to treat reliability explicitly in terms of an appropriate value from an inverse cumulative distribution function. Details of the deterministic programming structure and a sample problem involving a 500 kW and a 1.5 MW WECS are provided, considering an installed cost of $1/kW. Four demand patterns and three levels of reliability are analyzed to optimize the generator choice and the storage configuration for base-load and peak operating conditions. Deficiencies in the ability to predict reliability and to account for serial correlations are noted in the model, which is nevertheless concluded to be useful for narrowing WECS design options.

  9. Assessing Performance of Multipurpose Reservoir System Using Two-Point Linear Hedging Rule

    NASA Astrophysics Data System (ADS)

    Sasireka, K.; Neelakantan, T. R.

    2017-07-01

    Reservoir operation is one of the important fields of water resource management. Innovative techniques in water resource management are focused on optimizing the available water and decreasing the environmental impact of water utilization on the natural environment. In the operation of a multi-reservoir system, efficient regulation of releases to satisfy demands for various purposes such as domestic supply, irrigation and hydropower can increase the benefit from the reservoir as well as significantly reduce the damage due to floods. The hedging rule is one of the emerging techniques in reservoir operation, which reduces the severity of droughts by accepting a number of smaller shortages. The key objective of this paper is to maximize the minimum power production and improve the reliability of water supply for municipal and irrigation purposes by using a hedging rule. In this paper, a Type II two-point linear hedging rule is applied to improve the operation of the Bargi reservoir in the Narmada basin in India. The results obtained from simulation of the hedging rule are compared with results from the Standard Operating Policy; the comparison shows that the application of the hedging rule significantly improved the reliability of municipal water supply, the reliability of irrigation releases and firm power production.
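
    A minimal sketch of a two-point linear hedging policy of the general type discussed (the trigger points and piecewise-linear form are illustrative, not the Bargi calibration):

        def two_point_hedging(available, demand, start, end):
            """Release full demand when available water exceeds `end`; ration
            linearly between `start` and `end`; below `start`, release all
            available water. Requires start < demand < end."""
            if available >= end:
                return demand
            if available <= start:
                return available
            frac = (available - start) / (end - start)
            return start + frac * (demand - start)   # linear interpolation

        print(two_point_hedging(80.0, 100.0, 50.0, 150.0))   # 65.0: hedged release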

  10. Extending Linear Models to Non-Linear Contexts: An In-Depth Study about Two University Students' Mathematical Productions

    ERIC Educational Resources Information Center

    Esteley, Cristina; Villarreal, Monica; Alagia, Humberto

    2004-01-01

    This research report presents a study of the work of agronomy majors in which an extension of linear models to non-linear contexts can be observed. By linear models we mean the model y=a.x+b, some particular representations of direct proportionality and the diagram for the rule of three. Its presence and persistence in different types of problems…

  11. Combining multiple imputation and meta-analysis with individual participant data

    PubMed Central

    Burgess, Stephen; White, Ian R; Resche-Rigon, Matthieu; Wood, Angela M

    2013-01-01

    Multiple imputation is a strategy for the analysis of incomplete data such that the impact of the missingness on the power and bias of estimates is mitigated. When data from multiple studies are collated, we can propose both within-study and multilevel imputation models to impute missing data on covariates. It is not clear how to choose between imputation models or how to combine imputation and inverse-variance weighted meta-analysis methods. This is especially important as often different studies measure data on different variables, meaning that we may need to impute data on a variable which is systematically missing in a particular study. In this paper, we consider a simulation analysis of sporadically missing data in a single covariate with a linear analysis model and discuss how the results would be applicable to the case of systematically missing data. We find in this context that ensuring the congeniality of the imputation and analysis models is important to give correct standard errors and confidence intervals. For example, if the analysis model allows between-study heterogeneity of a parameter, then we should incorporate this heterogeneity into the imputation model to maintain the congeniality of the two models. In an inverse-variance weighted meta-analysis, we should impute missing data and apply Rubin's rules at the study level prior to meta-analysis, rather than meta-analyzing each of the multiple imputations and then combining the meta-analysis estimates using Rubin's rules. We illustrate the results using data from the Emerging Risk Factors Collaboration. PMID:23703895
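
    The study-level pooling step follows Rubin's rules, which are compact enough to state directly (a generic implementation, not the authors' code):

        import numpy as np

        def rubins_rules(estimates, variances):
            """Pool m completed-data results: the point estimate is the mean;
            total variance combines within-imputation W and between-imputation B."""
            q = np.mean(estimates)
            W = np.mean(variances)
            B = np.var(estimates, ddof=1)
            m = len(estimates)
            T = W + (1 + 1 / m) * B
            return q, T

        q, T = rubins_rules([1.02, 0.97, 1.10], [0.04, 0.05, 0.04])
        print(q, np.sqrt(T))   # pooled estimate and its standard error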

  12. Combined rule extraction and feature elimination in supervised classification.

    PubMed

    Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E

    2012-09-01

    There are a vast number of biology related research problems involving a combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus it will be beneficial to have a good algorithm to simultaneously extract rules and select features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.

  13. Assessment of Solder Joint Fatigue Life Under Realistic Service Conditions

    NASA Astrophysics Data System (ADS)

    Hamasha, Sa'd.; Jaradat, Younis; Qasaimeh, Awni; Obaidat, Mazin; Borgesen, Peter

    2014-12-01

    The behavior of lead-free solder alloys under complex loading scenarios is still not well understood. Common damage accumulation rules fail to account for strong effects of variations in cycling amplitude, and random vibration test results cannot be interpreted in terms of performance under realistic service conditions. This is a result of the effects of cycling parameters on materials properties. These effects are not yet fully understood or quantitatively predictable, preventing modeling based on parameters such as strain, work, or entropy. Depending on the actual spectrum of amplitudes, Miner's rule of linear damage accumulation has been shown to overestimate life by more than an order of magnitude, and greater errors are predicted for other combinations. Consequences may be particularly critical for so-called environmental stress screening. Damage accumulation has, however, been shown to scale with the inelastic work done, even if amplitudes vary. This and the observation of effects of loading history on subsequent work per cycle provide for a modified damage accumulation rule which allows for the prediction of life. Individual joints of four different Sn-Ag-Cu-based solder alloys (SAC305, SAC105, SAC-Ni, and SACXplus) were cycled in shear at room temperature, alternating between two different amplitudes while monitoring the evolution of the effective stiffness and work per cycle. This helped elucidate general trends and behaviors that are expected to occur in vibrations of microelectronics assemblies. Deviations from Miner's rule varied systematically with the combination of amplitudes, the sequences of cycles, and the strain rates in each. The severity of deviations also varied systematically with Ag content in the solder, but major effects were observed for all the alloys. A systematic analysis was conducted to assess whether scenarios might exist in which the more fatigue-resistant high-Ag alloys would fail sooner than the lower-Ag ones.

  14. Nonlinear dynamic systems identification using recurrent interval type-2 TSK fuzzy neural network - A novel structure.

    PubMed

    El-Nagar, Ahmad M

    2018-01-01

    In this study, a novel structure of a recurrent interval type-2 Takagi-Sugeno-Kang (TSK) fuzzy neural network (FNN) is introduced for the identification of nonlinear dynamic and time-varying systems. It combines type-2 fuzzy sets (T2FSs) and a recurrent FNN to handle data uncertainties. The fuzzy firing strengths in the proposed structure are fed back to the network input as internal variables. Interval type-2 fuzzy sets (IT2FSs) are used to describe the antecedent part of each rule, while the consequent part is TSK-type, a linear function of the internal variables and the external inputs with interval weights. All the type-2 fuzzy rules of the proposed RIT2TSKFNN are learned on-line through structure and parameter learning, which are performed using type-2 fuzzy clustering. The antecedent and consequent parameters of the proposed RIT2TSKFNN are updated based on a Lyapunov function to achieve network stability. The obtained results indicate that the proposed network has a small root mean square error (RMSE) and a small integral of square error (ISE) with a small number of rules and a small computation time compared with other type-2 FNNs.

  15. Optimal Sequential Rules for Computer-Based Instruction.

    ERIC Educational Resources Information Center

    Vos, Hans J.

    1998-01-01

    Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…

  16. 77 FR 36324 - Self-Regulatory Organizations; NYSE MKT LLC; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... proposed combination of NYSE Euronext and Deutsche Börse AG (the ``Combination'').\\4\\ Under the rule... prohibit the Combination, NYSE Euronext and Deutsche Börse agreed to terminate the agreement to...

  17. 77 FR 36307 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Amending...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... proposed combination of NYSE Euronext and Deutsche Börse AG (the ``Combination'').\\4\\ Under the rule... prohibit the Combination, NYSE Euronext and Deutsche Börse agreed to terminate the agreement to...

  18. Why is working memory capacity related to matrix reasoning tasks?

    PubMed

    Harrison, Tyler L; Shipstead, Zach; Engle, Randall W

    2015-04-01

    One of the reasons why working memory capacity is so widely researched is its substantial relationship with fluid intelligence. Although this relationship has been found in numerous studies, researchers have been unable to provide a conclusive answer as to why the two constructs are related. In a recent study, researchers examined which attributes of Raven's Progressive Matrices were most strongly linked with working memory capacity (Wiley, Jarosz, Cushen, & Colflesh, Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 256-263, 2011). In that study, Raven's problems that required a novel combination of rules to solve were more strongly correlated with working memory capacity than were problems that did not. In the present study, we wanted to conceptually replicate the Wiley et al. results while controlling for a few potential confounds. Thus, we experimentally manipulated whether a problem required a novel combination of rules and found that repeated-rule-combination problems were more strongly related to working memory capacity than were novel-rule-combination problems. The relationship to other measures of fluid intelligence did not change based on whether the problem required a novel rule combination.

  19. Watching Nanoscale Self-Assembly Kinetics of Gold Prisms in Liquids

    NASA Astrophysics Data System (ADS)

    Kim, Juyeong; Ou, Zihao; Jones, Matthew R.; Chen, Qian

    We use liquid-phase transmission electron microscopy to watch the self-assembly of gold triangular prisms into polymer-like structures. The in situ dynamics monitoring enabled by liquid-phase transmission electron microscopy, single-nanoparticle tracking, and the marked conceptual similarity between molecular reactions and nanoparticle self-assembly combine to elucidate the following mechanistic understanding: step-growth-polymerization-based assembly statistics, kinetic pathways sampling particle-curvature-dependent energy minima and their interconversions, and directed assembly into polymorphs (linear or cyclic chains) through in situ modulation of the prism bonding geometry. Our study bridges the constituent kinetics on the molecular and nanoparticle length scales, which enriches the design rules in directed self-assembly of anisotropic nanoparticles.

  20. Elliptic biquaternion algebra

    NASA Astrophysics Data System (ADS)

    Özen, Kahraman Esen; Tosun, Murat

    2018-01-01

    In this study, we define the elliptic biquaternions and construct the algebra of elliptic biquaternions over the elliptic number field. We also give basic properties of elliptic biquaternions. An elliptic biquaternion is of the form A0 + A1i + A2j + A3k, a linear combination of {1, i, j, k} where the four components A0, A1, A2 and A3 are elliptic numbers. Here, 1, i, j, k are the quaternion basis of the elliptic biquaternion algebra and satisfy the same multiplication rules that are satisfied in both the real quaternion algebra and the complex quaternion algebra. In addition, we discuss the terms conjugate, inner product, semi-norm, modulus and inverse for elliptic biquaternions.
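
    For reference, the Hamilton multiplication rules satisfied by the basis {1, i, j, k}, which the abstract states carry over to the elliptic biquaternion algebra, are

        i^2 = j^2 = k^2 = ijk = -1, \qquad
        ij = k = -ji, \quad jk = i = -kj, \quad ki = j = -ik.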

  1. Learning and Tuning of Fuzzy Rules

    NASA Technical Reports Server (NTRS)

    Berenji, Hamid R.

    1997-01-01

    In this chapter, we review some of the current techniques for learning and tuning fuzzy rules. For clarity, we refer to the process of generating rules from data as the learning problem and distinguish it from tuning an already existing set of fuzzy rules. For learning, we touch on unsupervised learning techniques such as fuzzy c-means, fuzzy decision tree systems, fuzzy genetic algorithms, and linear fuzzy rules generation methods. For tuning, we discuss Jang's ANFIS architecture, Berenji-Khedkar's GARIC architecture and its extensions in GARIC-Q. We show that the hybrid techniques capable of learning and tuning fuzzy rules, such as CART-ANFIS, RNN-FLCS, and GARIC-RB, are desirable in development of a number of future intelligent systems.

  2. High-order boundary integral equation solution of high frequency wave scattering from obstacles in an unbounded linearly stratified medium

    NASA Astrophysics Data System (ADS)

    Barnett, Alex H.; Nelson, Bradley J.; Mahoney, J. Matthew

    2015-09-01

    We apply boundary integral equations for the first time to the two-dimensional scattering of time-harmonic waves from a smooth obstacle embedded in a continuously-graded unbounded medium. In the case we solve, the square of the wavenumber (refractive index) varies linearly in one coordinate, i.e. (Δ + E + x2) u(x1, x2) = 0, where E is a constant; this models quantum particles of fixed energy in a uniform gravitational field, and has broader applications to stratified media in acoustics, optics and seismology. We evaluate the fundamental solution efficiently with exponential accuracy via numerical saddle-point integration, using the truncated trapezoid rule with typically 10^2 nodes, with an effort that is independent of the frequency parameter E. By combining with a high-order Nyström quadrature, we are able to solve the scattering from obstacles 50 wavelengths across to 11 digits of accuracy in under a minute on a desktop or laptop.

  3. Perception of the dynamic visual vertical during sinusoidal linear motion.

    PubMed

    Pomante, A; Selen, L P J; Medendorp, W P

    2017-10-01

    The vestibular system provides information for spatial orientation. However, this information is ambiguous: because the otoliths sense the gravitoinertial force, they cannot distinguish gravitational and inertial components. As a consequence, prolonged linear acceleration of the head can be interpreted as tilt, referred to as the somatogravic effect. Previous modeling work suggests that the brain disambiguates the otolith signal according to the rules of Bayesian inference, combining noisy canal cues with the a priori assumption that prolonged linear accelerations are unlikely. Within this modeling framework, the noise of the vestibular signals affects the dynamic characteristics of the tilt percept during linear whole-body motion. To test this prediction, we devised a novel paradigm to psychometrically characterize the dynamic visual vertical (as a proxy for the tilt percept) during passive sinusoidal linear motion along the interaural axis (0.33 Hz motion frequency, 1.75 m/s^2 peak acceleration, 80 cm displacement). While subjects (n = 10) kept fixation on a central body-fixed light, a line was briefly flashed (5 ms) at different phases of the motion, the orientation of which had to be judged relative to gravity. Consistent with the model's prediction, subjects showed a phase-dependent modulation of the dynamic visual vertical, with a subject-specific phase shift with respect to the imposed acceleration signal. The magnitude of this modulation was smaller than predicted, suggesting a contribution of nonvestibular signals to the dynamic visual vertical. Despite their dampening effect, our findings may point to a link between the noise components in the vestibular system and the characteristics of the dynamic visual vertical. NEW & NOTEWORTHY A fundamental question in neuroscience is how the brain processes vestibular signals to infer the orientation of the body and objects in space. We show that, under sinusoidal linear motion, systematic error patterns appear in the disambiguation of linear acceleration and spatial orientation. We discuss the dynamics of these illusory percepts in terms of a dynamic Bayesian model that combines uncertainty in the vestibular signals with priors based on the natural statistics of head motion.

  4. Optical nonlinearities of excitons in monolayer MoS2

    NASA Astrophysics Data System (ADS)

    Soh, Daniel B. S.; Rogers, Christopher; Gray, Dodd J.; Chatterjee, Eric; Mabuchi, Hideo

    2018-04-01

    We calculate linear and nonlinear optical susceptibilities arising from the excitonic states of monolayer MoS2 for in-plane light polarizations, using second-quantized bound and unbound exciton operators. Optical selection rules are critical for obtaining the susceptibilities. We derive the valley-chirality rule for the second-order harmonic generation in monolayer MoS2 and find that the third-order harmonic process is efficient only for linearly polarized input light while the third-order two-photon process (optical Kerr effect) is efficient for circularly polarized light using a higher order exciton state. The absence of linear absorption due to the band gap and the unusually strong two-photon third-order nonlinearity make the monolayer MoS2 excitonic structure a promising resource for coherent nonlinear photonics.

  5. High-cycle fatigue characterization of titanium 5Al-2.5Sn alloy

    NASA Technical Reports Server (NTRS)

    Mahfuz, H.; Xin, Yu T.; Jeelani, S.

    1993-01-01

    High-cycle fatigue behavior of titanium 5Al-2.5Sn alloy at room temperature has been studied. S-N curve characterization is performed at different stress ratios ranging from 0 to 0.9 on a subsized fatigue specimen. Both two-stress and three-stress level tests are conducted at different stress ratios to study cumulative fatigue damage. Life prediction techniques based on the linear damage rule, the double linear damage rule, and the damage curve approach are applied, and the results are compared with the experimental data. The agreement between prediction and experiment is found to be excellent.
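
    As a reference point, the linear damage rule (Palmgren-Miner) tested here can be written in a few lines. A minimal sketch with hypothetical numbers; the rule predicts failure when the summed cycle fractions reach 1:

```python
def miner_damage(blocks):
    """Linear damage rule: blocks is a list of
    (applied_cycles, cycles_to_failure_at_that_stress_level)."""
    return sum(n / N for n, N in blocks)

# Hypothetical two-stress-level test: 20% + 30% of life consumed
damage = miner_damage([(2.0e4, 1.0e5), (6.0e4, 2.0e5)])
print(damage)  # 0.5 -> failure predicted once the remaining 50% is used
```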

  6. Cramer's Rule Revisited

    ERIC Educational Resources Information Center

    Ayoub, Ayoub B.

    2005-01-01

    In 1750, the Swiss mathematician Gabriel Cramer published a well-written algebra book entitled "Introduction a l'Analyse des Lignes Courbes Algebriques." In the appendix to this book, Cramer gave, without proof, the rule named after him for solving a linear system of equations using determinants (Kosinki, 2001). Since then several derivations of…
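
    The rule itself is compact enough to state in code: each unknown is a ratio of two determinants, where the numerator replaces one column of the coefficient matrix with the right-hand side. A minimal sketch (our illustration; practical for small systems only):

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # replace column i with the RHS
        x[i] = np.linalg.det(Ai) / d
    return x

print(cramer([[2, 1], [1, 3]], [3, 5]))  # -> [0.8, 1.4]
```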

  7. 78 FR 75386 - Entergy Operations, Inc.; Combined License Application for River Bend Station Unit 3, Exemption...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... Combined License Application To Comply With Enhancements to Emergency Preparedness Rule AGENCY: Nuclear... an exemption from addressing enhancements to the Emergency Preparedness (EP) rules in their Combined... to Emergency Preparedness Regulations. EOI's requested exemption is seen as an open-ended, one-time...

  8. Reinforcement Learning Trees

    PubMed Central

    Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R.

    2015-01-01

    In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings. PMID:26903687

  9. A Markovian engine for a biological energy transducer: the catalytic wheel.

    PubMed

    Tsong, Tian Yow; Chang, Cheng-Hung

    2007-04-01

    The molecular machines in biological cells are made of proteins, DNAs and other classes of molecules. The structures of these molecules are characteristically "soft" and highly flexible, and yet their interactions with other molecules or ions are specific and selective. This chapter discusses a prevalent form, the catalytic wheel, or the energy transducer of cells, examines its mechanism of action, and extracts from it a set of simple but general rules for understanding the energetics of biomolecular devices. These rules should also benefit the design of man-made nanometer-scale machines such as rotary motors or track-guided linear transporters. We will focus on electric work that, by matching system dynamics and then enhancing the conformational fluctuation of one or several driver proteins, converts a stochastic input of energy into rotation or locomotion of a receptor protein. The spatial (or barrier) and temporal symmetry breakings required for selected driver/receptor combinations are examined. This electric ratchet consists of a core engine that follows Markovian dynamics, alleviates difficulties encountered in rigid mechanical models, and is tailored to the soft-matter characteristics of biomolecules.

  10. Comparison of conventional rule based flow control with control processes based on fuzzy logic in a combined sewer system.

    PubMed

    Klepiszewski, K; Schmitt, T G

    2002-01-01

    While conventional rule-based, real-time flow control of sewer systems is in common use, control systems based on fuzzy logic have been used only rarely, but successfully. The intention of this study is to compare a conventional rule-based control of a combined sewer system with a fuzzy logic control by using hydrodynamic simulation. The objective of both control strategies is to reduce the combined sewer overflow volume by optimizing the utilized storage capacities of four combined sewer overflow tanks. The control systems affect the outflow of the four combined sewer overflow tanks depending on the water levels inside the structures. Both systems use an identical rule base. The developed control systems are tested and optimized for a single storm event with heterogeneous hydraulic load conditions and local discharge. Finally, the efficiencies of the two different control systems are compared for two more storm events. The results indicate that the conventional rule-based control and the fuzzy control reach the objective of the control strategy similarly well. In spite of the higher expense of designing the fuzzy control system, its use provides no advantages in this case.

  11. Combining High Sensitivity Cardiac Troponin I and Cardiac Troponin T in the Early Diagnosis of Acute Myocardial Infarction.

    PubMed

    van der Linden, Noreen; Wildi, Karin; Twerenbold, Raphael; Pickering, John W; Than, Martin; Cullen, Louise; Greenslade, Jaimi; Parsonage, William; Nestelberger, Thomas; Boeddinghaus, Jasper; Badertscher, Patrick; Rubini Giménez, Maria; Klinkenberg, Lieke J J; Bekers, Otto; Schöni, Aline; Keller, Dagmar I; Sabti, Zaid; Puelacher, Christian; Cupa, Janosch; Schumacher, Lukas; Kozhuharov, Nikola; Grimm, Karin; Shrestha, Samyut; Flores, Dayana; Freese, Michael; Stelzig, Claudia; Strebel, Ivo; Miró, Òscar; Rentsch, Katharina; Morawiec, Beata; Kawecki, Damian; Kloos, Wanda; Lohrmann, Jens; Richards, A Mark; Troughton, Richard; Pemberton, Christopher; Osswald, Stefan; van Dieijen-Visser, Marja P; Mingels, Alma M; Reichlin, Tobias; Meex, Steven J R; Mueller, Christian

    2018-04-24

    Background: Combining two signals of cardiomyocyte injury, cardiac troponin I (cTnI) and T (cTnT), might overcome some individual pathophysiological and analytical limitations and thereby increase diagnostic accuracy for acute myocardial infarction (AMI) with a single blood draw. We aimed to evaluate the diagnostic performance of combinations of high sensitivity (hs) cTnI and hs-cTnT for the early diagnosis of AMI. Methods: The diagnostic performance of combining hs-cTnI (Architect, Abbott) and hs-cTnT (Elecsys, Roche) concentrations (sum, product, ratio and a combination algorithm) obtained at the time of presentation was evaluated in a large multicenter diagnostic study of patients with suspected AMI. The optimal rule-out and rule-in thresholds were externally validated in a second large multicenter diagnostic study. The proportion of patients eligible for early rule-out was compared with the ESC 0/1 and 0/3 hour algorithms. Results: Combining hs-cTnI and hs-cTnT concentrations did not consistently increase overall diagnostic accuracy as compared with the individual isoforms. However, the combination improved the proportion of patients meeting criteria for very early rule-out. With the ESC 2015 guideline recommended algorithms and cut-offs, the proportion meeting rule-out criteria after the baseline blood sampling was limited (6-24%) and assay dependent. Application of optimized cut-off values using the sum (9 ng/L) and product (18 ng²/L²) of hs-cTnI and hs-cTnT concentrations led to an increase in the proportion ruled out after a single blood draw to 34-41% in the original cohort (sum: negative predictive value (NPV) 100% (95% CI: 99.5-100%); product: NPV 100% (95% CI: 99.5-100%)) and in the validation cohort (sum: NPV 99.6% (95% CI: 99.0-99.9%); product: NPV 99.4% (95% CI: 98.8-99.8%)). The use of a combination algorithm (hs-cTnI <4 ng/L and hs-cTnT <9 ng/L) showed comparable results for rule-out (40-43% ruled out; NPV original cohort 99.9% (95% CI: 99.2-100%); NPV validation cohort 99.5% (95% CI: 98.9-99.8%)) and rule-in (PPV original cohort 74.4% (95% CI: 69.6-78.8%); PPV validation cohort 84.0% (95% CI: 79.7-87.6%)). Conclusions: New strategies combining hs-cTnI and hs-cTnT concentrations may significantly increase the number of patients eligible for very early and safe rule-out, but do not seem helpful for the rule-in of AMI. Clinical Trial Registration: APACE URL: www.clinicaltrial.gov, Unique Identifier: NCT00470587; ADAPT URL: www.anzctr.org.au, Unique Identifier: ACTRN12611001069943.
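
    For illustration, the three rule-out criteria reported above can be written as simple predicates. The thresholds are copied from the abstract; the directionality (values below the cut-off rule out) is our reading, and this sketch is in no way clinical software:

```python
def rule_out_sum(ctni, ctnt):        # concentrations in ng/L
    return (ctni + ctnt) < 9         # sum cut-off, ng/L

def rule_out_product(ctni, ctnt):
    return (ctni * ctnt) < 18        # product cut-off, ng^2/L^2

def rule_out_combination(ctni, ctnt):
    return ctni < 4 and ctnt < 9     # combination algorithm

print(rule_out_combination(3.2, 7.5))  # True: eligible for very early rule-out
```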

  12. 75 FR 81683 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-28

    ... under Rule 135 (17 CFR 230.135) and Rule 165 (17 CFR 230.165) in connection with business combination... tender offers, mergers and other business combination transactions on a more timely basis, so long as the...

  13. Choreographing Patterns and Functions

    ERIC Educational Resources Information Center

    Hawes, Zachary; Moss, Joan; Finch, Heather; Katz, Jacques

    2012-01-01

    In this article, the authors begin with a description of an algebraic dance--the translation of composite linear growing patterns into choreographed movement--which was the last component of a research-based instructional unit that focused on fostering an understanding of linear functional rules through geometric growing patterns and…

  14. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.

    PubMed

    Gilra, Aditya; Gerstner, Wulfram

    2017-11-27

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
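
    A minimal numerical sketch of a FOLLOW-style update, as we read the description above (spiking dynamics and synaptic filtering are omitted; the sizes and learning rate are illustrative): each weight changes in proportion to its presynaptic activity times the error fed back onto its postsynaptic neuron through fixed random weights.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_out = 50, 2
W = np.zeros((n_neurons, n_neurons))      # learned recurrent weights
B = rng.normal(size=(n_neurons, n_out))   # fixed random feedback weights
eta = 1e-3                                # learning rate

def follow_step(pre, output, target):
    """Local update: (error projected on postsynaptic neuron) x (pre activity)."""
    global W
    eps = B @ (target - output)           # per-neuron projected error signal
    W += eta * np.outer(eps, pre)
    return eps

follow_step(rng.random(n_neurons), np.zeros(n_out), np.ones(n_out))
```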

  15. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

    PubMed Central

    Gerstner, Wulfram

    2017-01-01

    The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically. PMID:29173280

  16. Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule

    NASA Technical Reports Server (NTRS)

    Bay, Stephen D.; Schwabacher, Mark

    2003-01-01

    Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
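
    The core of the algorithm is easy to sketch: scan candidates in random order, maintain each candidate's k nearest neighbours seen so far, and abandon the candidate as soon as its k-th neighbour distance drops below the score of the weakest outlier found so far. A toy one-dimensional version (our simplification of the method described above):

```python
import random

def top_outliers(data, k=3, n_out=2):
    """Outlier score = distance to the k-th nearest neighbour."""
    data = list(data)
    random.shuffle(data)                  # random order makes pruning effective
    outliers, cutoff = [], 0.0
    for i, x in enumerate(data):
        neigh, pruned = [], False
        for j, y in enumerate(data):
            if i == j:
                continue
            neigh = sorted(neigh + [abs(x - y)])[:k]
            if len(neigh) == k and neigh[-1] < cutoff:
                pruned = True             # cannot enter the top-n: stop early
                break
        if not pruned and len(neigh) == k:
            outliers = sorted(outliers + [(neigh[-1], x)])[-n_out:]
            if len(outliers) == n_out:
                cutoff = outliers[0][0]   # weakest of the current top-n
    return outliers

print(top_outliers([1, 2, 2, 3, 3, 4, 50, 51]))  # the two isolated points win
```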

  17. Rock-paper-scissors played within competing domains in predator-prey games

    NASA Astrophysics Data System (ADS)

    Labavić, Darka; Meyer-Ortmanns, Hildegard

    2016-11-01

    We consider (N, r) games of prey and predation with N species and r < N prey and predators, acting in a cyclic way. Further basic reactions include reproduction, decay and diffusion over a one- or two-dimensional regular grid, without a hard constraint on the occupation number per site, so in a ‘bosonic’ implementation. For special combinations of N and r and appropriate parameter choices we observe games within games, that is, different coexisting games depending on the spatial resolution. As a concrete and simplest example we analyze the (6,3) game. Once the players segregate from a random initial distribution, domains emerge, which effectively play a (2,1)-game on the coarse scale of domain diameters, while agents inside the domains play (3,1) (rock-paper-scissors), leading to spiral formation with species chasing each other. The (2,1)-game has a winner in the end, so that the coexistence of domains is transient, while agents inside the remaining domain coexist, until demographic fluctuations lead to extinction of all but one species in the very end. This means that we observe a dynamical generation of multiple space and time scales with an emerging re-organization of players upon segregation, starting from a simple set of rules on the smallest scale (that of the grid) and changed rules from the coarser perspective. These observations are based on Gillespie simulations. We discuss the deterministic limit derived from a van Kampen expansion. In this limit we perform a linear stability analysis and numerically integrate the resulting equations. The linear stability analysis predicts the number of forming domains and their composition in terms of species; it explains the instability of interfaces between domains, which drives their extinction; spiral patterns are identified as motion along heteroclinic cycles. The numerical solutions reproduce the observed patterns of the Gillespie simulations, including even extinction events, so that the mean-field analysis here is very conclusive, which is due to the specific implementation of the rules.

  18. Gas chimney detection based on improving the performance of combined multilayer perceptron and support vector classifier

    NASA Astrophysics Data System (ADS)

    Hashemi, H.; Tax, D. M. J.; Duin, R. P. W.; Javaherian, A.; de Groot, P.

    2008-11-01

    Seismic object detection is a relatively new field in which 3-D bodies are visualized and spatial relationships between objects of different origins are studied in order to extract geologic information. In this paper, we propose a method for finding an optimal classifier with the help of a statistical feature ranking technique and combining different classifiers. The method, which has general applicability, is demonstrated here on a gas chimney detection problem. First, we evaluate a set of input seismic attributes extracted at locations labeled by a human expert using regularized discriminant analysis (RDA). In order to find the RDA score for each seismic attribute, forward and backward search strategies are used. Subsequently, two non-linear classifiers: multilayer perceptron (MLP) and support vector classifier (SVC) are run on the ranked seismic attributes. Finally, to capitalize on the intrinsic differences between both classifiers, the MLP and SVC results are combined using logical rules of maximum, minimum and mean. The proposed method optimizes the ranked feature space size and yields the lowest classification error in the final combined result. We will show that the logical minimum reveals gas chimneys that exhibit both the softness of MLP and the resolution of SVC classifiers.
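
    The combination step itself reduces to element-wise logical rules on the two classifiers' posterior outputs. A minimal sketch with illustrative per-sample posteriors (not data from the study):

```python
import numpy as np

p_mlp = np.array([0.9, 0.4, 0.7])    # MLP posterior for the chimney class
p_svc = np.array([0.8, 0.6, 0.2])    # SVC posterior for the same samples

combined_min = np.minimum(p_mlp, p_svc)   # conservative: both must agree
combined_max = np.maximum(p_mlp, p_svc)   # permissive: either suffices
combined_mean = (p_mlp + p_svc) / 2.0     # compromise between the two

print(combined_min, combined_max, combined_mean)
```

As the abstract notes, the conservative minimum rule proved most useful here, retaining the softness of the MLP and the resolution of the SVC.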

  19. Fuzzy self-learning control for magnetic servo system

    NASA Technical Reports Server (NTRS)

    Tarn, J. H.; Kuo, L. T.; Juang, K. Y.; Lin, C. E.

    1994-01-01

    It is known that an effective control system is the key condition for successful implementation of high-performance magnetic servo systems. Major issues in designing such control systems are nonlinearity; unmodeled dynamics, such as secondary effects of copper resistance, stray fields, and saturation; and disturbance rejection, since the load effect acts directly on the servo system without transmission elements. One typical approach to designing control systems under these conditions is a special type of nonlinear feedback called gain scheduling. It accommodates linear regulators whose parameters are changed as a function of operating conditions in a preprogrammed way. In this paper, an on-line learning fuzzy control strategy is proposed. To inherit the wealth of linear control design, the relations between linear feedback and fuzzy logic controllers have been established. The exercise of engineering axioms of linear control design is thus transformed into the tuning of appropriate fuzzy parameters. Furthermore, fuzzy logic control brings the domain of candidate control laws from linear into nonlinear, and brings new prospects into the design of the local controllers. On the other hand, a self-learning scheme is utilized to automatically tune the fuzzy rule base. It is based on a network learning infrastructure; statistical approximation to assign credit; an animal-learning method to update the reinforcement map with a fast learning rate; and a temporal-difference predictive scheme to optimize the control laws. Different from supervised and statistical unsupervised learning schemes, the proposed method learns on-line from past experience and information from the process, and forms a rule base of an FLC system from randomly assigned initial control rules.

  20. 78 FR 75381 - Entergy Operations, Inc.; Combined License Application for Grand Gulf Unit 3; Exemption From the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... License Application To Comply With Enhancements to Emergency Preparedness Rule AGENCY: Nuclear Regulatory... exemption from addressing enhancements to the Emergency Preparedness (EP) rules in their Combined License...

  1. Allocating application to group of consecutive processors in fault-tolerant deadlock-free routing path defined by routers obeying same rules for path selection

    DOEpatents

    Leung, Vitus J [Albuquerque, NM; Phillips, Cynthia A [Albuquerque, NM; Bender, Michael A [East Northport, NY; Bunde, David P [Urbana, IL

    2009-07-21

    In a multiple processor computing apparatus, directional routing restrictions and a logical channel construct permit fault tolerant, deadlock-free routing. Processor allocation can be performed by creating a linear ordering of the processors based on routing rules used for routing communications between the processors. The linear ordering can assume a loop configuration, and bin-packing is applied to this loop configuration. The interconnection of the processors can be conceptualized as a generally rectangular 3-dimensional grid, and the MC allocation algorithm is applied with respect to the 3-dimensional grid.

  2. Can Baird's and Clar's Rules Combined Explain Triplet State Energies of Polycyclic Conjugated Hydrocarbons with Fused 4nπ- and (4n + 2)π-Rings?

    PubMed

    Ayub, Rabia; Bakouri, Ouissam El; Jorner, Kjell; Solà, Miquel; Ottosson, Henrik

    2017-06-16

    Compounds that can be labeled as "aromatic chameleons" are π-conjugated compounds that are able to adjust their π-electron distributions so as to comply with the different rules of aromaticity in different electronic states. We used quantum chemical calculations to explore how the fusion of benzene rings onto aromatic chameleonic units represented by biphenylene, dibenzocyclooctatetraene, and dibenzo[a,e]pentalene modifies the first triplet excited states (T1) of the compounds. Decreases in T1 energies are observed when going from isomers with linear connectivity of the fused benzene rings to those with cis- or trans-bent connectivities. The T1 energies decreased down to those of the parent (isolated) 4nπ-electron units. Simultaneously, we observe an increased influence of triplet state aromaticity of the central 4n ring as given by Baird's rule and evidenced by geometric, magnetic, and electron density based aromaticity indices (HOMA, NICS-XY, ACID, and FLU). Because of an influence of triplet state aromaticity in the central 4nπ-electron units, the most stabilized compounds retain the triplet excitation in Baird π-quartets or octets, enabling the outer benzene rings to adopt closed-shell singlet Clar π-sextet character. Interestingly, the T1 energies go down as the total number of aromatic cycles within a molecule in the T1 state increases.

  3. A rational model of function learning.

    PubMed

    Lucas, Christopher G; Griffiths, Thomas L; Williams, Joseph J; Kalish, Michael L

    2015-10-01

    Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: one, that people are estimating explicit functions, or, two, that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
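
    The equivalence the model rests on can be demonstrated in a few lines: Bayesian linear regression with a Gaussian prior on the weights is a Gaussian process whose kernel is the inner product of the inputs, so the "rule" view and the "similarity" view yield the same predictor. A minimal sketch (unit weight prior and illustrative data; basis expansions generalize this to nonlinear functions):

```python
import numpy as np

def gp_posterior_mean(X, y, X_star, kernel, noise=0.1):
    """Gaussian-process posterior predictive mean."""
    K = kernel(X, X) + noise * np.eye(len(X))
    return kernel(X_star, X) @ np.linalg.solve(K, y)

linear_kernel = lambda A, B: A @ B.T   # = Bayesian linear regression, prior I

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.1, 3.9, 6.2])
print(gp_posterior_mean(X, y, np.array([[4.0]]), linear_kernel))
```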

  4. Boolean linear differential operators on elementary cellular automata

    NASA Astrophysics Data System (ADS)

    Martín Del Rey, Ángel

    2014-12-01

    In this paper, the notion of a Boolean linear differential operator (BLDO) on elementary cellular automata (ECA) is introduced and some of their more important properties are studied. Special attention is paid to those differential operators whose coefficients are the ECA with rule numbers 90 and 150.
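
    Rules 90 and 150 are precisely the additive (linear over GF(2)) elementary CA, which is what makes them natural coefficients for a Boolean differential calculus. A minimal sketch of one update step for each, with periodic boundaries assumed:

```python
import numpy as np

def eca_step(state, rule):
    left, right = np.roll(state, 1), np.roll(state, -1)
    if rule == 90:                    # new cell = left XOR right
        return left ^ right
    if rule == 150:                   # new cell = left XOR centre XOR right
        return left ^ state ^ right
    raise ValueError("only the linear rules 90 and 150 are sketched")

state = np.zeros(11, dtype=int)
state[5] = 1                          # single seed
print(eca_step(state, 90))            # spreads to the two neighbours
```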

  5. Exclusion of deep vein thrombosis using the Wells rule in clinically important subgroups: individual patient data meta-analysis.

    PubMed

    Geersing, G J; Zuithoff, N P A; Kearon, C; Anderson, D R; Ten Cate-Hoek, A J; Elf, J L; Bates, S M; Hoes, A W; Kraaijenhagen, R A; Oudega, R; Schutgens, R E G; Stevens, S M; Woller, S C; Wells, P S; Moons, K G M

    2014-03-10

    To assess the accuracy of the Wells rule for excluding deep vein thrombosis and whether this accuracy applies to different subgroups of patients. Meta-analysis of individual patient data. Authors of 13 studies (n = 10,002) provided their datasets, and these individual patient data were merged into one dataset. Studies were eligible if they enrolled consecutive outpatients with suspected deep vein thrombosis, scored all variables of the Wells rule, and performed an appropriate reference standard. Multilevel logistic regression models, including an interaction term for each subgroup, were used to estimate differences in predicted probabilities of deep vein thrombosis by the Wells rule. In addition, D-dimer testing was added to assess differences in the ability to exclude deep vein thrombosis using an unlikely score on the Wells rule combined with a negative D-dimer test result. Overall, increasing scores on the Wells rule were associated with an increasing probability of having deep vein thrombosis. Estimated probabilities were almost twofold higher in patients with cancer, in patients with suspected recurrent events, and (to a lesser extent) in males. An unlikely score on the Wells rule (≤ 1) combined with a negative D-dimer test result was associated with an extremely low probability of deep vein thrombosis (1.2%, 95% confidence interval 0.7% to 1.8%). This combination occurred in 29% (95% confidence interval 20% to 40%) of patients. These findings were consistent in subgroups defined by type of D-dimer assay (quantitative or qualitative), sex, and care setting (primary or hospital care). For patients with cancer, the combination of an unlikely score on the Wells rule and a negative D-dimer test result occurred in only 9% of patients and was associated with a 2.2% probability of deep vein thrombosis being present. In patients with suspected recurrent events, only the modified Wells rule (adding one point for the previous event) is safe. Combined with a negative D-dimer test result (both quantitative and qualitative), deep vein thrombosis can be excluded in patients with an unlikely score on the Wells rule. This finding is true for both sexes, as well as for patients presenting in primary and hospital care. In patients with cancer, the combination is neither safe nor efficient. For patients with suspected recurrent disease, one extra point should be added to the rule to enable a safe exclusion.
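
    The exclusion logic evaluated above is easy to state as a predicate. The score cut-off is taken from the abstract, the handling of recurrence follows the modified rule (one extra point for a previous event), and this sketch is illustrative, not clinical software:

```python
def dvt_excluded(wells_score, d_dimer_negative, recurrent=False, cancer=False):
    """Wells 'unlikely' (score <= 1) plus a negative D-dimer excludes DVT."""
    if cancer:
        return False                  # combination neither safe nor efficient
    score = wells_score + (1 if recurrent else 0)   # modified Wells rule
    return score <= 1 and d_dimer_negative

print(dvt_excluded(1, True))                  # True: DVT can be excluded
print(dvt_excluded(1, True, recurrent=True))  # False: score rises to 2
```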

  6. Adolescents' as Active Agents in the Socialization Process: Legitimacy of Parental Authority and Obligation to Obey as Predictors of Obedience

    ERIC Educational Resources Information Center

    Darling, Nancy; Cumsille, Patricio; Loreto Martinez, M.

    2007-01-01

    Adolescents' agreement with parental standards and beliefs about the legitimacy of parental authority and their own obligation to obey were used to predict adolescents' obedience, controlling for parental monitoring, rules, and rule enforcement. Hierarchical linear models were used to predict both between-adolescent and within-adolescent,…

  7. Visualizing the Chain Rule (for Functions over R and C) and More

    ERIC Educational Resources Information Center

    Kreminski, Rick

    2009-01-01

    A visual approach to understanding the chain rule and related derivative formulae, for functions from R to R and from C to C, is presented. This apparently novel approach has been successfully used with several audiences: students first studying calculus, students with some background in linear algebra, students beginning study of functions of a…

  8. 78 FR 36797 - Self-Regulatory Organizations; Fixed Income Clearing Corporation; Notice of Designation of Longer...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-19

    ... NYPC. In the proposed rule change, FICC acknowledged that it will have to alter its risk management framework to account for the non- linear risks presented by options on interest rate futures.\\6\\ The... rule change so that it has sufficient time to evaluate the risk management implications of the proposed...

  9. Dandruff, seborrheic dermatitis, and psoriasis drug products containing coal tar and menthol for over-the-counter human use; amendment to the monograph. Final rule

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    2006-03-15

    The Food and Drug Administration (FDA) is issuing a final rule amending the final monograph (FM) for over-the-counter (OTC) dandruff, seborrheic dermatitis, and psoriasis drug products to include the combination of 1.8 percent coal tar solution and 1.5 percent menthol in a shampoo drug product to control dandruff. FDA did not receive any comments or data in response to its previously proposed rule to include this combination. This final rule is part of FDA's ongoing review of OTC drug products.

  10. From Feynman rules to conserved quantum numbers, I

    NASA Astrophysics Data System (ADS)

    Nogueira, P.

    2017-05-01

    In the context of Quantum Field Theory (QFT) there is often the need to find sets of graph-like diagrams (the so-called Feynman diagrams) for a given physical model. If negative, the answer to the related problem 'Are there any diagrams with this set of external fields?' may settle certain physical questions at once. Here the latter problem is formulated in terms of a system of linear Diophantine equations derived from the Lagrangian density, from which necessary conditions for the existence of the required diagrams may be obtained. Those conditions are equalities that look like either linear Diophantine equations or linear modular (i.e. congruence) equations, and may be found by means of fairly simple algorithms that involve integer computations. The Diophantine equations so obtained represent (particle) number conservation rules, and are related to the conserved (additive) quantum numbers that may be assigned to the fields of the model.

  11. Preconditioned alternating direction method of multipliers for inverse problems with constraints

    NASA Astrophysics Data System (ADS)

    Jiao, Yuling; Jin, Qinian; Lu, Xiliang; Wang, Weijie

    2017-02-01

    We propose a preconditioned alternating direction method of multipliers (ADMM) to solve linear inverse problems in Hilbert spaces with constraints, where the feature of the sought solution under a linear transformation is captured by a possibly non-smooth convex function. During each iteration step, our method avoids solving large linear systems by choosing a suitable preconditioning operator. In case the data is given exactly, we prove the convergence of our preconditioned ADMM without assuming the existence of a Lagrange multiplier. In case the data is corrupted by noise, we propose a stopping rule using information on noise level and show that our preconditioned ADMM is a regularization method; we also propose a heuristic rule when the information on noise level is unavailable or unreliable and give its detailed analysis. Numerical examples are presented to test the performance of the proposed method.

  12. Discovering Sentinel Rules for Business Intelligence

    NASA Astrophysics Data System (ADS)

    Middelfart, Morten; Pedersen, Torben Bach

    This paper proposes the concept of sentinel rules for multi-dimensional data that warns users when measure data concerning the external environment changes. For instance, a surge in negative blogging about a company could trigger a sentinel rule warning that revenue will decrease within two months, so a new course of action can be taken. Hereby, we expand the window of opportunity for organizations and facilitate successful navigation even though the world behaves chaotically. Since sentinel rules are at the schema level as opposed to the data level, and operate on data changes as opposed to absolute data values, we are able to discover strong and useful sentinel rules that would otherwise be hidden when using sequential pattern mining or correlation techniques. We present a method for sentinel rule discovery and an implementation of this method that scales linearly on large data volumes.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dhawan, Suhail; Goobar, Ariel; Mörtsell, Edvard

    Recent re-calibration of the Type Ia supernova (SNe Ia) magnitude-redshift relation combined with cosmic microwave background (CMB) and baryon acoustic oscillation (BAO) data have provided excellent constraints on the standard cosmological model. Here, we examine particular classes of alternative cosmologies, motivated by various physical mechanisms, e.g. scalar fields, modified gravity and phase transitions, to test their consistency with observations of SNe Ia and the ratio of the angular diameter distances from the CMB and BAO. Using a model selection criterion for a relative comparison of the models (the Bayes Factor), we find moderate to strong evidence that the data prefer flat ΛCDM over models invoking a thawing behaviour of the quintessence scalar field. However, some exotic models like the growing neutrino mass cosmology and vacuum metamorphosis still present acceptable evidence values. The bimetric gravity model with only the linear interaction term as well as a simplified Galileon model can be ruled out by the combination of SNe Ia and CMB/BAO datasets, whereas the model with linear and quadratic interaction terms has an evidence value comparable to standard ΛCDM. Thawing models are found to have significantly poorer evidence compared to flat ΛCDM cosmology under the assumption that the CMB compressed likelihood provides an adequate description for these non-standard cosmologies. We also present estimates for constraints from future data and find that geometric probes from oncoming surveys can put severe limits on non-standard cosmological models.

  14. A fuzzy controller with nonlinear control rules is the sum of a global nonlinear controller and a local nonlinear PI-like controller

    NASA Technical Reports Server (NTRS)

    Ying, Hao

    1993-01-01

    The fuzzy controllers studied in this paper are the ones that employ N trapezoidal-shaped membership functions for the input fuzzy sets, Zadeh fuzzy logic, and a centroid defuzzification algorithm for the output fuzzy set. The author analytically proves that the structure of these fuzzy controllers is the sum of a global nonlinear controller and a local nonlinear proportional-integral-like controller. If N approaches infinity, the global controller becomes a nonlinear controller while the local controller disappears. If linear control rules are used, the global controller becomes a global two-dimensional multilevel relay which approaches a global linear proportional-integral (PI) controller as N approaches infinity.

  15. Analysis of the unusual wavelength dependence of the first hyperpolarizability of porphyrin derivatives

    NASA Astrophysics Data System (ADS)

    De Mey, K.; Clays, K.; Therien, Michael J.; Beratan, David N.; Asselberghs, Inge

    2010-08-01

    Successfully predicting the frequency dispersion of electronic hyperpolarizabilities is an unresolved challenge in materials science and electronic structure theory. It has been shown [1] that the generalized Thomas-Kuhn sum rules, combined with linear absorption data and measured hyperpolarizabilities at one or two frequencies, may be used to predict the entire frequency-dependent electronic hyperpolarizability spectrum. This treatment includes two- and three-level contributions that arise from the lowest two or three excited state manifolds, enabling us to describe the unusual observed frequency dispersion of the dynamic hyperpolarizability in high oscillator strength M-PZn chromophores, where (porphinato)zinc(II) (PZn) and metal(II)polypyridyl (M) units are connected via an ethyne unit that aligns the high oscillator strength transition dipoles of these components in a head-to-tail arrangement. Importantly, this approach provides a quantitative scheme to use linear optical absorption spectra and very few individual hyperpolarizability values to predict the entire frequency-dependent nonlinear optical response. In addition we provide here experimental dynamic hyperpolarizability values determined by hyper-Rayleigh scattering that underscore the validity of our approach.

  16. Modified Kramers-Kronig relations and sum rules for meromorphic total refractive index

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peiponen, Kai-Erik; Saarinen, Jarkko J.; Vartiainen, Erik M.

    2003-08-01

    Modified Kramers-Kronig relations and corresponding sum rules are shown to hold for the total refractive index that can be presented as a sum of complex linear and nonlinear refractive indices, respectively. It is suggested that a self-action process, involving the degenerate third-order nonlinear susceptibility, can yield a negative total refractive index at some spectral range.

  17. A Brief Historical Introduction to Matrices and Their Applications

    ERIC Educational Resources Information Center

    Debnath, L.

    2014-01-01

    This paper deals with the ancient origin of matrices, and the system of linear equations. Included are algebraic properties of matrices, determinants, linear transformations, and Cramer's Rule for solving the system of algebraic equations. Special attention is given to some special matrices, including matrices in graph theory and electrical…

  18. Robust linear discriminant analysis with distance based estimators

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Yahaya, Sharipah Soaad Syed; Ali, Hazlina

    2017-11-01

    Linear discriminant analysis (LDA) is one of the supervised classification techniques concerning the relationship between a categorical variable and a set of continuous variables. The main objective of LDA is to create a function that distinguishes between populations and allocates future observations to previously defined populations. Under the assumptions of normality and homoscedasticity, LDA yields the optimal linear discriminant rule (LDR) between two or more groups. However, the optimality of LDA relies heavily on the sample mean and pooled sample covariance matrix, which are known to be sensitive to outliers. To alleviate these problems, a new robust LDA using distance-based estimators known as the minimum variance vector (MVV) has been proposed in this study. The MVV estimators were used to substitute for the classical sample mean and classical sample covariance to form a robust linear discriminant rule (RLDR). A simulation and real data study were conducted to examine the performance of the proposed RLDR, measured in terms of misclassification error rates. The computational results showed that the proposed RLDR is better than the classical LDR and comparable with an existing robust LDR.
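
    The classical LDR that the robust version modifies can be sketched directly; the robust variant simply substitutes the MVV location and scatter estimates for the sample means and pooled covariance below:

```python
import numpy as np

def ldr(x, m1, m2, S_pooled):
    """Classical linear discriminant rule for two groups."""
    w = np.linalg.solve(S_pooled, m1 - m2)   # discriminant direction
    threshold = w @ (m1 + m2) / 2.0          # midpoint between the groups
    return 1 if w @ x > threshold else 2

m1, m2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
S = np.eye(2)
print(ldr(np.array([0.4, 0.1]), m1, m2, S))  # -> 1 (closer to group 1)
```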

  19. Discrimination theory of rule-governed behavior

    PubMed Central

    Cerutti, Daniel T.

    1989-01-01

    In rule-governed behavior, previously established elementary discriminations are combined in complex instructions and thus result in complex behavior. Discriminative combining and recombining of responses produce behavior with characteristics differing from those of behavior that is established through the effects of its direct consequences. For example, responding in instructed discrimination may be occasioned by discriminative stimuli that are temporally and situationally removed from the circumstances under which the discrimination is instructed. The present account illustrates properties of rule-governed behavior with examples from research in instructional control and imitation learning. Units of instructed behavior, circumstances controlling compliance with instructions, and rule-governed problem solving are considered. PMID:16812579

  20. DecisionMaker software and extracting fuzzy rules under uncertainty

    NASA Technical Reports Server (NTRS)

    Walker, Kevin B.

    1992-01-01

    Knowledge acquisition under uncertainty is examined. Theories proposed in deKorvin's paper 'Extracting Fuzzy Rules Under Uncertainty and Measuring Definability Using Rough Sets' are discussed as they relate to rule calculation algorithms. A data structure for holding an arbitrary number of data fields is described. Limitations of Pascal for-loops in the generation of combinations are also discussed. Finally, recursive algorithms for generating all possible combinations of attributes and for calculating the intersection of an arbitrary number of fuzzy sets are presented.
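
    The two recursive routines described can be sketched briefly (our Python rendering of the ideas, not the original Pascal):

```python
def attribute_combinations(attrs):
    """All subsets of a list of attributes, built recursively."""
    if not attrs:
        return [[]]
    rest = attribute_combinations(attrs[1:])
    return rest + [[attrs[0]] + c for c in rest]

def fuzzy_intersection(sets):
    """Recursive min-rule intersection of fuzzy sets given as
    dicts mapping element -> membership grade in [0, 1]."""
    if len(sets) == 1:
        return sets[0]
    rest = fuzzy_intersection(sets[1:])
    return {e: min(sets[0][e], rest[e]) for e in sets[0] if e in rest}

print(attribute_combinations(["a", "b"]))
print(fuzzy_intersection([{"x": 0.8, "y": 0.3}, {"x": 0.5, "y": 0.9}]))
```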

  1. 77 FR 65497 - Gross Combination Weight Rating (GCWR); Definition

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ... [Docket No. FMCSA-2012-0156] RIN 2126-AB53 Gross Combination Weight Rating (GCWR); Definition AGENCY...: FMCSA withdraws its August 27, 2012, direct final rule (DFR) amending the definition of ``gross... definition. DATES: The direct final rule published August 27, 2012 (77 FR 51706) is withdrawn effective...

  2. Prestraining and Its Influence on Subsequent Fatigue Life

    NASA Technical Reports Server (NTRS)

    Halford, Gary R.; Mcgaw, Michael A.; Kalluri, Sreeramesh

    1995-01-01

    An experimental program was conducted to study the damaging effects of tensile and compressive prestrains on the fatigue life of nickel-base, Inconel 718 superalloy at room temperature. To establish baseline fatigue behavior, virgin specimens with a solid uniform gage section were fatigued to failure under fully-reversed strain-control. Additional specimens were prestrained to 2 percent, 5 percent, and 10 percent (engineering strains) in the tensile direction and to 2 percent (engineering strain) in the compressive direction under stroke-control, and were subsequently fatigued to failure under fully-reversed strain-control. Experimental results are compared with estimates of remaining fatigue lives (after prestraining) using three life prediction approaches: (1) the Linear Damage Rule; (2) the Linear Strain and Life Fraction Rule; and (3) the nonlinear Damage Curve Approach. The Smith-Watson-Topper parameter was used to estimate fatigue lives in the presence of mean stresses. Among the cumulative damage rules investigated, best remaining fatigue life predictions were obtained with the nonlinear Damage Curve Approach.

  3. Evaluation of techniques for increasing recall in a dictionary approach to gene and protein name identification.

    PubMed

    Schuemie, Martijn J; Mons, Barend; Weeber, Marc; Kors, Jan A

    2007-06-01

    Gene and protein name identification in text requires a dictionary approach to relate synonyms to the same gene or protein, and to link names to external databases. However, existing dictionaries are incomplete. We investigate two complementary methods for automatic generation of a comprehensive dictionary: combination of information from existing gene and protein databases and rule-based generation of spelling variations. Both methods have been reported in literature before, but have hitherto not been combined and evaluated systematically. We combined gene and protein names from several existing databases of four different organisms. The combined dictionaries showed a substantial increase in recall on three different test sets, as compared to any single database. Application of 23 spelling variation rules to the combined dictionaries further increased recall. However, many rules appeared to have no effect and some appear to have a detrimental effect on precision.

  4. Nonlinear spike-and-slab sparse coding for interpretable image encoding.

    PubMed

    Shelton, Jacquelyn A; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process.
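
    The contrast between the linear superposition and the nonlinear max combination rule is easy to make concrete. In the following toy sketch (a dictionary of two components over three pixels, values illustrative), the max rule lets the strongest component win at each pixel, mimicking occlusion, while the linear rule sums contributions:

```python
import numpy as np

D = np.array([[1.0, 0.0, 0.5],        # dictionary: 2 components x 3 pixels
              [0.2, 1.0, 0.5]])
s = np.array([0.8, 0.6])              # latent coefficients (spike-and-slab)

linear_image = s @ D                            # sparse linear superposition
max_image = np.max(s[:, None] * D, axis=0)      # occluding component wins
print(linear_image)  # [0.92 0.6  0.7 ]
print(max_image)     # [0.8  0.6  0.4 ]
```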

  5. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata

    PubMed Central

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-01-01

    In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily-extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we utilize the PWALS model and the corresponding switched state observer to traffic flow over Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing third ring road. Practical application for a large-scale road network will be implemented by decentralized modeling approach and distributed observer designing in the future research. PMID:28353664

  6. Modeling and Density Estimation of an Urban Freeway Network Based on Dynamic Graph Hybrid Automata.

    PubMed

    Chen, Yangzhou; Guo, Yuqi; Wang, Ying

    2017-03-29

    In this paper, in order to describe complex network systems, we firstly propose a general modeling framework by combining a dynamic graph with hybrid automata and thus name it Dynamic Graph Hybrid Automata (DGHA). Then we apply this framework to model traffic flow over an urban freeway network by embedding the Cell Transmission Model (CTM) into the DGHA. With a modeling procedure, we adopt a dual digraph of road network structure to describe the road topology, use linear hybrid automata to describe multi-modes of dynamic densities in road segments and transform the nonlinear expressions of the transmitted traffic flow between two road segments into piecewise linear functions in terms of multi-mode switchings. This modeling procedure is modularized and rule-based, and thus is easily-extensible with the help of a combination algorithm for the dynamics of traffic flow. It can describe the dynamics of traffic flow over an urban freeway network with arbitrary topology structures and sizes. Next we analyze mode types and number in the model of the whole freeway network, and deduce a Piecewise Affine Linear System (PWALS) model. Furthermore, based on the PWALS model, a multi-mode switched state observer is designed to estimate the traffic densities of the freeway network, where a set of observer gain matrices are computed by using the Lyapunov function approach. As an example, we utilize the PWALS model and the corresponding switched state observer to traffic flow over Beijing third ring road. In order to clearly interpret the principle of the proposed method and avoid computational complexity, we adopt a simplified version of Beijing third ring road. Practical application for a large-scale road network will be implemented by decentralized modeling approach and distributed observer designing in the future research.

  7. Nonlinear Spike-And-Slab Sparse Coding for Interpretable Image Encoding

    PubMed Central

    Shelton, Jacquelyn A.; Sheikh, Abdul-Saboor; Bornschein, Jörg; Sterne, Philip; Lücke, Jörg

    2015-01-01

    Sparse coding is a popular approach to model natural images but has faced two main challenges: modelling low-level image components (such as edge-like structures and their occlusions) and modelling varying pixel intensities. Traditionally, images are modelled as a sparse linear superposition of dictionary elements, where the probabilistic view of this problem is that the coefficients follow a Laplace or Cauchy prior distribution. We propose a novel model that instead uses a spike-and-slab prior and nonlinear combination of components. With the prior, our model can easily represent exact zeros for e.g. the absence of an image component, such as an edge, and a distribution over non-zero pixel intensities. With the nonlinearity (the nonlinear max combination rule), the idea is to target occlusions; dictionary elements correspond to image components that can occlude each other. There are major consequences of the model assumptions made by both (non)linear approaches, thus the main goal of this paper is to isolate and highlight differences between them. Parameter optimization is analytically and computationally intractable in our model, thus as a main contribution we design an exact Gibbs sampler for efficient inference which we can apply to higher dimensional data using latent variable preselection. Results on natural and artificial occlusion-rich data with controlled forms of sparse structure show that our model can extract a sparse set of edge-like components that closely match the generating process, which we refer to as interpretable components. Furthermore, the sparseness of the solution closely follows the ground-truth number of components/edges in the images. The linear model did not learn such edge-like components with any level of sparsity. This suggests that our model can adaptively well-approximate and characterize the meaningful generation process. PMID:25954947

  8. Dual-energy X-ray analysis using synchrotron computed tomography at 35 and 60 keV for the estimation of photon interaction coefficients describing attenuation and energy absorption.

    PubMed

    Midgley, Stewart; Schleich, Nanette

    2015-05-01

    A novel method for dual-energy X-ray analysis (DEXA) is tested using measurements of the X-ray linear attenuation coefficient μ. The key is a mathematical model that describes elemental cross sections using a polynomial in atomic number. The model is combined with the mixture rule to describe μ for materials, using the same polynomial coefficients. Materials are characterized by their electron density Ne and statistical moments Rk describing their distribution of elements, analogous to the concept of effective atomic number. In an experiment with materials of known density and composition, measurements of μ are written as a system of linear simultaneous equations, which is solved for the polynomial coefficients. DEXA itself involves computed tomography (CT) scans at two energies to provide a system of non-linear simultaneous equations that are solved for Ne and the fourth statistical moment R4. Results are presented for phantoms containing dilute salt solutions and for a biological specimen. The experiment identifies 1% systematic errors in the CT measurements, arising from third-harmonic radiation, and 20-30% noise, which is reduced to 3-5% by pre-processing with the median filter and careful choice of reconstruction parameters. DEXA accuracy is quantified for the phantom as the mean absolute differences for Ne and R4: 0.8% and 1.0% for soft tissue and 1.2% and 0.8% for bone-like samples, respectively. The DEXA results for the biological specimen are combined with model coefficients obtained from the tabulations to predict μ and the mass energy absorption coefficient at energies of 10 keV to 20 MeV.
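
    The calibration step described above, solving for the polynomial coefficients from materials of known density and composition, amounts to a linear least-squares problem. A schematic sketch with synthetic placeholders, since the real design matrix is built from the mixture rule and powers of atomic number:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.random((12, 4))          # 12 reference materials x 4 coefficients
true_coeffs = np.array([0.3, 0.05, 0.01, 0.002])
mu_measured = M @ true_coeffs    # synthetic stand-in for measured mu values

coeffs, *_ = np.linalg.lstsq(M, mu_measured, rcond=None)
print(coeffs)                    # recovers [0.3, 0.05, 0.01, 0.002]
```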

  9. Programmable Potentials: Approximate N-body potentials from coarse-level logic.

    PubMed

    Thakur, Gunjan S; Mohr, Ryan; Mezić, Igor

    2016-09-27

    This paper gives a systematic method for constructing an N-body potential, approximating the true potential, that accurately captures meso-scale behavior of the chemical or biological system using pairwise potentials coming from experimental data or ab initio methods. The meso-scale behavior is translated into logic rules for the dynamics. Each pairwise potential has an associated logic function that is constructed using the logic rules, a class of elementary logic functions, and AND, OR, and NOT gates. The effect of each logic function is to turn its associated potential on and off. The N-body potential is constructed as linear combination of the pairwise potentials, where the "coefficients" of the potentials are smoothed versions of the associated logic functions. These potentials allow a potentially low-dimensional description of complex processes while still accurately capturing the relevant physics at the meso-scale. We present the proposed formalism to construct coarse-grained potential models for three examples: an inhibitor molecular system, bond breaking in chemical reactions, and DNA transcription from biology. The method can potentially be used in reverse for design of molecular processes by specifying properties of molecules that can carry them out.
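
    The construction can be summarized in a small sketch: the N-body potential is a linear combination of pairwise potentials, each gated by a smoothed logic function. The sigmoid smoothing and the particular pair potential below are our illustrative choices, not the paper's:

```python
import numpy as np

def smooth_gate(condition, k=10.0):
    """Smoothed logic function: ~0 when condition < 0.5, ~1 above."""
    return 1.0 / (1.0 + np.exp(-k * (condition - 0.5)))

def pair_potential(r):
    """Illustrative Lennard-Jones-like pairwise term."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def gated_pair_term(r, logic_value):
    """One term of the N-body sum: logic coefficient times pair potential."""
    return smooth_gate(logic_value) * pair_potential(r)

print(gated_pair_term(1.2, 1.0))   # gate ~ 1: interaction switched on
print(gated_pair_term(1.2, 0.0))   # gate ~ 0: interaction switched off
```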

  10. Programmable Potentials: Approximate N-body potentials from coarse-level logic

    NASA Astrophysics Data System (ADS)

    Thakur, Gunjan S.; Mohr, Ryan; Mezić, Igor

    2016-09-01

    This paper gives a systematic method for constructing an N-body potential, approximating the true potential, that accurately captures meso-scale behavior of the chemical or biological system using pairwise potentials coming from experimental data or ab initio methods. The meso-scale behavior is translated into logic rules for the dynamics. Each pairwise potential has an associated logic function that is constructed using the logic rules, a class of elementary logic functions, and AND, OR, and NOT gates. The effect of each logic function is to turn its associated potential on and off. The N-body potential is constructed as linear combination of the pairwise potentials, where the “coefficients” of the potentials are smoothed versions of the associated logic functions. These potentials allow a potentially low-dimensional description of complex processes while still accurately capturing the relevant physics at the meso-scale. We present the proposed formalism to construct coarse-grained potential models for three examples: an inhibitor molecular system, bond breaking in chemical reactions, and DNA transcription from biology. The method can potentially be used in reverse for design of molecular processes by specifying properties of molecules that can carry them out.

  11. Programmable Potentials: Approximate N-body potentials from coarse-level logic

    PubMed Central

    Thakur, Gunjan S.; Mohr, Ryan; Mezić, Igor

    2016-01-01

    This paper gives a systematic method for constructing an N-body potential, approximating the true potential, that accurately captures meso-scale behavior of a chemical or biological system using pairwise potentials coming from experimental data or ab initio methods. The meso-scale behavior is translated into logic rules for the dynamics. Each pairwise potential has an associated logic function that is constructed using the logic rules, a class of elementary logic functions, and AND, OR, and NOT gates. The effect of each logic function is to turn its associated potential on and off. The N-body potential is constructed as a linear combination of the pairwise potentials, where the “coefficients” of the potentials are smoothed versions of the associated logic functions. These potentials allow a potentially low-dimensional description of complex processes while still accurately capturing the relevant physics at the meso-scale. We present the proposed formalism to construct coarse-grained potential models for three examples: an inhibitor molecular system, bond breaking in chemical reactions, and DNA transcription from biology. The method can potentially be used in reverse for the design of molecular processes by specifying properties of molecules that can carry them out. PMID:27671683
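
    The construction lends itself to a compact sketch. Below is a minimal, illustrative Python version in which a Lennard-Jones pair term stands in for the data-driven pairwise potentials and a sigmoid-smoothed AND gate plays the role of the smoothed logic coefficient; the particular rule encoded by the gate is invented for illustration, not taken from the paper.

      import numpy as np

      def sigmoid(x, k=10.0):
          """Smoothed step used to soften a 0/1 logic output."""
          return 1.0 / (1.0 + np.exp(-k * x))

      def pair_lj(r, eps=1.0, sigma=1.0):
          """Example pairwise potential (Lennard-Jones); stands in for the
          experimentally or ab initio derived pair potentials."""
          return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

      def smoothed_and(a, b):
          """Soft AND gate: close to 1 only when both inputs are close to 1."""
          return a * b

      def total_potential(positions, cutoff=2.5):
          """N-body potential: linear combination of pairwise terms whose
          coefficients are smoothed logic functions of the configuration."""
          n = len(positions)
          V = 0.0
          for i in range(n):
              for j in range(i + 1, n):
                  r = np.linalg.norm(positions[i] - positions[j])
                  # Illustrative logic rule: enable the pair term only when the
                  # pair is within a cutoff AND particle i is near the origin.
                  gate = smoothed_and(sigmoid(cutoff - r),
                                      sigmoid(1.5 - np.linalg.norm(positions[i])))
                  V += gate * pair_lj(r)
          return V

      pos = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [3.0, 0.0, 0.0]])
      print(total_potential(pos))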

  12. Efficient model learning methods for actor-critic control.

    PubMed

    Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik

    2012-06-01

    We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
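
    A minimal sketch of a local linear regression approximator of the kind these algorithms rely on, assuming a k-nearest-neighbour memory of stored samples; the training data and query below are invented, and the details (weighting, memory management) are simplified relative to the paper.

      import numpy as np

      def llr_predict(X, y, x_query, k=5):
          """Local linear regression: fit an affine model to the k samples
          nearest the query and evaluate it there."""
          d = np.linalg.norm(X - x_query, axis=1)
          idx = np.argsort(d)[:k]
          A = np.hstack([X[idx], np.ones((k, 1))])   # affine design matrix
          beta, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
          return np.append(x_query, 1.0) @ beta

      rng = np.random.default_rng(0)
      X = rng.uniform(-1, 1, size=(200, 2))
      y = np.sin(X[:, 0]) + 0.5 * X[:, 1]            # stand-in value function
      print(llr_predict(X, y, np.array([0.2, -0.3])))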

  13. Problems in the Study of Lineaments

    NASA Astrophysics Data System (ADS)

    Anokhin, Vladimir; Kholmyanskii, Michael

    2015-04-01

    The study of linear objects in the upper crust, called lineaments, led at one time to major scientific results: the discovery of the planetary regmatic network, the birth of new tectonic concepts, and the establishment of new exploration criteria for mineral deposits. Yet lineaments remain understudied for such a promising research direction. Lineament geomorphology faces several problems. 1. Terminological problems. The lineament theme still has no generally accepted terminology base; different scientists interpret even the definition of a lineament differently. We offer an expanded definition: lineaments are linear features of the Earth's crust, expressed by linear landforms, linear geological forms, or linear anomalies of physical fields, which may follow one another and are associated with faults. The term "lineament" is not identical to the term "fault", but a lineament is always a reasonable suspicion of a fault, and this suspicion is justified in most cases. The structure of a lineament may include only objects that can, at least presumably, be attributed to deep processes. Specialists in the lineament theme can overcome the terminological problems by jointly creating a common terminology database. 2. Methodological problems. The manual selection procedure essentially consists of drawing straight line segments along the axes of linear morphostructures on some cartographic basis. The subjectivity of manual selection can be reduced by following a few simple rules: - choose an optimal projection, scale and quality of the cartographic basis; - select the optimal type of linear objects under study; - establish boundary conditions for assigning a lineament (minimum length, maximum bending, minimum length-to-width ratio, etc.); - allocate a large number of lineaments, to obtain a representative sample and reduce the influence of random errors; - rank the lineaments: fine lines (rank 3) combine to form larger lineaments of rank 2, which in turn combine into large lineaments of rank 1; - correlate the resulting lineament pattern with the faults already known in the study area; - have several experts select lineaments independently, then correlate the resulting schemes into a common one. The problem of computer-based lineament selection is not yet solved. Existing programs for lineament analysis are not reliable enough to be trusted completely: in any of them, changing the initial parameters can produce lineament patterns of any desired configuration, and there is a high probability of serious, hard-to-detect systematic errors. In any case, computer-derived lineament patterns should be verified against reality after their creation. 3. Interpretive problems. To minimize distortion of the results of lineament analysis, it is advisable to follow a few techniques and rules: - use visualization techniques, in particular rose diagrams presenting the azimuths and lengths of the selected lineaments; - downscale the analysis consistently, beginning with a preliminary analysis of a larger area that includes the area of interest and its surroundings; - use the available information on the location of already known faults and other linear tectonic objects in the study area; - compare the lineament scheme with the schemes of other authors, which can reduce the element of subjectivity. The study of lineaments is a very promising direction of geomorphology and tectonics. The challenges facing the lineament theme are solvable; to solve them, professionals should meet and talk to each other. The results of further work in this direction may exceed expectations.

  14. TMS for Instantiating a Knowledge Base With Incomplete Data

    NASA Technical Reports Server (NTRS)

    James, Mark

    2007-01-01

    A computer program that belongs to the class known among software experts as output truth-maintenance-systems (output TMSs) has been devised as one of a number of software tools for reducing the size of the knowledge base that must be searched during execution of artificial- intelligence software of the rule-based inference-engine type in a case in which data are missing. This program determines whether the consequences of activation of two or more rules can be combined without causing a logical inconsistency. For example, in a case involving hypothetical scenarios that could lead to turning a given device on or off, the program determines whether a scenario involving a given combination of rules could lead to turning the device both on and off at the same time, in which case that combination of rules would not be included in the scenario.

  15. Prediction of linear B-cell epitopes of hepatitis C virus for vaccine development

    PubMed Central

    2015-01-01

    Background High genetic heterogeneity in the hepatitis C virus (HCV) is the major challenge for the development of an effective vaccine. Existing studies for developing HCV vaccines have mainly focused on the T-cell immune response. However, identification of linear B-cell epitopes that can stimulate a B-cell response is one of the major tasks of peptide-based vaccine development. Owing to the variability in B-cell epitope length, the prediction of B-cell epitopes is much more complex than that of T-cell epitopes. Furthermore, the motifs of linear B-cell epitopes differ across pathogens (e.g. HCV and hepatitis B virus). To cope with this challenge, this work proposes an HCV-customized sequence-based prediction method to identify B-cell epitopes of HCV. Results This work establishes an experimentally verified dataset comprising 774 linear B-cell epitopes and 774 non-B-cell epitopes of HCV from the Immune Epitope Database. An interpretable rule mining system of B-cell epitopes (IRMS-BE) is proposed to select informative physicochemical properties (PCPs) and then extract if-then rule-based knowledge for identifying B-cell epitopes. A web server, Bcell-HCV, was implemented using an SVM with the 34 informative PCPs; it achieved a training accuracy of 79.7% and a test accuracy of 70.7%, better than existing SVM-based methods for identifying B-cell epitopes of HCV and two general-purpose methods. This work performs an advanced analysis of the 34 informative properties; the results indicate that the most effective property is the alpha-helix structure of epitopes, which influences the connection between host cells and the E2 proteins of HCV. Furthermore, 12 interpretable rules are acquired from the top-five PCPs and achieve a sensitivity of 75.6% and a specificity of 71.3%. Finally, a conserved promising vaccine candidate, PDREMVLYQE, is identified for inclusion in a vaccine against HCV. Conclusions This work proposes an interpretable rule mining system, IRMS-BE, for extracting interpretable rules using informative physicochemical properties, and a web server, Bcell-HCV, for predicting linear B-cell epitopes of HCV. IRMS-BE may also be applied, without significant modification, to predict B-cell epitopes of other viruses, benefiting vaccine development for those viruses. Bcell-HCV is useful for identifying B-cell epitopes of HCV antigens to aid vaccine development and is available at http://e045.life.nctu.edu.tw/BcellHCV. PMID:26680271

  16. Dynamic Hebbian Cross-Correlation Learning Resolves the Spike Timing Dependent Plasticity Conundrum.

    PubMed

    Olde Scheper, Tjeerd V; Meredith, Rhiannon M; Mansvelder, Huibert D; van Pelt, Jaap; van Ooyen, Arjen

    2017-01-01

    Spike Timing-Dependent Plasticity (STDP) has been found to assume many different forms. The classic STDP curve, with one potentiating and one depressing window, is only one of many possible curves that describe synaptic learning using the STDP mechanism. It has been shown experimentally that STDP curves may contain multiple LTP and LTD windows of variable width, and even inverted windows. The underlying STDP mechanism that is capable of producing such an extensive, and apparently incompatible, range of learning curves is still under investigation. In this paper, it is shown that STDP originates from a combination of two dynamic Hebbian cross-correlations of local activity at the synapse. The first correlation, between the presynaptic activity and the local postsynaptic activity, is a robust and reliable indicator of the discrepancy between the presynaptic and postsynaptic neurons' activity. The second correlation, between the local postsynaptic activity and the dendritic activity, is a good indicator of matching local synaptic and dendritic activity. We show that this simple time-independent learning rule can give rise to many forms of the STDP learning curve. The rule regulates synaptic strength without the need for spike matching or other supervisory learning mechanisms. Local differences in dendritic activity at the synapse greatly affect the cross-correlation difference, which determines the relative contributions of different sources of neural activity. Dendritic activity due to nearby synapses, action potentials (both forward- and back-propagating), and inhibitory synapses will dynamically modify the local activity at the synapse, and with it the resulting STDP learning rule. Furthermore, the dynamic Hebbian learning rule ensures that the resulting synaptic strength is dynamically stable and that interactions between synapses do not result in local instabilities. The rule clearly demonstrates that synapses function as independent localized computational entities, each contributing to the global activity not in a simply linear fashion, but in a manner appropriate to achieving local and global stability of the neuron and the entire dendritic structure.
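
    A rough sketch of a weight update driven by two running cross-correlations, in the spirit of the rule described above; the precise combination used in the paper differs, and all signals below are invented.

      import numpy as np

      def cross_corr_weight_update(pre, post_local, dend, eta=1e-3):
          """Weight change from two cross-correlations: pre vs. local
          postsynaptic activity, and local postsynaptic vs. dendritic
          activity. The combination below is illustrative only."""
          c1 = np.mean((pre - pre.mean()) * (post_local - post_local.mean()))
          c2 = np.mean((post_local - post_local.mean()) * (dend - dend.mean()))
          return eta * (c1 - c2)

      rng = np.random.default_rng(3)
      pre = rng.random(1000)
      post = 0.6 * pre + 0.4 * rng.random(1000)       # correlated with input
      dend = 0.5 * post + 0.5 * rng.random(1000)      # dendritic background
      print(cross_corr_weight_update(pre, post, dend))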

  17. Talk about New Rules! Exploring the Community College Role in Meeting the Educational Needs of an Aging Community

    ERIC Educational Resources Information Center

    Garvey, Dennis M.

    2007-01-01

    Life courses have traditionally been seen as a linear progression from school to work to retirement. Now, as our population ages, a circular life course is emerging with education, work, and leisure intertwined. This article explores the "New Rules of Business" for a community college where residents age 55+ represent 34% of the population.

  18. Comparative study of multimodal biometric recognition by fusion of iris and fingerprint.

    PubMed

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance of three different approaches to multimodal recognition combining iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels. The score combination approach is applied after both scores are normalized using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level performs best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results.

  19. Comparative Study of Multimodal Biometric Recognition by Fusion of Iris and Fingerprint

    PubMed Central

    Benaliouche, Houda; Touahria, Mohamed

    2014-01-01

    This research investigates the comparative performance of three different approaches to multimodal recognition combining iris and fingerprints: the classical sum rule, the weighted sum rule, and a fuzzy logic method. The scores from the iris and fingerprint biometric traits are fused at the matching-score and decision levels. The score combination approach is applied after both scores are normalized using the min-max rule. Our experimental results suggest that the fuzzy logic method for combining matching scores at the decision level performs best, followed by the classical weighted sum rule and then the classical sum rule. The performance of each method is reported in terms of matching time, error rates, and accuracy after exhaustive tests on the public CASIA-Iris databases V1 and V2 and the FVC 2004 fingerprint database. Experimental results before and after fusion are presented, followed by a comparison with related works in the current literature. Fusion by fuzzy logic decision mimics human reasoning in a soft and simple way and gives enhanced results. PMID:24605065
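
    The min-max normalization and weighted sum rule evaluated above reduce to a few lines. The sketch below uses invented matcher scores, score ranges, weights and threshold; in practice the ranges come from training data and the weights from a validation set.

      import numpy as np

      def min_max(score, lo, hi):
          """Min-max normalisation to [0, 1]; lo/hi come from training data."""
          return (score - lo) / (hi - lo)

      # Hypothetical matcher scores (higher = better match) and score ranges.
      iris_score, finger_score = 312.0, 47.0
      iris_norm = min_max(iris_score, lo=100.0, hi=400.0)
      finger_norm = min_max(finger_score, lo=0.0, hi=100.0)

      # Weighted sum rule; the weights are illustrative and would normally be
      # tuned on a validation set (e.g. proportional to each matcher's accuracy).
      w_iris, w_finger = 0.6, 0.4
      fused = w_iris * iris_norm + w_finger * finger_norm
      decision = "accept" if fused > 0.5 else "reject"
      print(fused, decision)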

  20. Do adjunct tuberculosis tests, when combined with Xpert MTB/RIF, improve accuracy and the cost of diagnosis in a resource-poor setting?

    PubMed Central

    Theron, Grant; Pooran, Anil; Peter, Jonny; van Zyl-Smit, Richard; Mishra, Hridesh Kumar; Meldau, Richard; Calligaro, Greg; Allwood, Brian; Sharma, Surendra Kumar; Dawson, Rod; Dheda, Keertan

    2017-01-01

    Information regarding the utility of adjunct diagnostic tests in combination with Xpert MTB/RIF (Cepheid, Sunnyvale, CA, USA) is limited. We hypothesised adjunct tests could enhance accuracy and/or reduce the cost of tuberculosis (TB) diagnosis prior to MTB/RIF testing, and rule-in or rule-out TB in MTB/RIF-negative individuals. We assessed the accuracy and/or laboratory-associated cost of diagnosis of smear microscopy, chest radiography (CXR) and interferon-γ release assays (IGRAs; T-SPOT-TB (Oxford Immunotec, Oxford, UK) and QuantiFERON-TB Gold In-Tube (Cellestis, Chadstone, Australia)) combined with MTB/RIF for TB in 480 patients in South Africa. When conducted prior to MTB/RIF: 1) smear microscopy followed by MTB/RIF (if smear negative) had the lowest cost of diagnosis of any strategy investigated; 2) a combination of smear microscopy, CXR (if smear negative) and MTB/RIF (if imaging compatible with active TB) did not further reduce the cost per TB case diagnosed; and 3) a normal CXR ruled out TB in 18% of patients (57 out of 324; negative predictive value (NPV) 100%). When downstream adjunct tests were applied to MTB/RIF-negative individuals, radiology ruled out TB in 24% (56 out of 234; NPV 100%), smear microscopy ruled in TB in 21% (seven out of 24) of culture-positive individuals and IGRAs were not useful in either context. In resource-poor settings, smear microscopy combined with MTB/RIF had the highest accuracy and lowest cost of diagnosis compared to either technique alone. In MTB/RIF-negative individuals, CXR has poor rule-in value but can reliably rule out TB in approximately one in four cases. These data inform upon the programmatic utility of MTB/RIF in high-burden settings. PMID:22075479

  1. A theory of local learning, the learning channel, and the optimality of backpropagation.

    PubMed

    Baldi, Pierre; Sadowski, Peter

    2016-11-01

    In a physical neural system, where storage and processing are intimately intertwined, the rules for adjusting the synaptic weights can only depend on variables that are available locally, such as the activity of the pre- and post-synaptic neurons, resulting in local learning rules. A systematic framework for studying the space of local learning rules is obtained by first specifying the nature of the local variables, and then the functional form that ties them together into each learning rule. Such a framework also enables the systematic discovery of new learning rules and exploration of relationships between learning rules and group symmetries. We study polynomial local learning rules stratified by their degree and analyze their behavior and capabilities in both linear and non-linear units and networks. Stacking local learning rules in deep feedforward networks leads to deep local learning. While deep local learning can learn interesting representations, it cannot learn complex input-output functions, even when targets are available for the top layer. Learning complex input-output functions requires local deep learning where target information is communicated to the deep layers through a backward learning channel. The nature of the communicated information about the targets and the structure of the learning channel partition the space of learning algorithms. For any learning algorithm, the capacity of the learning channel can be defined as the number of bits provided about the error gradient per weight, divided by the number of required operations per weight. We estimate the capacity associated with several learning algorithms and show that backpropagation outperforms them by simultaneously maximizing the information rate and minimizing the computational cost. This result is also shown to be true for recurrent networks, by unfolding them in time. The theory clarifies the concept of Hebbian learning, establishes the power and limitations of local learning rules, introduces the learning channel which enables a formal analysis of the optimality of backpropagation, and explains the sparsity of the space of learning rules discovered so far. Copyright © 2016 Elsevier Ltd. All rights reserved.
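
    As one concrete instance of a low-degree polynomial local rule of the kind studied above, the sketch below implements Oja's rule, which uses only quantities available at the unit: the presynaptic input, the unit's own output, and the current weight. The input distribution is invented.

      import numpy as np

      def oja_update(w, pre, eta=0.01):
          """Oja's rule, a degree-2 polynomial local learning rule:
          dw = eta * post * (pre - post * w), with post = w . pre."""
          post = w @ pre                    # linear unit output
          return w + eta * post * (pre - post * w)

      rng = np.random.default_rng(1)
      w = rng.normal(size=3)
      cov = [[3, 1, 0], [1, 2, 0], [0, 0, 1]]
      for _ in range(5000):
          x = rng.multivariate_normal(np.zeros(3), cov)
          w = oja_update(w, x)
      # The weight vector converges toward the top eigenvector of cov.
      print(w / np.linalg.norm(w))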

  2. Integration and Interoperability of Special Operations Forces and Conventional Forces in Irregular Warfare

    DTIC Science & Technology

    2009-06-12

    Phasing Model ......................................................................................................9 Figure 2. The Continuum of...the communist periphery. In a high-intensity conflict, doctrine at the time called for conventional forces to fight the traditional, linear fight...operations and proximity of cross component forces in a non- linear battlespace – Rigid business rules, translator applications, or manual workarounds to

  3. On the Universality and Non-Universality of Spiking Neural P Systems With Rules on Synapses.

    PubMed

    Song, Tao; Xu, Jinbang; Pan, Linqiang

    2015-12-01

    Spiking neural P systems with rules on synapses are a new variant of spiking neural P systems. In these systems, the neurons contain only spikes, while the spiking/forgetting rules are moved onto the synapses. It was previously shown that such a system with 30 neurons (using extended spiking rules) or with 39 neurons (using standard spiking rules) is Turing universal. In this work, this number is improved to 6. Specifically, we construct a Turing universal spiking neural P system with rules on synapses having 6 neurons, which can generate any set of Turing-computable natural numbers. It is also shown that spiking neural P systems with rules on synapses having fewer than three neurons are not Turing universal: i) such systems having one neuron can characterize the family of finite sets of natural numbers; ii) the family of sets of numbers generated by systems having two neurons is included in the family of semi-linear sets of natural numbers.

  4. Non-fragile consensus algorithms for a network of diffusion PDEs with boundary local interaction

    NASA Astrophysics Data System (ADS)

    Xiong, Jun; Li, Junmin

    2017-07-01

    In this study, a non-fragile consensus algorithm is proposed to solve the average consensus problem for a network of diffusion PDEs, modelled by boundary-controlled heat equations. The problem deals with the case where the Neumann-type boundary controllers are corrupted by additive persistent disturbances. To achieve consensus between agents, a linear local interaction rule addressing this requirement is given. The proposed local interaction rules are analysed by applying a Lyapunov-based approach. Multiplicative and additive non-fragile feedback control algorithms are designed, and sufficient conditions for the consensus of the multi-agent systems are presented in terms of linear matrix inequalities. Simulation results are presented to support the effectiveness of the proposed algorithms.

  5. Behavior of Collective Cooperation Yielded by Two Update Rules in Social Dilemmas: Combining Fermi and Moran Rules

    NASA Astrophysics Data System (ADS)

    Xia, Cheng-Yi; Wang, Lei; Wang, Juan; Wang, Jin-Song

    2012-09-01

    We combine the Fermi and Moran update rules in the spatial prisoner's dilemma and snowdrift games to investigate the behavior of collective cooperation among agents on a regular lattice. Large-scale simulations indicate that, compared to a model with only one update rule, the cooperation behavior exhibits richer phenomena, and the role of update dynamics deserves more attention in evolutionary game theory. Meanwhile, we also observe that the introduction of the Moran rule, which requires considering all neighbors' information, can markedly promote the aggregate cooperation level; that is, randomly selecting a neighbor to imitate with probability proportional to its payoff facilitates cooperation among agents. These results contribute to a further understanding of the cooperation dynamics and evolutionary behaviors within many biological, economic and social systems.
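
    For reference, the two update rules being combined can each be stated in a few lines. The sketch below gives the standard Fermi imitation probability and a Moran-like payoff-proportional neighbour choice; the parameter values are illustrative.

      import numpy as np

      def fermi_adopt_probability(payoff_self, payoff_neighbor, K=0.1):
          """Fermi rule: probability of imitating a neighbour's strategy,
          increasing with the payoff difference; K is the noise parameter."""
          return 1.0 / (1.0 + np.exp((payoff_self - payoff_neighbor) / K))

      def moran_pick_neighbor(neighbor_payoffs, rng):
          """Moran-like rule: choose a neighbour to imitate with probability
          proportional to its payoff (payoffs assumed non-negative here)."""
          p = np.asarray(neighbor_payoffs, dtype=float)
          return rng.choice(len(p), p=p / p.sum())

      rng = np.random.default_rng(2)
      print(fermi_adopt_probability(1.0, 1.5))     # likely to imitate the richer neighbour
      print(moran_pick_neighbor([0.5, 2.0, 1.5], rng))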

  6. Ambient-aware continuous care through semantic context dissemination.

    PubMed

    Ongenae, Femke; Famaey, Jeroen; Verstichel, Stijn; De Zutter, Saar; Latré, Steven; Ackaert, Ann; Verhoeve, Piet; De Turck, Filip

    2014-12-04

    The ultimate ambient-intelligent care room contains numerous sensors and devices to monitor the patient, sense and adjust the environment and support the staff. This sensor-based approach results in a large amount of data, which can be processed by current and future applications, e.g., task management and alerting systems. Today, nurses are responsible for coordinating all these applications and supplied information, which reduces the added value and slows down the adoption rate. The aim of the presented research is the design of a pervasive and scalable framework that is able to optimize continuous care processes by intelligently reasoning on the large amount of heterogeneous care data. The developed Ontology-based Care Platform (OCarePlatform) consists of modular components that perform a specific reasoning task. Consequently, they can easily be replicated and distributed. Complex reasoning is achieved by combining the results of different components. To ensure that the components only receive information, which is of interest to them at that time, they are able to dynamically generate and register filter rules with a Semantic Communication Bus (SCB). This SCB semantically filters all the heterogeneous care data according to the registered rules by using a continuous care ontology. The SCB can be distributed and a cache can be employed to ensure scalability. A prototype implementation is presented consisting of a new-generation nurse call system supported by a localization and a home automation component. The amount of data that is filtered and the performance of the SCB are evaluated by testing the prototype in a living lab. The delay introduced by processing the filter rules is negligible when 10 or fewer rules are registered. The OCarePlatform allows disseminating relevant care data for the different applications and additionally supports composing complex applications from a set of smaller independent components. This way, the platform significantly reduces the amount of information that needs to be processed by the nurses. The delay resulting from processing the filter rules is linear in the number of rules. Distributed deployment of the SCB and using a cache allow further improvement of these performance results.

  7. Missing-value estimation using linear and non-linear regression with Bayesian gene selection.

    PubMed

    Zhou, Xiaobo; Wang, Xiaodong; Dougherty, Edward R

    2003-11-22

    Data from microarray experiments are usually in the form of large matrices of expression levels of genes under different experimental conditions. Owing to various reasons, there are frequently missing values. Estimating these missing values is important because they affect downstream analysis, such as clustering, classification and network design. Several methods of missing-value estimation are in use. The problem has two parts: (1) selection of genes for estimation and (2) design of an estimation rule. We propose Bayesian variable selection to obtain genes to be used for estimation, and employ both linear and nonlinear regression for the estimation rule itself. Fast implementation issues for these methods are discussed, including the use of QR decomposition for parameter estimation. The proposed methods are tested on data sets arising from hereditary breast cancer and small round blue-cell tumors. The results compare very favorably with currently used methods based on the normalized root-mean-square error. The appendix is available from http://gspsnap.tamu.edu/gspweb/zxb/missing_zxb/ (user: gspweb; passwd: gsplab).
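
    A minimal sketch of the regression part of such an estimation rule, assuming the Bayesian gene selection step has already produced the predictor genes: the target gene is regressed on the predictors over the columns where it is observed, via QR-based least squares as mentioned above, and the fit is evaluated at the missing column. All data here are invented.

      import numpy as np

      def impute_by_regression(target_row, predictor_rows, missing_col):
          """Estimate one missing expression value by linear regression of the
          target gene on the selected predictor genes."""
          obs = [j for j in range(len(target_row)) if j != missing_col]
          A = np.hstack([predictor_rows[:, obs].T,
                         np.ones((len(obs), 1))])          # affine design matrix
          Q, R = np.linalg.qr(A)
          beta = np.linalg.solve(R, Q.T @ target_row[obs])  # QR least squares
          x_missing = np.append(predictor_rows[:, missing_col], 1.0)
          return x_missing @ beta

      genes = np.array([[1.0, 1.9, 3.1, 4.2],
                        [0.9, 2.1, 2.9, 4.0]])
      target = np.array([2.0, 4.1, 6.0, np.nan])
      print(impute_by_regression(target, genes, missing_col=3))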

  8. Onset of Turbulence in a Pipe

    NASA Astrophysics Data System (ADS)

    Böberg, L.; Brösa, U.

    1988-09-01

    Turbulence in a pipe is derived directly from the Navier-Stokes equation. Analysis of numerical simulations revealed that small disturbances called 'mothers' induce other, much stronger disturbances called 'daughters'. Daughters determine the look of turbulence, while mothers control the transfer of energy from the basic flow to the turbulent motion. From a practical point of view, ruling mothers means ruling turbulence. For theory, the mother-daughter process represents a mechanism permitting chaotic motion in a linearly stable system. The mechanism relies on a property of the linearized problem according to which the eigenfunctions become more and more collinear as the Reynolds number increases. The mathematical methods are described, comparisons with experiments are made, mothers and daughters are analyzed (also graphically, with full particulars), and the systematic construction of small systems of differential equations that mimic the non-linear process by means as simple as possible is explained. We suggest that more than 20 but fewer than 180 essential degrees of freedom take part in the onset of turbulence.

  9. Large-Nc masses of light mesons from QCD sum rules for nonlinear radial Regge trajectories

    NASA Astrophysics Data System (ADS)

    Afonin, S. S.; Solomko, T. D.

    2018-04-01

    The large-Nc masses of light vector, axial, scalar and pseudoscalar mesons are calculated from QCD spectral sum rules for a particular ansatz interpolating the radial Regge trajectories. The ansatz includes a linear part plus exponentially decreasing corrections to the meson masses and residues. This form of the corrections was proposed some time ago for consistency with the analytical structure of the Operator Product Expansion of the two-point correlation functions. We revisited that original analysis and found a second solution of the proposed sum rules. This solution better describes the spectrum of vector and axial mesons.

  10. Linearly Adjustable International Portfolios

    NASA Astrophysics Data System (ADS)

    Fonseca, R. J.; Kuhn, D.; Rustem, B.

    2010-09-01

    We present an approach to multi-stage international portfolio optimization based on the imposition of a linear structure on the recourse decisions. Multiperiod decision problems are traditionally formulated as stochastic programs. Scenario tree based solutions however can become intractable as the number of stages increases. By restricting the space of decision policies to linear rules, we obtain a conservative tractable approximation to the original problem. Local asset prices and foreign exchange rates are modelled separately, which allows for a direct measure of their impact on the final portfolio value.
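
    The restriction to linear decision rules can be made concrete in a few lines: each recourse decision is constrained to be an affine function of the observed risk factors, x_t(xi) = x0_t + X_t @ xi, and the optimization is then over the coefficients x0_t and X_t rather than over arbitrary functions. The numbers below are purely illustrative.

      import numpy as np

      n_assets, n_factors = 3, 2
      x0 = np.array([0.4, 0.35, 0.25])            # nominal allocation (invented)
      X = np.array([[ 0.10, -0.05],               # factor sensitivities (invented)
                    [-0.08,  0.02],
                    [-0.02,  0.03]])

      def recourse_allocation(xi):
          """Allocation implied by the linear rule for realised factors xi."""
          return x0 + X @ xi

      print(recourse_allocation(np.array([0.5, -1.0])))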

  11. Graphical Tools for Linear Structural Equation Modeling

    DTIC Science & Technology

    2014-06-01

    others. Kenny and Milan (2011) write, "Identification is perhaps the most difficult concept for SEM researchers to understand. We have seen SEM...model to using typical SEM software to determine model identifiability. Kenny and Milan (2011) list the following drawbacks: (i) If poor starting...the well-known recursive and null rules (Bollen, 1989) and the regression rule (Kenny and Milan, 2011). A Simple Criterion for Identifying Individual

  12. A Rule-Based Policy-Level Model of Nonsuperpower Behavior in Strategic Conflicts.

    DTIC Science & Technology

    1982-12-01

    a mechanism. The human mind tends to work linearly and to focus implicitly on a few variables. Experience results in subconscious models with far...which is slower. Alternatives to the current ROSIE implementation include reprogramming Scenario Agent in the C language (the language used for the Red...perception, opportunity perception, opportunity response, and assertiveness. As rules are refined, maintenance and reprogramming of the model will be required

  13. Improving detection of dementia in Asian patients with low education: combining the Mini-Mental State Examination and the Informant Questionnaire on Cognitive Decline in the Elderly.

    PubMed

    Narasimhalu, Kaavya; Lee, June; Auchus, Alexander P; Chen, Christopher P L H

    2008-01-01

    Previous work combining the Mini-Mental State Examination (MMSE) and Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) has been conducted in western populations. We ascertained, in an Asian population, (1) the best method of combining the tests, (2) the effects of educational level, and (3) the effect of different dementia etiologies. Data from 576 patients were analyzed (407 nondemented controls, 87 Alzheimer's disease and 82 vascular dementia patients). Sensitivity, specificity and AUC values were obtained using three methods, the 'And' rule, the 'Or' rule, and the 'weighted sum' method. The 'weighted sum' rule had statistically superior AUC and specificity results, while the 'Or' rule had the best sensitivity results. The IQCODE outperformed the MMSE in all analyses. Patients with no education benefited more from combined tests. There was no difference between Alzheimer's disease and vascular dementia populations in the predictive value of any of the combined methods. We recommend that the IQCODE be used to supplement the MMSE whenever available and that the 'weighted sum' method be used to combine the MMSE and the IQCODE, particularly in populations with low education. As the study population selected may not be representative of the general population, further studies are required before generalization to nonclinical samples. (c) 2007 S. Karger AG, Basel.
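
    A minimal sketch of a 'weighted sum' combination of the two tests, mapped through a logistic function to a probability. The weights and intercept below are invented for illustration; in practice they would be fitted (e.g. by logistic regression) on a clinical sample.

      import numpy as np

      def weighted_sum_dementia_score(mmse, iqcode, w_mmse=-0.15, w_iqcode=1.8,
                                      intercept=2.0):
          """Single linear score combining MMSE (0-30, lower = more impaired)
          and IQCODE (1-5, 3 = no change, higher = decline), mapped to a
          probability via the logistic function. Coefficients are invented."""
          z = intercept + w_mmse * mmse + w_iqcode * (iqcode - 3.0)
          return 1.0 / (1.0 + np.exp(-z))

      print(weighted_sum_dementia_score(mmse=21, iqcode=4.1))   # approx. 0.70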

  14. Irrational decision-making in an amoeboid organism: transitivity and context-dependent preferences.

    PubMed

    Latty, Tanya; Beekman, Madeleine

    2011-01-22

    Most models of animal foraging and consumer choice assume that individuals make choices based on the absolute value of items and are therefore 'economically rational'. However, frequent violations of rationality by animals, including humans, suggest that animals use comparative valuation rules. Are comparative valuation strategies a consequence of the way brains process information, or are they an intrinsic feature of biological decision-making? Here, we examine the principles of rationality in an organism with radically different information-processing mechanisms: the brainless, unicellular, slime mould Physarum polycephalum. We offered P. polycephalum amoebas a choice between food options that varied in food quality and light exposure (P. polycephalum is photophobic). The use of an absolute valuation rule will lead to two properties: transitivity and independence of irrelevant alternatives (IIA). Transitivity is satisfied if preferences have a consistent, linear ordering, while IIA states that a decision maker's preference for an item should not change if the choice set is expanded. A violation of either of these principles suggests the use of comparative rather than absolute valuation rules. Physarum polycephalum satisfied transitivity by having linear preference rankings. However, P. polycephalum's preference for a focal alternative increased when a third, inferior quality option was added to the choice set, thus violating IIA and suggesting the use of a comparative valuation process. The discovery of comparative valuation rules in a unicellular organism suggests that comparative valuation rules are ubiquitous, if not universal, among biological decision makers.

  15. [Woolly hair nevus associated with an ipsilateral linear epidermal nevus].

    PubMed

    Martín-González, T; del Boz-González, J; Vera-Casaño, A

    2007-04-01

    We report a 4-year-old boy with two areas of woolly hair in the right parietotemporal region and a linear epidermal nevus in the areas of woolly hair as well as in the ipsilateral hemiface and chin. Evaluation by scanning electron microscopy showed woolly hair with oval transverse section and longitudinal groove. A complete examination ruled out associated anomalies.

  16. Combining fixed effects and instrumental variable approaches for estimating the effect of psychosocial job quality on mental health: evidence from 13 waves of a nationally representative cohort study.

    PubMed

    Milner, Allison; Aitken, Zoe; Kavanagh, Anne; LaMontagne, Anthony D; Pega, Frank; Petrie, Dennis

    2017-06-23

    Previous studies suggest that poor psychosocial job quality is a risk factor for mental health problems, but they use conventional regression analytic methods that cannot rule out reverse causation, unmeasured time-invariant confounding and reporting bias. This study combines two quasi-experimental approaches to improve causal inference by better accounting for these biases: (i) linear fixed effects regression analysis and (ii) linear instrumental variable analysis. We extract 13 annual waves of national cohort data including 13 260 working-age (18-64 years) employees. The exposure variable is self-reported level of psychosocial job quality. The instruments used are two common workplace entitlements. The outcome variable is the Mental Health Inventory (MHI-5). We adjust for measured time-varying confounders. In the fixed effects regression analysis adjusted for time-varying confounders, a 1-point increase in psychosocial job quality is associated with a 1.28-point improvement in mental health on the MHI-5 scale (95% CI: 1.17, 1.40; P < 0.001). When the fixed effects analysis is combined with the instrumental variable analysis, a 1-point increase in psychosocial job quality is related to a 1.62-point improvement on the MHI-5 scale (95% CI: -0.24, 3.48; P = 0.088). Our quasi-experimental results provide evidence confirming job stressors as risk factors for mental ill health, using methods that improve causal inference. © The Author 2017. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  17. 78 FR 62296 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-15

    ...This section of the FEDERAL REGISTER contains documents other than rules or proposed rules that are applicable to the public. Notices of hearings and investigations, committee meetings, agency decisions and rulings, delegations of authority...

  18. 38 CFR 4.68 - Amputation rule.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2013-07-01 2013-07-01 false Amputation rule. 4.68 Section 4.68 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.68 Amputation rule. The combined rating for...

  19. 38 CFR 4.68 - Amputation rule.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2014-07-01 2014-07-01 false Amputation rule. 4.68 Section 4.68 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.68 Amputation rule. The combined rating for...

  20. 38 CFR 4.68 - Amputation rule.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2012-07-01 2012-07-01 false Amputation rule. 4.68 Section 4.68 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.68 Amputation rule. The combined rating for...

  1. Combining Multiple Types of Intelligence to Generate Probability Maps of Moving Targets

    DTIC Science & Technology

    2013-09-01

    normalization coefficient k similar to Dempster-Shafer's combination rule. d. Mass Mean. This rule of combination is the most straightforward one... coefficient, we can state that without normalizing, the updated distribution is given by Eq. (3.3)...Lawrence, KS. Chen, Z. (2003). Bayesian filtering: From Kalman filters to particle filters and beyond. Technical report, McMaster University. Dempster

  2. Extending double modulation: combinatorial rules for identifying the modulations necessary for determining elasticities in metabolic pathways.

    PubMed

    Giersch, C; Cornish-Bowden, A

    1996-10-07

    The double modulation method for determining the elasticities of pathway enzymes, originally devised by Kacser & Burns (Biochem. Soc. Trans. 7, 1149-1160, 1979), is extended to pathways of complex topological structure, including branching and feedback loops. An explicit system of linear equations for the unknown elasticities is derived. The constraints imposed on this linear system imply that modulations of more than one enzyme are not necessarily independent. Simple combinatorial rules are described for identifying without using any algebra the set of independent modulations that allow the determination of the elasticities of any enzyme. By repeated application, the minimum numbers of modulations required to determine the elasticities of all enzymes of a given pathway can be determined. The procedure is illustrated with numerous examples.
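
    The end point of the procedure is a small linear solve: each independent modulation contributes one linear relation in the unknown elasticities, and with enough independent modulations the elasticities follow directly. A sketch with invented coefficients:

      import numpy as np

      # Rows: independent modulations; columns: unknown elasticities of one
      # enzyme. The coefficients (measured fractional responses) are invented.
      M = np.array([[ 0.8, -0.3],
                    [ 0.2,  0.9]])
      observed = np.array([0.12, -0.05])      # measured response combinations
      elasticities = np.linalg.solve(M, observed)
      print(elasticities)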

  3. Non-Condon equilibrium Fermi’s golden rule electronic transition rate constants via the linearized semiclassical method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Xiang; Geva, Eitan

    2016-06-28

    In this paper, we test the accuracy of the linearized semiclassical (LSC) expression for the equilibrium Fermi’s golden rule rate constant for electronic transitions in the presence of non-Condon effects. We do so by performing a comparison with the exact quantum-mechanical result for a model where the donor and acceptor potential energy surfaces are parabolic and identical except for shifts in the equilibrium energy and geometry, and the coupling between them is linear in the nuclear coordinates. Since non-Condon effects may or may not give rise to conical intersections, both possibilities are examined by considering: (1) a modified Garg-Onuchic-Ambegaokar model for charge transfer in the condensed phase, where the donor-acceptor coupling is linear in the primary mode coordinate, and for which non-Condon effects do not give rise to a conical intersection; (2) the linear vibronic coupling model for electronic transitions in gas phase molecules, where non-Condon effects give rise to conical intersections. We also present a comprehensive comparison between the linearized semiclassical expression and a progression of more approximate expressions. The comparison is performed over a wide range of frictions and temperatures for model (1) and over a wide range of temperatures for model (2). The linearized semiclassical method is found to reproduce the exact quantum-mechanical result remarkably well for both models over the entire range of parameters under consideration. In contrast, the more approximate expressions are observed to deviate considerably from the exact result in some regions of parameter space.

  4. 38 CFR 4.68 - Amputation rule.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2010-07-01 2010-07-01 false Amputation rule. 4.68... DISABILITIES Disability Ratings The Musculoskeletal System § 4.68 Amputation rule. The combined rating for disabilities of an extremity shall not exceed the rating for the amputation at the elective level, were...

  5. 38 CFR 4.68 - Amputation rule.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 1 2011-07-01 2011-07-01 false Amputation rule. 4.68... DISABILITIES Disability Ratings The Musculoskeletal System § 4.68 Amputation rule. The combined rating for disabilities of an extremity shall not exceed the rating for the amputation at the elective level, were...

  6. 7 CFR 29.2634 - Rule 18.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2634 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...

  7. 7 CFR 29.3121 - Rule 18.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3121 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves, or any lot which is not crude but contains 20 percent or more of green and crude combined, shall...

  8. 7 CFR 29.1125 - Rule 19.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.1125 Rule 19. Any lot of tobacco containing 20 percent or more of green tobacco, or any lot which is not crude but contains 20 percent or more of green and crude combined shall...

  9. 7 CFR 29.2409 - Rule 18.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2409 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...

  10. 7 CFR 29.3620 - Rule 19.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3620 Rule 19. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...

  11. 7 CFR 29.3620 - Rule 19.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3620 Rule 19. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...

  12. 7 CFR 29.2409 - Rule 18.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2409 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...

  13. 7 CFR 29.1125 - Rule 19.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.1125 Rule 19. Any lot of tobacco containing 20 percent or more of green tobacco, or any lot which is not crude but contains 20 percent or more of green and crude combined shall...

  14. 7 CFR 29.2634 - Rule 18.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2634 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...

  15. 7 CFR 29.3121 - Rule 18.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3121 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves, or any lot which is not crude but contains 20 percent or more of green and crude combined, shall...

  16. Rotational relaxation of AlO+(1Σ+) in collision with He

    NASA Astrophysics Data System (ADS)

    Denis-Alpizar, O.; Trabelsi, T.; Hochlaf, M.; Stoecklin, T.

    2018-03-01

    The rate coefficients for the rotational de-excitation of AlO+ in collisions with He are determined. The possible production mechanisms of the AlO+ ion in both diffuse and dense molecular clouds are first discussed. A set of ab initio interaction energies is computed at the CCSD(T)-F12 level of theory, and a three-dimensional analytical model of the potential energy surface is obtained using a linear combination of reproducing kernel Hilbert space polynomials together with an analytical long-range potential. The nuclear-spin-free close-coupling equations are solved, and the de-excitation rate coefficients for the lowest 15 rotational states of AlO+ are reported. A propensity rule favouring Δj = -1 transitions is found, and the hyperfine-resolved state-to-state rate coefficients are also discussed.
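
    A generic sketch of kernel-based (RKHS) interpolation of tabulated interaction energies, with a Gaussian kernel standing in for the reproducing-kernel polynomials used in the paper; the grid and energies below are invented stand-ins for the ab initio points.

      import numpy as np

      def kernel_interpolate(train_x, train_y, query_x, length=0.5, reg=1e-8):
          """Interpolate tabulated energies: solve K alpha = y on the training
          grid, then evaluate sum_i alpha_i k(x, x_i) at the query points."""
          def k(a, b):
              return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length ** 2))
          K = k(train_x, train_x) + reg * np.eye(len(train_x))
          alpha = np.linalg.solve(K, train_y)
          return k(query_x, train_x) @ alpha

      r = np.linspace(2.5, 8.0, 12)                 # radial grid (invented)
      E = 1e3 * ((3.2 / r) ** 12 - (3.2 / r) ** 6)  # stand-in energies
      print(kernel_interpolate(r, E, np.array([3.0, 5.0])))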

  17. SAW based systems for mobile communications satellites

    NASA Technical Reports Server (NTRS)

    Peach, R. C.; Miller, N.; Lee, M.

    1993-01-01

    Modern mobile communications satellites, such as INMARSAT 3, EMS, and ARTEMIS, use advanced onboard processing to make efficient use of the available L-band spectrum. In all of these cases, high performance surface acoustic wave (SAW) devices are used. SAW filters can provide high selectivity (100-200 kHz transition widths), combined with flat amplitude and linear phase characteristics; their simple construction and radiation hardness also makes them especially suitable for space applications. An overview of the architectures used in the above systems, describing the technologies employed, and the use of bandwidth switchable SAW filtering (BSSF) is given. The tradeoffs to be considered when specifying a SAW based system are analyzed, using both theoretical and experimental data. Empirical rules for estimating SAW filter performance are given. Achievable performance is illustrated using data from the INMARSAT 3 engineering model (EM) processors.

  18. The Meyer-Neldel rule and the statistical shift of the Fermi level in amorphous semiconductors

    NASA Astrophysics Data System (ADS)

    Kikuchi, Minoru

    1988-11-01

    The statistical model is used to study the origin of the Meyer-Neldel (MN) rule [σ0∝exp(AEσ)] in a tetrahedral amorphous system. It is shown that a deep minimum in the gap density-of-states spectrum can lead to a linear relation between the Fermi energy F(T) and its derivative dF/d(kT), as required by the rule. An expression is derived which relates the constant A in the rule to the gap density-of-states spectrum. The dispersion ranges of σ0 and Eσ are found to be related to the constant A. Model calculations yield a magnitude of A and a wide dispersion of σ0 and Eσ in fair agreement with the experimental observations. A discussion is given of the extent to which the MN rule depends on the gap density-of-states spectrum.
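
    Since the MN rule states that σ0 is proportional to exp(A·Eσ), the constant A is simply the slope of ln σ0 versus Eσ. A sketch that fits A from invented prefactor/activation-energy pairs:

      import numpy as np

      # Meyer-Neldel rule: ln(sigma_0) is linear in the activation energy.
      E_sigma = np.array([0.55, 0.70, 0.85, 1.00])        # eV (invented)
      sigma_0 = np.array([2.0e1, 7.5e2, 2.6e4, 9.0e5])    # S/cm (invented)

      A_fit, lnC = np.polyfit(E_sigma, np.log(sigma_0), deg=1)
      print("A =", A_fit, "1/eV")                         # slope gives A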

  19. Automatic Learning of Fine Operating Rules for Online Power System Security Control.

    PubMed

    Sun, Hongbin; Zhao, Feng; Wang, Hao; Wang, Kang; Jiang, Weiyong; Guo, Qinglai; Zhang, Boming; Wehenkel, Louis

    2016-08-01

    Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.

  20. Polarization-direction correlation measurement --- Experimental test of the PDCO methods

    NASA Astrophysics Data System (ADS)

    Starosta, K.; Morek, T.; Droste, Ch.; Rohoziński, S. G.; Srebrny, J.; Bergstrem, M.; Herskind, B.

    1998-04-01

    Information about spins and parities of excited states is crucial for nuclear structure studies. In "in-beam" gamma-ray spectroscopy, directional correlation (DCO) or angular distribution measurements are widely used tools for multipolarity assignment, although it is known that neither of these methods is sensitive to the electric or magnetic character of gamma radiation. The multipolarity of gamma rays may be determined when the results of the DCO analysis are combined with the results of linear polarization measurements. The large total efficiency of modern multidetector arrays allows one to carry out coincidence measurements between the polarimeter and the remaining detectors. The aim of the present study was to test experimentally the possibility of polarization-direction correlation (PDCO) measurements using the EUROGAM II array. The studied nucleus was ^164Yb, produced in the ^138Ba(^30Si,4n) reaction at beam energies of 150 and 155 MeV. The angular correlation, linear polarization and direction-polarization correlation were measured for the strong transitions in yrast and non-yrast cascades. Application of the PDCO analysis to a transition connecting a side band with the yrast band allowed one to rule out most of the ambiguities in multipolarity assignment that occur when only angular correlations are used.

  1. Accuracy and precision of the signs and symptoms of streptococcal pharyngitis in children: a systematic review.

    PubMed

    Shaikh, Nader; Swaminathan, Nithya; Hooper, Emma G

    2012-03-01

    To conduct a systematic review to determine whether clinical findings can be used to rule in or to rule out streptococcal pharyngitis in children. Two authors independently searched MEDLINE and EMBASE. We included articles if they contained data on the accuracy of symptoms or signs of streptococcal pharyngitis, individually or combined into prediction rules, in children 3-18 years of age. Thirty-eight articles with data on individual symptoms and signs and 15 articles with data on prediction rules met all inclusion criteria. In children with sore throat, the presence of a scarlatiniform rash (likelihood ratio [LR], 3.91; 95% CI, 2.00-7.62), palatal petechiae (LR, 2.69; CI, 1.92-3.77), pharyngeal exudates (LR, 1.85; CI, 1.58-2.16), vomiting (LR, 1.79; CI, 1.58-2.16), and tender cervical nodes (LR, 1.72; CI, 1.54-1.93) were moderately useful in identifying those with streptococcal pharyngitis. Nevertheless, no individual symptoms or signs were effective in ruling in or ruling out streptococcal pharyngitis. Symptoms and signs, either individually or combined into prediction rules, cannot be used to definitively diagnose or rule out streptococcal pharyngitis. Copyright © 2012 Mosby, Inc. All rights reserved.
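
    Likelihood ratios of this kind update the pre-test odds multiplicatively: post-test odds = pre-test odds × LR. A short sketch using the scarlatiniform-rash LR reported above; the 37% pre-test probability is illustrative only.

      def update_probability(pretest_prob, likelihood_ratio):
          """Convert probability to odds, apply the LR, convert back."""
          pre_odds = pretest_prob / (1.0 - pretest_prob)
          post_odds = pre_odds * likelihood_ratio
          return post_odds / (1.0 + post_odds)

      # Rash present (LR 3.91): probability rises from 0.37 to about 0.70.
      print(update_probability(0.37, 3.91))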

  2. [Intersection point rule for the retention value with mobile phase composition and boiling point of the homologues and chlorobenzenes in soil leaching column chromatography].

    PubMed

    Xu, F; Liang, X; Lin, B; Su, F

    1999-03-01

    Based on the linear retention equation of the logarithm of the capacity factor (log k') vs. the methanol volume fraction (psi) of the aqueous binary mobile phase in soil leaching column chromatography, an intersection point rule for the log k' of homologues and weakly polar chlorobenzenes, as a function of psi as well as of boiling point, has been derived, owing to the similar interactions among solutes of the same series, the stationary phase (soil) and the eluent (methanol-water). These rules were verified by experimental data for homologues (n-alkylbenzenes, methylbenzenes) and weakly polar chlorobenzenes.

  3. Stressed out and overcommitted! The relationships between time demands and family rules and parents’ and their child’s weight status

    PubMed Central

    Hearst, Mary O.; Sevcik, Sarah; Fulkerson, Jayne A.; Pasch, Keryn E.; Harnack, Lisa J.; Lytle, Leslie A.

    2013-01-01

    Objective To determine the relationship between parental time demands, the presence and enforcement of family rules, and parent/child dyad weight status. Methods Dyads of one child and one parent per family (n=681 dyads) in the Twin Cities, Minnesota, 2007–2008, had height and weight measured and completed a survey covering demographics, time demands and family-rules-related questions. Parent/child dyads were classified into four healthy-weight/overweight categories. Multivariate linear associations were analyzed with SAS, testing for interaction by work status and family composition (p<0.10). Results In adjusted models, lack of family rules and difficulty with rule enforcement were statistically lower in dyads in which both the parent and child were of healthy weight compared to dyads in which both were overweight (difference in family rules scores=0.49, p=0.03; difference in rule enforcement scores=1.09, p<0.01). Among parents who worked full-time, healthy-weight dyads reported lower time demands than other dyads (difference in time demands scores=1.44, p=0.01). Conclusions Family experiences of time demands and the use of family rules are related to the weight status of parents and children within families. PMID:22228775

  4. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  5. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  6. Jahn-Teller effect versus Hund's rule coupling in C60N-

    NASA Astrophysics Data System (ADS)

    Wehrli, S.; Sigrist, M.

    2007-09-01

    We propose variational states for the ground state and the low-energy collective rotator excitations in negatively charged C60N- ions (N=1,…,5). The approach includes the linear electron-phonon coupling and the Coulomb interaction on the same level. The electron-phonon coupling is treated within the effective mode approximation, which yields the linear t1u⊗Hg Jahn-Teller problem, whereas the Coulomb interaction gives rise to Hund's rule coupling for N=2,3,4. The Hamiltonian has accidental SO(3) symmetry, which allows an elegant formulation in terms of angular momenta. Trial states are constructed from coherent states using projection operators onto angular momentum subspaces, which results in good variational states for the complete parameter range. The evaluation of the corresponding energies is to a large extent analytical. We use the approach for a detailed analysis of the competition between the Jahn-Teller effect and Hund's rule coupling, which determines the spin state for N=2,3,4. We calculate the low-spin-high-spin gap for N=2,3,4 as a function of the Hund's rule coupling constant J. We find that the experimentally measured gaps suggest a coupling constant in the range J = 60-80 meV. Using a finite value for J, we recalculate the ground state energies of the C60N- ions and find that the Jahn-Teller energy gain is partly counterbalanced by the Hund's rule coupling. In particular, the ground state energies for N=2,3,4 are almost equal.

  7. When global rule reversal meets local task switching: The neural mechanisms of coordinated behavioral adaptation to instructed multi-level demand changes.

    PubMed

    Shi, Yiquan; Wolfensteller, Uta; Schubert, Torsten; Ruge, Hannes

    2018-02-01

    Cognitive flexibility is essential to cope with changing task demands and often it is necessary to adapt to combined changes in a coordinated manner. The present fMRI study examined how the brain implements such multi-level adaptation processes. Specifically, on a "local," hierarchically lower level, switching between two tasks was required across trials while the rules of each task remained unchanged for blocks of trials. On a "global" level regarding blocks of twelve trials, the task rules could reverse or remain the same. The current task was cued at the start of each trial while the current task rules were instructed before the start of a new block. We found that partly overlapping and partly segregated neural networks play different roles when coping with the combination of global rule reversal and local task switching. The fronto-parietal control network (FPN) supported the encoding of reversed rules at the time of explicit rule instruction. The same regions subsequently supported local task switching processes during actual implementation trials, irrespective of rule reversal condition. By contrast, a cortico-striatal network (CSN) including supplementary motor area and putamen was increasingly engaged across implementation trials and more so for rule reversal than for nonreversal blocks, irrespective of task switching condition. Together, these findings suggest that the brain accomplishes the coordinated adaptation to multi-level demand changes by distributing processing resources either across time (FPN for reversed rule encoding and later for task switching) or across regions (CSN for reversed rule implementation and FPN for concurrent task switching). © 2017 Wiley Periodicals, Inc.

  8. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and it is hard to make one polynomial simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the coordinate transformation if we construct polynomials to approximate the transformation rule instead of the "true" coordinates? In addition, how do models built on such polynomials compare with traditional numerical models of even higher degree? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
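    The approach can be illustrated with a small sketch. The following Python example is a simplified stand-in for the LRA-model, using a Mercator-type latitude rule rather than the paper's full EPSG transformations: it approximates the transformation rule piecewise linearly on a graticule, and the piecewise-linear form is trivially invertible, which is what makes such a model reversible.

    ```python
    import numpy as np

    # Forward latitude rule of a Mercator-type cylindrical projection:
    # y = R * ln(tan(pi/4 + phi/2)). Illustrative assumption, not the
    # paper's exact EPSG 4326 -> EPSG 32650 transformation.
    R = 6378137.0  # WGS 84 semi-major axis, metres

    def mercator_y(phi_deg):
        phi = np.radians(phi_deg)
        return R * np.log(np.tan(np.pi / 4 + phi / 2))

    # LRA-style idea: approximate the transformation *rule* with linear
    # polynomials on each graticule cell, instead of fitting one global
    # high-order polynomial to "true" coordinates.
    grid = np.arange(0.0, 60.0 + 1e-9, 0.01)  # 0.01-degree graticule
    y_grid = mercator_y(grid)

    def lra_forward(phi_deg):
        return np.interp(phi_deg, grid, y_grid)

    def lra_inverse(y):
        # Monotone piecewise-linear functions invert exactly, so the
        # same table serves forward and inverse transformations.
        return np.interp(y, y_grid, grid)

    rng = np.random.default_rng(0)
    phi_test = rng.uniform(0.0, 60.0, 100000)
    err = np.abs(lra_forward(phi_test) - mercator_y(phi_test))
    print(f"max forward error on a 0.01-degree graticule: {err.max():.4f} m")
    ```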

  9. Highly scalable and robust rule learner: performance evaluation and comparison.

    PubMed

    Kurgan, Lukasz A; Cios, Krzysztof J; Dick, Scott

    2006-02-01

    Business intelligence and bioinformatics applications increasingly require the mining of datasets consisting of millions of data points, or crafting real-time enterprise-level decision support systems for large corporations and drug companies. In all cases, there needs to be an underlying data mining system, and this mining system must be highly scalable. To this end, we describe a new rule learner called DataSqueezer. The learner belongs to the family of inductive supervised rule extraction algorithms. DataSqueezer is a simple, greedy, rule builder that generates a set of production rules from labeled input data. In spite of its relative simplicity, DataSqueezer is a very effective learner. The rules generated by the algorithm are compact, comprehensible, and have accuracy comparable to rules generated by other state-of-the-art rule extraction algorithms. The main advantages of DataSqueezer are very high efficiency, and missing data resistance. DataSqueezer exhibits log-linear asymptotic complexity with the number of training examples, and it is faster than other state-of-the-art rule learners. The learner is also robust to large quantities of missing data, as verified by extensive experimental comparison with the other learners. DataSqueezer is thus well suited to modern data mining and business intelligence tasks, which commonly involve huge datasets with a large fraction of missing data.

  10. Evidence Combination From an Evolutionary Game Theory Perspective.

    PubMed

    Deng, Xinyang; Han, Deqiang; Dezert, Jean; Deng, Yong; Shyr, Yu

    2016-09-01

    Dempster-Shafer evidence theory is a primary methodology for multisource information fusion because it is good at dealing with uncertain information. This theory provides Dempster's rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper considers a biological and evolutionary perspective on the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multievidence system. Within the proposed ECR, we develop a Jaccard matrix game to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimic the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors that appear in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as the solution's stability and convergence, have been proved mathematically as well.
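    For reference, the baseline the ECR responds to is Dempster's rule of combination, m(A) = (1/(1-K)) * sum over B∩C=A of m1(B)m2(C), where K is the total conflicting mass. A minimal Python sketch, applied to Zadeh's classic example of two almost fully conflicting experts, reproduces the counter-intuitive behavior the abstract mentions:

    ```python
    from itertools import product

    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions.

        Mass functions map frozensets (focal elements) to masses summing to 1.
        """
        combined, conflict = {}, 0.0
        for (b, mb), (c, mc) in product(m1.items(), m2.items()):
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mb * mc
            else:
                conflict += mb * mc  # mass falling on the empty set
        if conflict >= 1.0:
            raise ValueError("total conflict: evidences cannot be combined")
        # Normalize by 1 - K, where K is the total conflicting mass.
        return {a: v / (1.0 - conflict) for a, v in combined.items()}

    # Zadeh's paradox: both experts consider C nearly impossible, yet
    # Dempster's rule concludes C with certainty.
    A, B, C = frozenset("A"), frozenset("B"), frozenset("C")
    m1 = {A: 0.99, C: 0.01}
    m2 = {B: 0.99, C: 0.01}
    print(dempster_combine(m1, m2))  # {frozenset({'C'}): 1.0}
    ```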

  11. How to select combination operators for fuzzy expert systems using CRI

    NASA Technical Reports Server (NTRS)

    Turksen, I. B.; Tian, Y.

    1992-01-01

    A method to select combination operators for fuzzy expert systems using the Compositional Rule of Inference (CRI) is proposed. First, fuzzy inference processes based on CRI are classified into three categories in terms of their inference results: the Expansion Type Inference, the Reduction Type Inference, and Other Type Inferences. Further, implication operators under Sup-T composition are classified as the Expansion Type Operator, the Reduction Type Operator, and the Other Type Operators. Finally, the combination of rules or their consequences is investigated for inference processes based on CRI.

  12. Simple and multiple linear regression: sample size considerations.

    PubMed

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.
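    One of the closed-form variance formulae alluded to here is the standard error of the slope in simple linear regression, SE(b1) = sigma / (sd(x) * sqrt(n-1)), with sigma the residual standard deviation. A short Python sketch with simulated data (all values hypothetical) makes the sample size implications concrete:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x = rng.normal(size=n)
    y = 2.0 + 0.5 * x + rng.normal(scale=1.5, size=n)

    # Ordinary least squares via the normal equations.
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma = np.sqrt(resid @ resid / (n - 2))  # residual SD

    # Closed-form standard error of the slope:
    # SE(b1) = sigma / sqrt(sum((x - xbar)^2)) = sigma / (sd(x) * sqrt(n-1)).
    se_b1 = sigma / (x.std(ddof=1) * np.sqrt(n - 1))
    print(f"slope = {beta[1]:.3f} +/- {se_b1:.3f}")
    # The formula makes the design trade-off explicit: halving SE(b1)
    # requires either 4x the sample size or 2x the spread of x.
    ```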

  13. Iterative combining rules for the van der Waals potentials of mixed rare gas systems

    NASA Astrophysics Data System (ADS)

    Wei, L. M.; Li, P.; Tang, K. T.

    2017-05-01

    An iterative procedure is introduced to make the results of some simple combining rules compatible with the Tang-Toennies potential model. The method is used to calculate the well locations Re and the well depths De of the van der Waals potentials of mixed rare gas systems from the corresponding values of the homo-nuclear dimers. When the "sizes" of the two interacting atoms are very different, several rounds of iteration are required for the results to converge, and the converged results can differ substantially from the starting values obtained from the combining rules. However, if the sizes of the interacting atoms are close, only one or even no iteration is necessary for convergence. In either case, the converged results are accurate descriptions of the interaction potentials of the hetero-nuclear dimers.

  14. Theoretical and subjective bit assignments in transform picture coding

    NASA Technical Reports Server (NTRS)

    Jones, H. W., Jr.

    1977-01-01

    It is shown that all combinations of symmetrical input distributions with difference distortion measures give a bit assignment rule identical to the well-known rule for a Gaussian input distribution with mean-square error. Published work is examined to show that the bit assignment rule is useful for transforms of full pictures, but subjective bit assignments for transform picture coding using small block sizes differ significantly from the theoretical bit assignment rule. An intuitive explanation is based on subjective design experience, and a subjectively obtained bit assignment rule is given.
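    The well-known rule in question gives each transform coefficient its equal share of the bit budget plus a correction depending on its variance relative to the geometric mean of all coefficient variances. A minimal Python sketch with hypothetical coefficient variances:

    ```python
    import numpy as np

    def bit_assignment(variances, total_bits):
        """Classic bit assignment rule for transform coefficients under a
        Gaussian / mean-square-error assumption:
            b_k = B/N + 0.5 * log2(var_k / geometric_mean(var)).
        """
        v = np.asarray(variances, dtype=float)
        n = len(v)
        geo_mean = np.exp(np.mean(np.log(v)))
        return total_bits / n + 0.5 * np.log2(v / geo_mean)

    # Hypothetical coefficient variances of an 8-point transform block.
    variances = [100.0, 40.0, 12.0, 6.0, 3.0, 1.5, 0.8, 0.4]
    bits = bit_assignment(variances, total_bits=16)
    print(np.round(bits, 2), "sum =", round(bits.sum(), 2))
    # In practice the real-valued assignments are rounded to non-negative
    # integers, which is one place where subjective tuning enters.
    ```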

  15. Logical-rule models of classification response times: a synthesis of mental-architecture, random-walk, and decision-bound approaches.

    PubMed

    Fific, Mario; Little, Daniel R; Nosofsky, Robert M

    2010-04-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli along a set of component dimensions. Those independent decisions are then combined via logical rules to determine the overall categorization response. The time course of the independent decisions is modeled via random-walk processes operating along individual dimensions. Alternative mental architectures are used as mechanisms for combining the independent decisions to implement the logical rules. We derive fundamental qualitative contrasts for distinguishing among the predictions of the rule models and major alternative models of classification RT. We also use the models to predict detailed RT-distribution data associated with individual stimuli in tasks of speeded perceptual classification. PsycINFO Database Record (c) 2010 APA, all rights reserved.

  16. Modulation/demodulation techniques for satellite communications. Part 2: Advanced techniques. The linear channel

    NASA Technical Reports Server (NTRS)

    Omura, J. K.; Simon, M. K.

    1982-01-01

    A theory is presented for deducing and predicting the performance of transmitter/receivers for bandwidth-efficient modulations suitable for use on the linear satellite channel. The underlying principle is the development of receiver structures based on the maximum-likelihood decision rule. The resulting performance prediction tools, e.g., channel cutoff rate and bit error probability transfer function bounds, are applied to these modulation/demodulation techniques.

  17. Global strength assessment in oblique waves of a large gas carrier ship, based on a non-linear iterative method

    NASA Astrophysics Data System (ADS)

    Domnisoru, L.; Modiga, A.; Gasparotti, C.

    2016-08-01

    At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ship classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis that considers the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch, and roll trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical study case we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ship classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results point out that the non-linear iterative approach is necessary for computing the extreme loads induced by oblique waves, ensuring better accuracy in the large LPG ship's longitudinal strength assessment.

  18. Neural decoding of collective wisdom with multi-brain computing.

    PubMed

    Eckstein, Miguel P; Das, Koel; Pham, Binh T; Peterson, Matthew F; Abbey, Craig K; Sy, Jocelyn L; Giesbrecht, Barry

    2012-01-02

    Group decisions and even aggregation of multiple opinions lead to greater decision accuracy, a phenomenon known as collective wisdom. Little is known about the neural basis of collective wisdom and whether its benefits arise in late decision stages or in early sensory coding. Here, we use electroencephalography and multi-brain computing with twenty humans making perceptual decisions to show that combining neural activity across brains increases decision accuracy paralleling the improvements shown by aggregating the observers' opinions. Although the largest gains result from an optimal linear combination of neural decision variables across brains, a simpler neural majority decision rule, ubiquitous in human behavior, results in substantial benefits. In contrast, an extreme neural response rule, akin to a group following the most extreme opinion, results in the least improvement with group size. Analyses controlling for number of electrodes and time-points while increasing number of brains demonstrate unique benefits arising from integrating neural activity across different brains. The benefits of multi-brain integration are present in neural activity as early as 200 ms after stimulus presentation in lateral occipital sites and no additional benefits arise in decision related neural activity. Sensory-related neural activity can predict collective choices reached by aggregating individual opinions, voting results, and decision confidence as accurately as neural activity related to decision components. Estimation of the potential for the collective to execute fast decisions by combining information across numerous brains, a strategy prevalent in many animals, shows large time-savings. Together, the findings suggest that for perceptual decisions the neural activity supporting collective wisdom and decisions arises in early sensory stages and that many properties of collective cognition are explainable by the neural coding of information across multiple brains. Finally, our methods highlight the potential of multi-brain computing as a technique to rapidly and in parallel gather increased information about the environment as well as to access collective perceptual/cognitive choices and mental states. Copyright © 2011 Elsevier Inc. All rights reserved.
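    The gap between the optimal linear combination and the majority rule can be illustrated with a toy signal detection simulation. The sketch below assumes independent, equal-sensitivity Gaussian decision variables per brain (real EEG decision variables are correlated, so this overstates the group benefit):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_trials, n_brains, d_prime = 20000, 20, 0.8

    # Each "brain" contributes a noisy Gaussian decision variable with
    # mean d' on signal trials and 0 on noise trials (independence and
    # equal sensitivity are simplifying assumptions).
    signal = rng.integers(0, 2, n_trials)
    dv = rng.normal(size=(n_trials, n_brains)) + d_prime * signal[:, None]

    # Optimal linear combination (equal weights are optimal here).
    linear_choice = dv.mean(axis=1) > d_prime / 2
    # Neural majority rule: each brain casts a vote at its own criterion.
    votes = dv > d_prime / 2
    majority_choice = votes.sum(axis=1) > n_brains / 2

    print("linear combination accuracy:", (linear_choice == signal).mean())
    print("majority rule accuracy:     ", (majority_choice == signal).mean())
    ```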

  19. 78 FR 61994 - Combined Notice of Filings-2

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-10

    ... Notices. Federal Register. This section of the FEDERAL REGISTER contains documents other than rules or proposed rules that are applicable to the public. Notices of hearings and investigations, committee meetings, agency decisions and rulings, delegations of authority, filing of petitions and applications and agency statements of...

  20. Effects of a combined parent-student alcohol prevention program on intermediate factors and adolescents' drinking behavior: A sequential mediation model.

    PubMed

    Koning, Ina M; Maric, Marija; MacKinnon, David; Vollebergh, Wilma A M

    2015-08-01

    Previous work revealed that the combined parent-student alcohol prevention program (PAS) effectively postponed alcohol initiation through its hypothesized intermediate factors: an increase in strict parental rule setting and in adolescents' self-control (Koning, van den Eijnden, Verdurmen, Engels, & Vollebergh, 2011). This study examines whether parental strictness precedes an increase in adolescents' self-control by testing a sequential mediation model. A cluster randomized trial included 3,245 Dutch early adolescents (M age = 12.68, SD = 0.50) and their parents, randomized over 4 conditions: (1) parent intervention, (2) student intervention, (3) combined intervention, and (4) control group. The outcome measure was the amount of weekly drinking measured at ages 12 to 15: a baseline assessment (T0) and 3 follow-up assessments (T1-T3). Main effects of the combined and parent interventions on weekly drinking at T3 were found. The effect of the combined intervention on weekly drinking (T3) was mediated via an increase in strict rule setting (T1) and adolescents' subsequent self-control (T2). In addition, the indirect effect of the combined intervention via rule setting (T1) was significant. No reciprocal sequential mediation (self-control at T1 prior to rules at T2) was found. The current study is one of the few studies reporting sequential mediation effects of youth intervention outcomes. It underscores the need to involve parents in youth alcohol prevention programs, and to target both parents and adolescents, so that change in parents' behavior enables change in their offspring. (c) 2015 APA, all rights reserved.

  1. Stiffness and Damping Coefficient Estimation of Compliant Surface Gas Bearings for Oil-Free Turbomachinery

    NASA Technical Reports Server (NTRS)

    DellaCorte, Christopher

    2010-01-01

    Foil gas bearings are a key technology in many commercial and emerging Oil-Free turbomachinery systems. These bearings are non-linear and have been difficult to model analytically in terms of performance characteristics such as load capacity, power loss, stiffness, and damping. Previous investigations led to an empirically derived method, a rule of thumb, to estimate load capacity. This method has been a valuable tool in system development. The current paper extends this tool concept to include rules for stiffness and damping coefficient estimation. It is expected that these rules will further accelerate the development and deployment of advanced Oil-Free machines operating on foil gas bearings.

  2. Sum rules for the uniform-background model of an atomic-sharp metal corner

    NASA Astrophysics Data System (ADS)

    Streitenberger, P.

    1994-04-01

    Analytical results are derived for the electrostatic potential of an atomic-sharp 90° metal corner in the uniform-background model. The electrostatic potential at a free jellium edge and at the jellium corner, respectively, is determined exactly in terms of the energy per electron of the uniform electron gas integrated over the background density. The surface energy, the edge formation energy, and the derivative of the corner formation energy with respect to the background density are given as integrals over the electrostatic potential. This constitutes a novel route to such sum rules, including the Budd-Vannimenus sum rules for a free jellium surface, based on general properties of linear response functions.

  3. Thumb rule of visual angle: a new confirmation.

    PubMed

    Groot, C; Ortega, F; Beltran, F S

    1994-02-01

    The classical thumb rule of visual angle was reexamined: the visual angle was measured as a function of thumb width and the distance between eye and thumb. Thumb width at arm's length was measured for 67 second-year psychology students. The visual angle was about 2 degrees, as R. P. O'Shea confirmed in 1991. We also confirmed a linear relationship between thumb width at arm's length and the visual angle.
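    The rule is easy to check numerically from the definition of visual angle, theta = 2*atan(w/(2d)). A worked example with assumed values of 2 cm for thumb width and 57 cm for eye-to-thumb distance:

    ```python
    import math

    # Visual angle subtended by an object of width w at distance d:
    # theta = 2 * atan(w / (2 * d)).
    w = 2.0   # assumed thumb width, cm
    d = 57.0  # assumed eye-to-thumb distance at arm's length, cm

    theta = 2 * math.atan(w / (2 * d))
    print(f"visual angle = {math.degrees(theta):.2f} degrees")  # ~2.01
    # 57 cm is the classic viewing distance at which 1 cm subtends about
    # 1 degree, which is why a ~2 cm thumb covers about 2 degrees.
    ```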

  4. Optimal combination of illusory and luminance-defined 3-D surfaces: A role for ambiguity.

    PubMed

    Hartle, Brittney; Wilcox, Laurie M; Murray, Richard F

    2018-04-01

    The shape of the illusory surface in stereoscopic Kanizsa figures is determined by the interpolation of depth from the luminance edges of adjacent inducing elements. Despite ambiguity in the position of illusory boundaries, observers reliably perceive a coherent three-dimensional (3-D) surface. However, this ambiguity may contribute additional uncertainty to the depth percept beyond what is expected from measurement noise alone. We evaluated the intrinsic ambiguity of illusory boundaries by using a cue-combination paradigm to measure the reliability of depth percepts elicited by stereoscopic illusory surfaces. We assessed the accuracy and precision of depth percepts using 3-D Kanizsa figures relative to luminance-defined surfaces. The location of the surface peak was defined by illusory boundaries, luminance-defined edges, or both. Accuracy and precision were assessed using a depth-discrimination paradigm. A maximum likelihood linear cue combination model was used to evaluate the relative contribution of illusory and luminance-defined signals to the perceived depth of the combined surface. Our analysis showed that the standard deviation of depth estimates was consistent with an optimal cue combination model, but the points of subjective equality indicated that observers consistently underweighted the contribution of illusory boundaries. This systematic underweighting may reflect a combination rule that attributes additional intrinsic ambiguity to the location of the illusory boundary. Although previous studies show that illusory and luminance-defined contours share many perceptual similarities, our model suggests that ambiguity plays a larger role in the perceptual representation of illusory contours than of luminance-defined contours.
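    For reference, the maximum likelihood linear cue combination model weights each cue inversely to its variance, so the combined estimate is more reliable than either cue alone. A minimal Python sketch with hypothetical depth estimates (not data from the study):

    ```python
    import numpy as np

    def ml_combine(means, sds):
        """Maximum-likelihood linear cue combination for independent
        Gaussian cues: weights are inversely proportional to variance."""
        means, sds = np.asarray(means, float), np.asarray(sds, float)
        w = (1.0 / sds**2) / np.sum(1.0 / sds**2)
        combined_mean = np.sum(w * means)
        combined_sd = np.sqrt(1.0 / np.sum(1.0 / sds**2))
        return w, combined_mean, combined_sd

    # Hypothetical depth estimates (cm) from a luminance-defined edge and
    # an illusory boundary; the illusory cue is assumed twice as noisy.
    w, mu, sd = ml_combine(means=[10.0, 12.0], sds=[0.5, 1.0])
    print("weights:", w.round(2), "combined:", round(mu, 2), "+/-", round(sd, 3))
    # Underweighting the illusory cue relative to these optimal weights is
    # what the study attributes to the boundary's intrinsic ambiguity.
    ```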

  5. Strehl ratio: a tool for optimizing optical nulls and singularities.

    PubMed

    Hénault, François

    2015-07-01

    In this paper a set of radial and azimuthal phase functions are reviewed that have a null Strehl ratio, which is equivalent to generating a central extinction in the image plane of an optical system. The study is conducted in the framework of Fraunhofer scalar diffraction, and is oriented toward practical cases where optical nulls or singularities are produced by deformable mirrors or phase plates. The identified solutions reveal unexpected links with the zeros of type-J Bessel functions of integer order. They include linear azimuthal phase ramps giving birth to an optical vortex, azimuthally modulated phase functions, and circular phase gratings (CPGs). It is found in particular that the CPG radiometric efficiency could be significantly improved by the null Strehl ratio condition. Simple design rules for rescaling and combining the different phase functions are also defined. Finally, the described analytical solutions could also serve as starting points for an automated searching software tool.

  6. Time left in the mouse.

    PubMed

    Cordes, Sara; King, Adam Philip; Gallistel, C R

    2007-02-22

    Evidence suggests that the online combination of non-verbal magnitudes (durations, numerosities) is central to learning in both human and non-human animals [Gallistel, C.R., 1990. The Organization of Learning. MIT Press, Cambridge, MA]. The molecular basis of these computations, however, is an open question at this point. The current study provides the first direct test of temporal subtraction in a species in which the genetic code is available. In two experiments, mice were run in an adaptation of Gibbon and Church's [Gibbon, J., Church, R.M., 1981. Time left: linear versus logarithmic subjective time. J. Exp. Anal. Behav. 7, 87-107] time left paradigm in order to characterize typical responding in this task. Both experiments suggest that mice engaged in online subtraction of temporal values, although the generalization of a learned response rule to novel stimulus values resulted in slightly less systematic responding. Potential explanations for this pattern of results are discussed.

  7. Rough set classification based on quantum logic

    NASA Astrophysics Data System (ADS)

    Hassan, Yasser F.

    2017-11-01

    By combining the advantages of quantum computing and soft computing, the paper shows that rough sets can be used with quantum logic for classification and recognition systems. We suggest a new definition of rough set theory as a quantum logic theory. Rough approximations are essential elements in rough set theory; the quantum rough set model for set-valued data directly constructs set approximations based on a kind of quantum similarity relation presented here. Theoretical analyses demonstrate that the new quantum rough set model has a new type of decision rule with less redundancy, which can be used to give accurate classification using the principles of quantum superposition and non-linear quantum relations. To our knowledge, this is the first attempt to define rough sets in a quantum representation rather than in terms of logic or sets. Experiments on data sets have demonstrated that the proposed model is more accurate than traditional rough sets in finding optimal classifications.

  8. [An ultra-high-pressure liquid chromatography/linear ion trap-Orbitrap mass spectrometry method coupled with a diagnostic fragment ions-searching-based strategy for rapid identification and characterization of chemical components in Polygonum cuspidatum].

    PubMed

    Pan, Zhiran; Liang, Hailong; Liang, Chabhufi; Xu, Wen

    2015-01-01

    A method for the qualitative analysis of constituents in Polygonum cuspidatum by ultra-high-pressure liquid chromatography coupled with linear ion trap-Orbitrap mass spectrometry (UHPLC-LTQ-Orbitrap MS) has been established. The methanol extract of Polygonum cuspidatum was separated on a Waters UPLC C18 column using an acetonitrile-water (containing formic acid) eluting system and detected by an LTQ-Orbitrap hybrid mass spectrometer in negative ion mode. The targeted components were further fragmented in the LTQ, and high-accuracy data were acquired by the Orbitrap MS. The summarized fragmentation pathways of typical reference components and a diagnostic fragment ions-searching-based strategy were used for the detection and identification of the main phenolic components in Polygonum cuspidatum. Other clues, such as the nitrogen rule, the even-electron rule, the degree-of-unsaturation rule, and isotopic peak data, were included in the structural elucidation as well. The whole analytical procedure took less than 10 min, and more than 30 components were identified or tentatively identified. This method is helpful for further phytochemical research and quality control of Polygonum cuspidatum and related preparations.

  9. Data driven model generation based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus

    2010-05-01

    The simulation of discharges at a local gauge and the modeling of large-scale river catchments are central to estimation and decision tasks in hydrological research and in practical applications such as flood prediction and water resource management. However, modeling such processes with analytical or conceptual approaches is made difficult by both the complexity of the process relations and the heterogeneity of the processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can be derived automatically from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods, including Fuzzy logic and artificial neural networks (ANN), in a comprehensive and automated manner. Methods: We consider a closed concept for the data-driven development of hydrological models based on measured (experimental) data. The concept is centered on a Fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure such as R_i: IF q(t) = low AND ... THEN q(t+Δt) = a_i0 + a_i1*q(t) + a_i2*p(t-Δt_i1) + a_i3*p(t+Δt_i2) + .... The rule's premise part (IF) describes process states using the available process information, e.g., the actual outlet q(t) is low, where low is one of several Fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g., the actual outlet q(t) and previous and/or forecasted precipitation p(t ± Δt_ik). For river catchment modeling we also use head gauges and tributary and upriver gauges in the conclusion part, and we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1,...,N}, the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion parameters with respect to a defined rating function and the experimental data; to find A, we use, for example, a linear equation solver with an RMSE rating function. In practical process models, the number of Fuzzy sets and the corresponding number of rules is fairly low. Nevertheless, creating the optimal model requires some experience; we therefore improved this development step with methods for the automatic generation of Fuzzy sets, rules, and conclusions. The model quality depends to a great extent on the selection of the conclusion variables: the aim is that the variables with the most influence on the system response are considered and superfluous ones are neglected. First, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables, and a greedy algorithm selects a comprehensive set of dominant, uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g., Fuzzy C-means), and Fuzzy sets are derived from the cluster centers and outlines. The rule base is constructed automatically by permutation of the Fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated, the total coverage of the input space is tested iteratively with experimental data, rarely firing rules are combined, and coarse coverage of sensitive process states leads to refined Fuzzy sets and rules. Results: The described methods were implemented and integrated in a development system for process models. A series of models has already been built, e.g., for rainfall-runoff modeling and for flood prediction (up to 72 hours) in river catchments. These models required significantly less development effort and showed improved simulation results compared to conventional models. They can be used operationally, and a simulation takes only a few minutes on a standard PC, e.g., for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
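    As an illustration of how such a rule set produces a forecast, the following minimal Python sketch evaluates two Takagi-Sugeno-Kang rules and combines their linear conclusions by their premise memberships; the membership functions and conclusion parameters a_ij are hypothetical, not those of the paper:

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with corners a < b < c."""
        return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def tsk_predict(q_t, p_t):
        """Two-rule Takagi-Sugeno-Kang sketch:
           R1: IF q(t) is low  THEN q(t+dt) = a10 + a11*q(t) + a12*p(t)
           R2: IF q(t) is high THEN q(t+dt) = a20 + a21*q(t) + a22*p(t)
        The premise memberships weight the linear conclusions."""
        w = np.array([tri(q_t, -1.0, 0.0, 5.0),   # membership of "low"
                      tri(q_t, 0.0, 5.0, 11.0)])  # membership of "high"
        a = np.array([[0.2, 0.9, 0.3],            # hypothetical conclusion
                      [1.0, 0.8, 0.6]])           # parameters a_ij
        y = a[:, 0] + a[:, 1] * q_t + a[:, 2] * p_t
        return float(np.sum(w * y) / np.sum(w))

    # Forecast the next outlet value from the current outlet q(t) and
    # precipitation p(t); all numbers are illustrative.
    print(tsk_predict(q_t=2.0, p_t=1.5))  # ~2.87
    ```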

  10. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    PubMed

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
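    RAMSI's single-step MILP is beyond a short example, but the underlying chemical formula calculation can be sketched with a brute-force search under the same kind of linear constraints. The following Python example enumerates C/H/N/O formulas for a protonated ion; the element set, atom bounds, tolerance, and RDBE window are illustrative assumptions, not the paper's formulation:

    ```python
    from itertools import product

    # Monoisotopic masses (Da) of common elements in small metabolites.
    MASS = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}
    PROTON = 1.007276

    def formula_candidates(mz, tol_ppm=5.0, max_atoms=(30, 60, 5, 15)):
        """Brute-force formula search for a protonated ion [M+H]+.

        A simplified stand-in for the MILP: enumerate C/H/N/O counts,
        apply the mass tolerance, and use the rings-plus-double-bonds
        (degree-of-unsaturation) rule as a linear constraint.
        """
        target = mz - PROTON
        hits = []
        for c, h, n, o in product(*(range(m + 1) for m in max_atoms)):
            mass = (c * MASS["C"] + h * MASS["H"]
                    + n * MASS["N"] + o * MASS["O"])
            if abs(mass - target) / target * 1e6 > tol_ppm:
                continue
            rdbe = c - h / 2 + n / 2 + 1
            if 0 <= rdbe <= 20:
                hits.append((f"C{c}H{h}N{n}O{o}", round(mass, 5)))
        return hits

    print(formula_candidates(180.06339 + PROTON))  # glucose, C6H12O6
    ```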

  11. Combining Biomarkers Linearly and Nonlinearly for Classification Using the Area Under the ROC Curve

    PubMed Central

    Fong, Youyi; Yin, Shuxin; Huang, Ying

    2016-01-01

    In biomedical studies, it is often of interest to classify/predict a subject’s disease status based on a variety of biomarker measurements. A commonly used classification criterion is based on AUC - Area under the Receiver Operating Characteristic Curve. Many methods have been proposed to optimize approximated empirical AUC criteria, but there are two limitations to the existing methods. First, most methods are only designed to find the best linear combination of biomarkers, which may not perform well when there is strong nonlinearity in the data. Second, many existing linear combination methods use gradient-based algorithms to find the best marker combination, which often result in sub-optimal local solutions. In this paper, we address these two problems by proposing a new kernel-based AUC optimization method called Ramp AUC (RAUC). This method approximates the empirical AUC loss function with a ramp function, and finds the best combination by a difference of convex functions algorithm. We show that as a linear combination method, RAUC leads to a consistent and asymptotically normal estimator of the linear marker combination when the data is generated from a semiparametric generalized linear model, just as the Smoothed AUC method (SAUC). Through simulation studies and real data examples, we demonstrate that RAUC out-performs SAUC in finding the best linear marker combinations, and can successfully capture nonlinear pattern in the data to achieve better classification performance. We illustrate our method with a dataset from a recent HIV vaccine trial. PMID:27058981
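    The ramp idea can be sketched directly: replace the 0/1 pair-ranking indicator inside the empirical AUC with a clipped linear function. The parameterization below is an illustrative choice, not necessarily the exact loss used by RAUC:

    ```python
    import numpy as np

    def empirical_auc(scores_pos, scores_neg):
        """Empirical AUC: fraction of (positive, negative) pairs ranked
        correctly, with ties counted as half."""
        diff = scores_pos[:, None] - scores_neg[None, :]
        return np.mean((diff > 0) + 0.5 * (diff == 0))

    def ramp(u, s=1.0):
        """Ramp surrogate for the pair indicator: 0 below -s, 1 above 0,
        linear in between. Unlike a sigmoid it is exactly flat outside
        [-s, 0], which bounds the influence of any single pair."""
        return np.clip(u / s + 1.0, 0.0, 1.0)

    def ramp_auc(scores_pos, scores_neg, s=1.0):
        diff = scores_pos[:, None] - scores_neg[None, :]
        return np.mean(ramp(diff, s))

    rng = np.random.default_rng(2)
    pos = rng.normal(1.0, 1.0, 200)  # combined-marker scores, cases
    neg = rng.normal(0.0, 1.0, 300)  # combined-marker scores, controls
    print("empirical AUC:", round(empirical_auc(pos, neg), 3))
    print("ramp AUC:     ", round(ramp_auc(pos, neg), 3))
    ```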

  12. The Convallis Rule for Unsupervised Learning in Cortical Networks

    PubMed Central

    Yger, Pierre; Harris, Kenneth D.

    2013-01-01

    The phenomenology and cellular mechanisms of cortical synaptic plasticity are becoming known in increasing detail, but the computational principles by which cortical plasticity enables the development of sensory representations are unclear. Here we describe a framework for cortical synaptic plasticity termed the “Convallis rule”, mathematically derived from a principle of unsupervised learning via constrained optimization. Implementation of the rule caused a recurrent cortex-like network of simulated spiking neurons to develop rate representations of real-world speech stimuli, enabling classification by a downstream linear decoder. Applied to spike patterns used in in vitro plasticity experiments, the rule reproduced multiple results including and beyond STDP. However STDP alone produced poorer learning performance. The mathematical form of the rule is consistent with a dual coincidence detector mechanism that has been suggested by experiments in several synaptic classes of juvenile neocortex. Based on this confluence of normative, phenomenological, and mechanistic evidence, we suggest that the rule may approximate a fundamental computational principle of the neocortex. PMID:24204224

  13. Electronic energy transfer through non-adiabatic vibrational-electronic resonance. I. Theory for a dimer

    NASA Astrophysics Data System (ADS)

    Tiwari, Vivek; Peters, William K.; Jonas, David M.

    2017-10-01

    Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.

  14. Electronic energy transfer through non-adiabatic vibrational-electronic resonance. I. Theory for a dimer.

    PubMed

    Tiwari, Vivek; Peters, William K; Jonas, David M

    2017-10-21

    Non-adiabatic vibrational-electronic resonance in the excited electronic states of natural photosynthetic antennas drastically alters the adiabatic framework, in which electronic energy transfer has been conventionally studied, and suggests the possibility of exploiting non-adiabatic dynamics for directed energy transfer. Here, a generalized dimer model incorporates asymmetries between pigments, coupling to the environment, and the doubly excited state relevant for nonlinear spectroscopy. For this generalized dimer model, the vibrational tuning vector that drives energy transfer is derived and connected to decoherence between singly excited states. A correlation vector is connected to decoherence between the ground state and the doubly excited state. Optical decoherence between the ground and singly excited states involves linear combinations of the correlation and tuning vectors. Excitonic coupling modifies the tuning vector. The correlation and tuning vectors are not always orthogonal, and both can be asymmetric under pigment exchange, which affects energy transfer. For equal pigment vibrational frequencies, the nonadiabatic tuning vector becomes an anti-correlated delocalized linear combination of intramolecular vibrations of the two pigments, and the nonadiabatic energy transfer dynamics become separable. With exchange symmetry, the correlation and tuning vectors become delocalized intramolecular vibrations that are symmetric and antisymmetric under pigment exchange. Diabatic criteria for vibrational-excitonic resonance demonstrate that anti-correlated vibrations increase the range and speed of vibronically resonant energy transfer (the Golden Rule rate is a factor of 2 faster). A partial trace analysis shows that vibronic decoherence for a vibrational-excitonic resonance between two excitons is slower than their purely excitonic decoherence.

  15. 78 FR 4307 - Current Good Manufacturing Practice Requirements for Combination Products

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-22

    .... Rationale for the Rulemaking B. The Proposed Rule C. The Final Rule II. Comments on the Proposed Rule A. General B. What is the scope of this subpart? (Sec. 4.1) C. How does FDA define key terms and phrases in... Act (the PHS Act) (42 U.S.C. 262). All biological products regulated under the PHS Act meet the...

  16. 33 CFR 83.25 - Sailing vessels underway and vessels under oars (Rule 25).

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... exhibit: (1) Sidelights; and (2) A sternlight. (b) Sailing vessels of less than 20 meters in length. In a... with the combined lantern permitted by paragraph (b) of this Rule. (d) Sailing vessels of less than 7... practicable, exhibit the lights prescribed in paragraph (a) or (b) of this Rule, but if she does not, she...

  17. 33 CFR 83.25 - Sailing vessels underway and vessels under oars (Rule 25).

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... exhibit: (1) Sidelights; and (2) A sternlight. (b) Sailing vessels of less than 20 meters in length. In a... with the combined lantern permitted by paragraph (b) of this Rule. (d) Sailing vessels of less than 7... practicable, exhibit the lights prescribed in paragraph (a) or (b) of this Rule, but if she does not, she...

  18. 33 CFR 83.25 - Sailing vessels underway and vessels under oars (Rule 25).

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... exhibit: (1) Sidelights; and (2) A sternlight. (b) Sailing vessels of less than 20 meters in length. In a... with the combined lantern permitted by paragraph (b) of this Rule. (d) Sailing vessels of less than 7... practicable, exhibit the lights prescribed in paragraph (a) or (b) of this Rule, but if she does not, she...

  19. Prospective observational study in two Dutch hospitals to assess the performance of inflammatory plasma markers to determine disease severity of viral respiratory tract infections in children.

    PubMed

    Ahout, Inge M L; Brand, Kim H; Zomer, Aldert; van den Hurk, Wilhelma H; Schilders, Geurt; Brouwer, Marianne L; Neeleman, Chris; Groot, Ronald de; Ferwerda, Gerben

    2017-06-30

    Respiratory viruses causing lower respiratory tract infections (LRTIs) are a major cause of hospital admissions in children. Since the course of these infections is unpredictable, with potential fast deterioration into respiratory failure, infants are easily admitted to the hospital for observation. The aim of this study was to examine whether systemic inflammatory markers can be used to predict severity of disease in children with respiratory viral infections. Blood and nasopharyngeal washings from children <3 years of age with viral LRTI attending a hospital were collected within 24 hours (acute) and after 4-6 weeks (recovery). Patients were assigned to a mild (observation only), moderate (supplemental oxygen and/or nasogastric feeding) or severe (mechanical ventilation) group. Linear regression analysis was used to design a prediction rule using plasma levels of C reactive protein (CRP), serum amyloid A (SAA), pentraxin 3 (PTX3), serum amyloid P component and properdin. This rule was tested in a validation cohort. One hundred and four children (52% male) were included. A combination of CRP, SAA, PTX3 and properdin was a better indicator of severe disease than any of the individual markers or age (69% sensitivity (95% CI 50 to 83), 90% specificity (95% CI 80 to 96)). Validation in 141 patients resulted in 71% sensitivity (95% CI 53 to 85), 87% specificity (95% CI 79 to 92), negative predictive value of 64% (95% CI 47 to 78) and positive predictive value of 90% (95% CI 82 to 95). The prediction rule was not able to identify patients with a mild course of disease. A combination of CRP, SAA, PTX3 and properdin was able to identify children with a severe course of viral LRTI disease, even in children under 2 months of age. To assess the true impact on clinical management, these results should be validated in a prospective randomised controlled study. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  20. Anytime synthetic projection: Maximizing the probability of goal satisfaction

    NASA Technical Reports Server (NTRS)

    Drummond, Mark; Bresina, John L.

    1990-01-01

    A projection algorithm is presented for incremental control rule synthesis. The algorithm synthesizes an initial set of goal achieving control rules using a combination of situation probability and estimated remaining work as a search heuristic. This set of control rules has a certain probability of satisfying the given goal. The probability is incrementally increased by synthesizing additional control rules to handle 'error' situations the execution system is likely to encounter when following the initial control rules. By using situation probabilities, the algorithm achieves a computationally effective balance between the limited robustness of triangle tables and the absolute robustness of universal plans.

  1. Approaches to Streamline Air Permitting for Combined Heat and Power: Permits by Rule and General Permits

    EPA Pesticide Factsheets

    This factsheet provides information about permit by rule (PBR) and general permit (GP) processes, including the factors that contributed to their development and lessons learned from their implementation.

  2. Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution

    NASA Astrophysics Data System (ADS)

    Holmes, Caroline M.; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya

    2017-10-01

    We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.

  3. Luria-Delbrück, revisited: the classic experiment does not rule out Lamarckian evolution.

    PubMed

    Holmes, Caroline M; Ghafari, Mahan; Abbas, Anzar; Saravanan, Varun; Nemenman, Ilya

    2017-08-21

    We re-examined data from the classic Luria-Delbrück fluctuation experiment, which is often credited with establishing a Darwinian basis for evolution. We argue that, for the Lamarckian model of evolution to be ruled out by the experiment, the experiment must favor pure Darwinian evolution over both the Lamarckian model and a model that allows both Darwinian and Lamarckian mechanisms (as would happen for bacteria with CRISPR-Cas immunity). Analysis of the combined model was not performed in the original 1943 paper. The Luria-Delbrück paper also did not consider the possibility of neither model fitting the experiment. Using Bayesian model selection, we find that the Luria-Delbrück experiment, indeed, favors the Darwinian evolution over purely Lamarckian. However, our analysis does not rule out the combined model, and hence cannot rule out Lamarckian contributions to the evolutionary dynamics.
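    The statistical signature that separates the two pure models is worth making explicit: under Darwinian mutation, mutants arising early in growth found large "jackpot" clones, so mutant counts across parallel cultures are far more variable than Poisson, while under a purely Lamarckian model the counts are Poisson with variance roughly equal to the mean. A minimal simulation sketch (the growth scheme and rates are illustrative assumptions, and this is a variance comparison, not the paper's Bayesian model selection):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def fluctuation_test(n_cultures=500, generations=20, mu=1e-7):
        n_final = 2 ** generations
        # Darwinian model: mutations arise at random during growth, so a
        # mutant appearing at generation g grows into a clone of size
        # 2**(generations - g - 1) by plating time.
        darwin = np.zeros(n_cultures, dtype=np.int64)
        for g in range(generations):
            new_mutants = rng.poisson(mu * 2 ** g, n_cultures)
            darwin += new_mutants * 2 ** (generations - g - 1)
        # Lamarckian model: resistance is acquired only on exposure, so
        # counts are Poisson with the same mean across cultures (the rate
        # is chosen here to match the Darwinian mean).
        lamarck = rng.poisson(mu * n_final / 2 * generations, n_cultures)
        return darwin, lamarck

    d, l = fluctuation_test()
    print("Darwinian:  mean %.2f  variance %.2f" % (d.mean(), d.var()))
    print("Lamarckian: mean %.2f  variance %.2f" % (l.mean(), l.var()))
    # The Darwinian variance greatly exceeds the mean ("jackpot" cultures),
    # while the Lamarckian variance roughly equals it.
    ```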

  4. A COMPARISON OF THE EFFECTS OF BRIEF RULES, A TIMER, AND PREFERRED TOYS ON SELF-CONTROL

    PubMed Central

    Newquist, Matthew H; Dozier, Claudia L; Neidert, Pamela L

    2012-01-01

    Some children make impulsive choices (i.e., choose a small but immediate reinforcer over a large but delayed reinforcer). Previous research has shown that delay fading, providing an alternative activity during the delay, teaching participants to repeat a rule during the delay, combining delay fading with an alternative activity, and combining delay fading with a countdown timer are effective for increasing self-control (i.e., choosing the large but delayed reinforcer over the small but immediate reinforcer). The purpose of the current study was to compare the effects of various interventions in the absence of delay fading (i.e., providing brief rules, providing a countdown timer during the delay, or providing preferred toys during the delay) on self-control. Results suggested that providing brief rules or a countdown timer during the delay was ineffective for enhancing self-control. However, providing preferred toys during the delay effectively enhanced self-control. PMID:23060664

  5. The association rules search of Indonesian university graduate’s data using FP-growth algorithm

    NASA Astrophysics Data System (ADS)

    Faza, S.; Rahmat, R. F.; Nababan, E. B.; Arisandi, D.; Effendi, S.

    2018-02-01

    The variety of attributes in university graduate data makes it difficult for institutions to find combinations of attributes that emerge frequently and are strongly associated with one another. Association rule mining is a data mining technique for determining how one set of data affects another, making it possible to find such associations on a large scale. The Frequent Pattern-Growth (FP-Growth) algorithm is an association rule mining technique that determines frequent itemsets from an FP-Tree data structure. From this search of association rules in university graduate data, it can be concluded that the most strongly associated combination of attributes is: state-owned high school outside Medan, regular university entrance exam, GPA of 3.00 to 3.49, and study duration of over 4 years.
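    For context, the support and confidence measures that rank such rules are simple to state; the sketch below computes them for a hypothetical rule over toy records. FP-Growth itself is just an efficient way to enumerate the frequent itemsets that feed these measures:

    ```python
    from itertools import combinations

    # Hypothetical graduate records as attribute sets, loosely mirroring
    # the attributes reported in the paper.
    records = [
        {"state_school", "regular_exam", "gpa_3.00-3.49", "study>4y"},
        {"state_school", "regular_exam", "gpa_3.00-3.49", "study>4y"},
        {"private_school", "invitation", "gpa_3.50-4.00", "study<=4y"},
        {"state_school", "regular_exam", "gpa_2.50-2.99", "study>4y"},
        {"state_school", "regular_exam", "gpa_3.00-3.49", "study<=4y"},
    ]

    def support(itemset):
        """Fraction of records containing every item in the itemset."""
        return sum(itemset <= r for r in records) / len(records)

    def confidence(antecedent, consequent):
        """Support of the whole rule divided by support of its IF part."""
        return support(antecedent | consequent) / support(antecedent)

    rule_if = frozenset({"state_school", "regular_exam"})
    rule_then = frozenset({"study>4y"})
    print("support   :", support(rule_if | rule_then))   # 0.6
    print("confidence:", confidence(rule_if, rule_then)) # 0.75
    ```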

  6. A comparison of the effects of brief rules, a timer, and preferred toys on self-control.

    PubMed

    Newquist, Matthew H; Dozier, Claudia L; Neidert, Pamela L

    2012-01-01

    Some children make impulsive choices (i.e., choose a small but immediate reinforcer over a large but delayed reinforcer). Previous research has shown that delay fading, providing an alternative activity during the delay, teaching participants to repeat a rule during the delay, combining delay fading with an alternative activity, and combining delay fading with a countdown timer are effective for increasing self-control (i.e., choosing the large but delayed reinforcer over the small but immediate reinforcer). The purpose of the current study was to compare the effects of various interventions in the absence of delay fading (i.e., providing brief rules, providing a countdown timer during the delay, or providing preferred toys during the delay) on self-control. Results suggested that providing brief rules or a countdown timer during the delay was ineffective for enhancing self-control. However, providing preferred toys during the delay effectively enhanced self-control.

  7. A characterization of linearly repetitive cut and project sets

    NASA Astrophysics Data System (ADS)

    Haynes, Alan; Koivusalo, Henna; Walton, James

    2018-02-01

    For the development of a mathematical theory which can be used to rigorously investigate physical properties of quasicrystals, it is necessary to understand regularity of patterns in special classes of aperiodic point sets in Euclidean space. In one dimension, prototypical mathematical models for quasicrystals are provided by Sturmian sequences and by point sets generated by substitution rules. Regularity properties of such sets are well understood, thanks mostly to well known results by Morse and Hedlund, and physicists have used this understanding to study one dimensional random Schrödinger operators and lattice gas models. A key fact which plays an important role in these problems is the existence of a subadditive ergodic theorem, which is guaranteed when the corresponding point set is linearly repetitive. In this paper we extend the one-dimensional model to cut and project sets, which generalize Sturmian sequences in higher dimensions, and which are frequently used in mathematical and physical literature as models for higher dimensional quasicrystals. By using a combination of algebraic, geometric, and dynamical techniques, together with input from higher dimensional Diophantine approximation, we give a complete characterization of all linearly repetitive cut and project sets with cubical windows. We also prove that these are precisely the collection of such sets which satisfy subadditive ergodic theorems. The results are explicit enough to allow us to apply them to known classical models, and to construct linearly repetitive cut and project sets in all pairs of dimensions and codimensions in which they exist. Research supported by EPSRC grants EP/L001462, EP/J00149X, EP/M023540. HK also gratefully acknowledges the support of the Osk. Huttunen foundation.

  8. Linear solvation energy relationships (LSER): 'rules of thumb' for Vi/100, π*, βm, and αm estimation and use in aquatic toxicology

    USGS Publications Warehouse

    Hickey, James P.

    1996-01-01

    This chapter provides a listing of the increasing variety of organic moieties and heteroatom groups for which Linear Solvation Energy Relationship (LSER) values are available, and the LSER variable estimation rules. The listings include values for typical nitrogen-, sulfur- and phosphorus-containing moieties, and general organosilicon and organotin groups. The contributions of an ion pair situation to the LSER values are also offered in Table 1, allowing estimation of parameters for salts and zwitterions. The guidelines permit quick estimation of values for the four primary LSER variables Vi/100, π*, βm, and αm by summing the contributions from a compound's components. The use of the guidelines and Table 1 significantly simplifies computation of values for the LSER variables for most organic compounds likely to occur in the environment, including the larger compounds of environmental and biological interest.

  9. Imparting Motion to a Test Object Such as a Motor Vehicle in a Controlled Fashion

    NASA Technical Reports Server (NTRS)

    Southward, Stephen C. (Inventor); Reubush, Chandler (Inventor); Pittman, Bryan (Inventor); Roehrig, Kurt (Inventor); Gerard, Doug (Inventor)

    2014-01-01

    An apparatus imparts motion to a test object such as a motor vehicle in a controlled fashion. A base has mounted on it a linear electromagnetic motor having a first end and a second end, the first end being connected to the base. A pneumatic cylinder and piston combination has a first end and a second end, the first end connected to the base so that the pneumatic cylinder and piston combination is generally parallel with the linear electromagnetic motor. The second ends of the linear electromagnetic motor and the pneumatic cylinder and piston combination are commonly linked to a mount for the test object. A control system drives the pneumatic cylinder and piston combination to support the substantial static load of the test object, and drives the linear electromagnetic motor to impart controlled motion to the test object.

  10. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  11. Evidence Combination From an Evolutionary Game Theory Perspective

    PubMed Central

    Deng, Xinyang; Han, Deqiang; Dezert, Jean; Deng, Yong; Shyr, Yu

    2017-01-01

    Dempster-Shafer evidence theory is a primary methodology for multi-source information fusion because it is good at dealing with uncertain information. This theory provides a Dempster’s rule of combination to synthesize multiple evidences from various information sources. However, in some cases, counter-intuitive results may be obtained based on that combination rule. Numerous new or improved methods have been proposed to suppress these counter-intuitive results based on perspectives, such as minimizing the information loss or deviation. Inspired by evolutionary game theory, this paper considers a biological and evolutionary perspective to study the combination of evidences. An evolutionary combination rule (ECR) is proposed to help find the most biologically supported proposition in a multi-evidence system. Within the proposed ECR, we develop a Jaccard matrix game (JMG) to formalize the interaction between propositions in evidences, and utilize the replicator dynamics to mimick the evolution of propositions. Experimental results show that the proposed ECR can effectively suppress the counter-intuitive behaviors appeared in typical paradoxes of evidence theory, compared with many existing methods. Properties of the ECR, such as solution’s stability and convergence, have been mathematically proved as well. PMID:26285231
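
    A compact sketch of Dempster's rule itself (plain Python, mass functions as dicts from frozensets to masses) makes the counter-intuitive behavior the authors target easy to reproduce, here with Zadeh's classic example:

      from itertools import product

      def dempster_combine(m1, m2):
          # Dempster's rule: multiply masses, keep non-empty intersections,
          # renormalize by 1 - K where K is the total conflicting mass.
          combined, conflict = {}, 0.0
          for (a, wa), (b, wb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + wa * wb
              else:
                  conflict += wa * wb
          if conflict >= 1.0:
              raise ValueError("total conflict: combination undefined")
          return {s: w / (1.0 - conflict) for s, w in combined.items()}

      # Zadeh's paradox: two experts barely agree on B, yet B gets all the mass.
      m1 = {frozenset("A"): 0.99, frozenset("B"): 0.01}
      m2 = {frozenset("C"): 0.99, frozenset("B"): 0.01}
      print(dempster_combine(m1, m2))   # {frozenset({'B'}): 1.0}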

  12. Object Synthesis in Conway's Game of Life and Other Cellular Automata

    NASA Astrophysics Data System (ADS)

    Niemiec, Mark D.

    Of the very large number of cellular automata rules in existence, a relatively small number of rules may be considered interesting. Some of the features that make such rules interesting permit patterns to expand, contract, separate into multiple sub-patterns, or combine with other patterns. Such rules generally include still-lifes, oscillators, spaceships, spaceship guns, and puffer trains. Such structures can often be used to construct more complicated computational circuitry, and rules that contain them can often be shown to be computationally universal. Conway's Game of Life is one rule that has been well-studied for several decades, and has been shown to be very fruitful in this regard.

  13. Current standard rules of combined anteversion prevent prosthetic impingement but ignore osseous contact in total hip arthroplasty.

    PubMed

    Weber, Markus; Woerner, Michael; Craiovan, Benjamin; Voellner, Florian; Worlicek, Michael; Springorum, Hans-Robert; Grifka, Joachim; Renkawitz, Tobias

    2016-12-01

    In this prospective study of 135 patients undergoing cementless total hip arthroplasty (THA), we asked whether six current definitions of combined anteversion prevent impingement and increase post-operative patient-individual impingement-free range of motion (ROM). Implant position was measured by an independent, external institute on 3D-CT performed six weeks post-operatively. Post-operative ROM was calculated using a CT-based algorithm detecting osseous and/or prosthetic impingement by virtual hip movement. Additionally, clinical ROM was evaluated pre-operatively and one year post-operatively by a blinded observer. Combined component positions of cup and stem according to the definitions of Ranawat, Widmer, Dorr, Hisatome and Yoshimine inhibited prosthetic impingement in over 90 % of cases, while combined osseous and prosthetic impingement still occurred in over 40 % of cases. The recommendations of Jolles, Widmer, Dorr, Yoshimine and Hisatome enabled higher flexion (p ≤ 0.001) and internal rotation (p ≤ 0.006). Clinically, the anteversion rules of Widmer and Yoshimine provided statistically, but not clinically, relevant gains in internal rotation one year post-operatively (p ≤ 0.034). Standard rules of combined anteversion detect prosthetic impingement but fail to prevent combined osseous and prosthetic impingement in THA. Future models will have to account for the patient's individual anatomic situation to ensure impingement-free ROM.
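
    How such definitions are applied can be illustrated with a check against one commonly cited combined-anteversion formula attributed to Widmer (cup anteversion + 0.7 × stem antetorsion ≈ 37°); the target and tolerance values below are illustrative assumptions, not the paper's protocol or a clinical recommendation.

      def widmer_combined_anteversion(cup_deg, stem_deg, target=37.0, tol=5.0):
          # Combined anteversion per the commonly cited Widmer formula;
          # target and tol are illustrative, not clinical recommendations.
          combined = cup_deg + 0.7 * stem_deg
          return combined, abs(combined - target) <= tol

      print(widmer_combined_anteversion(20.0, 25.0))   # (37.5, True)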

  14. 78 FR 69167 - Self-Regulatory Organizations; ICE Clear Credit LLC; Order Approving Proposed Rule Change To...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-18

    ..., the Bolivian Republic of Venezuela, the Argentine Republic, the Republic of Turkey and the Russian... transactions, and ICC's proposal, in combination with its existing rules, policies, and procedures for clearing...

  15. Statistical Origin of the Meyer-Neldel Rule in Amorphous Semiconductor Thin Film Transistors

    NASA Astrophysics Data System (ADS)

    Kikuchi, Minoru

    1990-09-01

    The origin of the Meyer-Neldel (MN) rule [G0 ∝ exp(AEσ)] in the dc conductance of amorphous semiconductor thin-film transistors (TFT) is investigated based on the statistical model. We analyzed the temperature derivative of the band bending energy eVs(T) at the semiconductor interface as a function of Vs. It is shown that the condition for the validity of the rule, i.e., the linearity of the derivative d(eVs)/d(kT) in Vs, certainly holds as a natural consequence of the interplay between the steep tail states and the low gap density of states spectrum. An expression is derived which relates the parameter A in the rule to the gap states spectrum. Model calculations show a magnitude of A in fair agreement with the experimental observations. The effects of the Fermi level position and the magnitude of the midgap density of states are also discussed.
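
    The MN rule says ln G0 is linear in the activation energy Eσ, so the parameter A can be read off as a slope. A minimal fitting sketch (Python, with synthetic illustration data):

      import numpy as np

      E_sigma = np.array([0.3, 0.5, 0.7, 0.9])       # activation energies (eV)
      G0 = np.array([1e-2, 1e-1, 1e0, 1e1])          # synthetic MN-consistent prefactors
      A, lnG00 = np.polyfit(E_sigma, np.log(G0), 1)  # ln G0 = A * E_sigma + ln G00
      print(f"MN slope A = {A:.2f} eV^-1, G00 = {np.exp(lnG00):.2e}")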

  16. Adolescents as active agents in the socialization process: legitimacy of parental authority and obligation to obey as predictors of obedience.

    PubMed

    Darling, Nancy; Cumsille, Patricio; Martínez, M Loreto

    2007-04-01

    Adolescents' agreement with parental standards and beliefs about the legitimacy of parental authority and their own obligation to obey were used to predict adolescents' obedience, controlling for parental monitoring, rules, and rule enforcement. Hierarchical linear models were used to predict both between-adolescent and within-adolescent, issue-specific differences in obedience in a sample of 703 Chilean adolescents (M age=15.0 years). Adolescents' global agreement with parents and global beliefs about their obligation to obey predicted between-adolescent obedience, controlling for parental monitoring, age, and gender. Adolescents' issue-specific agreement, legitimacy beliefs, and obligation to obey predicted issue-specific obedience, controlling for rules and parents' reports of rule enforcement. The potential of examining adolescents' agreement and beliefs about authority as a key link between parenting practices and adolescents' decisions to obey is discussed.

  17. To the Federal Trade Commission in the Matter of a Trade Regulation Rule on Over-the-Counter Drug Advertising.

    ERIC Educational Resources Information Center

    Council on Children, Media, and Merchandising, Washington, DC.

    This report supports amending the proposed Federal Trade Commission (FTC) Rule on Over-the Counter (OTC) Drug Advertising to insure better protection for children, illiterate populations, the deaf and the blind, from advertising on the air-waves. Several points are addressed: (1) the difficulties of combining the rule making schedules of the Food…

  18. Design rules for successful governmental payments for ecosystem services: Taking agri-environmental measures in Germany as an example.

    PubMed

    Meyer, Claas; Reutter, Michaela; Matzdorf, Bettina; Sattler, Claudia; Schomers, Sarah

    2015-07-01

    In recent years, increasing attention has been paid to financial environmental policy instruments that have played important roles in solving agri-environmental problems throughout the world, particularly in the European Union and the United States. The ample and increasing literature on Payments for Ecosystem Services (PES) and agri-environmental measures (AEMs), generally understood as governmental PES, shows that certain single design rules may have an impact on the success of a particular measure. Based on this research, we focused on the interplay of several design rules and conducted a comparative analysis of AEMs' institutional arrangements by examining 49 German cases. We analyzed the effects of the design rules and certain rule combinations on the success of AEMs. Compliance and noncompliance with the hypothesized design rules and the success of the AEMs were surveyed by questioning the responsible agricultural administration and the AEMs' mid-term evaluators. The different rules were evaluated in regard to their necessity and sufficiency for success using Qualitative Comparative Analysis (QCA). Our results show that combinations of certain design rules such as environmental goal targeting and area targeting conditioned the success of the AEMs. Hence, we generalize design principles for AEMs and discuss implications for the general advancement of ecosystem services and the PES approach in agri-environmental policies. Moreover, we highlight the relevance of the results for governmental PES program research and design worldwide. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Structural Dynamic Analyses And Test Predictions For Spacecraft Structures With Non-Linearities

    NASA Astrophysics Data System (ADS)

    Vergniaud, Jean-Baptiste; Soula, Laurent; Newerla, Alfred

    2012-07-01

    The overall objective of the mechanical development and verification process is to ensure that the spacecraft structure is able to sustain the mechanical environments encountered during launch. In general, spacecraft structures are a priori assumed to behave linearly, i.e. the responses to a static load or dynamic excitation will increase or decrease proportionally to the amplitude of the load or excitation induced. However, past experience has shown that various non-linearities may exist in spacecraft structures, and the consequences of their dynamic effects can significantly affect the development and verification process. Current processes are mainly adapted to linear spacecraft structural behaviour. No clear rules exist for dealing with major structural non-linearities; they are handled outside the process by individual analysis and margin policy, and by post-test analyses to justify the coupled loads analysis (CLA) coverage. Non-linearities primarily affect the current spacecraft development and verification process in two respects. First, prediction of flight loads by launcher/satellite coupled loads analyses: only linear satellite models are delivered for performing CLA, and no well-established rules exist for properly linearizing a model when non-linearities are present. The potential impact of the linearization on the results of the CLA has not yet been properly analyzed, so it is difficult to guarantee that CLA results will cover actual flight levels. Second, management of satellite verification tests: the CLA results generated with a linear satellite FEM are assumed to be flight representative. If internal non-linearities are present in the tested satellite, it may be difficult to determine which input level must be applied to cover satellite internal loads. The non-linear behaviour can also disturb the shaker control, putting the satellite at risk by potentially imposing excessive levels. This paper presents the results of a test campaign performed in the frame of an ESA TRP study [1]. A breadboard including typical non-linearities was designed, manufactured and tested through a typical spacecraft dynamic test campaign. The study demonstrated the capability to perform non-linear dynamic test predictions on a flight-representative spacecraft, the good correlation of test results with Finite Element Model (FEM) predictions, and the possibility of identifying modal behaviour and characterizing non-linearities from test results. As a synthesis of this study, overall guidelines were derived for the mechanical verification process to improve the level of expertise on tests involving spacecraft with non-linearities.

  20. REACH-ER: a tool to evaluate river basin remediation measures for contaminants at the catchment scale

    NASA Astrophysics Data System (ADS)

    van Griensven, Ann; Haest, Pieter Jan; Broekx, Steven; Seuntjens, Piet; Campling, Paul; Ducos, Geraldine; Blaha, Ludek; Slobodnik, Jaroslav

    2010-05-01

    The European Union (EU) adopted the Water Framework Directive (WFD) in 2000, requiring that all aquatic ecosystems meet 'good status' by 2015. However, it is a major challenge for river basin managers to meet this requirement in river basins with a high population density as well as intensive agricultural and industrial activities. The EU-financed AQUAREHAB project (FP7) specifically examines the ecological and economic impact of innovative rehabilitation technologies for multi-pressured degraded water bodies. For this purpose, a generic collaborative management tool, 'REACH-ER', is being developed that can be used by stakeholders, citizens and water managers to evaluate the ecological and economic effects of different remedial actions on water bodies. The tool is built using databases from large-scale models simulating the hydrological dynamics of the river basin and sub-basins, the costs of the measures, and the effectiveness of the measures in terms of ecological impact. Knowledge rules are used to describe the relationships between these data in order to compute flux concentrations or the effectiveness of measures. The management tool specifically addresses nitrate pollution and pollution by organic micropollutants. Detailed models are also used to predict the effectiveness of site remedial technologies using readily available global data. Rules describing ecological impacts are derived from ecotoxicological data for (mixtures of) specific contaminants (msPAF) and from ecological indices relating effects to the presence of certain contaminants. Rules describing the cost-effectiveness of measures are derived from linear programming models identifying the least-cost combination of abatement measures to satisfy multi-pollutant reduction targets, and from multi-criteria analysis.

  1. The strainrange conversion principle for treating cumulative fatigue damage in the creep range

    NASA Technical Reports Server (NTRS)

    Manson, S. S.

    1983-01-01

    A formula is derived for combining the effects of successive hysteresis loops in the creep range of materials when one loop has excess tensile creep while the other contains excess compressive creep. The resultant effect resembles that of single loops involving balanced tensile and compressive creep. The attempt to use the Interaction Damage Rule as a tool for combining loops of unequal size and complex strainrange content has led to important new concepts useful in future studies of creep-fatigue. It turns out that the Interaction Damage Rule is basically an expression of how a set of hysteresis loops involving only single generic strains can combine to produce the same micromechanistic damage as the loop containing the combined strainranges which it analyzes. Making use of the underlying concept of Strainrange Partitioning, that only the strainrange content of a hysteresis loop governs fatigue life and not the order in which strainranges are introduced, a rational derivation of the Interaction Damage Rule is provided, showing also how it can effectively be used to synthesize independent loops and determine both damaging and healing effects.

  2. Gorilla and Orangutan Brains Conform to the Primate Cellular Scaling Rules: Implications for Human Evolution

    PubMed Central

    Herculano-Houzel, Suzana; Kaas, Jon H.

    2011-01-01

    Gorillas and orangutans are primates at least as large as humans, but their brains amount to about one third of the size of the human brain. This discrepancy has been used as evidence that the human brain is about 3 times larger than it should be for a primate species of its body size. In contrast to the view that the human brain is special in its size, we have suggested that it is the great apes that might have evolved bodies that are unusually large, on the basis of our recent finding that the cellular composition of the human brain matches that expected for a primate brain of its size, making the human brain a linearly scaled-up primate brain in its number of cells. To investigate whether the brain of great apes also conforms to the primate cellular scaling rules identified previously, we determine the numbers of neuronal and other cells that compose the orangutan and gorilla cerebella, use these numbers to calculate the size of the brain and of the cerebral cortex expected for these species, and show that these match the sizes described in the literature. Our results suggest that the brains of great apes also scale linearly in their numbers of neurons like other primate brains, including humans. The conformity of great apes and humans to the linear cellular scaling rules that apply to other primates that diverged earlier in primate evolution indicates that prehistoric Homo species as well as other hominins must have had brains that conformed to the same scaling rules, irrespective of their body size. We then used those scaling rules and published estimated brain volumes for various hominin species to predict the numbers of neurons that composed their brains. We predict that Homo heidelbergensis and Homo neanderthalensis had brains with approximately 80 billion neurons, within the range of variation found in modern Homo sapiens. We propose that while the cellular scaling rules that apply to the primate brain have remained stable in hominin evolution (since they apply to simians, great apes and modern humans alike), the Colobinae and Pongidae lineages favored marked increases in body size rather than brain size from the common ancestor with the Homo lineage, while the Homo lineage seems to have favored a large brain instead of a large body, possibly due to the metabolic limitations to having both. PMID:21228547

  3. Gorilla and orangutan brains conform to the primate cellular scaling rules: implications for human evolution.

    PubMed

    Herculano-Houzel, Suzana; Kaas, Jon H

    2011-01-01

    Gorillas and orangutans are primates at least as large as humans, but their brains amount to about one third of the size of the human brain. This discrepancy has been used as evidence that the human brain is about 3 times larger than it should be for a primate species of its body size. In contrast to the view that the human brain is special in its size, we have suggested that it is the great apes that might have evolved bodies that are unusually large, on the basis of our recent finding that the cellular composition of the human brain matches that expected for a primate brain of its size, making the human brain a linearly scaled-up primate brain in its number of cells. To investigate whether the brain of great apes also conforms to the primate cellular scaling rules identified previously, we determine the numbers of neuronal and other cells that compose the orangutan and gorilla cerebella, use these numbers to calculate the size of the brain and of the cerebral cortex expected for these species, and show that these match the sizes described in the literature. Our results suggest that the brains of great apes also scale linearly in their numbers of neurons like other primate brains, including humans. The conformity of great apes and humans to the linear cellular scaling rules that apply to other primates that diverged earlier in primate evolution indicates that prehistoric Homo species as well as other hominins must have had brains that conformed to the same scaling rules, irrespective of their body size. We then used those scaling rules and published estimated brain volumes for various hominin species to predict the numbers of neurons that composed their brains. We predict that Homo heidelbergensis and Homo neanderthalensis had brains with approximately 80 billion neurons, within the range of variation found in modern Homo sapiens. We propose that while the cellular scaling rules that apply to the primate brain have remained stable in hominin evolution (since they apply to simians, great apes and modern humans alike), the Colobinae and Pongidae lineages favored marked increases in body size rather than brain size from the common ancestor with the Homo lineage, while the Homo lineage seems to have favored a large brain instead of a large body, possibly due to the metabolic limitations to having both. Copyright © 2011 S. Karger AG, Basel.

  4. Measuring uncertainty by extracting fuzzy rules using rough sets and extracting fuzzy rules under uncertainty and measuring definability using rough sets

    NASA Technical Reports Server (NTRS)

    Worm, Jeffrey A.; Culas, Donald E.

    1991-01-01

    Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. This paper examines the concepts of statistical analysis, the Dempster-Shafer theory, rough set theory, and fuzzy set theory to solve this problem. The fundamentals of these theories are combined to provide a candidate optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly we believe each rule is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of its fuzzy attributes is studied.

  5. Intelligent query by humming system based on score level fusion of multiple classifiers

    NASA Astrophysics Data System (ADS)

    Pyo Nam, Gi; Thu Trang Luong, Thi; Ha Nam, Hyun; Ryoung Park, Kang; Park, Sung-Joo

    2011-12-01

    Recently, the need has increased for content-based music retrieval that can return results even if a user does not know information such as the title or singer. Query-by-humming (QBH) systems have been introduced to address this need, as they allow the user to simply hum snatches of the tune to find the right song. Even though there have been many studies on QBH, few have combined multiple classifiers based on various fusion methods. Here we propose a new QBH system based on the score-level fusion of multiple classifiers. This research is novel in the following three respects: three local classifiers [quantized binary (QB) code-based linear scaling (LS), pitch-based dynamic time warping (DTW), and LS] are employed; local maximum and minimum point-based LS and pitch distribution feature-based LS are used as global classifiers; and the combination of local and global classifiers based on score-level fusion by the PRODUCT rule is used to achieve enhanced matching accuracy. Experimental results with the 2006 MIREX QBSH and 2009 MIR-QBSH corpus databases show that the performance of the proposed method is better than that of single classifiers and other fusion methods.
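
    Score-level fusion by the PRODUCT rule is simple to state in code: per-candidate match scores from the individual classifiers are multiplied. A minimal sketch with synthetic scores, assuming scores are already normalized to (0, 1]:

      import numpy as np

      def product_rule_fusion(score_lists):
          # Multiply per-classifier scores for each candidate song.
          fused = np.ones_like(score_lists[0])
          for s in score_lists:
              fused = fused * s
          return fused

      rng = np.random.default_rng(1)
      scores = [rng.uniform(0.1, 1.0, size=4) for _ in range(5)]  # 5 classifiers, 4 songs
      print("best match:", int(np.argmax(product_rule_fusion(scores))))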

  6. One Shot Detection with Laplacian Object and Fast Matrix Cosine Similarity.

    PubMed

    Biswas, Sujoy Kumar; Milanfar, Peyman

    2016-03-01

    One shot, generic object detection involves searching for a single query object in a larger target image. Relevant approaches have benefited from features that typically model the local similarity patterns. In this paper, we combine local similarity (encoded by local descriptors) with a global context (i.e., a graph structure) of pairwise affinities among the local descriptors, embedding the query descriptors into a low dimensional but discriminatory subspace. Unlike principal components that preserve global structure of feature space, we actually seek a linear approximation to the Laplacian eigenmap that permits us a locality preserving embedding of high dimensional region descriptors. Our second contribution is an accelerated but exact computation of matrix cosine similarity as the decision rule for detection, obviating the computationally expensive sliding window search. We leverage the power of Fourier transform combined with integral image to achieve superior runtime efficiency that allows us to test multiple hypotheses (for pose estimation) within a reasonably short time. Our approach to one shot detection is training-free, and experiments on the standard data sets confirm the efficacy of our model. Besides, low computation cost of the proposed (codebook-free) object detector facilitates rather straightforward query detection in large data sets including movie videos.
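
    The decision rule named here, matrix cosine similarity, is the Frobenius inner product of two feature matrices normalized by their Frobenius norms. A minimal sketch (Python; the paper's acceleration via Fourier transforms and integral images is not reproduced):

      import numpy as np

      def matrix_cosine_similarity(A, B):
          # Frobenius inner product over the product of Frobenius norms.
          return np.sum(A * B) / (np.linalg.norm(A) * np.linalg.norm(B))

      A = np.random.default_rng(0).normal(size=(8, 8))
      print(matrix_cosine_similarity(A, 2.0 * A))   # 1.0: invariant to positive scaling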

  7. A path planning method for robot end effector motion using the curvature theory of the ruled surfaces

    NASA Astrophysics Data System (ADS)

    Güler, Fatma; Kasap, Emin

    Using the curvature theory of ruled surfaces, a technique for robot trajectory planning is presented. This technique enables the calculation of the robot's next path. The positional variation of the Tool Center Point (TCP), the linear velocity, and the angular velocity are required in the work area of the robot. In some circumstances a planned trajectory may not be physically achievable, and a re-computation of the robot trajectory might be necessary. This technique is suitable for such re-computation. We obtain different robot trajectories, which change depending on the Darboux angle function, and define a family of trajectory ruled surfaces with a common trajectory curve using the rotation trihedron. The motion of the robot end effector is also illustrated with examples.
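
    A ruled surface is swept by moving a straight line (the ruling) along a base curve, r(t, u) = c(t) + u·d(t). A minimal sampling sketch (Python/NumPy, with an illustrative helical directrix rather than anything from the paper):

      import numpy as np

      def ruled_surface(c, d, ts, us):
          # Sample r(t, u) = c(t) + u * d(t) on a (t, u) grid.
          return np.array([[c(t) + u * d(t) for u in us] for t in ts])

      c = lambda t: np.array([np.cos(t), np.sin(t), 0.3 * t])   # helical directrix
      d = lambda t: np.array([-np.cos(t), -np.sin(t), 0.0])     # ruling toward the axis
      pts = ruled_surface(c, d, np.linspace(0, 2 * np.pi, 60), np.linspace(0, 1, 10))
      print(pts.shape)   # (60, 10, 3) grid of surface points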

  8. Structure and Reversibility of 2D von Neumann Cellular Automata Over Triangular Lattice

    NASA Astrophysics Data System (ADS)

    Uguz, Selman; Redjepov, Shovkat; Acar, Ecem; Akin, Hasan

    2017-06-01

    Even though the fundamental structure of cellular automata (CA) is a discrete, special model, the global behavior over many iterations and on large scales can approach that of a nearly continuous model system. CA theory is a rich and useful class of dynamical models that focuses on local information being relayed to neighboring cells to produce global CA behaviors. The mathematical formulation of the basic model makes the values of the CA structure computable. After modeling the CA structure, an important problem is to be able to move forwards and backwards on the CA in order to understand its behavior in more elegant ways. A notable case is when the CA is reversible. In this paper, we investigate the structure and the reversibility of two-dimensional (2D) finite, linear, triangular von Neumann CA with the null boundary condition, considered over the ternary field ℤ3 (i.e. 3-state). We obtain their transition rule matrices for each special case. For the given triangular information (transition) rule matrices, we prove which triangular linear 2D von Neumann CAs are reversible and which are not. The reversibility of 2D CA is known to be a generally challenging problem. In the present study, the reversibility problem of 2D triangular, linear von Neumann CA with null boundary is resolved completely over the ternary field. As far as we know, there is no study of the structure and reversibility of 2D linear von Neumann CA on a triangular lattice in the literature. Because the main CA structures are simple enough to investigate mathematically, yet complex enough to produce chaotic systems, it is believed that the present construction can be applied to many areas related to these CA using other transition rules.
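
    The reversibility question for a linear CA reduces to linear algebra: a rule matrix T over ℤ3 is invertible, and the CA reversible, exactly when det(T) is nonzero mod 3. A small sketch (Python with SymPy; the 4×4 matrix is a made-up toy, not one of the paper's rule matrices):

      from sympy import Matrix

      T = Matrix([[1, 1, 0, 0],
                  [1, 1, 1, 0],
                  [0, 1, 1, 1],
                  [0, 0, 1, 1]])   # toy transition rule matrix

      reversible = int(T.det()) % 3 != 0
      print("reversible over Z3:", reversible)
      if reversible:
          print(T.inv_mod(3))      # rule matrix of the inverse CA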

  9. A generalized linear integrate-and-fire neural model produces diverse spiking behaviors.

    PubMed

    Mihalaş, Stefan; Niebur, Ernst

    2009-03-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model's rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation.
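
    A minimal sketch of this model class, using illustrative parameter values: linear sub-threshold dynamics for the membrane potential and an adapting threshold, plus update rules applied when the potential crosses threshold.

      def glif(T=200.0, dt=0.1, I=1.6, tau_v=10.0, tau_th=50.0,
               v_rest=0.0, th_inf=1.0, v_reset=0.0, th_jump=0.4):
          # Linear membrane and threshold dynamics; discrete update rules at spikes.
          v, th, spike_times = v_rest, th_inf, []
          for step in range(int(T / dt)):
              v += dt * ((v_rest - v) + I) / tau_v      # leaky integration of input I
              th += dt * (th_inf - th) / tau_th         # threshold relaxes to baseline
              if v >= th:                               # spike: apply update rules
                  spike_times.append(step * dt)
                  v, th = v_reset, th + th_jump         # reset, raise threshold
          return spike_times

      spikes = glif()
      isis = [b - a for a, b in zip(spikes, spikes[1:])]
      print(len(spikes), "spikes; ISIs grow:", isis[0], "->", isis[-1])  # adaptation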

  10. A Generalized Linear Integrate-and-Fire Neural Model Produces Diverse Spiking Behaviors

    PubMed Central

    Mihalaş, Ştefan; Niebur, Ernst

    2010-01-01

    For simulations of neural networks, there is a trade-off between the size of the network that can be simulated and the complexity of the model used for individual neurons. In this study, we describe a generalization of the leaky integrate-and-fire model that produces a wide variety of spiking behaviors while still being analytically solvable between firings. For different parameter values, the model produces spiking or bursting, tonic, phasic or adapting responses, depolarizing or hyperpolarizing after potentials and so forth. The model consists of a diagonalizable set of linear differential equations describing the time evolution of membrane potential, a variable threshold, and an arbitrary number of firing-induced currents. Each of these variables is modified by an update rule when the potential reaches threshold. The variables used are intuitive and have biological significance. The model’s rich behavior does not come from the differential equations, which are linear, but rather from complex update rules. This single-neuron model can be implemented using algorithms similar to the standard integrate-and-fire model. It is a natural match with event-driven algorithms for which the firing times are obtained as a solution of a polynomial equation. PMID:18928368

  11. Evaluation of empirical rule of linearly correlated peptide selection (ERLPS) for proteotypic peptide-based quantitative proteomics.

    PubMed

    Liu, Kehui; Zhang, Jiyang; Fu, Bin; Xie, Hongwei; Wang, Yingchun; Qian, Xiaohong

    2014-07-01

    Precise protein quantification is essential in comparative proteomics. Currently, quantification bias is inevitable when using a proteotypic peptide-based quantitative proteomics strategy because of differences in peptide measurability. To improve quantification accuracy, we proposed an "empirical rule for linearly correlated peptide selection (ERLPS)" in quantitative proteomics in our previous work. However, a systematic evaluation of the general application of ERLPS in quantitative proteomics under diverse experimental conditions needed to be conducted. In this study, the practical workflow of ERLPS is explicitly illustrated; different experimental variables, such as MS systems, sample complexities, sample preparations, elution gradients, matrix effects, loading amounts, and other factors, were comprehensively investigated to evaluate the applicability, reproducibility, and transferability of ERLPS. The results demonstrated that ERLPS was highly reproducible and transferable within appropriate loading amounts, and that linearly correlated response peptides should be selected for each specific experiment. ERLPS was applied to proteome samples from yeast to mouse and human, and to quantitative methods from label-free to 18O/16O-labeled and SILAC analysis, and enabled accurate measurements for all proteotypic peptide-based quantitative proteomics over a large dynamic range. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
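
    The core idea, selecting peptides whose responses are linearly correlated with loading, can be sketched as follows (Python; the Pearson threshold and data are illustrative assumptions, not the published ERLPS criteria):

      import numpy as np

      def select_linear_peptides(intensities, loadings, r_min=0.99):
          # Keep peptides whose intensities track the loaded amount linearly.
          return [pep for pep, y in intensities.items()
                  if np.corrcoef(loadings, y)[0, 1] >= r_min]

      loadings = np.array([1.0, 2.0, 4.0, 8.0])
      intensities = {"PEPTIDEA": np.array([10.0, 21.0, 39.0, 82.0]),  # ~linear: kept
                     "PEPTIDEB": np.array([10.0, 12.0, 11.0, 13.0])}  # flat: rejected
      print(select_linear_peptides(intensities, loadings))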

  12. Life insurance risk assessment using a fuzzy logic expert system

    NASA Technical Reports Server (NTRS)

    Carreno, Luis A.; Steel, Roy A.

    1992-01-01

    In this paper, we present a knowledge based system that combines fuzzy processing with rule-based processing to form an improved decision aid for evaluating risk for life insurance. This application illustrates the use of FuzzyCLIPS to build a knowledge based decision support system possessing fuzzy components to improve user interactions and KBS performance. The results employing FuzzyCLIPS are compared with the results obtained from the solution of the problem using traditional numerical equations. The design of the fuzzy solution consists of a CLIPS rule-based system for some factors combined with fuzzy logic rules for others. This paper describes the problem, proposes a solution, presents the results, and provides a sample output of the software product.
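
    The flavor of such a fuzzy component can be conveyed in a few lines of plain Python: triangular membership functions fuzzify crisp inputs, and a rule combines them with min acting as AND (the variable names and breakpoints are hypothetical, not the paper's knowledge base):

      def tri(x, a, b, c):
          # Triangular membership: rises from a, peaks at b, falls to c.
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

      bmi, bp = 31.0, 150.0                      # hypothetical applicant data
      high_bmi = tri(bmi, 25.0, 35.0, 45.0)
      high_bp = tri(bp, 120.0, 160.0, 200.0)

      # Fuzzy rule: IF bmi is high AND bp is high THEN risk is elevated.
      elevated_risk = min(high_bmi, high_bp)     # min implements fuzzy AND
      print(f"elevated-risk degree: {elevated_risk:.2f}")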

  13. The effect of multiple primary rules on population-based cancer survival

    PubMed Central

    Weir, Hannah K.; Johnson, Christopher J.; Thompson, Trevor D.

    2015-01-01

    Purpose: Different rules for registering multiple primary (MP) cancers are used by cancer registries throughout the world, making international data comparisons difficult. This study evaluates the effect of Surveillance, Epidemiology, and End Results (SEER) and International Association of Cancer Registries (IACR) MP rules on population-based cancer survival estimates. Methods: Data from five US states and six metropolitan area cancer registries participating in the SEER Program were used to estimate age-standardized relative survival (RS%) for first cancers-only and all first cancers matching the selection criteria according to SEER and IACR MP rules for all cancer sites combined and for the top 25 cancer site groups among men and women. Results: During 1995–2008, the percentage of MP cancers (all sites, both sexes) increased 25.4 % by using SEER rules (from 14.6 to 18.4 %) and 20.1 % by using IACR rules (from 13.2 to 15.8 %). More MP cancers were registered among females than among males, and SEER rules registered more MP cancers than IACR rules (15.8 vs. 14.4 % among males; 17.2 vs. 14.5 % among females). The top 3 cancer sites with the largest differences were melanoma (5.8 %), urinary bladder (3.5 %), and kidney and renal pelvis (2.9 %) among males, and breast (5.9 %), melanoma (3.9 %), and urinary bladder (3.4 %) among females. Five-year survival estimates (all sites combined) restricted to first primary cancers-only were higher than estimates using first site-specific primaries (SEER or IACR rules), and for 11 of 21 sites among males and 11 of 23 sites among females. SEER estimates are comparable to IACR estimates for all site-specific cancers and marginally higher for all sites combined among females (RS 62.28 vs. 61.96 %). Conclusion: Survival after diagnosis has improved for many leading cancers. However, cancer patients remain at risk of subsequent cancers. Survival estimates based on first cancers-only exclude a large and increasing number of MP cancers. To produce clinically and epidemiologically relevant and less biased cancer survival estimates, data on all cancers should be included in the analysis. The multiple primary rules (SEER or IACR) used to identify primary cancers do not affect survival estimates if all first cancers matching the selection criteria are used to produce site-specific survival estimates. PMID:23558444

  14. On some new properties of fractional derivatives with Mittag-Leffler kernel

    NASA Astrophysics Data System (ADS)

    Baleanu, Dumitru; Fernandez, Arran

    2018-06-01

    We establish a new formula for the fractional derivative with Mittag-Leffler kernel, in the form of a series of Riemann-Liouville fractional integrals, which brings out more clearly the non-locality of fractional derivatives and is easier to handle for certain computational purposes. We also prove existence and uniqueness results for certain families of linear and nonlinear fractional ODEs defined using this fractional derivative. We consider the possibility of a semigroup property for these derivatives, and establish extensions of the product rule and chain rule, with an application to fractional mechanics.
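
    For orientation, a sketch of the series formula the abstract describes, using the standard definition of the fractional derivative with Mittag-Leffler kernel (ABR type) and the Riemann-Liouville integral; expanding the kernel E_α termwise and integrating yields the series (normalization function B(α) as in the literature):

      E_\alpha(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(\alpha n + 1)},
      \qquad
      {}^{ABR}_{\;\;a}D^{\alpha}_{t} f(t)
        = \frac{B(\alpha)}{1-\alpha}\,\frac{d}{dt}\int_{a}^{t}
          E_\alpha\!\Bigl(\tfrac{-\alpha}{1-\alpha}(t-s)^{\alpha}\Bigr) f(s)\,ds
        = \frac{B(\alpha)}{1-\alpha}\sum_{n=0}^{\infty}
          \Bigl(\tfrac{-\alpha}{1-\alpha}\Bigr)^{\!n}\,
          {}^{RL}_{\;\;a}I^{\alpha n}_{t} f(t).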

  15. NDRAM: nonlinear dynamic recurrent associative memory for learning bipolar and nonbipolar correlated patterns.

    PubMed

    Chartier, Sylvain; Proulx, Robert

    2005-11-01

    This paper presents a new unsupervised attractor neural network, which, contrary to optimal linear associative memory models, is able to develop nonbipolar attractors as well as bipolar attractors. Moreover, the model is able to develop less spurious attractors and has a better recall performance under random noise than any other Hopfield type neural network. Those performances are obtained by a simple Hebbian/anti-Hebbian online learning rule that directly incorporates feedback from a specific nonlinear transmission rule. Several computer simulations show the model's distinguishing properties.

  16. Combination of dynamic Bayesian network classifiers for the recognition of degraded characters

    NASA Astrophysics Data System (ADS)

    Likforman-Sulem, Laurence; Sigelle, Marc

    2009-01-01

    We investigate in this paper the combination of DBN (Dynamic Bayesian Network) classifiers, either independent or coupled, for the recognition of degraded characters. The independent classifiers are a vertical HMM and a horizontal HMM whose observable outputs are the image columns and the image rows, respectively. The coupled classifiers, presented in a previous study, associate the vertical and horizontal observation streams into single DBNs. The scores of the independent and coupled classifiers are then combined linearly at the decision level. We compare the different classifiers (independent, coupled, or linearly combined) on two tasks: the recognition of artificially degraded handwritten digits and the recognition of real degraded old printed characters. Our results show that coupled DBNs perform better on degraded characters than the linear combination of independent HMM scores. Our results also show that the best classifier is obtained by linearly combining the scores of the best coupled DBN and the best independent HMM.

  17. Expert networks in CLIPS

    NASA Technical Reports Server (NTRS)

    Hruska, S. I.; Dalke, A.; Ferguson, J. J.; Lacher, R. C.

    1991-01-01

    Rule-based expert systems may be structurally and functionally mapped onto a special class of neural networks called expert networks. This mapping lends itself to the adaptation of connectionist learning strategies for the expert networks. A parsing algorithm to translate C Language Integrated Production System (CLIPS) rules into a network of interconnected assertion and operation nodes has been developed. The translation of CLIPS rules to an expert network and back again is illustrated. Measures of uncertainty similar to those used in MYCIN-like systems are introduced into the CLIPS system, and techniques for combining and firing nodes in the network based on rule firing with these certainty factors in the expert system are presented. Several learning algorithms are under study which automate the process of attaching certainty factors to rules.
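
    The MYCIN-style combination alluded to here is compact enough to show directly; a sketch of the standard certainty-factor combination function (plain Python):

      def combine_cf(cf1, cf2):
          # MYCIN certainty-factor combination for two rules concluding
          # the same hypothesis; inputs and output lie in [-1, 1].
          if cf1 >= 0 and cf2 >= 0:
              return cf1 + cf2 * (1 - cf1)
          if cf1 < 0 and cf2 < 0:
              return cf1 + cf2 * (1 + cf1)
          return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

      print(combine_cf(0.6, 0.5))    # 0.8: supporting evidence reinforces
      print(combine_cf(0.6, -0.4))   # ~0.33: conflicting evidence attenuates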

  18. Investigation of model-based physical design restrictions (Invited Paper)

    NASA Astrophysics Data System (ADS)

    Lucas, Kevin; Baron, Stanislas; Belledent, Jerome; Boone, Robert; Borjon, Amandine; Couderc, Christophe; Patterson, Kyle; Riviere-Cazaux, Lionel; Rody, Yves; Sundermann, Frank; Toublan, Olivier; Trouiller, Yorick; Urbani, Jean-Christophe; Wimmer, Karl

    2005-05-01

    As lithography and other patterning processes become more complex and more non-linear with each generation, the task of defining physical design rules necessarily increases in complexity as well. The goal of the physical design rules is to separate the physical layout structures which will yield well from those which will not. This is essentially a rule-based pre-silicon guarantee of layout correctness. However, the rapid increase in design rule complexity has created logistical problems for both the design and process functions. Therefore, similar to the semiconductor industry's transition from rule-based to model-based optical proximity correction (OPC) due to increased patterning complexity, opportunities for improving physical design restrictions by implementing model-based physical design methods are evident. In this paper we analyze the possible need and applications for model-based physical design restrictions (MBPDR). We first analyze the traditional design rule evolution, development and usage methodologies of semiconductor manufacturers. Next we discuss examples of specific design rule challenges requiring new solution methods in the patterning regime of low-k1 lithography and highly complex RET. We then evaluate possible working strategies for MBPDR in the process development and product design flows, including examples of recent model-based pre-silicon verification techniques. Finally, we summarize with a proposed flow and key considerations for MBPDR implementation.

  19. Combined linear theory/impact theory method for analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1980-01-01

    Pressure distributions on a wing body at Mach 4.63 are calculated. The combined theory is shown to give improved predictions over either linear theory or impact theory alone. The combined theory is also applied in the inverse design mode to calculate optimum camber slopes at Mach 4.63. Comparisons with optimum camber slopes obtained from unmodified linear theory show large differences. Analysis of the results indicate that the combined theory correctly predicts the effect of thickness on the loading distributions at high Mach numbers, and that finite thickness wings optimized at high Mach numbers using unmodified linear theory will not achieve the minimum drag characteristics for which they are designed.

  20. New insight into the comparative power of quality-control rules that use control observations within a single analytical run.

    PubMed

    Parvin, C A

    1993-03-01

    The error detection characteristics of quality-control (QC) rules that use control observations within a single analytical run are investigated. Unlike the evaluation of QC rules that span multiple analytical runs, most of the fundamental results regarding the performance of QC rules applied within a single analytical run can be obtained from statistical theory, without the need for simulation studies. The case of two control observations per run is investigated for ease of graphical display, but the conclusions can be extended to more than two control observations per run. Results are summarized in a graphical format that offers many interesting insights into the relations among the various QC rules. The graphs provide heuristic support to the theoretical conclusions that no QC rule is best under all error conditions, but the multirule that combines the mean rule and a within-run standard deviation rule offers an attractive compromise.
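
    The within-run power calculations derived from normal theory are easy to reproduce; a sketch for n = 2 control observations comparing a mean rule with a single-value rule under a systematic shift (Python with SciPy; the control limits are illustrative choices, not the paper's):

      from scipy.stats import norm

      def power_mean_rule(shift, k=1.96, n=2):
          # Reject if |mean of n controls| > k*SD/sqrt(n); shift in SD units.
          z = shift * n ** 0.5
          return 1 - norm.cdf(k - z) + norm.cdf(-k - z)

      def power_single_value_rule(shift, k=2.58, n=2):
          # Reject if any of the n controls falls outside +/- k*SD.
          p_in = norm.cdf(k - shift) - norm.cdf(-k - shift)
          return 1 - p_in ** n

      for shift in (0.0, 1.0, 2.0, 3.0):
          print(shift, round(power_mean_rule(shift), 3),
                round(power_single_value_rule(shift), 3))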

  1. The RC Circuit--A Multipurpose Laboratory Experiment.

    ERIC Educational Resources Information Center

    Wood, Herbert T.

    1993-01-01

    Describes an experiment that demonstrates the use of Kirchhoff's rules in the analysis of electrical circuits. The experiment also involves the solution of a linear nonhomogeneous differential equation that is slightly different from the standard one for the simple RC circuit. (ZWH)
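
    The nonhomogeneous linear ODE in question is the charging equation RC·dV/dt + V = Vs, whose solution from rest is V(t) = Vs(1 - e^(-t/RC)). A worked numeric sketch (Python, with illustrative component values):

      import numpy as np

      R, C, Vs = 1.0e3, 1.0e-6, 5.0         # 1 kOhm, 1 uF, 5 V (illustrative)
      tau = R * C                           # time constant: 1 ms
      t = np.linspace(0.0, 5 * tau, 6)
      V = Vs * (1.0 - np.exp(-t / tau))     # solution of RC dV/dt + V = Vs, V(0)=0
      for ti, vi in zip(t, V):
          print(f"t = {1e3 * ti:.1f} ms   V_C = {vi:.3f} V")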

  2. Challenges for Rule Systems on the Web

    NASA Astrophysics Data System (ADS)

    Hu, Yuh-Jong; Yeh, Ching-Long; Laun, Wolfgang

    The RuleML Challenge started in 2007 with the objective of inspiring work on the implementation, management, integration, interoperation, and interchange of rules in an open distributed environment, such as the Web. Rules are usually classified into three types: deductive rules, normative rules, and reactive rules. Reactive rules are further classified into ECA rules and production rules. The study of combining rules and ontologies traces back to earlier active rule systems for relational and object-oriented (OO) databases. Recently, this issue has become one of the most important research problems in the Semantic Web. Once we consider a computer-executable policy as a declarative set of rules and ontologies that guides the behavior of entities within a system, we have a flexible way to implement real-world policies without rewriting the computer code, as we did before. Fortunately, we have de facto rule markup languages, such as RuleML or RIF, to achieve the portability and interchange of rules among different rule systems; otherwise, executing real-life rule-based applications on the Web would be almost impossible. Several commercial and open source rule engines are available for rule-based applications. However, we still need a standard rule language and benchmarks, not only to compare rule systems but also to measure progress in the field. Finally, a number of real-life rule-based use cases are investigated to demonstrate the applicability of current rule systems on the Web.

  3. Validity of Vegard’s rule for Al1-xInxN (0.08  <  x  <  0.28) thin films grown on GaN templates

    NASA Astrophysics Data System (ADS)

    Magalhães, S.; Franco, N.; Watson, I. M.; Martin, R. W.; O'Donnell, K. P.; Schenk, H. P. D.; Tang, F.; Sadler, T. C.; Kappers, M. J.; Oliver, R. A.; Monteiro, T.; Martin, T. L.; Bagot, P. A. J.; Moody, M. P.; Alves, E.; Lorenz, K.

    2017-05-01

    In this work, comparative x-ray diffraction (XRD) and Rutherford backscattering spectrometry (RBS) measurements allow a comprehensive characterization of Al1-xInxN thin films grown on GaN. Within the limits of experimental accuracy, and in the compositional range 0.08 < x < 0.28, the lattice parameters of the alloys generally obey Vegard's rule, varying linearly with the InN fraction. Results are also consistent with the small deviation from linear behaviour suggested by Darakchieva et al (2008 Appl. Phys. Lett. 93 261908). However, unintentional incorporation of Ga, revealed by atom probe tomography (APT) at levels below the detection limit for RBS, may also affect the lattice parameters. Furthermore, in certain samples the compositions determined by XRD and RBS differ significantly. This fact, which was interpreted in earlier publications as an indication of a deviation from Vegard's rule, may rather be ascribed to the influence of defects or impurities on the lattice parameters of the alloy. The wide-ranging set of Al1-xInxN films studied furthermore allowed a detailed investigation of the composition leading to lattice matching of Al1-xInxN/GaN bilayers.
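
    Vegard's rule itself is just linear interpolation between the end-member lattice parameters. A sketch for the in-plane parameter of Al1-xInxN (Python; the AlN and InN values are commonly quoted approximations used here for illustration):

      a_AlN, a_InN = 3.112, 3.545    # approximate a lattice parameters (angstrom)

      def vegard_a(x):
          # Linear interpolation between end members (Vegard's rule).
          return (1.0 - x) * a_AlN + x * a_InN

      for x in (0.08, 0.18, 0.28):
          print(f"x = {x:.2f}:  a = {vegard_a(x):.3f} A")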

  4. Analysis of correlation between pediatric asthma exacerbation and exposure to pollutant mixtures with association rule mining.

    PubMed

    Toti, Giulia; Vilalta, Ricardo; Lindner, Peggy; Lefer, Barry; Macias, Charles; Price, Daniel

    2016-11-01

    Traditional studies of the effects of outdoor pollution on asthma have been criticized for questionable statistical validity and inefficacy in exploring the effects of multiple air pollutants, alone and in combination. Association rule mining (ARM), a method that is easily interpretable and suitable for analyzing the effects of multiple exposures, could be of use, but the traditional interest metrics of support and confidence need to be replaced with metrics that focus on risk variations caused by different exposures. We present an ARM-based methodology that produces rules with associated odds ratios and limits the number of final rules even at very low support levels (0.5%), thanks to post-pruning criteria that limit rule redundancy and control for statistical significance. The methodology was applied in a case-crossover study to explore the effects of multiple air pollutants on the risk of asthma in pediatric subjects. We identified 27 rules with interesting odds ratios among the more than 10,000 rules having the required support. The only rule involving a single chemical is exposure to ozone on the day preceding the reported asthma attack (OR = 1.14). The 26 combinatory rules highlight the limitations of air quality policies based on single-pollutant thresholds and suggest that exposure to mixtures of chemicals is more harmful, with odds ratios as high as 1.54 (associated with the combination day0 SO2, day0 NO, day0 NO2, day1 PM). The proposed method can be used to analyze risk variations caused by single and multiple exposures. The method is reliable and requires fewer assumptions on the data than parametric approaches. Rules including more than one pollutant highlight interactions that deserve further investigation, while helping to limit the search field. Copyright © 2016 Elsevier B.V. All rights reserved.
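
    The interest metric substituted for support and confidence here is the odds ratio, computed from a 2×2 exposure/outcome table. A minimal sketch (Python; the counts are hypothetical, chosen to land near the strongest rule reported):

      def odds_ratio(exp_cases, exp_ctrls, unexp_cases, unexp_ctrls):
          # OR for a rule "exposure -> asthma visit" (no continuity correction).
          return (exp_cases / exp_ctrls) / (unexp_cases / unexp_ctrls)

      # hypothetical counts for a combined-exposure rule
      print(round(odds_ratio(40, 60, 400, 900), 2))   # 1.5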

  5. Weighting Primary Care Patient Panel Size: A Novel Electronic Health Record-Derived Measure Using Machine Learning.

    PubMed

    Rajkomar, Alvin; Yim, Joanne Wing Lan; Grumbach, Kevin; Parekh, Ami

    2016-10-14

    Characterizing patient complexity using granular electronic health record (EHR) data regularly available to health systems is necessary to optimize primary care processes at scale. Our objective was to characterize the utilization patterns of primary care patients and create weighted panel sizes for providers based on the work required to care for patients with different patterns. We used EHR data over a 2-year period from patients empaneled to primary care clinicians in a single academic health system, including their in-person encounter history and virtual encounters such as telephonic visits, electronic messaging, and care coordination with specialists. Using a combination of decision rules and k-means clustering, we identified clusters of patients with similar health care system activity. Phenotypes with basic demographic information were used to predict future health care utilization using log-linear models. Phenotypes were also used to calculate weighted panel sizes. We identified 7 primary care utilization phenotypes, which were characterized by various combinations of primary care and specialty usage and were deemed clinically distinct by primary care physicians. These phenotypes, combined with age-sex and primary payer variables, predicted future primary care utilization with an R² of 0.394 and were used to create weighted panel sizes. Individual patients' health care utilization may be useful for classifying patients by primary care work effort and for predicting future primary care usage.
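
    A rough sketch of the clustering step under stated assumptions: patients are represented by per-channel utilization counts, grouped with k-means (k = 7, echoing the abstract), and each cluster's mean workload serves as a hypothetical panel weight. The feature set and all data are synthetic:

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      # synthetic per-patient counts: visits, phone, messages, referrals
      X = rng.poisson(lam=[4, 2, 6, 1], size=(1000, 4))

      km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(X)
      workload = X.sum(axis=1)                # crude per-patient work proxy
      weight = np.array([workload[km.labels_ == c].mean() for c in range(7)])
      weight /= weight.mean()                 # 1.0 = average-work patient

      # weighted panel size for one provider's (randomly drawn) panel
      panel = rng.integers(0, 1000, size=150)
      print(f"raw = {panel.size}, weighted = {weight[km.labels_[panel]].sum():.0f}")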

  6. High Density Polyethylene Composites Reinforced with Hybrid Inorganic Fillers: Morphology, Mechanical and Thermal Expansion Performance

    PubMed Central

    Huang, Runzhou; Xu, Xinwu; Lee, Sunyoung; Zhang, Yang; Kim, Birm-June; Wu, Qinglin

    2013-01-01

    The effect of individual and combined talc and glass fibers (GFs) on mechanical and thermal expansion performance of the filled high density polyethylene (HDPE) composites was studied. Several published models were adapted to fit the measured tensile modulus and strength of various composite systems. It was shown that the use of silane-modified GFs had a much larger effect in improving mechanical properties and in reducing linear coefficient of thermal expansion (LCTE) values of filled composites, compared with the use of un-modified talc particles, due to enhanced bonding to the matrix, larger aspect ratio, and fiber alignment for GFs. Mechanical properties and LCTE values of composites with combined talc and GF fillers varied with talc and GF ratio at a given total filler loading level. The use of a larger portion of GFs in the mix can lead to better composite performance, while the use of talc can help lower the composite costs and increase its recyclability. The use of 30 wt % combined filler seems necessary to control LCTE values of filled HDPE in the data value range generally reported for commercial wood plastic composites. The tensile modulus of the talc-filled composites can be predicted with the rule of mixtures, while a PPA-based model can be used to predict the modulus and strength of GF-filled composites. PMID:28788322
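
    The rule of mixtures invoked for the talc-filled composites is a one-line calculation; the moduli and efficiency factor below are illustrative assumptions, not values from the study:

      # Voigt-type rule of mixtures for composite tensile modulus; the
      # efficiency factor crudely accounts for filler orientation and
      # aspect ratio (all numbers are illustrative).
      def rule_of_mixtures(E_filler, E_matrix, v_filler, efficiency=1.0):
          return efficiency * v_filler * E_filler + (1.0 - v_filler) * E_matrix

      E_talc, E_hdpe = 170.0, 1.1   # GPa, assumed handbook-order values
      E_c = rule_of_mixtures(E_talc, E_hdpe, v_filler=0.15, efficiency=0.05)
      print(f"E_c ~ {E_c:.2f} GPa")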

  7. 7 CFR 29.3120 - Rule 17.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Rule 17. 29.3120 Section 29.3120 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... leaves, or any lot which contains 20 percent of greenish and green leaves combined, shall be designated...

  8. 7 CFR 29.3120 - Rule 17.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Rule 17. 29.3120 Section 29.3120 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... leaves, or any lot which contains 20 percent of greenish and green leaves combined, shall be designated...

  9. Deformation history and load sequence effects on cumulative fatigue damage and life predictions

    NASA Astrophysics Data System (ADS)

    Colin, Julie

    Fatigue loading seldom involves constant amplitude loading. This is especially true in the cooling systems of nuclear power plants, typically made of stainless steel, where thermal fluctuations and water turbulent flow create variable amplitude loads, with the presence of mean stresses and overloads. These complex loading sequences lead to the formation of networks of microcracks (crazing) that can propagate. As stainless steel is a material with strong deformation history effects and phase transformation resulting from plastic straining, such load sequence and variable amplitude loading effects are significant to its fatigue behavior and life predictions. The goal of this study was to investigate the effects of cyclic deformation on fatigue behavior of stainless steel 304L as a deformation history sensitive material and determine how to quantify and accumulate fatigue damage to enable life predictions under variable amplitude loading conditions for such materials. A comprehensive experimental program including testing under fully-reversed, as well as mean stress and/or mean strain conditions, with initial or periodic overloads, along with step testing and random loading histories was conducted on two grades of stainless steel 304L, under both strain-controlled and load-controlled conditions. To facilitate comparisons with a material without deformation history effects, similar tests were also carried out on aluminum 7075-T6. Experimental results are discussed, including peculiarities observed with stainless steel behavior, such as a phenomenon referred to as secondary hardening, characterized by a continuous increase in the stress response in a strain-controlled test and often leading to runout fatigue life. Possible mechanisms for secondary hardening observed in some tests are also discussed. The behavior of aluminum is shown not to be affected by preloading, whereas the behavior of stainless steel is greatly influenced by prior loading. Mean stress relaxation in strain control and ratcheting in load control and their influence on fatigue life are discussed. Some unusual mean strain test results are presented for stainless steel 304L, where in spite of mean stress relaxation fatigue lives were significantly longer than in fully-reversed tests. Prestraining indicated no effect on either deformation or fatigue behavior of aluminum, while it induced considerable hardening in stainless steel 304L and led to different results on fatigue life, depending on the test control mode. In step tests for stainless steel 304L, strong hardening induced by the first step of a high-low sequence significantly affects the fatigue behavior, depending on the test control mode used. For periodic overload tests of stainless steel 304L, hardening due to the overloads was progressive throughout life and more significant than in high-low step tests. For aluminum, no effect on deformation behavior was observed due to periodic overloads. However, the direction of the overloads was found to affect fatigue life, as tensile overloads led to longer lives, while compressive overloads led to shorter lives. Deformation and fatigue behaviors under random loading conditions are also presented and discussed for the two materials. The applicability of a common cumulative damage rule, the linear damage rule, is assessed for the two types of material, and for various loading conditions.
While the linear damage rule associated with a strain-life or stress-life curve is shown to be fairly accurate for life predictions for aluminum, it is shown to poorly represent the behavior of stainless steel, especially in prestrained and high-low step tests, in load control. In order to account for prior deformation effects and achieve accurate fatigue life predictions for stainless steel, parameters including both stress and strain terms are required. The Smith-Watson-Topper and Fatemi-Socie approaches, as such parameters, are shown to correlate most test data fairly accurately. For damage accumulation under variable amplitude loading, the linear damage rule associated with strain-life or stress-life curves can lead to inaccurate fatigue life predictions, especially for materials presenting a strong deformation memory effect, such as stainless steel 304L. The inadequacy of this method is typically attributed to the linear damage rule itself. On the contrary, this study demonstrates that damage accumulation using the linear damage rule can be accurate, provided that the linear damage rule is used in conjunction with parameters including both stress and strain terms. By including both the loading history and the response of the material in damage quantification, shortcomings of the commonly used linear damage rule approach can be circumvented in an effective manner. In addition, cracking behavior was also analyzed under various loading conditions. Results on microcrack initiation and propagation are presented in relation to deformation and fatigue behaviors of the materials. Microcracks were observed to form during the first few percent of life, indicating that most of the fatigue life of smooth specimens is spent in microcrack formation and growth. Analyses of fractured specimens showed that microcrack formation and growth is dependent on the loading history, and less important in aluminum than in stainless steel 304L, due to the higher toughness of the latter material.
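
    The linear damage rule assessed throughout this work is compact enough to state directly; in the sketch below the cycle counts and constant-amplitude lives are hypothetical, and, per the study's conclusion, the lives N_i would come from a life curve based on a stress-strain parameter such as Smith-Watson-Topper rather than a plain strain-life curve:

      # Linear damage rule (Palmgren-Miner): damage per loading block is the
      # cycle ratio n_i / N_i; failure is predicted when the sum reaches 1.
      def miner_damage(blocks):
          """blocks: iterable of (applied_cycles, constant_amplitude_life)."""
          return sum(n / N for n, N in blocks)

      history = [(5_000, 20_000),    # high-amplitude block (hypothetical)
                 (40_000, 200_000)]  # low-amplitude block (hypothetical)
      D = miner_damage(history)
      print(f"D = {D:.2f} -> {'failure predicted' if D >= 1 else 'survives'}")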

  10. Fusion of local and global detection systems to detect tuberculosis in chest radiographs.

    PubMed

    Hogeweg, Laurens; Mol, Christian; de Jong, Pim A; Dawson, Rodney; Ayles, Helen; van Ginneken, Bram

    2010-01-01

    Automatic detection of tuberculosis (TB) on chest radiographs is a difficult problem because of the diverse presentation of the disease. A combination of detection systems for abnormalities and normal anatomy is used to improve detection performance. A textural abnormality detection system operating at the pixel level is combined with a clavicle detection system to suppress false positive responses. The output of a shape abnormality detection system operating at the image level is combined in a next step to further improve performance by reducing false negatives. Strategies for combining systems based on serial and parallel configurations were evaluated using the minimum, maximum, product, and mean probability combination rules. The performance of TB detection increased, as measured using the area under the ROC curve, from 0.67 for the textural abnormality detection system alone to 0.86 when the three systems were combined. The best result was achieved using the sum and product rule in a parallel combination of outputs.
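
    The four fixed combination rules evaluated here reduce to one-liners over the per-system posterior estimates; the probabilities below are made up:

      # Fixed probability combination rules (min, max, mean, product) applied
      # to abnormality probabilities from several detection systems.
      import numpy as np

      def combine(probs, rule):
          """probs: array (n_systems, n_samples) of posterior estimates."""
          return {"min": np.min, "max": np.max,
                  "mean": np.mean, "product": np.prod}[rule](probs, axis=0)

      p = np.array([[0.9, 0.2, 0.6],    # e.g. textural system (made up)
                    [0.7, 0.1, 0.8]])   # e.g. shape system (made up)
      for rule in ("min", "max", "mean", "product"):
          print(rule, combine(p, rule))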

  11. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer, and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach considerably simplifies and accelerates the optimization process because the linear parameters are not among those fitted. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, the algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to the simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to χ², obtained from the Taylor series expansion of χ², is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones, and it does not apply to model functions which are multi-linear combinations of nonlinear functions.
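
    A sketch of the separable idea behind GA-MLR for a biexponential decay: an evolutionary optimizer searches only the nonlinear lifetimes, while the amplitudes are recovered at every candidate by linear least squares. Here scipy's differential_evolution stands in for the paper's GA:

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(1)
      t = np.linspace(0, 10, 200)
      y = 2.0 * np.exp(-t / 0.8) + 0.5 * np.exp(-t / 4.0) \
          + rng.normal(0, 0.01, t.size)         # synthetic biexponential decay

      def chi2(taus):
          basis = np.exp(-t[:, None] / taus)    # columns: nonlinear functions
          amps, *_ = np.linalg.lstsq(basis, y, rcond=None)  # MLR: linear step
          return np.sum((basis @ amps - y) ** 2)

      res = differential_evolution(chi2, bounds=[(0.1, 2.0), (2.0, 10.0)], seed=1)
      print("recovered lifetimes:", res.x)      # ~[0.8, 4.0]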

  12. Mining knowledge from corpora: an application to retrieval and indexing.

    PubMed

    Soualmia, Lina F; Dahamna, Badisse; Darmoni, Stéfan

    2008-01-01

    The present work aims at discovering new associations between medical concepts to be exploited as input in retrieval and indexing. The association rule method is applied to documents. The process is carried out on three major document categories referring to e-health information consumers: health professionals, students, and lay people. Association rule evaluation is based on statistical measures combined with domain knowledge. Association rules represent existing relations between medical concepts (60.62%) and new knowledge (54.21%). Based on observations, 463 expert rules are defined by medical librarians for retrieval and indexing. Association rules bear out existing relations, produce new knowledge, and support users and indexers in document retrieval and indexing.

  13. Deductibles in health insurance

    NASA Astrophysics Data System (ADS)

    Dimitriyadis, I.; Öney, Ü. N.

    2009-11-01

    This study is an extension of a simulation study that was developed to determine ruin probabilities in health insurance. The study concentrates on inpatient and outpatient benefits for customers of varying age bands. Loss distributions are modelled through the Allianz tool pack for different classes of insureds. Premiums at different levels of deductibles are derived in the simulation, and ruin probabilities are computed assuming a linear loading on the premium. The increase in the probability of ruin at high levels of the deductible clearly shows the insufficiency of proportional loading in deductible premiums. The PH-transform pricing rule developed by Wang is analyzed as an alternative pricing rule. A simple case, where the insured is assumed to be an exponential utility decision maker while the insurer's pricing rule is a PH-transform, is also treated.
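
    A sketch of the PH-transform pricing rule referred to above: the premium is the integral of the distorted survival function S(x)^(1/ρ) with ρ ≥ 1. For an exponential loss with mean μ the closed form is ρμ, which the quadrature below reproduces (all values illustrative):

      # Wang's proportional-hazards transform premium via numerical quadrature.
      import numpy as np
      from scipy.integrate import quad

      def ph_premium(survival, rho):
          """Integral of survival(x)**(1/rho) over [0, inf), rho >= 1."""
          value, _ = quad(lambda x: survival(x) ** (1.0 / rho), 0.0, np.inf)
          return value

      mu, rho = 100.0, 1.5   # mean loss and risk-aversion index (illustrative)
      premium = ph_premium(lambda x: np.exp(-x / mu), rho)
      print(f"PH premium = {premium:.1f} vs expected loss = {mu:.1f}")  # 150 vs 100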

  14. 7 CFR 29.2633 - Rule 17.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Rule 17. 29.2633 Section 29.2633 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... leaves or any lot which contains 20 percent of greenish and green leaves combined shall be designated by...

  15. 7 CFR 29.2633 - Rule 17.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Rule 17. 29.2633 Section 29.2633 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... leaves or any lot which contains 20 percent of greenish and green leaves combined shall be designated by...

  16. The Development of Display Rule Knowledge: Linkages with Family Expressiveness and Social Competence.

    ERIC Educational Resources Information Center

    Jones, Diane Carlson; Cumberland, Amanda; Abbey, Belynda Bowling

    1998-01-01

    Two studies investigated emotional-display-rule knowledge and its associations with family expressiveness and peer competence. Findings indicated that third graders combined expression regulation with prosocial reasoning, norm-maintenance, and self-protective motives more frequently than kindergartners. Negative expressiveness was related…

  17. Origin of nonsaturating linear magnetoresistivity

    NASA Astrophysics Data System (ADS)

    Kisslinger, Ferdinand; Ott, Christian; Weber, Heiko B.

    2017-01-01

    The observation of nonsaturating classical linear magnetoresistivity has been an enigmatic phenomenon in solid-state physics. We present a study of a two-dimensional ohmic conductor, including the local Hall effect and a self-consistent consideration of the environment. An equivalent-circuit scheme delivers a simple and convincing argument for why the magnetoresistivity is linear in a strong magnetic field, provided that the current and the biasing electric field are misaligned by a nonlocal mechanism. A finite-element model of a two-dimensional conductor is suited to display the situations that create such deviating currents. Besides edge effects next to electrodes, charge carrier density fluctuations efficiently generate this effect. However, mobility fluctuations that have frequently been related to linear magnetoresistivity are barely relevant. Despite its rare observation, linear magnetoresistivity is the rule rather than the exception in a regime of low charge carrier densities, misaligned current pathways, and strong magnetic fields.

  18. A novel way of integrating rule-based knowledge into a web ontology language framework.

    PubMed

    Gamberger, Dragan; Krstaçić, Goran; Jović, Alan

    2013-01-01

    Web ontology language (OWL), used in combination with the Protégé visual interface, is a modern standard for development and maintenance of ontologies and a powerful tool for knowledge presentation. In this work, we describe a novel possibility to use OWL also for the conceptualization of knowledge presented by a set of rules. In this approach, rules are represented as a hierarchy of actionable classes with necessary and sufficient conditions defined by the description logic formalism. The advantages are that: the set of the rules is not an unordered set anymore, the concepts defined in descriptive ontologies can be used directly in the bodies of rules, and Protégé presents an intuitive tool for editing the set of rules. Standard ontology reasoning processes are not applicable in this framework, but experiments conducted on the rule sets have demonstrated that the reasoning problems can be successfully solved.

  19. Adaptive decision rules for the acquisition of nature reserves.

    PubMed

    Turner, Will R; Wilcove, David S

    2006-04-01

    Although reserve-design algorithms have shown promise for increasing the efficiency of conservation planning, recent work casts doubt on the usefulness of some of these approaches in practice. Using three data sets that vary widely in size and complexity, we compared various decision rules for acquiring reserve networks over multiyear periods. We explored three factors that are often important in real-world conservation efforts: uncertain availability of sites for acquisition, degradation of sites, and overall budget constraints. We evaluated the relative strengths and weaknesses of existing optimal and heuristic decision rules and developed a new set of adaptive decision rules that combine the strengths of existing optimal and heuristic approaches. All three of the new adaptive rules performed better than the existing rules we tested under virtually all scenarios of site availability, site degradation, and budget constraints. Moreover, the adaptive rules required no additional data beyond what was readily available and were relatively easy to compute.

  20. Operation Condition Monitoring using Temporal Weighted Dempster-Shafer Theory

    DTIC Science & Technology

    2014-12-23

    are mutually exclusive; a mapping m: 2^Θ → [0, 1], which defines the basic probability assignment (BPA) of each subset A of the hypotheses, satisfying m(∅) = 0 and Σ_{A ⊆ Θ} m(A) = 1. The BPA represents a certain piece of evidence. A rule of D-S evidence combination can be used to yield a new BPA from two independent pieces of evidence and their BPAs. There are a number of possible combination rules in application (Sentz, 2002). One
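
    A minimal implementation of the classical Dempster combination rule for two BPAs over a small frame of discernment (the frame and mass values are made up):

      # Dempster's rule: multiply masses of all focal-element pairs, keep
      # non-empty intersections, and renormalize by 1 - conflict.
      from itertools import product

      def dempster_combine(m1, m2):
          combined, conflict = {}, 0.0
          for (A, a), (B, b) in product(m1.items(), m2.items()):
              inter = A & B
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + a * b
              else:
                  conflict += a * b          # mass falling on the empty set
          if conflict >= 1.0:
              raise ValueError("totally conflicting evidence")
          return {A: v / (1.0 - conflict) for A, v in combined.items()}

      F, D = frozenset({"fault"}), frozenset({"degraded"})
      theta = F | D                          # frame of discernment (made up)
      m1 = {F: 0.6, theta: 0.4}              # evidence source 1
      m2 = {F: 0.5, D: 0.3, theta: 0.2}      # evidence source 2
      print(dempster_combine(m1, m2))        # fault mass rises to ~0.76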

  1. Response-Time Tests of Logical-Rule Models of Categorization

    ERIC Educational Resources Information Center

    Little, Daniel R.; Nosofsky, Robert M.; Denton, Stephen E.

    2011-01-01

    A recent resurgence in logical-rule theories of categorization has motivated the development of a class of models that predict not only choice probabilities but also categorization response times (RTs; Fific, Little, & Nosofsky, 2010). The new models combine mental-architecture and random-walk approaches within an integrated framework and…

  2. Pinochle Poker: An Activity for Counting and Probability

    ERIC Educational Resources Information Center

    Wroughton, Jacqueline; Nolan, Joseph

    2012-01-01

    Understanding counting rules is challenging for students; in particular, they struggle with determining when and how to implement combinations, permutations, and the multiplication rule as tools for counting large sets and computing probability. We present an activity--using ideas from the games of poker and pinochle--designed to help students…
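
    The counting tools this activity targets are available directly in Python's math module; a standard 52-card deck is used for scale (a pinochle deck has 48 cards with duplicates, which changes the counts):

      import math

      hands = math.comb(52, 5)   # 5-card hands: order irrelevant -> 2,598,960
      seqs = math.perm(52, 5)    # 5-card sequences: order matters
      print(hands, seqs, seqs // hands)   # the last factor is 5! = 120

      # multiplication rule: 13 ranks x 4 suits -> 52 distinct cards
      print(13 * 4)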

  3. The Aromaticity of Pericyclic Reaction Transition States

    ERIC Educational Resources Information Center

    Rzepa, Henry S.

    2007-01-01

    An approach is presented that starts from two fundamental concepts in organic chemistry, chirality and aromaticity, and combines them into a simple rule for stating selection rules for pericyclic reactions in terms of achiral Hückel-aromatic and chiral Möbius-aromatic transition states. This is illustrated using an example that leads to apparent…

  4. 7 CFR 29.1126 - Rule 20.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 2 2011-01-01 2011-01-01 false Rule 20. 29.1126 Section 29.1126 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... except green, green red, green variegated, gray green, or the combination symbols “GL,” or “GF” in the...

  5. 7 CFR 29.1126 - Rule 20.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Rule 20. 29.1126 Section 29.1126 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... except green, green red, green variegated, gray green, or the combination symbols “GL,” or “GF” in the...

  6. How second-grade students internalize rules during teacher-student transactions: a case study.

    PubMed

    Méard, Jacques; Bertone, Stefano; Flavier, Eric

    2008-09-01

    Vygotsky's theory of the internalization of signs provided the basis for this study, which analysed the processes by which second-grade students internalize school rules. Ethnographic data were collected on 102 lessons in a second-grade class (6-8 years) during 1 year. The study focused on three lessons (ethnographic data complemented by video recordings, post-lesson interviews with the teacher, and transcriptions of the verbal interactions of the lessons and interviews). The longitudinal observation data were broken down into discrete transactions, crossed with the recorded data, and analysed in a four-step procedure. The results showed that the students' self-regulated actions (voluntary performance of prescribed actions) corresponded to the teacher's presentation of the rules, which was varied and personalized. She used explanation/justification, negotiation, persuasion, or imposition as a function of the situation and the students concerned. The results revealed: (a) multiple actions of explanation/justification of the rules, negotiation, and persuasion addressed to the entire class; (b) personalized actions of persuasion and rule imposition in instances of heteronomous actions by students; (c) actions adjusted to the dynamics of the transactions. This study demonstrates how closely the actions of teacher and students are linked. Rather than a linear process of rule internalization, education resembles a co-construction of rules between teacher and students. These results can serve as a basis for teacher-training tools.

  7. A fuzzy classifier system for process control

    NASA Technical Reports Server (NTRS)

    Karr, C. L.; Phillips, J. C.

    1994-01-01

    A fuzzy classifier system that discovers rules for controlling a mathematical model of a pH titration system was developed by researchers at the U.S. Bureau of Mines (USBM). Fuzzy classifier systems successfully combine the strengths of learning classifier systems and fuzzy logic controllers. Learning classifier systems resemble familiar production rule-based systems, but they represent their IF-THEN rules by strings of characters rather than in traditional linguistic terms. Fuzzy logic is a tool that allows for the incorporation of abstract concepts into rule-based systems, thereby allowing the rules to resemble the familiar 'rules-of-thumb' commonly used by humans when solving difficult process control and reasoning problems. Like learning classifier systems, fuzzy classifier systems employ a genetic algorithm to explore and sample new rules for manipulating the problem environment. Like fuzzy logic controllers, fuzzy classifier systems encapsulate knowledge in the form of production rules. The results presented in this paper demonstrate the ability of fuzzy classifier systems to generate a fuzzy logic-based process control system.

  8. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.

  9. A quantitative quantum chemical model of the Dewar-Knott color rule for cationic diarylmethanes

    NASA Astrophysics Data System (ADS)

    Olsen, Seth

    2012-04-01

    We document the quantitative manifestation of the Dewar-Knott color rule in a four-electron, three-orbital state-averaged complete active space self-consistent field (SA-CASSCF) model of a series of bridge-substituted cationic diarylmethanes. We show that the lowest excitation energies calculated using multireference perturbation theory based on the model are linearly correlated with the development of hole density in an orbital localized on the bridge, and the depletion of pair density in the same orbital. We quantitatively express the correlation in the form of a generalized Hammett equation.

  10. 75 FR 64753 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-20

    ....135) and Rule 165 (17 CFR 230.165) in connection with business combination transactions. The purpose..., mergers and other business combination transactions on a more timely basis, so long as the written...

  11. Predicting the nonlinear optical response in the resonant region from the linear characterization: a self-consistent theory for the first-, second-, and third-order (non)linear optical response

    NASA Astrophysics Data System (ADS)

    Pérez-Moreno, Javier; Clays, Koen; Kuzyk, Mark G.

    2010-08-01

    We introduce a self-consistent theory for the description of the optical linear and nonlinear response of molecules that is based strictly on the results of the experimental characterization. We show how the Thomas-Kuhn sum-rules can be used to eliminate the dependence of the nonlinear response on parameters that are not directly measurable. Our approach leads to the successful modeling of the dispersion of the nonlinear response of complex molecular structures with different geometries (dipolar and octupolar), and can be used as a guide towards the modeling in terms of fundamental physical parameters.

  12. Rule-based modeling: a computational approach for studying biomolecular site dynamics in cell signaling systems

    PubMed Central

    Chylek, Lily A.; Harris, Leonard A.; Tung, Chang-Shung; Faeder, James R.; Lopez, Carlos F.

    2013-01-01

    Rule-based modeling was developed to address the limitations of traditional approaches for modeling chemical kinetics in cell signaling systems. These systems consist of multiple interacting biomolecules (e.g., proteins), which themselves consist of multiple parts (e.g., domains, linear motifs, and sites of phosphorylation). Consequently, biomolecules that mediate information processing generally have the potential to interact in multiple ways, with the number of possible complexes and post-translational modification states tending to grow exponentially with the number of binary interactions considered. As a result, only large reaction networks capture all possible consequences of the molecular interactions that occur in a cell signaling system, which is problematic because traditional modeling approaches for chemical kinetics (e.g., ordinary differential equations) require explicit network specification. This problem is circumvented through representation of interactions in terms of local rules. With this approach, network specification is implicit and model specification is concise. Concise representation results in a coarse graining of chemical kinetics, which is introduced because all reactions implied by a rule inherit the rate law associated with that rule. Coarse graining can be appropriate if interactions are modular, and the coarseness of a model can be adjusted as needed. Rules can be specified using specialized model-specification languages, and recently developed tools designed for specification of rule-based models allow one to leverage powerful software engineering capabilities. A rule-based model comprises a set of rules, which can be processed by general-purpose simulation and analysis tools to achieve different objectives (e.g., to perform either a deterministic or stochastic simulation). PMID:24123887

  13. Guggenheim's rule and the enthalpy of vaporization of simple and polar fluids, molten salts, and room temperature ionic liquids.

    PubMed

    Weiss, Volker C

    2010-07-22

    One of Guggenheim's many corresponding-states rules for simple fluids implies that the molar enthalpy of vaporization (determined at the temperature at which the pressure reaches 1/50th of its critical value, which approximately coincides with the normal boiling point) divided by the critical temperature has a value of roughly 5.2R, where R is the universal gas constant. For more complex fluids, such as strongly polar and ionic fluids, one must expect deviations from Guggenheim's rule. Such a deviation has far-reaching consequences for other empirical rules related to the vaporization of fluids, namely Guldberg's rule and Trouton's rule. We evaluate these characteristic quantities for simple fluids, polar fluids, hydrogen-bonding fluids, simple inorganic molten salts, and room temperature ionic liquids (RTILs). For the ionic fluids, the critical parameters are not accessible to direct experimental observation; therefore, suitable extrapolation schemes have to be applied. For the RTILs [1-n-alkyl-3-methylimidazolium bis(trifluoromethylsulfonyl)imides, where the alkyl chain is ethyl, butyl, hexyl, or octyl], the critical temperature is estimated by extrapolating the surface tension to zero using Guggenheim's and Eötvös' rules; the critical density is obtained using the linear-diameter rule. It is shown that the RTILs adhere to Guggenheim's master curve for the reduced surface tension of simple and moderately polar fluids, but that they deviate significantly from his rule for the reduced enthalpy of vaporization of simple fluids. Consequences for evaluating the Trouton constant of RTILs, the value of which has been discussed controversially in the literature, are indicated.
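
    Guggenheim's rule as stated is a one-line estimate; checking it for argon (critical temperature from standard tables) lands close to the experimental enthalpy of vaporization of roughly 6.4 kJ/mol:

      # Guggenheim's corresponding-states rule: Delta_vap H ~ 5.2 * R * T_c,
      # evaluated for argon as a sanity check (T_c is a literature value).
      R = 8.314      # J/(mol K)
      T_c = 150.7    # K, argon
      dH_vap = 5.2 * R * T_c
      print(f"predicted Delta_vap H(Ar) = {dH_vap / 1000:.2f} kJ/mol")  # ~6.52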

  14. Restoring Low Sidelobe Antenna Patterns with Failed Elements in a Phased Array Antenna

    DTIC Science & Technology

    2016-02-01

    Optimum low sidelobes are demonstrated in several examples. Index Terms — array signal processing, beams, linear algebra, phased arrays, shaped… represented by a linear combination of low sidelobe beamformers with no failed elements in a neighborhood around … under the constraint that the linear … would expect that linear combinations of them in a neighborhood around … would also have low sidelobes. The algorithms in this paper exploit this

  15. Gene Ontology synonym generation rules lead to increased performance in biomedical concept recognition.

    PubMed

    Funk, Christopher S; Cohen, K Bretonnel; Hunter, Lawrence E; Verspoor, Karin M

    2016-09-09

    Gene Ontology (GO) terms represent the standard for annotation and representation of molecular functions, biological processes and cellular compartments, but a large gap exists between the way concepts are represented in the ontology and how they are expressed in natural language text. The construction of highly specific GO terms is formulaic, consisting of parts and pieces from more simple terms. We present two different types of manually generated rules to help capture the variation of how GO terms can appear in natural language text. The first set of rules takes into account the compositional nature of GO and recursively decomposes the terms into their smallest constituent parts. The second set of rules generates derivational variations of these smaller terms and compositionally combines all generated variants to form the original term. By applying both types of rules, new synonyms are generated for two-thirds of all GO terms and an increase in F-measure performance for recognition of GO on the CRAFT corpus from 0.498 to 0.636 is observed. Additionally, we evaluated the combination of both types of rules over one million full text documents from Elsevier; manual validation and error analysis show we are able to recognize GO concepts with reasonable accuracy (88 %) based on random sampling of annotations. In this work we present a set of simple synonym generation rules that utilize the highly compositional and formulaic nature of the Gene Ontology concepts. We illustrate how the generated synonyms aid in improving recognition of GO concepts on two different biomedical corpora. We discuss other applications of our rules for GO ontology quality assurance, explore the issue of overgeneration, and provide examples of how similar methodologies could be applied to other biomedical terminologies. Additionally, we provide all generated synonyms for use by the text-mining community.

  16. The generation of arbitrary order, non-classical, Gauss-type quadrature for transport applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spence, Peter J., E-mail: peter.spence@awe.co.uk

    A method is presented, based upon the Stieltjes method (1884), for the determination of non-classical Gauss-type quadrature rules, and the associated sets of abscissae and weights. The method is then used to generate a number of quadrature sets, to arbitrary order, which are primarily aimed at deterministic transport calculations. The quadrature rules and sets detailed include arbitrary order reproductions of those presented by Abu-Shumays in [4,8] (known as the QR sets, but labelled QRA here), in addition to a number of new rules and associated sets; these are generated in a similar way, and we label them the QRS quadrature sets. The method presented here shifts the inherent difficulty (encountered by Abu-Shumays) associated with solving the non-linear moment equations, particular to the required quadrature rule, to one of the determination of non-classical weight functions and the subsequent calculation of various associated inner products. Once a quadrature rule has been written in a standard form, with an associated weight function having been identified, the calculation of the required inner products is achieved using specific variable transformations, in addition to the use of rapid, highly accurate quadrature suited to this purpose. The associated non-classical Gauss quadrature sets can then be determined, and this can be done to any order very rapidly. In this paper, instead of listing weights and abscissae for the different quadrature sets detailed (of which there are a number), the MATLAB code written to generate them is included as Appendix D. The accuracy and efficacy (in a transport setting) of the quadrature sets presented is not tested in this paper (although the accuracy of the QRA quadrature sets has been studied in [12,13]), but comparisons to tabulated results listed in [8] are made. When comparisons are made with one of the azimuthal QRA sets detailed in [8], the inherent difficulty in the method of generation, used there, becomes apparent, with the highest order tabulated sets showing unexpected anomalies. Although not in an actual transport setting, the accuracy of the sets presented here is assessed to some extent, by using them to approximate integrals (over an octant of the unit sphere) of various high order spherical harmonics. When this is done, errors in the tabulated QRA sets present themselves at the highest tabulated orders, whilst combinations of the new QRS quadrature sets offer some improvements in accuracy over the original QRA sets. Finally, in order to offer a quick, visual understanding of the various quadrature sets presented, when combined to give product sets for the purposes of integrating functions confined to the surface of a sphere, three-dimensional representations of points located on an octant of the unit sphere (as in [8,12]) are shown.
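
    Not the paper's MATLAB code, but a sketch of the standard final step from recurrence coefficients to a Gauss rule (the Golub-Welsch eigenvalue method), checked on the classical Legendre weight; a Stieltjes-type procedure for a non-classical weight would simply supply different coefficients:

      import numpy as np

      def gauss_from_recurrence(alpha, beta, mu0):
          """Nodes/weights from monic three-term recurrence coefficients;
          mu0 is the integral of the weight function."""
          J = (np.diag(alpha) + np.diag(np.sqrt(beta[1:]), 1)
               + np.diag(np.sqrt(beta[1:]), -1))    # symmetric Jacobi matrix
          nodes, vecs = np.linalg.eigh(J)
          weights = mu0 * vecs[0, :] ** 2           # first-row components
          return nodes, weights

      n = 5
      k = np.arange(1, n)
      alpha = np.zeros(n)                           # Legendre: alpha_k = 0
      beta = np.concatenate(([2.0], k**2 / (4.0 * k**2 - 1.0)))
      x, w = gauss_from_recurrence(alpha, beta, mu0=2.0)
      print(np.sum(w * x**4), 2.0 / 5.0)            # exact through degree 9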

  17. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
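
    A sketch of the objective these combination methods maximize: the empirical (Mann-Whitney) AUC of a linear score, explored here by a coarse grid over direction for two synthetic markers. The leave-one-pair-out scheme the article advocates would wrap this search in cross-validation rather than re-using the training pairs:

      import numpy as np

      def empirical_auc(cases, controls):
          """Mann-Whitney estimate of P(score_case > score_control)."""
          diff = cases[:, None] - controls[None, :]
          return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

      rng = np.random.default_rng(2)
      X1 = rng.normal([1.0, 0.5], 1.0, size=(100, 2))   # cases, two markers
      X0 = rng.normal([0.0, 0.0], 1.0, size=(100, 2))   # controls

      best = (None, 0.0)
      for a in np.linspace(0.0, np.pi, 181):            # combination direction
          w = np.array([np.cos(a), np.sin(a)])
          auc = empirical_auc(X1 @ w, X0 @ w)
          if auc > best[1]:
              best = (a, auc)
      print(f"best angle = {best[0]:.2f} rad, AUC = {best[1]:.3f}")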

  18. Measuring uncertainty by extracting fuzzy rules using rough sets

    NASA Technical Reports Server (NTRS)

    Worm, Jeffrey A.

    1991-01-01

    Despite the advancements in the computer industry in the past 30 years, there is still one major deficiency. Computers are not designed to handle terms where uncertainty is present. To deal with uncertainty, techniques other than classical logic must be developed. The methods of statistical analysis, Dempster-Shafer theory, rough set theory, and fuzzy set theory are examined as candidate solutions. The fundamentals of these theories are combined to possibly provide the optimal solution. By incorporating principles from these theories, a decision-making process may be simulated by extracting two sets of fuzzy rules: certain rules and possible rules. From these rules, a corresponding measure of how strongly each rule is believed is constructed. From this, the degree to which a fuzzy diagnosis is definable in terms of a set of fuzzy attributes is studied.

  19. 78 FR 79714 - Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-31

    ... (17 CFR 230.135) and Rule 165 (17 CFR 230.165) in connection with business combination transactions... tender offers, mergers and other business combination transactions on a more timely basis, so long as the...

  20. 75 FR 62898 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-13

    ... CFR 230.165) in connection with business combination transactions. The purpose of the rule is to... business combination transactions on a more timely basis, so long as the written communications are filed...

  1. Technology Focus: Enhancing Conceptual Knowledge of Linear Programming with a Flash Tool

    ERIC Educational Resources Information Center

    Garofalo, Joe; Cory, Beth

    2007-01-01

    Mathematical knowledge can be categorized in different ways. One commonly used way is to distinguish between procedural mathematical knowledge and conceptual mathematical knowledge. Procedural knowledge of mathematics refers to formal language, symbols, algorithms, and rules. Conceptual knowledge is essential for meaningful understanding of…

  2. LinguisticBelief: a java application for linguistic evaluation using belief, fuzzy sets, and approximate reasoning.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darby, John L.

    LinguisticBelief is a Java computer code that evaluates combinations of linguistic variables using an approximate reasoning rule base. Each variable is composed of fuzzy sets, and a rule base describes the reasoning on combinations of the variables' fuzzy sets. Uncertainty is considered and propagated through the rule base using the belief/plausibility measure. The mathematics of fuzzy sets, approximate reasoning, and belief/plausibility are complex. Without an automated tool, this complexity precludes their application to all but the simplest of problems. LinguisticBelief automates the use of these techniques, allowing complex problems to be evaluated easily. LinguisticBelief can be used free of charge on any Windows XP machine. This report documents the use and structure of the LinguisticBelief code, and the deployment package for installation on client machines.

  3. A comparison of methods for DPLL loop filter design

    NASA Technical Reports Server (NTRS)

    Aguirre, S.; Hurd, W. J.; Kumar, R.; Statman, J.

    1986-01-01

    Four design methodologies for loop filters for a class of digital phase-locked loops (DPLLs) are presented. The first design maps an optimum analog filter into the digital domain; the second approach designs a filter that minimizes, in discrete time, a weighted combination of the variance of the phase error due to noise and the sum square of the deterministic phase error component; the third method uses Kalman filter estimation theory to design a filter composed of a least squares fading memory estimator and a predictor. The last design relies on classical theory, including rules for the design of compensators. Linear analysis is used throughout the article to compare different designs, and includes stability, steady state performance and transient behavior of the loops. Design methodology is not critical when the loop update rate can be made high relative to the loop bandwidth, as the performance approaches that of continuous time. For low update rates, however, the minimization method is significantly superior to the other methods.

  4. The electrostatic persistence length of polymers beyond the OSF limit.

    PubMed

    Everaers, R; Milchev, A; Yamakov, V

    2002-05-01

    We use large-scale Monte Carlo simulations to test scaling theories for the electrostatic persistence length l_e of isolated, uniformly charged polymers with Debye-Hückel intrachain interactions in the limit where the screening length κ^(-1) exceeds the intrinsic persistence length of the chains. Our simulations cover a significantly larger part of the parameter space than previous studies. We observe no significant deviations from the prediction l_e ∝ κ^(-2) by Khokhlov and Khachaturian, which is based on applying the Odijk-Skolnick-Fixman theories of electrostatic bending rigidity and electrostatically excluded volume to the stretched de Gennes-Pincus-Velasco-Brochard polyelectrolyte blob chain. A linear or sublinear dependence of the persistence length on the screening length can be ruled out. We show that previous results pointing in this direction are due to a combination of excluded-volume and finite chain length effects. The paper emphasizes the role of scaling arguments in the development of useful representations for experimental and simulation data.

  5. Four classes of interactions for evolutionary games.

    PubMed

    Szabó, György; Bodó, Kinga S; Allen, Benjamin; Nowak, Martin A

    2015-08-01

    The symmetric four-strategy games are decomposed into a linear combination of 16 basis games represented by orthogonal matrices. Among these basis games, four classes can be distinguished, as was already found for the three-strategy games. The games with self-dependent (cross-dependent) payoffs are characterized by matrices consisting of uniform rows (columns). Six of the 16 basis games describe coordination-type interactions among the strategy pairs, and three basis games span the parameter space of the cyclic components that are analogous to the rock-paper-scissors games. In the absence of cyclic components the game is a potential game and the potential matrix is evaluated. The main features of the four classes of games are discussed separately, and we illustrate some characteristic strategy distributions on a square lattice in the low noise limit when the logit rule controls the strategy evolution. Analysis of the general properties indicates similar types of interactions at larger numbers of strategies for the symmetric matrix games.

  6. RVB signatures in the spin dynamics of the square-lattice Heisenberg antiferromagnet

    NASA Astrophysics Data System (ADS)

    Ghioldi, E. A.; Gonzalez, M. G.; Manuel, L. O.; Trumper, A. E.

    2016-03-01

    We investigate the spin dynamics of the square-lattice spin-1/2 Heisenberg antiferromagnet by means of an improved mean-field Schwinger boson calculation. By identifying both the long-range Néel and the RVB-like components of the ground state, we propose an educated guess for the mean-field magnetic excitation, consisting of a linear combination of local and bond spin flips, to compute the dynamical structure factor. Our main result is that when this magnetic excitation is optimized in such a way that the corresponding sum rule is fulfilled, we recover the low- and high-energy spectral weight features of the experimental spectrum. In particular, the anomalous spectral weight depletion at (π,0) found in recent inelastic neutron scattering experiments can be attributed to the interference of the triplet bond excitations of the RVB component of the ground state. We conclude that the Schwinger boson theory seems to be a good candidate to adequately interpret the dynamic properties of the square-lattice Heisenberg antiferromagnet.

  7. Damage accumulation of bovine bone under variable amplitude loads.

    PubMed

    Campbell, Abbey M; Cler, Michelle L; Skurla, Carolyn P; Kuehl, Joseph J

    2016-12-01

    Stress fractures, a painful injury, are caused by excessive fatigue in bone. This study on damage accumulation in bone sought to determine if the Palmgren-Miner rule (PMR), a well-known linear damage accumulation hypothesis, is predictive of fatigue failure in bone. An electromagnetic shaker apparatus was constructed to conduct cyclic and variable amplitude tests on bovine bone specimens. Three distinct damage regimes were observed following fracture. Fractures appeared ductile under low cyclic amplitude loading (< 4000 με), brittle under high cyclic amplitude loading (> 9000 με), and a combination of ductile and brittle under mid-range cyclic amplitude loading (6500-6750 με). Brittle and ductile fracture mechanisms were isolated and mixed, in a controlled way, into variable amplitude loading tests. PMR predictions of cycles to failure consistently over-predicted fatigue life when mixing isolated fracture mechanisms. However, PMR was not proven ineffective when used with a single damage mechanism.

  8. Rate adaptive multilevel coded modulation with high coding gain in intensity modulation direct detection optical communication

    NASA Astrophysics Data System (ADS)

    Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao

    2018-02-01

    A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coded modulation with a binary linear block code at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate adaptations without significantly increasing the computing time. The coding gain is increased by 7.3 dB at BER=1E-3.

  9. Identification of optimal feedback control rules from micro-quadrotor and insect flight trajectories.

    PubMed

    Faruque, Imraan A; Muijres, Florian T; Macfarlane, Kenneth M; Kehlenbeck, Andrew; Humbert, J Sean

    2018-06-01

    This paper presents "optimal identification," a framework for using experimental data to identify the optimality conditions associated with the feedback control law implemented in the measurements. The technique compares closed loop trajectory measurements against a reduced order model of the open loop dynamics, and uses linear matrix inequalities to solve an inverse optimal control problem as a convex optimization that estimates the controller optimality conditions. In this study, the optimal identification technique is applied to two examples, that of a millimeter-scale micro-quadrotor with an engineered controller on board, and the example of a population of freely flying Drosophila hydei maneuvering about forward flight. The micro-quadrotor results show that the performance indices used to design an optimal flight control law for a micro-quadrotor may be recovered from the closed loop simulated flight trajectories, and the Drosophila results indicate that the combined effect of the insect longitudinal flight control sensing and feedback acts principally to regulate pitch rate.

  10. The Interferometric Measurement of Phase Mismatch in Potential Second Harmonic Generators.

    NASA Astrophysics Data System (ADS)

    Sinofsky, Edward Lawrence

    This dissertation combines aspects of lasers, nonlinear optics and interferometry to measure the linear optical properties involved in phase matched second harmonic generation (SHG). A new measuring technique has been developed to rapidly analyze the phase matching performance of potential SHGs. The data taken are in the form of interferograms produced by the self-referencing nonlinear Fizeau interferometer (NLF), which correctly predicts when phase matched SHG will occur in the sample wedge. Data extracted from the interferograms produced by the NLF allow us to predict both phase matching temperatures for noncritically phase matchable crystals and crystal orientation for angle tuned crystals. Phase matching measurements can be made for both Type I and Type II configurations. Phase mismatch measurements were made at the fundamental wavelength of 1.32 μm for calcite, lithium niobate, and gadolinium molybdate (GMO). Similar measurements were made at 1.06 μm for calcite. Phase matched SHG was demonstrated in calcite, lithium niobate and KTP, while phase matching by temperature tuning is ruled out for GMO.

  11. Investigation on electromechanical properties of a muscle-like linear actuator fabricated by bi-film ionic polymer metal composites

    NASA Astrophysics Data System (ADS)

    Sun, Zhuangzhi; Zhao, Gang; Qiao, Dongpan; Song, Wenlong

    2017-12-01

    Artificial muscles have attracted great attention for their potential in intelligent robots, biomimetic devices, and micro-electromechanical systems. However, many performance bottlenecks restrict their use in engineering applications, e.g., small blocking force and short working life. Motivated by the demand for larger output force and the scarcity of linear-motion designs, an innovative muscle-like linear actuator based on two segmented IPMC strips was developed to imitate the linear motion of artificial muscles. The structure of the segmented IPMC strip was designed, and a mathematical model was established to determine the appropriate segmentation proportion of 1:2:1. The actuator, with two segmented IPMC strips assembled by two supporting link blocks, was manufactured for the study of its electromechanical properties. These properties were measured experimentally under different fabrication and operating factors, and the corresponding trends were analyzed. Results showed that redistributed resistance and surface strain at both end sides were the two main factors behind the different electromechanical properties observed.

  12. Evolving fuzzy rules in a learning classifier system

    NASA Technical Reports Server (NTRS)

    Valenzuela-Rendon, Manuel

    1993-01-01

    The fuzzy classifier system (FCS) combines the ideas of fuzzy logic controllers (FLC's) and learning classifier systems (LCS's). It brings together the expressive powers of fuzzy logic as it has been applied in fuzzy controllers to express relations between continuous variables, and the ability of LCS's to evolve co-adapted sets of rules. The goal of the FCS is to develop a rule-based system capable of learning in a reinforcement regime, and that can potentially be used for process control.

  13. Determining a human cardiac pacemaker using fuzzy logic

    NASA Astrophysics Data System (ADS)

    Varnavsky, A. N.; Antonenco, A. V.

    2017-01-01

    The paper presents a method for identifying a human cardiac pacemaker using the combined application of a nonlinear integral transformation and fuzzy logic, which allows the analysis to be carried out in real time. A fuzzy inference system is proposed, and membership functions and fuzzy production rules are defined. It was shown that the ratio of the truth degree of the winning rule's condition to that of any other rule's condition is at least 3.

  14. Convergence of electromagnetic field components across discontinuous permittivity profiles: comment.

    PubMed

    Li, Lifeng

    2002-07-01

    The inverse rule that is described in a recent paper [J. Opt. Soc. Am. A 17, 491 (2000)] is not a multiplication rule for multiplying two infinite series, because it does not address how the terms of two series being multiplied are combined to form the product series. Furthermore, it is not the one that is being used in numerical practice. Therefore the insight that the paper provides into why the inverse rule yields correct results at the points of complementary discontinuities is questionable.

  15. Dipole polarizability, sum rules, mean excitation energies, and long-range dispersion coefficients for buckminsterfullerene C60

    NASA Astrophysics Data System (ADS)

    Kumar, Ashok; Thakkar, Ajit J.

    2011-11-01

    Experimental photoabsorption cross-sections combined with constraints provided by the Kuhn-Reiche-Thomas sum rule and the high-energy behavior of the dipole-oscillator-strength density are used to construct dipole oscillator strength distributions for buckminsterfullerene (C60). The distributions are used to predict dipole sum rules Sk, mean excitation energies Ik, the frequency dependent polarizability, and C6 coefficients for the long-range dipole-dipole interactions of C60 with a variety of atoms and molecules.
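
    For readers unfamiliar with the notation: the dipole sum rules are moments S_k = sum_i f_i E_i^k of the oscillator strength distribution, and the Kuhn-Reiche-Thomas constraint pins S_0 to the electron count (360 for C60). A minimal Python sketch with purely illustrative energies and strengths, not the paper's actual distribution:

    ```python
    import numpy as np

    # Toy discretized dipole oscillator strength distribution (illustrative only)
    E = np.array([6.0, 17.0, 30.0, 80.0])      # excitation energies, eV
    f = np.array([120.0, 180.0, 50.0, 10.0])   # oscillator strengths

    # Moments S_k = sum_i f_i * E_i**k of the distribution
    for k in (-2, -1, 0):
        print(k, np.sum(f * E**k))

    # S_0 recovers the electron count (Kuhn-Reiche-Thomas: S_0 = 360 for C60)
    ```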

  16. 76 FR 18325 - Federal Travel Regulation; FTR Cases 2007-304 and 2003-309, Relocation Allowances

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-01

    ...The General Services Administration (GSA), Office of Governmentwide Policy (OGP) continually reviews and adjusts policies as part of its ongoing mission to provide policy assistance to Government agencies subject to the Federal Travel Regulation (FTR). This final rule is a combination of two previous proposed rules that were published in the Federal Register on November 23, 2004 and August 3, 2007. The result is a unified, single final rule that addresses a wide range of relocation issues.

  17. 75 FR 54196 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-03

    ... Combination Transactions by Investment Companies and Business Development Companies. Form N-14 is used by... Act'') to be issued in business combination transactions specified in rule 145(a) under the Securities... business combination transactions. The Commission staff reviews registration statements on Form N-14 for...

  18. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and, in general, the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
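
    The headline result, that simple averaging of N unbiased classifiers shrinks the variance of the effective decision boundary, and hence the added error, by a factor of N when errors are uncorrelated, is easy to verify numerically. A minimal sketch, assuming independent Gaussian boundary errors:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    b_true = 0.0       # Bayes-optimal boundary (illustrative)
    sigma = 0.5        # std. dev. of each classifier's boundary estimate
    trials = 10_000

    for N in (1, 5, 25):
        # each classifier's boundary = truth + independent zero-mean error
        boundaries = b_true + sigma * rng.standard_normal((trials, N))
        averaged = boundaries.mean(axis=1)
        print(N, averaged.var())   # variance shrinks roughly as sigma**2 / N
    ```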

  19. Transmission of singularities through a shock wave and the sound generation

    NASA Technical Reports Server (NTRS)

    Ting, L.

    1974-01-01

    The interaction of a plane shock wave of finite strength with a vortex line, point vortex, doublet or quadrupole of weak strength is studied. Based upon the physical condition that a free vortex line cannot support a pressure difference, rules are established which define the change of the linear intensity of the segment of the vortex line after its passage through the shock. The rules for point vortex, doublet, and quadrupole are then established as limiting cases. These rules can be useful for the construction of the solution of the entire flow field and for its physical interpretation. However, the solution can be obtained directly by the technique developed for shock diffraction problems. Explicit solutions and the associated sound generation are obtained for the passage of a point vortex through the shock wave.

  20. A 3/D finite element approach for metal matrix composites based on micromechanical models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svobodnik, A.J.; Boehm, H.J.; Rammerstorfer, F.G.

    Based on analytical considerations by Dvorak and Bahei-El-Din, a 3/D finite element material law has been developed for the elastic-plastic analysis of unidirectional fiber-reinforced metal matrix composites. The material law described in this paper has been implemented in the finite element code ABAQUS via the user subroutine UMAT. A constitutive law is described under the assumption that the fibers are linear-elastic and the matrix is of a von Mises-type with a Prager-Ziegler kinematic hardening rule. The uniaxial effective stress-strain relationship of the matrix in the plastic range is approximated by a Ramberg-Osgood law, a linear hardening rule or a nonhardening rule. Initial yield surfaces of the matrix material and of the fiber-reinforced composite are compared to show the effect of reinforcement. Implementation of this material law in a finite element program is shown. Furthermore, the efficiency of substepping schemes and stress corrections for the numerical integration of the elastic-plastic stress-strain relations for anisotropic materials is investigated. The results of uniaxial monotonic tests of a boron/aluminum composite are compared to some finite element analyses based on micromechanical considerations. Furthermore, a complete 3/D analysis of a tensile test specimen made of a silicon-carbide/aluminum MMC and the analysis of an MMC inlet inserted in a homogeneous material are shown. 12 refs.

  1. Estimating enthalpy of vaporization from vapor pressure using Trouton's rule.

    PubMed

    MacLeod, Matthew; Scheringer, Martin; Hungerbühler, Konrad

    2007-04-15

    The enthalpy of vaporization of liquids and subcooled liquids at 298 K (delta H(VAP)) is an important parameter in environmental fate assessments that consider spatial and temporal variability in environmental conditions. It has been shown that delta H(VAP) for non-hydrogen-bonding substances can be estimated from vapor pressure at 298 K (P(L)) using an empirically derived linear relationship. Here, we demonstrate that the relationship between delta H(VAP) and P(L) is consistent with Trouton's rule and the Clausius-Clapeyron equation under the assumption that delta H(VAP) is linearly dependent on temperature between 298 K and the boiling point temperature. Our interpretation based on Trouton's rule substantiates the empirical relationship between delta H(VAP) and P(L) for non-hydrogen-bonding chemicals with subcooled liquid vapor pressures ranging over 15 orders of magnitude. We apply the relationship between delta H(VAP) and P(L) to evaluate data reported in literature reviews for several important classes of semivolatile environmental contaminants, including polycyclic aromatic hydrocarbons, chlorobenzenes, polychlorinated biphenyls and polychlorinated dibenzo-dioxins and -furans, and illustrate the temperature dependence of results from a multimedia model presented as a partitioning map. The uncertainty associated with estimating delta H(VAP) from P(L) using this relationship is acceptable for most environmental fate modeling of non-hydrogen-bonding semivolatile organic chemicals.
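
    To convey the flavor of the estimation chain, the sketch below couples Trouton's rule with the Clausius-Clapeyron equation under the cruder assumption of a temperature-independent delta H(VAP) (the paper instead assumes a linear temperature dependence, so the numbers here are indicative only):

    ```python
    import math

    R = 8.314           # J/(mol K)
    P_ATM = 101_325.0   # Pa
    DS_TROUTON = 87.0   # J/(mol K), Trouton's entropy of vaporization

    def dhvap_from_pl(p_l_298, lo=300.0, hi=2000.0):
        """Estimate delta H(VAP) at 298 K, in kJ/mol, from the liquid vapor
        pressure at 298 K (Pa), assuming temperature-independent delta H(VAP)."""
        def f(tb):
            # Clausius-Clapeyron: ln(P_atm / P_L) = (dH/R) * (1/298 - 1/T_b)
            dh = DS_TROUTON * tb          # Trouton: dH = dS * T_b
            return math.log(P_ATM / p_l_298) - dh / R * (1 / 298.15 - 1 / tb)
        while hi - lo > 1e-6:             # bisection for the boiling point T_b
            mid = 0.5 * (lo + hi)
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return DS_TROUTON * lo / 1000.0

    print(dhvap_from_pl(1.0))   # e.g. a semivolatile chemical with P_L = 1 Pa
    ```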

  2. 77 FR 43407 - Self-Regulatory Organizations; The Options Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-24

    ...-Laws and Rules to security futures on index-linked securities such as exchange-traded notes, which are currently traded on OneChicago, LLC. Index-linked securities are non-convertible debt of a major financial... futures contracts, one or more physical commodities, currencies or debt securities, or a combination of...

  3. 26 CFR 1.79-1 - Group-term life insurance-general rules.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... precludes individual selection. (b) May group-term life insurance be combined with other benefits? No part... that does not provide general death benefits, such as travel insurance or accident and health insurance... 26 Internal Revenue 2 2011-04-01 2011-04-01 false Group-term life insurance-general rules. 1.79-1...

  4. 76 FR 54095 - Pilot in Command Proficiency Check and Other Changes to the Pilot and Pilot School Certification...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-31

    ... Command Proficiency Check and Other Changes to the Pilot and Pilot School Certification Rules AGENCY... regulations concerning pilot, flight instructor, and pilot school certification. This rule will require pilot... and permits pilot schools and provisional pilot schools to apply for a combined private pilot...

  5. Estimating Classification Accuracy for Complex Decision Rules Based on Multiple Scores

    ERIC Educational Resources Information Center

    Douglas, Karen M.; Mislevy, Robert J.

    2010-01-01

    Important decisions about students are made by combining multiple measures using complex decision rules. Although methods for characterizing the accuracy of decisions based on a single measure have been suggested by numerous researchers, such methods are not useful for estimating the accuracy of decisions based on multiple measures. This study…

  6. On the rules of integration of crowded orientation signals

    PubMed Central

    Põder, Endel

    2012-01-01

    Crowding is related to an integration of feature signals over an inappropriately large area in the visual periphery. The rules of this integration are still not well understood. This study attempts to understand how the orientation signals from the target and flankers are combined. A target Gabor, together with 2, 4, or 6 flanking Gabors, was briefly presented in a peripheral location (4° eccentricity). The observer's task was to identify the orientation of the target (eight-alternative forced-choice). Performance was found to be nonmonotonically dependent on the target–flanker orientation difference (a drop at intermediate differences). For small target–flanker differences, a strong assimilation bias was observed. An effect of the number of flankers was found for heterogeneous flankers only. It appears that different rules of integration are used, dependent on some salient aspects (target pop-out, homogeneity–heterogeneity) of the stimulus pattern. The strategy of combining simple rules may be explained by the goal of the visual system to encode potentially important aspects of a stimulus with limited processing resources and using statistical regularities of the natural visual environment. PMID:23145295

  7. On the rules of integration of crowded orientation signals.

    PubMed

    Põder, Endel

    2012-01-01

    Crowding is related to an integration of feature signals over an inappropriately large area in the visual periphery. The rules of this integration are still not well understood. This study attempts to understand how the orientation signals from the target and flankers are combined. A target Gabor, together with 2, 4, or 6 flanking Gabors, was briefly presented in a peripheral location (4° eccentricity). The observer's task was to identify the orientation of the target (eight-alternative forced-choice). Performance was found to be nonmonotonically dependent on the target-flanker orientation difference (a drop at intermediate differences). For small target-flanker differences, a strong assimilation bias was observed. An effect of the number of flankers was found for heterogeneous flankers only. It appears that different rules of integration are used, dependent on some salient aspects (target pop-out, homogeneity-heterogeneity) of the stimulus pattern. The strategy of combining simple rules may be explained by the goal of the visual system to encode potentially important aspects of a stimulus with limited processing resources and using statistical regularities of the natural visual environment.

  8. 75 FR 68846 - Self-Regulatory Organizations; The NASDAQ Stock Market LLC; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-09

    ... business combination for which the Company must file and furnish a proxy or information statement subject... combination is being considered. If a shareholder vote on the business combination is held, [(e) Until the... working capital purposes) if the business combination is approved and consummated. A Company may establish...

  9. Organizational Knowledge Transfer Using Ontologies and a Rule-Based System

    NASA Astrophysics Data System (ADS)

    Okabe, Masao; Yoshioka, Akiko; Kobayashi, Keido; Yamaguchi, Takahira

    In recent automated and integrated manufacturing, so-called intelligence skill is becoming more and more important and its efficient transfer to next-generation engineers is one of the urgent issues. In this paper, we propose a new approach without costly OJT (on-the-job training), that is, combinational usage of a domain ontology, a rule ontology and a rule-based system. Intelligence skill can be decomposed into pieces of simple engineering rules. A rule ontology consists of these engineering rules as primitives and the semantic relations among them. A domain ontology consists of technical terms in the engineering rules and the semantic relations among them. A rule ontology helps novices get the total picture of the intelligence skill and a domain ontology helps them understand the exact meanings of the engineering rules. A rule-based system helps domain experts externalize their tacit intelligence skill to ontologies and also helps novices internalize them. As a case study, we applied our proposal to some actual job at a remote control and maintenance office of hydroelectric power stations in Tokyo Electric Power Co., Inc. We also did an evaluation experiment for this case study and the result supports our proposal.

  10. The development of display rule knowledge: linkages with family expressiveness and social competence.

    PubMed

    Jones, D C; Abbey, B B; Cumberland, A

    1998-08-01

    The development of display rule knowledge and its associations with family expressiveness (Study 1) and peer competence (Study 2) were investigated among elementary school children. In Study 1, the display rule knowledge of 121 kindergartners and third graders was assessed using validated hypothetical scenarios. There were significant grade differences in display rule knowledge such that third graders compared to kindergartners more frequently combined expression regulation with prosocial reasoning, norm-maintenance, and self-protective motives. Maternal reports of family emotional climates indicated that aspects of negative expressiveness were related positively to self-protective display rules and negatively to prosocial display rules. Study 2 included 93 third and fifth graders who reported on their display rule knowledge and on their emotional reactions and strategies to resolve peer conflict. Classmates and teachers provided ratings on social competence. Age differences for display rule knowledge were not documented, but prosocial display rules were most consistently related to hypothetical peer conflict responses and social competence. The findings confirm that display rule knowledge is related in consistent and systematic ways to what children learn within the family emotional context, how they propose to resolve peer conflict, and how they are perceived by peers and teachers.

  11. Solving ay'' + by' + cy = 0 with a Simple Product Rule Approach

    ERIC Educational Resources Information Center

    Tolle, John

    2011-01-01

    When elementary ordinary differential equations (ODEs) of first and second order are included in the calculus curriculum, second-order linear constant coefficient ODEs are typically solved by a method more appropriate to differential equations courses. This method involves the characteristic equation and its roots, complex-valued solutions, and…

  12. 76 FR 72134 - Annual Charges for Use of Government Lands

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-22

    ... revise the methodology used to compute these annual charges. Under the proposed rule, the Commission would create a fee schedule based on the U.S. Bureau of Land Management's (BLM) methodology for calculating rental rates for linear rights of way. This methodology includes a land value per acre, an...

  13. Ten-Year-Old Students Solving Linear Equations

    ERIC Educational Resources Information Center

    Brizuela, Barbara; Schliemann, Analucia

    2004-01-01

    In this article, the authors seek to re-conceptualize the perspective regarding students' difficulties with algebra. While acknowledging that students "do" have difficulties when learning algebra, they also argue that the generally espoused criteria for algebra as the ability to work with the syntactical rules for solving equations is…

  14. Inference in fuzzy rule bases with conflicting evidence

    NASA Technical Reports Server (NTRS)

    Koczy, Laszlo T.

    1992-01-01

    Inference based on fuzzy 'If ... then' rules has played a very important role since Zadeh proposed the Compositional Rule of Inference and, especially, since the first successful application presented by Mamdani. From the mid-1980's, when the 'fuzzy boom' started in Japan, numerous industrial applications appeared, all using simplified techniques because of the high levels of computational complexity. Another feature is that antecedents in the rules are distributed densely in the input space, so the conclusion can be calculated by some weighted combination of the consequents of the matching (fired) rules. The CRI works in the following way: if R is a rule and A* is an observation, the conclusion is computed by B* = R o A* (o stands for the max-min composition). Algorithms implementing this idea directly have an exponential time complexity (the problem may be NP-hard), as the rules are relations in X x Y, a (k1 x k2)-dimensional space, if X is k1-dimensional and Y is k2-dimensional. The simplified techniques usually decompose the relation into k1 projections in X(sub i) and measure in some way the degree of similarity between observation and antecedent by some parameter of the overlapping. These parameters are aggregated to a single value in (0,1) which is applied as a resulting weight for the given rule. The projections of rules in dimensions Y(sub i) are weighted by these aggregated values and then combined in order to obtain a resulting conclusion separately in every dimension. This method is inapplicable with sparse bases, as there is no guarantee that an arbitrary observation matches any of the antecedents. Then the degree of similarity is 0 and all consequents are weighted by 0. Some considerations for such a situation are summarized in the next sections.
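
    For discretized membership functions, the compositional rule of inference B* = R o A* is compact to write down. A minimal sketch assuming a Mamdani-style relation R(x, y) = min(A(x), B(y)):

    ```python
    import numpy as np

    A = np.array([0.1, 0.6, 1.0, 0.5])        # antecedent fuzzy set on X (k1 = 4)
    B = np.array([0.0, 0.7, 1.0])             # consequent fuzzy set on Y (k2 = 3)
    A_star = np.array([0.2, 0.8, 1.0, 0.4])   # observed fuzzy set on X

    R = np.minimum.outer(A, B)                # rule relation on X x Y

    # max-min composition: B*(y) = max_x min(R(x, y), A*(x))
    B_star = np.max(np.minimum(R, A_star[:, None]), axis=0)
    print(B_star)
    ```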

  15. Comparison of futility monitoring guidelines using completed phase III oncology trials.

    PubMed

    Zhang, Qiang; Freidlin, Boris; Korn, Edward L; Halabi, Susan; Mandrekar, Sumithra; Dignam, James J

    2017-02-01

    Futility (inefficacy) interim monitoring is an important component in the conduct of phase III clinical trials, especially in life-threatening diseases. Desirable futility monitoring guidelines allow timely stopping if the new therapy is harmful or if it is unlikely to prove sufficiently effective were the trial to continue to its final analysis. There are a number of analytical approaches that are used to construct futility monitoring boundaries. The most common approaches are based on conditional power, sequential testing of the alternative hypothesis, or sequential confidence intervals. The resulting futility boundaries vary considerably with respect to the level of evidence required for recommending stopping the study. We evaluate the performance of commonly used methods using event histories from completed phase III clinical trials of the Radiation Therapy Oncology Group, Cancer and Leukemia Group B, and North Central Cancer Treatment Group. We considered published superiority phase III trials with survival endpoints initiated after 1990. There are 52 studies available for this analysis from different disease sites. Total sample size and maximum number of events (statistical information) for each study were calculated using the protocol-specified effect size and type I and type II error rates. In addition to the common futility approaches, we considered a recently proposed linear inefficacy boundary approach with an early harm look followed by several lack-of-efficacy analyses. For each futility approach, interim test statistics were generated for three schedules with different analysis frequencies, and early stopping was recommended if the interim result crossed a futility stopping boundary. For trials not demonstrating superiority, the impact of each rule is summarized as savings on sample size, study duration, and information time scales. For negative studies, our results show that the futility approaches based on testing the alternative hypothesis and repeated confidence interval rules yielded smaller savings (compared to the other two rules). These boundaries are too conservative, especially during the first half of the study (<50% of information). The conditional power rules are too aggressive during the second half of the study (>50% of information) and may stop a trial even when there is a clinically meaningful treatment effect. The linear inefficacy boundary with three or more interim analyses provided the best results. For positive studies, we demonstrated that none of the futility rules would have stopped the trials. The linear inefficacy boundary futility approach is attractive from statistical, clinical, and logistical standpoints in clinical trials evaluating new anti-cancer agents.
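
    Of the boundary families compared, the conditional-power rule is the simplest to state. A generic sketch of the standard current-trend version (not the protocol-specific boundaries used in the trials):

    ```python
    from scipy.stats import norm

    def conditional_power(z_t, t, alpha=0.025):
        """Conditional power under the current trend at information fraction t.

        z_t   : interim standardized (log-rank) test statistic
        t     : information fraction, 0 < t < 1
        alpha : one-sided significance level of the final test
        """
        z_alpha = norm.ppf(1 - alpha)
        drift = z_t / t**0.5              # estimated drift of the Brownian motion
        return norm.cdf((drift - z_alpha) / (1 - t)**0.5)

    # A common (though not universal) futility rule stops if CP < 0.10-0.20
    print(conditional_power(z_t=0.5, t=0.5))   # ~0.04: weak interim trend
    ```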

  16. Multi-factorial analysis of class prediction error: estimating optimal number of biomarkers for various classification rules.

    PubMed

    Khondoker, Mizanur R; Bachmann, Till T; Mewissen, Muriel; Dickinson, Paul; Dobrzelecki, Bartosz; Campbell, Colin J; Mount, Andrew R; Walton, Anthony J; Crain, Jason; Schulze, Holger; Giraud, Gerard; Ross, Alan J; Ciani, Ilenia; Ember, Stuart W J; Tlili, Chaker; Terry, Jonathan G; Grant, Eilidh; McDonnell, Nicola; Ghazal, Peter

    2010-12-01

    Machine learning and statistical model based classifiers have increasingly been used with more complex and high dimensional biological data obtained from high-throughput technologies. Understanding the impact of various factors associated with large and complex microarray datasets on the predictive performance of classifiers is computationally intensive, under investigated, yet vital in determining the optimal number of biomarkers for various classification purposes aimed towards improved detection, diagnosis, and therapeutic monitoring of diseases. We investigate the impact of microarray based data characteristics on the predictive performance for various classification rules using simulation studies. Our investigation using Random Forest, Support Vector Machines, Linear Discriminant Analysis and k-Nearest Neighbour shows that the predictive performance of classifiers is strongly influenced by training set size, biological and technical variability, replication, fold change and correlation between biomarkers. Optimal number of biomarkers for a classification problem should therefore be estimated taking account of the impact of all these factors. A database of average generalization errors is built for various combinations of these factors. The database of generalization errors can be used for estimating the optimal number of biomarkers for given levels of predictive accuracy as a function of these factors. Examples show that curves from actual biological data resemble that of simulated data with corresponding levels of data characteristics. An R package optBiomarker implementing the method is freely available for academic use from the Comprehensive R Archive Network (http://www.cran.r-project.org/web/packages/optBiomarker/).
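
    A toy version of the kind of simulation the study describes, with two classes of expression profiles separated by a fold change, classified with k-nearest neighbours, and error tracked as the training set grows, might look as follows (all parameters illustrative, not those of the optBiomarker package):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def knn_error(n_train, n_markers=20, fold_change=1.0, n_test=500, k=5):
        """Test error of k-NN on two simulated classes separated by a fold change."""
        def sample(n):
            y = rng.integers(0, 2, n)
            x = (rng.standard_normal((n, n_markers))
                 + y[:, None] * fold_change / n_markers**0.5)
            return x, y
        xtr, ytr = sample(n_train)
        xte, yte = sample(n_test)
        d = ((xte[:, None, :] - xtr[None, :, :]) ** 2).sum(-1)   # squared distances
        votes = ytr[np.argsort(d, axis=1)[:, :k]].mean(axis=1) > 0.5
        return float((votes != yte).mean())

    for n in (20, 50, 200):    # error falls as the training set grows
        print(n, knn_error(n))
    ```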

  17. A Novel Feature Level Fusion for Heart Rate Variability Classification Using Correntropy and Cauchy-Schwarz Divergence.

    PubMed

    Goshvarpour, Ateke; Goshvarpour, Atefeh

    2018-04-30

    Heart rate variability (HRV) analysis has become a widely used tool for monitoring pathological and psychological states in medical applications. In a typical classification problem, information fusion is a process whereby the effective combination of the data can achieve a more accurate system. The purpose of this article was to provide an accurate algorithm for classifying HRV signals in various psychological states. Therefore, a novel feature level fusion approach was proposed. First, using the theory of information, two similarity indicators of the signal were extracted, including correntropy and Cauchy-Schwarz divergence. Applying a probabilistic neural network (PNN) and k-nearest neighbor (kNN), the performance of each index in the classification of meditators' and non-meditators' HRV signals was appraised. Then, three fusion rules, including division, product, and weighted sum rules, were used to combine the information of both similarity measures. For the first time, we propose an algorithm to define the weights of each feature based on the statistical p-values. The performance of HRV classification using combined features was compared with that of the non-combined features. Overall, an accuracy of 100% was obtained for discriminating all states. The results showed the strong ability of the division and weighted-sum rules to improve classifier accuracy.
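
    The weighted-sum rule itself is straightforward. Since the abstract does not reproduce the exact p-value-to-weight mapping, -log10(p) is used below as a plausible stand-in, so that a feature with stronger statistical evidence receives a larger weight:

    ```python
    import numpy as np

    def weighted_sum_fusion(f1, f2, p1, p2):
        """Fuse two similarity features by a weighted sum with p-value weights."""
        w1, w2 = -np.log10(p1), -np.log10(p2)   # assumed mapping, for illustration
        total = w1 + w2
        return (w1 * f1 + w2 * f2) / total

    # f1, f2: correntropy and Cauchy-Schwarz divergence for one HRV record
    print(weighted_sum_fusion(f1=0.62, f2=0.31, p1=1e-4, p2=1e-2))
    ```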

  18. Quantiles for Finite Mixtures of Normal Distributions

    ERIC Educational Resources Information Center

    Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.

    2006-01-01

    Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
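
    A finite mixture of normal densities has no closed-form quantile function, but quantiles are easily obtained by inverting the mixture CDF numerically. A minimal sketch (note the distinction the paper emphasizes: this is a mixture of normal densities, not a linear combination of normal random variables, which would itself be normal):

    ```python
    import numpy as np
    from scipy.stats import norm

    def mixture_quantile(q, weights, mus, sigmas, lo=-50.0, hi=50.0, tol=1e-10):
        """q-quantile of a finite normal mixture, by bisection on the mixture CDF."""
        cdf = lambda x: float(np.dot(weights, norm.cdf(x, loc=mus, scale=sigmas)))
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if cdf(mid) < q:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # median of an equal-weight mixture of N(0, 1) and N(3, 2^2)
    print(mixture_quantile(0.5, [0.5, 0.5], [0.0, 3.0], [1.0, 2.0]))
    ```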

  19. A neural network architecture for implementation of expert systems for real time monitoring

    NASA Technical Reports Server (NTRS)

    Ramamoorthy, P. A.

    1991-01-01

    Since neural networks have the advantages of massive parallelism and simple architecture, they are good tools for implementing real time expert systems. In a rule based expert system, the antecedents of rules are in the conjunctive or disjunctive form. We constructed a multilayer feedforward type network in which neurons represent AND or OR operations of rules. Further, we developed a translator which can automatically map a given rule base into the network. Also, we proposed a new and powerful yet flexible architecture that combines the advantages of both fuzzy expert systems and neural networks. This architecture uses the fuzzy logic concepts to separate input data domains into several smaller and overlapped regions. Rule-based expert systems for time critical applications using neural networks, the automated implementation of rule-based expert systems with neural nets, and fuzzy expert systems vs. neural nets are covered.
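
    The mapping from rules to network units can be sketched with fuzzy min/max operators standing in for AND and OR (one common choice; the abstract does not specify the exact neuron model):

    ```python
    import numpy as np

    def and_neuron(inputs):   # conjunction of antecedent truth values in [0, 1]
        return np.min(inputs)

    def or_neuron(inputs):    # disjunction over rule activations
        return np.max(inputs)

    # Hypothetical rule base: alarm = (smoke AND heat) OR manual_trigger
    smoke, heat, manual = 0.9, 0.7, 0.0
    hidden = and_neuron([smoke, heat])    # layer 1: one neuron per conjunction
    alarm = or_neuron([hidden, manual])   # layer 2: combine the rules
    print(alarm)                          # 0.7
    ```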

  20. Communicative signals support abstract rule learning by 7-month-old infants

    PubMed Central

    Ferguson, Brock; Lew-Williams, Casey

    2016-01-01

    The mechanisms underlying the discovery of abstract rules like those found in natural language may be evolutionarily tuned to speech, according to previous research. When infants hear speech sounds, they can learn rules that govern their combination, but when they hear non-speech sounds such as sine-wave tones, they fail to do so. Here we show that infants’ rule learning is not tied to speech per se, but is instead enhanced more broadly by communicative signals. In two experiments, infants succeeded in learning and generalizing rules from tones that were introduced as if they could be used to communicate. In two control experiments, infants failed to learn the very same rules when familiarized to tones outside of a communicative exchange. These results reveal that infants’ attention to social agents and communication catalyzes a fundamental achievement of human learning. PMID:27150270

  1. On implementing clinical decision support: achieving scalability and maintainability by combining business rules and ontologies.

    PubMed

    Kashyap, Vipul; Morales, Alfredo; Hongsermeier, Tonya

    2006-01-01

    We present an approach and architecture for implementing scalable and maintainable clinical decision support at the Partners HealthCare System. The architecture integrates a business rules engine that executes declarative if-then rules stored in a rule-base referencing objects and methods in a business object model. The rules engine executes object methods by invoking services implemented on the clinical data repository. Specialized inferences that support classification of data and instances into classes are identified and an approach to implement these inferences using an OWL based ontology engine is presented. Alternative representations of these specialized inferences as if-then rules or OWL axioms are explored and their impact on the scalability and maintenance of the system is presented. Architectural alternatives for integration of clinical decision support functionality with the invoking application and the underlying clinical data repository; and their associated trade-offs are discussed and presented.

  2. Cognitive changes in conjunctive rule-based category learning: An ERP approach.

    PubMed

    Rabi, Rahel; Joanisse, Marc F; Zhu, Tianshu; Minda, John Paul

    2018-06-25

    When learning rule-based categories, sufficient cognitive resources are needed to test hypotheses, maintain the currently active rule in working memory, update rules after feedback, and to select a new rule if necessary. Prior research has demonstrated that conjunctive rules are more complex than unidimensional rules and place greater demands on executive functions like working memory. In our study, event-related potentials (ERPs) were recorded while participants performed a conjunctive rule-based category learning task with trial-by-trial feedback. In line with prior research, correct categorization responses resulted in a larger stimulus-locked late positive complex compared to incorrect responses, possibly indexing the updating of rule information in memory. Incorrect trials elicited a pronounced feedback-locked P300, suggesting a disconnect between perception and the rule-based strategy. We also examined the differential processing of stimuli that were able to be correctly classified by the suboptimal single-dimensional rule ("easy" stimuli) versus those that could only be correctly classified by the optimal, conjunctive rule ("difficult" stimuli). Among strong learners, a larger, late positive slow wave emerged for difficult compared with easy stimuli, suggesting differential processing of category items even though strong learners performed well on the conjunctive category set. Overall, the findings suggest that ERP combined with computational modelling can be used to better understand the cognitive processes involved in rule-based category learning.

  3. Profitability of simple stationary technical trading rules with high-frequency data of Chinese Index Futures

    NASA Astrophysics Data System (ADS)

    Chen, Jing-Chao; Zhou, Yu; Wang, Xi

    2018-02-01

    Technical trading rules have been widely used by practitioners in financial markets for a long time. Their profitability remains controversial, and few studies consider the stationarity of the technical indicators used in trading rules. We convert MA, KDJ and Bollinger bands into stationary processes and investigate the profitability of these trading rules using high-frequency data (15 s, 30 s and 60 s) of CSI300 Stock Index Futures from January 4th, 2012 to December 31st, 2016. Several performance and risk measures are adopted to assess the practical value of all trading rules directly, while the ADF test is used to verify stationarity and the SPA test to check whether trading rules perform well due to intrinsic superiority or pure luck. The results show that there are several significant combinations of parameters for each indicator when transaction costs are not taken into consideration. Once transaction costs are included, trading profits are eliminated completely. We also propose a method to reduce the risk of technical trading rules.
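
    As an illustration, a stationarized moving-average rule can be built from the price-to-MA ratio (one common way to obtain a roughly stationary indicator; the paper's exact transformation is not reproduced here):

    ```python
    import numpy as np

    def ma_rule_signals(prices, window=20):
        """Long (+1) when price sits above its moving average, short (-1) below."""
        prices = np.asarray(prices, dtype=float)
        ma = np.convolve(prices, np.ones(window) / window, mode="valid")
        indicator = prices[window - 1:] / ma - 1.0   # roughly stationary ratio
        return np.where(indicator > 0.0, 1, -1)

    rng = np.random.default_rng(1)
    prices = 100 * np.exp(np.cumsum(0.0005 + 0.01 * rng.standard_normal(500)))
    print(ma_rule_signals(prices)[:10])
    # Any backtest should subtract transaction costs; the paper finds that
    # costs eliminate the apparent profits entirely.
    ```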

  4. Knowledge-guided mutation in classification rules for autism treatment efficacy.

    PubMed

    Engle, Kelley; Rada, Roy

    2017-03-01

    Data mining methods in biomedical research might benefit from combining genetic algorithms with domain-specific knowledge. The objective of this research is to show how the evolution of treatment rules for autism might be guided. The semantic distance between two concepts in the taxonomy is measured by the number of relationships separating the concepts. The hypothesis is that replacing a concept in a treatment rule will change the accuracy of the rule in direct proportion to the semantic distance between the concepts. The method uses a patient database and autism taxonomies. Treatment rules are developed with an algorithm that exploits the taxonomies. The results support the hypothesis. This research should advance both the understanding of autism data mining in particular and of knowledge-guided evolutionary search in biomedicine in general.
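
    The semantic distance reduces to a shortest-path count over taxonomy relationships. A minimal sketch over a hypothetical toy taxonomy fragment:

    ```python
    from collections import deque

    # Hypothetical toy fragment of a treatment taxonomy (parent -> children)
    taxonomy = {
        "therapy": ["behavioral_therapy", "pharmacological_therapy"],
        "behavioral_therapy": ["ABA", "social_skills_training"],
        "pharmacological_therapy": ["SSRI"],
    }

    def semantic_distance(a, b):
        """Number of taxonomy relationships separating concepts a and b (BFS)."""
        adj = {}
        for parent, children in taxonomy.items():
            for child in children:
                adj.setdefault(parent, set()).add(child)
                adj.setdefault(child, set()).add(parent)
        seen, queue = {a}, deque([(a, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == b:
                return dist
            for nxt in adj.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return None

    print(semantic_distance("ABA", "SSRI"))   # 4 hops via the common root
    ```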

  5. A distributed lag approach to fitting non-linear dose-response models in particulate matter air pollution time series investigations.

    PubMed

    Roberts, Steven; Martin, Michael A

    2007-06-01

    The majority of studies that have investigated the relationship between particulate matter (PM) air pollution and mortality have assumed a linear dose-response relationship and have used either a single-day's PM or a 2- or 3-day moving average of PM as the measure of PM exposure. Both of these modeling choices have come under scrutiny in the literature, the linear assumption because it does not allow for non-linearities in the dose-response relationship, and the use of the single- or multi-day moving average PM measure because it does not allow for differential PM-mortality effects spread over time. These two problems have been dealt with on a piecemeal basis, with non-linear dose-response models used in some studies and distributed lag models (DLMs) used in others. In this paper, we propose a method for investigating the shape of the PM-mortality dose-response relationship that combines a non-linear dose-response model with a DLM. This combined model is shown to produce satisfactory estimates of the PM-mortality dose-response relationship in situations where non-linear dose-response models and DLMs alone do not; that is, the combined model did not systematically underestimate or overestimate the effect of PM on mortality. The combined model is applied to ten cities in the US and a pooled dose-response model is formed. When fitted with a change-point value of 60 microg/m(3), the pooled model provides evidence for a positive association between PM and mortality. The combined model produced larger estimates for the effect of PM on mortality than when using a non-linear dose-response model or a DLM in isolation. For the combined model, the estimated percentage increases in mortality at PM concentrations of 25 and 75 microg/m(3) were 3.3% and 5.4%, respectively. In contrast, the corresponding values from a DLM used in isolation were 1.2% and 3.5%, respectively.
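
    The combined model's regression structure can be sketched as a design matrix crossing distributed lags of PM with a piecewise-linear (change-point) dose-response basis, assuming the 60 microg/m(3) change point of the pooled model:

    ```python
    import numpy as np

    def design_matrix(pm, max_lag=3, change_point=60.0):
        """Distributed-lag, change-point basis for PM; regress log-mortality on
        these columns (plus confounders), dropping the initial NaN rows."""
        n = len(pm)
        cols = []
        for lag in range(max_lag + 1):
            lagged = np.r_[np.full(lag, np.nan), pm[:n - lag]]
            cols.append(np.minimum(lagged, change_point))        # slope below CP
            cols.append(np.maximum(lagged - change_point, 0.0))  # extra slope above
        return np.column_stack(cols)

    pm = np.array([20.0, 35.0, 80.0, 55.0, 90.0, 40.0])
    print(design_matrix(pm).shape)   # (6, 8): 2 basis terms x 4 lags
    ```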

  6. Effective domain-dependent reuse in medical knowledge bases.

    PubMed

    Dojat, M; Pachet, F

    1995-12-01

    Knowledge reuse is now a critical issue for most developers of medical knowledge-based systems. As a rule, reuse is addressed from an ambitious, knowledge-engineering perspective that focuses on reusable general purpose knowledge modules, concepts, and methods. However, such a general goal fails to take into account the specific aspects of medical practice. From the point of view of the knowledge engineer, whose goal is to capture the specific features and intricacies of a given domain, this approach addresses the wrong level of generality. In this paper, we adopt a more pragmatic viewpoint, introducing the less ambitious goal of "domain-dependent limited reuse" and suggesting effective means of achieving it in practice. In a knowledge representation framework combining objects and production rules, we propose three mechanisms emerging from the combination of object-oriented programming and rule-based programming. We show that these mechanisms contribute to achieving limited reuse and to introducing useful limited variations in medical expertise.

  7. Screening-level models to estimate partition ratios of organic chemicals between polymeric materials, air and water.

    PubMed

    Reppas-Chrysovitsinos, Efstathios; Sobek, Anna; MacLeod, Matthew

    2016-06-15

    Polymeric materials flowing through the technosphere are repositories of organic chemicals throughout their life cycle. Equilibrium partition ratios of organic chemicals between these materials and air (KMA) or water (KMW) are required for models of fate and transport, high-throughput exposure assessment and passive sampling. KMA and KMW have been measured for a growing number of chemical/material combinations, but significant data gaps still exist. We assembled a database of 363 KMA and 910 KMW measurements for 446 individual compounds and nearly 40 individual polymers and biopolymers, collected from 29 studies. We used the EPI Suite and ABSOLV software packages to estimate physicochemical properties of the compounds and we employed an empirical correlation based on Trouton's rule to adjust the measured KMA and KMW values to a standard reference temperature of 298 K. Then, we used a thermodynamic triangle with Henry's law constant to calculate a complete set of 1273 KMA and KMW values. Using simple linear regression, we developed a suite of single parameter linear free energy relationship (spLFER) models to estimate KMA from the EPI Suite-estimated octanol-air partition ratio (KOA) and KMW from the EPI Suite-estimated octanol-water (KOW) partition ratio. Similarly, using multiple linear regression, we developed a set of polyparameter linear free energy relationship (ppLFER) models to estimate KMA and KMW from ABSOLV-estimated Abraham solvation parameters. We explored the two LFER approaches to investigate (1) their performance in estimating partition ratios, and (2) uncertainties associated with treating all different polymers as a single "bulk" polymeric material compartment. The models we have developed are suitable for screening assessments of the tendency for organic chemicals to be emitted from materials, and for use in multimedia models of the fate of organic chemicals in the indoor environment. In screening applications we recommend that KMA and KMW be modeled as 0.06 × KOA and 0.06 × KOW, respectively, with an uncertainty range of a factor of 15.
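
    The recommended screening relationship is simple enough to state directly. The sketch below implements KMA = 0.06 × KOA with the factor-of-15 uncertainty band (the KMW case is identical with KOW):

    ```python
    def screening_kma(log_koa):
        """Screening estimate of the material-air partition ratio KMA, per the
        paper's recommendation: KMA ~ 0.06 * KOA, uncertain within roughly a
        factor of 15."""
        kma = 0.06 * 10.0**log_koa
        return kma / 15.0, kma, kma * 15.0   # lower bound, central, upper bound

    low, mid, high = screening_kma(log_koa=9.0)   # hypothetical semivolatile
    print(f"KMA ~ {mid:.2e} (range {low:.2e} to {high:.2e})")
    ```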

  8. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease.

    PubMed

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G; Tybjaerg-Hansen, Anne; Sing, Charles F

    2009-05-01

    This article extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (Genet Epidemiol 31:515-527) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2,258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors.

  9. Modifications to the Patient Rule-Induction Method that utilize non-additive combinations of genetic and environmental effects to define partitions that predict ischemic heart disease

    PubMed Central

    Dyson, Greg; Frikke-Schmidt, Ruth; Nordestgaard, Børge G.; Tybjærg-Hansen, Anne; Sing, Charles F.

    2009-01-01

    This paper extends the Patient Rule-Induction Method (PRIM) for modeling cumulative incidence of disease developed by Dyson et al. (2007) to include the simultaneous consideration of non-additive combinations of predictor variables, a significance test of each combination, an adjustment for multiple testing and a confidence interval for the estimate of the cumulative incidence of disease in each partition. We employ the partitioning algorithm component of the Combinatorial Partitioning Method (CPM) to construct combinations of predictors, permutation testing to assess the significance of each combination, theoretical arguments for incorporating a multiple testing adjustment and bootstrap resampling to produce the confidence intervals. An illustration of this revised PRIM utilizing a sample of 2258 European male participants from the Copenhagen City Heart Study is presented that assesses the utility of genetic variants in predicting the presence of ischemic heart disease beyond the established risk factors. PMID:19025787

  10. The effects of cumulative practice on mathematics problem solving.

    PubMed

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.

  11. The effects of cumulative practice on mathematics problem solving.

    PubMed Central

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving. PMID:12102132

  12. Genetic reinforcement learning through symbiotic evolution for fuzzy controller design.

    PubMed

    Juang, C F; Lin, J Y; Lin, C T

    2000-01-01

    An efficient genetic reinforcement learning algorithm for designing fuzzy controllers is proposed in this paper. The genetic algorithm (GA) adopted in this paper is based upon symbiotic evolution which, when applied to fuzzy controller design, complements the local mapping property of a fuzzy rule. Using this Symbiotic-Evolution-based Fuzzy Controller (SEFC) design method, the number of control trials, as well as consumed CPU time, are considerably reduced when compared to traditional GA-based fuzzy controller design methods and other types of genetic reinforcement learning schemes. Moreover, unlike traditional fuzzy controllers, which partition the input space into a grid, SEFC partitions the input space in a flexible way, thus creating fewer fuzzy rules. In SEFC, different types of fuzzy rules, whose consequent parts are singletons, fuzzy sets, or linear equations (TSK-type fuzzy rules), are allowed. Further, the free parameters (e.g., centers and widths of membership functions) and fuzzy rules are all tuned automatically. For TSK-type fuzzy rules in particular, which the proposed learning algorithm puts to use, only the significant input variables are selected to participate in the consequent of a rule. The proposed SEFC design method has been applied to different simulated control problems, including the cart-pole balancing system, a magnetic levitation system, and a water bath temperature control system. On these control problems, and in comparisons with traditional GA-based fuzzy systems, the proposed SEFC has been verified to be efficient and superior.
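
    For reference, a first-order TSK rule base evaluates as a firing-strength-weighted average of linear consequents. A minimal sketch with Gaussian memberships (illustrative parameters, not an evolved SEFC controller):

    ```python
    import numpy as np

    def tsk_output(x, rules):
        """First-order TSK inference: each rule is (centers, widths, coeffs),
        with Gaussian memberships and a linear consequent y = c0 + c . x."""
        num = den = 0.0
        for centers, widths, coeffs in rules:
            w = np.prod(np.exp(-(((x - centers) / widths) ** 2)))  # firing strength
            y = coeffs[0] + np.dot(coeffs[1:], x)                  # linear consequent
            num += w * y
            den += w
        return num / den

    x = np.array([0.3, -0.2])
    rules = [
        (np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([0.5, 1.0, -0.3])),
        (np.array([1.0, -1.0]), np.array([0.8, 0.8]), np.array([-0.2, 0.4, 0.6])),
    ]
    print(tsk_output(x, rules))
    ```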

  13. Chemical association in simple models of molecular and ionic fluids. III. The cavity function

    NASA Astrophysics Data System (ADS)

    Zhou, Yaoqi; Stell, George

    1992-01-01

    Exact equations which relate the cavity function to excess solvation free energies and equilibrium association constants are rederived by using a thermodynamic cycle. A zeroth-order approximation, derived previously by us as a simple interpolation scheme, is found to be very accurate if the associative bonding occurs on or near the surface of the repulsive core of the interaction potential. If the bonding radius is substantially less than the core radius, the approximation overestimates the association degree and the association constant. For binary association, the zeroth-order approximation is equivalent to the first-order thermodynamic perturbation theory (TPT) of Wertheim. For n-particle association, the combination of the zeroth-order approximation with a "linear" approximation (for n-particle distribution functions in terms of the two-particle function) yields the first-order TPT result. Using our exact equations to go beyond TPT, near-exact analytic results for binary hard-sphere association are obtained. Solvent effects on binary hard-sphere association and ionic association are also investigated. A new rule which generalizes Le Chatelier's principle is used to describe the three distinct forms of behavior involving solvent effects that we find. The replacement of the dielectric-continuum solvent model by a dipolar hard-sphere model leads to improved agreement with an experimental observation. Finally, an equation of state for an n-particle flexible linear-chain fluid is derived on the basis of a one-parameter approximation that interpolates between the generalized Kirkwood superposition approximation and the linear approximation. A value of the parameter that appears to be near optimal in the context of this application is obtained from comparison with computer-simulation data.

  14. The application of information theory for the research of aging and aging-related diseases.

    PubMed

    Blokh, David; Stambler, Ilia

    2017-10-01

    This article reviews the application of information-theoretical analysis, employing measures of entropy and mutual information, for the study of aging and aging-related diseases. The research of aging and aging-related diseases is particularly suitable for the application of information theory methods, as aging processes and related diseases are multi-parametric, with continuous parameters coexisting alongside discrete parameters, and with the relations between the parameters being as a rule non-linear. Information theory provides unique analytical capabilities for the solution of such problems, with unique advantages over common linear biostatistics. Among the age-related diseases, information theory has been used in the study of neurodegenerative diseases (particularly using EEG time series for diagnosis and prediction), cancer (particularly for establishing individual and combined cancer biomarkers), diabetes (mainly utilizing mutual information to characterize the diseased and aging states), and heart disease (mainly for the analysis of heart rate variability). Few works have employed information theory for the analysis of general aging processes and frailty, as underlying determinants and possible early preclinical diagnostic measures for aging-related diseases. Generally, the use of information-theoretical analysis permits not only establishing the (non-linear) correlations between diagnostic or therapeutic parameters of interest, but may also provide a theoretical insight into the nature of aging and related diseases by establishing the measures of variability, adaptation, regulation or homeostasis, within a system of interest. It may be hoped that the increased use of such measures in research may considerably increase diagnostic and therapeutic capabilities and the fundamental theoretical mathematical understanding of aging and disease.
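
    The core quantities are standard. A minimal sketch of mutual information between two continuous variables after histogram discretization, which captures exactly the kind of non-linear dependence the review highlights:

    ```python
    import numpy as np

    def mutual_information(x, y, bins=10):
        """Mutual information (bits) between x and y after discretization."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    age = rng.uniform(40.0, 90.0, 2000)
    biomarker = np.sin(age / 10.0) + 0.3 * rng.standard_normal(2000)
    print(mutual_information(age, biomarker))   # nonzero: non-linear dependence
    ```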

  15. PASCAL Data Base File Description and Indexing Rules in Chemistry, Biology and Medicine.

    ERIC Educational Resources Information Center

    Gaillardin, R.; And Others

    This report on the multidisciplinary PASCAL database describes the files and the indexing rules for chemistry, biology, and medicine. PASCAL deals with all aspects of chemistry within two subfiles whose combined yearly growth is about 100,000 references. The Biopascal file, organized in the two subfiles of Plant Science and Biology and Medicine,…

  16. 78 FR 47449 - Self-Regulatory Organizations; The Options Clearing Corporation; Order Approving Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-05

    ... SECURITIES AND EXCHANGE COMMISSION [Release No. 34-70076; File No. SR-OCC-2013-09] Self-Regulatory Organizations; The Options Clearing Corporation; Order Approving Proposed Rule Change, as Modified by Amendment No. 1, To Separate the Powers and Duties Currently Combined in the Office of OCC's Chairman Into Two Offices, Chairman and President, an...

  17. New Rule Use Drives the Relation between Working Memory Capacity and Raven's Advanced Progressive Matrices

    ERIC Educational Resources Information Center

    Wiley, Jennifer; Jarosz, Andrew F.; Cushen, Patrick J.; Colflesh, Gregory J. H.

    2011-01-01

    The correlation between individual differences in working memory capacity and performance on the Raven's Advanced Progressive Matrices (RAPM) is well documented yet poorly understood. The present work proposes a new explanation: that the need to use a new combination of rules on RAPM problems drives the relation between performance and working…

  18. 76 FR 55956 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change To List...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-09

    ... Commentary .02(b)(4) to NYSE Arca Equities Rule 8.200, means any combination of investments, including cash... investment objective of USMI is for the daily changes in percentage terms of its Units' net asset value... Fundamentals of Commodity Futures Returns,'' Gorton, Rouwenhorst and Hayashi (September 2008), Yale...

  19. A Comparison of the Effects of Brief Rules, a Timer, and Preferred Toys on Self-Control

    ERIC Educational Resources Information Center

    Newquist, Matthew H.; Dozier, Claudia L.; Neidert, Pamela L.

    2012-01-01

    Some children make impulsive choices (i.e., choose a small but immediate reinforcer over a large but delayed reinforcer). Previous research has shown that delay fading, providing an alternative activity during the delay, teaching participants to repeat a rule during the delay, combining delay fading with an alternative activity, and combining…

  20. 76 FR 19347 - Combined Notice of Filings #1

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-07

    ... per 35: Compliance filing to conform Attachment M of APS's OATT to be effective 8/31/2010. Filed Date... tariff filing per 35.13(a)(2)(iii): W2-014 - Interim ISA Original Service Agreement No. 2797 to be... accordance with Rules 211 and 214 of the Commission's Rules of Practice and Procedure (18 CFR 385.211 and 385...

  1. Should I or Shouldn't I? An Ethical Conundrum

    ERIC Educational Resources Information Center

    Simpson, Carol

    2004-01-01

    The Golden Rule, which combines the two bodies of ethics, namely the Codes of Ethics (Gerhardt 1990) and the Association for Educational Communications and Technology (AECT) code, plays an important role in analyzing the two codes of ethics that affect school librarianship, which aim to keep patrons safe and secure. Some of the ways in which library…

  2. How to combine probabilistic and fuzzy uncertainties in fuzzy control

    NASA Technical Reports Server (NTRS)

    Nguyen, Hung T.; Kreinovich, Vladik YA.; Lea, Robert

    1991-01-01

    Fuzzy control is a methodology that translates natural-language rules, formulated by expert controllers, into the actual control strategy that can be implemented in an automated controller. In many cases, in addition to the experts' rules, additional statistical information about the system is known. It is explained how to use this additional information in fuzzy control methodology.

  3. Dynamical and topological aspects of consensus formation in complex networks

    NASA Astrophysics Data System (ADS)

    Chacoma, A.; Mato, G.; Kuperman, M. N.

    2018-04-01

    The present work analyzes a particular scenario of consensus formation, where the individuals navigate across an underlying network defining the topology of the walks. The consensus, associated with a given opinion coded as a simple message, is generated by interactions during the agents' walks and manifests itself in the collapse of the various opinions into a single one. We analyze how the topology of the underlying networks and the rules of interaction between the agents promote or inhibit the emergence of this consensus. We find that non-linear interaction rules are required to form consensus and that consensus is more easily achieved in networks whose degree distribution is narrower.

  4. Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives

    NASA Astrophysics Data System (ADS)

    Oesterle, Jonathan; Amodeo, Lionel

    2018-06-01

    The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem of selecting the most effective combination of product and assembly line alternatives. The list of developed algorithms includes variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators on fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.

  5. Process Materialization Using Templates and Rules to Design Flexible Process Models

    NASA Astrophysics Data System (ADS)

    Kumar, Akhil; Yao, Wen

    The main idea in this paper is to show how flexible processes can be designed by combining generic process templates and business rules. We instantiate a process by applying rules to specific case data and running a materialization algorithm. The customized process instance is then executed in an existing workflow engine. We present an architecture and also give an algorithm for process materialization. The rules are written in a logic-based language like Prolog. Our focus is on capturing deeper process knowledge and achieving a holistic approach to robust process design that encompasses control flow, resources, and data, and makes it easier to accommodate changes to business policy.

  6. A New Approach for Resolving Conflicts in Actionable Behavioral Rules

    PubMed Central

    Zhu, Dan; Zeng, Daniel

    2014-01-01

    Knowledge is considered actionable if users can take direct actions based on it to their advantage. Among the most important and distinctive kinds of actionable knowledge are actionable behavioral rules, which can directly and explicitly suggest specific actions to take to influence (restrain or encourage) behavior in the users' best interest. In mining such rules, however, it often occurs that different rules suggest the same actions with different expected utilities; we call these conflicting rules. A valid method to resolve such conflicts was previously proposed, but inconsistency in its rule-evaluation measure may hinder its performance. To overcome this problem, we develop a new method that uses a rule-ranking procedure as the basis for selecting the rule with the highest utility prediction accuracy. More specifically, we propose an integrative measure, combining support and antecedent length, to evaluate the utility prediction accuracies of conflicting rules. We also introduce a tunable weight parameter to allow flexibility in the integration. We conduct several experiments to test the proposed approach and to evaluate the sensitivity of the weight parameter. Empirical results indicate that our approach outperforms those from previous research. PMID:25162054
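
    As a rough sketch of such an integrative measure, each conflicting rule can be scored by a weighted combination of normalized support and normalized antecedent length, with a tunable weight w; the normalization and record layout below are assumptions for illustration, not the paper's exact definitions.

        def integrative_score(rule, max_support, max_len, w=0.5):
            """Weighted combination of normalized support and antecedent length."""
            s = rule["support"] / max_support           # in [0, 1]
            l = len(rule["antecedent"]) / max_len       # in [0, 1]
            return w * s + (1.0 - w) * l

        def resolve_conflict(rules, w=0.5):
            """Among rules suggesting the same action, keep the best-scoring one."""
            ms = max(r["support"] for r in rules)
            ml = max(len(r["antecedent"]) for r in rules)
            return max(rules, key=lambda r: integrative_score(r, ms, ml, w))

        conflicting = [  # hypothetical mined rules suggesting the same action
            {"action": "offer_support", "support": 120, "antecedent": ["A"]},
            {"action": "offer_support", "support": 80, "antecedent": ["A", "B"]},
        ]
        print(resolve_conflict(conflicting, w=0.7))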

  7. Textural features for image classification

    NASA Technical Reports Server (NTRS)

    Haralick, R. M.; Dinstein, I.; Shanmugam, K.

    1973-01-01

    Description of some easily computable textural features based on gray-tone spatial dependencies, and illustration of their application in category-identification tasks for three different kinds of image data, namely photomicrographs of five kinds of sandstones, 1:20,000 panchromatic aerial photographs of eight land-use categories, and ERTS multispectral imagery containing several land-use categories. Two kinds of decision rules are used: one for which the decision regions are convex polyhedra (a piecewise-linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test-set identification accuracy is 89% for the photomicrographs, 82% for the aerial photographic imagery, and 83% for the satellite imagery. These results indicate that the easily computable textural features probably have general applicability for a wide variety of image-classification applications.
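
    The min-max decision rule amounts to per-class bounding boxes (parallelepipeds) in feature space. A minimal sketch with synthetic data; the nearest-center fallback for points outside every box is an assumed tie-break, not necessarily the authors' procedure.

        import numpy as np

        def fit_boxes(X, y):
            """Per-class feature-wise (min, max) bounds from training data."""
            return {c: (X[y == c].min(axis=0), X[y == c].max(axis=0))
                    for c in np.unique(y)}

        def classify(x, boxes):
            """Assign x to a class whose parallelepiped contains it."""
            for c, (lo, hi) in boxes.items():
                if np.all(x >= lo) and np.all(x <= hi):
                    return c
            centers = {c: (lo + hi) / 2 for c, (lo, hi) in boxes.items()}
            return min(centers, key=lambda c: np.linalg.norm(x - centers[c]))

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(4, 1, (50, 4))])
        y = np.array([0] * 50 + [1] * 50)
        print(classify(np.array([3.8, 4.1, 4.0, 3.9]), fit_boxes(X, y)))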

  8. A new modulated Hebbian learning rule--biologically plausible method for local computation of a principal subspace.

    PubMed

    Jankovic, Marko; Ogawa, Hidemitsu

    2003-08-01

    This paper presents one possible implementation of a transformation that performs linear mapping to a lower-dimensional subspace; the principal component subspace is the one analyzed. The idea implemented in this paper is a generalization of the recently proposed infinity OH neural method for principal component extraction. The calculations in the newly proposed method are performed locally, a feature usually considered desirable from the biological point of view. Compared to some other well-known methods, the proposed synaptic efficacy learning rule requires less information about the values of the other efficacies to make a single efficacy modification. Synaptic efficacies are modified by a Modulated Hebb-type (MH) learning rule. A slightly modified MH algorithm, named the Modulated Hebb-Oja (MHO) algorithm, is also introduced. The structural similarity of the proposed network to part of the retinal circuit is presented as well.
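
    For orientation, the classical Oja rule is the best-known member of this family of local Hebbian updates with decay (the paper's MH and MHO rules gate the update differently); a toy sketch with illustrative data, not the paper's algorithm:

        import numpy as np

        rng = np.random.default_rng(1)
        C = np.array([[3.0, 1.0], [1.0, 1.0]])      # covariance with a dominant axis
        X = rng.multivariate_normal([0.0, 0.0], C, size=5000)

        w = rng.normal(size=2)
        eta = 0.01
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)              # Hebbian term plus local decay

        w /= np.linalg.norm(w)
        vals, vecs = np.linalg.eigh(np.cov(X.T))
        print(w, vecs[:, -1])                       # equal up to sign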

  9. Making the Cut in Gifted Selection: Score Combination Rules and Their Impact on Program Diversity

    ERIC Educational Resources Information Center

    Lakin, Joni M.

    2018-01-01

    The recommendation of using "multiple measures" is common in policy guidelines for gifted and talented assessment systems. However, the integration of multiple test scores in a system that uses cut-scores requires choosing between different methods of combining quantitative scores. Past research has indicated that OR combination rules…

  10. 78 FR 75607 - Self-Regulatory Organizations; BATS Exchange, Inc.; Notice of Filing of a Proposed Rule Change in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... the Proposed Business Combination Involving BATS Global Markets, Inc. and Direct Edge Holdings LLC...'') in connection with the proposed business combination (the ``Combination''), as described in more detail below, involving its parent company, BATS Global Markets, Inc. and Direct Edge Holdings LLC (``DE...

  11. 78 FR 75585 - Self-Regulatory Organizations; BATS Y-Exchange, Inc.; Notice of Filing of a Proposed Rule Change...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-12

    ... With the Proposed Business Combination Involving BATS Global Markets, Inc. and Direct Edge Holdings LLC...'') in connection with the proposed business combination (the ``Combination''), as described in more detail below, involving its parent company, BATS Global Markets, Inc. and Direct Edge Holdings LLC (``DE...

  12. Bayesian Estimation of Combined Accuracy for Tests with Verification Bias

    PubMed Central

    Broemeling, Lyle D.

    2011-01-01

    This presentation emphasizes the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects are not verified by the gold standard. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. The accuracy of two combined binary tests is estimated employing either the “believe the positive” or the “believe the negative” rule; the true and false positive fractions for each rule are then computed for the two tests. In order to perform the analysis, the missing at random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
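
    The "believe the positive" (BP) and "believe the negative" (BN) rules are simple to state; assuming conditional independence of the two tests given disease status (an assumption made here only to keep the arithmetic closed-form; the paper's Bayesian treatment does not need it), they combine as follows.

        def believe_the_positive(sens1, spec1, sens2, spec2):
            """Declare 'positive' if either test is positive."""
            return sens1 + sens2 - sens1 * sens2, spec1 * spec2

        def believe_the_negative(sens1, spec1, sens2, spec2):
            """Declare 'positive' only if both tests are positive."""
            return sens1 * sens2, spec1 + spec2 - spec1 * spec2

        # Hypothetical CT / MRI operating points, for illustration only.
        print(believe_the_positive(0.85, 0.90, 0.80, 0.92))  # gains sensitivity
        print(believe_the_negative(0.85, 0.90, 0.80, 0.92))  # gains specificity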

  13. Optimizing cost-efficiency in mean exposure assessment - cost functions reconsidered

    PubMed Central

    2011-01-01

    Background: Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Methods: Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Results: Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. Conclusions: The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios. PMID:21600023
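
    To make the optimization concrete, consider a two-stage version with linear costs (the study's model has three stages and power-function costs; every number below is invented): with n subjects and k measurement occasions each, the variance of the exposure mean is s_b^2/n + s_w^2/(n*k), and the budget constrains c_s*n + c_m*n*k.

        def var_of_mean(n, k, var_between=1.0, var_within=2.0):
            return var_between / n + var_within / (n * k)

        def best_allocation(budget, cost_subject=10.0, cost_occasion=2.0):
            best = None
            for k in range(1, 51):                       # occasions per subject
                n = int(budget // (cost_subject + cost_occasion * k))
                if n >= 1:
                    cand = (var_of_mean(n, k), n, k)
                    best = min(best, cand) if best else cand
            return best                                  # (variance, n, k)

        print(best_allocation(budget=500.0))
        # Expensive subjects plus cheap occasions favor repeated measurements;
        # otherwise one occasion per subject tends to win, as in the paper.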

  14. Optimizing cost-efficiency in mean exposure assessment--cost functions reconsidered.

    PubMed

    Mathiassen, Svend Erik; Bolin, Kristian

    2011-05-21

    Reliable exposure data is a vital concern in medical epidemiology and intervention studies. The present study addresses the needs of the medical researcher to spend monetary resources devoted to exposure assessment with an optimal cost-efficiency, i.e. obtain the best possible statistical performance at a specified budget. A few previous studies have suggested mathematical optimization procedures based on very simple cost models; this study extends the methodology to cover even non-linear cost scenarios. Statistical performance, i.e. efficiency, was assessed in terms of the precision of an exposure mean value, as determined in a hierarchical, nested measurement model with three stages. Total costs were assessed using a corresponding three-stage cost model, allowing costs at each stage to vary non-linearly with the number of measurements according to a power function. Using these models, procedures for identifying the optimally cost-efficient allocation of measurements under a constrained budget were developed, and applied on 225 scenarios combining different sizes of unit costs, cost function exponents, and exposure variance components. Explicit mathematical rules for identifying optimal allocation could be developed when cost functions were linear, while non-linear cost functions implied that parts of or the entire optimization procedure had to be carried out using numerical methods. For many of the 225 scenarios, the optimal strategy consisted in measuring on only one occasion from each of as many subjects as allowed by the budget. Significant deviations from this principle occurred if costs for recruiting subjects were large compared to costs for setting up measurement occasions, and, at the same time, the between-subjects to within-subject variance ratio was small. In these cases, non-linearities had a profound influence on the optimal allocation and on the eventual size of the exposure data set. The analysis procedures developed in the present study can be used for informed design of exposure assessment strategies, provided that data are available on exposure variability and the costs of collecting and processing data. The present shortage of empirical evidence on costs and appropriate cost functions however impedes general conclusions on optimal exposure measurement strategies in different epidemiologic scenarios.

  15. Global optimization algorithm for heat exchanger networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesada, I.; Grossmann, I.E.

    This paper deals with the global optimization of heat exchanger networks with fixed topology. It is shown that if linear area cost functions are assumed, as well as arithmetic mean driving force temperature differences in networks with isothermal mixing, the corresponding nonlinear programming (NLP) optimization problem involves linear constraints and a sum of linear fractional functions in the objective, which are nonconvex. A rigorous algorithm is proposed that is based on a convex NLP underestimator with linear and nonlinear estimators for fractional and bilinear terms, providing a tight lower bound to the global optimum. This NLP problem is used within a spatial branch and bound method for which branching rules are given. Basic properties of the proposed method are presented, and its application is illustrated with several example problems. The results show that the proposed method requires only a few nodes in the branch and bound search.

  16. Thin-plate spline quadrature of geodetic integrals

    NASA Technical Reports Server (NTRS)

    Vangysen, Herman

    1989-01-01

    Thin-plate spline functions (known for their flexibility and fidelity in representing experimental data) are especially well-suited for the numerical integration of geodetic integrals in the area where the integration is most sensitive to the data, i.e., in the immediate vicinity of the evaluation point. Spline quadrature rules are derived for the contribution of a circular innermost zone to Stokes' formula, to the formulae of Vening Meinesz, and to the recursively evaluated operator L(n) in the analytical continuation solution of Molodensky's problem. These rules are exact for interpolating thin-plate splines. In cases where the integration data are distributed irregularly, a system of linear equations needs to be solved for the quadrature coefficients. Formulae are given for the terms appearing in these equations. In case the data are regularly distributed, the coefficients may be determined once and for all. Examples are given of some fixed-point rules. With such rules successive evaluation, within a circular disk, of the terms in Molodensky's series becomes relatively easy. The spline quadrature technique presented complements other techniques such as ring integration for intermediate integration zones.

  17. Multimodal hybrid reasoning methodology for personalized wellbeing services.

    PubMed

    Ali, Rahman; Afzal, Muhammad; Hussain, Maqbool; Ali, Maqbool; Siddiqi, Muhammad Hameed; Lee, Sungyoung; Ho Kang, Byeong

    2016-02-01

    A wellness system provides wellbeing recommendations to support experts in promoting a healthier lifestyle and inducing individuals to adopt healthy habits. Adopting physical activity effectively promotes a healthier lifestyle. A physical activity recommendation system assists users in adopting daily routines that build healthy habits through physical activity. Traditional physical activity recommendation systems focus on general recommendations applicable to a community of users rather than specific individuals. These recommendations are general in nature and fit the community at a certain level, but they are not relevant to every individual's specific requirements and personal interests. To cover this aspect, we propose a multimodal hybrid reasoning methodology (HRM) that generates personalized physical activity recommendations according to the user's specific needs and personal interests. The methodology integrates the rule-based reasoning (RBR), case-based reasoning (CBR), and preference-based reasoning (PBR) approaches in a linear combination that enables personalization of recommendations. RBR uses explicit knowledge rules from physical activity guidelines, CBR uses implicit knowledge from experts' past experiences, and PBR uses users' personal interests and preferences. To validate the methodology, a weight management scenario is considered and experimented with. The RBR part of the methodology generates goal, weight status, and plan recommendations; the CBR part suggests the top three relevant physical activities for executing the recommended plan; and the PBR part filters out irrelevant recommendations from the suggested ones using the user's personal preferences and interests. To evaluate the methodology, a baseline-RBR system is developed, which is improved first using ranged rules and ultimately using a hybrid-CBR. A comparison of the results of these systems shows that hybrid-CBR outperforms the modified-RBR and baseline-RBR systems. Hybrid-CBR yields a recall of 0.94, a precision of 0.97, and an F-score of 0.95, with low Type I and Type II errors. Copyright © 2015 Elsevier Ltd. All rights reserved.
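
    A highly simplified sketch of the RBR/CBR/PBR chain described above; the rules, case base, distance weighting, and preference filter are all invented for illustration.

        CASES = [  # hypothetical past cases capturing experts' experience
            {"bmi": 31, "age": 45, "activities": ["walking", "swimming"]},
            {"bmi": 29, "age": 50, "activities": ["cycling", "walking"]},
            {"bmi": 33, "age": 40, "activities": ["swimming", "yoga"]},
        ]

        def rbr_weight_status(bmi):                  # rule-based reasoning
            return "obese" if bmi >= 30 else "overweight" if bmi >= 25 else "normal"

        def cbr_similar(user, k=2):                  # case-based reasoning
            def dist(c):
                return abs(c["bmi"] - user["bmi"]) + 0.1 * abs(c["age"] - user["age"])
            return sorted(CASES, key=dist)[:k]

        def pbr_filter(activities, dislikes):        # preference-based reasoning
            return [a for a in activities if a not in dislikes]

        user = {"bmi": 30.5, "age": 47, "dislikes": {"swimming"}}
        suggested = {a for c in cbr_similar(user) for a in c["activities"]}
        print(rbr_weight_status(user["bmi"]),
              pbr_filter(sorted(suggested), user["dislikes"]))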

  18. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners

    PubMed Central

    Kirchberger, Martin

    2016-01-01

    Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. PMID:26868955

  19. Dynamic Range Across Music Genres and the Perception of Dynamic Compression in Hearing-Impaired Listeners.

    PubMed

    Kirchberger, Martin; Russo, Frank A

    2016-02-10

    Dynamic range compression serves different purposes in the music and hearing-aid industries. In the music industry, it is used to make music louder and more attractive to normal-hearing listeners. In the hearing-aid industry, it is used to map the variable dynamic range of acoustic signals to the reduced dynamic range of hearing-impaired listeners. Hence, hearing-aided listeners will typically receive a dual dose of compression when listening to recorded music. The present study involved an acoustic analysis of dynamic range across a cross section of recorded music as well as a perceptual study comparing the efficacy of different compression schemes. The acoustic analysis revealed that the dynamic range of samples from popular genres, such as rock or rap, was generally smaller than the dynamic range of samples from classical genres, such as opera and orchestra. By comparison, the dynamic range of speech, based on recordings of monologues in quiet, was larger than the dynamic range of all music genres tested. The perceptual study compared the effect of the prescription rule NAL-NL2 with a semicompressive and a linear scheme. Music subjected to linear processing had the highest ratings for dynamics and quality, followed by the semicompressive and the NAL-NL2 setting. These findings advise against NAL-NL2 as a prescription rule for recorded music and recommend linear settings. © The Author(s) 2016.

  20. Not so Complex: Iteration in the Complex Plane

    ERIC Educational Resources Information Center

    O'Dell, Robin S.

    2014-01-01

    The simple process of iteration can produce complex and beautiful figures. In this article, Robin O'Dell presents a set of tasks requiring students to use the geometric interpretation of complex number multiplication to construct linear iteration rules. When the outputs are plotted in the complex plane, the graphs trace pleasing designs…
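
    A linear iteration rule in the complex plane has the form z(n+1) = a*z(n) + b; multiplying by a rotates by arg(a) and scales by |a|, so |a| < 1 spirals the orbit toward the fixed point b/(1 - a). A short sketch with illustrative constants:

        import cmath

        a = 0.95 * cmath.exp(1j * cmath.pi / 12)   # scale 0.95, rotate 15 degrees
        b = 1 + 0.5j
        z = 0 + 0j
        orbit = []
        for _ in range(200):
            z = a * z + b
            orbit.append(z)

        print("fixed point:", b / (1 - a))
        print("final iterate:", orbit[-1])          # close to the fixed point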

  1. Ghosts of Mathematicians Past: Paolo Ruffini

    ERIC Educational Resources Information Center

    Fitzherbert, John

    2016-01-01

    Paolo Ruffini (1765-1822) may be something of an unknown in high school mathematics; however, his contributions to the world of mathematics are a rich source of inspiration. Ruffini's rule (often known as "synthetic division") is an efficient method of dividing a polynomial by a linear factor, with or without a remainder. The process can…
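
    Ruffini's rule itself is a three-line algorithm: bring down the leading coefficient, then repeatedly multiply by the root r and add the next coefficient; the last value produced is the remainder p(r). A small sketch (not from the article):

        def ruffini(coeffs, r):
            """Divide a polynomial by (x - r); coeffs go from highest degree down.
            Returns (quotient coefficients, remainder), where remainder == p(r)."""
            out = [coeffs[0]]
            for c in coeffs[1:]:
                out.append(c + r * out[-1])
            return out[:-1], out[-1]

        # (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0
        print(ruffini([1, -6, 11, -6], 1))   # ([1, -5, 6], 0)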

  2. Meaning as a Nonlinear Effect: The Birth of Cool

    ERIC Educational Resources Information Center

    Blommaert, Jan

    2015-01-01

    Saussurean and Chomskyan "conduit" views of meaning in communication, dominant in much of expert and lay linguistic semantics, presuppose a simple, closed and linear system in which outcomes can be predicted and explained in terms of finite sets of rules. Summarizing critical traditions of scholarship, notably those driven by Bateson's…

  3. SPAR reference manual

    NASA Technical Reports Server (NTRS)

    Whetstone, W. D.

    1976-01-01

    The functions and operating rules of the SPAR system, which is a group of computer programs used primarily to perform stress, buckling, and vibrational analyses of linear finite element systems, are described. The following subject areas are discussed: basic information, structure definition, format system matrix processors, utility programs, static solutions, stresses, sparse matrix eigensolver, dynamic response, graphics, and substructure processors.

  4. Whole stand volume tables for quaking aspen in the Rocky Mountains

    Treesearch

    Wayne D. Shepperd; H. Todd Mowrer

    1984-01-01

    Linear regression equations were developed to predict stand volumes for aspen given average stand basal area and average stand height. Tables constructed from these equations allow easy field estimation of gross merchantable cubic feet and board feet (Scribner rule) per acre, and cubic meters per hectare, using simple prism cruise data.

  5. Algebraic Generalization Strategies Used by Kuwaiti Pre-Service Teachers

    ERIC Educational Resources Information Center

    Alajmi, Amal Hussain

    2016-01-01

    This study reports on the algebraic generalization strategies used by elementary and middle/high school pre-service mathematics teachers in Kuwait. They were presented with 9 tasks that involved linear, exponential, and quadratic situations. The results showed that these pre-service teachers had difficulty in generalizing algebraic rules in all 3…

  6. Genetic algorithm optimized rainfall-runoff fuzzy inference system for row crop watersheds with claypan soils

    USDA-ARS?s Scientific Manuscript database

    The fuzzy logic algorithm has the ability to describe knowledge in a descriptive human-like manner in the form of simple rules using linguistic variables, and provides a new way of modeling uncertain or naturally fuzzy hydrological processes like non-linear rainfall-runoff relationships. Fuzzy infe...

  7. Analysis of programming properties and the row-column generation method for 1-norm support vector machines.

    PubMed

    Zhang, Li; Zhou, WeiDa

    2013-12-01

    This paper deals with fast methods for training a 1-norm support vector machine (SVM). First, we define a specific class of linear programming with many sparse constraints, i.e., row-column sparse constraint linear programming (RCSC-LP). By nature, the 1-norm SVM is a sort of RCSC-LP. In order to construct subproblems for RCSC-LP and solve them, a family of row-column generation (RCG) methods is introduced. RCG methods belong to a category of decomposition techniques, and perform row and column generations in a parallel fashion. Specifically, for the 1-norm SVM, the maximum size of the subproblems of RCG is identical to the number of support vectors (SVs). We also introduce a semi-deleting rule for RCG methods and prove the convergence of RCG methods when using the semi-deleting rule. Experimental results on toy data and real-world datasets illustrate that it is efficient to use RCG to train the 1-norm SVM, especially in the case of few SVs. Copyright © 2013 Elsevier Ltd. All rights reserved.
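
    For context, the 1-norm SVM is itself a linear program: minimize ||w||_1 + C*sum(xi) subject to y_i*(w.x_i + b) >= 1 - xi_i, xi_i >= 0. The small dense sketch below solves the whole LP at once (the paper's RCG methods instead generate rows and columns incrementally); the split w = u - v, b = bp - bn keeps all variables nonnegative.

        import numpy as np
        from scipy.optimize import linprog

        def l1_svm(X, y, C=1.0):
            m, d = X.shape
            c = np.concatenate([np.ones(2 * d), [0.0, 0.0], C * np.ones(m)])
            Yx = y[:, None] * X
            # Each row encodes -(y_i((u-v).x_i + bp - bn) + xi_i) <= -1.
            A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(m)])
            res = linprog(c, A_ub=A_ub, b_ub=-np.ones(m))  # default bounds: >= 0
            z = res.x
            return z[:d] - z[d:2 * d], z[2 * d] - z[2 * d + 1]   # w, b

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
        y = np.array([-1.0] * 20 + [1.0] * 20)
        w, b = l1_svm(X, y)
        print(w, b, np.mean(np.sign(X @ w + b) == y))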

  8. A physically based connection between fractional calculus and fractal geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butera, Salvatore, E-mail: sg.butera@gmail.com; Di Paola, Mario, E-mail: mario.dipaola@unipa.it

    2014-11-15

    We show a relation between fractional calculus and fractals, based only on physical and geometrical considerations. The link has been found in the physical origins of the power laws ruling the evolution of many natural phenomena, whose long memory and hereditary properties are mathematically modelled by differential operators of non-integer order. Dealing with the relevant example of a viscous fluid seeping through a fractal shaped porous medium, we show that, once a physical phenomenon or process takes place on an underlying fractal geometry, a power law naturally comes up in ruling its evolution, whose order is related to the anomalous dimension of such geometry, as well as to the model used to describe the physics involved. By linearizing the nonlinear dependence of the response of the system at hand to a proper forcing action and then exploiting the Boltzmann superposition principle, a fractional differential equation is found, describing the dynamics of the system itself. The order of such equation is again related to the anomalous dimension of the underlying geometry.

  9. An analytical study of the endoreversible Curzon-Ahlborn cycle for a non-linear heat transfer law

    NASA Astrophysics Data System (ADS)

    Páez-Hernández, Ricardo T.; Portillo-Díaz, Pedro; Ladino-Luna, Delfino; Ramírez-Rojas, Alejandro; Pacheco-Paez, Juan C.

    2016-01-01

    In the present article, an endoreversible Curzon-Ahlborn engine is studied by considering a non-linear heat transfer law, particularly the Dulong-Petit heat transfer law, using the 'componendo and dividendo' rule as well as a simple differentiation to obtain the Curzon-Ahlborn efficiency, as proposed by Agrawal in 2009. This rule is actually a change of variable that simplifies a two-variable problem to a one-variable problem. From elementary calculus, we obtain analytical expressions for the efficiency and the power output. The efficiency is given only in terms of the temperatures of the reservoirs, as in both the Carnot and Curzon-Ahlborn cycles. We compare efficiencies measured in real power plants with the theoretical values from the analytical expressions obtained in this article and elsewhere in the literature. This comparison shows that the theoretical values of efficiency are close to the real efficiencies, and in some cases they are exactly the same. Therefore, we can say that the Agrawal method gives a good approximation for calculating thermal engine efficiencies.
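
    For reference, with the Newtonian (linear) heat-transfer law the endoreversible result is the Curzon-Ahlborn efficiency eta_CA = 1 - sqrt(Tc/Th), which, like the expressions in the article, depends only on the reservoir temperatures (the Dulong-Petit law yields a different, more involved expression). A quick comparison with the Carnot limit using illustrative temperatures:

        import math

        def eta_carnot(tc, th):
            return 1.0 - tc / th

        def eta_curzon_ahlborn(tc, th):
            # Endoreversible engine, Newtonian heat transfer, maximum power.
            return 1.0 - math.sqrt(tc / th)

        tc, th = 300.0, 840.0    # illustrative reservoir temperatures in kelvin
        print(f"Carnot: {eta_carnot(tc, th):.3f}")                  # 0.643
        print(f"Curzon-Ahlborn: {eta_curzon_ahlborn(tc, th):.3f}")  # 0.402
        # Observed plant efficiencies typically lie near the CA value.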

  10. Feasibility of combining linear theory and impact theory methods for the analysis and design of high speed configurations

    NASA Technical Reports Server (NTRS)

    Brooke, D.; Vondrasek, D. V.

    1978-01-01

    The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.

  11. Ellipsoidal fuzzy learning for smart car platoons

    NASA Astrophysics Data System (ADS)

    Dickerson, Julie A.; Kosko, Bart

    1993-12-01

    A neural-fuzzy system combined supervised and unsupervised learning to find and tune the fuzzy rules. An additive fuzzy system approximates a function by covering its graph with fuzzy rules. A fuzzy rule patch can take the form of an ellipsoid in the input-output space. Unsupervised competitive learning found the statistics of data clusters. The covariance matrix of each synaptic quantization vector defined an ellipsoid centered at the centroid of the data cluster. Tightly clustered data gave smaller ellipsoids, or more certain rules. Sparse data gave larger ellipsoids, or less certain rules. Supervised learning tuned the ellipsoids to improve the approximation. The supervised neural system used gradient descent to find the ellipsoidal fuzzy patches. It locally minimized the mean-squared error of the fuzzy approximation. Hybrid ellipsoidal learning estimated the control surface for a smart-car controller.

  12. Rule Extracting based on MCG with its Application in Helicopter Power Train Fault Diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, M.; Hu, N. Q.; Qin, G. J.

    2011-07-01

    In order to extract decision rules for fault diagnosis from incomplete historical test records for knowledge-based damage assessment of helicopter power train structures, a method that can directly extract the optimal generalized decision rules from incomplete information based on granular computing (GrC) was proposed. Based on semantic analysis of unknown attribute values, the granule was extended to handle incomplete information. The maximum characteristic granule (MCG) was defined based on the characteristic relation, and the MCG was used to construct the resolution function matrix. The optimal general decision rule was introduced and, using the basic equivalent forms of propositional logic, the rules were extracted and reduced from the incomplete information table. Combined with a fault diagnosis example for a power train, the application of the method was presented, and the validity of this method for knowledge acquisition was demonstrated.

  13. Limit of validity of Ostwald's rule of stages in a statistical mechanical model of crystallization.

    PubMed

    Hedges, Lester O; Whitelam, Stephen

    2011-10-28

    We have only rules of thumb with which to predict how a material will crystallize, chief among which is Ostwald's rule of stages. It states that the first phase to appear upon transformation of a parent phase is the one closest to it in free energy. Although sometimes upheld, the rule is without theoretical foundation and is not universally obeyed, highlighting the need for microscopic understanding of crystallization controls. Here we study in detail the crystallization pathways of a prototypical model of patchy particles. The range of crystallization pathways it exhibits is richer than can be predicted by Ostwald's rule, but a combination of simulation and analytic theory reveals clearly how these pathways are selected by microscopic parameters. Our results suggest strategies for controlling self-assembly pathways in simulation and experiment.

  14. Cooling in the single-photon strong-coupling regime of cavity optomechanics

    NASA Astrophysics Data System (ADS)

    Nunnenkamp, A.; Børkje, K.; Girvin, S. M.

    2012-05-01

    In this Rapid Communication we discuss how red-sideband cooling is modified in the single-photon strong-coupling regime of cavity optomechanics where the radiation pressure of a single photon displaces the mechanical oscillator by more than its zero-point uncertainty. Using Fermi's golden rule we calculate the transition rates induced by the optical drive without linearizing the optomechanical interaction. In the resolved-sideband limit we find multiple-phonon cooling resonances for strong single-photon coupling that lead to nonthermal steady states including the possibility of phonon antibunching. Our study generalizes the standard linear cooling theory.

  15. Closure properties of Watson-Crick grammars

    NASA Astrophysics Data System (ADS)

    Zulkufli, Nurul Liyana binti Mohamad; Turaev, Sherzod; Tamrin, Mohd Izzuddin Mohd; Azeddine, Messikh

    2015-12-01

    In this paper, we define Watson-Crick context-free grammars as an extension of Watson-Crick regular grammars and Watson-Crick linear grammars with context-free grammar rules. We show the relation of Watson-Crick (regular and linear) grammars to sticker systems, and study some of the important closure properties of the Watson-Crick grammars. We establish that the Watson-Crick regular grammars are closed under almost all of the main closure operations, while the differences between the other Watson-Crick grammars and their corresponding Chomsky grammars depend on the computational power of the Watson-Crick grammars, which still needs to be studied.

  16. Scarp degraded by linear diffusion: inverse solution for age.

    USGS Publications Warehouse

    Andrews, D.J.; Hanks, T.C.

    1985-01-01

    Under the assumption that landforms unaffected by drainage channels are degraded according to the linear diffusion equation, a procedure is developed to invert a scarp profile to find its 'diffusion age'. The inverse procedure applied to synthetic data yields the following rules of thumb. Evidence of initial scarp shape has been lost when apparent age reaches twice its initial value. A scarp that appears to have been formed by one event may have been formed by two with an interval between them as large as apparent age. The simplicity of scarp profile measurement and this inversion makes profile analysis attractive. -from Authors
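
    The forward model being inverted has a closed form: an initially vertical scarp of half-height a, degraded by linear diffusion to a "diffusion age" kt (diffusivity times time), has elevation u(x) = a*erf(x/sqrt(4*kt)), with any far-field slope omitted here. The mid-slope gradient a/sqrt(pi*kt) decays with age, which is what the inversion exploits. A short sketch with illustrative numbers:

        import numpy as np
        from scipy.special import erf

        def scarp_profile(x, a, kt):
            """Elevation across a diffusion-degraded scarp (far-field slope omitted)."""
            return a * erf(x / np.sqrt(4.0 * kt))

        x = np.linspace(-20.0, 20.0, 9)        # meters across the scarp
        for kt in (1.0, 10.0, 100.0):          # diffusion ages in m^2
            print(kt, np.round(scarp_profile(x, a=2.0, kt=kt), 2))
        # Older scarps (larger kt) are visibly gentler at the midpoint.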

  17. Towards application of rule learning to the meta-analysis of clinical data: an example of the metabolic syndrome.

    PubMed

    Wojtusiak, Janusz; Michalski, Ryszard S; Simanivanh, Thipkesone; Baranova, Ancha V

    2009-12-01

    Systematic reviews and meta-analysis of published clinical datasets are an important part of medical research. By combining the results of multiple studies, meta-analysis can increase confidence in its conclusions, validate particular study results, and sometimes lead to new findings. Extensive theory has been built on how to aggregate results from multiple studies and arrive at statistically valid conclusions. Surprisingly, very little has been done to adopt advanced machine learning methods to support meta-analysis. In this paper we describe a novel machine learning methodology that is capable of inducing accurate and easy-to-understand attributional rules from aggregated data. Thus, the methodology can be used to support traditional meta-analysis in systematic reviews. Most machine learning applications give primary attention to the predictive accuracy of the learned knowledge, and lesser attention to its understandability. Here we employed attributional rules, a special form of rules that is relatively easy for medical experts who are not necessarily trained in statistics and meta-analysis to interpret. The methodology has been implemented and initially tested on a set of publicly available clinical data describing patients with metabolic syndrome (MS). The objective of this application was to determine rules describing combinations of clinical parameters used for metabolic syndrome diagnosis, and to develop rules for predicting whether particular patients are likely to develop secondary complications of MS. The aggregated clinical data were retrieved from 20 separate hospital cohorts that included 12 groups of patients with liver disease symptoms and 8 control groups of healthy subjects. A total of 152 attributes were used, most of which, however, were measured only in some of the studies. The twenty most common attributes were selected for the rule learning process. By applying the developed rule learning methodology we arrived at several different possible rulesets that can be used to predict three considered complications of MS, namely nonalcoholic fatty liver disease (NAFLD), simple steatosis (SS), and nonalcoholic steatohepatitis (NASH).

  18. Cogeneration technology alternatives study. Volume 6: Computer data

    NASA Technical Reports Server (NTRS)

    1980-01-01

    The potential technical capabilities of energy conversion systems in the 1985 - 2000 time period were defined with emphasis on systems using coal, coal-derived fuels or alternate fuels. Industrial process data developed for the large energy consuming industries serve as a framework for the cogeneration applications. Ground rules for the study were established and other necessary equipment (balance-of-plant) was defined. This combination of technical information, energy conversion system data ground rules, industrial process information and balance-of-plant characteristics was analyzed to evaluate energy consumption, capital and operating costs and emissions. Data in the form of computer printouts developed for 3000 energy conversion system-industrial process combinations are presented.

  19. Mental Accounting in Portfolio Choice: Evidence from a Flypaper Effect

    PubMed Central

    Choi, James J.; Laibson, David; Madrian, Brigitte C.

    2009-01-01

    Consistent with mental accounting, we document that investors sometimes choose the asset allocation for one account without considering the asset allocation of their other accounts. The setting is a firm that changed its 401(k) matching rules. Initially, 401(k) enrollees chose the allocation of their own contributions, but the firm chose the match allocation. These enrollees ignored the match allocation when choosing their own-contribution allocation. In the second regime, enrollees simultaneously selected both accounts’ allocations, leading them to mentally integrate the two. Own-contribution allocations before the rule change equal the combined own- and match-contribution allocations afterwards, whereas combined allocations differ sharply across regimes. PMID:20027235

  20. A personalized health-monitoring system for elderly by combining rules and case-based reasoning.

    PubMed

    Ahmed, Mobyen Uddin

    2015-01-01

    Health monitoring for elderly people in the home environment is a promising way to provide efficient medical services and is of increasing interest to researchers in this area. The task is more challenging when the system is self-served and must function in a personalized manner. This paper proposes a personalized, self-served health-monitoring system for the elderly in the home environment that combines general rules with a case-based reasoning approach. The system generates feedback, recommendations, and alarms in a personalized manner based on the elderly person's medical information and health parameters such as blood pressure, blood glucose, weight, activity, and pulse. A set of general rules is used to classify the individual health parameters. The case-based reasoning approach then combines all the different health parameters to generate an overall classification of health condition. In an evaluation on 323 cases with k=2, i.e., the two most similar retrieved cases, the sensitivity, specificity, and overall accuracy were 90%, 97%, and 96%, respectively. These preliminary results are acceptable, since the feedback, recommendation, and alarm messages are personalized and differ from the general messages. Thus, this approach could be adapted to other situations in personalized monitoring of the elderly.

  1. 75 FR 61642 - Fisheries of the Exclusive Economic Zone Off Alaska; Modified Nonpelagic Trawl Gear and Habitat...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-06

    ... Island Habitat Conservation Area (SMIHCA). Four minor changes to the FMP also are made, three of which do... Flexibility Act (RFA). However, based on their combined groundfish revenues, none of the four catcher vessels... states that, for each rule or group of related rules for which an agency is required to prepare a FRFA...

  2. Critical Thinking and Intelligence Analysis

    DTIC Science & Technology

    2007-03-01

    assess such systems – terrorist networks are but one example. Additionally, as sociologist Emile Durkheim observes, the combinations of elements...University Press, 99), 0. Cited hereafter as Jervis, System Effects. Emile Durkheim, The Rules of Sociological Method (Glencoe, IL: Free Press...Puzzles. New York, NY: Main Street, 2005. Durkheim, Emile. The Rules of Sociological Method. Glencoe, IL: Free Press, 1938. Eco, Umberto, and

  3. Finding Words and Word Structure in Artificial Speech: The Development of Infants' Sensitivity to Morphosyntactic Regularities

    ERIC Educational Resources Information Center

    Marchetto, Erika; Bonatti, Luca L.

    2015-01-01

    To achieve language proficiency, infants must find the building blocks of speech and master the rules governing their legal combinations. However, these problems are linked: words are also built according to rules. Here, we explored early morphosyntactic sensitivity by testing when and how infants could find either words or within-word structure…

  4. Designing seasonal initial attack resource deployment and dispatch rules using a two-stage stochastic programming procedure

    Treesearch

    Yu Wei; Michael Bevers; Erin J. Belval

    2015-01-01

    Initial attack dispatch rules can help shorten fire suppression response times by providing easy-to-follow recommendations based on fire weather, discovery time, location, and other factors that may influence fire behavior and the appropriate response. A new procedure is combined with a stochastic programming model and tested in this study for designing initial attack...

  5. Environmental Assessment for the South Gate Improvement Project Travis Air Force Base Solano County, California

    DTIC Science & Technology

    2005-12-01

    3-11 De Minimis Levels for Exemption from General Conformity Rule Requirements...Conformity Rule de minimis levels. Therefore, not considered a significant impact. No significant impact. Noise...Temporary, short...required under state law. This combined element is intended to guide long-range growth and development in an orderly manner that protects the

  6. Optimal number of features as a function of sample size for various classification rules.

    PubMed

    Hua, Jianping; Xiong, Zixiang; Lowey, James; Suh, Edward; Dougherty, Edward R

    2005-04-15

    Given the joint feature-label distribution, increasing the number of features always results in decreased classification error; however, this is not the case when a classifier is designed via a classification rule from sample data. Typically (but not always), for fixed sample size, the error of a designed classifier decreases and then increases as the number of features grows. The potential downside of using too many features is most critical for small samples, which are commonplace for gene-expression-based classifiers for phenotype discrimination. For fixed sample size and feature-label distribution, the issue is to find an optimal number of features. Since only in rare cases is there a known distribution of the error as a function of the number of features and sample size, this study employs simulation for various feature-label distributions and classification rules, and across a wide range of sample and feature-set sizes. To achieve the desired end, finding the optimal number of features as a function of sample size, it employs massively parallel computation. Seven classifiers are treated: 3-nearest-neighbor, Gaussian kernel, linear support vector machine, polynomial support vector machine, perceptron, regular histogram and linear discriminant analysis. Three Gaussian-based models are considered: linear, nonlinear and bimodal. In addition, real patient data from a large breast-cancer study is considered. To mitigate the combinatorial search for finding optimal feature sets, and to model the situation in which subsets of genes are co-regulated and correlation is internal to these subsets, we assume that the covariance matrix of the features is blocked, with each block corresponding to a group of correlated features. Altogether there are a large number of error surfaces for the many cases. These are provided in full on a companion website, which is meant to serve as a resource for those working with small-sample classification. For the companion website, please visit http://public.tgen.org/tamu/ofs/ e-dougherty@ee.tamu.edu.

  7. Cramer's rule, Quarks Fractional electric charge, A scientific exploration or a possible mathematical electric charge value?

    NASA Astrophysics Data System (ADS)

    Estakhr, Ahmad Reza

    2013-03-01

    In linear algebra, Cramer's rule [1] is an explicit formula for the solution of a system of linear equations with as many equations as unknowns. For the quark charges u and d, the system 2u + 1d = +1 (proton) and 1u + 2d = 0 (neutron), written generally as a_1 d + b_1 u = c_1 and a_2 d + b_2 u = c_2, is solved by u = (a_1 c_2 - a_2 c_1)/(a_1 b_2 - a_2 b_1) and d = (c_1 b_2 - c_2 b_1)/(a_1 b_2 - a_2 b_1), giving u = +2/3 and d = -1/3. Now suppose instead that the up quark has no electric charge and that it is in fact the down quark which carries charge (+1, -1); then the fractional electric charge assignment breaks down completely: 2u(0) + 1d(+1) = +1 and 1u(0) + d(-1) + d(+1) = 0. This would mean probabilities are associated with the unknown parameters; thus the fractional electric charge values of the quarks are possible charges, not exact values. This is also consistent with neutron decay: while bound neutrons in stable nuclei are stable, free neutrons are unstable; they undergo beta decay with a mean lifetime of just under 15 minutes (881.5 ± 1.5 s). Free neutrons decay by emission of an electron and an electron antineutrino to become a proton, a process known as beta decay: n^0 → p^{+1} + e^{-1} + antineutrino (ν̄_e). Ref. 1: http://en.wikipedia.org/wiki/Cramer's_rule
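
    Whatever one makes of the physics, the algebra is a plain 2x2 application of Cramer's rule and is easy to check directly (a small verification sketch):

        from fractions import Fraction

        # 2u + 1d = +1 (proton), 1u + 2d = 0 (neutron)
        b1, a1, c1 = Fraction(2), Fraction(1), Fraction(1)
        b2, a2, c2 = Fraction(1), Fraction(2), Fraction(0)

        det = a1 * b2 - a2 * b1                 # coefficient determinant
        u = (a1 * c2 - a2 * c1) / det
        d = (c1 * b2 - c2 * b1) / det
        print(u, d)                             # 2/3 -1/3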

  8. Establishment of a standard operating procedure for predicting the time of calving in cattle

    PubMed Central

    Sauter-Louis, Carola; Braunert, Anna; Lange, Dorothee; Weber, Frank; Zerbe, Holm

    2011-01-01

    Precise calving monitoring is essential for minimizing the effects of dystocia in cows and calves. We conducted two studies in healthy cows that compared seven clinical signs (relaxation of the broad pelvic ligaments, vaginal secretion, udder hyperplasia, udder edema, teat filling, tail relaxation, and vulva edema), alone and in combination, for predicting the time of parturition. The relaxation of the broad pelvic ligaments combined with teat filling gave the best values for predicting either calving or no calving within 12 h. For the proposed parturition score (PS), a threshold of 4 PS points was identified below which calving within the next 12 h could be ruled out with a probability of 99.3% in cows (95.5% in heifers). Above this threshold, intermittent calving monitoring every 3 h and a progesterone rapid blood test (PRBT) would be recommended. By combining the PS and PRBT (if PS ≥ 4), the prediction of calving within the next 12 h improved from 14.9% to 53.1%, and the probability of ruling out calving was 96.8%. The PRBT was compared to the results of an enzyme immunoassay (sensitivity, 90.2%; specificity, 74.9%). The standard operating procedure developed in this study, which combines the PS and PRBT, will enable veterinarians to rule out or predict calving within a 12 h period in cows with high accuracy under field conditions. PMID:21586878

  9. Complex-energy approach to sum rules within nuclear density functional theory

    DOE PAGES

    Hinohara, Nobuo; Kortelainen, Markus; Nazarewicz, Witold; ...

    2015-04-27

    The linear response of the nucleus to an external field contains unique information about the effective interaction, correlations governing the behavior of the many-body system, and properties of its excited states. To characterize the response, it is useful to use its energy-weighted moments, or sum rules. By comparing computed sum rules with experimental values, the information content of the response can be utilized in the optimization process of the nuclear Hamiltonian or nuclear energy density functional (EDF). But the additional information comes at a price: compared to the ground state, computation of excited states is more demanding. To establish an efficient framework to compute energy-weighted sum rules of the response that is adaptable to the optimization of the nuclear EDF and large-scale surveys of collective strength, we have developed a new technique within the complex-energy finite-amplitude method (FAM) based on the quasiparticle random-phase approximation. The proposed sum-rule technique based on the complex-energy FAM is a tool of choice when optimizing effective interactions or energy functionals. The method is very efficient and well-adaptable to parallel computing. As a result, the FAM formulation is especially useful when standard theorems based on commutation relations involving the nuclear Hamiltonian and external field cannot be used.

  10. 5 CFR 841.706 - Increases on combined CSRS/FERS annuities.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Adjustments § 841.706 Increases on combined CSRS/FERS annuities. (a) COLA's on combined CSRS/FERS annuities... amount of COLA's under § 841.703(a). (b) The initial monthly rate is computed by— (1) Applying CSRS rules... retiree is due a full dollar increase on the FERS component with the next COLA. An employee with less than...

  11. 5 CFR 841.706 - Increases on combined CSRS/FERS annuities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Adjustments § 841.706 Increases on combined CSRS/FERS annuities. (a) COLA's on combined CSRS/FERS annuities... amount of COLA's under § 841.703(a). (b) The initial monthly rate is computed by— (1) Applying CSRS rules... retiree is due a full dollar increase on the FERS component with the next COLA. An employee with less than...

  12. 5 CFR 841.706 - Increases on combined CSRS/FERS annuities.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Adjustments § 841.706 Increases on combined CSRS/FERS annuities. (a) COLA's on combined CSRS/FERS annuities... amount of COLA's under § 841.703(a). (b) The initial monthly rate is computed by— (1) Applying CSRS rules... retiree is due a full dollar increase on the FERS component with the next COLA. An employee with less than...

  13. 5 CFR 841.706 - Increases on combined CSRS/FERS annuities.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Adjustments § 841.706 Increases on combined CSRS/FERS annuities. (a) COLA's on combined CSRS/FERS annuities... amount of COLA's under § 841.703(a). (b) The initial monthly rate is computed by— (1) Applying CSRS rules... retiree is due a full dollar increase on the FERS component with the next COLA. An employee with less than...

  14. 10 CFR 2.629 - Finality of partial decision on site suitability issues in a combined license proceeding.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... a combined license proceeding. 2.629 Section 2.629 Energy NUCLEAR REGULATORY COMMISSION RULES OF... Work Authorizations Early Partial Decisions on Site Suitability-Combined License Under 10 Cfr Part 52... complete and acceptable for docketing under § 2.101(a)(3), the Director of the Office of New Reactors or...

  15. Oral matrix tablet formulations for concomitant controlled release of anti-tubercular drugs: design and in vitro evaluations.

    PubMed

    Hiremath, Praveen S; Saha, Ranendra N

    2008-10-01

    The aim of the present investigation was to develop controlled release (C.R.) matrix tablet formulations of a rifampicin and isoniazid combination, to study the design parameters, and to evaluate in vitro release characteristics. In the present study, a series of formulations was developed with different release rates and durations using the hydrophilic polymers hydroxypropyl methylcellulose (HPMC) and hydroxypropyl cellulose (HPC). The duration of rifampicin and isoniazid release could be tailored by varying the polymer type, polymer ratio, and processing techniques. Further, Eudragit L100-55 was incorporated in the matrix tablets to compensate for the pH-dependent release of rifampicin. Rifampicin was found to follow a linear release profile with time from the HPMC formulations. In the case of the formulations with HPC, there was an initial higher release in simulated gastric fluid (SGF) followed by zero-order release profiles in simulated intestinal fluid (SIFsp) for rifampicin. The release of isoniazid was found to be predominantly by a diffusion mechanism in the HPMC formulations, while with the HPC formulations release was due to a combination of diffusion and erosion. The initial release of rifampicin from HPC was sufficiently high, thus ruling out the need to incorporate a separate loading dose. The initial release of isoniazid was sufficiently high in all formulations. Thus, with the use of suitable polymers or polymer combinations and with proper optimization of the processing techniques, it was possible to design C.R. formulations of the rifampicin and isoniazid combination that provide sufficient initial release and release extension up to 24 h for both drugs despite the wide variations in their physicochemical properties.

  16. Transonic and Supersonic Wind-Tunnel Tests of Wing-Body Combinations Designed for High Efficiency at a Mach Number of 1.41

    NASA Technical Reports Server (NTRS)

    Grant, Frederick C.; Sevier, John R., Jr.

    1960-01-01

    Wind-tunnel force tests of a number of wing-body combinations designed for high lift-drag ratio at a Mach number of 1.41 are reported. Five wings and six bodies were used in making up the various wing-body combinations investigated. All the wings had the same highly swept discontinuously tapered plan form with NACA 65A-series airfoil sections 4 percent thick at the root tapering linearly to 3 percent thick at the tip. The bodies were based on the area distribution of a Sears-Haack body of revolution for minimum drag with a given length and volume. These wings and bodies were used to determine the effects of wing twist, wing twist and camber, wing leading-edge droop, a change from circular to elliptical body cross-sectional shape, and body indentation by the area-rule and streamline methods. The supersonic test Mach numbers were 1.41 and 2.01. The transonic test Mach number range was from 0.6 to 1.2. For the transition-fixed condition and at a Reynolds number of 2.7 x 10^6 based on the mean aerodynamic chord, the maximum value of lift-drag ratio at a Mach number of 1.41 was 9.6 for a combination with a twisted wing and an indented body of elliptical cross section. The tests indicated that the transonic rise in minimum drag was low and did not change appreciably up to the highest test Mach number of 2.01. The lower values of lift-drag ratio obtained at a Mach number of 2.01 can be attributed to the increase of drag due to lift with Mach number.

  17. 77 FR 36321 - Self-Regulatory Organizations; New York Stock Exchange LLC; Notice of Filing of Proposed Rule...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-18

    ... policy in connection with the previously proposed combination of NYSE Euronext and Deutsche Börse AG... European Commission's decision to prohibit the Combination, NYSE Euronext and Deutsche Börse agreed to...

  18. Knowledge-based approach to video content classification

    NASA Astrophysics Data System (ADS)

    Chen, Yu; Wong, Edward K.

    2001-01-01

    A framework for video content classification using a knowledge-based approach is herein proposed. This approach is motivated by the fact that videos are rich in semantic contents, which can best be interpreted and analyzed by human experts. We demonstrate the concept by implementing a prototype video classification system using the rule-based programming language CLIPS 6.05. Knowledge for video classification is encoded as a set of rules in the rule base. The left-hand sides of rules contain high-level and low-level features, while the right-hand sides of rules contain intermediate results or conclusions. Our current implementation includes features computed from motion, color, and text extracted from video frames. Our current rule set allows us to classify input video into one of five classes: news, weather reporting, commercial, basketball, and football. We use MYCIN's inexact reasoning method for combining evidence and for handling the uncertainties in the features and in the classification results. We obtained good results in a preliminary experiment, which demonstrated the validity of the proposed approach.
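
    MYCIN's inexact reasoning combines certainty factors (CFs) from independent rules into a single CF. A minimal sketch of the standard EMYCIN combination function follows; the example CF values are invented for illustration.

      def combine_cf(a, b):
          # EMYCIN combination of two certainty factors in [-1, 1].
          if a >= 0 and b >= 0:
              return a + b * (1 - a)
          if a < 0 and b < 0:
              return a + b * (1 + a)
          return (a + b) / (1 - min(abs(a), abs(b)))

      # e.g. two rules each suggest class "news", with CFs 0.6 and 0.5:
      print(combine_cf(0.6, 0.5))   # -> 0.8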

  20. Theory of chromatic noise masking applied to testing linearity of S-cone detection mechanisms.

    PubMed

    Giulianini, Franco; Eskew, Rhea T

    2007-09-01

    A method for testing the linearity of cone combination of chromatic detection mechanisms is applied to S-cone detection. This approach uses the concept of mechanism noise, the noise as seen by a postreceptoral neural mechanism, to represent the effects of superposing chromatic noise components in elevating thresholds and leads to a parameter-free prediction for a linear mechanism. The method also provides a test for the presence of multiple linear detectors and off-axis looking. No evidence for multiple linear mechanisms was found when using either S-cone increment or decrement tests. The results for both S-cone test polarities demonstrate that these mechanisms combine their cone inputs nonlinearly.

  1. An expert system for choosing the best combination of options in a general purpose program for automated design synthesis

    NASA Technical Reports Server (NTRS)

    Rogers, J. L.; Barthelemy, J.-F. M.

    1986-01-01

    An expert system called EXADS has been developed to aid users of the Automated Design Synthesis (ADS) general purpose optimization program. ADS has approximately 100 combinations of strategy, optimizer, and one-dimensional search options from which to choose, and it is difficult for a nonexpert to make this choice. This expert system aids the user in choosing the best combination of options based on the user's knowledge of the problem and the expert knowledge stored in the knowledge base. The knowledge base is divided into three categories: constrained problems, unconstrained problems, and constrained problems being treated as unconstrained problems. The inference engine and rules are written in LISP; the system contains about 200 rules and executes on DEC-VAX (with Franz-LISP) and IBM PC (with IQ-LISP) computers.

  2. A hybrid learning method for constructing compact rule-based fuzzy models.

    PubMed

    Zhao, Wanqing; Niu, Qun; Li, Kang; Irwin, George W

    2013-12-01

    The Takagi–Sugeno–Kang-type rule-based fuzzy model has found many applications in different fields; a major challenge is, however, to build a compact model with optimized model parameters which leads to satisfactory model performance. To produce a compact model, most existing approaches mainly focus on selecting an appropriate number of fuzzy rules. In contrast, this paper considers not only the selection of fuzzy rules but also the structure of each rule premise and consequent, leading to the development of a novel compact rule-based fuzzy model. Here, each fuzzy rule is associated with two sets of input attributes, in which the first is used for constructing the rule premise and the other is employed in the rule consequent. A new hybrid learning method combining the modified harmony search method with a fast recursive algorithm is hereby proposed to determine the structure and the parameters for the rule premises and consequents. This is a hard mixed-integer nonlinear optimization problem, and the proposed hybrid method solves the problem by employing an embedded framework, leading to a significantly reduced number of model parameters and a small number of fuzzy rules with each being as simple as possible. Results from three examples are presented to demonstrate the compactness (in terms of the number of model parameters and the number of rules) and the performance of the fuzzy models obtained by the proposed hybrid learning method, in comparison with other techniques from the literature.

  3. Ranking Forestry Investments With Parametric Linear Programming

    Treesearch

    Paul A. Murphy

    1976-01-01

    Parametric linear programming is introduced as a technique for ranking forestry investments under multiple constraints; it combines the advantages of simple ranking and linear programming as capital budgeting tools.
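
    A minimal sketch of the idea in Python, using scipy's linprog: sweep a budget parameter and observe which projects enter the optimal plan, which induces a ranking. The NPVs and costs are invented for illustration.

      import numpy as np
      from scipy.optimize import linprog

      npv  = np.array([40.0, 60.0, 25.0, 90.0])      # per-project NPVs (assumed)
      cost = np.array([100.0, 180.0, 50.0, 300.0])   # capital requirements (assumed)

      for budget in (150.0, 300.0, 500.0):           # parametric sweep of the budget constraint
          res = linprog(c=-npv,                      # linprog minimizes, so negate NPV
                        A_ub=cost[None, :], b_ub=[budget],
                        bounds=[(0, 1)] * len(npv))  # LP relaxation of accept/reject
          print(budget, np.round(res.x, 2), round(-res.fun, 1))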

  4. A Methodology for Multihazards Load Combinations of Earthquake and Heavy Trucks for Bridges

    PubMed Central

    Wang, Xu; Sun, Baitao

    2014-01-01

    Load combinations of earthquakes and heavy trucks are an important issue in multihazards bridge design. Current load and resistance factor design (LRFD) specifications usually treat extreme hazards alone and have no probabilistic basis for extreme load combinations. Earthquake load and heavy truck load are random processes with distinct characteristics, and the maximum combined load is not the simple superposition of their individual maxima. The traditional Ferry Borges-Castaneda model, which accounts for load duration and occurrence probability, describes well the conversion of random processes to random variables and their combination, but it places strict constraints on the choice of time interval needed to obtain precise results. Turkstra's rule combines one load at its maximum value over the bridge's service life with another load at its instantaneous (or mean) value, which appears more rational, but the results are generally unconservative. Therefore, a modified model is presented here that retains the advantages of both the Ferry Borges-Castaneda model and Turkstra's rule. The modified model is based on conditional probability, which converts random processes to random variables relatively easily and accounts for the non-maximum factor in load combinations. Earthquake load and heavy truck load combinations are employed to illustrate the model. Finally, the results of a numerical simulation are used to verify the feasibility and rationality of the model. PMID:24883347
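
    Turkstra's rule takes each load in turn at its lifetime maximum, adds the companion loads at point-in-time values, and keeps the worst case. A tiny sketch, with characteristic values invented purely for illustration:

      # Lifetime-maximum characteristic load effects (assumed, normalized units)
      eq_max, truck_max = 1.8, 1.2
      # Companion (point-in-time) values of the same load effects (assumed)
      eq_pit, truck_pit = 0.2, 0.6

      # Turkstra's rule: one load at its lifetime maximum, the other at its
      # point-in-time value; the design combination is the worst case.
      combined = max(eq_max + truck_pit, truck_max + eq_pit)
      print(combined)   # -> 2.4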

  5. Rule Mining Techniques to Predict Prokaryotic Metabolic Pathways.

    PubMed

    Saidi, Rabie; Boudellioua, Imane; Martin, Maria J; Solovyev, Victor

    2017-01-01

    It is becoming more evident that computational methods are needed for the identification and mapping of pathways in new genomes. We introduce an automatic annotation system (ARBA4Path: Association Rule-Based Annotator for Pathways) that utilizes rule mining techniques to predict metabolic pathways across a wide range of prokaryotes. It was demonstrated that specific combinations of protein domains (recorded in our rules) strongly determine the pathways in which proteins are involved and thus provide information that lets us assign pathway membership to proteins of a given prokaryotic taxon very accurately (with a precision of 0.999 and a recall of 0.966). Our system can be used to enhance the quality of automatically generated annotations as well as to annotate proteins with unknown function. The prediction models are represented in the form of human-readable rules, and they can be used effectively to add absent pathway information to many proteins in the UniProtKB/TrEMBL database.

  6. C-Language Integrated Production System, Version 5.1

    NASA Technical Reports Server (NTRS)

    Riley, Gary; Donnell, Brian; Ly, Huyen-Anh VU; Culbert, Chris; Savely, Robert T.; Mccoy, Daniel J.; Giarratano, Joseph

    1992-01-01

    CLIPS 5.1 provides a cohesive software tool for handling a wide variety of knowledge, with support for three different programming paradigms: rule-based, object-oriented, and procedural. Rule-based programming provides representation of knowledge through heuristics. Object-oriented programming enables modeling of complex systems as modular components. Procedural programming enables CLIPS to represent knowledge in ways similar to those allowed in such languages as C, Pascal, Ada, and LISP. Working with CLIPS 5.1, one can develop expert-system software by use of rule-based programming only, object-oriented programming only, procedural programming only, or combinations of the three.

  7. Constrained dipole oscillator strength distributions, sum rules, and dispersion coefficients for Br2 and BrCN

    NASA Astrophysics Data System (ADS)

    Kumar, Ashok; Thakkar, Ajit J.

    2017-03-01

    Dipole oscillator strength distributions for Br2 and BrCN are constructed from photoabsorption cross-sections combined with constraints provided by the Kuhn-Reiche-Thomas sum rule, the high-energy behavior of the dipole-oscillator-strength density and molar refractivity data when available. The distributions are used to predict dipole sum rules S(k), mean excitation energies I(k), and van der Waals C6 coefficients. Coupled-cluster calculations of the static dipole polarizabilities of Br2 and BrCN are reported for comparison with the values of S(-2) extracted from the distributions.
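
    For reference, the dipole sum rules referred to here have the standard form (in atomic units), and S(-2) equals the static dipole polarizability, which is why coupled-cluster polarizabilities provide an independent check:

      $$ S(k) = \int \frac{df}{dE}\,E^{k}\,dE, \qquad S(0) = N \ \text{(Kuhn-Reiche-Thomas)}, \qquad \alpha(0) = S(-2) $$

    The C6 dispersion coefficient then follows from the Casimir-Polder integral over dynamic polarizabilities at imaginary frequency, $C_6 = \tfrac{3}{\pi}\int_0^\infty \alpha_A(i\omega)\,\alpha_B(i\omega)\,d\omega$.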

  8. Nested subcritical flows within supercritical systems

    NASA Technical Reports Server (NTRS)

    Hendricks, R. C.; Braun, M. J.; Wheeler, R. L., III; Mullen, R. L.

    1985-01-01

    In supercritical systems the design inlet and outlet pressures are maintained above the thermodynamic critical pressure P sub C. Designers rely on this simple rule of thumb to circumvent problems associated with a subcritical pressure regime nested within the supercritical pressure system, along with the uncertainties in heat transfer, fluid mechanics, and thermophysical property variations. The simple rule of thumb is adequate in many low-power designs but is inadequate for high-performance turbomachines and linear systems, where nested two-phase regions can exist. Examples for a free-jet expansion with backpressure greater than P sub C and a rotor (bearing) with ambient pressure greater than P sub C illustrate the existence of subcritical pressure regimes nested within supercritical systems.

  9. A refinement of the combination equations for evaporation

    USGS Publications Warehouse

    Milly, P.C.D.

    1991-01-01

    Most combination equations for evaporation rely on a linear expansion of the saturation vapor-pressure curve around the air temperature. Because the temperature at the surface may differ from this temperature by several degrees, and because the saturation vapor-pressure curve is nonlinear, this approximation leads to a certain degree of error in those evaporation equations. It is possible, however, to introduce higher-order polynomial approximations for the saturation vapor-pressure curve and to derive a family of explicit equations for evaporation, having any desired degree of accuracy. Under the linear approximation, the new family of equations for evaporation reduces, in particular cases, to the combination equations of H. L. Penman (Natural evaporation from open water, bare soil and grass, Proc. R. Soc. London, Ser. A193, 120-145, 1948) and of subsequent workers. Comparison of the linear and quadratic approximations leads to a simple approximate expression for the error associated with the linear case. Equations based on the conventional linear approximation consistently underestimate evaporation, sometimes by a substantial amount. © 1991 Kluwer Academic Publishers.
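
    The linearization in question replaces the saturation vapor-pressure curve by its tangent at air temperature T_a; the quadratic refinement keeps the next Taylor term. Schematically:

      $$ e_s(T_s) \approx e_s(T_a) + \Delta\,(T_s - T_a) + \tfrac{1}{2}\,e_s''(T_a)\,(T_s - T_a)^2, \qquad \Delta = \left.\frac{de_s}{dT}\right|_{T_a} $$

    Dropping the quadratic term recovers the Penman combination form. Because e_s(T) is convex, the tangent lies below the curve whenever T_s differs from T_a, which is consistent with the systematic underestimation of evaporation noted in the abstract.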

  10. AUTOMOTIVE DIESEL MAINTENANCE 1. UNIT X, USE OF MEASURING TOOLS IN DIESEL MAINTENANCE.

    ERIC Educational Resources Information Center

    Human Engineering Inst., Cleveland, OH.

    THIS MODULE OF A 30-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF THE PRECISION MEASURING TOOLS USED IN DIESEL ENGINE MAINTENANCE. TOPICS ARE (1) LINEAR MEASURE, (2) MEASURING WITH RULES AND TAPES, (3) GETTING PRECISION WITH MICROMETERS, (4) DIAL INDICATORS, (5) TACHOMETERS, (6) TORQUE WRENCH, (7) THICKNESS (FEELER) GAGE, AND (8) VALVE…

  11. Recursion Removal as an Instructional Method to Enhance the Understanding of Recursion Tracing

    ERIC Educational Resources Information Center

    Velázquez-Iturbide, J. Ángel; Castellanos, M. Eugenia; Hijón-Neira, Raquel

    2016-01-01

    Recursion is one of the most difficult programming topics for students. In this paper, an instructional method is proposed to enhance students' understanding of recursion tracing. The proposal is based on the use of rules to translate linear recursion algorithms into equivalent, iterative ones. The paper has two main contributions: the…
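
    As a minimal illustration of the kind of translation rule discussed (not taken from the paper), a linearly recursive function can be rewritten iteratively by introducing an accumulator that plays the role of the call stack:

      def total_rec(xs):
          # Linear recursion: exactly one recursive call per activation.
          if not xs:
              return 0
          return xs[0] + total_rec(xs[1:])

      def total_iter(xs):
          # Translated form: an accumulator replaces the pending additions.
          acc = 0
          for x in xs:
              acc += x
          return acc

      assert total_rec([1, 2, 3]) == total_iter([1, 2, 3]) == 6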

  12. EPE fundamentals and impact of EUV: Will traditional design-rule calculations work in the era of EUV?

    NASA Astrophysics Data System (ADS)

    Gabor, Allen H.; Brendler, Andrew C.; Brunner, Timothy A.; Chen, Xuemei; Culp, James A.; Levinson, Harry J.

    2018-03-01

    The relationship between edge placement error, semiconductor design-rule determination and predicted yield in the era of EUV lithography is examined. This paper starts with the basics of edge placement error and then builds up to design-rule calculations. We show that edge placement error (EPE) definitions can be used as the building blocks for design-rule equations, but that in the last several years the term "EPE" has been used in the literature to refer to many patterning errors that are not EPE. We then explore the concept of "Good Fields" and use it to predict the n-sigma value needed for design-rule determination. Specifically, fundamental yield calculations based on the failure opportunities per chip are used to determine at what n-sigma "value" design rules need to be tested to ensure high yield. The "value" can be a space between two features, an intersect area between two features, a minimum area of a feature, etc. It is shown that across-chip variation of design-rule-relevant values needs to be tested at sigma values between seven and eight, much higher than the four-sigma values traditionally used for design-rule determination. After recommending new statistics for design-rule calculations, the paper examines the impact of EUV lithography on sources of variation important for design-rule calculations. We show that stochastics can be treated as an effective dose variation that is fully sampled across every chip. Combining the increased within-chip variation from EUV with the requirement that across-chip variation must not cause yield loss at significantly higher sigma values than have traditionally been considered, we conclude that across-wafer, wafer-to-wafer and lot-to-lot variation will have to overscale for any technology introducing EUV lithography where stochastic noise is a significant fraction of the effective dose variation. We emphasize stochastic effects on edge placement error distributions and appropriate design-rule setting. While CD distributions with long tails coming from stochastic effects do bring increased risk of failure (especially on chips that may have over a billion failure opportunities per layer), there are other sources of variation that have sharp cutoffs, i.e., no tails. We review these sources and show how distributions with different skew and kurtosis values combine.
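
    The n-sigma requirement follows from simple binomial yield arithmetic: with roughly a billion failure opportunities per chip per layer, even a modest chip-yield target forces a tiny per-opportunity failure probability. A sketch, with the yield target assumed for illustration:

      from scipy.stats import norm

      opportunities = 1e9   # failure opportunities per chip per layer (order of magnitude from the text)
      target_yield = 0.99   # acceptable chip yield for this check (assumed)

      # Per-opportunity failure probability implied by the chip-yield target:
      p_fail = 1.0 - target_yield ** (1.0 / opportunities)

      # One-sided sigma level at which each design-rule value must still be safe:
      print(norm.isf(p_fail))   # about 6.7 for these inputs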

  13. [Compatibility law of Baizhi formulae and molecular mechanism of core herbal pair "Baizhi-Chuanxiong"].

    PubMed

    Su, Jin; Tang, Shi-Huan; Guo, Fei-Fei; Li, De-Feng; Zhang, Yi; Xu, Hai-Yu; Yang, Hong-Jun

    2018-04-01

    By using the traditional Chinese medicine inheritance support system (TCMISS), the prescription rules of Baizhi formulae were analyzed and the core herbal pair "Baizhi-Chuanxiong" was obtained. Through systemic analysis of the prescription rules of "Baizhi-Chuanxiong", combined with pharmacological thinking on "Baizhi-Chuanxiong" in treating headache, this study aimed to identify the combination rules containing Baizhi and its molecular mechanisms for treating headache, and to provide a theoretical basis for further research on Baizhi and its formulae. Totally 3 887 prescriptions were included in this study, involving 2 534 Chinese herbs. With a support degree of 20% in the analysis, the 16 most commonly used drug combinations were screened, mainly used to treat 15 types of diseases. Baizhi was often used to treat headache, and the core combination "Baizhi-Chuanxiong" was also often used for this purpose, consistent with ancient records. A chemical database was established; the headache and migraine disease targets were then retrieved and added to the database to build the "compounds-targets-pathways" core network of "Baizhi-Chuanxiong" on the internet-based computation platform for IP of TCM (TCM-IP). TCM-IP was then applied to study the molecular mechanism of "Baizhi-Chuanxiong" in the treatment of headache. The results suggested that 37 chemical compounds in the core combination "Baizhi-Chuanxiong" were closely related to headache treatment, by adjusting serotonin levels or acting on inflammation-related targets and energy metabolism pathways such as purine metabolism, pyruvate metabolism, fatty acid degradation, carbon metabolism and gluconeogenesis. Copyright© by the Chinese Pharmaceutical Association.

  14. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set), are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications in refinery production planning and a batch process scheduling problem are presented. PMID:21935263
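
    As background (a standard result, not specific to this paper): for a linear constraint with interval (box) uncertainty in the left-hand-side coefficients, a_j in [abar_j - ahat_j, abar_j + ahat_j], the robust counterpart adds the worst-case perturbation:

      $$ \sum_j \bar{a}_j x_j + \sum_j \hat{a}_j |x_j| \le b $$

    The absolute values are linearized with auxiliary variables u_j >= x_j, u_j >= -x_j, so the counterpart remains a linear program; ellipsoidal uncertainty sets instead introduce a second-order-cone term.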

  15. Some Examples of the Applications of the Transonic and Supersonic Area Rules to the Prediction of Wave Drag

    NASA Technical Reports Server (NTRS)

    Nelson, Robert L.; Welsh, Clement J.

    1960-01-01

    The experimental wave drags of bodies and wing-body combinations over a wide range of Mach numbers are compared with the computed drags utilizing a 24-term Fourier series application of the supersonic area rule and with the results of equivalent-body tests. The results indicate that the equivalent-body technique provides a good method for predicting the wave drag of certain wing-body combinations at and below a Mach number of 1. At Mach numbers greater than 1, the equivalent-body wave drags can be misleading. The wave drags computed using the supersonic area rule are shown to be in best agreement with the experimental results for configurations employing the thinnest wings. The wave drags for the bodies of revolution presented in this report are predicted to a greater degree of accuracy by using the frontal projections of oblique areas than by using normal areas. A rapid method of computing wing area distributions and area-distribution slopes is given in an appendix.

  16. Galerkin finite difference Laplacian operators on isolated unstructured triangular meshes by linear combinations

    NASA Technical Reports Server (NTRS)

    Baumeister, Kenneth J.

    1990-01-01

    The Galerkin weighted residual technique using linear triangular weight functions is employed to develop finite difference formulae in Cartesian coordinates for the Laplacian operator on isolated unstructured triangular grids. The weighted residual coefficients associated with the weak formulation of the Laplacian operator along with linear combinations of the residual equations are used to develop the algorithm. The algorithm was tested for a wide variety of unstructured meshes and found to give satisfactory results.
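
    For a single linear triangle, the Galerkin weak form of the Laplacian yields a standard 3x3 element matrix. A minimal sketch of that standard result follows (it illustrates the building block, not the paper's specific linear-combination construction):

      import numpy as np

      def laplace_stiffness(xy):
          # xy: 3x2 array of triangle vertex coordinates, counter-clockwise.
          x, y = xy[:, 0], xy[:, 1]
          b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
          c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
          area = 0.5 * (x @ b)   # signed area; positive for CCW ordering
          return (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)

      K = laplace_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]))
      print(K.sum(axis=1))       # rows sum to zero: constants lie in the kernel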

  17. Identifying suitable land for alternative crops in a drying climate: soil salinity, texture and topographic conditions for the growth of old man saltbush (Atriplex nummularia)

    NASA Astrophysics Data System (ADS)

    Holmes, K. W.; Barrett-Lennard, E. G.; Altman, M.

    2011-12-01

    Experiments conducted under controlled conditions clearly show that the growth and survival of plants on saltland is affected by both the levels of salinity and waterlogging (or depth to water-table) in the soil. Different plant species thrive under varying combinations of these growth constraints. However in natural settings, short distance spatial variability in soil properties and subtle topographic features often complicate the definition of saline and soil hydrological conditions; additional factors may also overprint the trends identified under controlled conditions, making it difficult to define the physical settings where planting is economically viable. We investigated the establishment and growth of old man saltbush (Atriplex nummularia) in relation to variable soil-landscape conditions across an experimental site in southwestern Australia where the combination of high salinity and occasional seasonal waterlogging ruled out the growth of traditional crops and pastures. Saltbush can be critical supplemental feed in the dry season, providing essential nutrients for sheep in combination with sufficient water and dry feed (hay). We applied a range of modeling approaches including classification and regression trees and generalized linear models to statistically characterize these plant-environment relationships, and extend them spatially using full cover raster covariate datasets. Plant deaths could be consistently predicted (97% correct classification of independent dataset) using a combination of topographic variables, salinity, soil mineralogical information, and depth to the water table. Plant growth patterns were more difficult to predict, particularly after several years of grazing, however variation in plant volume was well-explained with a linear model (r2 = 0.6, P < 0.0001). All types of environmental data were required, supporting the starting hypothesis that saltland pasture success is driven by water movement in the landscape. The final selected covariates for modeling were a digital elevation model and derivatives, soil mineralogy, competitors for water (adjacent trees) and soil salinity (measured with an EM38). Our exploration of strengths and weaknesses of extrapolating simple relationships determined under controlled conditions to the field vindicates the importance of both approaches. Landholders often view the idea of the productive use of saltland with skepticism. The challenge is to use the combined datasets from glasshouse and field experiments to develop information guidelines for landholders that maximize the chances of revegetation success. Water availability, waterlogging, quality of the shallow groundwater, and secondary salinity are dominant processes that impact on agriculture in southwestern Australia. Improving our understanding of their interactions and effect on productivity will help adapt agricultural management to changing environmental conditions in the future.

  18. Bilinearity in Spatiotemporal Integration of Synaptic Inputs

    PubMed Central

    Li, Songting; Liu, Nan; Zhang, Xiao-hui; Zhou, Douglas; Cai, David

    2014-01-01

    Neurons process information via the integration of synaptic inputs from dendrites. Many experimental results demonstrate that dendritic integration can be highly nonlinear, yet few theoretical analyses have been performed to obtain a precise quantitative characterization analytically. Based on asymptotic analysis of a two-compartment passive cable model, given a pair of time-dependent synaptic conductance inputs, we derive a bilinear spatiotemporal dendritic integration rule. The summed somatic potential can be well approximated by the linear summation of the two postsynaptic potentials elicited separately, plus a third additional bilinear term proportional to their product with a proportionality coefficient. The rule is valid for a pair of synaptic inputs of all types, including excitation-inhibition, excitation-excitation, and inhibition-inhibition. In addition, the rule is valid during the whole dendritic integration process for a pair of synaptic inputs with arbitrary input time differences and input locations. The coefficient is demonstrated to be nearly independent of the input strengths but is dependent on input times and input locations. This rule is then verified through simulation of a realistic pyramidal neuron model and in electrophysiological experiments of rat hippocampal CA1 neurons. The rule is further generalized to describe the spatiotemporal dendritic integration of multiple excitatory and inhibitory synaptic inputs. The integration of multiple inputs can be decomposed into the sum of all possible pairwise integrations, where each paired integration obeys the bilinear rule. This decomposition leads to a graph representation of dendritic integration, which can be viewed as functionally sparse. PMID:25521832
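
    In symbols (the coefficient's symbol was dropped in this record; it is written here as k(t)), the bilinear rule for two inputs whose separately elicited somatic potentials are V_1 and V_2 reads:

      $$ V_{12}(t) \approx V_1(t) + V_2(t) + k(t)\,V_1(t)\,V_2(t) $$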

  19. Finite Element Simulation of Articular Contact Mechanics with Quadratic Tetrahedral Elements

    PubMed Central

    Maas, Steve A.; Ellis, Benjamin J.; Rawlins, David S.; Weiss, Jeffrey A.

    2016-01-01

    Although it is easier to generate finite element discretizations with tetrahedral elements, trilinear hexahedral (HEX8) elements are more often used in simulations of articular contact mechanics. This is due to numerical shortcomings of linear tetrahedral (TET4) elements, limited availability of quadratic tetrahedron elements in combination with effective contact algorithms, and the perceived increased computational expense of quadratic finite elements. In this study we implemented both ten-node (TET10) and fifteen-node (TET15) quadratic tetrahedral elements in FEBio (www.febio.org) and compared their accuracy, robustness in terms of convergence behavior and computational cost for simulations relevant to articular contact mechanics. Suitable volume integration and surface integration rules were determined by comparing the results of several benchmark contact problems. The results demonstrated that the surface integration rule used to evaluate the contact integrals for quadratic elements affected both convergence behavior and accuracy of predicted stresses. The computational expense and robustness of both quadratic tetrahedral formulations compared favorably to the HEX8 models. Of note, the TET15 element demonstrated superior convergence behavior and lower computational cost than both the TET10 and HEX8 elements for meshes with similar numbers of degrees of freedom in the contact problems that we examined. Finally, the excellent accuracy and relative efficiency of these quadratic tetrahedral elements was illustrated by comparing their predictions with those for a HEX8 mesh for simulation of articular contact in a fully validated model of the hip. These results demonstrate that TET10 and TET15 elements provide viable alternatives to HEX8 elements for simulation of articular contact mechanics. PMID:26900037

  20. The combining of multiple hemispheric resources in learning-disabled and skilled readers' recall of words: a test of three information-processing models.

    PubMed

    Swanson, H L

    1987-01-01

    Three theoretical models (additive, independence, maximum rule) that characterize and predict the influence of independent hemispheric resources on learning-disabled and skilled readers' simultaneous processing were tested. Predictions related to word recall performance during simultaneous encoding conditions (dichotic listening task) were made from unilateral (dichotic listening task) presentations. The maximum rule model best characterized both ability groups in that simultaneous encoding produced no better recall than unilateral presentations. While the results support the hypothesis that both ability groups use similar processes in the combining of hemispheric resources (i.e., weak/dominant processing), ability group differences do occur in the coordination of such resources.

  1. Application of ant colony Algorithm and particle swarm optimization in architectural design

    NASA Astrophysics Data System (ADS)

    Song, Ziyi; Wu, Yunfa; Song, Jianhua

    2018-02-01

    By studying the development of the ant colony and particle swarm algorithms, this paper expounds their core ideas, explores how these algorithms can be combined with architectural design, and sums up application rules for intelligent algorithms in architectural design. Combining the characteristics of the two algorithms, the authors obtain a research route and a way of realizing intelligent algorithms in architectural design, and establish algorithm rules to assist it. Taking intelligent algorithms as a starting point for architectural design research, the authors provide a theoretical foundation for the ant colony and particle swarm algorithms in architectural design, broaden the range of application of intelligent algorithms in architectural design, and provide a new idea for architects.

  2. [Prescription rules of preparations containing Crataegi Fructus in Chinese patent drug].

    PubMed

    Geng, Ya; Ma, Yue-Xiang; Xu, Hai-Yu; Li, Jun-Fang; Tang, Shi-Huan; Yang, Hong-Jun

    2016-08-01

    To analyze the prescription rules of preparations containing Crataegi Fructus in the drug standards of the People's Republic of China Ministry of Public Health-Chinese Patent Drug (hereinafter referred to as Chinese patent drug), and to provide references for clinical application and the research and development of new medicines. Based on TCMISS (V2.5), the prescriptions containing Crataegi Fructus in Chinese patent drug were collected to build the database; association rules, frequency statistics and other data mining methods were used to analyze the disease syndromes, common drug compatibility and prescription rules. There were a total of 308 prescriptions containing Crataegi Fructus, involving 499 kinds of Chinese medicines and 34 commonly used drug combinations, mainly for 18 kinds of diseases. Drug combination analysis identified "Crataegi Fructus-Citri Reticulatae Pericarpium" and "Crataegi Fructus-Poria" as the high-frequency herb pairs, and "stagnation" and "diarrhea" as the high-frequency diseases. The results indicated that Crataegi Fructus in different herb pairs had roughly the same function, while its therapeutic effect differed across diseases. The prescriptions containing Crataegi Fructus in Chinese patent drug had a digestive effect and were widely used in clinical application, often together with spleen-strengthening medicines to achieve different treatment effects; the prescription rules reflected the prescription characteristics of Crataegi Fructus for different diseases, providing a basis for its clinically scientific application and the research and development of new medicines. Copyright© by the Chinese Pharmaceutical Association.

  3. 12 CFR 32.5 - Combination rules.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... debt restructuring approved by the OCC, upon request by a bank for application of the non combination... external debt management; and (D) Whether the restructuring includes features of debt or debt-service... generally liable for the debts or actions of the partnership, joint venture, or association, and those...

  4. 12 CFR 32.5 - Combination rules.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... debt restructuring approved by the OCC, upon request by a bank for application of the non combination... external debt management; and (D) Whether the restructuring includes features of debt or debt-service... generally liable for the debts or actions of the partnership, joint venture, or association, and those...

  5. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  6. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  7. 26 CFR 148.1-5 - Constructive sale price.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... of articles listed in Chapter 32 of the Internal Revenue Code (other than combinations) that embraces... section. For the rule applicable to combinations of two or more articles, see subdivision (iv) of this..., perforating, cutting, and dating machines, and other check protector machine devices; (o) Taxable cash...

  8. Ethnicity identification from face images

    NASA Astrophysics Data System (ADS)

    Lu, Xiaoguang; Jain, Anil K.

    2004-08-01

    Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition: useful as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
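
    Product-rule fusion multiplies the per-scale classifier posteriors and renormalizes; the class with the largest fused posterior wins. A minimal sketch, with the posterior values invented for illustration:

      import numpy as np

      # posts[s, c] = P(class c | image at scale s); classes: 0 = Asian, 1 = non-Asian.
      posts = np.array([[0.70, 0.30],
                        [0.55, 0.45],
                        [0.80, 0.20]])   # one row per scale (assumed values)

      fused = posts.prod(axis=0)         # product rule across scales
      fused /= fused.sum()               # renormalize to a posterior
      print(fused, fused.argmax())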

  9. Altruism Can Proliferate through Population Viscosity despite High Random Gene Flow

    PubMed Central

    Schonmann, Roberto H.; Vicente, Renato; Caticha, Nestor

    2013-01-01

    The ways in which natural selection can allow the proliferation of cooperative behavior have long been seen as a central problem in evolutionary biology. Most of the literature has focused on interactions between pairs of individuals and on linear public goods games. This emphasis has led to the conclusion that even modest levels of migration would pose a serious problem to the spread of altruism through population viscosity in group structured populations. Here we challenge this conclusion, by analyzing evolution in a framework which allows for complex group interactions and random migration among groups. We conclude that contingent forms of strong altruism that benefits equally all group members, regardless of kinship and without greenbeard effects, can spread when rare under realistic group sizes and levels of migration, due to the assortment of genes resulting only from population viscosity. Our analysis combines group-centric and gene-centric perspectives, allows for arbitrary strength of selection, and leads to extensions of Hamilton’s rule for the spread of altruistic alleles, applicable under broad conditions. PMID:23991035
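
    The classic form of Hamilton's rule, which the paper extends, states that an altruistic allele spreads when the assortment-weighted benefit exceeds the cost:

      $$ r\,b > c $$

    where b is the fitness benefit to recipients, c the cost to the altruist, and r the genetic assortment (relatedness) between them; in the framework above, population viscosity is what generates the assortment r even under random migration.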

  10. Space coding for sensorimotor transformations can emerge through unsupervised learning.

    PubMed

    De Filippo De Grazia, Michele; Cutini, Simone; Lisi, Matteo; Zorzi, Marco

    2012-08-01

    The posterior parietal cortex (PPC) is fundamental for sensorimotor transformations because it combines multiple sensory inputs and posture signals into different spatial reference frames that drive motor programming. Here, we present a computational model mimicking the sensorimotor transformations occurring in the PPC. A recurrent neural network with one layer of hidden neurons (restricted Boltzmann machine) learned a stochastic generative model of the sensory data without supervision. After the unsupervised learning phase, the activity of the hidden neurons was used to compute a motor program (a population code on a bidimensional map) through a simple linear projection and delta rule learning. The average motor error, calculated as the difference between the expected and the computed output, was less than 3°. Importantly, analyses of the hidden neurons revealed gain-modulated visual receptive fields, thereby showing that space coding for sensorimotor transformations similar to that observed in the PPC can emerge through unsupervised learning. These results suggest that gain modulation is an efficient coding strategy to integrate visual and postural information toward the generation of motor commands.
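
    The read-out stage described here is a linear projection trained with the delta rule. A minimal sketch, with layer sizes and learning rate invented for illustration and the hidden activity standing in for the RBM's output:

      import numpy as np

      rng = np.random.default_rng(1)
      n_hidden, n_out = 200, 64           # layer sizes (assumed)
      W = np.zeros((n_out, n_hidden))     # linear read-out weights
      eta = 0.05                          # learning rate (assumed)

      def delta_update(W, h, target):
          # Delta rule: dW = eta * (target - output) outer hidden
          out = W @ h
          return W + eta * np.outer(target - out, h)

      h = rng.random(n_hidden)                      # stand-in for RBM hidden activity
      target = np.zeros(n_out); target[10] = 1.0    # desired population code (placeholder)
      W = delta_update(W, h, target)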

  11. Paradigm Change: Alternate Approaches to Constitutive and Necking Models for Sheet Metal Forming

    NASA Astrophysics Data System (ADS)

    Stoughton, Thomas B.; Yoon, Jeong Whan

    2011-08-01

    This paper reviews recent work proposing paradigm changes for the currently popular approach to constitutive and failure modeling, focusing on the use of non-associated flow rules to enable greater flexibility to capture the anisotropic yield and flow behavior of metals using less complex functions than those needed under associated flow to achieve that same level of fidelity to experiment, and on the use of stress-based metrics to more reliably predict necking limits under complex conditions of non-linear forming. The paper discusses motivating factors and benefits in favor of both associated and non-associated flow models for metal forming, including experimental, theoretical, and practical aspects. This review is followed by a discussion of the topic of the forming limits, the limitations of strain analysis, the evidence in favor of stress analysis, the effects of curvature, bending/unbending cycles, triaxial stress conditions, and the motivation for the development of a new type of forming limit diagram based on the effective plastic strain or equivalent plastic work in combination with a directional parameter that accounts for the current stress condition.

  12. Carrier-induced ferromagnetism in the insulating Mn-doped III-V semiconductor InP

    NASA Astrophysics Data System (ADS)

    Bouzerar, Richard; May, Daniel; Löw, Ute; Machon, Denis; Melinon, Patrice; Zhou, Shengqiang; Bouzerar, Georges

    2016-09-01

    Although InP and GaAs have very similar band structures, their magnetic properties differ drastically. Critical temperatures in (In,Mn)P are much smaller than those of (Ga,Mn)As and scale linearly with Mn concentration, in contrast to the square-root behavior found in (Ga,Mn)As. Moreover, the magnetization curve in (In,Mn)P exhibits an unconventional shape, contrasting with the conventional one of well-annealed (Ga,Mn)As. By combining several theoretical approaches, the nature of ferromagnetism in Mn-doped InP is investigated. It appears that the magnetic properties are essentially controlled by the position of the Mn acceptor level. Our calculations are in excellent agreement with recent measurements of both critical temperatures and magnetizations. The results are only consistent with a Fermi level lying in an impurity band, ruling out the possibility of understanding the physical properties of Mn-doped InP within the valence band scenario. The quantitative success found here reveals a predictive tool of choice that should open interesting pathways to addressing magnetic properties in other compounds.

  13. Understanding and modeling the economics of ECM

    NASA Astrophysics Data System (ADS)

    Wells, Wayne E.; Edinbarough, Immanuel A.

    2004-12-01

    Traditional economic analysis methods for manufacturing decisions include only the clearly identified immediate cost and revenue streams. Environmental issues have generally been seen as costs, in the form of waste material losses, conformance tests and pre-discharge treatments. The components of the waste stream often purchased as raw materials, become liabilities at the "end of the pipe" and their intrinsic material value is seldom recognized. A new mathematical treatment of manufacturing economics is proposed in which the costs of separation are compared with the intrinsic value of the waste materials to show how their recovery can provide an economic advantage to the manufacturer. The model is based on a unique combination of thermodynamic analysis, economic modeling and linear optimization. This paper describes the proposed model, and examines case studies in which the changed decision rules have yielded significant savings while protecting the environment. The premise proposed is that by including the value of the waste materials in the profit objective of the firm and applying the appropriate technological solution, manufacturing processes can become closed systems in which losses approach zero and environmental problems are converted into economic savings.

  14. Color constancy: enhancing von Kries adaption via sensor transformations

    NASA Astrophysics Data System (ADS)

    Finlayson, Graham D.; Drew, Mark S.; Funt, Brian V.

    1993-09-01

    Von Kries adaptation has long been considered a reasonable vehicle for color constancy. Since the color constancy performance attainable via the von Kries rule strongly depends on the spectral response characteristics of the human cones, we consider the possibility of enhancing von Kries performance by constructing new `sensors' as linear combinations of the fixed cone sensitivity functions. We show that if surface reflectances are well-modeled by 3 basis functions and illuminants by 2 basis functions then there exists a set of new sensors for which von Kries adaptation can yield perfect color constancy. These new sensors can (like the cones) be described as long-, medium-, and short-wave sensitive; however, both the new long- and medium-wave sensors have sharpened sensitivities -- their support is more concentrated. The new short-wave sensor remains relatively unchanged. A similar sharpening of cone sensitivities has previously been observed in test and field spectral sensitivities measured for the human eye. We present simulation results demonstrating improved von Kries performance using the new sensors even when the restrictions on the illumination and reflectance are relaxed.
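
    A von Kries step is a diagonal scaling of sensor responses by the illuminant's responses in the same basis; the sensor-sharpening idea applies that scaling in a transformed basis. A minimal sketch, where T stands in for the (here unspecified) 3x3 sharpening transform:

      import numpy as np

      def von_kries(cone, illum_cone, T=np.eye(3)):
          # Map cone responses into the (possibly sharpened) sensor basis,
          # divide each channel by the illuminant's response in that basis,
          # then map back to cone space. T = identity gives plain von Kries.
          r = T @ cone
          w = T @ illum_cone
          return np.linalg.inv(T) @ (r / w)

      print(von_kries(np.array([0.4, 0.5, 0.2]), np.array([0.9, 1.0, 0.8])))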

  15. Optical Correlation of Images With Signal-Dependent Noise Using Constrained-Modulation Filter Devices

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1995-01-01

    Images with signal-dependent noise present challenges beyond those of images with additive white or colored signal-independent noise in terms of designing the optimal 4-f correlation filter that maximizes correlation-peak signal-to-noise ratio, or combinations of correlation-peak metrics. Determining the proper design becomes more difficult when the filter is to be implemented on a constrained-modulation spatial light modulator device. The design issues involved for updatable optical filters for images with signal-dependent film-grain noise and speckle noise are examined. It is shown that although design of the optimal linear filter in the Fourier domain is impossible for images with signal-dependent noise, proper nonlinear preprocessing of the images allows the application of previously developed design rules for optimal filters to be implemented on constrained-modulation devices. Thus the nonlinear preprocessing becomes necessary for correlation in optical systems with current spatial light modulator technology. These results are illustrated with computer simulations of images with signal-dependent noise correlated with binary-phase-only filters and ternary-phase-amplitude filters.

  16. Collective Movement in the Tibetan Macaques (Macaca thibetana): Early Joiners Write the Rule of the Game.

    PubMed

    Wang, Xi; Sun, Lixing; Li, Jinhua; Xia, Dongpo; Sun, Binghua; Zhang, Dao

    2015-01-01

    Collective behavior has recently attracted a great deal of interest in both the natural and social sciences. While the role of leadership has been closely scrutinized, the rules used by joiners in collective decision making have received far less attention. Two main hypotheses have been proposed concerning these rules: mimetism and quorum. Mimetism predicts that individuals are increasingly likely to join collective behavior as the number of participants increases. It can be further divided into selective mimetism, where relationships among the participants affect the process, and anonymous mimetism, where no such effect exists. Quorum predicts that a collective behavior occurs when the number of participants reaches a threshold. To probe which rule is used in collective decision making, we conducted a study on the joining process in a group of free-ranging Tibetan macaques (Macaca thibetana) in Huangshan, China, using a combination of all-occurrence and focal animal sampling methods. Our results show that the earlier individuals joined movements, the more central a role they occupied in the joining network. We also found that when fewer than three adults participated in the first five minutes of the joining process, no entire group movement occurred subsequently. When the number of these early joiners ranged from three to six, selective mimetism was used, meaning that the higher rank or closer social affiliation of early joiners could be among the factors group members weigh when deciding whether to participate in movements. When the number of early joiners reached or exceeded seven, a simple majority of the group studied, entire group movement always occurred, meaning that the quorum rule was used. Taken together, Macaca thibetana used a combination of selective mimetism and quorum, and early joiners played a key role in determining which rule was used.

  17. Non-additive interactions involving two distinct elements mediate sloppy-paired regulation by pair-rule transcription factors

    PubMed Central

    Prazak, Lisa; Fujioka, Miki; Gergen, J. Peter

    2010-01-01

    The relatively simple combinatorial rules responsible for establishing the initial metameric expression of sloppy-paired-1 (slp1) in the Drosophila blastoderm embryo make this system an attractive model for investigating the mechanism of regulation by pair rule transcription factors. This investigation of slp1 cis-regulatory architecture identifies two distinct elements, a proximal early stripe element (PESE) and a distal early stripe element (DESE) located from −3.1 kb to −2.5 kb and from −8.1 kb to −7.1 kb upstream of the slp1 promoter, respectively, that mediate this early regulation. The proximal element expresses only even-numbered stripes and mediates repression by Even-skipped (Eve) as well as by the combination of Runt and Fushi-tarazu (Ftz). A 272 basepair sub-element of PESE retains Eve-dependent repression, but is expressed throughout the even-numbered parasegments due to the loss of repression by Runt and Ftz. In contrast, the distal element expresses both odd and even-numbered stripes and also drives inappropriate expression in the anterior half of the odd-numbered parasegments due to an inability to respond to repression by Eve. Importantly, a composite reporter gene containing both early stripe elements recapitulates pair-rule gene-dependent regulation in a manner beyond what is expected from combining their individual patterns. These results indicate interactions involving distinct cis-elements contribute to the proper integration of pair-rule regulatory information. A model fully accounting for these results proposes that metameric slp1 expression is achieved through the Runt-dependent regulation of interactions between these two pair-rule response elements and the slp1 promoter. PMID:20435028

  18. Multiple origins of linear dunes on Earth and Titan

    USGS Publications Warehouse

    Rubin, David M.; Hesp, Patrick A.

    2009-01-01

    Dunes with relatively long and parallel crests are classified as linear dunes. On Earth, they form in at least two environmental settings: where winds of bimodal direction blow across loose sand, and also where single-direction winds blow over sediment that is locally stabilized, be it through vegetation, sediment cohesion or topographic shelter from the winds. Linear dunes have also been identified on Titan, where they are thought to form in loose sand. Here we present evidence that in the Qaidam Basin, China, linear dunes are found downwind of transverse dunes owing to higher cohesiveness in the downwind sediments, which contain larger amounts of salt and mud. We also present a compilation of other settings where sediment stabilization has been reported to produce linear dunes. We suggest that in this dune-forming process, loose sediment accumulates on the dunes and is stabilized; the stable dune then functions as a topographic shelter, which induces the deposition of sediments downwind. We conclude that a model in which Titan's dunes formed similarly in cohesive sediments cannot be ruled out by the existing data.

  19. Young children make their gestural communication systems more language-like: segmentation and linearization of semantic elements in motion events.

    PubMed

    Clay, Zanna; Pople, Sally; Hood, Bruce; Kita, Sotaro

    2014-08-01

    Research on Nicaraguan Sign Language, created by deaf children, has suggested that young children use gestures to segment the semantic elements of events and linearize them in ways similar to those used in signed and spoken languages. However, it is unclear whether this is due to children's learning processes or to a more general effect of iterative learning. We investigated whether typically developing children, without iterative learning, segment and linearize information. Gestures produced in the absence of speech to express a motion event were examined in 4-year-olds, 12-year-olds, and adults (all native English speakers). We compared the proportions of gestural expressions that segmented semantic elements into linear sequences and that encoded them simultaneously. Compared with adolescents and adults, children reshaped the holistic stimuli by segmenting and recombining their semantic features into linearized sequences. A control task on recognition memory ruled out the possibility that this was due to different event perception or memory. Young children spontaneously bring fundamental properties of language into their communication system. © The Author(s) 2014.

  20. Algorithmic Trading with Developmental and Linear Genetic Programming

    NASA Astrophysics Data System (ADS)

    Wilson, Garnett; Banzhaf, Wolfgang

    A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rise and fall, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.

  1. Output Consensus of Heterogeneous Linear Multi-Agent Systems by Distributed Event-Triggered/Self-Triggered Strategy.

    PubMed

    Hu, Wenfeng; Liu, Lu; Feng, Gang

    2016-09-02

    This paper addresses the output consensus problem of heterogeneous linear multi-agent systems. We first propose a novel distributed event-triggered control scheme. It is shown that, with the proposed control scheme, the output consensus problem can be solved if two matrix equations are satisfied. Then, we further propose a novel self-triggered control scheme, with which continuous monitoring is avoided. By introducing a fixed timer into both event- and self-triggered control schemes, Zeno behavior can be ruled out for each agent. The effectiveness of the event- and self-triggered control schemes is illustrated by an example.

  2. Phantom solution in a non-linear Israel-Stewart theory

    NASA Astrophysics Data System (ADS)

    Cruz, Miguel; Cruz, Norman; Lepe, Samuel

    2017-06-01

    In this paper we present a phantom solution with a big rip singularity in a non-linear regime of the Israel-Stewart formalism. In this framework it is possible to extend the causal formalism to describe accelerated expansion, where the assumption of near equilibrium is no longer valid. We assume a flat universe filled with a single viscous fluid ruled by a barotropic EoS, p = ωρ, which can represent a late-time accelerated phase of the cosmic evolution. The solution allows the phantom divide to be crossed without invoking an exotic matter fluid, and the effective EoS parameter is always less than -1 and constant in time.

  3. Numerical Study of Pressure Field in Laterally Closed Industrial Buildings with Curved Metallic Roofs due to the Wind Effect by FEM and European Rule Comparison

    NASA Astrophysics Data System (ADS)

    Nieto, P. J. García; del Coz Díaz, J. J.; Vilán, J. A. Vilán; Placer, C. Casqueiro

    2009-08-01

    In this paper, the distribution of air pressure over laterally closed industrial buildings with curved metallic roofs under wind loading is evaluated by the finite element method (FEM). The non-linearity is due to the Reynolds-averaged Navier-Stokes (RANS) equations that govern the turbulent flow. The Navier-Stokes equations are non-linear partial differential equations; this non-linearity makes most problems difficult to solve and is part of the cause of turbulence. The RANS equations are time-averaged equations of motion for fluid flow, used primarily for turbulent flows. Turbulence is a highly complex physical phenomenon that is pervasive in flow problems of scientific and engineering concern like this one. In order to close the RANS equations, a two-equation model is used: the standard k-ɛ model. The calculation has been carried out under the following assumptions: turbulent flow, an exponential-like wind speed profile with a maximum velocity of 40 m/s at a 10 m reference height, and building heights ranging from 6 to 10 meters. Finally, the forces and moments on the roof are determined, as well as the pressure distribution over it, and the numerical results are compared with the Spanish CTE DB SE-AE, Spanish NBE AE-88 and European standard rules, leading to the conclusions presented in the study.

  4. Feynman rules for the Standard Model Effective Field Theory in R ξ -gauges

    NASA Astrophysics Data System (ADS)

    Dedes, A.; Materkowska, W.; Paraskevas, M.; Rosiek, J.; Suxho, K.

    2017-06-01

    We assume that New Physics effects are parametrized within the Standard Model Effective Field Theory (SMEFT) written in a complete basis of gauge invariant operators up to dimension 6, commonly referred to as the "Warsaw basis". We discuss all steps necessary to obtain a consistent transition to the spontaneously broken theory and several other important aspects, including the BRST-invariance of the SMEFT action for linear R ξ -gauges. The final theory is expressed in a basis characterized by SM-like propagators for all physical and unphysical fields. The effect of the non-renormalizable operators appears explicitly in triple or higher multiplicity vertices. In this mass basis we derive the complete set of Feynman rules, without resorting to any simplifying assumptions such as baryon- or lepton-number or CP conservation. As it turns out, for most SMEFT vertices the expressions are reasonably short, with the notable exception of those involving 4, 5 and 6 gluons. We have also supplemented our set of Feynman rules, given in an appendix here, with a publicly available Mathematica code working with the FeynRules package and producing output which can be integrated with other symbolic algebra or numerical codes for automatic SMEFT amplitude calculations.

  5. Simulation of herbicide degradation in different soils by use of Pedo-transfer functions (PTF) and non-linear kinetics.

    PubMed

    von Götz, N; Richter, O

    1999-03-01

    The degradation behaviour of bentazone in 14 different soils was examined at constant temperature and moisture conditions. Two soils were examined at different temperatures. On the basis of these data, the influence of soil properties and temperature on degradation was assessed and modelled. Pedo-transfer functions (PTF) in combination with a linear and a non-linear model were found suitable to describe bentazone degradation in the laboratory as related to soil properties. The linear PTF can be combined with a temperature-dependent rate to account for both soil-property and temperature influences at the same time.
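
    A minimal sketch of the modelling idea, assuming first-order decay whose rate constant comes from a linear pedo-transfer function of soil properties and an Arrhenius-type temperature correction; the functional form and all coefficients are illustrative placeholders, not the fitted values from the study.

```python
import numpy as np

def rate_from_soil(organic_carbon, ph, clay, coeffs=(0.02, 0.015, -0.004, 0.001)):
    """Linear PTF: degradation rate constant (1/day) as a linear combination
    of soil properties; the coefficients are placeholders."""
    b0, b1, b2, b3 = coeffs
    return b0 + b1 * organic_carbon + b2 * ph + b3 * clay

def degrade(c0, k_ref, temp_c, t_days, ea=54000.0, t_ref=293.15):
    """First-order decay, with the reference rate rescaled by an Arrhenius
    factor for the actual temperature (ea in J/mol)."""
    R = 8.314
    T = temp_c + 273.15
    k = k_ref * np.exp(-ea / R * (1.0 / T - 1.0 / t_ref))
    return c0 * np.exp(-k * np.asarray(t_days))

k = rate_from_soil(organic_carbon=1.8, ph=6.5, clay=22.0)
print(degrade(100.0, k, temp_c=20.0, t_days=[0, 10, 30, 60]))
```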

  6. Quantum processing by remote quantum control

    NASA Astrophysics Data System (ADS)

    Qiang, Xiaogang; Zhou, Xiaoqi; Aungskunsiri, Kanin; Cable, Hugo; O'Brien, Jeremy L.

    2017-12-01

    Client-server models enable computations to be hosted remotely on quantum servers. We present a novel protocol for realizing this task, with practical advantages when using technology feasible in the near term. Client tasks are realized as linear combinations of operations implemented by the server, where the linear coefficients are hidden from the server. We report on an experimental demonstration of our protocol using linear optics, which realizes a linear combination of two single-qubit operations via remote single-qubit control. In addition, we explain when our protocol can remain efficient for larger computations, as well as some ways in which privacy can be maintained using our protocol.

  7. Articulation Management for Intelligent Integration of Information

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Tran, Peter B.; Clancy, Daniel (Technical Monitor)

    2001-01-01

    When combining data from distinct sources, there is a need to share meta-data and other knowledge about the various source domains. Due to semantic inconsistencies and heterogeneity of representations, problems arise when multiple domains are merged: knowledge that is irrelevant to the task of interoperation is included, making the result unnecessarily complex. This heterogeneity problem can be eliminated by mediating the conflicts and managing the intersections of the domains. For interoperation and intelligent access to heterogeneous information, the focus is on the intersection of the knowledge, since the intersection defines the required articulation rules. An algebra over domains has been proposed that uses articulation rules to support disciplined manipulation of domain knowledge resources. The objective of a domain algebra is to provide the capability for interrogating many domain knowledge resources, which are largely semantically disjoint. The algebra formally supports the tasks of selecting, combining, extending, specializing, and modifying components from a diverse set of domains. This paper presents a domain algebra and demonstrates the use of articulation rules to link declarative interfaces for Internet and enterprise applications. In particular, it discusses the articulation implementation as part of a production system capable of operating over the domain described by the IDL (interface description language) of objects registered in multiple CORBA servers.

  8. Two Back Stress Hardening Models in Rate Independent Rigid Plastic Deformation

    NASA Astrophysics Data System (ADS)

    Yun, Su-Jin

    In the present work, the constitutive relations based on the combination of two back stresses are developed using the Armstrong-Frederick, Phillips and Ziegler’s type hardening rules. Various evolutions of the kinematic hardening parameter can be obtained by means of a simple combination of back stress rate using the rule of mixtures. Thus, a wide range of plastic deformation behavior can be depicted depending on the dominant back stress evolution. The ultimate back stress is also determined for the present combined kinematic hardening models. Since a kinematic hardening rule is assumed in the finite deformation regime, the stress rate is co-rotated with respect to the spin of substructure obtained by incorporating the plastic spin concept. A comparison of the various co-rotational rates is also included. Assuming rigid plasticity, the continuum body consists of the elastic deformation zone and the plastic deformation zone to form a hybrid finite element formulation. Then, the plastic deformation behavior is investigated under various loading conditions with an assumption of the J2 deformation theory. The plastic deformation localization turns out to be strongly dependent on the description of back stress evolution and its associated hardening parameters. The analysis for the shear deformation with fixed boundaries is carried out to examine the deformation localization behavior and the evolution of state variables.
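
    A sketch, in generic notation, of the rule-of-mixtures combination of two back stress rates described above; the Armstrong-Frederick (AF) form shown is the standard one, and the mixture parameter M is illustrative:

```latex
% Combined back stress rate as a rule of mixtures of two hardening rules;
% the AF rule is hardening minus dynamic recovery, while the Phillips rule
% evolves the back stress in the direction of the stress rate.
\dot{\boldsymbol{\alpha}}
  = M\,\dot{\boldsymbol{\alpha}}_{\mathrm{AF}}
  + (1 - M)\,\dot{\boldsymbol{\alpha}}_{\mathrm{P}},
\qquad 0 \le M \le 1,
\qquad
\dot{\boldsymbol{\alpha}}_{\mathrm{AF}}
  = \tfrac{2}{3}\,c\,\dot{\boldsymbol{\varepsilon}}^{\,p} - \gamma\,\boldsymbol{\alpha}\,\dot{p}.
```

    Varying M between 0 and 1 shifts which back stress evolution dominates, which is how a single combined model can depict a wide range of plastic deformation behavior.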

  9. CARSVM: a class association rule-based classification framework and its application to gene expression data.

    PubMed

    Kianmehr, Keivan; Alhajj, Reda

    2008-09-01

    In this study, we aim at building a classification framework, namely the CARSVM model, which integrates association rule mining and support vector machines (SVM). The goal is to benefit from the advantages of both: the discriminative knowledge represented by class association rules and the classification power of the SVM algorithm. The aim is to construct an efficient and accurate classifier model that improves the interpretability problem of SVM as a traditional machine learning technique and overcomes the efficiency issues of associative classification algorithms. In the proposed framework, instead of using the original training set, a set of rule-based feature vectors, generated from the discriminative ability of class association rules over the training samples, is presented to the learning component of the SVM algorithm. We show that rule-based feature vectors are a high-quality source of discriminative knowledge that can substantially improve the prediction power of SVM and associative classification techniques, while also being easier for users to understand and interpret. We used four datasets from the UCI ML repository to evaluate the performance of the developed system against five well-known existing classification methods. Because of the importance and popularity of gene expression analysis as a real-world application of classification models, we present an extension of CARSVM combined with feature selection applied to gene expression data, and describe how this combination provides biologists with an efficient and understandable classifier model. The reported test results and their biological interpretation demonstrate the applicability, efficiency and effectiveness of the proposed model. From the results, it can be concluded that a considerable increase in classification accuracy can be obtained when the rule-based feature vectors are integrated in the learning process of the SVM algorithm. In the context of applicability, the gene expression results indicate that the CARSVM system can be utilized in a variety of real-world applications with some adjustments.
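
    A minimal sketch of the CARSVM idea, assuming binary rule-match features: each mined class association rule becomes one feature whose value indicates whether the rule's antecedent matches the sample. The tiny dataset and the rules below are hypothetical placeholders, not rules mined by the authors.

```python
import numpy as np
from sklearn.svm import SVC

def rule_features(X, rules):
    """X: (n_samples, n_attrs); rules: list of {attr_index: required_value}.
    Returns a binary matrix with one column per association rule."""
    F = np.zeros((len(X), len(rules)))
    for j, rule in enumerate(rules):
        F[:, j] = [all(x[a] == v for a, v in rule.items()) for x in X]
    return F

X_train = np.array([[1, 0, 2], [0, 1, 2], [1, 1, 0], [0, 0, 1]])
y_train = np.array([1, 1, 0, 0])
rules = [{0: 1}, {2: 2}, {1: 1, 2: 0}]     # hypothetical class association rules

F_train = rule_features(X_train, rules)    # rule-based feature vectors
clf = SVC(kernel='linear').fit(F_train, y_train)
print(clf.predict(F_train))
```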

  10. Probability theory, not the very guide of life.

    PubMed

    Juslin, Peter; Nilsson, Håkan; Winman, Anders

    2009-10-01

    Probability theory has long been taken as the self-evident norm against which to evaluate inductive reasoning, and classical demonstrations of violations of this norm include the conjunction error and base-rate neglect. Many of these phenomena require multiplicative probability integration, whereas people seem more inclined to linear additive integration, in part, at least, because of well-known capacity constraints on controlled thought. In this article, the authors show with computer simulations that when based on approximate knowledge of probabilities, as is routinely the case in natural environments, linear additive integration can yield as accurate estimates, and as good average decision returns, as estimates based on probability theory. It is proposed that in natural environments people have little opportunity or incentive to induce the normative rules of probability theory and, given their cognitive constraints, linear additive integration may often offer superior bounded rationality.
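
    A toy simulation of the paper's argument, assuming probabilities are known only up to additive judgment noise; the noise level is an assumption, and the linear rule used is the least-squares linear approximation of the product over the unit square.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p_a, p_b = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
true_conj = p_a * p_b                        # normative multiplicative rule

noise = 0.15                                 # assumed judgment noise
pa_hat = np.clip(p_a + rng.normal(0, noise, n), 0, 1)
pb_hat = np.clip(p_b + rng.normal(0, noise, n), 0, 1)

mult = pa_hat * pb_hat                       # multiplication of noisy estimates
linear = 0.5 * pa_hat + 0.5 * pb_hat - 0.25  # linear additive integration

print('RMSE, multiplicative:', np.sqrt(np.mean((mult - true_conj) ** 2)))
print('RMSE, linear additive:', np.sqrt(np.mean((linear - true_conj) ** 2)))
```

    With exact probability knowledge the multiplicative rule is unbeatable; with noisy inputs the two error figures come out comparable, which is the bounded-rationality point the authors make.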

  11. Lessons from Jurassic Park: patients as complex adaptive systems.

    PubMed

    Katerndahl, David A

    2009-08-01

    With realization that non-linearity is generally the rule rather than the exception in nature, viewing patients and families as complex adaptive systems may lead to a better understanding of health and illness. Doctors who successfully practise the 'art' of medicine may recognize non-linear principles at work without having the jargon needed to label them. Complex adaptive systems are systems composed of multiple components that display complexity and adaptation to input. These systems consist of self-organized components, which display complex dynamics, ranging from simple periodicity to chaotic and random patterns showing trends over time. Understanding the non-linear dynamics of phenomena both internal and external to our patients can (1) improve our definition of 'health'; (2) improve our understanding of patients, disease and the systems in which they converge; (3) be applied to future monitoring systems; and (4) be used to possibly engineer change. Such a non-linear view of the world is quite congruent with the generalist perspective.

  12. Improving ESL Writing Using an Online Formulaic Sequence Word-Combination Checker

    ERIC Educational Resources Information Center

    Grami, G. M. A.; Alkazemi, B. Y.

    2016-01-01

    Writing correct English sentences can be challenging. Furthermore, writing correct formulaic sequences can be especially difficult because accepted combinations do not follow clear rules governing which words appear together in a sequence. One solution is to provide examples of correct usage accompanied by statistical feedback from web-based…

  13. A Hyper-Heuristic Ensemble Method for Static Job-Shop Scheduling.

    PubMed

    Hart, Emma; Sim, Kevin

    2016-01-01

    We describe a new hyper-heuristic method NELLI-GP for solving job-shop scheduling problems (JSSP) that evolves an ensemble of heuristics. The ensemble adopts a divide-and-conquer approach in which each heuristic solves a unique subset of the instance set considered. NELLI-GP extends an existing ensemble method called NELLI by introducing a novel heuristic generator that evolves heuristics composed of linear sequences of dispatching rules: each rule is represented using a tree structure and is itself evolved. Following a training period, the ensemble is shown to outperform both existing dispatching rules and a standard genetic programming algorithm on a large set of new test instances. In addition, it obtains superior results on a set of 210 benchmark problems from the literature when compared to two state-of-the-art hyper-heuristic approaches. Further analysis of the relationship between heuristics in the evolved ensemble and the instances each solves provides new insights into features that might describe similar instances.
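
    A minimal sketch of a tree-represented dispatching rule as used inside such ensembles; the attribute names and the particular tree below (encoding the priority PT + 0.5*(DD - PT)) are hypothetical, not an evolved NELLI-GP rule.

```python
import operator

OPS = {'+': operator.add, '-': operator.sub, '*': operator.mul}

def evaluate(node, job):
    if isinstance(node, tuple):            # internal node: (op, left, right)
        op, left, right = node
        return OPS[op](evaluate(left, job), evaluate(right, job))
    if isinstance(node, str):              # leaf: job attribute
        return job[node]
    return node                            # leaf: numeric constant

rule = ('+', 'PT', ('*', 0.5, ('-', 'DD', 'PT')))   # PT: processing time, DD: due date
jobs = [{'PT': 4, 'DD': 20}, {'PT': 7, 'DD': 9}]
print(min(jobs, key=lambda j: evaluate(rule, j)))   # dispatch the lowest-priority-value job
```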

  14. Zero-field magnetic response functions in Landau levels

    PubMed Central

    Gao, Yang; Niu, Qian

    2017-01-01

    We present a fresh perspective on the Landau level quantization rule; that is, by successively including zero-field magnetic response functions at zero temperature, such as the zero-field magnetization and susceptibility, Onsager's rule can be corrected order by order. Such a perspective is further reinterpreted as a quantization of the semiclassical electron density in solids. Our theory not only reproduces Onsager's rule at zeroth order and the Berry phase and magnetic moment correction at first order but also explains the nature of higher-order corrections in a universal way. In applications, those higher-order corrections are expected to curve the linear relation between the level index and the inverse of the magnetic field, as already observed in experiments. Our theory then provides a way to extract the correct value of the Berry phase as well as the magnetic susceptibility at zero temperature from Landau level fan diagrams in experiments. Moreover, it can be used theoretically to calculate Landau levels up to second-order accuracy for realistic models. PMID:28655849
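
    A sketch of the corrected quantization rule in generic notation (not the paper's exact symbols): the zeroth order is Onsager's rule, the first order adds the Berry phase, and the higher-order terms, tied to the zero-field magnetization and susceptibility, depend on B and therefore curve the Landau fan:

```latex
% S(E) is the k-space area enclosed by the cyclotron orbit at energy E,
% \Gamma the Berry phase accumulated along it; the O(B) terms collect the
% higher-order corrections discussed in the paper.
S(E_n)\,\frac{\hbar}{2\pi e B} \;=\; n + \frac{1}{2} - \frac{\Gamma(E_n)}{2\pi} + \mathcal{O}(B).
```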

  15. Analysis, Simulation, and Verification of Knowledge-Based, Rule-Based, and Expert Systems

    NASA Technical Reports Server (NTRS)

    Hinchey, Mike; Rash, James; Erickson, John; Gracanin, Denis; Rouff, Chris

    2010-01-01

    Mathematically sound techniques are used to view a knowledge-based system (KBS) as a set of processes executing in parallel and being enabled in response to specific rules being fired. The set of processes can be manipulated, examined, analyzed, and used in a simulation. The tool that embodies this technology may warn developers of errors in their rules, but may also highlight rules (or sets of rules) in the system that are underspecified (or overspecified) and need to be corrected for the KBS to operate as intended. The rules embodied in a KBS specify the allowed situations, events, and/or results of the system they describe. In that sense, they provide a very abstract specification of a system. The system is implemented through the combination of the system specification together with an appropriate inference engine, independent of the algorithm used in that inference engine. Viewing the rule base as a major component of the specification, and choosing an appropriate specification notation to represent it, reveals how additional power can be derived from an approach to the knowledge-base system that involves analysis, simulation, and verification. This innovative approach requires no special knowledge of the rules, and allows a general approach where standardized analysis, verification, simulation, and model checking techniques can be applied to the KBS.

  16. Text-to-phonemic transcription and parsing into mono-syllables of English text

    NASA Astrophysics Data System (ADS)

    Jusgir Mullick, Yugal; Agrawal, S. S.; Tayal, Smita; Goswami, Manisha

    2004-05-01

    The present paper describes a program that converts English text (entered through the normal computer keyboard) into its phonemic representation and then parses it into mono-syllables. For every letter a set of context-based rules is defined in lexical order. A default rule is also defined separately for each letter. Beginning from the first letter of the word, the rules are checked and the most appropriate rule is applied to the letter to find its phonemic representation. If no matching rule is found, then the default rule is applied. The applied rule sets the next position to be analyzed. Proceeding in the same manner, the phonemic representation of each word can be found. For example, ``reading'' is represented as ``rEdiNX'' by applying the following rules: r --> r (move 1 position ahead); ead --> Ed (move 3 positions ahead); i --> i (move 1 position ahead); ng --> NX (move 2 positions ahead, i.e., end of word). The phonemic representations obtained from the above procedure are parsed into mono-syllabic combinations such as CVC, CVCC, CV, CVCVC, etc. For example, the above phonemic representation will be parsed as rEdiNX --> /rE/ /diNX/. This study is a part of developing TTS for Indian English.
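
    A minimal sketch of the longest-match context-rule procedure described above; the rule table is a tiny illustrative subset sufficient for the ``reading'' example, not the paper's full rule set.

```python
RULES = {  # first letter -> (grapheme pattern, phoneme string), longest pattern first
    'r': [('r', 'r')],
    'e': [('ead', 'Ed'), ('e', 'e')],   # the single-letter entry acts as the default rule
    'i': [('i', 'i')],
    'n': [('ng', 'NX'), ('n', 'n')],
}

def to_phonemes(word):
    out, pos = [], 0
    while pos < len(word):
        for pattern, phoneme in RULES.get(word[pos], [(word[pos], word[pos])]):
            if word.startswith(pattern, pos):
                out.append(phoneme)
                pos += len(pattern)     # the applied rule sets the next position
                break
    return ''.join(out)

print(to_phonemes('reading'))           # -> rEdiNX
```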

  17. Differential Modifications of Synaptic Weights During Odor Rule Learning: Dynamics of Interaction Between the Piriform Cortex with Lower and Higher Brain Areas

    PubMed Central

    Cohen, Yaniv; Wilson, Donald A.; Barkai, Edi

    2015-01-01

    Learning of a complex olfactory discrimination (OD) task results in acquisition of rule learning after prolonged training. Previously, we demonstrated enhanced synaptic connectivity between the piriform cortex (PC) and its ascending and descending inputs from the olfactory bulb (OB) and orbitofrontal cortex (OFC) following OD rule learning. Here, using recordings of evoked field postsynaptic potentials in behaving animals, we examined the dynamics by which these synaptic pathways are modified during rule acquisition. We show profound differences in synaptic connectivity modulation between the 2 input sources. During rule acquisition, the ascending synaptic connectivity from the OB to the anterior and posterior PC is simultaneously enhanced. Furthermore, post-training stimulation of the OB enhanced learning rate dramatically. In sharp contrast, the synaptic input in the descending pathway from the OFC was significantly reduced until training completion. Once rule learning was established, the strength of synaptic connectivity in the 2 pathways resumed its pretraining values. We suggest that acquisition of olfactory rule learning requires a transient enhancement of ascending inputs to the PC, synchronized with a parallel decrease in the descending inputs. This combined short-lived modulation enables the PC network to reorganize in a manner that enables it to first acquire and then maintain the rule. PMID:23960200

  18. Using an improved association rules mining optimization algorithm in web-based mobile-learning system

    NASA Astrophysics Data System (ADS)

    Huang, Yin; Chen, Jianhua; Xiong, Shaojun

    2009-07-01

    Mobile-learning (M-learning) gives many learners the advantages of both traditional learning and E-learning. Currently, web-based mobile-learning systems have created many new ways of learning and defined new relationships between educators and learners. Association rule mining is one of the most important fields in data mining and knowledge discovery in databases. Rule explosion is a serious problem that causes great concern, as conventional mining algorithms often produce too many rules for decision makers to digest. Since a web-based mobile-learning system collects vast amounts of student profile data, data mining and knowledge discovery techniques can be applied to find interesting relationships between attributes of learners, assessments, the solution strategies adopted by learners, and so on. Therefore, this paper focuses on a new data-mining algorithm, called ARGSA (Association Rules based on an improved Genetic Simulated Annealing Algorithm), which combines the advantages of the genetic algorithm and the simulated annealing algorithm to mine association rules. The paper first takes advantage of a parallel genetic algorithm and simulated annealing algorithm designed specifically for discovering association rules. Analysis and experiments also show that the proposed method is superior to the Apriori algorithm in this mobile-learning system.

  19. Rule-governed behavior and behavioral anthropology

    PubMed Central

    Malott, Richard W.

    1988-01-01

    According to cultural materialism, cultural practices result from the materialistic outcomes of those practices, not from sociobiological, mentalistic, or mystical predispositions (e.g., Hindus worship cows because, in the long run, that worship results in more food, not less food). However, according to behavior analysis, such materialistic outcomes do not reinforce or punish the cultural practices, because such outcomes are too delayed, too improbable, or individually too small to directly reinforce or punish the cultural practices (e.g., the food increase is too delayed to reinforce the cow worship). Therefore, the molar, materialistic contingencies need the support of molecular, behavioral contingencies. And according to the present theory of rule-governed behavior, the statement of rules describing those molar, materialistic contingencies can establish the needed molecular contingencies. Given the proper behavioral history, such rule statements combine with noncompliance to produce a learned aversive condition (often labeled fear, anxiety, or guilt). The termination of this aversive condition reinforces compliance, just as its presentation punishes noncompliance (e.g., the termination of guilt reinforces the tending to a sick cow). In addition, supernatural rules often supplement these materialistic rules. Furthermore, the production of both materialistic and supernatural rules needs cultural designers who understand the molar, materialistic contingencies. PMID:22478012

  20. Optimal tactics for close support operations. III - Degraded intelligence and communications

    NASA Astrophysics Data System (ADS)

    Hess, J.; Kalaba, R.; Kagiwada, H.; Spingarn, K.; Tsokos, C.

    1980-04-01

    A new generation of C3 (command, control, and communication) models for military cybernetics is developed. Recursive equations for the solution of the C3 problem are derived for an amphibious campaign with linear time-varying dynamics. Air and ground commanders are assumed to have no intelligence and no communications. Numerical results are given for the optimal decision rules.

  1. High-harmonic generation by two-color mixing of circularly polarized laser fields

    NASA Astrophysics Data System (ADS)

    Milošević, D. B.; Becker, W.; Kopold, R.

    2000-06-01

    Dipole selection rules prevent harmonic generation by an atom in a circularly polarized laser field. However, this is not the case for a superposition of several circularly polarized fields, such as two circularly polarized fields with frequencies ω and 2ω that corotate or counter-rotate in the same plane. Harmonic generation in this environment has been observed and, in fact, found to be very intense in the counter-rotating case [1]. In a certain frequency region, the harmonics may be stronger than those radiated in a linearly polarized field of either frequency. The selection rules dictate that the harmonics are circularly polarized with a helicity that alternates from one harmonic to the next. Besides their practical interest, these harmonics are also intriguing from a fundamental point of view: the standard simple-man picture does not apply since orbits that start with zero velocity in this field almost never return to their point of departure. In terms of quantum trajectories, we discuss the mechanism that generates these harmonics. In several interesting ways, it is complementary to the case of linear polarization. [1] H. Eichmann et al., Phys. Rev. A 51, R3414 (1995)

  2. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

    In this paper, opposition-based harmony search (OHS) is applied to the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, the opposite solutions are also considered, and the fitter ones are selected as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based re-initialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances the exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually, for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390

  3. Effect of initial strain and material nonlinearity on the nonlinear static and dynamic response of graphene sheets

    NASA Astrophysics Data System (ADS)

    Singh, Sandeep; Patel, B. P.

    2018-06-01

    Computationally efficient multiscale modelling, based on the Cauchy-Born rule in conjunction with the finite element method, is employed to study the static and dynamic characteristics of graphene sheets, with and without initial strain, accounting for Green-Lagrange geometric and material nonlinearities. The strain energy density function at the continuum level is established by coupling the deformation at the continuum level to that at the atomic level through the Cauchy-Born rule. The atomic interactions between carbon atoms are modelled through the Tersoff-Brenner potential. The governing equation of motion obtained using Hamilton's principle is solved through the standard Newton-Raphson method for the nonlinear static response, and Newmark's time integration technique is used to obtain the nonlinear transient response characteristics. The effect of initial strain on the linear free vibration frequencies and on the nonlinear static and dynamic response characteristics is investigated in detail. The present multiscale results are found to be in good agreement with those obtained through molecular mechanics simulation. Two different types of boundary constraints generally used in MM simulation are explored in detail and a few interesting findings are brought out. The effect of initial strain is found to be greater on the linear response than on the nonlinear response.

  4. Fuzzy-PI controller to control the velocity parameter of Induction Motor

    NASA Astrophysics Data System (ADS)

    Malathy, R.; Balaji, V.

    2018-04-01

    The major applications of induction motors are in industry, because of their high robustness, reliability, low cost, high efficiency and good self-starting capability. Despite these advantages, the induction motor has some limitations: (1) the standard motor is not a true constant-speed machine, its full-load slip varying by less than 1% (in high-horsepower motors); and (2) it is not inherently capable of providing variable-speed operation. In order to solve these problems, smart motor controls and variable speed controllers are used. Motor applications involve nonlinearities, which a fuzzy logic controller can handle with high efficiency, acting similarly to a human operator. This paper presents the modelling of the plant. The fuzzy logic controller (FLC) relies on a set of linguistic if-then rules, a rule-based Mamdani model, for the closed-loop induction motor model. The motor model is designed and membership functions are chosen according to the parameters of the motor model. The simulation results capture the nonlinearity of the induction motor model. A conventional PI controller is compared practically to the fuzzy logic controller using Simulink.
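
    A minimal sketch of a Mamdani-style rule-based step for such a controller, with triangular membership functions over the error and its change, min-max inference, and centroid defuzzification; the universes, labels, and rule table are illustrative, not the paper's tuned design.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

SETS = {'N': (-1.0, -1.0, 0.0), 'Z': (-1.0, 0.0, 1.0), 'P': (0.0, 1.0, 1.0)}
RULES = [  # (error label, change-of-error label, control-increment label)
    ('N', 'N', 'N'), ('N', 'Z', 'N'), ('Z', 'N', 'N'),
    ('Z', 'Z', 'Z'), ('Z', 'P', 'P'), ('P', 'Z', 'P'), ('P', 'P', 'P'),
]

def fuzzy_pi_step(e, de, u_grid=np.linspace(-1, 1, 201)):
    """One incremental fuzzy-PI step: returns the control increment du."""
    agg = np.zeros_like(u_grid)
    for le, lde, lu in RULES:
        w = min(tri(e, *SETS[le]), tri(de, *SETS[lde]))               # firing strength
        agg = np.maximum(agg, np.minimum(w, tri(u_grid, *SETS[lu])))  # Mamdani min-max
    return float(np.sum(agg * u_grid) / (np.sum(agg) + 1e-12))        # centroid

print(fuzzy_pi_step(0.4, 0.1))   # positive increment for a positive speed error
```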

  5. Beta Hebbian Learning as a New Method for Exploratory Projection Pursuit.

    PubMed

    Quintián, Héctor; Corchado, Emilio

    2017-09-01

    In this research, a novel family of learning rules called Beta Hebbian Learning (BHL) is thoroughly investigated as a means to extract information from high-dimensional datasets by projecting the data onto low-dimensional (typically two-dimensional) subspaces, improving on existing exploratory methods by providing a clear representation of the data's internal structure. BHL applies a family of learning rules derived from the Probability Density Function (PDF) of the residual based on the beta distribution. This family of rules may be called Hebbian in that all use a simple multiplication of the output of the neural network with some function of the residuals after feedback. The derived learning rules can be linked to an adaptive form of Exploratory Projection Pursuit, and with artificial distributions the networks perform as the theory suggests they should: the use of different learning rules derived from different PDFs allows the identification of "interesting" dimensions (as far from the Gaussian distribution as possible) in high-dimensional datasets. This novel algorithm, BHL, has been tested on seven artificial datasets to study the behavior of the BHL parameters, and was later applied successfully to four real datasets, comparing its performance with other well-known exploratory and projection models such as Maximum Likelihood Hebbian Learning (MLHL), Locally-Linear Embedding (LLE), Curvilinear Component Analysis (CCA), Isomap and Neural Principal Component Analysis (Neural PCA).

  6. Using economy of means to evolve transition rules within 2D cellular automata.

    PubMed

    Ripps, David L

    2010-01-01

    Running a cellular automaton (CA) on a rectangular lattice is a time-honored method for studying artificial life on a digital computer. Commonly, the researcher wishes to investigate some specific or general mode of behavior, say, the ability of a coherent pattern of points to glide within the lattice, or to generate copies of itself. This technique has a problem: how to design the transition table, the set of distinct rules that specify the next content of a cell from its current content and that of its near neighbors. Often the table is painstakingly designed manually, rule by rule. The problem is exacerbated by the potentially vast number of individual rules that need to be specified to cover all combinations of center and neighbors when there are several symbols in the alphabet of the CA. In this article a method is presented to have the set of rules evolve automatically while running the CA. The transition table is initially empty, with rules being added as the need arises. A novel principle drives the evolution: maximum economy of means, i.e., maximizing the reuse of rules introduced on previous cycles. This method may not be a panacea applicable to all CA studies. Nevertheless, it is sufficiently potent to evolve sets of rules and associated patterns of points that glide (periodically regenerate themselves at another location) and to generate gliding "children" that then "mate" by collision.
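
    One loose interpretation of the economy-of-means principle, sketched below under stated assumptions: the transition table starts empty, and when an unseen (center, neighbors) situation arises, the new rule reuses the output symbol that existing rules have used most often. The neighborhood, alphabet, and tie-breaking are illustrative choices, not the article's exact scheme.

```python
from collections import Counter
import random

ALPHABET = '.ab'
table, usage = {}, Counter()     # evolving transition table and output-reuse counts

def next_state(grid):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            key = (grid[i][j],) + tuple(grid[(i + di) % n][(j + dj) % n]
                                        for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)))
            if key not in table:                       # add a rule only on demand
                common = usage.most_common(1)
                table[key] = common[0][0] if common else random.choice(ALPHABET)
            usage[table[key]] += 1
            new[i][j] = table[key]
    return new

grid = [[random.choice(ALPHABET) for _ in range(8)] for _ in range(8)]
for _ in range(5):
    grid = next_state(grid)
print(len(table), 'rules evolved')
```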

  7. Java implementation of Class Association Rule algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tamura, Makio

    2007-08-30

    Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura; it is discussed in a paper (UCRL-JRNL-232466-DRAFT) to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and the phenotype profile by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.

  8. ML-Space: Hybrid Spatial Gillespie and Particle Simulation of Multi-Level Rule-Based Models in Cell Biology.

    PubMed

    Bittig, Arne T; Uhrmacher, Adelinde M

    2017-01-01

    Spatio-temporal dynamics of cellular processes can be simulated at different levels of detail, from (deterministic) partial differential equations, via the spatial Stochastic Simulation Algorithm, to tracking Brownian trajectories of individual particles. We present a spatial simulation approach for multi-level rule-based models, which includes dynamically and hierarchically nested cellular compartments and entities. Our approach, ML-Space, combines discrete compartmental dynamics, stochastic spatial approaches in discrete space, and particles moving in continuous space. The rule-based specification language of ML-Space supports concise and compact descriptions of models and makes it easy to adapt their spatial resolution.

  9. Features in visual search combine linearly

    PubMed Central

    Pramod, R. T.; Arun, S. P.

    2014-01-01

    Single features such as line orientation and length are known to guide visual search, but relatively little is known about how multiple features combine in search. To address this question, we investigated how search for targets differing in multiple features (intensity, length, orientation) from the distracters is related to searches for targets differing in each of the individual features. We tested race models (based on reaction times) and co-activation models (based on reciprocal of reaction times) for their ability to predict multiple feature searches. Multiple feature searches were best accounted for by a co-activation model in which feature information combined linearly (r = 0.95). This result agrees with the classic finding that these features are separable i.e., subjective dissimilarity ratings sum linearly. We then replicated the classical finding that the length and width of a rectangle are integral features—in other words, they combine nonlinearly in visual search. However, to our surprise, upon including aspect ratio as an additional feature, length and width combined linearly and this model outperformed all other models. Thus, length and width of a rectangle became separable when considered together with aspect ratio. This finding predicts that searches involving shapes with identical aspect ratio should be more difficult than searches where shapes differ in aspect ratio. We confirmed this prediction on a variety of shapes. We conclude that features in visual search co-activate linearly and demonstrate for the first time that aspect ratio is a novel feature that guides visual search. PMID:24715328
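
    A sketch of the winning co-activation model in generic notation: search signals, taken as reciprocals of reaction time, combine linearly across the component features, with fitted weights w_i (the notation is illustrative, not the paper's):

```latex
\frac{1}{RT_{\mathrm{multi}}}
  \;=\; w_{1}\,\frac{1}{RT_{\mathrm{intensity}}}
  \;+\; w_{2}\,\frac{1}{RT_{\mathrm{length}}}
  \;+\; w_{3}\,\frac{1}{RT_{\mathrm{orientation}}}.
```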

  10. Designing boosting ensemble of relational fuzzy systems.

    PubMed

    Scherer, Rafał

    2010-10-01

    A method frequently used in classification systems for improving classification accuracy is to combine the outputs of several classifiers. Among the various types of classifiers, fuzzy ones are tempting because they use intelligible fuzzy if-then rules. In this paper we build an AdaBoost ensemble of relational neuro-fuzzy classifiers. Relational fuzzy systems bond input and output fuzzy linguistic values by a binary relation; thus, compared to traditional fuzzy systems, fuzzy rules have additional weights: the elements of a fuzzy relation matrix. This makes the system more adjustable to the data during learning. In the paper an ensemble of relational fuzzy systems is proposed. The problem is that such an ensemble contains separate rule bases which cannot be directly merged; as the systems are separate, we cannot treat fuzzy rules coming from different systems as rules from the same (single) system. The problem is addressed by a novel design of the fuzzy systems constituting the ensemble, resulting in normalization of the individual rule bases during learning. The method described in the paper is tested on several known benchmarks and compared with other machine learning solutions from the literature.

  11. A Swarm Optimization approach for clinical knowledge mining.

    PubMed

    Christopher, J Jabez; Nehemiah, H Khanna; Kannan, A

    2015-10-01

    Rule-based classification is a typical data mining task that is used in several medical diagnosis and decision support systems. The rules stored in the rule base have an impact on classification efficiency. Rule sets that are extracted with data mining tools and techniques are optimized using heuristic or meta-heuristic approaches in order to improve the quality of the rule base. In this work, a meta-heuristic approach called Wind-driven Swarm Optimization (WSO) is used. The uniqueness of this work lies in the biological inspiration that underlies the algorithm. WSO uses Jval, a new metric, to evaluate the efficiency of a rule-based classifier. Rules are extracted from decision trees. WSO is used to obtain different permutations and combinations of rules, whereby the optimal rule set that satisfies the requirements of the developer is used for predicting the test data. The performance of various extensions of decision trees, namely RIPPER, PART, FURIA and Decision Tables, is analyzed. The efficiency of WSO is also compared with traditional Particle Swarm Optimization. Experiments were carried out with six benchmark medical datasets. The traditional C4.5 algorithm yields 62.89% accuracy with 43 rules for the liver disorders dataset, whereas WSO yields 64.60% with 19 rules. For the heart disease dataset, C4.5 is 68.64% accurate with 98 rules, whereas WSO is 77.8% accurate with 34 rules. The normalized standard deviations for the accuracy of PSO and WSO are 0.5921 and 0.5846, respectively. WSO provides accurate and concise rule sets. PSO yields results similar to those of WSO, but the novelty of WSO lies in its biological motivation and its customization for rule base optimization. The trade-off between prediction accuracy and the size of the rule base is optimized during the design and development of a rule-based clinical decision support system. The efficiency of a decision support system relies on the content of the rule base and classification accuracy.

  12. Graph-based linear scaling electronic structure theory.

    PubMed

    Niklasson, Anders M N; Mniszewski, Susan M; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Rubensson, Emanuel H; Djidjev, Hristo

    2016-06-21

    We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.

  13. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.

  14. Is the Lorentz signature of the metric of spacetime electromagnetic in origin?

    NASA Astrophysics Data System (ADS)

    Itin, Yakov; Hehl, Friedrich W.

    2004-07-01

    We formulate a premetric version of classical electrodynamics in terms of the excitation H = (H, D) and the field strength F = (E, B). A local, linear, and symmetric spacetime relation between H and F is assumed. It yields, if electric/magnetic reciprocity is postulated, a Lorentzian metric of spacetime, thereby excluding Euclidean signature (which is, nevertheless, discussed in some detail). Moreover, we determine the Dufay law (repulsion of like charges and attraction of opposite ones), the Lenz rule (the relative sign in Faraday's law), and the sign of the electromagnetic energy. In this way, we get a systematic understanding of the sign rules and the sign conventions in electrodynamics. The question in the title of the paper is answered affirmatively.

  15. Evaluation of aircraft microwave data for locating zones for well stimulation and enhanced gas recovery. [Arkansas Arkoma Basin

    NASA Technical Reports Server (NTRS)

    Macdonald, H.; Waite, W.; Elachi, C.; Babcock, R.; Konig, R.; Gattis, J.; Borengasser, M.; Tolman, D.

    1980-01-01

    Imaging radar was evaluated as an adjunct to conventional petroleum exploration techniques, especially linear mapping. Linear features were mapped from several remote sensor data sources including stereo photography, enhanced LANDSAT imagery, SLAR radar imagery, enhanced SAR radar imagery, and SAR radar/LANDSAT combinations. Linear feature maps were compared with surface joint data, subsurface and geophysical data, and gas production in the Arkansas part of the Arkoma basin. The best LANDSAT enhanced product for linear detection was found to be a winter scene, band 7, uniform distribution stretch. Of the individual SAR data products, the VH (cross polarized) SAR radar mosaic provides for detection of most linears; however, none of the SAR enhancements is significantly better than the others. Radar/LANDSAT merges may provide better linear detection than a single sensor mapping mode, but because of operator variability, the results are inconclusive. Radar/LANDSAT combinations appear promising as an optimum linear mapping technique, if the advantages and disadvantages of each remote sensor are considered.

  16. Genomic prediction based on data from three layer lines using non-linear regression models.

    PubMed

    Huang, Heyun; Windig, Jack J; Vereijken, Addie; Calus, Mario P L

    2014-11-06

    Most studies on genomic prediction with reference populations that include multiple lines or breeds have used linear models. Data heterogeneity due to using multiple populations may conflict with model assumptions used in linear regression methods. In an attempt to alleviate potential discrepancies between assumptions of linear models and multi-population data, two types of alternative models were used: (1) a multi-trait genomic best linear unbiased prediction (GBLUP) model that modelled trait by line combinations as separate but correlated traits and (2) non-linear models based on kernel learning. These models were compared to conventional linear models for genomic prediction for two lines of brown layer hens (B1 and B2) and one line of white hens (W1). The three lines each had 1004 to 1023 training and 238 to 240 validation animals. Prediction accuracy was evaluated by estimating the correlation between observed phenotypes and predicted breeding values. When the training dataset included only data from the evaluated line, non-linear models yielded at best a similar accuracy as linear models. In some cases, when adding a distantly related line, the linear models showed a slight decrease in performance, while non-linear models generally showed no change in accuracy. When only information from a closely related line was used for training, linear models and non-linear radial basis function (RBF) kernel models performed similarly. The multi-trait GBLUP model took advantage of the estimated genetic correlations between the lines. Combining linear and non-linear models improved the accuracy of multi-line genomic prediction. Linear models and non-linear RBF models performed very similarly for genomic prediction, despite the expectation that non-linear models could deal better with the heterogeneous multi-population data. This heterogeneity of the data can be overcome by modelling trait by line combinations as separate but correlated traits, which avoids the occasional occurrence of large negative accuracies when the evaluated line was not included in the training dataset. Furthermore, when using a multi-line training dataset, non-linear models provided information on the genotype data that was complementary to the linear models, which indicates that the underlying data distributions of the three studied lines were indeed heterogeneous.
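
    A minimal sketch of non-linear genomic prediction with an RBF kernel, using kernel ridge regression as a stand-in for the kernel-learning models compared in the study; the simulated genotypes, marker count, and hyperparameters are all illustrative.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
n_animals, n_markers = 300, 1000
X = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)  # 0/1/2 genotypes
beta = rng.normal(0, 0.05, n_markers)
y = X @ beta + rng.normal(0, 1.0, n_animals)       # simulated phenotypes

train, test = slice(0, 250), slice(250, 300)
model = KernelRidge(kernel='rbf', alpha=1.0, gamma=1.0 / n_markers)
model.fit(X[train], y[train])

r = np.corrcoef(model.predict(X[test]), y[test])[0, 1]   # prediction accuracy
print('prediction accuracy (r):', round(float(r), 3))
```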

  17. Music acquisition: effects of enculturation and formal training on development.

    PubMed

    Hannon, Erin E; Trainor, Laurel J

    2007-11-01

    Musical structure is complex, consisting of a small set of elements that combine to form hierarchical levels of pitch and temporal structure according to grammatical rules. As with language, different systems use different elements and rules for combination. Drawing on recent findings, we propose that music acquisition begins with basic features, such as peripheral frequency-coding mechanisms and multisensory timing connections, and proceeds through enculturation, whereby everyday exposure to a particular music system creates, in a systematic order of acquisition, culture-specific brain structures and representations. Finally, we propose that formal musical training invokes domain-specific processes that affect salience of musical input and the amount of cortical tissue devoted to its processing, as well as domain-general processes of attention and executive functioning.

  18. Biomotor structures in elite female handball players.

    PubMed

    Katić, Ratko; Cavala, Marijana; Srhoj, Vatromir

    2007-09-01

    In order to identify biomotor structures in elite female handball players, the factor structures of the morphological characteristics and basic motor abilities of elite female handball players (N = 53) were determined first, followed by determination of the relations between the obtained morphological-motor factors and a set of criterion variables evaluating situational motor abilities in handball. Factor analysis of 14 morphological measures produced three morphological factors: a factor of absolute voluminosity (mesoendomorphy), a factor of longitudinal skeleton dimensionality, and a factor of transverse hand dimensionality. Factor analysis of 15 motor variables yielded five basic motor dimensions: factors of agility, jumping explosive strength, throwing explosive strength, movement frequency rate, and running explosive strength (sprint). Four significant canonical correlations, i.e. linear combinations, explained the correlation between the set of eight latent variables of the morphological and basic motor space and the five variables of situational motoricity. The first canonical linear combination is based on the positive effect of the agility/coordination factors on the ability to move fast without the ball. The second linear combination is based on the effect of jumping explosive strength and transverse hand dimensionality on ball manipulation, throwing precision, and speed of movement with the ball. The third linear combination is based on running explosive strength determining the speed of movement with the ball, whereas the fourth is determined by throwing and jumping explosive strength and by agility in ball passing. The results obtained were consistent with the proposed model of selection in female handball (Srhoj et al., 2006), showing the speed of movement without the ball and the ability to manipulate the ball to be the predominant specific abilities, as indicated by the first and second linear combinations.
