Sample records for existing theoretical methods

  1. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information to an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
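    For a single mean constraint, minimizing relative entropy against a uniform prior yields weights of exponential-tilting form, w_i ∝ exp(λx_i), with λ chosen so the weighted mean matches the forecast. A minimal sketch of that special case (illustrative names and a simple bisection on λ; not the paper's full MRE-update):

```python
import numpy as np

def mre_weights(ensemble, target_mean, tol=1e-10):
    """Minimum-relative-entropy reweighting of a uniform prior so that the
    weighted ensemble mean equals target_mean (which must lie strictly
    between min and max of the ensemble). Solves for the Lagrange
    multiplier lam in w_i ∝ exp(lam * x_i) by bisection."""
    x = np.asarray(ensemble, dtype=float)
    lo, hi = -100.0, 100.0           # bracket for the multiplier
    w = np.full(x.size, 1.0 / x.size)
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        w = np.exp(lam * (x - x.mean()))  # centering avoids overflow
        w /= w.sum()
        m = float(w @ x)
        if abs(m - target_mean) < tol:
            break
        if m < target_mean:          # weighted mean increases with lam
            lo = lam
        else:
            hi = lam
    return w

# Shift a 4-member ensemble's mean from 2.5 up to 3.0.
members = np.array([1.0, 2.0, 3.0, 4.0])
w = mre_weights(members, target_mean=3.0)
```

    The resulting weights sum to one, stay strictly positive (no member is discarded), and tilt toward the larger members just enough to satisfy the constraint.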

  2. Physics in one dimension: theoretical concepts for quantum many-body systems.

    PubMed

    Schönhammer, K

    2013-01-09

    Various sophisticated approximation methods exist for the description of quantum many-body systems. It was realized early on that the theoretical description can simplify considerably in one-dimensional systems and various exact solutions exist. The focus in this introductory paper is on fermionic systems and the emergence of the Luttinger liquid concept.

  3. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exists for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  4. "It's the Method, Stupid." Interrelations between Methodological and Theoretical Advances: The Example of Comparing Higher Education Systems Internationally

    ERIC Educational Resources Information Center

    Hoelscher, Michael

    2017-01-01

    This article argues that strong interrelations between methodological and theoretical advances exist. Progress in, especially comparative, methods may have important impacts on theory evaluation. By using the example of the "Varieties of Capitalism" approach and an international comparison of higher education systems, it can be shown…

  5. Analysis of Semiotic Principles in a Constructivist Learning Environment.

    ERIC Educational Resources Information Center

    Williams, Paul

    To advance nuclear plant simulator training, the industry must focus on a more detailed and theoretical approach to the conduct of this training. The use of semiotics is one method of refining the existing training and examining ways to diversify and blend it with new theoretical methods. Semiotics is the study of signs and how humans interpret them.…

  6. Theoretical and experimental investigation of supersonic aerodynamic characteristics of a twin-fuselage concept

    NASA Technical Reports Server (NTRS)

    Wood, R. M.; Miller, D. S.; Brentner, K. S.

    1983-01-01

    A theoretical and experimental investigation has been conducted to evaluate the fundamental supersonic aerodynamic characteristics of a generic twin-body model at a Mach number of 2.70. Results show that existing aerodynamic prediction methods are adequate for making preliminary aerodynamic estimates.

  7. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    PubMed

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because utilized theoretical mean-field models neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  8. Models and theories of prescribing decisions: A review and suggested a new model.

    PubMed

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than a theoretical approach. Therefore, this review is an attempt to suggest a conceptual model that explains the theoretical linkages existing between marketing efforts, patient, pharmacist and the physician's decision to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, the report identifies and uses several valuable perspectives, such as the 'persuasion theory - elaboration likelihood model', the 'stimuli-response marketing model', the 'agency theory', the 'theory of planned behaviour' and 'social power theory', in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research.

  9. Transactors, Transformers and Beyond. A Multi-Method Development of a Theoretical Typology of Leadership.

    ERIC Educational Resources Information Center

    Pearce, Craig L.; Sims, Henry P., Jr.; Cox, Jonathan F.; Ball, Gail; Schnell, Eugene; Smith, Ken A.; Trevino, Linda

    2003-01-01

    To extend the transactional-transformational model of leadership, four theoretical behavioral types of leadership were developed based on literature review and data from studies of executive behavior (n=253) and subordinate attitudes (n=208). Confirmatory factor analysis of a third data set (n=702) supported the existence of four leadership types:…

  10. Control Coordination of Multiple Agents Through Decision Theoretic and Economic Methods

    DTIC Science & Technology

    2003-02-01

    …investigated the design of test data for benchmarking such optimization algorithms. Our other research on combinatorial auctions included… average combination rule. We exemplified these theoretical results with experiments on stock market data, demonstrating how ensembles of classifiers can…

  11. Is "No-Threshold" a "Non-Concept"?

    NASA Astrophysics Data System (ADS)

    Schaeffer, David J.

    1981-11-01

    A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.
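    The final step above, estimating a threshold value from exposure-response data, can be illustrated with a simple "hockey-stick" fit: zero response below a threshold t, a linear rise above it, with t chosen by grid search over the squared error. The data and names below are invented for illustration and are not the paper's statistical method:

```python
import numpy as np

def fit_threshold(dose, response):
    """Grid-search fit of response = b * max(dose - t, 0): zero response
    below threshold t, linear with slope b above it. Returns (t, b)
    minimizing the sum of squared errors."""
    dose = np.asarray(dose, dtype=float)
    response = np.asarray(response, dtype=float)
    best_t, best_sse, best_b = dose.min(), np.inf, 0.0
    for t in np.linspace(dose.min(), dose.max(), 201):
        z = np.clip(dose - t, 0.0, None)      # excess exposure above t
        denom = float((z * z).sum())
        b = float((z * response).sum()) / denom if denom > 0 else 0.0
        sse = float(((response - b * z) ** 2).sum())
        if sse < best_sse:
            best_t, best_sse, best_b = t, sse, b
    return best_t, best_b

# Synthetic exposure-response data with a true threshold at 3 and slope 2.
dose = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
resp = np.clip(dose - 3.0, 0.0, None) * 2.0
t_hat, b_hat = fit_threshold(dose, resp)
```

    On noisy data the same grid search applies unchanged; confidence limits on t would need a further resampling or likelihood-based step.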

  12. Models and theories of prescribing decisions: A review and suggested a new model

    PubMed Central

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than a theoretical approach. Therefore, this review is an attempt to suggest a conceptual model that explains the theoretical linkages existing between marketing efforts, patient, pharmacist and the physician's decision to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, the report identifies and uses several valuable perspectives, such as the ‘persuasion theory - elaboration likelihood model’, the ‘stimuli–response marketing model’, the ‘agency theory’, the ‘theory of planned behaviour’ and ‘social power theory’, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research. PMID:28690701

  13. N-Sulfinylimine compounds, R-NSO: a chemistry family with strong temperament

    NASA Astrophysics Data System (ADS)

    Romano, R. M.; Della Védova, C. O.

    2000-04-01

    In this review, an update on the structural properties and theoretical studies of N-sulfinylimine compounds (R-NSO) is reported. They were deduced using several experimental techniques: gas-electron diffraction (GED), X-ray diffraction, 17O NMR, ultraviolet-visible absorption spectroscopy (UV-Vis), FTIR (including matrix studies of molecular randomisation) and Raman (including pre-resonant Raman spectra). Data are compared with those obtained by theoretical calculations. With these tools, excited state geometry using the time-dependent theory was calculated for these kinds of compounds. The existence of pre-resonant Raman effect was reported recently for R-NSO compounds. The configuration of R-NSO compounds was checked for this series confirming the existence of only one syn configuration. This finding is corroborated by theoretical calculations. The method of preparation is also summarised.

  14. A Theoretical and Experimental Investigation of 1/f Noise in the Alpha Decay Rates of Americium-241.

    NASA Astrophysics Data System (ADS)

    Pepper, Gary T.

    New experimental methods and data analysis techniques were used to investigate the hypothesis of the existence of 1/f noise in alpha-particle emission rates of ²⁴¹Am. Experimental estimates of the flicker floor were found to be almost two orders of magnitude less than Handel's theoretical prediction and previous measurements. The existence of a flicker floor for ⁵⁷Co decay, a process in which no charged particles are emitted, indicates that instrumental instability is likely responsible for the values of the flicker floor obtained. The experimental results and the theoretical arguments presented indicate that a re-examination of Handel's theory of 1/f noise is appropriate. Methods of numerical simulation of noise processes with a 1/f^n power spectral density were developed. These were used to investigate various statistical aspects of 1/f^n noise. The probability density function for the Allan variance was investigated in order to establish confidence limits for the observations made. The effect of using grouped (correlated) data for evaluating the Allan variance was also investigated.

  15. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though the methods of image acquisition vary considerably, they share unifying features that make general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework of single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our theoretical framework to existing single-pixel imaging techniques, as well as provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework for single-pixel imaging results in improved noise robustness and a decrease in acquisition time, and can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential associated with single-pixel imaging.
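    The frame-theoretic view can be illustrated in miniature: each structured illumination pattern acts as an analysis vector, the single-pixel detector records one inner product per pattern, and synthesis with the dual frame recovers the scene. The sketch below uses an orthogonal Sylvester-Hadamard pattern set (a tight frame); the scene and sizes are invented for illustration:

```python
import numpy as np

def hadamard(n):
    """Sylvester Hadamard matrix of size n (n must be a power of two)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.kron(H, np.array([[1.0, 1.0], [1.0, -1.0]]))
    return H

# A 4x4 "scene" flattened to a 16-vector (illustrative values).
scene = np.arange(16, dtype=float)

H = hadamard(16)                 # rows = illumination patterns (analysis frame)
measurements = H @ scene         # one scalar per pattern: the single pixel
recovered = (H.T @ measurements) / 16.0   # synthesis; H @ H.T = 16 * I
```

    With a general (possibly redundant) pattern set, the division by 16 is replaced by applying the pseudoinverse or dual-frame operator, which is where the framework's noise-robustness analysis enters.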

  16. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the error of segmentation - using both images - is considered as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique, compared to those by several existing methods.

  17. Tools for the Classroom? an Examination of Existing Sociometric Methods for Teacher Use

    ERIC Educational Resources Information Center

    McMullen, Jake A.; Veermans, Koen; Laine, Kaarina

    2014-01-01

    Despite the recent technical and theoretical advances in the investigation of children's social relations, the inherent complexity of these methods may prevent their easy integration into the classroom. A simple and effective tool can be valuable for teachers who wish to investigate students' social realities in the classroom. Therefore, the…

  18. Poly(ethylene oxide) Chains Are Not "Hydrophilic" When They Exist As Polymer Brush Chains

    NASA Astrophysics Data System (ADS)

    Lee, Hoyoung; Kim, Dae Hwan; Witte, Kevin N.; Ohn, Kimberly; Choi, Je; Kim, Kyungil; Meron, Mati; Lin, Binhua; Akgun, Bulent; Satija, Sushil; Won, You-Yeon

    2012-02-01

    By using a combined experimental and theoretical approach, a model poly(ethylene oxide) (PEO) brush system, prepared by spreading a poly(ethylene oxide)-poly(n-butyl acrylate) (PEO-PnBA) amphiphilic diblock copolymer onto an air-water interface, was investigated. The polymer segment density profiles of the PEO brush in the direction normal to the air-water interface under various grafting density conditions were determined from combined X-ray and neutron reflectivity data. In order to achieve a theoretically sound analysis of the reflectivity data, we developed a new data analysis method that uses self-consistent field theoretical modeling as a tool for predicting expected reflectivity results for comparison with the experimental data. Using this new data analysis method, we discovered that the effective Flory-Huggins interaction parameter of the PEO brush chains is significantly greater than that corresponding to the theta condition, suggesting that, contrary to what is more commonly observed for PEO in normal situations, the PEO chains are actually not "hydrophilic" when they exist as polymer brush chains, because of the many-body interactions forced to be effective in the brush situation.

  19. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization, due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Moreover, computational results show that the proposed method outperforms other existing CG methods.
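    For orientation, the classical Polak-Ribiere-Polyak iteration with a strong-Wolfe line search looks as follows. This is a generic PRP+ sketch using SciPy's strong-Wolfe `line_search`, not the new family proposed in the paper; the quadratic test problem is invented:

```python
import numpy as np
from scipy.optimize import line_search

def prp_cg(f, grad, x0, tol=1e-6, max_iter=200):
    """Polak-Ribiere-Polyak conjugate gradient with a strong Wolfe line
    search. Uses the PRP+ safeguard beta = max(0, beta_PRP), a common
    variant ensuring the method behaves like a descent method."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                              # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, c2=0.1)[0]
        if alpha is None:               # line search failed: restart
            alpha, d = 1e-4, -g
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, float(g_new @ (g_new - g)) / float(g @ g))
        d = -g_new + beta * d           # PRP update of the search direction
        x, g = x_new, g_new
    return x

# Convex quadratic with minimum at (1, 2) as a smoke test.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 2.0])
f = lambda x: 0.5 * float((x - b) @ A @ (x - b))
grad = lambda x: A @ (x - b)
x_star = prp_cg(f, grad, np.zeros(2))
```

    New CG families of the kind the paper introduces typically modify the beta formula while keeping this outer loop and line search unchanged.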

  20. On the methods for determining the transverse dispersion coefficient in river mixing

    NASA Astrophysics Data System (ADS)

    Baek, Kyong Oh; Seo, Il Won

    2016-04-01

    In this study, the strengths and weaknesses of existing methods for determining the dispersion coefficient in the two-dimensional river mixing model were assessed based on hydraulic and tracer data sets acquired from experiments conducted on either laboratory channels or natural rivers. From the results of this study, it can be concluded that, when the longitudinal dispersion coefficient as well as the transverse dispersion coefficients must be determined in the transient concentration situation, the two-dimensional routing procedures, 2D RP and 2D STRP, can be employed to calculate dispersion coefficients among the observation methods. For the steady concentration situation, the STRP can be applied to calculate the transverse dispersion coefficient. When the tracer data are not available, either theoretical or empirical equations by the estimation method can be used to calculate the dispersion coefficient using the geometric and hydraulic data sets. Application of the theoretical and empirical equations to the laboratory channel showed that equations by Baek and Seo [[3], 2011] predicted reasonable values, while equations by Fischer [23] and Boxwall and Guymer (2003) overestimated by factors of ten to one hundred. Among existing empirical equations, those by Jeon et al. [28] and Baek and Seo [6] gave agreeable values of the transverse dispersion coefficient for most cases of natural rivers. Further, the theoretical equation by Baek and Seo [5] has the potential to be broadly applied to both laboratory and natural channels.

  1. Quality Assessment of Internationalised Studies: Theory and Practice

    ERIC Educational Resources Information Center

    Juknyte-Petreikiene, Inga

    2013-01-01

    The article reviews forms of higher education internationalisation at an institutional level. The relevance of theoretical background of internationalised study quality assessment is highlighted and definitions of internationalised studies quality are presented. Existing methods of assessment of higher education internationalisation are criticised…

  2. [Health assessment and economic assessment in health: introduction to the debate on the points of intersection].

    PubMed

    Sancho, Leyla Gomes; Dain, Sulamis

    2012-03-01

    The study aims to infer the existence of a continuum between Health Assessment and Economic Assessment in Health, by highlighting points of intersection of these forms of appraisal. To achieve this, a review of the theoretical foundations, methods and approaches of both forms of assessment was conducted. It was based on the theoretical model of health evaluation as reported by Hartz et al and economic assessment in health approaches reported by Brouwer et al. It was seen that there is a continuum between the theoretical model of evaluative research and the extrawelfarist approach for economic assessment in health, and between the normative theoretical model for health assessment and the welfarist approaches for economic assessment in health. However, in practice the assessment is still conducted using the normative theoretical model and with a welfarist approach.

  3. Reproducibility in Psychological Science: When Do Psychological Phenomena Exist?

    PubMed Central

    Iso-Ahola, Seppo E.

    2017-01-01

    Scientific evidence has recently been used to assert that certain psychological phenomena do not exist. Such claims, however, cannot be made because (1) scientific method itself is seriously limited (i.e., it can never prove a negative); (2) non-existence of phenomena would require a complete absence of both logical (theoretical) and empirical support; even if empirical support is weak, logical and theoretical support can be strong; (3) statistical data are only one piece of evidence and cannot be used to reduce psychological phenomena to statistical phenomena; and (4) psychological phenomena vary across time, situations and persons. The human mind is unreproducible from one situation to another. Psychological phenomena are not particles that can decisively be tested and discovered. Therefore, a declaration that a phenomenon is not real is not only theoretically and empirically unjustified but runs counter to the propositional and provisional nature of scientific knowledge. There are only “temporary winners” and no “final truths” in scientific knowledge. Psychology is a science of subtleties in human affect, cognition and behavior. Its phenomena fluctuate with conditions and may sometimes be difficult to detect and reproduce empirically. When strictly applied, reproducibility is an overstated and even questionable concept in psychological science. Furthermore, statistical measures (e.g., effect size) are poor indicators of the theoretical importance and relevance of phenomena (cf. “deliberate practice” vs. “talent” in expert performance), not to mention whether phenomena are real or unreal. To better understand psychological phenomena, their theoretical and empirical properties should be examined via multiple parameters and criteria. Ten such parameters are suggested. PMID:28626435

  4. Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao

    Previous studies of ocular chromatic aberrations have concentrated on chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. The insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators.
The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.

  5. RELIABLE COMPUTATION OF HOMOGENEOUS AZEOTROPES. (R824731)

    EPA Science Inventory

    Abstract

    It is important to determine the existence and composition of homogeneous azeotropes in the analysis of phase behavior and in the synthesis and design of separation systems, from both theoretical and practical standpoints. A new method for reliably locating an...

  6. Interactive social contagions and co-infections on complex networks

    NASA Astrophysics Data System (ADS)

    Liu, Quan-Hui; Zhong, Lin-Feng; Wang, Wei; Zhou, Tao; Eugene Stanley, H.

    2018-01-01

    What we are learning about the ubiquitous interactions among multiple social contagion processes on complex networks challenges existing theoretical methods. We propose an interactive social behavior spreading model, in which two behaviors sequentially spread on a complex network, one following the other. Adopting the first behavior has either a synergistic or an inhibiting effect on the spread of the second behavior. We find that the inhibiting effect of the first behavior can cause the continuous phase transition of the second behavior spreading to become discontinuous. This discontinuous phase transition of the second behavior can also become a continuous one when the effect of adopting the first behavior becomes synergistic. This synergy allows the second behavior to be more easily adopted and enlarges the co-existence region of both behaviors. We establish an edge-based compartmental method, and our theoretical predictions match well with the simulation results. Our findings provide helpful insights into better understanding the spread of interactive social behavior in human society.

  7. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundation of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only. That is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with selection of a finite difference perturbation.

  8. Reserves in load capacity assessment of existing bridges

    NASA Astrophysics Data System (ADS)

    Žitný, Jan; Ryjáček, Pavel

    2017-09-01

    A high percentage of all railway bridges in the Czech Republic are made of structural steel. The majority of these bridges were designed according to historical codes and, given their deterioration, they have to be assessed to determine whether they satisfy the needs of modern railway traffic. The load capacity assessment of existing bridges according to the Eurocodes is, however, often too conservative; in particular, braking and acceleration forces cause huge problems for structural elements of the bridge superstructure. The aim of this paper is to review the different approaches for the determination of braking and acceleration forces. Both current and historical theoretical models and in-situ measurements are considered. A review of several local European state norms, superior to the Eurocode, for the assessment of existing railway bridges shows the great diversity of local approaches and the conservativeness of the Eurocode. This paper should also serve as an overview for designers dealing with load capacity assessment, revealing the reserves available for existing bridges. Based on these different approaches, theoretical models and data obtained from the measurements, a method for the determination of braking and acceleration forces on the basis of real traffic data is to be proposed.

  9. Theoretical models of parental HIV disclosure: a critical review.

    PubMed

    Qiao, Shan; Li, Xiaoming; Stanton, Bonita

    2013-01-01

    This study critically examined three major theoretical models related to parental HIV disclosure (i.e., the Four-Phase Model [FPM], the Disclosure Decision Making Model [DDMM], and the Disclosure Process Model [DPM]), and the existing studies that could provide empirical support to these models or their components. For each model, we briefly reviewed its theoretical background, described its components and/or mechanisms, and discussed its strengths and limitations. The existing empirical studies supported most theoretical components in these models. However, hypotheses related to the mechanisms proposed in the models have not yet been tested due to a lack of empirical evidence. This study also synthesized alternative theoretical perspectives and new issues in disclosure research and clinical practice that may challenge the existing models. The current study underscores the importance of including components related to social and cultural contexts in theoretical frameworks, and calls for more adequately designed empirical studies in order to test and refine existing theories and to develop new ones.

  10. Seven Keys for Implementing the Self-Evaluation, Periodic Evaluation and Accreditation (AVA) Method, to Improve Quality and Student Satisfaction in the Italian Higher Education System

    ERIC Educational Resources Information Center

    Murmura, Federica; Casolani, Nicola; Bravi, Laura

    2016-01-01

    This paper develops a theoretical framework that could facilitate the application of the Autovalutazione, Valutazione periodica, Accreditamento (AVA) method in Italian universities, aiming to simplify the use of this approach and to close the existing gap between Italy and other European academic institutions. The new competitive environment in…

  11. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors of a memristive oscillator with cubic nonlinearities via the harmonic balance method, also known as the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system under consideration into a Lur'e model and predict the existence of chaotic behaviors. To ensure that the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.
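
    The harmonic balance (describing function) step described above amounts to replacing the nonlinearity by its amplitude-dependent quasi-linear gain N(A) and solving 1 + N(A)G(jω) = 0 for a predicted oscillation. The sketch below applies this to an illustrative Lur'e system, not the memristive oscillator from the paper: a cubic nonlinearity f(x) = x³, whose describing function is N(A) = 3A²/4, closed around a hypothetical linear part G(s) = 8/(s(s+1)(s+2)).

```python
import numpy as np
from scipy.optimize import brentq


def G(w, k=8.0):
    """Linear part G(jw) = k / (jw (jw+1)(jw+2)) of an illustrative Lur'e system."""
    jw = 1j * w
    return k / (jw * (jw + 1.0) * (jw + 2.0))


# Harmonic balance: 1 + N(A) G(jw) = 0 with N(A) = 3A^2/4 real and positive,
# so G(jw) must be real and negative. Locate the phase-crossover frequency:
w_c = brentq(lambda w: G(w).imag, 1.0, 2.0)   # imag part changes sign on [1, 2]
A = np.sqrt(-4.0 / (3.0 * G(w_c).real))       # amplitude from N(A) |G(jw_c)| = 1

print(w_c, A)  # w_c = sqrt(2) ~ 1.4142, A = 1.0 for this G
```

    A predicted limit cycle of amplitude A at frequency w_c is the starting point; chaos prediction in the describing-function literature then proceeds by examining the interaction of such predicted oscillations with equilibria, which the paper quantifies via the distortion index.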

  12. Efficient calibration for imperfect computer models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L 2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.

  13. Design and Characterization of a Microfabricated Hydrogen Clearance Blood Flow Sensor

    PubMed Central

    Walton, Lindsay R.; Edwards, Martin A.; McCarty, Gregory S.; Wightman, R. Mark

    2016-01-01

    Background Modern cerebral blood flow (CBF) detection favors the use of either optical technologies that are limited to cortical brain regions, or expensive magnetic resonance. Decades ago, inhalation gas clearance was the choice method of quantifying CBF, but this suffered from poor temporal resolution. Electrolytic H2 clearance (EHC) generates and collects gas in situ at an electrode pair, which improves temporal resolution, but the probe size has prohibited meaningful subcortical use. New Method We microfabricated EHC electrodes to an order of magnitude smaller than those existing, on the scale of 100 µm, to permit use deep within the brain. Results Novel EHC probes were fabricated. The devices offered exceptional signal-to-noise, achieved high collection efficiencies (40 – 50%) in vitro, and agreed with theoretical modeling. An in vitro chemical reaction model was used to confirm that our devices detected flow rates higher than those expected physiologically. Computational modeling that incorporated realistic noise levels demonstrated devices would be sensitive to physiological CBF rates. Comparison with Existing Method The reduced size of our arrays makes them suitable for subcortical EHC measurements, as opposed to the larger, existing EHC electrodes that would cause substantial tissue damage. Our array can collect multiple CBF measurements per minute, and can thus resolve physiological changes occurring on a shorter timescale than existing gas clearance measurements. Conclusion We present and characterize microfabricated EHC electrodes and an accompanying theoretical model to interpret acquired data. Microfabrication allows for the high-throughput production of reproducible devices that are capable of monitoring deep brain CBF with sub-minute resolution. PMID:27102042

  14. Automatic Syllabification in English: A Comparison of Different Algorithms

    ERIC Educational Resources Information Center

    Marchand, Yannick; Adsett, Connie R.; Damper, Robert I.

    2009-01-01

    Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the…

  15. Teachers' Views of Their Assessment Practice

    ERIC Educational Resources Information Center

    Atjonen, Päivi

    2014-01-01

    The main aim of this research was to analyse teachers' views of pupil assessment. The theoretical framework was based on existing literature on advances and challenges of pupil assessment in regard to support for learning, fairness, educational partnership, feedback, and favourable methods. The data were gathered by means of a questionnaire…

  16. The Limited Benefits of Rereading Educational Texts

    ERIC Educational Resources Information Center

    Callender, Aimee A.; McDaniel, Mark A.

    2009-01-01

    Though rereading is a study method commonly used by students, theoretical disagreement exists regarding whether rereading a text significantly enhances the representation and retention of the text's contents. In four experiments, we evaluated the effectiveness of rereading relative to a single reading in a context paralleling that faced by…

  17. Some theoretical aspects of boundary layer stability theory

    NASA Technical Reports Server (NTRS)

    Hall, Philip

    1990-01-01

    Increased understanding in recent years of boundary layer transition has been made possible by the development of strongly nonlinear stability theories. After some twenty or so years when nonlinear stability theory was restricted to the application of the Stuart-Watson method (or less formal amplitude expansion procedures), there now exist strongly nonlinear theories which can describe processes that have an O(1) effect on the basic state. These strongly nonlinear theories and their possible role in pushing theoretical understanding of transition ever further into the nonlinear regime are discussed.

  18. Anatomy of point-contact Andreev reflection spectroscopy from the experimental point of view

    NASA Astrophysics Data System (ADS)

    Naidyuk, Yu. G.; Gloos, K.

    2018-04-01

    We review applications of point-contact Andreev-reflection spectroscopy to study elemental superconductors, where theoretical conditions for the smallness of the point-contact size with respect to the characteristic lengths in the superconductor can be satisfied. We discuss existing theoretical models and identify new issues that have to be solved, especially when applying this method to investigate more complex superconductors. We will also demonstrate that some aspects of point-contact Andreev-reflection spectroscopy still need to be addressed even when investigating ordinary metals.

  19. A collocation-shooting method for solving fractional boundary value problems

    NASA Astrophysics Data System (ADS)

    Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.

    2010-12-01

    In this paper, we discuss the numerical solution of special class of fractional boundary value problems of order 2. The method of solution is based on a conjugating collocation and spline analysis combined with shooting method. A theoretical analysis about the existence and uniqueness of exact solution for the present class is proven. Two examples involving Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.
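
    The shooting component of the method can be illustrated on an integer-order analogue (the fractional-derivative machinery of the paper is omitted here): solve y'' = -y with y(0) = 0, y(π/2) = 1 by treating the unknown initial slope s = y'(0) as a root-finding variable. The exact solution is y = sin t, so the correct slope is 1.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq


def shoot(s):
    """Integrate the IVP y'' = -y, y(0) = 0, y'(0) = s,
    and return the boundary residual y(pi/2) - 1."""
    sol = solve_ivp(lambda t, y: [y[1], -y[0]], (0.0, np.pi / 2), [0.0, s],
                    rtol=1e-10, atol=1e-12)
    return sol.y[0, -1] - 1.0


slope = brentq(shoot, 0.0, 2.0)   # residual changes sign on [0, 2]
print(slope)  # ~ 1.0, matching y = sin(t)
```

    In the paper's scheme, the inner integration is replaced by a collocation/spline solver for the fractional operator, but the outer root-finding on the boundary residual works the same way.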

  20. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    NASA Astrophysics Data System (ADS)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated from broadband sources degrades the angular-random-walk performance of a fiber-optic gyroscope dramatically. Many methods have been proposed and managed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated. Therefore, it is difficult for the existing RIN-suppression methods to achieve the optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by the raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.
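
    For a broadband (thermal-like) source, the excess-RIN power spectral density is governed by the autocorrelation of the optical spectrum, roughly S_RIN(f) ∝ ∫S(ν)S(ν+f)dν / (∫S(ν)dν)², so a narrower or rippled spectrum raises the low-frequency RIN. The sketch below deliberately omits numerical prefactors and polarization terms and only demonstrates the qualitative scaling (narrower spectrum → larger excess RIN) that spectral-error analyses of this kind build on.

```python
import numpy as np


def rin_psd_at_dc(spectrum, dnu):
    """Relative excess-RIN PSD near f = 0 from an optical spectrum sampled at
    spacing dnu (Hz): spectrum autocorrelation at zero lag, normalized by the
    squared total power. Prefactors (polarization, units) are omitted."""
    power = np.sum(spectrum) * dnu
    return np.sum(spectrum ** 2) * dnu / power ** 2


nu = np.linspace(-200e9, 200e9, 4001)           # optical-frequency offset grid
dnu = nu[1] - nu[0]
wide = np.exp(-nu ** 2 / (2 * (40e9) ** 2))     # Gaussian spectrum, sigma 40 GHz
narrow = np.exp(-nu ** 2 / (2 * (10e9) ** 2))   # Gaussian spectrum, sigma 10 GHz

# Narrower spectrum -> longer coherence time -> larger low-frequency excess RIN
ratio = rin_psd_at_dc(narrow, dnu) / rin_psd_at_dc(wide, dnu)
print(ratio)  # ~ 4, the inverse ratio of the spectral widths
```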

  1. Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures

    DTIC Science & Technology

    2010-10-01

    …minimum flaw size can be detected by the existing SHM-based monitoring methods. Sandwich panels with foam, WebCore and honeycomb structures were considered for use in this study. …Whether it be hat-stiffened, corrugated sandwich, honeycomb sandwich, or foam-filled sandwich, all composite structures have one basic handicap in… Eigenmode frequency…

  2. The development of an adolescent smoking cessation intervention--an Intervention Mapping approach to planning.

    PubMed

    Dalum, Peter; Schaalma, Herman; Kok, Gerjo

    2012-02-01

    The objective of this project was to develop a theory- and evidence-based adolescent smoking cessation intervention using both new and existing materials. We used the Intervention Mapping framework for planning health promotion programmes. Based on a needs assessment, we identified important and changeable determinants of cessation behaviour, specified change objectives for the intervention programme, selected theoretical change methods for accomplishing intervention objectives, and finally operationalized change methods into practical intervention strategies. We found that guided practice, modelling, self-monitoring, coping planning, consciousness raising, dramatic relief and decisional balance were suitable methods for adolescent smoking cessation. We selected behavioural journalism, guided practice and Motivational Interviewing as strategies in our intervention. Intervention Mapping helped us to develop a systematic adolescent smoking cessation intervention with a clear link between behavioural goals, theoretical methods, practical strategies and materials, and with a strong focus on implementation and recruitment. This paper does not present evaluation data.

  3. The construction of arbitrary order ERKN methods based on group theory for solving oscillatory Hamiltonian systems with applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Lijie, E-mail: bxhanm@126.com; Wu, Xinyuan, E-mail: xywu@nju.edu.cn

    In general, extended Runge–Kutta–Nyström (ERKN) methods are more effective than traditional Runge–Kutta–Nyström (RKN) methods in dealing with oscillatory Hamiltonian systems. However, the theoretical analysis for ERKN methods, such as the order conditions, the symplectic conditions and the symmetric conditions, becomes much more complicated than that for RKN methods. Therefore, it is a bottleneck to construct high-order ERKN methods efficiently. In this paper, we first establish the ERKN group Ω for ERKN methods and the RKN group G for RKN methods, respectively. We then rigorously show that ERKN methods are a natural extension of RKN methods, that is, there exists an epimorphism η of the ERKN group Ω onto the RKN group G. This epimorphism gives a global insight into the structure of the ERKN group by the analysis of its kernel and the corresponding RKN group G. Meanwhile, we establish a particular mapping φ of G into Ω so that each image element is an ideal representative element of the congruence class in Ω. Furthermore, an elementary theoretical analysis shows that this map φ can preserve many structure-preserving properties, such as the order, the symmetry and the symplecticity. From the epimorphism η together with its section φ, we may gain knowledge about the structure of the ERKN group Ω via the RKN group G. In light of the theoretical analysis of this paper, we obtain high-order structure-preserving ERKN methods in an effective way for solving oscillatory Hamiltonian systems. Numerical experiments are carried out and the results are very promising, which strongly support our theoretical analysis presented in this paper.

  4. The cosmological lithium problem revisited

    NASA Astrophysics Data System (ADS)

    Bertulani, C. A.; Mukhamedzhanov, A. M.; Shubhchintak

    2016-07-01

    After a brief review of the cosmological lithium problem, we report a few recent attempts to find theoretical solutions by our group at Texas A&M University (Commerce & College Station). We will discuss our studies on the theoretical description of electron screening, the possible existence of parallel universes of dark matter, and the use of non-extensive statistics during the Big Bang nucleosynthesis epoch. Last but not least, we discuss possible solutions within the nuclear physics realm. The impact of recent measurements of relevant nuclear reaction cross sections for the Big Bang nucleosynthesis based on indirect methods is also assessed. Although our attempts may not be able to explain the observed discrepancies between theory and observations, they suggest theoretical developments that can also be useful for stellar nucleosynthesis.

  5. Estimate of the critical exponents from the field-theoretical renormalization group: mathematical meaning of the 'Standard Values'

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogorelov, A. A.; Suslov, I. M.

    2008-06-15

    New estimates of the critical exponents have been obtained from the field-theoretical renormalization group using a new method for summing divergent series. The results almost coincide with the central values obtained by Le Guillou and Zinn-Justin (the so-called standard values), but have lower uncertainty. It has been shown that the usual field-theoretical estimates implicitly assume the smoothness of the coefficient functions. The latter assumption is open for discussion in view of the existence of an oscillating contribution to the coefficient functions. The appropriate interpretation of this contribution is necessary both for the estimation of the systematic errors of the standard values and for a further increase in accuracy.

  6. General Open Systems Theory and the Substrata-Factor Theory of Reading.

    ERIC Educational Resources Information Center

    Kling, Martin

    This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a series of…

  7. General Open Systems Theory and the Substrata-Factor Theory of Reading.

    ERIC Educational Resources Information Center

    Kling, Martin

    This study was designed to extend the generality of the Substrata-Factor Theory by two methods of investigation: (1) theoretically, to establish the validity of the hypothesis that an isomorphic relationship exists between the Substrata-Factor Theory and the General Open Systems Theory, and (2) experimentally, to discover through a…

  8. Research in Computational Astrobiology

    NASA Technical Reports Server (NTRS)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  9. Compensating for Electrode Polarization in Dielectric Spectroscopy Studies of Colloidal Suspensions: Theoretical Assessment of Existing Methods

    PubMed Central

    Chassagne, Claire; Dubois, Emmanuelle; Jiménez, María L.; van der Ploeg, J. P. M; van Turnhout, Jan

    2016-01-01

    Dielectric spectroscopy can be used to determine the dipole moment of colloidal particles, from which important interfacial electrokinetic properties, for instance their zeta potential, can be deduced. Unfortunately, dielectric spectroscopy measurements are hampered by electrode polarization (EP). In this article, we review several procedures to compensate for this effect. First, EP in electrolyte solutions is described: the complex conductivity is derived as a function of frequency for two cell geometries (planar and cylindrical) with blocking electrodes. The corresponding equivalent circuit for the electrolyte solution is given for each geometry. This equivalent circuit model is extended to suspensions. The complex conductivity of a suspension, in the presence of EP, is then calculated from the impedance. Different methods for compensating for EP are critically assessed, with the help of the theoretical findings. Their limits of validity are given in terms of characteristic frequencies. We can identify with one of these frequencies the frequency range within which data uncorrected for EP may be used to assess the dipole moment of colloidal particles. In order to extract this dipole moment from the measured data, two methods are reviewed: one is based on the use of existing models for the complex conductivity of suspensions, the other is the logarithmic derivative method. An extension to multiple relaxations of the logarithmic derivative method is proposed. PMID:27486575
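
    The logarithmic-derivative method mentioned above estimates the dielectric loss from the slope of the real permittivity, ε″_der(ω) = -(π/2) ∂ε′/∂ln ω, which suppresses the strong low-frequency EP contribution present in the directly measured loss. A minimal numpy sketch on a synthetic single-Debye spectrum, for which the analytic log-derivative is π Δε (ωτ)² / (1 + (ωτ)²)² (sharper than the true Debye loss, but EP-free):

```python
import numpy as np

# Synthetic single-Debye real permittivity: eps'(w) = eps_inf + d_eps/(1+(w tau)^2)
eps_inf, d_eps, tau = 3.0, 70.0, 1e-4           # illustrative values
w = np.logspace(1, 7, 2001)                     # angular frequency grid (rad/s)
eps_re = eps_inf + d_eps / (1.0 + (w * tau) ** 2)

# Logarithmic-derivative estimate of the loss: eps''_der = -(pi/2) d eps'/d ln(w)
eps_der = -(np.pi / 2.0) * np.gradient(eps_re, np.log(w))

# Analytic log-derivative of a Debye process, for comparison
analytic = np.pi * d_eps * (w * tau) ** 2 / (1.0 + (w * tau) ** 2) ** 2

print(np.max(np.abs(eps_der - analytic)))  # small numerical-derivative error
```

    Because the constant ε∞ and any ohmic-conduction term ∝ 1/ω contribute nothing (or only weakly) to ∂ε′/∂ln ω, the relaxation peak survives where the raw loss would be buried under EP.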

  10. Theoretical Sum Frequency Generation Spectroscopy of Peptides

    PubMed Central

    2015-01-01

    Vibrational sum frequency generation (SFG) has become a very promising technique for the study of proteins at interfaces, and it has been applied to important systems such as anti-microbial peptides, ion channel proteins, and human islet amyloid polypeptide. Moreover, so-called “chiral” SFG techniques, which rely on polarization combinations that generate strong signals primarily for chiral molecules, have proven to be particularly discriminatory of protein secondary structure. In this work, we present a theoretical strategy for calculating protein amide I SFG spectra by combining line-shape theory with molecular dynamics simulations. We then apply this method to three model peptides, demonstrating the existence of a significant chiral SFG signal for peptides with chiral centers, and providing a framework for interpreting the results on the basis of the dependence of the SFG signal on the peptide orientation. We also examine the importance of dynamical and coupling effects. Finally, we suggest a simple method for determining a chromophore’s orientation relative to the surface using ratios of experimental heterodyne-detected signals with different polarizations, and test this method using theoretical spectra. PMID:25203677

  11. Identification of Conserved Moieties in Metabolic Networks by Graph Theoretical Analysis of Atom Transition Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haraldsdóttir, Hulda S.; Fleming, Ronan M. T.

    Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. Finally, we also give examples of new applications made possible by elucidating the atomic structure of conserved moieties.
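
    The conservation relations described above are nonnegative integer vectors l satisfying lᵀS = 0, i.e. elements of the left null space of the stoichiometric matrix S. The sketch below computes one for a toy two-reaction network A → B → C (an illustration, not taken from the paper); in this trivial case the single conserved "moiety" is the whole molecule, carried intact through both reactions.

```python
import sympy as sp

# Stoichiometric matrix S (rows: metabolites A, B, C; columns: reactions A->B, B->C)
S = sp.Matrix([
    [-1,  0],   # A is consumed by reaction 1
    [ 1, -1],   # B is produced by reaction 1, consumed by reaction 2
    [ 0,  1],   # C is produced by reaction 2
])

# Left null space of S = null space of S^T; l^T S = 0 expresses moiety conservation
left_null = S.T.nullspace()
l = left_null[0]
print(l.T)  # Matrix([[1, 1, 1]]): one unit of the moiety in each of A, B, C
```

    As the abstract notes, stoichiometry alone only yields such vectors; pinning a vector to a specific group of atoms additionally requires the reaction atom mappings.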

  12. Identification of Conserved Moieties in Metabolic Networks by Graph Theoretical Analysis of Atom Transition Networks

    PubMed Central

    Haraldsdóttir, Hulda S.; Fleming, Ronan M. T.

    2016-01-01

    Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. We also give examples of new applications made possible by elucidating the atomic structure of conserved moieties. PMID:27870845

  13. Identification of Conserved Moieties in Metabolic Networks by Graph Theoretical Analysis of Atom Transition Networks

    DOE PAGES

    Haraldsdóttir, Hulda S.; Fleming, Ronan M. T.

    2016-11-21

    Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. Finally, we also give examples of new applications made possible by elucidating the atomic structure of conserved moieties.

  14. Identification of Conserved Moieties in Metabolic Networks by Graph Theoretical Analysis of Atom Transition Networks.

    PubMed

    Haraldsdóttir, Hulda S; Fleming, Ronan M T

    2016-11-01

    Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. We also give examples of new applications made possible by elucidating the atomic structure of conserved moieties.

  15. Hadron and Photon Production of J Particles and the Origin of J Particles

    DOE R&D Accomplishments Database

    Ting, S. C. C.

    1975-01-01

    There have been many theoretical speculations on the existence of long lived neutral particles with a mass larger than 10 GeV/c² which play the role in weak interactions that photons play in electromagnetic interactions. There is, however, no theoretical justification, and no predictions exist, for long lived particles in the mass region 1-10 GeV/c². Even though there is no strong theoretical justification for the existence of long lived particles at low masses, there is no experimental indication that they should not exist. Until last year no high sensitivity experiment had been done in this mass region.

  16. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend their study to calibration problems with stochastic physical data. We propose a novel method, called the L 2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
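
    The ordinary-least-squares calibration studied in the abstract fits the computer-model parameter by minimizing the squared discrepancy between physical observations and model output. The sketch below is a deliberately simple illustration with a hypothetical linear computer model f(x, θ) = θx (not from the paper), for which the OLS estimate has the closed form θ̂ = Σxᵢyᵢ / Σxᵢ².

```python
import numpy as np


def ols_calibrate(x, y):
    """OLS estimate of theta for the computer model f(x, theta) = theta * x:
    minimizes sum_i (y_i - theta * x_i)^2, closed form theta = <x, y>/<x, x>."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / (x @ x))


# "Physical" observations from an imperfect truth y = 2x + 0.1 sin(5x): the
# sin term is model discrepancy the computer model cannot represent.
x = np.linspace(0.1, 2.0, 50)
y = 2.0 * x + 0.1 * np.sin(5.0 * x)
theta_hat = ols_calibrate(x, y)
print(theta_hat)  # close to 2, slightly biased by the discrepancy term
```

    The paper's point is precisely about this regime: under model discrepancy the OLS estimate remains consistent for the L2-optimal parameter but is not efficient, which motivates the proposed L 2 calibration.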

  17. A General Symbolic Method with Physical Applications

    NASA Astrophysics Data System (ADS)

    Smith, Gregory M.

    2000-06-01

    A solution to the problem of unifying the General Relativistic and Quantum Theoretical formalisms is given which introduces a new non-axiomatic symbolic method and an algebraic generalization of the Calculus to non-finite symbolisms without reference to the concept of a limit. An essential feature of the non-axiomatic method is the inadequacy of any (finite) statements: Identifying this aspect of the theory with the "existence of an external physical reality" both allows for the consistency of the method with the results of experiments and avoids the so-called "measurement problem" of quantum theory.

  18. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
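
    The localization problem above can be posed, under a simple pinhole-camera model (an illustrative setup with an assumed focal length, not the paper's exact formulation), as nonlinear least squares over the six unknowns (p₀, v₀) of the ballistic trajectory p(t) = p₀ + v₀t + ½gt², with known gravity fixing the metric scale. A sketch on synthetic observations:

```python
import numpy as np
from scipy.optimize import least_squares

G = np.array([0.0, -9.81, 0.0])   # gravity (m/s^2); camera at origin, looking +z
F = 800.0                          # assumed pinhole focal length in pixels


def project(p):
    """Pinhole projection of 3D points (N, 3) to pixel coordinates (N, 2)."""
    return F * p[:, :2] / p[:, 2:3]


def trajectory(params, t):
    p0, v0 = params[:3], params[3:]
    return p0 + np.outer(t, v0) + 0.5 * np.outer(t ** 2, G)


t = np.linspace(0.0, 1.0, 20)
truth = np.array([-2.0, 1.0, 10.0, 4.0, 3.0, 2.0])   # p0 (m), v0 (m/s)
pixels = project(trajectory(truth, t))               # synthetic observations


def residuals(params):
    return (project(trajectory(params, t)) - pixels).ravel()


fit = least_squares(residuals, x0=truth + 0.3)       # perturbed initial guess
print(np.max(np.abs(fit.x - truth)))                 # small: parameters recovered
```

    With noise-free synthetic data the optimizer recovers the generating parameters, consistent with the paper's uniqueness conditions; the local-convexity analysis in the paper justifies using such a local optimizer on real, noisy tracks.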

  19. When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    PubMed

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and degree of presession anxiety (P < 0.001). Debriefing was found to significantly reduce negative emotion and enhance satisfaction. Sixty-nine percent of respondents indicated that mannequin death enhanced learning. These results were used to modify our framework. Using the previous approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  20. A Theoretical Framework for Integrating Creativity Development into Curriculum: The Case of a Korean Engineering School

    ERIC Educational Resources Information Center

    Lim, Cheolil; Lee, Jihyun; Lee, Sunhee

    2014-01-01

    Existing approaches to developing creativity rely on the sporadic teaching of creative thinking techniques or the engagement of learners in a creativity-promoting environment. Such methods cannot develop students' creativity as fully as a multilateral approach that integrates creativity throughout a curriculum. The purpose of this study was to…

  1. Optimum runway orientation relative to crosswinds

    NASA Technical Reports Server (NTRS)

    Falls, L. W.; Brown, S. C.

    1972-01-01

    Specific magnitudes of crosswinds may exist that could be constraints to the success of an aircraft mission such as the landing of the proposed space shuttle. A method is required to determine the orientation or azimuth of the proposed runway which will minimize the probability of certain critical crosswinds. Two procedures for obtaining the optimum runway orientation relative to minimizing a specified crosswind speed are described and illustrated with examples. The empirical procedure requires only hand calculations on an ordinary wind rose. The theoretical method utilizes wind statistics computed after the bivariate normal elliptical distribution is applied to a data sample of component winds. This method requires only the assumption that the wind components are bivariate normally distributed. This assumption seems to be reasonable. Studies are currently in progress for testing wind components for bivariate normality for various stations. The close agreement between the theoretical and empirical results for the example chosen substantiates the bivariate normal assumption.
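
    The theoretical procedure above can be sketched numerically: if the wind components (u, v) are bivariate normal, the crosswind for any runway azimuth is a linear combination of them and hence univariate normal, so the exceedance probability follows from the error function. A minimal Python sketch under hypothetical wind statistics (the mean, covariance, and crosswind limit below are illustrative, not values from the report):

```python
import math

def crosswind_exceedance(theta_deg, mu, cov, limit):
    """P(|crosswind| > limit) for a runway at azimuth theta_deg.

    Wind components (u, v) are assumed bivariate normal with mean `mu`
    and covariance `cov`; the crosswind c = -u*sin(t) + v*cos(t) is then
    univariate normal, so the probability follows from erf.
    """
    t = math.radians(theta_deg)
    a, b = -math.sin(t), math.cos(t)   # projection onto the cross-runway axis
    m = a * mu[0] + b * mu[1]
    var = a * a * cov[0][0] + 2 * a * b * cov[0][1] + b * b * cov[1][1]
    s = math.sqrt(var)
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2.0))))
    return 1.0 - (cdf(limit) - cdf(-limit))

def best_orientation(mu, cov, limit):
    """Scan integer azimuths (runway reversals are equivalent: 0-179 deg)."""
    return min(range(180), key=lambda th: crosswind_exceedance(th, mu, cov, limit))

# Hypothetical wind climatology (knots): mean wind mostly from one direction.
mu = (8.0, 2.0)                    # mean (u, v) components
cov = [[25.0, 5.0], [5.0, 16.0]]   # component covariance matrix
theta = best_orientation(mu, cov, limit=15.0)
```

The optimum azimuth roughly aligns the runway with the mean wind while accounting for the directional spread of the covariance.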

  2. Virtual-pulse time integral methodology: A new explicit approach for computational dynamics - Theoretical developments for general nonlinear structural dynamics

    NASA Technical Reports Server (NTRS)

    Chen, Xiaoqin; Tamma, Kumar K.; Sha, Desong

    1993-01-01

    The present paper describes a new explicit virtual-pulse time integral methodology for nonlinear structural dynamics problems. The purpose of the paper is to provide the theoretical basis of the methodology and to demonstrate the applicability of the proposed formulations to nonlinear dynamic structures. Unlike existing numerical methods such as direct time integration or mode superposition techniques, the proposed methodology offers new perspectives for development and possesses several unique and attractive computational characteristics. The methodology is tested and compared with the implicit Newmark method (trapezoidal rule) on nonlinear softening and hardening spring dynamic models. The numerical results indicate that the proposed explicit virtual-pulse time integral methodology is an excellent alternative for solving general nonlinear dynamic problems.

  3. The cosmological lithium problem revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertulani, C. A., E-mail: carlos.bertulani@tamuc.edu; Department of Physics and Astronomy, Texas A&M University, College Station, TX 75429; Mukhamedzhanov, A. M., E-mail: akram@comp.tamu.edu

    After a brief review of the cosmological lithium problem, we report a few recent attempts to find theoretical solutions by our group at Texas A&M University (Commerce & College Station). We discuss our studies on the theoretical description of electron screening, the possible existence of parallel universes of dark matter, and the use of non-extensive statistics during the Big Bang nucleosynthesis epoch. Last but not least, we discuss possible solutions within the nuclear physics realm. The impact of recent measurements of relevant nuclear reaction cross sections for Big Bang nucleosynthesis based on indirect methods is also assessed. Although our attempts may not be able to explain the observed discrepancies between theory and observations, they suggest theoretical developments that can also be useful for stellar nucleosynthesis.

  4. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
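
    The large periodic dithering idea can be illustrated with a toy simulation: a converter with randomly perturbed output levels (element mismatch) is exercised over one period of a large ramp dither that is subtracted again after conversion, so averaging preserves the signal while smoothing the mismatch error across several codes. A minimal sketch, not the authors' implementation; all numbers are illustrative:

```python
import random

random.seed(1)
LSB = 1.0
N_LEVELS = 64
# Element mismatch: each output level is off by a random amount (the INL).
err = [random.uniform(-0.2, 0.2) * LSB for _ in range(N_LEVELS)]

def convert(x, dither_amp=0.0, steps=8, mismatch=True):
    """Reconstruct x, optionally averaging over one period of a large
    ramp dither; the dither is subtracted after conversion, so the
    average keeps the signal but spreads the level-mismatch error."""
    total = 0.0
    for k in range(steps):
        d = dither_amp * (2 * k / (steps - 1) - 1) if dither_amp else 0.0
        code = min(N_LEVELS - 1, max(0, round((x + d) / LSB)))
        level = code * LSB + (err[code] if mismatch else 0.0)
        total += level - d
    return total / steps

# Measure only the mismatch-induced error: a mismatch-free converter run
# through the identical dither path serves as the reference.
xs = [1.0 + 0.37 * i for i in range(150)]
def mean_mismatch_error(amp):
    return sum(abs(convert(x, amp) - convert(x, amp, mismatch=False))
               for x in xs) / len(xs)

plain = mean_mismatch_error(0.0)
dithered = mean_mismatch_error(2.0 * LSB)   # dither spanning about +/-2 LSB
```

Averaging over the dither period visits several adjacent codes, so independent level errors partially cancel and the effective linearity improves.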

  5. Laboratory investigations of earthquake dynamics

    NASA Astrophysics Data System (ADS)

    Xia, Kaiwen

    In this thesis, earthquake dynamics are investigated through controlled laboratory experiments designed to mimic natural earthquake scenarios. The earthquake dynamic rupturing process itself is a complicated phenomenon, involving dynamic friction, wave propagation, and heat production. Because controlled experiments can produce results without the assumptions needed in theoretical and numerical analysis, the experimental method is advantageous over theoretical and numerical methods. Our laboratory fault is composed of carefully cut photoelastic polymer plates (Homalite-100, polycarbonate) held together by uniaxial compression. As a unique unit of the experimental design, a controlled exploding wire technique provides the triggering mechanism of laboratory earthquakes. Three important components of real earthquakes (i.e., a pre-existing fault, tectonic loading, and a triggering mechanism) correspond to and are simulated by frictional contact, uniaxial compression, and the exploding wire technique. Dynamic rupturing processes are visualized using the photoelastic method and are recorded via a high-speed camera. Our experimental methodology, which is full-field, in situ, and non-intrusive, has better control and diagnostic capacity compared to other existing experimental methods. Using this experimental approach, we have investigated several problems: the dynamics of earthquake faulting occurring along homogeneous faults separating identical materials, earthquake faulting along inhomogeneous faults separating materials with different wave speeds, and earthquake faulting along faults with a finite low-wave-speed fault core. We have observed supershear ruptures, sub-Rayleigh to supershear rupture transition, crack-like to pulse-like rupture transition, the self-healing (Heaton) pulse, and rupture directionality.

  6. Two dimensional kinetic analysis of electrostatic harmonic plasma waves

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fonseca-Pongutá, E. C.; Ziebell, L. F.; Gaelzer, R.

    2016-06-15

    Electrostatic harmonic Langmuir waves are virtual modes excited in weakly turbulent plasmas, first observed in early laboratory beam-plasma experiments as well as in rocket-borne active experiments in space. However, their unequivocal presence was confirmed through computer simulated experiments and subsequently theoretically explained. The peculiarity of harmonic Langmuir waves is that while their existence requires a nonlinear response, their excitation mechanism and subsequent early time evolution are governed by an essentially linear process. One of the unresolved theoretical issues concerns the role of the nonlinear wave-particle interaction process over longer evolution time periods. Another outstanding issue is that existing theories for these modes are limited to one-dimensional space. The present paper carries out a two-dimensional theoretical analysis of the fundamental and (first) harmonic Langmuir waves for the first time. The result shows that the harmonic Langmuir wave is essentially governed by a (quasi)linear process and that nonlinear wave-particle interaction plays no significant role in the time evolution of the wave spectrum. The numerical solutions of the two-dimensional wave spectra for the fundamental and harmonic Langmuir waves are also found to be consistent with those obtained by the direct particle-in-cell simulation method reported in the literature.

  7. Use abstracted patient-specific features to assist an information-theoretic measurement to assess similarity between medical cases

    PubMed Central

    Cao, Hui; Melton, Genevieve B.; Markatou, Marianthi; Hripcsak, George

    2008-01-01

    Inter-case similarity metrics can potentially help find similar cases from a case base for evidence-based practice. While several methods to measure similarity between cases have been proposed, developing an effective means for measuring patient case similarity remains a challenging problem. We were interested in examining how abstracting could potentially assist in computing case similarity. In this study, abstracted patient-specific features from medical records were used to improve an existing information-theoretic measurement. The developed metric, using a combination of abstracted disease, finding, procedure, and medication features, achieved correlations with expert judgments between 0.6012 and 0.6940. PMID:18487093
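
    One widely used information-theoretic similarity of this kind is Lin's measure, which scores two concepts by the information content of their lowest common ancestor in a concept hierarchy. A toy sketch (the hierarchy and counts are invented for illustration; the paper's actual metric and abstracted features differ):

```python
import math
from collections import Counter

# Toy "is-a" hierarchy with hypothetical corpus counts.
parent = {
    "pneumonia": "lung disease", "asthma": "lung disease",
    "lung disease": "disease", "diabetes": "disease", "disease": None,
}
counts = Counter({"pneumonia": 30, "asthma": 20, "diabetes": 40,
                  "lung disease": 10, "disease": 5})

def ancestors(c):
    """Concept itself plus its ancestors, ordered from most specific up."""
    out = []
    while c is not None:
        out.append(c)
        c = parent[c]
    return out

def p(concept):
    # A concept's probability mass includes all of its descendants.
    total = sum(counts.values())
    mass = sum(n for c, n in counts.items() if concept in ancestors(c))
    return mass / total

def lin_sim(c1, c2):
    """Lin similarity: 2*log p(lcs) / (log p(c1) + log p(c2))."""
    common = [a for a in ancestors(c1) if a in ancestors(c2)]
    lcs = common[0]   # lowest common ancestor (lists are ordered bottom-up)
    return 2 * math.log(p(lcs)) / (math.log(p(c1)) + math.log(p(c2)))
```

For example, two lung diseases score higher than a lung disease paired with an unrelated condition, whose only shared ancestor is the root.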

  8. Comparison of analytical and experimental subsonic steady and unsteady pressure distributions for a high-aspect-ratio-supercritical wing model with oscillating control surfaces

    NASA Technical Reports Server (NTRS)

    Mccain, W. E.

    1982-01-01

    The results of a comparative study using the unsteady aerodynamic lifting surface theory, known as the Doublet Lattice method, and experimental subsonic steady- and unsteady-pressure measurements, are presented for a high-aspect-ratio supercritical wing model. Comparisons of pressure distributions due to wing angle of attack and control-surface deflections were made. In general, good correlation existed between experimental and theoretical data over most of the wing planform. The more significant deviations found between experimental and theoretical data were in the vicinity of control surfaces for both static and oscillatory control-surface deflections.

  9. Relative loading on biplane wings

    NASA Technical Reports Server (NTRS)

    Diehl, Walter S

    1934-01-01

    Recent improvements in stress analysis methods have made it necessary to revise and to extend the loading curves to cover all conditions of flight. This report is concerned with a study of existing biplane data by combining the experimental and theoretical data to derive a series of curves from which the lift curves of the individual wings of a biplane may be obtained.

  10. Factors that influence utilisation of HIV/AIDS prevention methods among university students residing at a selected university campus.

    PubMed

    Ndabarora, Eléazar; Mchunu, Gugu

    2014-01-01

    Various studies have reported that university students, who are mostly young people, rarely use existing HIV/AIDS preventive methods. Although studies have shown that young university students have a high degree of knowledge about HIV/AIDS and HIV modes of transmission, they are still not utilising the existing HIV prevention methods and still engage in risky sexual practices favourable to HIV. Some variables, such as awareness of existing HIV/AIDS prevention methods, have been associated with utilisation of such methods. The study aimed to explore factors that influence use of existing HIV/AIDS prevention methods among university students residing on a selected campus, using the Health Belief Model (HBM) as a theoretical framework. A quantitative research approach and an exploratory-descriptive design were used to describe perceived factors that influence utilisation by university students of HIV/AIDS prevention methods. A total of 335 students completed online and manual questionnaires. Study findings showed that utilisation of HIV/AIDS prevention methods was mainly determined by awareness of the existing university-based HIV/AIDS prevention strategies. The most utilised prevention methods were voluntary counselling and testing services and free condoms. The perceived susceptibility and perceived threat of HIV/AIDS score was also found to correlate with the HIV risk index score, as well as with self-efficacy regarding condoms and their utilisation. Most HBM variables were not predictors of utilisation of HIV/AIDS prevention methods among students. Interventions aiming to improve the utilisation of HIV/AIDS prevention methods among students at the selected university should focus on removing identified barriers, promoting HIV/AIDS prevention services and providing appropriate resources to implement such programmes.

  11. Understanding the public's health problems: applications of symbolic interaction to public health.

    PubMed

    Maycock, Bruce

    2015-01-01

    Public health has typically investigated health issues using methods from the positivistic paradigm. Yet these approaches, although able to quantify a problem, may not be able to explain the social reasons why the problem exists or its impact on those affected. This article provides a brief overview of a sociological theory that offers methods and a theoretical framework that have proven useful in understanding public health problems and developing interventions. © 2014 APJPH.

  12. Effect of background dielectric on TE-polarized photonic bandgap of metallodielectric photonic crystals using Dirichlet-to-Neumann map method.

    PubMed

    Sedghi, Aliasghar; Rezaei, Behrooz

    2016-11-20

    Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.

  13. On existence of the σ(600) Its physical implications and related problems

    NASA Astrophysics Data System (ADS)

    Ishida, Shin

    1998-05-01

    We make a re-analysis of the I = 0 ππ scattering phase shift δ⁰₀ through a new method of S-matrix parametrization (IA; interfering-amplitude method), and obtain a result strongly suggesting the existence of the σ particle, the long-sought chiral partner of the π meson. Furthermore, through phenomenological analyses of typical production processes of the 2π system, the pp central collision and the J/Ψ→ωππ decay, applying an intuitive formula given as a sum of Breit-Wigner amplitudes (VMW; variant mass and width method), further evidence for the σ existence is given. The validity of the methods used in these analyses is investigated, using a simple field-theoretical model, from the general viewpoint of unitarity and the applicability of the final-state-interaction (FSI) theorem, especially in relation to the "universality" argument. It is shown that the IA and VMW are obtained as the physical-state representations of scattering and production amplitudes, respectively. The VMW is shown to be an effective method for obtaining resonance properties from production processes, which generally have unknown strong phases. The conventional analyses based on "universality" seem to be powerless for this purpose.

  14. Reconstructing a plasmonic metasurface for a broadband high-efficiency optical vortex in the visible frequency.

    PubMed

    Lu, Bing-Rui; Deng, Jianan; Li, Qi; Zhang, Sichao; Zhou, Jing; Zhou, Lei; Chen, Yifang

    2018-06-14

    Metasurfaces consisting of a two-dimensional metallic nano-antenna array are capable of transforming a Gaussian beam into an optical vortex with a helical phase front and a phase singularity by manipulating the polarization/phase state of light. This miniaturizes a laboratory-scale optical system into a wafer-scale component, opening up a new area for broad applications in optics. However, the low efficiency of converting circularly polarized light into a vortex beam hinders further development. This paper reports our recent success in improving the efficiency over a broad waveband at visible frequencies compared with existing work. The choice of material, the geometry and spatial organization of the meta-atoms, and the fabrication fidelity are theoretically investigated by the Jones matrix method. A theoretical conversion efficiency of over 40% across the visible wavelength range is obtained by systematic calculation using the finite-difference time-domain (FDTD) method. The fabricated metasurface, based on the theoretically optimized parameters, demonstrates a high-quality vortex at optical frequencies with a significantly enhanced efficiency of over 20% across a broad waveband.
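
    The polarization-conversion mechanism such metasurfaces exploit can be illustrated with elementary Jones calculus: a meta-atom acting as a rotated retarder converts circular polarization with efficiency sin²(φ/2) and imprints a geometric phase of twice its rotation angle. A generic sketch of this mechanism under one common sign convention (not the paper's FDTD model):

```python
import cmath
import math

def rotated_retarder(phi, theta):
    """Jones matrix of a retarder with retardance phi, fast axis at theta."""
    c, s = math.cos(theta), math.sin(theta)
    R = [[c, -s], [s, c]]
    Rt = [[c, s], [-s, c]]
    J0 = [[cmath.exp(-1j * phi / 2), 0], [0, cmath.exp(1j * phi / 2)]]
    def mat(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    return mat(mat(R, J0), Rt)

def apply(J, v):
    return [J[0][0] * v[0] + J[0][1] * v[1], J[1][0] * v[0] + J[1][1] * v[1]]

LCP = [1 / math.sqrt(2), 1j / math.sqrt(2)]    # left circular input
RCP = [1 / math.sqrt(2), -1j / math.sqrt(2)]   # cross-circular output

def conversion(phi, theta):
    """Complex amplitude of the cross-circular component for LCP input."""
    out = apply(rotated_retarder(phi, theta), LCP)
    return sum(r.conjugate() * o for r, o in zip(RCP, out))
```

A half-wave retardance (φ = π) gives full conversion, and rotating the element by 45° shifts the phase of the converted light by 90°, which is how such metasurfaces encode the helical phase front.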

  15. NLPIR: A Theoretical Framework for Applying Natural Language Processing to Information Retrieval.

    ERIC Educational Resources Information Center

    Zhou, Lina; Zhang, Dongsong

    2003-01-01

    Proposes a theoretical framework called NLPIR that integrates natural language processing (NLP) into information retrieval (IR) based on the assumption that there exists representation distance between queries and documents. Discusses problems in traditional keyword-based IR, including relevance, and describes some existing NLP techniques.…

  16. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.

  17. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.
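
    The nonoverlapping (Schur complement) idea can be made concrete on a tiny 1-D Poisson system: the interior unknowns of each subdomain are eliminated, a small interface system is solved, and the interiors are recovered independently by back-substitution. A minimal illustrative sketch (not the ELF implementation):

```python
def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (toy sizes only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# 1-D Poisson matrix on 5 unknowns; the middle node is the interface
# between two subdomains, the rest are subdomain interiors.
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
inner, iface = [0, 1, 3, 4], [2]

sub = lambda rows, cols: [[A[i][j] for j in cols] for i in rows]
AII, AIG = sub(inner, inner), sub(inner, iface)
AGI, AGG = sub(iface, inner), sub(iface, iface)
bI = [b[i] for i in inner]
bG = [b[i] for i in iface]

# Interface (Schur) system: (AGG - AGI AII^-1 AIG) xG = bG - AGI AII^-1 bI
w = solve(AII, bI)
cols = [solve(AII, [row[k] for row in AIG]) for k in range(len(iface))]
S = [[AGG[i][j] - sum(AGI[i][m] * cols[j][m] for m in range(len(inner)))
      for j in range(len(iface))] for i in range(len(iface))]
g = [bG[i] - sum(AGI[i][m] * w[m] for m in range(len(inner)))
     for i in range(len(iface))]
xG = solve(S, g)

# Back-substitute to recover the subdomain interiors independently.
rhs = [bI[m] - sum(AIG[m][k] * xG[k] for k in range(len(iface)))
       for m in range(len(inner))]
xI = solve(AII, rhs)
```

Because AII is block diagonal over the subdomains, the interior solves decouple, which is what makes the Schur complement attractive as a parallel preconditioner.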

  18. Evaluating the Potential of Virtual Simulations to Facilitate Professional Learning in Law: A Literature Review

    ERIC Educational Resources Information Center

    Thanaraj, Ann

    2016-01-01

    The use of virtual simulations in Legal Education as a method for learning is relatively rare despite much theoretical support that exists for the benefits in learning. There is also some apprehension on the use of technology in legal education. Both of these are likely due to the lack of solid evaluations concerning the overall impact in…

  19. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  20. Relation between scattering and production amplitude—Case of intermediate σ-particle in ππ-system—

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Muneyuki; Ishida, Shin; Ishida, Taku

    1998-05-29

    The relation between scattering and production amplitudes is investigated, using a simple field-theoretical model, from the general viewpoint of unitarity and the applicability of the final-state-interaction (FSI) theorem. The IA-method and the VMW-method, which are applied in our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical-state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method for obtaining resonance properties from general production processes, while the conventional analyses based on the 'universality' of the ππ-scattering amplitude are powerless for this purpose.

  1. Relation between scattering and production amplitude—Case of intermediate σ-particle in ππ-system—

    NASA Astrophysics Data System (ADS)

    Ishida, Muneyuki; Ishida, Shin; Ishida, Taku

    1998-05-01

    The relation between scattering and production amplitudes is investigated, using a simple field-theoretical model, from the general viewpoint of unitarity and the applicability of the final-state-interaction (FSI) theorem. The IA-method and the VMW-method, which are applied in our phenomenological analyses [2,3] suggesting the σ-existence, are obtained as the physical-state representations of scattering and production amplitudes, respectively. Moreover, the VMW-method is shown to be an effective method for obtaining resonance properties from general production processes, while the conventional analyses based on the "universality" of the ππ-scattering amplitude are powerless for this purpose.

  2. Off-diagonal expansion quantum Monte Carlo

    NASA Astrophysics Data System (ADS)

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  3. Off-diagonal expansion quantum Monte Carlo.

    PubMed

    Albash, Tameem; Wagenbreth, Gene; Hen, Itay

    2017-12-01

    We propose a Monte Carlo algorithm designed to simulate quantum as well as classical systems at equilibrium, bridging the algorithmic gap between quantum and classical thermal simulation algorithms. The method is based on a decomposition of the quantum partition function that can be viewed as a series expansion about its classical part. We argue that the algorithm not only provides a theoretical advancement in the field of quantum Monte Carlo simulations, but is optimally suited to tackle quantum many-body systems that exhibit a range of behaviors from "fully quantum" to "fully classical," in contrast to many existing methods. We demonstrate the advantages, sometimes by orders of magnitude, of the technique by comparing it against existing state-of-the-art schemes such as path integral quantum Monte Carlo and stochastic series expansion. We also illustrate how our method allows for the unification of quantum and classical thermal parallel tempering techniques into a single algorithm and discuss its practical significance.

  4. Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos

    NASA Astrophysics Data System (ADS)

    Meng, Lili; Li, Xinfu; Zhang, Guang

    2017-12-01

    In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady-state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, center manifold theory, bifurcation diagrams, and the largest Lyapunov exponent. The results show that the pitchfork bifurcation, the flip bifurcation, and chaos all exist. Clearly, these phenomena are caused by the simple diffusion. A theoretical analysis of the chaos is very important; unfortunately, no such results are available yet. However, some open problems are given.
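
    The largest Lyapunov exponent mentioned above can be estimated for a 1-D map as the trajectory average of log|f'(x)|. A sketch using the logistic map, whose exponent at r = 4 is exactly ln 2 (an illustration of the diagnostic itself, not of the paper's metapopulation model):

```python
import math

def largest_lyapunov(f, df, x0, n=200000, burn=1000):
    """Estimate the largest Lyapunov exponent of a 1-D map x -> f(x)
    as the average of log|f'(x)| along a long trajectory."""
    x = x0
    for _ in range(burn):      # discard the transient
        x = f(x)
    acc = 0.0
    for _ in range(n):
        acc += math.log(abs(df(x)))
        x = f(x)
    return acc / n

r = 4.0
f = lambda x: r * x * (1 - x)
df = lambda x: r * (1 - 2 * x)
lam = largest_lyapunov(f, df, x0=0.3)
# For r = 4 the exact value is ln 2, so lam should be close to 0.693.
```

A positive estimate signals chaos; near bifurcation points the exponent approaches zero, which is how Lyapunov diagnostics complement bifurcation diagrams.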

  5. Reverse Engineering Cellular Networks with Information Theoretic Methods

    PubMed Central

    Villaverde, Alejandro F.; Ross, John; Banga, Julio R.

    2013-01-01

    Building mathematical models of cellular networks lies at the core of systems biology. It involves, among other tasks, the reconstruction of the structure of interactions between molecular components, which is known as network inference or reverse engineering. Information theory can help in the goal of extracting as much information as possible from the available data. A large number of methods founded on these concepts have been proposed in the literature, not only in biology journals, but in a wide range of areas. Their critical comparison is difficult due to the different focuses and the adoption of different terminologies. Here we attempt to review some of the existing information theoretic methodologies for network inference, and clarify their differences. While some of these methods have achieved notable success, many challenges remain, among which we can mention dealing with incomplete measurements, noisy data, counterintuitive behaviour emerging from nonlinear relations or feedback loops, and computational burden of dealing with large data sets. PMID:24709703
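
    A simple representative of this family of methods is the relevance-network approach: estimate pairwise mutual information between variables and keep the edges whose MI exceeds a threshold. A minimal sketch on synthetic data (the histogram estimator, bin count, and threshold are illustrative choices, not a specific published tool):

```python
import math
import random
from collections import Counter

def mutual_info(xs, ys, bins=8):
    """Histogram estimate of mutual information (in nats) of two samples."""
    def disc(v):
        lo, hi = min(v), max(v)
        return [min(bins - 1, int((x - lo) / (hi - lo + 1e-12) * bins)) for x in v]
    dx, dy = disc(xs), disc(ys)
    n = len(xs)
    pxy, px, py = Counter(zip(dx, dy)), Counter(dx), Counter(dy)
    return sum(c / n * math.log((c / n) / (px[a] / n * py[b] / n))
               for (a, b), c in pxy.items())

def relevance_network(data, threshold):
    """Infer edges between variables whose pairwise MI exceeds a threshold."""
    names = list(data)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if mutual_info(data[a], data[b]) > threshold]

# Synthetic example: y depends on x; z is independent noise.
random.seed(0)
x = [random.gauss(0, 1) for _ in range(2000)]
y = [xi + 0.3 * random.gauss(0, 1) for xi in x]
z = [random.gauss(0, 1) for _ in range(2000)]
edges = relevance_network({"x": x, "y": y, "z": z}, threshold=0.3)
```

The dependent pair is recovered while the independent variable stays disconnected; the review's caveats (finite-sample bias, indirect dependencies, feedback loops) are precisely what more refined estimators and pruning schemes address.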

  6. Cross sections for electron scattering by carbon disulfide in the low- and intermediate-energy range

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brescansin, L. M.; Iga, I.; Lee, M.-T.

    2010-01-15

    In this work, we report a theoretical study on e{sup -}-CS{sub 2} collisions in the low- and intermediate-energy ranges. Elastic differential, integral, and momentum-transfer cross sections, as well as grand total (elastic + inelastic) and absorption cross sections, are reported in the 1-1000 eV range. A recently proposed complex optical potential composed of static, exchange, and correlation-polarization plus absorption contributions is used to describe the electron-molecule interaction. The Schwinger variational iterative method combined with the distorted-wave approximation is applied to calculate the scattering amplitudes. The comparison between our calculated results and the existing experimental and/or theoretical results is encouraging.

  7. A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX

    NASA Astrophysics Data System (ADS)

    Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang

    2010-07-01

    In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. Besides, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a Search Based Pricing Mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.
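
    Pricing-based rate allocation generally works by letting the base station adjust a price until the users' utility-maximizing demands exactly fill the link. A generic sketch with logarithmic utilities (this illustrates price-based allocation in general, not the paper's Search Based Pricing Mechanism):

```python
def allocate(weights, capacity, iters=60):
    """Price-based rate allocation: with log utility w_i*log(r_i), a user's
    demand at price p is r_i = w_i / p; the base station bisects on p
    until the total demand meets the link capacity."""
    lo, hi = 1e-9, 1e9
    for _ in range(iters):
        p = (lo + hi) / 2
        demand = sum(w / p for w in weights)
        if demand > capacity:
            lo = p      # price too low: demand exceeds capacity, raise it
        else:
            hi = p
    p = (lo + hi) / 2
    return [w / p for w in weights]

# Three users with declared weights sharing a 12-unit link.
rates = allocate([1.0, 2.0, 3.0], capacity=12.0)
```

The equilibrium price is sum(w)/capacity, so rates come out proportional to the declared weights; strategic-behavior safeguards, as in the paper, are what keep users from gaming those declarations.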

  8. Relativistic R-matrix calculations for the electron-impact excitation of neutral molybdenum

    NASA Astrophysics Data System (ADS)

    Smyth, R. T.; Johnson, C. A.; Ennis, D. A.; Loch, S. D.; Ramsbottom, C. A.; Ballance, C. P.

    2017-10-01

    A recent PISCES-B Mod experiment [Nishijima et al., J. Phys. B 43, 225701 (2010), 10.1088/0953-4075/43/22/225701] has revealed up to a factor of 5 discrepancy between measurement and the two existing theoretical models [Badnell et al., J. Phys. B 29, 3683 (1996), 10.1088/0953-4075/29/16/014; Bartschat et al., J. Phys. B 35, 2899 (2002), 10.1088/0953-4075/35/13/305], providing important diagnostics for Mo I. In the following paper we address this issue by employing relativistic atomic-structure and R-matrix scattering calculations to improve upon the available models for future applications, and we benchmark results against a recent Compact Toroidal Hybrid experiment [Hartwell et al., Fusion Sci. Technol. 72, 76 (2017), 10.1080/15361055.2017.1291046]. We determine the atomic structure of Mo I using grasp0, which implements the multiconfigurational Dirac-Fock method. Fine-structure energies and radiative transition rates are presented and compared to existing experimental and theoretical values. The electron-impact excitation of Mo I is investigated using the relativistic R-matrix method and the parallel versions of the Dirac atomic R-matrix codes. Electron-impact excitation cross sections are presented and compared to the few available theoretical cross sections. Throughout, our emphasis is on improving the results for the z ⁵P°₁,₂,₃ → a ⁵S₂, z ⁷P°₂,₃,₄ → a ⁷S₃, and y ⁷P°₂,₃,₄ → a ⁷S₃ electric dipole transitions of particular relevance for diagnostic work.

  9. Limits to biofuels

    NASA Astrophysics Data System (ADS)

    Johansson, S.

    2013-06-01

    Biofuel production is dependent upon agriculture and forestry systems, and expectations of future biofuel potential are high. A study of global food production and biofuel production from edible crops implies that biofuel produced from the edible parts of crops leads to a global deficit of food. This is rather well known, which is why there is a strong urge to develop biofuel systems that make use of residues or products from forests, to eliminate competition with food production. However, biofuel from agro-residues still depends upon the crop production system, and there are many parameters to deal with in order to investigate the sustainability of biofuel production. There is a theoretical limit to how much biofuel can be achieved globally from agro-residues, and this amounts to approximately one third of today's use of fossil fuels in the transport sector. In reality this theoretical potential may be eliminated by the energy use in the biomass-conversion technologies and production systems, depending on what type of assessment method is used. By surveying existing studies on biofuel conversion, the theoretical limit of biofuels from the 2010 agricultural production was found to be either non-existent, due to energy consumption in the conversion process, or up to 2-6000 TWh (biogas from residues and waste and ethanol from woody biomass) in the more optimistic cases.

  10. Trans−cis Switching Mechanisms in Proline Analogues and Their Relevance for the Gating of the 5-HT3 Receptor

    PubMed Central

    2009-01-01

    Trans−cis isomerization of a proline peptide bond is a potential mechanism to open the channel of the 5-HT3 receptor. Here, we have used the metadynamics method to theoretically explore such a mechanism. We have determined the free energy surfaces in aqueous solution of a series of dipeptides of proline analogues and evaluated the free energy difference between the cis and trans isomers. These theoretical results were then compared with data from mutagenesis experiments, in which the response of the 5-HT3 receptor was measured when the proline at the apex of the M2-M3 transmembrane domain loop was mutated. The strong correlation between the experimental and the theoretical data supports the existence of a trans−cis proline switch for opening the 5-HT3 receptor ion channel. PMID:19663504

  11. Trans-cis switching mechanisms in proline analogues and their relevance for the gating of the 5-HT3 receptor.

    PubMed

    Melis, Claudio; Bussi, Giovanni; Lummis, Sarah C R; Molteni, Carla

    2009-09-03

    Trans-cis isomerization of a proline peptide bond is a potential mechanism to open the channel of the 5-HT(3) receptor. Here, we have used the metadynamics method to theoretically explore such a mechanism. We have determined the free energy surfaces in aqueous solution of a series of dipeptides of proline analogues and evaluated the free energy difference between the cis and trans isomers. These theoretical results were then compared with data from mutagenesis experiments, in which the response of the 5-HT(3) receptor was measured when the proline at the apex of the M2-M3 transmembrane domain loop was mutated. The strong correlation between the experimental and the theoretical data supports the existence of a trans-cis proline switch for opening the 5-HT(3) receptor ion channel.

  12. Elastic Cherenkov effects in transversely isotropic soft materials-I: Theoretical analysis, simulations and inverse method

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping

    2016-11-01

    A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.
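    The inverse approach above hinges on how well-conditioned the parameter estimation is, which the authors assess via condition numbers. As a loose illustration (a generic sketch, not the paper's model), the code below computes the condition number of the Jacobian of a simplified transversely isotropic shear-wave speed model, v(θ) = sqrt((μ_L cos²θ + μ_T sin²θ)/ρ), for two hypothetical angle-sampling designs; the moduli, density, and angle grids are all assumptions.

```python
import numpy as np

# Illustrative conditioning analysis for a two-parameter inversion (generic
# sketch, not the paper's model): SH-wave speed in a TI medium approximated as
# v(theta) = sqrt((mu_L*cos^2 + mu_T*sin^2)/rho); all numbers are assumptions.
def sh_speed(theta, mu_L, mu_T, rho=1000.0):
    return np.sqrt((mu_L * np.cos(theta) ** 2 + mu_T * np.sin(theta) ** 2) / rho)

def jacobian(theta, mu_L, mu_T, rho=1000.0):
    """Analytic sensitivities of v with respect to (mu_L, mu_T)."""
    v = sh_speed(theta, mu_L, mu_T, rho)
    return np.column_stack([np.cos(theta) ** 2 / (2 * rho * v),
                            np.sin(theta) ** 2 / (2 * rho * v)])

mu_L, mu_T = 2e3, 8e3                                # Pa, hypothetical soft-tissue moduli
thetas_narrow = np.deg2rad(np.linspace(40, 50, 6))   # angles clustered near 45 degrees
thetas_wide = np.deg2rad(np.linspace(5, 85, 6))      # well-spread angles

kappa_narrow = np.linalg.cond(jacobian(thetas_narrow, mu_L, mu_T))
kappa_wide = np.linalg.cond(jacobian(thetas_wide, mu_L, mu_T))
```

    A large condition number flags sampling designs from which the two parameters cannot be stably separated, which is the kind of diagnostic the paper derives analytically.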

  13. The Simplest Chronoscope V: A Theory of Dual Primary and Secondary Reaction Time Systems.

    PubMed

    Montare, Alberto

    2016-12-01

    Extending work by Montare, visual simple reaction time, choice reaction time, discriminative reaction time, and overall reaction time scores obtained from college students by the simplest chronoscope (a falling meterstick) method were significantly faster as well as significantly less variable than scores of the same individuals from electromechanical reaction timers (machine method). Results supported the existence of dual reaction time systems: an ancient primary reaction time system theoretically activating the V5 parietal area of the dorsal visual stream that evolved to process significantly faster sensory-motor reactions to sudden stimulations arising from environmental objects in motion, and a secondary reaction time system theoretically activating the V4 temporal area of the ventral visual stream that subsequently evolved to process significantly slower sensory-perceptual-motor reactions to sudden stimulations arising from motionless colored objects. © The Author(s) 2016.

  14. Numerical detection of the Gardner transition in a mean-field glass former.

    PubMed

    Charbonneau, Patrick; Jin, Yuliang; Parisi, Giorgio; Rainone, Corrado; Seoane, Beatriz; Zamponi, Francesco

    2015-07-01

    Recent theoretical advances predict the existence, deep into the glass phase, of a novel phase transition, the so-called Gardner transition. This transition is associated with the emergence of a complex free energy landscape composed of many marginally stable sub-basins within a glass metabasin. In this study, we explore several methods to detect numerically the Gardner transition in a simple structural glass former, the infinite-range Mari-Kurchan model. The transition point is robustly located from three independent approaches: (i) the divergence of the characteristic relaxation time, (ii) the divergence of the caging susceptibility, and (iii) the abnormal tail in the probability distribution function of cage order parameters. We show that the numerical results are fully consistent with the theoretical expectation. The methods we propose may also be generalized to more realistic numerical models as well as to experimental systems.
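    Approach (ii), locating a transition from the divergence of a susceptibility, can be sketched on synthetic data (this is a generic illustration, not the Mari-Kurchan model; the functional form and noise level are assumptions): near a divergence, 1/χ vanishes approximately linearly, so fitting lines to 1/χ on both sides of the peak and intersecting them estimates the transition point.

```python
import numpy as np

rng = np.random.default_rng(0)
T = np.linspace(0.2, 1.0, 81)       # control parameter grid (synthetic)
T_c = 0.6                            # true transition point (synthetic)
chi = 1.0 / np.abs(T - T_c + 1e-3) + rng.normal(0, 0.05, T.size)  # diverging susceptibility

# 1/chi vanishes linearly at T_c: fit each side of the peak, take the crossing.
peak = np.argmax(chi)
left, right = slice(max(0, peak - 15), peak - 2), slice(peak + 3, peak + 16)
pl = np.polyfit(T[left], 1.0 / chi[left], 1)    # [slope, intercept] left of peak
pr = np.polyfit(T[right], 1.0 / chi[right], 1)  # [slope, intercept] right of peak
T_c_est = (pr[1] - pl[1]) / (pl[0] - pr[0])     # intersection of the two lines
```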

  15. Determination of the top quark mass circa 2013: methods, subtleties, perspectives

    NASA Astrophysics Data System (ADS)

    Juste, Aurelio; Mantry, Sonny; Mitov, Alexander; Penin, Alexander; Skands, Peter; Varnes, Erich; Vos, Marcel; Wimpenny, Stephen

    2014-10-01

    We present an up-to-date overview of the problem of top quark mass determination. We assess the need for precision in the top mass extraction in the LHC era together with the main theoretical and experimental issues arising in precision top mass determination. We collect and document existing results on top mass determination at hadron colliders and map the prospects for future precision top mass determination at e+e- colliders. We present a collection of estimates for the ultimate precision of various methods for top quark mass extraction at the LHC.

  16. A stage structure pest management model with impulsive state feedback control

    NASA Astrophysics Data System (ADS)

    Pang, Guoping; Chen, Lansun; Xu, Weijian; Fu, Gang

    2015-06-01

    A stage-structure pest management model with impulsive state feedback control is investigated. We obtain a sufficient condition for the existence of the order-1 periodic solution by differential-equation geometry theory and the successor function. Further, we obtain a new judgement method for the stability of the order-1 periodic solution of semi-continuous systems by referencing the stability analysis for limit cycles of continuous systems, which differs from the previous method of the analog of the Poincaré criterion. Finally, we numerically verify the theoretical results.
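    The notion of an order-1 periodic solution can be made concrete with a toy impulsive state feedback system (a plain logistic model, not the paper's stage-structured one; the growth rate, threshold, and kill fraction are assumptions): the pest grows until it hits a threshold ET, an impulse removes a fixed fraction, and the identical cycle repeats with a fixed period.

```python
import numpy as np

# Toy impulsive state feedback control: logistic pest growth x' = r*x*(1 - x/K);
# whenever x reaches the threshold ET, a spray resets x to (1-p)*ET.
# The resulting orbit is an order-1 periodic solution with a computable period.
r, K = 1.0, 10.0       # growth rate and carrying capacity (assumed)
ET, p = 6.0, 0.5       # action threshold and kill fraction (assumed)

def period_between_impulses():
    """Time for logistic growth from (1-p)*ET back up to ET (closed form)."""
    x0 = (1 - p) * ET
    # Solve x(t) = K*x0*e^{rt} / (K + x0*(e^{rt} - 1)) = ET for t
    return np.log((ET * (K - x0)) / (x0 * (K - ET))) / r

T = period_between_impulses()
```

    Because every post-impulse state is the same point (1-p)·ET, each cycle is identical, which is exactly the order-1 periodicity the abstract refers to.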

  17. Theory and Simulation of A Novel Viscosity Measurement Method for High Temperature Semiconductor

    NASA Technical Reports Server (NTRS)

    Lin, Bochuan; Li, Chao; Ban, Heng; Scripa, Rose; Zhu, Shen; Su, Ching-Hua; Lehoczky, S. L.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    The properties of molten semiconductors are good indicators of material structure transformation and hysteresis under temperature variations. Viscosity, one of the most important of these properties, is difficult to measure because of the high temperature, high pressure, and vapor toxicity of the melts. Recently, a novel method was developed by applying a rotating magnetic field to a melt sealed in a suspended quartz ampoule and measuring the transient torque exerted by the rotating melt flow on the ampoule wall. The method was designed to measure viscosity in a short time period, which is essential for evaluating temperature hysteresis. This paper compares the theoretical prediction of melt flow and ampoule oscillation with the experimental data. A theoretical model was established, and the coupled fluid flow and ampoule torsional vibration equations were solved numerically. The simulation results showed good agreement with experimental data. The results also showed that both electrical conductivity and viscosity could be calculated by fitting the theoretical results to the experimental data. The transient velocity of the melt caused by the rotating magnetic field was found to reach equilibrium in about half a minute, and the viscosity of the melt could be calculated from the amplitude of the oscillation. This would allow the measurement of viscosity in a minute or so, in contrast to the existing oscillating-cup method, which requires about an hour for one measurement.

  18. Consistency of Cluster Analysis for Cognitive Diagnosis: The Reduced Reparameterized Unified Model and the General Diagnostic Model.

    PubMed

    Chiu, Chia-Yi; Köhn, Hans-Friedrich

    2016-09-01

    The asymptotic classification theory of cognitive diagnosis (ACTCD) provided the theoretical foundation for using clustering methods that do not rely on a parametric statistical model for assigning examinees to proficiency classes. Like general diagnostic classification models, clustering methods can be useful in situations where the true diagnostic classification model (DCM) underlying the data is unknown and possibly misspecified, or the items of a test conform to a mix of multiple DCMs. Clustering methods can also be an option when fitting advanced and complex DCMs encounters computational difficulties. These can range from the use of excessive CPU times to plain computational infeasibility. However, the propositions of the ACTCD have only been proven for the Deterministic Input Noisy Output "AND" gate (DINA) model and the Deterministic Input Noisy Output "OR" gate (DINO) model. For other DCMs, there does not exist a theoretical justification to use clustering for assigning examinees to proficiency classes. But if clustering is to be used legitimately, then the ACTCD must cover a larger number of DCMs than just the DINA model and the DINO model. Thus, the purpose of this article is to prove the theoretical propositions of the ACTCD for two other important DCMs, the Reduced Reparameterized Unified Model and the General Diagnostic Model.
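    The clustering route that the ACTCD justifies can be sketched on synthetic DINA-style data (illustrative only, not the paper's proofs; the attribute count, slip/guess rates, and Q-matrix are assumptions): examinees' attribute-wise mean scores form well-separated clusters that recover the proficiency classes.

```python
import numpy as np

# Synthetic ACTCD-style demo: simulate DINA responses for K = 2 attributes,
# then assign examinees to 2^K proficiency classes by clustering mean scores.
rng = np.random.default_rng(1)
classes = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # 2^K proficiency classes
alpha = np.repeat(classes, 50, axis=0)                 # true attribute profiles
q = np.array([[1, 0]] * 10 + [[0, 1]] * 10)            # Q-matrix: 10 items/attribute

eta = (alpha @ q.T == q.sum(axis=1))                   # DINA ideal responses
p = np.where(eta, 0.9, 0.1)                            # slip = guess = 0.1 (assumed)
X = (rng.random(p.shape) < p).astype(float)            # observed 0/1 responses

W = X @ q / q.sum(axis=0)                              # attribute-wise mean scores

# Tiny k-means on W, initialized at the 2^K ideal profiles
centers = classes * 0.8 + 0.1
for _ in range(20):
    labels = np.argmin(((W[:, None, :] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([W[labels == j].mean(axis=0) for j in range(4)])

acc = (np.round(centers[labels]) == alpha).mean()      # profile recovery rate
```

    No parametric DCM is fit anywhere; the classification comes entirely from the geometry of the score statistics, which is the point the ACTCD formalizes.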

  19. From atomistic interfaces to dendritic patterns

    NASA Astrophysics Data System (ADS)

    Galenko, P. K.; Alexandrov, D. V.

    2018-01-01

    Transport processes around phase interfaces, together with thermodynamic properties and kinetic phenomena, control the formation of dendritic patterns. Using the thermodynamic and kinetic data of phase interfaces obtained on the atomic scale, one can analyse the formation of a single dendrite and the growth of a dendritic ensemble. This is the result of recent progress in theoretical methods and computational algorithms run on powerful computer clusters. Great benefits can be attained from the development of micro-, meso- and macro-levels of analysis when investigating the dynamics of interfaces, interpreting experimental data and designing the macrostructure of samples. The review and research articles in this theme issue cover the spectrum of scales (from nano- to macro-length scales) in order to exhibit recently developing trends in the theoretical analysis and computational modelling of dendrite pattern formation. Atomistic modelling, the flow effect on interface dynamics, the transition from diffusion-limited to thermally controlled growth existing at a considerable driving force, two-phase (mushy) layer formation, the growth of eutectic dendrites, the formation of a secondary dendritic network due to coalescence, computational methods, including boundary integral and phase-field methods, and experimental tests for theoretical models: all these themes are highlighted in the present issue. This article is part of the theme issue 'From atomistic interfaces to dendritic patterns'.

  20. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    PubMed

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
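    The closed-form efficiency argument for Kronecker kernel ridge regression can be checked numerically: solving (K_v ⊗ K_u + λI)·vec(A) = vec(Y) directly agrees with the shortcut that only needs the two small eigendecompositions. A minimal sketch on toy data (the RBF kernels, sizes, and λ are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-wise point sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy pairwise data: label Y[i, j] for the object pair (u_i, v_j)
U, V = rng.normal(size=(8, 3)), rng.normal(size=(6, 2))
Y = rng.normal(size=(8, 6))
lam = 0.1
Ku, Kv = rbf(U, U), rbf(V, V)

# Naive solve of the full Kronecker system (Kv ⊗ Ku + lam*I) vec(A) = vec(Y)
A_naive = np.linalg.solve(np.kron(Kv, Ku) + lam * np.eye(48),
                          Y.reshape(-1, order='F')).reshape(8, 6, order='F')

# Efficient closed form: with Ku = Qu Su Qu^T and Kv = Qv Sv Qv^T, the system
# Ku A Kv + lam*A = Y decouples in the joint eigenbasis.
su, Qu = np.linalg.eigh(Ku)
sv, Qv = np.linalg.eigh(Kv)
A_fast = Qu @ ((Qu.T @ Y @ Qv) / (np.outer(su, sv) + lam)) @ Qv.T
```

    The eigendecomposition route replaces one dense solve of size nm with two of size n and m, which is the computational payoff the paper's closed-form instantiations exploit.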

  1. Vibrational cross sections for positron scattering by nitrogen molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazon, K. T.; Tenfen, W.; Michelin, S. E.

    2010-09-15

    We present a systematic study of low-energy positron collision with nitrogen molecules. Vibrational elastic and excitation cross sections are calculated using the multichannel version of the continued fractions method in the close-coupling scheme for the positron incident energy up to 20 eV. The interaction potential is treated within the static-correlation-polarization approximation. The comparison of our calculated data with existing theoretical and experimental results is encouraging.

  2. Microwave Tubes.

    DTIC Science & Technology

    1980-06-02

    better possibilities). It should be stated, also, that there exist for both the TWT and the klystron quite straightforward theoretical approaches which can...methods of large-signal calculations for coupled-cavity TWTs. Copies of this internal memo can be made available to any recipient of this report. ...electrodes and magnetic fields. The magnetic fields, in some cases (klystrons and TWTs), serve merely to focus the beam, that is, confine the electron

  3. Developing a theoretical framework for complex community-based interventions.

    PubMed

    Angeles, Ricardo N; Dolovich, Lisa; Kaczorowski, Janusz; Thabane, Lehana

    2014-01-01

    Applying existing theories to research, in the form of a theoretical framework, is necessary to advance knowledge from what is already known toward the next steps to be taken. This article proposes a guide on how to develop a theoretical framework for complex community-based interventions, using the Cardiovascular Health Awareness Program as an example. Developing a theoretical framework starts with identifying the intervention's essential elements. Subsequent steps include the following: (a) identifying and defining the different variables (independent, dependent, mediating/intervening, moderating, and control); (b) postulating mechanisms by which the independent variables lead to the dependent variables; (c) identifying existing theoretical models supporting the theoretical framework under development; (d) scripting the theoretical framework into a figure or sets of statements as a series of hypotheses, if/then logic statements, or a visual model; (e) content and face validation of the theoretical framework; and (f) revising the theoretical framework. In our example, we combined the "diffusion of innovation theory" and the "health belief model" to develop our framework. Using the Cardiovascular Health Awareness Program as the model, we demonstrated a stepwise process of developing a theoretical framework. The challenges encountered are described, and an overview of the strategies employed to overcome these challenges is presented.

  4. The Implementation of Pharmacy Competence Teaching in Estonia

    PubMed Central

    Volmer, Daisy; Sepp, Kristiina; Veski, Peep; Raal, Ain

    2017-01-01

    Background: The PHAR-QA, "Quality Assurance in European Pharmacy Education and Training", project has produced the European Pharmacy Competence Framework (EPCF). The aim of this study was to evaluate the existing pharmacy programme at the University of Tartu using the EPCF. Methods: A qualitative assessment of the pharmacy programme by a convenience sample (n = 14) representing different pharmacy stakeholders in Estonia. EPCF competency levels were determined using a five-point scale adopted from the Dutch competency standards framework. Mean competency-level scores given by academia and by other pharmacy stakeholders were compared. Results: Medical and social sciences, pharmaceutical technology, and the pharmacy internship were the subject areas contributing most frequently to EPCF competencies. In almost all domains, the competency level was rated higher by academia than by other pharmacy stakeholders. Despite sound theoretical knowledge, the competency level at graduation could be insufficient for independent professional practice. Other pharmacy stakeholders recommended improving the practical application of theoretical knowledge, especially to strengthen patient-care competencies. Conclusions: The EPCF was utilized to evaluate the professional competencies of entry-level pharmacists who have completed a traditional pharmacy curriculum. More efficient training methods and the involvement of practicing specialists were suggested to close the gaps in the existing pharmacy programme. The applicability of competence teaching in Estonia requires more research and collaborative communication within the pharmacy sector. PMID:28970430

  5. Reference Specimen for Nondestructive Evaluation: Characterization of the Oxide Layer of a Cold Shot in Inconel 600

    NASA Astrophysics Data System (ADS)

    Saletes, I.; Filleter, T.; Goldbaum, D.; Chromik, R. R.; Sinclair, A. N.

    2015-02-01

    The presence of a cold shot in an aircraft turbine blade can lead to the catastrophic failure of the blade and ultimately to the failure of the power plant. Currently, no nondestructive evaluation (NDE) method exists to detect this kind of defect. This deficiency is primarily due to the fact that the only known cold shot defects in existence are those found in failed blades. Therefore, in order to develop effective NDE methods, reference specimens are needed which mimic the embedded oxide layer that is a primary distinguishing feature of a cold shot. Here, we present a procedure to synthetically reproduce the features of a real cold shot in Inconel 600 and the precise characterization of this oxide layer as a reference specimen suitable for NDE evaluation. As a first step to develop a suitable NDE technique, high-frequency ultrasound simulations are considered. A theoretical 1-D model is developed in order to quantify the multiple reflection-transmission trajectory of the acoustic wave in the reference specimen. This paper also presents an experimental determination of the density and the Young's modulus of the Inconel 600 oxide, which are required as inputs to calculate the acoustic impedance used in the theoretical model.
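    The density and Young's modulus measured above feed a standard 1-D normal-incidence reflection-transmission calculation via the acoustic impedance Z = ρc. A hedged sketch with placeholder material values (not the paper's measured data; the thin-rod approximation c = sqrt(E/ρ) is an assumption):

```python
import numpy as np

# Illustrative 1-D normal-incidence coefficients at a metal/oxide interface.
# Material values are hypothetical placeholders, not the paper's measurements.
def impedance(rho, E):
    """Acoustic impedance Z = rho*c with c = sqrt(E/rho) (thin-rod approximation)."""
    return rho * np.sqrt(E / rho)

Z_metal = impedance(rho=8470.0, E=214e9)   # Inconel-600-like values (assumed)
Z_oxide = impedance(rho=5200.0, E=150e9)   # oxide layer values (assumed)

# Pressure reflection/transmission coefficients, incidence from the metal side;
# energy balance: R**2 + (Z_metal/Z_oxide)*T**2 == 1
R = (Z_oxide - Z_metal) / (Z_oxide + Z_metal)
T = 2 * Z_oxide / (Z_oxide + Z_metal)
```

    Chaining such coefficients across the metal/oxide/metal stack gives the multiple reflection-transmission trajectory that the paper's 1-D model quantifies.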

  6. The polarized Debye sheath effect on Kadomtsev-Petviashvili electrostatic structures in strongly coupled dusty plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shahmansouri, M.; Alinejad, H.

    2015-04-15

    We give a theoretical investigation on the dynamics of nonlinear electrostatic waves in a strongly coupled dusty plasma with strong electrostatic interaction between dust grains in the presence of the polarization force (i.e., the force due to the polarized Debye sheath). Adopting a reductive perturbation method, we derived a three-dimensional Kadomtsev-Petviashvili equation that describes the evolution of weakly nonlinear electrostatic localized waves. The energy integral equation is used to study the existence domains of the localized structures. The analysis provides the localized structure existence region, in terms of the effects of strong interaction between the dust particles and polarization force.

  7. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of the neural networks. We then apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies of Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).

  8. Theoretical explanations for maintenance of behaviour change: a systematic review of behaviour theories

    PubMed Central

    Kwasnicka, Dominika; Dombrowski, Stephan U; White, Martin; Sniehotta, Falko

    2016-01-01

    ABSTRACT Background: Behaviour change interventions are effective in supporting individuals in achieving temporary behaviour change. Behaviour change maintenance, however, is rarely attained. The aim of this review was to identify and synthesise current theoretical explanations for behaviour change maintenance to inform future research and practice. Methods: Potentially relevant theories were identified through systematic searches of electronic databases (Ovid MEDLINE, Embase, PsycINFO). In addition, an existing database of 80 theories was searched, and 25 theory experts were consulted. Theories were included if they formulated hypotheses about behaviour change maintenance. Included theories were synthesised thematically to ascertain overarching explanations for behaviour change maintenance. Initial theoretical themes were cross-validated. Findings: One hundred and seventeen behaviour theories were identified, of which 100 met the inclusion criteria. Five overarching, interconnected themes representing theoretical explanations for behaviour change maintenance emerged. Theoretical explanations of behaviour change maintenance focus on the differential nature and role of motives, self-regulation, resources (psychological and physical), habits, and environmental and social influences from initiation to maintenance. Discussion: There are distinct patterns of theoretical explanations for behaviour change and for behaviour change maintenance. The findings from this review can guide the development and evaluation of interventions promoting maintenance of health behaviours and help in the development of an integrated theory of behaviour change maintenance. PMID:26854092

  9. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

    PubMed

    Varadarajan, Divya; Haldar, Justin P

    2017-11-01

    The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
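    The Fourier relationship at the heart of this framework is easy to exercise in 1-D: for free Gaussian diffusion the q-space signal is E(q) = exp(-2π²q²σ²) with σ² = 2DΔ, and linear Fourier reconstruction recovers a Gaussian EAP with that variance. A synthetic sketch (the sampling grid, diffusivity, and diffusion time are assumptions, not values from the paper):

```python
import numpy as np

# 1-D synthetic check of the signal/EAP Fourier pair for free Gaussian diffusion.
N, dq = 512, 1.0                           # q samples and spacing (1/mm), assumed
D, Delta = 1e-3, 0.05                      # diffusivity (mm^2/s), diffusion time (s)
sigma2 = 2 * D * Delta                     # EAP variance for free diffusion

q = np.fft.fftfreq(N, d=1.0 / (N * dq))    # q grid: k * dq, fftfreq ordering
E = np.exp(-2 * np.pi**2 * q**2 * sigma2)  # Gaussian q-space signal

eap = (N * dq * np.fft.ifft(E)).real       # linear Fourier reconstruction of the EAP
r = np.fft.fftfreq(N, d=dq)                # displacement grid (mm)

var_est = (eap * r**2).sum() / eap.sum()   # second moment of the recovered EAP
```

    Because the estimator here is linear in the data, exactly the sampling-theory questions the paper raises (aliasing, resolution, noise amplification) apply term by term.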

  10. On the existence of the σ(600): its physical implications and related problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Shin

    1998-05-29

    We make a re-analysis of the I=0 ππ scattering phase shift δ₀⁰ through a new method of S-matrix parametrization (IA, the interfering amplitude method), and show a result strongly suggesting the existence of the σ particle, the long-sought chiral partner of the π meson. Furthermore, through phenomenological analyses of typical production processes of the 2π system, the pp central collision and the J/ψ → ωππ decay, by applying an intuitive formula as a sum of Breit-Wigner amplitudes (VMW, the variant mass and width method), other evidence for the σ existence is given. The validity of the methods used in the above analyses is investigated, using a simple field-theoretical model, from the general viewpoint of unitarity and the applicability of the final-state interaction (FSI) theorem, especially in relation to the "universality" argument. It is shown that the IA and VMW are obtained as the physical-state representations of scattering and production amplitudes, respectively. The VMW is shown to be an effective method to obtain resonance properties from production processes, which generally have unknown strong phases. The conventional analyses based on "universality" seem to be powerless for this purpose.
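    The VMW formula, a coherent sum of Breit-Wigner amplitudes with free production strengths and phases, is simple to sketch numerically (the resonance masses, widths, strengths, and phases below are illustrative placeholders, not the paper's fitted values):

```python
import numpy as np

# VMW-style production intensity: |sum_k r_k * exp(i*theta_k) * BW_k(s)|^2.
# All resonance parameters here are hypothetical placeholders (GeV units).
def breit_wigner(s, m, gamma):
    """Relativistic-style Breit-Wigner amplitude, unit magnitude at s = m^2."""
    return (m * gamma) / (m**2 - s - 1j * m * gamma)

def production_amplitude(s, resonances):
    """resonances: list of (strength r, phase theta, mass m, width gamma)."""
    return sum(r * np.exp(1j * th) * breit_wigner(s, m, g)
               for r, th, m, g in resonances)

# A broad sigma(600)-like term plus a narrower second resonance (both assumed)
res = [(1.0, 0.0, 0.60, 0.40), (0.5, 1.2, 0.98, 0.07)]
E = np.linspace(0.3, 1.2, 400)                  # pi-pi invariant mass grid (GeV)
intensity = np.abs(production_amplitude(E**2, res)) ** 2
peak_mass = E[np.argmax(intensity)]
```

    The free per-resonance phases θ_k are exactly the "unknown strong phases" of production processes that the abstract says VMW accommodates, in contrast to elastic scattering where unitarity fixes the relative phases.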

  11. Does Prop-2-ynylideneamine, HC≡CCH=NH, Exist in Space? A Theoretical and Computational Investigation

    PubMed Central

    Osman, Osman I.; Elroby, Shaaban A.; Aziz, Saadullah G.; Hilal, Rifaat H.

    2014-01-01

    MP2, DFT and CCSD methods with 6-311++G** and aug-cc-pvdz basis sets have been used to probe the structural changes and relative energies of E-prop-2-ynylideneamine (I), Z-prop-2-ynylideneamine (II), prop-1,2-diene-1-imine (III) and vinyl cyanide (IV). The energy near-equivalence and the origin of the preference among isomers and tautomers were investigated by NBO calculations using HF and B3LYP methods with 6-311++G** and aug-cc-pvdz basis sets. All substrates have Cs symmetry. The optimized geometries were found to depend mainly on the theoretical method. All selected levels of theory computed an I/II isomerization energy (ΔE) of 1.707 to 3.707 kJ/mol in favour of II at 298.15 K. MP2 and CCSD methods clearly indicated the preference of II over III, while the B3LYP functional predicted nearly equal total energies. All tested levels of theory yielded a global II/IV tautomerization energy (ΔE) of 137.3-148.4 kJ/mol in support of IV at 298.15 K. The negative values of ΔS indicated that IV is favoured at low temperature. At high temperature, a reverse tautomerization becomes spontaneous and II is preferred. The existence of II in space is discussed through the interpretation and analysis of the thermodynamic and kinetic studies of this tautomerization reaction and the presence of similar compounds in the Interstellar Medium (ISM). PMID:24950178

  12. Program of Research in Flight Dynamics, The George Washington University at NASA Langley Research Center

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick C. (Technical Monitor); Klein, Vladislav

    2005-01-01

    The program objectives are fully defined in the original proposal entitled Program of Research in Flight Dynamics in GW at NASA Langley Research Center, which originated on March 20, 1975, and in the renewals of the research program from January 1, 2003 to September 30, 2005. The program in its present form includes three major topics: 1. the improvement of existing methods and development of new methods for wind tunnel and flight data analysis, 2. the application of these methods to wind tunnel and flight test data obtained from advanced airplanes, 3. the correlation of flight results with wind tunnel measurements and theoretical predictions.

  13. Evaluation of the flexibility of protective gloves.

    PubMed

    Harrabi, Lotfi; Dolez, Patricia I; Vu-Khanh, Toan; Lara, Jaime

    2008-01-01

    Two mechanical methods have been developed for characterizing the flexibility of protective gloves, a key factor affecting their usefulness for workers. The principle of the first method is similar to the ASTM D 4032 standard for fabric stiffness and simulates the deformations encountered by gloves that are not tightly fitted to the hand. The second method characterizes the flexibility of gloves that are worn tightly fitted. Its validity was verified theoretically for elastomer materials. Both methods should prove valuable tools for protective glove manufacturers, allowing them to characterize the flexibility of existing products and to develop new ones that better fit workers' needs.

  14. On the exactness of effective Floquet Hamiltonians employed in solid-state NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Garg, Rajat; Ramachandran, Ramesh

    2017-05-01

    Development of theoretical models based on analytic theory has remained an active pursuit in molecular spectroscopy for its utility both in the design of experiments and in the interpretation of spectroscopic data. In particular, the role of "effective Hamiltonians" in the evolution of theoretical frameworks is well known across all forms of spectroscopy. Nevertheless, constant revalidation of the approximations employed in the theoretical frameworks is necessitated by steady improvements on the experimental front, in addition to the complexity posed by the systems under study. In this article, we confine our discussion to the derivation of effective Floquet Hamiltonians based on the contact transformation procedure. While the importance of effective Floquet Hamiltonians in the qualitative description of NMR experiments has been realized in simpler cases, their extension to the quantification of spectral data deserves a cautious approach. With this objective, the validity of the approximations employed in deriving the effective Floquet Hamiltonians is re-examined through comparison with exact numerical methods under differing experimental conditions. The limitations of the existing analytic methods are outlined along with remedial measures for improving the accuracy of the derived effective Floquet Hamiltonians.

  15. Shifting attention from objective risk factors to patients' self-assessed health resources: a clinical model for general practice.

    PubMed

    Hollnagel, H; Malterud, K

    1995-12-01

    The study was designed to present and apply theoretical and empirical knowledge for the construction of a clinical model intended to shift the attention of the general practitioner from objective risk factors to self-assessed health resources in male and female patients. Selected theoretical models of personal health resources were reviewed, discussed and analysed, assessing existing theories according to their emphasis on self-assessed vs. doctor-assessed health resources, specific health resources vs. life and coping in general, abstract vs. clinically applicable theory, and whether a gender perspective was explicitly included. Relevant theoretical models of health and coping (salutogenesis, coping and social support, control/demand, locus of control, the health belief model, quality of life) and the perspective of the underprivileged Other (critical theory, feminist standpoint theory, the patient-centred clinical method) were presented and assessed. Components from Antonovsky's salutogenic perspective and McWhinney's patient-centred clinical method, supported by gender perspectives, were integrated into a clinical model, which is presented. General practitioners are recommended to shift their attention from objective risk factors to self-assessed health resources by means of the clinical model. The relevance and feasibility of the model should be explored in empirical research.

  16. Theoretical study of aerodynamic characteristics of wings having vortex flow

    NASA Technical Reports Server (NTRS)

    Reddy, C. S.

    1979-01-01

    The aerodynamic characteristics of slender wings having separation-induced vortex flows are investigated by employing three different computer codes: the free vortex sheet, quasi vortex lattice, and suction analogy methods. Their capabilities and limitations are examined, and modifications are discussed. Flat wings of different configurations (arrow, delta, and diamond shapes), as well as cambered delta wings, are studied. The effect of notch ratio on the load distributions and longitudinal characteristics of a family of arrow and diamond wings is explored. The sectional lift coefficients and accumulated span loadings are determined for an arrow wing and are seen to be unusual in comparison with attached-flow results. The theoretically predicted results are compared with existing experimental values.

  17. Benchmark results and theoretical treatments for valence-to-core x-ray emission spectroscopy in transition metal compounds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mortensen, D. R.; Seidler, G. T.; Kas, Joshua J.

    We report measurement of the valence-to-core (VTC) region of the K-shell x-ray emission spectra from several Zn and Fe inorganic compounds, and their critical comparison with several existing theoretical treatments. We find generally good agreement between the respective theories and experiment, and in particular find an important admixture of dipole and quadrupole character for Zn materials that is much weaker in Fe-based systems. These results on materials whose simple crystal structures should not, a priori, pose deep challenges to theory will prove useful in guiding the further development of DFT and time-dependent DFT methods for VTC-XES predictions and their comparison to experiment.

  18. Integer-ambiguity resolution in astronomy and geodesy

    NASA Astrophysics Data System (ADS)

    Lannes, A.; Prieur, J.-L.

    2014-02-01

    Recent theoretical developments in astronomical aperture synthesis have revealed the existence of integer-ambiguity problems. Those problems, which appear in the self-calibration procedures of radio imaging, have been shown to be similar to the nearest-lattice-point (NLP) problems encountered in high-precision geodetic positioning and in global navigation satellite systems. In this paper we analyse the theoretical aspects of the matter and propose new methods for solving those NLP problems. The related optimization aspects concern both the preconditioning stage, and the discrete-search stage in which the integer ambiguities are finally fixed. Our algorithms, which are described in an explicit manner, can easily be implemented. They lead to substantial gains in the processing time of both stages. Their efficiency was shown via intensive numerical tests.
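The nearest-lattice-point problem at the heart of integer-ambiguity resolution can be illustrated with the simplest classical approximation, Babai's rounding of the real-valued least-squares solution (the paper's own preconditioning and discrete-search algorithms are considerably more refined; the basis and data below are hypothetical):

```python
import numpy as np

def babai_rounding(B, y):
    """Approximate nearest lattice point: solve B z ~ y over the reals,
    then round each component to the nearest integer (Babai's rounding)."""
    z_real, *_ = np.linalg.lstsq(B, y, rcond=None)
    return np.rint(z_real).astype(int)

# Hypothetical 2-D lattice basis and a noisy observation of a lattice point
B = np.array([[1.0, 0.1],
              [0.0, 1.0]])
z_true = np.array([3, -2])
y = B @ z_true + np.array([0.04, -0.03])
z_hat = babai_rounding(B, y)
```

Rounding succeeds here because the basis is nearly orthogonal; for strongly correlated ambiguities it fails, which is exactly why a preconditioning (decorrelation) stage precedes the discrete search.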

  19. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research aim is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which may be biased because real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data, observed at eleven baselines of different lengths. An interpolation technique is also adopted to show how the AR performance changes with baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.
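The benefit of combined three-frequency signals comes largely from their enlarged wavelengths; a sketch of the standard combination arithmetic (the specific optimal combinations the paper selects are not listed in this record, so the classic wide-lane and extra-wide-lane combinations are shown instead):

```python
C = 299_792_458.0                               # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6    # GPS carrier frequencies, Hz

def combined_wavelength(i, j, k):
    # Wavelength of the integer combination i*L1 + j*L2 + k*L5
    return C / (i * F1 + j * F2 + k * F5)

wide_lane = combined_wavelength(1, -1, 0)        # roughly 0.86 m
extra_wide_lane = combined_wavelength(0, 1, -1)  # roughly 5.86 m
```

A 5.86 m extra-wide-lane wavelength makes its integer ambiguity far easier to fix than the 0.19 m L1 ambiguity, at the cost of amplified measurement noise.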

  20. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research aim is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which may be biased because real observation conditions can differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data, observed at eleven baselines of different lengths. An interpolation technique is also adopted to show how the AR performance changes with baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324

  1. Optimal control of thermally coupled Navier Stokes equations

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi; Scroggs, Jeffrey S.; Tran, Hien T.

    1994-01-01

    The optimal boundary temperature control of the stationary thermally coupled incompressible Navier-Stokes equations is considered. Well-posedness and existence of the optimal control and a necessary optimality condition are obtained. Optimization algorithms based on the augmented Lagrangian method with second-order update are discussed. A test example motivated by control of the transport process in a high pressure vapor transport (HPVT) reactor is presented to demonstrate the applicability of our theoretical results and the proposed algorithm.

  2. Piezoelectric Polymers

    NASA Technical Reports Server (NTRS)

    Harrison, J. S.; Ounaies, Z.; Bushnell, Dennis M. (Technical Monitor)

    2001-01-01

    The purpose of this review is to detail the current theoretical understanding of the origin of piezoelectric and ferroelectric phenomena in polymers; to present the state of the art in piezoelectric polymers and emerging material systems that exhibit promising properties; and to discuss key characterization methods, fundamental modeling approaches, and applications of piezoelectric polymers. Piezoelectric polymers have been known to exist for more than forty years, but in recent years they have gained prominence as a valuable class of smart materials.

  3. What is positive youth development and how might it reduce substance use and violence? A systematic review and synthesis of theoretical literature.

    PubMed

    Bonell, Chris; Hinds, Kate; Dickson, Kelly; Thomas, James; Fletcher, Adam; Murphy, Simon; Melendez-Torres, G J; Bonell, Carys; Campbell, Rona

    2016-02-10

    Preventing adolescent substance use and youth violence are public health priorities. Positive youth development (PYD) interventions are widely deployed, often with the aim of preventing both. However, the theorised mechanisms by which PYD is intended to reduce substance use and violence are not clear, and existing evaluated interventions are under-theorised. Using innovative methods, we systematically searched for and synthesised published theoretical literature describing what is meant by positive youth development and how it might reduce substance use and violence, as part of a broader systematic review examining the processes and outcomes of PYD interventions. We searched 19 electronic databases and review topic websites, and contacted experts, between October 2013 and January 2014. We included studies written in English, published since 1985, that reported a theory of change for positive youth development focused on prevention of smoking, alcohol consumption, drug use or violence in out-of-school settings. Studies were independently coded and quality-assessed by two reviewers. We identified 16 studies that met our inclusion criteria. Our synthesis suggests that positive youth development aims to provide youth with affective relationships and diverse experiences which enable their development of intentional self-regulation and multiple positive assets. These in turn buffer against or compensate for involvement in substance use and violence. The existing literature is not clear on how intentional self-regulation is developed and which specific positive assets buffer against substance use or violence. Our synthesis provides an example of a rigorous systematic synthesis of theory literature, innovatively applying methods of qualitative synthesis to theoretical literature, and a clearer understanding of how PYD might reduce substance use and violence to inform future interventions and empirical evaluations.

  4. Quantifying capability of a local seismic network in terms of locations and focal mechanism solutions of weak earthquakes

    NASA Astrophysics Data System (ADS)

    Fojtíková, Lucia; Kristeková, Miriam; Málek, Jiří; Sokos, Efthimios; Csicsay, Kristián; Zahradník, Jiří

    2016-01-01

    Extension of permanent seismic networks is usually governed by a number of technical, economic, logistic, and other factors. A planned upgrade of the network can be justified by theoretical assessment of the network's capability in terms of reliable estimation of key earthquake parameters (e.g., location and focal mechanisms). This can be useful not only for scientific purposes but also as concrete support in acquiring the funding needed for upgrading and operating the network. Moreover, the theoretical assessment can also identify configurations in which no improvement can be achieved with additional stations, establishing a tradeoff between improvement and additional expense. This paper suggests a combination of suitable methods and applies it to the Little Carpathians local seismic network (Slovakia, Central Europe), which monitors an epicentral zone important from the point of view of seismic hazard. Three configurations of the network are considered: the 13 stations existing before 2011, 3 stations added in 2011, and 7 newly planned stations. Theoretical errors of the relative location are estimated by a new method, specifically developed in this paper. The resolvability of focal mechanisms determined by waveform inversion is analyzed by a recent approach based on 6D moment-tensor error ellipsoids. We consider potential seismic events situated anywhere in the studied region, thus enabling "mapping" of the expected errors. Results clearly demonstrate that the network extension remarkably decreases the errors, mainly in the planned 23-station configuration. The three-station extension of the network already made in 2011 allowed for a few real-data examples. Free software made available by the authors enables similar application to any other existing or planned network.
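A generic way to see how station geometry maps into theoretical location errors (not the authors' new method, which is developed in the paper) is to propagate the arrival-time picking uncertainty through the linearized travel-time equations of a homogeneous medium; a sketch with hypothetical station geometry and uncertainties:

```python
import numpy as np

def error_ellipse_axes(stations_km, epicenter_km, sigma_t_s, v_km_s=3.5):
    """1-sigma epicentral error ellipse semi-axes (km) from linearized
    arrival times in a homogeneous medium: cov = (sigma_t*v)^2 (G^T G)^-1."""
    d = np.asarray(stations_km, float) - np.asarray(epicenter_km, float)
    G = d / np.linalg.norm(d, axis=1, keepdims=True)  # unit direction rows
    cov = (sigma_t_s * v_km_s) ** 2 * np.linalg.inv(G.T @ G)
    return np.sqrt(np.linalg.eigvalsh(cov))           # ascending order

stations = [(10, 0), (0, 10), (-10, -1), (7, -8)]     # hypothetical geometry
axes = error_ellipse_axes(stations, (0, 0), sigma_t_s=0.05)
```

Evaluating such a measure on a grid of trial epicenters is what "mapping" the expected errors over a region amounts to; adding a station shrinks (G^T G)^-1 and hence the ellipse.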

  5. Verification of Internal Dose Calculations.

    NASA Astrophysics Data System (ADS)

    Aissi, Abdelmadjid

    The MIRD internal dose calculations have been in use for more than 15 years, but their accuracy has always been questionable. There have been attempts to verify these calculations; however, these attempts had various shortcomings which left the question of verification of the MIRD data unanswered. The purpose of this research was to develop techniques and methods to verify the MIRD calculations in a more systematic and scientific manner. The research consisted of improving a volumetric dosimeter, developing molding techniques, and adapting the Monte Carlo computer code ALGAM to the experimental conditions and vice versa. The organic dosimetric system contained TLD-100 powder and could be shaped to represent human organs. The dosimeter possessed excellent characteristics for the measurement of internal absorbed doses, even in the case of the lungs. The molding techniques are inexpensive and were used in the fabrication of dosimetric and radioactive source organs. The adaptation of the computer program provided useful theoretical data with which the experimental measurements were compared. The experimental data and the theoretical calculations were compared for six source-organ/seven target-organ configurations. The results of the comparison indicated agreement between measured and calculated absorbed doses, when taking into consideration the average uncertainty (16%) of the measurements and the average coefficient of variation (10%) of the Monte Carlo calculations. However, analysis of the data also gave an indication that the Monte Carlo method might overestimate the internal absorbed doses. Even if the overestimate exists, it can at least be said that the use of the MIRD method in internal dosimetry leads to no unnecessary radiation exposure that could be caused by underestimating the absorbed dose.
The experimental and the theoretical data were also used to test the validity of the Reciprocity Theorem for heterogeneous phantoms, such as the MIRD phantom and its physical representation, Mr. ADAM. The results indicated that the Reciprocity Theorem is valid within an average range of uncertainty of 8%.

  6. Theoretical Conversions of Different Hardness and Tensile Strength for Ductile Materials Based on Stress-Strain Curves

    NASA Astrophysics Data System (ADS)

    Chen, Hui; Cai, Li-Xun

    2018-04-01

    Based on the power-law stress-strain relation and the equivalent energy principle, theoretical equations for converting between Brinell hardness (HB), Rockwell hardness (HR), and Vickers hardness (HV) were established. Combining the pre-existing relation between the tensile strength (σ_b) and the Hollomon parameters (K, N), theoretical conversions between hardness (HB/HR/HV) and tensile strength (σ_b) were obtained as well. In addition, to confirm the pre-existing σ_b-(K, N) relation, a large number of uniaxial tensile tests were conducted on various ductile materials. Finally, to verify the theoretical conversions, statistical data listed in ASTM and ISO standards were adopted to test the robustness of the converting equations across various hardness and tensile strength values. The results show that both the hardness conversions and the hardness-strength conversions calculated from the theoretical equations accord well with the standard data.
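For a Hollomon power-law material the classical σ_b-(K, N) link follows from Considère's necking condition (necking at true strain equal to N); whether this matches the paper's exact pre-existing relation is an assumption, and the parameter values below are hypothetical:

```python
import math

def tensile_strength(K_mpa, N):
    """Engineering tensile strength from Hollomon parameters
    (sigma = K * eps^N): Considere's condition puts necking at true
    strain eps = N, giving sigma_b = K * N**N * exp(-N)."""
    return K_mpa * N**N * math.exp(-N)

# Hypothetical mild-steel-like parameters
sigma_b = tensile_strength(K_mpa=700.0, N=0.2)  # MPa
```

Chaining such a relation with a hardness-(K, N) equation is what turns a hardness reading into a tensile strength estimate.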

  7. Compound analysis via graph kernels incorporating chirality.

    PubMed

    Brown, J B; Urata, Takashi; Tamura, Takeyuki; Arai, Midori A; Kawabata, Takeo; Akutsu, Tatsuya

    2010-12-01

    High accuracy is paramount when predicting biochemical characteristics using Quantitative Structure-Property Relationships (QSPRs). Although existing graph-theoretic kernel methods combined with machine learning techniques are efficient for QSPR model construction, they cannot distinguish topologically identical chiral compounds, which often exhibit different biological characteristics. In this paper, we propose a new method that extends the recently developed tree pattern graph kernel to accommodate stereoisomers. We show that Support Vector Regression (SVR) with a chiral graph kernel is useful for target property prediction by demonstrating its application to a set of human vitamin D receptor ligands currently under consideration for their potential anti-cancer effects.
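The general pattern, a positive-definite graph kernel plugged into a kernel machine, can be sketched with a toy edge-histogram kernel and a ridge fit (the paper's tree pattern kernel and its chirality extension are far richer, and all graphs and property values below are hypothetical):

```python
import numpy as np

def edge_kernel(g1, g2):
    # Toy graph kernel: histogram intersection over labeled edges
    # (a chiral extension would add stereo information to the labels)
    return sum(min(g1.count(e), g2.count(e)) for e in set(g1))

# Hypothetical molecular graphs as labeled edge lists, with property values
graphs = [["C-C", "C-O"], ["C-C", "C-C", "C-O"], ["C-N"], ["C-O", "C-N"]]
y = np.array([1.0, 1.5, 0.2, 0.7])

K = np.array([[edge_kernel(a, b) for b in graphs] for a in graphs], float)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(graphs)), y)  # kernel ridge fit

def predict(g):
    return float(sum(a * edge_kernel(g, h) for a, h in zip(alpha, graphs)))
```

Histogram intersection is positive definite, so the regularized system is well posed; SVR, as used in the paper, swaps the ridge loss for an epsilon-insensitive one but consumes the kernel matrix the same way.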

  8. The theoretical and experimental study of a material structure evolution in gigacyclic fatigue regime

    NASA Astrophysics Data System (ADS)

    Plekhov, Oleg; Naimark, Oleg; Narykova, Maria; Kadomtsev, Andrey; Betekhtin, Vladimir

    2015-10-01

    The work is devoted to the study of metal structure evolution in the gigacycle (very high cycle fatigue, VHCF) regime. The mechanical properties of samples (Armco iron) at different stages of their fatigue life were studied using the acoustic resonance method. Damage accumulation (porosity of the samples) was studied by the hydrostatic weighing method. A statistical model was proposed to describe the damage accumulation process. The model describes the influence of the sample surface on the location of fatigue crack initiation.

  9. Polarization holograms allow highly efficient generation of complex light beams.

    PubMed

    Ruiz, U; Pagliusi, P; Provenzano, C; Volke-Sepúlveda, K; Cipparrone, Gabriella

    2013-03-25

    We report a viable method to generate complex beams, such as non-diffracting Bessel and Weber beams, which relies on encoding amplitude information, in addition to phase and polarization, using polarization holography. The holograms are recorded in polarization-sensitive films by the interference of a reference plane wave with a tailored complex beam having orthogonal circular polarizations. The high efficiency, intrinsic achromaticity and simplicity of use of polarization holograms make them competitive with existing methods and attractive for several applications. Theoretical analysis, based on the Jones formalism, and experimental results are shown.
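A flavour of the Jones-formalism analysis behind such holograms: two orthogonal circular components superpose to purely linear light whose azimuth encodes their relative phase. This is a minimal sketch of that standard identity, not the paper's actual derivation:

```python
import numpy as np

LCP = np.array([1.0, 1j]) / np.sqrt(2)   # left-circular Jones vector
RCP = np.array([1.0, -1j]) / np.sqrt(2)  # right-circular Jones vector

def superpose(phase):
    # Equal-weight sum of the two circular components with relative phase
    return (LCP + np.exp(1j * phase) * RCP) / np.sqrt(2)

E = superpose(np.pi / 2)
# Normalized Stokes S3 (degree of circularity) vanishes for linear light
s3 = 2 * np.imag(np.conj(E[0]) * E[1]) / (np.abs(E[0])**2 + np.abs(E[1])**2)
# Azimuth of the resulting linear polarization equals half the relative phase
azimuth = 0.5 * np.arctan2(2 * np.real(np.conj(E[0]) * E[1]),
                           np.abs(E[0])**2 - np.abs(E[1])**2)
```

Spatially modulating that relative phase across a recorded grating is what lets a polarization hologram steer phase and polarization together.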

  10. Structural design using equilibrium programming formulations

    NASA Technical Reports Server (NTRS)

    Scotti, Stephen J.

    1995-01-01

    Solutions to ever-larger structural optimization problems are desired, but computational resources are strained to meet this need, so new methods will be required. The present approaches to solving large-scale problems involve approximations for the constraints of structural optimization problems and/or decomposition of the problem into multiple subproblems that can be solved in parallel. An area of game theory, equilibrium programming (also known as noncooperative game theory), can be used to unify these existing approaches from a theoretical point of view (considering the existence and optimality of solutions) and can serve as a framework for the development of new methods for solving large-scale optimization problems. Equilibrium programming theory is described, and existing design techniques such as fully stressed design and constraint approximations are shown to fit within its framework. Two new structural design formulations are also derived. The first is another approximation technique: a general updating scheme for the sensitivity derivatives of design constraints. The second uses a substructure-based decomposition of the structure for analysis and sensitivity calculations. Significant computational benefits of the new formulations compared with a conventional method are demonstrated.
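Fully stressed design, one of the techniques shown to fit the equilibrium-programming framework, is a simple fixed-point iteration: each member's area is scaled by its stress ratio. A sketch for a statically determinate case, where member forces do not change with sizing (all numbers are hypothetical, in consistent units):

```python
def fully_stressed_design(areas, forces, sigma_allow, iters=20):
    """Fully stressed design iteration: scale each member area by its
    stress ratio |F/A| / sigma_allow until every member carries the
    allowable stress.  For a determinate truss, forces stay fixed."""
    for _ in range(iters):
        areas = [a * abs(f / a) / sigma_allow for f, a in zip(forces, areas)]
    return areas

# Hypothetical 3-bar determinate truss: member forces and allowable stress
areas = fully_stressed_design([1.0, 1.0, 1.0], [150.0, -80.0, 40.0], 100.0)
```

Each member updates its own design variable while treating the rest of the structure as given, which is exactly the noncooperative-game reading that equilibrium programming formalizes; for indeterminate structures the forces would be recomputed each iteration.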

  11. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated by the defocused status, which degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. Stable features are found to exist in the image across different degrees of blurring, as can be shown by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain the structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (a theoretical conclusion that needs to be further proved by experiment). Next, an algorithm based on a weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity of the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results demonstrate the theoretical conclusion that the structure layer is stable across different blurring scales. The WRHOG method also proves to be a robust way of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328
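The final matching step, a normalized correlation coefficient between feature vectors, is straightforward; a sketch of one common (Pearson-style) definition, with hypothetical feature values and WRHOG extraction itself omitted:

```python
import numpy as np

def normalized_correlation(f1, f2):
    # One common form: mean-removed, unit-norm correlation in [-1, 1]
    f1 = np.asarray(f1, float) - np.mean(f1)
    f2 = np.asarray(f2, float) - np.mean(f2)
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2)))

# Hypothetical feature vectors from two impressions of the same palm
score = normalized_correlation([0.2, 0.8, 0.5, 0.1], [0.25, 0.78, 0.52, 0.08])
```

Thresholding such a score against genuine and impostor distributions is what produces the equal error rate figure reported in the record.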

  12. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated by the defocused status, which degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. Stable features are found to exist in the image across different degrees of blurring, as can be shown by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain the structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (a theoretical conclusion that needs to be further proved by experiment). Next, an algorithm based on a weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity of the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results demonstrate the theoretical conclusion that the structure layer is stable across different blurring scales. The WRHOG method also proves to be a robust way of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.

  13. Lunar ionosphere exploration method using auroral kilometric radiation

    NASA Astrophysics Data System (ADS)

    Goto, Yoshitaka; Fujimoto, Takamasa; Kasahara, Yoshiya; Kumamoto, Atsushi; Ono, Takayuki

    2011-01-01

    The evidence for a lunar ionosphere provided by radio occultation experiments performed by the Soviet spacecraft Luna 19 and 22 has been controversial for the past three decades, because the observed large density is difficult to explain theoretically without magnetic shielding from the solar wind. The KAGUYA mission provided an opportunity to investigate the lunar ionosphere with another method. The natural plasma wave receiver (NPW) and waveform capture (WFC) instruments, subsystems of the lunar radar sounder (LRS) on board the lunar orbiter KAGUYA, frequently observe auroral kilometric radiation (AKR) propagating from the Earth. The dynamic spectra of the AKR sometimes exhibit a clear interference pattern caused by phase differences between direct waves and waves reflected from the lunar surface or from a lunar ionosphere, if one exists. It was hypothesized that the electron density profiles above the lunar surface could be evaluated by comparing the observed interference pattern with theoretical patterns constructed from candidate profiles by ray tracing. This method provides a new approach to examining the lunar ionosphere that does not rely on the conventional radio occultation technique.
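The interference pattern has a simple two-ray core: maxima recur whenever the path difference spans one more wavelength, so their spacing in frequency is the speed of light divided by the path difference. A sketch with a hypothetical geometry (the actual KAGUYA analysis traced rays through candidate ionosphere profiles rather than assuming a fixed path difference):

```python
C = 299_792_458.0  # speed of light, m/s

def fringe_spacing_hz(path_difference_m):
    # Frequency interval between successive interference maxima of a
    # direct wave and a reflected wave with the given path difference
    return C / path_difference_m

# Hypothetical geometry: ~100 km orbit with near-vertical reflection, so
# the surface-reflected ray travels roughly 2 * altitude farther
delta_f = fringe_spacing_hz(2 * 100e3)  # fringes of order 1.5 kHz
```

Kilohertz-scale fringes are easily resolved across the AKR band, and a refracting ionosphere shifts the effective path difference, which is what makes the fringe pattern sensitive to the electron density profile.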

  14. Theoretical Mathematics

    NASA Astrophysics Data System (ADS)

    Stöltzner, Michael

    In response to the double-faced influence of string theory on mathematical practice and rigour, the mathematical physicists Arthur Jaffe and Frank Quinn have contemplated the idea that there exists a 'theoretical' mathematics (alongside 'theoretical' physics) whose basic structures and results still require independent corroboration by mathematical proof. In this paper, I take the Jaffe-Quinn debate mainly as a problem of mathematical ontology and analyse it against the backdrop of two philosophical views that are appreciative of informal mathematical development and conjectural results: Lakatos's methodology of proofs and refutations and John von Neumann's opportunistic reading of Hilbert's axiomatic method. The comparison of the two approaches shows that mitigating Lakatos's falsificationism makes his insights about mathematical quasi-ontology more relevant to 20th-century mathematics, in which new structures are introduced by axiomatisation and are not necessarily motivated by informal ancestors. The final section discusses the consequences of string theorists' claim to finality for the theory's mathematical make-up. I argue that ontological reductionism as advocated by particle physicists and the quest for mathematically deeper axioms do not necessarily lead to identical results.

  15. Quantum-mechanical predictions of electron-induced ionization cross sections of DNA components

    NASA Astrophysics Data System (ADS)

    Champion, Christophe

    2013-05-01

    Ionization of biomolecules remains, even today, rarely investigated on both the experimental and the theoretical sides. In this context, the present work appears as one of the first quantum mechanical approaches providing a multi-differential description of the electron-induced ionization process of the main DNA components for impact energies ranging from the target ionization threshold up to about 10 keV. The cross section calculations are performed within the first Born approximation framework, in which the ejected electron is described by a Coulomb wave whereas the incident and the scattered electrons are both described by plane waves. The biological targets of interest, namely, the DNA nucleobases and the sugar-phosphate backbone, are described by means of the GAUSSIAN 09 system using the restricted Hartree-Fock method with geometry optimization. The theoretical predictions obtained show reasonable agreement with the experimental total ionization cross sections, while large discrepancies have been pointed out with existing theoretical models, mainly developed within a semi-classical framework.

  16. The four-principle formulation of common morality is at the core of bioethics mediation method.

    PubMed

    Ahmadi Nasab Emran, Shahram

    2015-08-01

    Bioethics mediation is increasingly used as a method in clinical ethics cases. My goal in this paper is to examine the implicit theoretical assumptions of the bioethics mediation method developed by Dubler and Liebman. According to them, the distinguishing feature of bioethics mediation is that the method is useful in most clinical ethics cases in which conflict is the main issue, which implies that either there is no real ethical issue or, if there is, it is not the key to finding a resolution. I question the tacit assumption of the non-normativity of the mediation method in bioethics by examining the various senses in which bioethics mediation might be non-normative or neutral. The major normative assumption of the mediation method is the existence of a common morality. In addition, the four-principle formulation of the theory articulated by Beauchamp and Childress implicitly provides the normative content for the method. Full acknowledgement of the theoretical and normative assumptions of bioethics mediation helps clinical ethicists better understand the nature of their job. In addition, the need for a robust philosophical background, even in what appears to be a purely practical method of mediation, cannot be overemphasized. Acknowledging the normative nature of the bioethics mediation method necessitates a more critical attitude by bioethics mediators towards the norms they usually take for granted as valid.

  17. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Srinivasan, Ramakrishna; Gustaveson, Mark B.

    1990-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies, but are inadequate at lower frequencies, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers an aircraft fuselage lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off, and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The new concept has potential application to reducing interior noise due to the propellers in advanced turboprop aircraft, as well as in existing aircraft configurations.
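
    The compactness requirement can be illustrated with a toy model: two equal-strength point sources of opposite phase radiate far less than a single source when their spacing is small compared to the wavelength. The Python sketch below is a free-field two-monopole calculation for illustration only, not the ART panel analysis; the spacing, observation range, and sound speed are assumed values.

```python
import numpy as np

def pair_to_single_ratio(freq_hz, spacing_m, r_m=50.0, theta=0.0, c=343.0):
    """|p| of two equal-strength, opposite-phase point sources separated
    by spacing_m, relative to a single source, observed at distance r_m
    and angle theta from the line joining the sources."""
    k = 2 * np.pi * freq_hz / c
    obs = np.array([r_m * np.cos(theta), r_m * np.sin(theta)])
    s1 = np.array([+spacing_m / 2, 0.0])
    s2 = np.array([-spacing_m / 2, 0.0])
    r1 = np.linalg.norm(obs - s1)
    r2 = np.linalg.norm(obs - s2)
    p_pair = np.exp(1j * k * r1) / r1 - np.exp(1j * k * r2) / r2
    return abs(p_pair) / (1.0 / r_m)

# acoustically compact pair (spacing << wavelength): strong cancellation
compact = pair_to_single_ratio(freq_hz=50.0, spacing_m=0.3)
# non-compact pair (spacing = half a wavelength): cancellation is lost
loud = pair_to_single_ratio(freq_hz=343.0 / (2 * 0.3), spacing_m=0.3)
```

    At 50 Hz the 0.3 m pair radiates only a fraction of the single-source pressure, while at half-wavelength spacing the on-axis pressure roughly doubles, which is why the panels must remain compact relative to the wavelength being attenuated.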

  18. One-Dimensional Shock Wave Formation by an Accelerating Piston. Ph.D. Thesis - Ohio State Univ.

    NASA Technical Reports Server (NTRS)

    Mann, M. J.

    1970-01-01

    The formation of a shock wave by a solid accelerating piston was studied. A theoretical solution using the method of characteristics for a perfect gas showed that a complex wave system exists, and that the compressed gas can have large gradients in temperature, density and entropy. Experiments were performed with a piston tube in which piston speed, shock speed and pressure were measured. Agreement between theory and experiment was good.
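
    For the special case of a uniformly accelerating piston, the envelope of crossing C+ characteristics gives a closed-form first shock-formation time. The Python sketch below encodes the textbook simple-wave construction for a perfect gas, not the thesis's full solution; the sound speed and acceleration values are illustrative assumptions.

```python
GAMMA = 1.4  # ratio of specific heats for a perfect diatomic gas

def shock_formation_time(a0, accel):
    """First-crossing time of C+ characteristics launched from a piston
    with constant acceleration `accel` into gas at rest with sound speed
    a0 (simple-wave theory: t_s = 2 a0 / ((GAMMA + 1) accel))."""
    return 2.0 * a0 / ((GAMMA + 1.0) * accel)

def characteristic_position(tau, t, a0, accel):
    """Position at time t of the C+ characteristic leaving the piston
    face at time tau.  Piston path x_p = accel * tau**2 / 2; the
    characteristic travels at u_p + a = a0 + (GAMMA+1)/2 * accel * tau."""
    x_p = 0.5 * accel * tau**2
    speed = a0 + 0.5 * (GAMMA + 1.0) * accel * tau
    return x_p + speed * (t - tau)

a0, accel = 340.0, 1000.0
t_s = shock_formation_time(a0, accel)
```

    Characteristics launched just after tau = 0 lag the tau = 0 characteristic before t_s and overtake it after; that crossing is exactly the steepening that produces the shock.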

  19. The collective and quantum nature of proton transfer in the cyclic water tetramer on NaCl(001)

    NASA Astrophysics Data System (ADS)

    Feng, Yexin; Wang, Zhichang; Guo, Jing; Chen, Ji; Wang, En-Ge; Jiang, Ying; Li, Xin-Zheng

    2018-03-01

    Proton tunneling is an elementary process in the dynamics of hydrogen-bonded systems. Collective tunneling has long been known to exist, but atomistic investigations of this mechanism in realistic systems are scarce. Using a combination of ab initio theoretical and high-resolution experimental methods, we investigate the role played by the protons in the chirality switching of a water tetramer on NaCl(001). Our scanning tunneling spectroscopies show that partial deuteration of the H2O tetramer with only one D2O leads to a significant suppression of the chirality switching rate at a cryogenic temperature (T), indicating that the chirality switches by tunneling in a concerted manner. Theoretical simulations, meanwhile, support this picture by presenting a much smaller free-energy barrier for the translational collective proton tunneling mode than for other chirality switching modes at low T. During this analysis, the virial energy provides a reasonable estimator for the description of the nuclear quantum effects when a traditional thermodynamic integration method cannot be used, which could be employed in future studies of similar problems. Given the high-dimensional nature of realistic systems and the topology of the hydrogen-bonded network, collective proton tunneling may be more ubiquitous than expected. Systems of this kind can serve as ideal platforms for studies of this mechanism, easily accessible to high-resolution experimental measurements.

  20. How to integrate biological research into society and exclude errors in biomedical publications? Progress in theoretical and systems biology releases pressure on experimental research.

    PubMed

    Volkov, Vadim

    2014-01-01

    This brief opinion proposes measures to increase efficiency and exclude errors in biomedical research under the existing dynamic situation. Rapid changes in biology began with the description of the three-dimensional structure of DNA 60 years ago; today biology has progressed by interacting with computer science and nanoscience, together with the introduction of robotic stations for the acquisition of large-scale arrays of data. These changes have had an increasing influence on the entire research and scientific community. Future advances demand short-term measures to ensure error-proof and efficient development. These can include the fast publishing of negative results, publishing detailed methodological papers, and decoupling career progression from publication activity, especially for younger researchers. Further development of theoretical and systems biology, together with the use of multiple experimental methods for biological experiments, could also be helpful over the coming years and decades. With regard to the links between science and society, it is reasonable to compare both these systems, to find and describe specific features of biology, and to integrate it into the existing stream of social life and financial fluxes. This will increase the level of scientific research and have mutually positive effects for both biology and society. Several examples are given for further discussion.

  1. MALDI-MS analysis and theoretical evaluation of olanzapine as a UV laser desorption ionization (LDI) matrix.

    PubMed

    Musharraf, Syed Ghulam; Ameer, Mariam; Ali, Arslan

    2017-01-05

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS), being a soft ionization technique, has become a method of choice for high-throughput analysis of proteins and peptides. In this study, we have explored the potential of the atypical anti-psychotic drug olanzapine (OLZ) as a matrix for MALDI-MS analysis of peptides, aided by theoretical studies. Seven small peptides were employed as target analytes to check the performance of olanzapine, compared with the conventional MALDI matrix α-cyano-4-hydroxycinnamic acid (HCCA). All peptides were successfully detected when olanzapine was used as a matrix. Moreover, the peptides angiotensin I and angiotensin II were detected with better S/N ratio and resolution with this method than in their analysis by HCCA. Computational studies were performed to determine the thermochemical properties of olanzapine in order to further evaluate its similarity to MALDI matrices; these were found to be in good agreement with the data of existing MALDI matrices. Copyright © 2016. Published by Elsevier B.V.

  2. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several or even hundreds of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
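
    The spatial-for-temporal trade behind per-pixel coded exposure can be sketched with a toy coding scheme: if each pixel's shutter opens during exactly one of T sub-frames, a single low-rate capture samples T distinct time instants across a pixel neighbourhood. The Python sketch below uses a simple staggered single-bump code assumed for illustration; it is not the paper's DMD optical model or its three-element median quicksort reconstruction.

```python
import numpy as np

def coded_exposure_capture(scene, codes):
    """Integrate a high-speed scene (T, H, W) through per-pixel binary
    shutter codes (T, H, W), producing one low-rate frame (H, W)."""
    return (scene * codes).sum(axis=0)

def single_bump_codes(T, H, W):
    """Each pixel opens for exactly one of T sub-frames, staggered in a
    diagonal T-pixel spatial cycle (one simple coding choice)."""
    codes = np.zeros((T, H, W))
    idx = (np.arange(H)[:, None] + np.arange(W)[None, :]) % T
    for t in range(T):
        codes[t][idx == t] = 1.0
    return codes

T, H, W = 4, 8, 8
rng = np.random.default_rng(0)
scene = rng.random((T, H, W))       # synthetic high-speed scene
codes = single_bump_codes(T, H, W)
frame = coded_exposure_capture(scene, codes)
# each captured pixel equals the scene value at its coded sub-frame
```

    Because each pixel records a known sub-frame, a demultiplexing step can rearrange one captured frame into T lower-resolution frames, which is the sense in which temporal resolution rises without extra sensor bandwidth.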

  3. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera.

    PubMed

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-03-04

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several or even hundreds of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera.

  4. Measuring strategic control in implicit learning: how and why?

    PubMed

    Norman, Elisabeth

    2015-01-01

    Several methods have been developed for measuring the extent to which implicitly learned knowledge can be applied in a strategic, flexible manner. Examples include generation exclusion tasks in Serial Reaction Time (SRT) learning (Goschke, 1998; Destrebecqz and Cleeremans, 2001) and 2-grammar classification tasks in Artificial Grammar Learning (AGL; Dienes et al., 1995; Norman et al., 2011). Strategic control has traditionally been used as a criterion for determining whether acquired knowledge is conscious or unconscious, or which properties of knowledge are consciously available. In this paper I first summarize existing methods that have been developed for measuring strategic control in the SRT and AGL tasks. I then address some methodological and theoretical questions. Methodological questions concern choice of task, whether the measurement reflects inhibitory control or task switching, and whether or not strategic control should be measured on a trial-by-trial basis. Theoretical questions concern the rationale for including measurement of strategic control, what form of knowledge is strategically controlled, and how strategic control can be combined with subjective awareness measures.

  5. Measuring strategic control in implicit learning: how and why?

    PubMed Central

    Norman, Elisabeth

    2015-01-01

    Several methods have been developed for measuring the extent to which implicitly learned knowledge can be applied in a strategic, flexible manner. Examples include generation exclusion tasks in Serial Reaction Time (SRT) learning (Goschke, 1998; Destrebecqz and Cleeremans, 2001) and 2-grammar classification tasks in Artificial Grammar Learning (AGL; Dienes et al., 1995; Norman et al., 2011). Strategic control has traditionally been used as a criterion for determining whether acquired knowledge is conscious or unconscious, or which properties of knowledge are consciously available. In this paper I first summarize existing methods that have been developed for measuring strategic control in the SRT and AGL tasks. I then address some methodological and theoretical questions. Methodological questions concern choice of task, whether the measurement reflects inhibitory control or task switching, and whether or not strategic control should be measured on a trial-by-trial basis. Theoretical questions concern the rationale for including measurement of strategic control, what form of knowledge is strategically controlled, and how strategic control can be combined with subjective awareness measures. PMID:26441809

  6. Tunable valley polarization by a gate voltage when an electron tunnels through multiple line defects in graphene.

    PubMed

    Liu, Zhe; Jiang, Liwei; Zheng, Yisong

    2015-02-04

    By means of an appropriate wave function connection condition, we study the electronic structure of a line defect superlattice of graphene with the Dirac equation method. We obtain the analytical dispersion relation, which reproduces well the tight-binding numerical result for the band structure of the superlattice. We then generalize this theoretical method to study electronic transmission through a potential barrier where multiple line defects are periodically patterned. We find that there exists a critical incident angle which restricts electronic transmission through multiple line defects to a specific incident angle range. The critical angle depends sensitively on the potential barrier height, which can be modulated by a gate voltage. As a result, non-trivial transmissions of K and K' valley electrons are restricted, respectively, to two distinct ranges of the incident angle. Our theoretical result demonstrates that a gate voltage can act as a feasible means to tune the valley polarization when electrons tunnel through multiple line defects.

  7. Setting a disordered password on a photonic memory

    NASA Astrophysics Data System (ADS)

    Su, Shih-Wei; Gou, Shih-Chuan; Chew, Lock Yue; Chang, Yu-Yen; Yu, Ite A.; Kalachev, Alexey; Liao, Wen-Te

    2017-06-01

    An all-optical method of setting a disordered password on different schemes of photonic memory is theoretically studied. While photons are regarded as ideal information carriers, it is imperative to implement such data protection on all-optical storage. However, we wish to address the intrinsic risk of data breaches in existing schemes of photonic memory. We theoretically demonstrate a protocol using spatially disordered laser fields to encrypt data stored on an optical memory, namely, encrypted photonic memory. To address broadband storage, we also investigate a scheme of disordered echo memory with a fidelity approaching unity. The proposed method increases the difficulty for an eavesdropper of retrieving the stored photon without the preset password, even when the randomized and stored photon state is nearly perfectly cloned. Our results pave the way to significantly reducing the exposure of memories, required for long-distance communication, to eavesdropping, and therefore restrict the optimal attack on communication protocols. The present scheme also increases the sensitivity of detecting any eavesdropper and so raises the security level of photonic information technology.

  8. Accurate sparse-projection image reconstruction via nonlocal TV regularization.

    PubMed

    Zhang, Yi; Zhang, Weihua; Zhou, Jiliu

    2014-01-01

    Sparse-projection image reconstruction is a useful approach to lowering the radiation dose; however, the incompleteness of the projection data degrades imaging quality. As a typical compressive sensing method, total variation has attracted great attention for this problem. Owing to its theoretical imperfection, however, total variation produces blocky artifacts in smooth regions and blurs edges. To overcome this problem, in this paper we introduce nonlocal total variation into sparse-projection image reconstruction and formulate the minimization problem with a new nonlocal total variation norm. Qualitative and quantitative analyses of numerical as well as clinical results demonstrate the validity of the proposed method. Compared to other existing methods, our method more efficiently suppresses artifacts caused by low-rank reconstruction and better preserves structure information.
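
    The difference between the local and nonlocal regularizers can be made concrete: local TV penalizes differences between adjacent pixels only, while nonlocal TV penalizes weighted differences between arbitrary pixel pairs. The Python sketch below shows one common discrete form of each norm; the dense weight array is an assumption for illustration, and a practical reconstruction would build the weights from patch similarity rather than take them as given.

```python
import numpy as np

def total_variation(img):
    """Anisotropic total-variation norm: sum of absolute forward
    differences along rows and columns."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def nonlocal_tv(img, weights):
    """Nonlocal TV: sum over pixels i of the weighted l2 norm of the
    differences to all other pixels j.  weights has shape (H, W, H, W);
    weights[i, j, k, l] couples pixel (i, j) with pixel (k, l)."""
    H, W = img.shape
    flat = img.reshape(-1)
    w = weights.reshape(H * W, H * W)
    diff = flat[:, None] - flat[None, :]
    return np.sqrt((w * diff**2).sum(axis=1)).sum()
```

    On a piecewise-constant image, local TV counts only the jumps across the edge, while nonlocal TV with similarity-based weights can couple distant pixels of the same region, which is the mechanism that reduces blocky artifacts.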

  9. On the Discovery of Evolving Truth

    PubMed Central

    Li, Yaliang; Li, Qi; Gao, Jing; Su, Lu; Zhao, Bo; Fan, Wei; Han, Jiawei

    2015-01-01

    In the era of big data, information regarding the same objects can be collected from increasingly more sources. Unfortunately, there usually exist conflicts among the information coming from different sources. To tackle this challenge, truth discovery, i.e., to integrate multi-source noisy information by estimating the reliability of each source, has emerged as a hot topic. In many real world applications, however, the information may come sequentially, and as a consequence, the truth of objects as well as the reliability of sources may be dynamically evolving. Existing truth discovery methods, unfortunately, cannot handle such scenarios. To address this problem, we investigate the temporal relations among both object truths and source reliability, and propose an incremental truth discovery framework that can dynamically update object truths and source weights upon the arrival of new data. Theoretical analysis is provided to show that the proposed method is guaranteed to converge at a fast rate. The experiments on three real world applications and a set of synthetic data demonstrate the advantages of the proposed method over state-of-the-art truth discovery methods. PMID:26705502
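
    The incremental idea can be sketched as a loop that alternates weighted voting with a source-weight update as each batch arrives. The Python below is a deliberately simplified toy using exponential smoothing of agreement, which is an assumption for illustration rather than the paper's optimization-based update; it shows how object truths and source weights can co-evolve over a stream.

```python
def truth_discovery_step(claims, weights):
    """One round of weighted voting plus source-weight update.

    claims: dict source -> claimed (categorical) value
    weights: dict source -> current reliability weight
    Returns (estimated_truth, new_weights).
    """
    # weighted vote for the current truth estimate
    scores = {}
    for s, v in claims.items():
        scores[v] = scores.get(v, 0.0) + weights[s]
    truth = max(scores, key=scores.get)
    # sources agreeing with the estimate gain weight (exponential
    # smoothing -- one simple choice, not the paper's exact update)
    new_weights = {}
    for s, v in claims.items():
        agree = 1.0 if v == truth else 0.0
        new_weights[s] = 0.9 * weights[s] + 0.1 * agree
    return truth, new_weights

weights = {"A": 0.5, "B": 0.5, "C": 0.5}
stream = [{"A": "x", "B": "x", "C": "y"},
          {"A": "x", "B": "x", "C": "y"},
          {"A": "z", "B": "z", "C": "w"}]
for claims in stream:
    truth, weights = truth_discovery_step(claims, weights)
```

    After the stream, the persistently outvoted source C carries less weight than A and B, so later ties break towards the historically reliable sources; this is the sense in which reliability is estimated jointly with the truth.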

  10. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data under the assumption of missing at random by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions reveal large differences. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inferences. One finding is that the conclusion by the Wald test for testing independency under the two existing sampling distributions could be completely different (even contradictory) from the Wald test for testing the equality of the success probabilities in control/case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
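
    As a concrete reference point, the Wald test for equality of success probabilities in the case and control groups mentioned above has a standard closed form. The Python sketch below implements the textbook unpooled-variance version with a normal approximation; it illustrates the test being compared, not the paper's proposed sampling distribution.

```python
import math

def wald_test_equal_proportions(x1, n1, x2, n2):
    """Wald statistic and two-sided p-value for H0: p1 == p2, using the
    unpooled variance estimate and the normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2.0))  # 2 * (1 - Phi(|z|))
    return z, p_value
```

    With 60/100 successes in one group against 40/100 in the other, the statistic is near 2.9 and the two-sided p-value falls below 0.01; the paper's point is that the standard errors feeding such a test depend on which sampling distribution is assumed.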

  11. Ideal flux field dielectric concentrators.

    PubMed

    García-Botella, Angel

    2011-10-01

    The concept of the vector flux field was first introduced as a photometrical theory and later developed in the field of nonimaging optics; it has provided new perspectives in the design of concentrators, overcoming standard ray tracing techniques. The flux field method has shown that reflective concentrators with the geometry of the field lines achieve the theoretical limit of concentration. In this paper we study the role of surfaces orthogonal to the field vector J. For rotationally symmetric systems J is orthogonal to its curl, and then a family of surfaces orthogonal to the lines of J exists, which can be called the family of surfaces of constant pseudopotential. Using the concept of the flux tube, it is possible to demonstrate that refractive concentrators with the shape of these pseudopotential surfaces achieve the theoretical limit of concentration.

  12. Estimation of whole lemon mass transfer parameters during hot air drying using different modelling methods

    NASA Astrophysics Data System (ADS)

    Torki-Harchegani, Mehdi; Ghanbarian, Davoud; Sadeghi, Morteza

    2015-08-01

    To design new dryers or improve existing drying equipment, accurate values of mass transfer parameters are of great importance. In this study, an experimental and theoretical investigation of drying whole lemons was carried out. The whole lemons were dried in a convective hot air dryer at different air temperatures (50, 60 and 75 °C) and a constant air velocity (1 m s-1). In the theoretical analysis, three moisture transfer models, including the Dincer and Dost model, the Bi-G correlation approach, and the conventional solution of Fick's second law of diffusion, were used to determine moisture transfer parameters and predict dimensionless moisture content curves. The predicted results were then compared with the experimental data, and the highest degree of prediction accuracy was achieved by the Dincer and Dost model.
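
    The third of the models mentioned, the conventional solution of Fick's second law, reduces for a spherical body to a single series for the dimensionless moisture ratio. The Python sketch below evaluates that textbook series; the diffusivity and radius values are illustrative assumptions, not the fitted lemon parameters.

```python
import math

def moisture_ratio_sphere(t, D, radius, n_terms=50):
    """Dimensionless moisture ratio from the series solution of Fick's
    second law for a sphere with uniform initial moisture and a constant
    surface condition: MR = (6/pi^2) sum 1/n^2 exp(-n^2 pi^2 D t / r^2)."""
    s = 0.0
    for n in range(1, n_terms + 1):
        s += (1.0 / n**2) * math.exp(-(n**2) * math.pi**2 * D * t / radius**2)
    return (6.0 / math.pi**2) * s

# illustrative values only (not fitted to the paper's lemons)
D = 5e-10      # effective moisture diffusivity, m^2/s
radius = 0.03  # lemon radius, m
mr_day1 = moisture_ratio_sphere(3600 * 24, D, radius)
```

    Fitting the slope of ln(MR) against time to such a series is the usual route to an effective diffusivity, which is the kind of mass transfer parameter the three models estimate.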

  13. A Hybrid Approach to Protect Palmprint Templates

    PubMed Central

    Sun, Dongmei; Xiong, Ke; Qiu, Zhengding

    2014-01-01

    Biometric template protection is indispensable for protecting personal privacy in large-scale deployment of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on all three counts. A heterogeneous space is designed for properly combining random projection and fuzzy vault in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses well and demonstrate the effectiveness of the proposed hybrid approach. PMID:24982977

  14. A hybrid approach to protect palmprint templates.

    PubMed

    Liu, Hailun; Sun, Dongmei; Xiong, Ke; Qiu, Zhengding

    2014-01-01

    Biometric template protection is indispensable for protecting personal privacy in large-scale deployment of biometric systems. Accuracy, changeability, and security are three critical requirements for template protection algorithms. However, existing template protection algorithms cannot satisfy all these requirements well. In this paper, we propose a hybrid approach that combines random projection and fuzzy vault to improve performance on all three counts. A heterogeneous space is designed for properly combining random projection and fuzzy vault in the hybrid scheme. A new chaff point generation method is also proposed to enhance the security of the heterogeneous vault. Theoretical analyses of the proposed hybrid approach in terms of accuracy, changeability, and security are given in this paper. Experimental results on a palmprint database support the theoretical analyses well and demonstrate the effectiveness of the proposed hybrid approach.

  15. Molecular structure and vibrational spectra of three substituted 4-thioflavones by density functional theory and ab initio Hartree-Fock calculations

    NASA Astrophysics Data System (ADS)

    Li, Xiao-Hong; Liu, Xiang-Ru; Zhang, Xian-Zhou

    2011-01-01

    The vibrational frequencies of three substituted 4-thioflavones in the ground state have been calculated using the Hartree-Fock and density functional (B3LYP) methods with 6-31G* and 6-31+G** basis sets. The structural analysis shows that H-bonding exists in the selected compounds and that the hydrogen bond lengths increase with increasing conjugation of the substituent group on the benzene ring. A complete vibrational assignment, aided by the theoretical harmonic wavenumber analysis, was proposed. The theoretical spectrograms for FT-IR spectra of the title compounds have been constructed. In addition, it is noted that the selected compounds show significant activity against Shigella flexneri. Several electronic properties and thermodynamic parameters were also calculated.

  16. Theoretical aspects for estimating anisotropic saturated hydraulic conductivity from in-well or direct-push probe injection tests in uniform media

    NASA Astrophysics Data System (ADS)

    Klammler, Harald; Layton, Leif; Nemer, Bassel; Hatfield, Kirk; Mohseni, Ana

    2017-06-01

    Hydraulic conductivity and its anisotropy are fundamental aquifer properties for groundwater flow and transport modeling. Current in-well or direct-push field measurement techniques allow for relatively quick determination of general conductivity profiles with depth. However, capabilities for identifying local scale conductivities in the horizontal and vertical directions are very limited. Here, we develop the theoretical basis for estimating horizontal and vertical conductivities from different types of steady-state single-well/probe injection tests under saturated conditions and in the absence of a well skin. We explore existing solutions and a recent semi-analytical solution approach to the flow problem under the assumption that the aquifer is locally homogeneous. The methods are based on the collection of an additional piece of information in the form of a second injection (or recirculation) test at a same location, or in the form of an additional head or flow observation along the well/probe. Results are represented in dimensionless charts for partial validation against approximate solutions and for practical application to test interpretation. The charts further allow for optimization of a test configuration to maximize sensitivity to anisotropy ratio. The two methods most sensitive to anisotropy are found to be (1) subsequent injection from a lateral screen and from the bottom of an otherwise cased borehole, and (2) single injection from a lateral screen with an additional head observation along the casing. Results may also be relevant for attributing consistent divergences in conductivity measurements from different testing methods applied at a same site or location to the potential effects of anisotropy. Some practical aspects are discussed and references are made to existing methods, which appear easily compatible with the proposed procedures.

  17. Exploring the use of storytelling in quantitative research fields using a multiple case study method

    NASA Astrophysics Data System (ADS)

    Matthews, Lori N. Hamlet

    The purpose of this study was to explore the emerging use of storytelling in quantitative research fields. The focus was not on examining storytelling in research, but rather on how stories are used in various ways within the social context of quantitative research environments. In-depth interviews were conducted with seven professionals who had experience using storytelling in their work, and my personal experience with the subject matter was also used as a source of data, following the notion of researcher-as-instrument. This study is qualitative in nature and is guided by two supporting theoretical frameworks, the sociological perspective and narrative inquiry. A multiple case study methodology was used to gain insight into why participants decided to use stories or storytelling in a quantitative research environment that may not be traditionally open to such methods. This study also attempted to identify how storytelling can strengthen or supplement existing research, as well as what value stories can provide to the practice of research in general. Five thematic findings emerged from the data and were grouped under two headings, "Experiencing Research" and "Story Work." The themes were found to be consistent with four main theoretical functions of storytelling identified in the existing scholarly literature: (a) sense-making; (b) meaning-making; (c) culture; and (d) communal function. The five themes that emerged from this study, consistent with the existing literature, are: (a) social context; (b) quantitative versus qualitative; (c) we think and learn in terms of stories; (d) stories tie experiences together; and (e) making sense and meaning. Recommendations are offered in the form of implications for various social contexts, and topics for further research are presented as well.

  18. Frequency-area distribution of earthquake-induced landslides

    NASA Astrophysics Data System (ADS)

    Tanyas, H.; Allstadt, K.; Westen, C. J. V.

    2016-12-01

    Discovering the physical explanations behind the power-law distribution of landslides can provide valuable information for quantifying triggered landslide events and, consequently, for understanding the relation between landslide causes and impacts in terms of the environmental settings of the affected area. In previous studies, the probability of landslide size was used for this quantification, and the resulting parameter was called the landslide magnitude (mL). The frequency-area distributions (FADs) of several landslide inventories were modelled, and theoretical curves were established to identify the mL of any landslide inventory. In the observed inventories, a divergence from the power-law distribution was recognized for small landslides, referred to as the rollover, and this feature was taken into account in the established model. However, these analyses are based on a relatively limited number of inventories, each with a different triggering mechanism. The existing definition of mL involves some subjectivity, since it is based on a visual comparison between the theoretical curves and the FAD of the medium and large landslides. It also introduces uncertainty due to the ambiguity in both the physical explanation of the rollover and its functional form. Here we focus on earthquake-induced landslides (EQIL) and aim to provide a rigorous method to estimate the mL and total landslide area of EQIL. We have gathered 36 EQIL inventories from around the globe. Using these inventories, we have evaluated existing explanations of the rollover and proposed an alternative explanation given the new data. Next, we propose a method to define the EQIL FAD curves and mL and to estimate the total landslide area. We use the total landslide areas obtained from the inventories for comparison with our estimates and for validation of our methodology. The results show that our method calculates landslide magnitudes more accurately than previous methods.
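The scaling exponent of a power-law FAD tail is usually estimated directly from the inventory. As an illustration only (a standard Clauset-style maximum-likelihood estimator, not the authors' definition of mL; the function name and the choice of the cutoff `a_min` above the rollover are assumptions), the tail exponent can be estimated like this:

```python
import math
import random

def powerlaw_exponent_mle(areas, a_min):
    """Continuous power-law MLE for the FAD tail p(A) ~ A**(-beta), A >= a_min.

    beta_hat = 1 + n / sum(ln(A_i / a_min)) over the n areas above the cutoff.
    The cutoff a_min stands in for the rollover area, below which the
    distribution diverges from the power law.
    """
    tail = [a for a in areas if a >= a_min]
    return 1.0 + len(tail) / sum(math.log(a / a_min) for a in tail)
```

Fitting only above the rollover sidesteps the ambiguity in the rollover's functional form, at the cost of having to pick the cutoff.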

  19. Automated Transition State Theory Calculations for High-Throughput Kinetics.

    PubMed

    Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H

    2017-09-21

    A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparison against high-level theoretical calculations shows that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve further by accounting for internal rotor contributions and by better methods for determining molecular symmetry.
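Canonical transition state theory gives rate constants of the Eyring form k(T) = κ·(k_B·T/h)·(Q‡/Q_react)·exp(−E0/RT). A minimal sketch of that final evaluation step (not the AutoTST code; the partition-function ratio and barrier height are assumed inputs that would come from the electronic-structure calculations):

```python
import math

KB = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34  # Planck constant, J*s
R = 8.314462618     # gas constant, J/(mol*K)

def tst_rate(temp_k, q_ratio, barrier_kj_mol, kappa=1.0):
    """Canonical TST: k(T) = kappa * (kB*T/h) * (Q_TS/Q_react) * exp(-E0/RT).

    q_ratio is the transition-state-to-reactant partition function ratio,
    barrier_kj_mol the zero-point-corrected barrier, and kappa an optional
    tunneling correction factor.
    """
    return kappa * (KB * temp_k / H) * q_ratio * math.exp(
        -barrier_kj_mol * 1e3 / (R * temp_k))
```

With a unit partition-function ratio and zero barrier, this reduces to the universal frequency factor k_B·T/h (about 6.25 × 10¹² s⁻¹ at 300 K).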

  20. Experimental and Theoretical Study of Propeller Spinner/Shank Interference. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Cornell, C. C.

    1986-01-01

    A fundamental experimental and theoretical investigation into the aerodynamic interference associated with propeller spinner and shank regions was conducted. The research program involved a theoretical assessment of previously proposed solutions, followed by a systematic experimental study to supplement the existing data base. As a result, a refined computational procedure was established for predicting interference effects in terms of interference drag, resolved into propeller thrust and torque components. These quantities were examined with attention to engineering parameters such as two spinner fineness ratios, three blade shank forms, and two/three/four/six/eight blades. Consideration of the physics of the phenomena aided in the logical deduction of two individual interference quantities (cascade effects and spinner/shank juncture interference). These interference effects were semi-empirically modeled using existing theories and placed into a form compatible with an existing propeller performance scheme, which provided the basis for examples of application.

  1. Equation of state of U2Mo up-to Mbar pressure range: Ab-initio study

    NASA Astrophysics Data System (ADS)

    Mukherjee, D.; Sahoo, B. D.; Joshi, K. D.; Kaushik, T. C.

    2018-04-01

    Experimentally, U2Mo is known to exist in a tetragonal structure at ambient conditions. In contrast to the experimental reports, past theoretical studies of this material do not find this phase to be the stable structure at zero pressure. In order to examine this discrepancy between experiment and theory, we have performed ab-initio electronic band structure calculations on this material. In our theoretical study, we searched for the lowest-enthalpy structure at ambient as well as at high pressure up to 200 GPa, employing an evolutionary structure search algorithm in conjunction with the ab-initio method. Our investigations suggest that a hexagonal structure with space group symmetry P6/mmm is the lowest-enthalpy structure not only at ambient pressure but also up to ˜200 GPa. To further substantiate the results of these static lattice calculations, the elastic and lattice dynamical stability has also been analysed. The theoretical isotherm derived from these calculations has been used to determine the Hugoniot of this material. Various physical properties such as the zero-pressure equilibrium volume, the bulk modulus and its pressure derivative have also been derived from the theoretical isotherm.
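A calculated isotherm of this kind is commonly summarized by a third-order Birch-Murnaghan equation of state, whose parameters are exactly the quantities quoted above (V0, B0, B0'). A sketch of the standard P(V) formula (textbook relation, not the authors' code; the parameter values in the test are arbitrary):

```python
def birch_murnaghan_pressure(v, v0, b0, b0_prime):
    """Third-order Birch-Murnaghan equation of state, P(V).

    v0 is the zero-pressure equilibrium volume, b0 the bulk modulus (GPa),
    b0_prime its pressure derivative; v and v0 share any common volume unit.
    """
    x = (v0 / v) ** (1.0 / 3.0)
    return 1.5 * b0 * (x ** 7 - x ** 5) * (
        1.0 + 0.75 * (b0_prime - 4.0) * (x ** 2 - 1.0))
```

At V = V0 the pressure vanishes by construction, and compression (V < V0) raises it monotonically over the fitted range.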

  2. Theoretical survey on positronium formation and ionisation in positron atom scattering

    NASA Technical Reports Server (NTRS)

    Basu, Madhumita; Ghosh, A. S.

    1990-01-01

    Recent theoretical studies on the formation of exotic atoms in positron-hydrogen, positron-helium and positron-lithium scattering are surveyed, especially in the intermediate energy region. The ionization of these targets by positron impact is also considered. Theoretical predictions for both processes are compared with existing measured values.

  3. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  4. Efficient estimation of the maximum metabolic productivity of batch systems.

    PubMed

    St John, Peter C; Crowley, Michael F; Bomble, Yannick J

    2017-01-01

    Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable. This work presents an efficient method for the calculation of a maximum theoretical productivity of a batch culture system using a dynamic optimization framework. The proposed method follows traditional assumptions of dynamic flux balance analysis: first, that internal metabolite fluxes are governed by a pseudo-steady state, and secondly that external metabolite fluxes are dynamically bounded. The optimization is achieved via collocation on finite elements, and accounts explicitly for an arbitrary number of flux changes. The method can be further extended to calculate the complete Pareto surface of productivity as a function of yield. We apply this method to succinate production in two engineered microbial hosts, Escherichia coli and Actinobacillus succinogenes, and demonstrate that maximum productivities can be more than doubled under dynamic control regimes.
The maximum theoretical yield is a measure that is well established in the metabolic engineering literature and whose use helps guide strain and pathway selection. We present a robust, efficient method to calculate the maximum theoretical productivity: a metric that will similarly help guide and evaluate the development of dynamic microbial bioconversions. Our results demonstrate that nearly optimal yields and productivities can be achieved with only two discrete flux stages, indicating that near-theoretical productivities might be achievable in practice.
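The benefit of even a single flux switch can be illustrated with a toy two-stage batch model (a deliberate simplification, not the paper's dynamic FBA collocation scheme; the growth rate `mu`, specific production rate `qp`, inoculum `x0`, and batch time are all invented): biomass grows exponentially until a switch time, after which all resources go to production.

```python
import math

def batch_productivity(t_switch, t_final, mu=0.5, qp=2.0, x0=0.05):
    """Toy two-stage batch: pure growth until t_switch (specific rate mu),
    then constant specific production qp by the frozen biomass."""
    biomass = x0 * math.exp(mu * t_switch)       # g/L at the switch
    titer = qp * biomass * (t_final - t_switch)  # g/L of product at harvest
    return titer / t_final                       # volumetric productivity

# Grid search for the best switch time in a 10 h batch; the analytic
# optimum for this model is t_final - 1/mu.
best_rate, best_switch = max(
    (batch_productivity(i * 0.1, 10.0), i * 0.1) for i in range(100))
```

Even this crude model shows an interior optimum: switching too early leaves too little biomass, switching too late leaves too little production time.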

  5. In Pursuit of Theoretical Ground in Behavior Change Support Systems: Analysis of Peer-to-Peer Communication in a Health-Related Online Community

    PubMed Central

    Cobb, Nathan; Cohen, Trevor

    2016-01-01

    Background Research studies involving health-related online communities have focused on examining network structure to understand mechanisms underlying behavior change. Content analysis of the messages exchanged in these communities has been limited to the “social support” perspective. However, existing behavior change theories suggest that message content plays a prominent role reflecting several sociocognitive factors that affect an individual’s efforts to make a lifestyle change. An understanding of these factors is imperative to identify and harness the mechanisms of behavior change in the Health 2.0 era. Objective The objective of this work is two-fold: (1) to harness digital communication data to capture essential meaning of communication and factors affecting a desired behavior change, and (2) to understand the applicability of existing behavior change theories to characterize peer-to-peer communication in online platforms. Methods In this paper, we describe grounded theory–based qualitative analysis of digital communication in QuitNet, an online community promoting smoking cessation. A database of 16,492 de-identified public messages from 1456 users from March 1-April 30, 2007, was used in our study. We analyzed 795 messages using grounded theory techniques to ensure thematic saturation. This analysis enabled identification of key concepts contained in the messages exchanged by QuitNet members, allowing us to understand the sociobehavioral intricacies underlying an individual’s efforts to cease smoking in a group setting. We further ascertained the relevance of the identified themes to theoretical constructs in existing behavior change theories (eg, Health Belief Model) and theoretically linked techniques of behavior change taxonomy. Results We identified 43 different concepts, which were then grouped under 12 themes based on analysis of 795 messages. 
Examples of concepts include “sleepiness,” “pledge,” “patch,” “spouse,” and “slip.” Examples of themes include “traditions,” “social support,” “obstacles,” “relapse,” and “cravings.” Results indicate that themes consisting of member-generated strategies such as “virtual bonfires” and “pledges” were related to the highest number of theoretical constructs from the existing behavior change theories. In addition, results indicate that the member-generated communication content supports sociocognitive constructs from more than one behavior change model, unlike the majority of the existing theory-driven interventions. Conclusions With the onset of mobile phones and ubiquitous Internet connectivity, online social network data reflect the intricacies of human health behavior as experienced by health consumers in real time. This study offers methodological insights for qualitative investigations that examine the various kinds of behavioral constructs prevalent in the messages exchanged among users of online communities. Theoretically, this study establishes the manifestation of existing behavior change theories in QuitNet-like online health communities. Pragmatically, it sets the stage for real-time, data-driven sociobehavioral interventions promoting healthy lifestyle modifications by allowing us to understand the emergent user needs to sustain a desired behavior change. PMID:26839162

  6. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.

    PubMed

    Zhou, Ronggang; Chan, Alan H S

    2017-01-01

    In order to compare existing usability data to ideal goals or to those for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, yet often fail to take into account the inherent uncertainties in these judgments during the evaluation process. This paper presents a universal method of usability evaluation that combines the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation model to characterize fuzzy human judgments. Then, with the use of AHP, the weights of the usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process.
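The aggregation step at the heart of such a scheme is B = W ∘ R: an AHP-derived weight vector W applied to a fuzzy membership matrix R over rating grades. A minimal sketch using the weighted-average operator, with invented example numbers (the paper's actual hierarchy, weights, and membership functions come from its expert panel):

```python
def fuzzy_comprehensive_eval(weights, membership):
    """B = W o R with the weighted-average operator: b_j = sum_i w_i * r_ij."""
    n_grades = len(membership[0])
    return [sum(w * row[j] for w, row in zip(weights, membership))
            for j in range(n_grades)]

# Illustrative only: effectiveness, efficiency, satisfaction weighted by a
# hypothetical AHP result, each rated over three grades (good, fair, poor).
weights = [0.5, 0.3, 0.2]
membership = [[0.6, 0.3, 0.1],   # effectiveness
              [0.4, 0.4, 0.2],   # efficiency
              [0.7, 0.2, 0.1]]   # satisfaction
appraisal = fuzzy_comprehensive_eval(weights, membership)
```

When both the weights and each membership row sum to one, the appraisal vector is itself a distribution over grades, and the product's overall rating can be read off as the grade with the largest membership.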

  7. Electron capture in collisions of N^+ with H and H^+ with N

    NASA Astrophysics Data System (ADS)

    Lin, C. Y.; Stancil, P. C.; Gu, J. P.; Buenker, R. J.; Kimura, M.

    2004-05-01

    Charge transfer processes due to collisions of N^+ with atomic hydrogen and H^+ with atomic nitrogen are investigated using the quantum-mechanical molecular-orbital close-coupling (MOCC) method. The MOCC calculations utilize ab initio adiabatic potential curves and nonadiabatic radial and rotational coupling matrix elements obtained with the multireference single- and double-excitation configuration interaction approach. Total and state-selective cross sections for the energy range 0.1-500 eV/u will be presented and compared with existing experimental and theoretical data.

  8. Genetic Algorithms and Local Search

    NASA Technical Reports Server (NTRS)

    Whitley, Darrell

    1996-01-01

    The first part of this presentation is a tutorial level introduction to the principles of genetic search and models of simple genetic algorithms. The second half covers the combination of genetic algorithms with local search methods to produce hybrid genetic algorithms. Hybrid algorithms can be modeled within the existing theoretical framework developed for simple genetic algorithms. An application of a hybrid to geometric model matching is given. The hybrid algorithm yields results that improve on the current state-of-the-art for this problem.
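A hybrid (memetic) genetic algorithm of the kind described interleaves genetic search with local refinement of each offspring. A minimal sketch on the onemax toy problem (illustrative only, not the presentation's geometric model-matching application; population size, rates, and the hill-climbing budget are arbitrary):

```python
import random

def hill_climb(bits, fitness, tries=20):
    """Local search: accept random single-bit flips that improve fitness."""
    best = bits[:]
    for _ in range(tries):
        i = random.randrange(len(best))
        cand = best[:]
        cand[i] ^= 1
        if fitness(cand) > fitness(best):
            best = cand
    return best

def hybrid_ga(n=30, pop_size=20, gens=40):
    """Hybrid GA on onemax: GA operators plus Lamarckian local search."""
    fitness = sum  # onemax: count of 1-bits
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        next_pop = pop[:2]                        # elitism
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(pop[:10], 2)   # truncation selection
            cut = random.randrange(1, n)
            child = p1[:cut] + p2[cut:]           # one-point crossover
            if random.random() < 0.2:
                j = random.randrange(n)
                child[j] ^= 1                     # mutation
            next_pop.append(hill_climb(child, fitness))
        pop = next_pop
    return max(pop, key=fitness)
```

The hill-climbing step is what makes the algorithm "hybrid": genetic operators explore globally while local search polishes each child, which is also why such hybrids can still be modeled within the simple-GA framework mentioned above.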

  9. Minimal time spiking in various ChR2-controlled neuron models.

    PubMed

    Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel

    2018-02-01

    We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to investigate theoretically the existence of singular optimal controls, we observe the optimal bang-bang controls numerically.

  10. Privacy-preserving periodical publishing for medical information

    NASA Astrophysics Data System (ADS)

    Jin, Hua; Ju, Shi-guang; Liu, Shan-cheng

    2013-07-01

    Existing privacy-preserving publishing models, whether static or dynamic, cannot meet the requirements of periodical publishing of medical information. This paper presents a (k,l)-anonymity model that preserves individual associations, together with a principle based on the ε-invariance group for subsequent periodical publishing; the PKIA and PSIGI algorithms are then designed for them. The proposed methods retain more individual associations while preserving privacy, and achieve better publishing quality. Experiments confirm our theoretical results and the practicability of the approach.
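The baseline guarantee such models extend is k-anonymity over quasi-identifiers: every released combination of quasi-identifier values must be shared by at least k records. A minimal check (illustrative only; the paper's (k,l)-anonymity additionally constrains individual associations with sensitive values, which this sketch does not implement, and the records below are hypothetical):

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """True iff every quasi-identifier value combination occurs >= k times."""
    groups = Counter(tuple(rec[q] for q in quasi_ids) for rec in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized medical records (age band + truncated ZIP).
released = [
    {"age": "30-40", "zip": "212**", "diagnosis": "flu"},
    {"age": "30-40", "zip": "212**", "diagnosis": "cold"},
    {"age": "50-60", "zip": "100**", "diagnosis": "flu"},
    {"age": "50-60", "zip": "100**", "diagnosis": "asthma"},
]
```

A periodical-publishing model must preserve a property like this across every release, not just one, which is what the invariance-group principle above addresses.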

  11. Computational intelligence in earth sciences and environmental applications: issues and challenges.

    PubMed

    Cherkassky, V; Krasnopolsky, V; Solomatine, D P; Valdes, J

    2006-03-01

    This paper introduces a generic theoretical framework for predictive learning, and relates it to data-driven and learning applications in earth and environmental sciences. The issues of data quality, selection of the error function, incorporation of the predictive learning methods into the existing modeling frameworks, expert knowledge, model uncertainty, and other application-domain specific problems are discussed. A brief overview of the papers in the Special Issue is provided, followed by discussion of open issues and directions for future research.

  12. Graph theoretical stable allocation as a tool for reproduction of control by human operators

    NASA Astrophysics Data System (ADS)

    van Nooijen, Ronald; Ertsen, Maurits; Kolechkina, Alla

    2016-04-01

    During the design of central control algorithms for existing water resource systems under manual control it is important to consider the interaction with parts of the system that remain under manual control and to compare the proposed new system with the existing manual methods. In graph theory the "stable allocation" problem has good solution algorithms and allows for formulation of flow distribution problems in terms of priorities. As a test case for the use of this approach we used the algorithm to derive water allocation rules for the Gezira Scheme, an irrigation system located between the Blue and White Niles south of Khartoum. In 1925, Gezira started with 300,000 acres; currently it covers close to two million acres.
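The stable allocation problem generalizes classic stable matching, whose deferred-acceptance (Gale-Shapley) algorithm is the prototype for the priority-based distribution used here. A minimal sketch of proposer-optimal deferred acceptance on a toy instance (illustrative; the water-allocation variant works with divisible quantities rather than one-to-one matches, which this sketch does not show):

```python
def gale_shapley(proposer_prefs, reviewer_prefs):
    """Deferred acceptance: returns a proposer-optimal stable matching
    as a dict mapping each reviewer to its matched proposer."""
    free = list(proposer_prefs)                 # proposers not yet matched
    next_choice = {p: 0 for p in proposer_prefs}
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    engaged = {}                                # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_choice[p]]   # p's best untried reviewer
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:  # r prefers the newcomer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return engaged

match = gale_shapley({"a": ["x", "y"], "b": ["y", "x"]},
                     {"x": ["a", "b"], "y": ["b", "a"]})
```

The priority lists play the role of the manually applied distribution rules: the result is stable in the sense that no source-offtake pair would both prefer to deviate from it.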

  13. Determination of ferroelectric contributions to electromechanical response by frequency dependent piezoresponse force microscopy.

    PubMed

    Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V; Lee, Shinbuhm; Lee, Ho Nyung; Morozovska, Anna N; Kim, Yunseok

    2016-07-28

    Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex for accurately determining the pure contribution of ferroelectricity to the PFM signal. Here, we suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep in combination with hysteresis loops in PFM. Our combined experimental and theoretical study verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Suvam; Naghma, Rahla; Kaur, Jaspreet

    The total and ionization cross sections for electron scattering by benzene, halobenzenes, toluene, aniline, and phenol are reported over a wide energy domain. The multi-scattering-centre spherical complex optical potential method has been employed to find the total elastic and inelastic cross sections. The total ionization cross section is estimated from the total inelastic cross section using the complex scattering potential-ionization contribution method. In the present article, the first theoretical calculations of electron-impact total and ionization cross sections have been performed for most of these targets, which have numerous practical applications. Reasonable agreement with existing experimental observations is obtained for all the targets reported here, especially for the total cross section.

  15. Determination of ferroelectric contributions to electromechanical response by frequency dependent piezoresponse force microscopy

    PubMed Central

    Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V.; Lee, Shinbuhm; Lee, Ho Nyung; Morozovska, Anna N.; Kim, Yunseok

    2016-01-01

    Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex for accurately determining the pure contribution of ferroelectricity to the PFM signal. Here, we suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep in combination with hysteresis loops in PFM. Our combined experimental and theoretical study verifies that this method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response. PMID:27466086

  16. The review on infrared image restoration techniques

    NASA Astrophysics Data System (ADS)

    Li, Sijian; Fan, Xiang; Zhu, Bin Cheng; Zheng, Dong

    2016-11-01

    The goal of infrared image restoration is to reconstruct an original scene from a degraded observation. The restoration process at infrared wavelengths, however, still offers numerous research possibilities. To give readers a comprehensive overview of infrared image restoration, the degradation factors are divided into two major categories: noise and blur. Many kinds of infrared image restoration methods are reviewed. The mathematical background and theoretical basis of infrared image restoration technology are presented, and the limitations or insufficiencies of existing methods are discussed. Finally, directions and prospects for the future development of infrared image restoration technology are put forward.

  17. Strengthening of bridges by post-tensioning using monostrands in substituted cable ducts

    NASA Astrophysics Data System (ADS)

    Klusáček, Ladislav; Svoboda, Adam

    2017-09-01

    Post-tensioning is a suitable, reliable and durable method of strengthening existing engineering structures, especially bridges. The high efficiency of post-tensioning can be seen in many applications throughout the world. In this paper the method is extended by a structural system of substituted cable ducts, which significantly widens the application of prestressing and makes it convenient mainly for beam and slab bridges (built between 1920 and 1960). The method of substituted cable ducts is based on theoretical knowledge and technical procedures made possible by developments in prestressing systems, particularly prestressing tendons (monostrands) and encased anchorages, as well as by progress in drilling technology. The technique is highly recommended because it minimizes interventions into the structure and conceals the cable arrangement, and hence has no impact on appearance, which is appreciated not only for valuable historical structures but also in general. In summary, post-tensioning by monostrands in substituted cable ducts is a highly effective method of strengthening existing bridges in order to increase their load capacities under current traffic loads and to extend their service life.

  18. WE-FG-207B-12: Quantitative Evaluation of a Spectral CT Scanner in a Phantom Study: Results of Spectral Reconstructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, X; Arbique, G; Guild, J

    Purpose: To evaluate the quantitative image quality of spectral reconstructions of phantom data from a spectral CT scanner. Methods: The spectral CT scanner (IQon Spectral CT, Philips Healthcare) is equipped with a dual-layer detector and generates conventional 80-140 kVp images and a variety of spectral reconstructions, e.g., virtual monochromatic (VM) images, virtual non-contrast (VNC) images, iodine maps, and effective atomic number (Z) images. A cylindrical solid water phantom (Gammex 472, 33 cm diameter and 5 cm thick) with iodine (2.0-20.0 mg I/ml) and calcium (50-600 mg/ml) rod inserts was scanned at 120 kVp and 27 mGy CTDIvol. Spectral reconstructions were evaluated by comparing image measurements with theoretical values calculated from nominal rod compositions provided by the phantom manufacturer. The theoretical VNC was calculated using water and iodine basis material decomposition, and the theoretical Z was calculated using two common methods, the chemical formula method (Z1) and the dual-energy ratio method (Z2). Results: Beam-hardening-like artifacts between high-attenuation calcium rods (≥300 mg/ml, >800 HU) influenced quantitative measurements, so the quantitative analysis was only performed on iodine rods using the images from the scan with all the calcium rods removed. The CT numbers of the iodine rods in the VM images (50∼150 keV) were close to theoretical values, with an average difference of 2.4±6.9 HU. Compared with theoretical values, the average differences for iodine concentration, VNC CT number and effective Z of the iodine rods were −0.10±0.38 mg/ml, −0.1±8.2 HU, 0.25±0.06 (Z1) and −0.23±0.07 (Z2). Conclusion: The results indicate that the spectral CT scanner generates quantitatively accurate spectral reconstructions at clinically relevant iodine concentrations. Beam-hardening-like artifacts still exist when high-attenuation objects are present, and their impact on patient images needs further investigation.
    YY is an employee of Philips Healthcare.
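The chemical formula method referred to for Z1 is commonly the power-law mixture rule Z_eff = (Σᵢ fᵢ·Zᵢ^m)^(1/m), with electron fractions fᵢ and an exponent m near 2.94. A sketch of that textbook rule (the scanner's exact definition may differ):

```python
def effective_z(electron_fractions, m=2.94):
    """Power-law mixture rule: Z_eff = (sum_i f_i * Z_i**m) ** (1/m).

    electron_fractions: (fraction_of_electrons, atomic_number) pairs that
    should sum to 1 in the first component.
    """
    return sum(f * z ** m for f, z in electron_fractions) ** (1.0 / m)

# Water: 2 of its 10 electrons come from hydrogen (Z=1), 8 from oxygen (Z=8).
z_water = effective_z([(0.2, 1), (0.8, 8)])
```

For water this yields roughly 7.4, the commonly quoted effective atomic number against which phantom measurements are compared.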

  19. Toward a Theoretical Model of Decision-Making and Resistance to Change among Higher Education Online Course Designers

    ERIC Educational Resources Information Center

    Dodd, Bucky J.

    2013-01-01

    Online course design is an emerging practice in higher education, yet few theoretical models currently exist to explain or predict how the diffusion of innovations occurs in this space. This study used a descriptive, quantitative survey research design to examine theoretical relationships between decision-making style and resistance to change…

  20. Factors Influencing the Use of Learning Management System in Saudi Arabian Higher Education: A Theoretical Framework

    ERIC Educational Resources Information Center

    Asiri, Mohammed J. Sherbib; Mahmud, Rosnaini bt; Bakar, Kamariah Abu; Ayub, Ahmad Fauzi bin Mohd

    2012-01-01

    The purpose of this paper is to present the theoretical framework underlying a research on factors that influence utilization of the Jusur Learning Management System (Jusur LMS) in Saudi Arabian public universities. Development of the theoretical framework was done based on library research approach. Initially, the existing literature relevant to…

  1. Necessary and sufficient liveness condition of GS3PR Petri nets

    NASA Astrophysics Data System (ADS)

    Liu, GaiYun; Barkaoui, Kamel

    2015-05-01

    Structural analysis is one of the most important and efficient methods to investigate the behaviour of Petri nets. Liveness is a significant behavioural property of Petri nets. Siphons, as structural objects of a Petri net, are closely related to its liveness. Many deadlock control policies for flexible manufacturing systems (FMS) modelled by Petri nets are implemented via siphon control. Most of the existing methods design liveness-enforcing supervisors by adding control places for siphons based on their controllability conditions. To compute a liveness-enforcing supervisor with behaviour as permissive as possible, it is both theoretically and practically significant to find an exact controllability condition for siphons. However, the existing conditions, max-, max‧-, and max″-controllability of siphons, are all overly restrictive and generally only sufficient. This paper develops a new condition, called max*-controllability, for the siphons in generalised systems of simple sequential processes with resources (GS3PR), a net subclass that can model many real-world automated manufacturing systems. We show that a GS3PR is live if all its strict minimal siphons (SMS) are max*-controlled. Compared with the existing conditions, i.e., max-, max‧-, and max″-controllability of siphons, max*-controllability of the SMS is not only sufficient but also necessary. An example is used to illustrate the proposed method.

  2. Application of laser differential confocal technique in back vertex power measurement for phoropters

    NASA Astrophysics Data System (ADS)

    Li, Fei; Li, Lin; Ding, Xiang; Liu, Wenli

    2012-10-01

    A phoropter is one of the most popular ophthalmic instruments used in optometry, and the back vertex power (BVP) is one of the most important parameters for evaluating the refraction characteristics of a phoropter. In this paper, a new laser differential confocal vertex-power measurement method, which takes advantage of the outstanding focusing ability of a laser differential confocal (LDC) system, is proposed for measuring the BVP of phoropters. A vertex power measurement system was built, experimental results are presented, and some influencing factors are analyzed. It is demonstrated that the method based on the LDC technique has higher measurement precision and stronger immunity to environmental interference than existing methods. Theoretical analysis and experimental results indicate that the measurement error of the method is about 0.02 m-1.

  3. Surface electromagnetic waves in Fibonacci superlattices: Theoretical and experimental results

    NASA Astrophysics Data System (ADS)

    El Hassouani, Y.; Aynaou, H.; El Boudouti, E. H.; Djafari-Rouhani, B.; Akjouj, A.; Velasco, V. R.

    2006-07-01

    We study theoretically and experimentally the existence and behavior of localized surface modes in one-dimensional (1D) quasiperiodic photonic band gap structures. These structures are made of segments and loops arranged according to a Fibonacci sequence. The experiments are carried out by using coaxial cables in the frequency region of a few tens of MHz. We consider 1D periodic structures (superlattices) where each cell is a well-defined Fibonacci generation. In these structures, we generalize a theoretical rule on the surface modes, namely that when one considers two semi-infinite superlattices obtained by the cleavage of an infinite superlattice, there exists exactly one surface mode in each gap. This mode is localized on the surface of either one or the other semi-infinite superlattice. We discuss the existence of various types of surface modes and their spatial localization. The experimental observation of these modes is carried out by measuring the transmission through a guide along which a finite superlattice (i.e., constituted of a finite number of quasiperiodic cells) is grafted vertically. The surface modes appear as maxima of the transmission spectrum. These experiments are in good agreement with the theoretical model based on the formalism of the Green function.

  4. Valuing patient and caregiver time: a review of the literature.

    PubMed

    Tranmer, Jennifer E; Guerriere, Denise N; Ungar, Wendy J; Coyte, Peter C

    2005-01-01

    As healthcare expenditures continue to rise, financial pressures have resulted in a desire for countries to shift resources away from traditional areas of spending. The consequent devolution and reform have resulted in increased care being provided and received within homes and communities, and in an increased reliance on unpaid caregivers. Recent empirical work indicates that costs incurred by care recipients and unpaid caregivers, including time and productivity costs, often account for significant proportions of total healthcare expenditures. However, many economic evaluations do not include these costs. Moreover, when indirect costs are assessed, the methods of valuation are inconsistent and frequently controversial. This paper provides an overview and critique of existing valuation methods. Current methods such as the human capital method, friction cost method and the Washington Panel approach are presented and critiqued according to criteria such as potential for inaccuracy, ease of application, and ethical and distributional concerns. The review illustrates the depth to which the methods have been theoretically examined, and highlights a paucity of research on costs that accrue to unpaid caregivers and a lack of research on time lost from unpaid labour and leisure. To ensure accurate and concise reporting of all time costs, it is concluded that a broad conceptual approach for time costing should be developed that draws on and then expands upon theoretical work to date.
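The two workhorse approaches the review critiques differ mainly in how long lost work time is counted: the human capital method values the entire absence, while the friction cost method stops once a replacement is productive. A toy comparison under invented wage, absence, and friction-period figures (illustrative of the definitions only, not of any study's actual estimates):

```python
def human_capital_cost(daily_wage, days_absent):
    """Human capital method: value every absence day at the gross wage."""
    return daily_wage * days_absent

def friction_cost(daily_wage, days_absent, friction_days, elasticity=0.8):
    """Friction cost method: losses end after the friction period needed to
    replace the worker; the elasticity discounts output lost per day."""
    return daily_wage * min(days_absent, friction_days) * elasticity
```

For any absence longer than the friction period, the friction cost estimate falls below the human capital one, which is one source of the inconsistency between valuations the review highlights.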

  5. Solute Nucleation and Growth in Supercritical Fluid Mixtures

    NASA Technical Reports Server (NTRS)

    Smedley, Gregory T.; Wilemski, Gerald; Rawlins, W. Terry; Joshi, Prakash; Oakes, David B.; Durgin, William W.

    1996-01-01

    This research effort is directed toward two primary scientific objectives: (1) to determine the gravitational effect on the measurement of nucleation and growth rates near a critical point and (2) to investigate the nucleation process in supercritical fluids to aid in the evaluation and development of existing theoretical models and practical applications. A nucleation pulse method will be employed for this investigation using a rapid expansion to a supersaturated state that is maintained for approximately 1 ms followed by a rapid recompression to a less supersaturated state that effectively terminates nucleation while permitting growth to continue. Nucleation, which occurs during the initial supersaturated state, is decoupled from growth by producing rapid pressure changes. Thermodynamic analysis, condensation modeling, apparatus design, and optical diagnostic design necessary for the initiation of a theoretical and experimental investigation of naphthalene nucleation from supercritical CO2 have been completed.

  6. A game-theoretic method for cross-layer stochastic resilient control design in CPS

    NASA Astrophysics Data System (ADS)

    Shen, Jiajun; Feng, Dongqin

    2018-03-01

In this paper, the cross-layer security problem of cyber-physical systems (CPS) is investigated from the game-theoretic perspective. The physical dynamics of the plant are captured by a stochastic differential game that takes cyber-physical influence into account. A necessary and sufficient condition for the existence of state-feedback equilibrium strategies is given. The attack-defence cyber interactions are formulated by a Stackelberg game intertwined with the stochastic differential game in the physical layer. The condition under which the Stackelberg equilibrium is unique, together with the corresponding analytical solutions, is provided. An algorithm is proposed for obtaining a hierarchical security strategy by solving the coupled games, which ensures the operational normalcy and cyber security of the CPS subject to uncertain disturbances and unexpected cyberattacks. Simulation results are given to show the effectiveness and performance of the proposed algorithm.

  7. Edge-augmented Fourier partial sums with applications to Magnetic Resonance Imaging (MRI)

    NASA Astrophysics Data System (ADS)

    Larriva-Latt, Jade; Morrison, Angela; Radgowski, Alison; Tobin, Joseph; Iwen, Mark; Viswanathan, Aditya

    2017-08-01

    Certain applications such as Magnetic Resonance Imaging (MRI) require the reconstruction of functions from Fourier spectral data. When the underlying functions are piecewise-smooth, standard Fourier approximation methods suffer from the Gibbs phenomenon - with associated oscillatory artifacts in the vicinity of edges and an overall reduced order of convergence in the approximation. This paper proposes an edge-augmented Fourier reconstruction procedure which uses only the first few Fourier coefficients of an underlying piecewise-smooth function to accurately estimate jump information and then incorporate it into a Fourier partial sum approximation. We provide both theoretical and empirical results showing the improved accuracy of the proposed method, as well as comparisons demonstrating superior performance over existing state-of-the-art sparse optimization-based methods.
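The Gibbs phenomenon the paper addresses is easy to reproduce. The sketch below (not the authors' code) sums the Fourier series of a unit square wave; the overshoot near the jump stays near 9% of the jump height no matter how many terms are kept, which is exactly the artifact the edge-augmented reconstruction is designed to remove:

```python
import numpy as np

def fourier_partial_sum(x, n_terms):
    """Partial Fourier sum of a unit square wave (odd, jump of 2 at x = 0).

    The exact series is (4/pi) * sum over odd k of sin(k x)/k.
    """
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):
        s += np.sin(k * x) / k
    return 4.0 / np.pi * s

x = np.linspace(0.001, np.pi - 0.001, 20000)
approx = fourier_partial_sum(x, 50)

# Gibbs phenomenon: away from the jump the sum is close to 1, but the peak
# near x = 0 approaches ~1.179 (an ~8.95% overshoot of the jump height)
# and does not shrink as more terms are added.
peak = approx.max()
```

Jump-aware methods like the one proposed here estimate the edge location and height from the same coefficients and subtract the singular part before summing, restoring high-order convergence away from (and up to) the edges.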

  8. Multi-Dimensional High Order Essentially Non-Oscillatory Finite Difference Methods in Generalized Coordinates

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    1998-01-01

This project is about the development of high order, non-oscillatory schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for two-dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters that recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.

  9. Progress in Computational Electron-Molecule Collisions

    NASA Astrophysics Data System (ADS)

Rescigno, T. N.

    1997-10-01

The past few years have witnessed tremendous progress in the development of sophisticated ab initio methods for treating collisions of slow electrons with isolated small molecules. Researchers in this area have benefited greatly from advances in computer technology; indeed, the advent of parallel computers has made it possible to carry out calculations at a level of sophistication inconceivable a decade ago. But bigger and faster computers are only part of the picture. Even with today's computers, the practical need to study electron collisions with the kinds of complex molecules and fragments encountered in real-world plasma processing environments is taxing present methods beyond their current capabilities. Since extrapolation of existing methods to handle increasingly large targets will ultimately fail, as it would require computational resources beyond anything imagined, continued progress must also be linked to new theoretical developments. Some of the techniques recently introduced to address these problems will be discussed and illustrated with examples of electron-molecule collision calculations we have carried out on some fairly complex target gases encountered in processing plasmas. Electron-molecule scattering continues to pose many formidable theoretical and computational challenges. I will touch on some of the outstanding open questions.

  10. A deeper look at two concepts of measuring gene-gene interactions: logistic regression and interaction information revisited.

    PubMed

    Mielniczuk, Jan; Teisseyre, Paweł

    2018-03-01

Detection of gene-gene interactions is one of the most important challenges in genome-wide case-control studies. Besides traditional logistic regression analysis, entropy-based methods have recently attracted significant attention. Among entropy-based methods, interaction information is one of the most promising measures, having many desirable properties. Although both logistic regression and interaction information have been used in several genome-wide association studies, the relationship between them has not been thoroughly investigated theoretically. The present paper attempts to fill this gap. We show that although certain connections between the two methods exist, in general they refer to two different concepts of dependence, and looking for interactions in those two senses leads to different approaches to interaction detection. We introduce an ordering between interaction measures and specify conditions, for independent and dependent genes, under which interaction information is a more discriminative measure than logistic regression. Moreover, we show that for so-called perfect distributions these measures are equivalent. The numerical experiments illustrate the theoretical findings, indicating that interaction information and its modified version are more universal tools for detecting various types of interaction than logistic regression and linkage disequilibrium measures. © 2017 WILEY PERIODICALS, INC.
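Interaction information can be computed directly from a three-way joint distribution. Below is a minimal sketch (not the paper's code), assuming the common convention II(X;Y;Z) = I(X;Y|Z) − I(X;Y) expressed through joint entropies; the XOR example shows a purely synergistic interaction that pairwise measures cannot see:

```python
import numpy as np
from itertools import product

def entropy(p):
    """Shannon entropy (bits) of a probability array."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def interaction_information(pxyz):
    """II(X;Y;Z) = I(X;Y|Z) - I(X;Y) for a 3-way joint distribution
    given as an array pxyz[x, y, z], written via joint entropies:
    II = H(XY) + H(XZ) + H(YZ) - H(X) - H(Y) - H(Z) - H(XYZ)."""
    hx = entropy(pxyz.sum(axis=(1, 2)))
    hy = entropy(pxyz.sum(axis=(0, 2)))
    hz = entropy(pxyz.sum(axis=(0, 1)))
    hxy = entropy(pxyz.sum(axis=2).ravel())
    hxz = entropy(pxyz.sum(axis=1).ravel())
    hyz = entropy(pxyz.sum(axis=0).ravel())
    hxyz = entropy(pxyz.ravel())
    return hxy + hxz + hyz - hx - hy - hz - hxyz

# XOR-like interaction: Z = X xor Y with X, Y fair and independent.
# Each pairwise mutual information is zero, yet II = 1 bit.
p = np.zeros((2, 2, 2))
for x, y in product((0, 1), repeat=2):
    p[x, y, x ^ y] = 0.25
ii = interaction_information(p)  # → 1.0
```

This is the sense in which interaction information captures a "synergistic" dependence that a main-effects logistic regression, fitted on X and Y separately, would miss entirely.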

  11. Experimental and theoretical studies of 3-benzyloxy-2-nitropyridine

    NASA Astrophysics Data System (ADS)

    Sun, Wenting; Cui, Yu; Liu, Huimin; Zhao, Haitao; Zhang, Wenqin

    2012-10-01

The structure of 3-benzyloxy-2-nitropyridine has been investigated both experimentally and theoretically. The X-ray crystallography results show that the nitro group is tilted out of the pyridine ring plane by 66.4(4)°, which is mainly attributed to the repulsion between the lone pairs of the O atom of the 3-benzyloxy moiety and those of the O atoms of the nitro group. An interesting centrosymmetric π-stacking molecular pair has been found in the crystalline state, which results in the approximate coplanarity of the pyridine ring with the benzene ring. The calculated results show that the dihedral angle between the nitro group and the pyridine ring from the X3LYP method is much closer to the experimental data than that from the M06-2X one. The existence of two conformational isomers of 3-benzyloxy-2-nitropyridine with equal energy explains well the disorder of the nitro group at room temperature. In addition, the vibrational frequencies are also calculated by the X3LYP and M06-2X methods and compared with the experimental results. The predictions from the X3LYP method coincide well with the locations of the experimental frequencies.

  12. Implementation of rigorous renormalization group method for ground space and low-energy states of local Hamiltonians

    NASA Astrophysics Data System (ADS)

    Roberts, Brenden; Vidick, Thomas; Motrunich, Olexei I.

    2017-12-01

The success of polynomial-time tensor network methods for computing ground states of certain quantum local Hamiltonians has recently been given a sound theoretical basis by Arad et al. [Commun. Math. Phys. 356, 65 (2017), 10.1007/s00220-017-2973-z]. The convergence proof, however, relies on "rigorous renormalization group" (RRG) techniques which differ fundamentally from existing algorithms. We introduce a practical adaptation of the RRG procedure which, while no longer theoretically guaranteed to converge, finds matrix product state ansatz approximations to the ground spaces and low-lying excited spectra of local Hamiltonians in realistic situations. In contrast to other schemes, RRG does not utilize variational methods on tensor networks. Rather, it operates on subsets of the system Hilbert space by constructing approximations to the global ground space in a treelike manner. We evaluate the algorithm numerically, finding similar performance to the density matrix renormalization group (DMRG) in the case of a gapped nondegenerate Hamiltonian. Even in challenging situations of criticality, large ground-state degeneracy, or long-range entanglement, RRG remains able to identify candidate states having large overlap with ground and low-energy eigenstates, outperforming DMRG in some cases.

  13. Measuring magnetic field vector by stimulated Raman transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wenli; Wei, Rong, E-mail: weirong@siom.ac.cn; Lin, Jinda

    2016-03-21

We present a method for measuring the magnetic field vector in an atomic fountain by probing the line strength of stimulated Raman transitions. The relative line strength for a Λ-type level system in an existing magnetic field is theoretically analyzed. The magnetic field vector measured by our proposed method agrees well with that obtained by the traditional bias magnetic field method, with an axial resolution of 6.1 mrad and a radial resolution of 0.16 rad. Dependences of the Raman transitions on laser polarization schemes are also analyzed. Our method offers potential advantages for magnetic field measurement: it requires no additional bias fields, is not limited by the magnetic field intensity, and extends the spatial measurement range. The proposed method can be widely used for measuring the magnetic field vector in other precision measurement fields.

  14. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework

    PubMed Central

    Zhou, Ronggang; Chan, Alan H. S.

    2016-01-01

BACKGROUND: In order to compare existing usability data to ideal goals or to that for other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, but often fail to take into account the inherent uncertainties in these judgments in the evaluation process. OBJECTIVE: This paper presents a universal method of usability evaluation by combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction of a product. METHODS: With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed by using the fuzzy comprehensive evaluation technique model to characterize fuzzy human judgments. Then with the use of AHP, the weights of usability components were elicited from these experts. RESULTS AND CONCLUSIONS: Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process. PMID:28035943
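The two-layer aggregation described above can be sketched as follows. All weights, membership values, and the defuzzification scale are illustrative placeholders, not data from the study; the operator shown is the common weighted-average fuzzy composition:

```python
import numpy as np

# Hypothetical setup: three usability components (effectiveness, efficiency,
# satisfaction) rated against four appraisal grades (poor ... excellent).
# The weights would come from an AHP pairwise-comparison step and the
# membership matrix from expert judgments; all numbers here are made up.
weights = np.array([0.5, 0.3, 0.2])           # AHP-derived component weights
membership = np.array([                       # rows: components, cols: grades
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.3, 0.4, 0.3],
    [0.2, 0.3, 0.4, 0.1],
])

# Weighted-average fuzzy composition: B = w ∘ R
appraisal = weights @ membership
appraisal /= appraisal.sum()                  # normalise to a fuzzy vector

# Defuzzify against an (illustrative) grade scale to get a single index.
grade_scores = np.array([25, 50, 75, 100])
usability_index = float(appraisal @ grade_scores)
```

The point of the fuzzy vector `appraisal` is that it preserves the spread of expert judgment across grades; collapsing it to `usability_index` is only the final reporting step.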

  15. Measuring Spatial Accessibility of Health Care Providers – Introduction of a Variable Distance Decay Function within the Floating Catchment Area (FCA) Method

    PubMed Central

    Groneberg, David A.

    2016-01-01

We integrated recent improvements within the floating catchment area (FCA) method family into an integrated 'iFCA' method. Within this method we focused on the distance decay function and its parameters. So far only distance decay functions with constant parameters have been applied. Therefore, we developed a variable distance decay function to be used within the FCA method. We were able to replace the impedance coefficient β by readily available distribution parameters (i.e. the median and standard deviation (SD)) within a logistic-based distance decay function. Hence, the function is shaped individually for every single population location by the median and SD of all population-to-provider distances within a global catchment size. Theoretical application of the variable distance decay function showed conceptually sound results. Furthermore, the existence of effective variable catchment sizes, defined by the asymptotic approach to zero of the distance decay function, was revealed, satisfying the need for variable catchment sizes. The application of the iFCA method within an urban case study in Berlin (Germany) confirmed the theoretical fit of the suggested method. In summary, we introduced for the first time a variable distance decay function within an integrated FCA method. This function accounts for individual travel behaviors determined by the distribution of providers. Additionally, the function incorporates effective variable catchment sizes and therefore obviates the need for determining variable catchment sizes separately. PMID:27391649
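One plausible reading of such a variable decay function is a logistic curve centred on the median population-to-provider distance and scaled by the SD. The parameterisation below is an assumption for illustration, not necessarily the exact function used in the paper:

```python
import math

def logistic_decay(d, median, sd, steepness=1.0):
    """Hypothetical variable logistic distance-decay weight.

    Centred on the median population-to-provider distance and scaled by the
    standard deviation, so each population location gets its own curve;
    the exact parameterisation in the iFCA paper may differ.
    """
    return 1.0 / (1.0 + math.exp(steepness * (d - median) / sd))

# The weight is 0.5 at the median distance and decays toward zero, so the
# point where it becomes negligible acts as an effective (soft) catchment
# boundary without fixing a hard catchment size in advance.
w_median = logistic_decay(10.0, median=10.0, sd=4.0)   # 0.5
w_far = logistic_decay(40.0, median=10.0, sd=4.0)      # ≈ 0
```
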

  16. A thermal/nonthermal approach to solar flares

    NASA Technical Reports Server (NTRS)

    Benka, Stephen G.

    1991-01-01

An approach for modeling solar flare high-energy emissions is developed in which both thermal and nonthermal particles coexist and contribute to the radiation. The thermal/nonthermal distribution function is interpreted physically by postulating the existence of DC current sheets in the flare region. The currents then provide both primary plasma heating through Joule dissipation, and runaway electron acceleration. The physics of runaway acceleration is discussed. Several methods are presented for obtaining approximations to the thermal/nonthermal distribution function, both within the current sheets and outside of them. Theoretical hard x ray spectra are calculated, allowing for thermal bremsstrahlung from the heated plasma electrons impinging on the chromosphere. A simple model for hard x ray images of two-ribbon flares is presented. Theoretical microwave gyrosynchrotron spectra are calculated and analyzed, uncovering important new effects caused by the interplay of thermal and nonthermal particles. The theoretical spectra are compared with observed high resolution spectra of solar flares, and excellent agreement is found, in both hard x rays and microwaves. The future detailed application of this approach to solar flares is discussed, as are possible refinements to this theory.

  17. Constrained orbital intercept-evasion

    NASA Astrophysics Data System (ADS)

    Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh

    2014-06-01

An effective characterization of intercept-evasion confrontations in various space environments, and a derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms must simultaneously operate as components of satellite formations and/or systems while retaining the capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on a Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient and orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.

  18. The fraction of quiescent massive galaxies in the early Universe

    NASA Astrophysics Data System (ADS)

    Fontana, A.; Santini, P.; Grazian, A.; Pentericci, L.; Fiore, F.; Castellano, M.; Giallongo, E.; Menci, N.; Salimbeni, S.; Cristiani, S.; Nonino, M.; Vanzella, E.

    2009-07-01

Aims: We attempt to compile a complete, mass-selected sample of galaxies with low specific star-formation rates, and compare their properties with theoretical model predictions. Methods: We use the f(24 μm)/f(K) flux ratio and SED fitting to the 0.35-8.0 μm spectral distribution to select quiescent galaxies from z ≃ 0.4 to z ≃ 4 in the GOODS-MUSIC sample. Our observational selection can be translated into thresholds on the specific star-formation rate Ṁ/M*, which can be compared with theoretical predictions. Results: In the framework of the well-known global decline in the quiescent galaxy fraction with redshift, we find that a non-negligible fraction (≃ 15-20%) of massive galaxies with low specific star-formation rates exists up to z ≃ 4, including a tail of “red and dead” galaxies with Ṁ/M* < 10⁻¹¹ yr⁻¹. Theoretical models vary to a large extent in their predictions for the fraction of galaxies with low specific star-formation rates, but are unable to provide a global match to our data.

  19. Theoretical and Experimental Studies of the Transonic Flow Field and Associated Boundary Conditions near a Longitudinally-Slotted Wind-Tunnel Wall. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Everhart, Joel Lee

    1988-01-01

    A theoretical examination of the slotted-wall flow field is conducted to determine the appropriate wall pressure drop (or boundary condition) equation. This analysis improves the understanding of the fluid physics of these types of flow fields and helps in evaluating the uncertainties and limitations existing in previous mathematical developments. It is shown that the resulting slotted-wall boundary condition contains contributions from the airfoil-induced streamline curvature and the non-linear, quadratic, slot crossflow in addition to an often neglected linear term which results from viscous shearing in the slot. Existing and newly acquired experimental data are examined in the light of this formulation and theoretical developments.

  20. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when there is a sufficient number of representative training samples. In many real-life applications, such as passport identification, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or the algorithms may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one-training-sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to draw the recognition decision. The FERET database is used for evaluating the proposed method and results are encouraging.

  1. Total body water and lean body mass estimated by ethanol dilution

    NASA Technical Reports Server (NTRS)

    Loeppky, J. A.; Myhre, L. G.; Venters, M. D.; Luft, U. C.

    1977-01-01

    A method for estimating total body water (TBW) using breath analyses of blood ethanol content is described. Regression analysis of ethanol concentration curves permits determination of a theoretical concentration that would have existed if complete equilibration had taken place immediately upon ingestion of the ethanol; the water fraction of normal blood may then be used to calculate TBW. The ethanol dilution method is applied to 35 subjects, and comparison with a tritium dilution method of determining TBW indicates that the correlation between the two procedures is highly significant. Lean body mass and fat fraction were determined by hydrostatic weighing, and these data also prove compatible with results obtained from the ethanol dilution method. In contrast to the radioactive tritium dilution method, the ethanol dilution method can be repeated daily with its applicability ranging from diseased individuals to individuals subjected to thermal stress, strenuous exercise, water immersion, or the weightless conditions of space flights.
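The back-extrapolation step can be sketched as follows. The blood water fraction and all sample values below are illustrative, and the sketch works from blood ethanol concentrations directly rather than from the breath analyses used in the study:

```python
import numpy as np

BLOOD_WATER_FRACTION = 0.80   # illustrative value; the method uses the
                              # water fraction of normal blood

def total_body_water(times_h, ethanol_conc_g_per_l, dose_g):
    """Estimate total body water (litres) by ethanol dilution.

    Fit the linear post-absorption decline of blood ethanol and
    back-extrapolate to t = 0: the intercept is the theoretical
    concentration that would have existed had complete equilibration
    occurred immediately upon ingestion. Dividing the dose by the
    corresponding concentration in blood water gives TBW.
    """
    slope, c0 = np.polyfit(times_h, ethanol_conc_g_per_l, 1)
    water_conc = c0 / BLOOD_WATER_FRACTION   # concentration in blood water
    return dose_g / water_conc

# Synthetic post-absorption samples following C(t) = 0.50 - 0.10 t (g/L)
t = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
c = 0.50 - 0.10 * t
tbw = total_body_water(t, c, dose_g=25.0)    # 25 / (0.5 / 0.8) = 40 L
```
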

  2. Machine vision application in animal trajectory tracking.

    PubMed

    Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter

    2016-04-01

This article was motivated by physicians' demand for technical support, based on machine vision tools, for research on pathologies of the gastrointestinal tract [10]. The proposed solution is intended as a less expensive alternative to existing RF (radio frequency) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction; thresholding methods based on global and local thresholds; and colour matching against a chosen template containing a searched spectrum of colours. The methods were tested offline on five video samples. Each sample contained a situation with a moving guinea pig confined in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
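The simplest of the listed approaches, background subtraction with a fixed threshold, can be sketched in a few lines of NumPy (an illustration, not the authors' implementation):

```python
import numpy as np

def track_centroid(frames, background, threshold=30):
    """Locate the animal in each frame by background subtraction:
    threshold the absolute difference image and take the centroid of the
    foreground pixels. Returns one (x, y) tuple per frame, or None when
    nothing exceeds the threshold."""
    path = []
    for frame in frames:
        diff = np.abs(frame.astype(int) - background.astype(int))
        ys, xs = np.nonzero(diff > threshold)
        if len(xs) == 0:
            path.append(None)                 # animal not detected
        else:
            path.append((xs.mean(), ys.mean()))
    return path

# Synthetic test: a bright 5x5 "animal" moving across a dark cage.
bg = np.zeros((60, 80), dtype=np.uint8)
frames = []
for x in (10, 30, 50):
    f = bg.copy()
    f[20:25, x:x + 5] = 200
    frames.append(f)
trajectory = track_centroid(frames, bg)
```

The global/local thresholding and colour-matching variants mentioned above differ only in how the foreground mask is produced; the centroid step is the same.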

  3. Chemical databases evaluated by order theoretical tools.

    PubMed

    Voigt, Kristina; Brüggemann, Rainer; Pudenz, Stefan

    2004-10-01

Data on environmental chemicals are urgently needed to comply with the future chemicals policy in the European Union. The availability of data on parameters and chemicals can be evaluated by chemometrical and environmetrical methods. Different mathematical and statistical methods are taken into account in this paper. The emphasis is set on a new, discrete mathematical method called METEOR (method of evaluation by order theory). Application of the Hasse diagram technique (HDT) to the complete data-matrix comprising 12 objects (databases) × 27 attributes (parameters + chemicals) reveals that ECOTOX (ECO), the environmental fate database (EFD) and extoxnet (EXT)--also called multi-database databases--are best. Most specialised single databases are found in minimal positions in the Hasse diagram; these are the biocatalysis/biodegradation database (BID), the pesticide database (PES) and UmweltInfo (UMW). The aggregation of environmental parameters and chemicals (equal weight) leads to a slimmer data-matrix on the attribute side. However, no significant differences are found in the "best" and "worst" objects. The whole approach indicates a rather bad situation in terms of the availability of data on existing chemicals and hence an alarming signal concerning the new and existing chemicals policies of the EEC.
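The core of the Hasse diagram technique, comparing objects under the componentwise partial order, can be sketched as follows. The data matrix here is a toy example, not the 12 × 27 matrix from the paper:

```python
import numpy as np

def dominates(a, b):
    """a >= b in every attribute, with at least one strict inequality."""
    return np.all(a >= b) and np.any(a > b)

def maximal_objects(matrix, names):
    """Maximal elements of the object set under the product order used by
    the Hasse diagram technique: an object is maximal if no other object
    dominates it in every attribute. Incomparable objects (each better on
    some attribute) can therefore both be maximal."""
    keep = []
    for i, name in enumerate(names):
        if not any(dominates(matrix[j], matrix[i])
                   for j in range(len(names)) if j != i):
            keep.append(name)
    return keep

# Toy data matrix: rows = databases, columns = availability scores for
# three attribute groups (all values illustrative, not from the paper).
names = ["ECO", "EFD", "EXT", "BID", "PES"]
data = np.array([
    [3, 3, 2],
    [2, 3, 3],
    [3, 2, 3],
    [1, 1, 2],
    [2, 1, 1],
])
best = maximal_objects(data, names)  # → ['ECO', 'EFD', 'EXT']
```

In the toy data the three "multi-database databases" are mutually incomparable maximal elements, while BID and PES are dominated and hence sit in minimal positions, mirroring the structure the paper reports.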

  4. Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zamora, Richard James; Voter, Arthur F.; Perez, Danny

Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Finally, following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.

  5. Discrete event performance prediction of speculatively parallel temperature-accelerated dynamics

    DOE PAGES

    Zamora, Richard James; Voter, Arthur F.; Perez, Danny; ...

    2016-12-01

Due to its unrivaled ability to predict the dynamical evolution of interacting atoms, molecular dynamics (MD) is a widely used computational method in theoretical chemistry, physics, biology, and engineering. Despite its success, MD is only capable of modeling time scales within several orders of magnitude of thermal vibrations, leaving out many important phenomena that occur at slower rates. The Temperature Accelerated Dynamics (TAD) method overcomes this limitation by thermally accelerating the state-to-state evolution captured by MD. Due to the algorithmically complex nature of the serial TAD procedure, implementations have yet to improve performance by parallelizing the concurrent exploration of multiple states. Here we utilize a discrete event-based application simulator to introduce and explore a new Speculatively Parallel TAD (SpecTAD) method. We investigate the SpecTAD algorithm, without a full-scale implementation, by constructing an application simulator proxy (SpecTADSim). Finally, following this method, we discover that a nontrivial relationship exists between the optimal SpecTAD parameter set and the number of CPU cores available at run-time. Furthermore, we find that a majority of the available SpecTAD boost can be achieved within an existing TAD application using relatively simple algorithm modifications.

  6. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and of the visual data make WVSNs more complex than the conventional Wireless Sensor Network (WSN). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, while local-balance problems are generally neglected. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measure called energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed that considers the local-balance factor. Furthermore, the associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given for evaluating its performance. The experimental results show that the proposed DVBDAS achieves higher performance than existing methods in terms of both energy preservation and network lifetime extension.

  7. A continuous-wave ultrasound system for displacement amplitude and phase measurement.

    PubMed

    Finneran, James J; Hastings, Mardi C

    2004-06-01

A noninvasive, continuous-wave ultrasonic technique was developed to measure the displacement amplitude and phase of mechanical structures. The measurement system was based on a method developed by Rogers and Hastings ["Noninvasive vibration measurement system and method for measuring amplitude of vibration of tissue in an object being investigated," U.S. Patent No. 4,819,643 (1989)] and expanded to include phase measurement. A low-frequency sound source was used to generate harmonic vibrations in a target of interest. The target was simultaneously insonified by a low-power, continuous-wave ultrasonic source. Reflected ultrasound was phase modulated by the target motion and detected with a separate ultrasonic transducer. The target displacement amplitude was obtained directly from the received ultrasound frequency spectrum by comparing the carrier and sideband amplitudes. Phase information was obtained by demodulating the received signal using a double-balanced mixer and low-pass filter. A theoretical model for the ultrasonic receiver field is also presented. This model couples existing models for focused piston radiators and for pulse-echo ultrasonic fields. Experimental measurements of the resulting receiver fields compared favorably with theoretical predictions.
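The amplitude recovery from the carrier and sideband can be sketched under a small-modulation-index assumption: for reflection the phase-modulation index is m = 4πd/λ, and the first-sideband-to-carrier amplitude ratio J₁(m)/J₀(m) ≈ m/2 for m ≪ 1. The function and numbers below are illustrative, not the system's calibration:

```python
import math

def displacement_from_spectrum(sideband_amp, carrier_amp, ultrasound_freq_hz,
                               sound_speed_m_s=1500.0):
    """Recover target displacement amplitude (metres) from the measured
    carrier and first-sideband amplitudes of phase-modulated reflected
    ultrasound.

    For reflection the modulation index is m = 4*pi*d/lambda, and for small
    m the sideband-to-carrier ratio J1(m)/J0(m) is approximately m/2. This
    small-index inversion is a sketch of the spectral comparison described
    above and is valid only for m << 1.
    """
    m = 2.0 * sideband_amp / carrier_amp        # invert J1(m)/J0(m) ≈ m/2
    wavelength = sound_speed_m_s / ultrasound_freq_hz
    return m * wavelength / (4.0 * math.pi)

# Illustrative numbers: a 5 MHz carrier in water-like tissue (lambda = 0.3 mm)
# with a sideband 40 dB below the carrier gives a sub-micron displacement.
d = displacement_from_spectrum(0.01, 1.0, 5e6)
```

In this small-index regime the recovered displacement is linear in the sideband-to-carrier ratio, which is why the method can read amplitude directly off the spectrum analyzer.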

  8. Computation of rare transitions in the barotropic quasi-geostrophic equations

    NASA Astrophysics Data System (ADS)

    Laurie, Jason; Bouchet, Freddy

    2015-01-01

    We investigate the theoretical and numerical computation of rare transitions in simple geophysical turbulent models. We consider the barotropic quasi-geostrophic and two-dimensional Navier-Stokes equations in regimes where bistability between two coexisting large-scale attractors exists. By means of large deviations and instanton theory, using an Onsager-Machlup path integral formalism for the transition probability, we show how one can directly compute the most probable transition path between two coexisting attractors, analytically in an equilibrium (Langevin) framework and numerically otherwise. We adapt a class of numerical optimization algorithms known as minimum action methods to simple geophysical turbulent models. We show that by numerically minimizing an appropriate action functional in a large deviation limit, one can predict the most likely transition path for a rare transition between two states. By considering examples where theoretical predictions can be made, we show that the minimum action method successfully predicts the most likely transition path. Finally, we discuss the application and extension of such numerical optimization schemes to the computation of rare transitions observed in direct numerical simulations and experiments, and to other, more complex, turbulent systems.
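    The minimum action method can be illustrated on the simplest bistable Langevin example: discretize a path between the two attractors of a double-well potential and descend the Freidlin-Wentzell action by gradient descent. This is a toy sketch with a crude numerical gradient; the drift b(x) = x − x³ and all parameters are illustrative, not the geophysical models of the paper.

```python
def drift(x):
    """Drift of the gradient (Langevin) toy system: b(x) = x - x**3,
    a double well with attractors at x = -1 and x = +1."""
    return x - x**3

def action(path, dt):
    """Discretized Freidlin-Wentzell action S = 0.5 * sum (xdot - b(x))^2 * dt."""
    s = 0.0
    for i in range(len(path) - 1):
        xm = 0.5 * (path[i] + path[i + 1])           # midpoint rule
        xdot = (path[i + 1] - path[i]) / dt
        s += 0.5 * (xdot - drift(xm)) ** 2 * dt
    return s

def minimize_action(n=30, T=10.0, iters=300, lr=0.005, eps=1e-6):
    """Plain gradient descent on the action over interior path points,
    endpoints pinned at the two attractors (crude numerical gradient)."""
    dt = T / n
    path = [-1.0 + 2.0 * i / n for i in range(n + 1)]   # straight-line guess
    for _ in range(iters):
        for i in range(1, n):
            path[i] += eps
            s_plus = action(path, dt)
            path[i] -= 2 * eps
            s_minus = action(path, dt)
            path[i] += eps                               # restore
            path[i] -= lr * (s_plus - s_minus) / (2 * eps)
    return path, action(path, dt)
```

Descending from the straight-line guess lowers the action toward the most probable transition path.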

  9. A step forward in the study of the electroerosion by optical methods

    NASA Astrophysics Data System (ADS)

    Aparicio, R.; Gale, M. F. Ruiz; Hogert, E. N.; Landau, M. R.; Gaggioli, N. G.

    2003-05-01

    This work develops two theoretical models of surfaces to explain the behavior of light scattered by samples that undergo some alteration. In the first model, the mean intensity scattered by the sample is evaluated, and the curves obtained as a function of the eroded-to-total surface ratio are analyzed. The theoretical results are compared with those obtained experimentally, showing that a strong relation exists between the electroerosion level and the light scattered by the sample. The second model analyzes a surface with random changes in its roughness. A translucent surface whose roughness changes in a controlled way is studied, and the variation of the correlation coefficient as a function of the roughness variation is determined by the transmission speckle correlation method. The experimental values are compared with those obtained from this model. In summary, it is shown that the first- and second-order statistical properties of the light transmitted or reflected by a sample with variable topography can be used as a parameter to analyze these morphologic changes.

  10. Distributed support vector machine in master-slave mode.

    PubMed

    Chen, Qingguo; Cao, Feilong

    2018-05-01

    It is well known that the support vector machine (SVM) is an effective learning algorithm, and the alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM and ADMM in a master-slave configuration in which the master node is connected to the slave nodes so that results can be broadcast. The distributed SVM is regarded as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, an over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM has linear convergence, the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework. Copyright © 2018 Elsevier Ltd. All rights reserved.
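    The master-slave consensus ADMM structure can be sketched on a toy problem in which each slave solves a closed-form local least-squares sub-problem and the master maintains the consensus variable. This is a hedged stand-in only: the actual MS-DSVM solves hinge-loss SVM sub-problems, and the function and parameter names below are invented for illustration.

```python
def ms_admm_ridge(datasets, lam=1.0, rho=1.0, iters=1000):
    """Toy master-slave consensus ADMM for 1-D ridge regression:
    minimize sum_k sum_(x,y) (w*x - y)^2 + lam*z^2  s.t. w_k = z.
    Each 'slave' k solves its local sub-problem in closed form; the
    'master' gathers (w_k + u_k) and broadcasts the consensus z."""
    K = len(datasets)
    w = [0.0] * K          # local primal variables (slaves)
    u = [0.0] * K          # scaled dual variables
    z = 0.0                # consensus variable (master)
    for _ in range(iters):
        for k, data in enumerate(datasets):              # slave updates
            sxx = sum(x * x for x, _ in data)
            sxy = sum(x * y for x, y in data)
            w[k] = (2 * sxy + rho * (z - u[k])) / (2 * sxx + rho)
        z = rho * sum(w[k] + u[k] for k in range(K)) / (2 * lam + K * rho)
        for k in range(K):                               # dual updates
            u[k] += w[k] - z
    return z
```

At convergence z matches the centralized solution sum(xy) / (sum(x²) + lam), even though no node ever sees the other nodes' raw data.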

  11. RiPPAS: A Ring-Based Privacy-Preserving Aggregation Scheme in Wireless Sensor Networks

    PubMed Central

    Zhang, Kejia; Han, Qilong; Cai, Zhipeng; Yin, Guisheng

    2017-01-01

    Recently, data privacy in wireless sensor networks (WSNs) has received increased attention. The characteristics of WSNs mean that users’ queries are mainly aggregation queries. In this paper, the problem of processing aggregation queries in WSNs with data privacy preservation is investigated, and a Ring-based Privacy-Preserving Aggregation Scheme (RiPPAS) is proposed. RiPPAS adopts a ring structure to perform aggregation. It uses a pseudonym mechanism for anonymous communication and a homomorphic encryption technique to add noise to data that could easily be disclosed. RiPPAS can handle both sum() queries and min()/max() queries, whereas existing privacy-preserving aggregation methods can only deal with sum() queries. For processing sum() queries, compared with existing methods, RiPPAS has advantages in terms of privacy preservation and communication efficiency, as shown by theoretical analysis and simulation results. For processing min()/max() queries, RiPPAS provides effective privacy preservation and has low communication overhead. PMID:28178197

  12. Multiple ³H-oxytocin binding sites in rat myometrial plasma membranes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crankshaw, D.; Gaspar, V.; Pliska, V.

    1990-01-01

    The affinity spectrum method has been used to analyse binding isotherms for ³H-oxytocin to rat myometrial plasma membranes. Three populations of binding sites with dissociation constants (Kd) of 0.6-1.5 × 10⁻⁹, 0.4-1.0 × 10⁻⁷ and 7 × 10⁻⁶ mol/l were identified, and their existence was verified by cluster analysis based on similarities between Kd, binding capacity and Hill coefficient. When experimental values were compared to theoretical curves constructed using the estimated binding parameters, good fits were obtained. Binding parameters obtained by this method were not influenced by the presence of GTPγS (guanosine-5'-O-3-thiotriphosphate) in the incubation medium. The binding parameters agree reasonably well with those found in uterine cells; they support the existence of a medium affinity site and may allow for an explanation of some of the discrepancies between binding and response in this system.
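    The theoretical curves mentioned above come from summing independent classes of binding sites; a minimal sketch (the (Bmax, Kd) values in the test are illustrative, not the paper's fitted parameters):

```python
def bound(ligand, sites):
    """Total specific binding for independent classes of sites:
    B(L) = sum_i Bmax_i * L / (Kd_i + L), one Langmuir term per class."""
    return sum(bmax * ligand / (kd + ligand) for bmax, kd in sites)
```

At a ligand concentration equal to a class's Kd, that class contributes exactly half its Bmax.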

  13. The effect of contact angles and capillary dimensions on the burst frequency of super hydrophilic and hydrophilic centrifugal microfluidic platforms, a CFD study.

    PubMed

    Kazemzadeh, Amin; Ganesan, Poo; Ibrahim, Fatimah; He, Shuisheng; Madou, Marc J

    2013-01-01

    This paper employs the volume of fluid (VOF) method to numerically investigate the effect of the width, height, and contact angles on burst frequencies of super hydrophilic and hydrophilic capillary valves in centrifugal microfluidic systems. Existing experimental results in the literature have been used to validate the implementation of the numerical method. The performance of capillary valves in the rectangular and the circular microfluidic structures on super hydrophilic centrifugal microfluidic platforms is studied. The numerical results are also compared with the existing theoretical models and the differences are discussed. Our experimental and computed results show a minimum burst frequency occurring at square capillaries and this result is useful for designing and developing more sophisticated networks of capillary valves. It also predicts that in super hydrophilic microfluidics, the fluid leaks consistently from the capillary valve at low pressures which can disrupt the biomedical procedures in centrifugal microfluidic platforms.
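    A back-of-the-envelope burst-frequency estimate helps frame what the CFD study computes: a capillary pressure barrier balanced against centrifugal pressure. The barrier expression dP ≈ −4σ·cos(θ)/D_h, the sign convention, and all numbers below are assumptions for illustration; the paper's VOF treatment is far more detailed.

```python
import math

def burst_rpm(sigma, theta_deg, d_hydraulic, rho, r_mean, delta_r):
    """Hedged sketch of a capillary-valve burst condition: a pressure
    barrier dP ~ -4*sigma*cos(theta)/D_h (meniscus pinned at an
    effective angle theta > 90 deg) balanced against the centrifugal
    pressure rho*omega^2*r_mean*delta_r. Illustrative only."""
    dp = -4.0 * sigma * math.cos(math.radians(theta_deg)) / d_hydraulic
    omega = math.sqrt(dp / (rho * r_mean * delta_r))     # rad/s at burst
    return omega * 60.0 / (2.0 * math.pi)                # convert to rpm
```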

  14. The Effect of Contact Angles and Capillary Dimensions on the Burst Frequency of Super Hydrophilic and Hydrophilic Centrifugal Microfluidic Platforms, a CFD Study

    PubMed Central

    Kazemzadeh, Amin; Ganesan, Poo; Ibrahim, Fatimah; He, Shuisheng; Madou, Marc J.

    2013-01-01

    This paper employs the volume of fluid (VOF) method to numerically investigate the effect of the width, height, and contact angles on burst frequencies of super hydrophilic and hydrophilic capillary valves in centrifugal microfluidic systems. Existing experimental results in the literature have been used to validate the implementation of the numerical method. The performance of capillary valves in the rectangular and the circular microfluidic structures on super hydrophilic centrifugal microfluidic platforms is studied. The numerical results are also compared with the existing theoretical models and the differences are discussed. Our experimental and computed results show a minimum burst frequency occurring at square capillaries and this result is useful for designing and developing more sophisticated networks of capillary valves. It also predicts that in super hydrophilic microfluidics, the fluid leaks consistently from the capillary valve at low pressures which can disrupt the biomedical procedures in centrifugal microfluidic platforms. PMID:24069169

  15. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III; Castellino, Craig

    1989-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies but are inadequate at lower ones, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is being studied which considers aircraft fuselages lined with panels alternately tuned to frequencies above and below the frequency to be attenuated. Adjacent panels would oscillate at equal amplitude, to give equal source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to become cut off and therefore non-propagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is currently being investigated both theoretically and experimentally. The concept has potential application to reducing propeller-induced interior noise in advanced turboprop aircraft as well as in existing aircraft configurations. This report summarizes the work carried out at Duke University during the third semester of a contract supported by the Structural Acoustics Branch at NASA Langley Research Center.

  16. Geo-information processing service composition for concurrent tasks: A QoS-aware game theory approach

    NASA Astrophysics Data System (ADS)

    Li, Haifeng; Zhu, Qing; Yang, Xiaoxia; Xu, Linrong

    2012-10-01

    Remote sensing applications are typically characterized by concurrent tasks, such as those found in disaster rapid response. The existing approach to composing a geographical information processing service chain searches for an optimal solution for each task in what can be deemed a "selfish" way, which leads to conflicts among concurrent tasks and decreases the performance of all service chains. In this study, a non-cooperative game-based mathematical model is proposed to analyse the competitive relationships between tasks. A best response function is used to ensure that each task maintains utility optimisation by considering the composition strategies of other tasks and quantifying conflicts between tasks. On this basis, an iterative algorithm that converges to a Nash equilibrium is presented, the aim being to provide good convergence and to maximise the utility of all tasks under concurrent-task conditions. Theoretical analyses and experiments showed that the proposed method has better practical utility across all tasks than existing service composition methods.
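    The best-response iteration described above can be sketched on a toy congestion game, where each task picks one service and a service's cost grows with its load. This is an illustrative stand-in for the paper's QoS-aware composition game; the cost model and names are invented.

```python
def best_response_assignment(n_tasks, base_costs, iters=100):
    """Iterated best response in a toy congestion game: each task picks
    one service, and a service's cost is its base cost plus the number
    of tasks using it. For such games the iteration reaches a pure Nash
    equilibrium, where no task can lower its cost by deviating."""
    n_serv = len(base_costs)
    choice = [0] * n_tasks
    for _ in range(iters):
        changed = False
        for t in range(n_tasks):
            load = [0] * n_serv
            for other in range(n_tasks):
                if other != t:
                    load[choice[other]] += 1
            # cost task t would pay on service s, counting itself
            cost = [base_costs[s] + load[s] + 1 for s in range(n_serv)]
            best = min(range(n_serv), key=lambda s: cost[s])
            if best != choice[t]:
                choice[t] = best
                changed = True
        if not changed:      # fixed point: Nash equilibrium reached
            break
    return choice
```

With four tasks and two near-identical services, the equilibrium splits the load evenly instead of letting every "selfish" task pile onto the cheapest service.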

  17. Information theory in systems biology. Part II: protein-protein interaction and signaling networks.

    PubMed

    Mousavian, Zaynab; Díaz, José; Masoudi-Nejad, Ali

    2016-03-01

    Since its development by Claude Shannon in 1948 to address problems of data storage and data communication over (noisy) channels, information theory has been successfully applied in many other research areas, such as bioinformatics and systems biology. In this manuscript, we review some of the existing literature in systems biology that uses information-theoretic measures in its calculations. Having reviewed most of the existing information-theoretic methods for gene regulatory and metabolic networks in the first part of this review, in this second part we survey the application of information theory to other types of biological networks, including protein-protein interaction and signaling networks. Copyright © 2015 Elsevier Ltd. All rights reserved.
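    The workhorse quantity in most of the surveyed methods is mutual information; a minimal plug-in estimator for discrete samples (a generic sketch, not any specific surveyed tool):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) = sum_{x,y} p(x,y)*log2(p(x,y)/(p(x)*p(y)))
    from paired discrete samples, in bits."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # c*n / (px[x]*py[y]) equals p(x,y) / (p(x)*p(y))
        mi += (c / n) * math.log2(c * n / (px[x] * py[y]))
    return mi
```

Perfectly correlated binary variables give 1 bit; independent ones give 0.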

  18. Comparative study between the results of effective index based matrix method and characterization of fabricated SU-8 waveguide

    NASA Astrophysics Data System (ADS)

    Samanta, Swagata; Dey, Pradip Kumar; Banerji, Pallab; Ganguly, Pranabendu

    2017-01-01

    A study of the validity of the effective-index based matrix method (EIMM) for fabricated SU-8 channel waveguides is reported. The design method is extremely fast compared with other existing numerical techniques such as BPM and FDTD. In EIMM, the effective index method is applied in the depth direction of the waveguide, and the resulting lateral index profile is analyzed by a transfer matrix method. With EIMM one can compute the guided-mode propagation constants and mode profiles of each mode for any waveguide dimensions; the technique may also be used to design single-mode waveguides. SU-8 waveguide fabrication was carried out by a continuous-wave direct laser writing process at 375 nm wavelength. The measured propagation losses of these wire waveguides with air and PDMS as superstrates were 0.51 dB/mm and 0.3 dB/mm, respectively. The number of guided modes, obtained both theoretically and experimentally, for the air-cladded waveguide was much larger than that of the PDMS-cladded waveguide. We were able to excite the isolated fundamental mode of the latter by precise fiber positioning, and its mode image was recorded. The mode profiles, mode indices, and refractive index profiles extracted from this image of the fundamental mode matched the theoretical predictions remarkably well.

  19. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  20. Images of Inherited War: Three American Presidents in Vietnam

    DTIC Science & Technology

    2011-06-01

    Dependent Realism to demonstrate how theoretical advances in modern physical science correlate to cognitive theories in International Relations. We...Quantum Physics and Model-Dependent Realism In his book, The Grand Design, theoretical physicist and cosmologist Stephen Hawking draws on theoretical...exhibited wave-like properties and that existing scientific laws could not account for their behavior. Newtonian physics was “built on a framework

  1. Contrast Gradient-Based Blood Velocimetry With Computed Tomography: Theory, Simulations, and Proof of Principle in a Dynamic Flow Phantom.

    PubMed

    Korporaal, Johannes G; Benz, Matthias R; Schindera, Sebastian T; Flohr, Thomas G; Schmidt, Bernhard

    2016-01-01

    The aim of this study was to introduce a new theoretical framework describing the relationship between the blood velocity, computed tomography (CT) acquisition velocity, and iodine contrast enhancement in CT images, and give a proof of principle of contrast gradient-based blood velocimetry with CT. The time-averaged blood velocity (v(blood)) inside an artery along the axis of rotation (z axis) is described as the mathematical division of a temporal (Hounsfield unit/second) and spatial (Hounsfield unit/centimeter) iodine contrast gradient. From this new theoretical framework, multiple strategies for calculating the time-averaged blood velocity from existing clinical CT scan protocols are derived, and contrast gradient-based blood velocimetry was introduced as a new method that can calculate v(blood) directly from contrast agent gradients and the changes therein. Exemplarily, the behavior of this new method was simulated for image acquisition with an adaptive 4-dimensional spiral mode consisting of repeated spiral acquisitions with alternating scan direction. In a dynamic flow phantom with flow velocities between 5.1 and 21.2 cm/s, the same acquisition mode was used to validate the simulations and give a proof of principle of contrast gradient-based blood velocimetry in a straight cylinder of 2.5 cm diameter, representing the aorta. In general, scanning with the direction of blood flow results in decreased and scanning against the flow in increased temporal contrast agent gradients. Velocity quantification becomes better for low blood and high acquisition speeds because the deviation of the measured contrast agent gradient from the temporal gradient will increase. In the dynamic flow phantom, a modulation of the enhancement curve, and thus alternation of the contrast agent gradients, can be observed for the adaptive 4-dimensional spiral mode and is in agreement with the simulations. The measured flow velocities in the downslopes of the enhancement curves were in good agreement with the expected values, although the accuracy and precision worsened with increasing flow velocities. The new theoretical framework increases the understanding of the relationship between the blood velocity, CT acquisition velocity, and iodine contrast enhancement in CT images, and it interconnects existing blood velocimetry methods with research on transluminary attenuation gradients. With these new insights, novel strategies for CT blood velocimetry, such as the contrast gradient-based method presented in this article, may be developed.
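    The core arithmetic of the framework, velocity as the ratio of a temporal to a spatial enhancement gradient, can be sketched for the alternating-scan-direction mode. The sign convention below (gradient measured with the flow equals g_t − v_acq·g_z, against the flow equals g_t + v_acq·g_z) is an assumption chosen for illustration, not the article's exact equations.

```python
def velocity_from_bidirectional_scans(g_with, g_against, v_acq):
    """Hedged sketch: assume the gradient measured while scanning with
    the flow is g_with = g_t - v_acq * g_z and against the flow is
    g_against = g_t + v_acq * g_z (signs chosen for illustration).
    Averaging and differencing separate the temporal gradient g_t (HU/s)
    from the spatial gradient g_z (HU/cm); v(blood) is their ratio (cm/s)."""
    g_t = 0.5 * (g_with + g_against)
    g_z = 0.5 * (g_against - g_with) / v_acq
    return g_t / g_z
```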

  2. Detection of allosteric signal transmission by information-theoretic analysis of protein dynamics

    PubMed Central

    Pandini, Alessandro; Fornili, Arianna; Fraternali, Franca; Kleinjung, Jens

    2012-01-01

    Allostery offers a highly specific way to modulate protein function. Therefore, understanding this mechanism is of increasing interest for protein science and drug discovery. However, allosteric signal transmission is difficult to detect experimentally and to model because it is often mediated by local structural changes propagating along multiple pathways. To address this, we developed a method to identify communication pathways by an information-theoretical analysis of molecular dynamics simulations. Signal propagation was described as information exchange through a network of correlated local motions, modeled as transitions between canonical states of protein fragments. The method was used to describe allostery in two-component regulatory systems. In particular, the transmission from the allosteric site to the signaling surface of the receiver domain NtrC was shown to be mediated by a layer of hub residues. The location of hubs preferentially connected to the allosteric site was found in close agreement with key residues experimentally identified as involved in the signal transmission. The comparison with the networks of the homologues CheY and FixJ highlighted similarities in their dynamics. In particular, we showed that a preorganized network of fragment connections between the allosteric and functional sites exists already in the inactive state of all three proteins.—Pandini, A., Fornili, A., Fraternali, F., Kleinjung, J. Detection of allosteric signal transmission by information-theoretic analysis of protein dynamics. PMID:22071506

  3. Dielectronic satellite spectra of hydrogen-like titanium (Ti XXII)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bitter, M.; von Goeler, S.; Cohen, S.

    High resolution spectra of the Ly α1 and Ly α2 lines of hydrogenlike titanium, Ti XXII, and the associated dielectronic satellites, which are due to transitions 1snl-2pnl with n ≥ 2, have been observed from tokamak discharges with auxiliary ion cyclotron heating (ICRH) with central electron temperatures of 2 keV and central electron densities of 8 × 10¹³ cm⁻³ on the Princeton Large Torus (PLT). The data have been used for a detailed comparison with theoretical predictions based on the Z-expansion method and Hartree-Fock calculations. The results obtained with the Z-expansion method are in excellent agreement with the observed spectral data except for minor discrepancies between the theoretical and experimental wavelengths of 0.0003 Å for the n = 2 satellites and of 0.0001 Å for the separation of the Ly α1 and Ly α2 lines. Very good agreement with the experimental data is also obtained for the results from the Hartree-Fock calculations, though somewhat larger discrepancies (≈ 0.0009 Å) exist between experimental and theoretical wavelengths, which are systematically too small. The observed spectra are used for diagnosis of the central ion and electron temperatures of the PLT discharges and for a measurement of the dielectronic recombination rate coefficient of Ti XXII.

  4. Determination of ferroelectric contributions to electromechanical response by frequency dependent piezoresponse force microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V.

    Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex when it comes to accurately determining the pure contribution of ferroelectricity to the PFM signal. We suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep combined with hysteresis loops in PFM. This combined experimental and theoretical study verifies that the method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.

  5. Left ventricular fluid mechanics: the long way from theoretical models to clinical applications.

    PubMed

    Pedrizzetti, Gianni; Domenichini, Federico

    2015-01-01

    The flow inside the left ventricle is characterized by the formation of vortices that smoothly accompany blood from the mitral inlet to the aortic outlet. Computational fluid dynamics has helped shed light on the fundamental processes involved in vortex motion. More recently, patient-specific numerical simulations have become an increasingly feasible tool that can be integrated with developing imaging technologies. The existing computational methods are reviewed from the perspective of their potential role as a novel aid for advanced clinical analysis. The current results obtained by simulation methods, either alone or in combination with medical imaging, are summarized. Open problems are highlighted and prospective clinical applications are discussed.

  6. Clarifying the landscape approach: A Letter to the Editor on "Integrated landscape approaches to managing social and environmental issues in the tropics".

    PubMed

    Erbaugh, James; Agrawal, Arun

    2017-11-01

    Objectives, assumptions, and methods for landscape restoration and the landscape approach. World leaders have pledged 350 Mha for restoration using a landscape approach. The landscape approach is thus poised to become one of the most influential methods for multi-functional land management. Reed et al (2016) meaningfully advance scholarship on the landscape approach, but they incorrectly define the approach as it exists within their text. This Letter to the Editor clarifies the landscape approach as an ethic for land management, demonstrates how it relates to landscape restoration, and motivates continued theoretical development and empirical assessment of the landscape approach. © 2017 John Wiley & Sons Ltd.

  7. Determination of ferroelectric contributions to electromechanical response by frequency dependent piezoresponse force microscopy

    DOE PAGES

    Seol, Daehee; Park, Seongjae; Varenyk, Olexandr V.; ...

    2016-07-28

    Hysteresis loop analysis via piezoresponse force microscopy (PFM) is typically performed to probe the existence of ferroelectricity at the nanoscale. However, such an approach is rather complex when it comes to accurately determining the pure contribution of ferroelectricity to the PFM signal. We suggest a facile method to discriminate the ferroelectric effect from the electromechanical (EM) response through the use of a frequency-dependent ac amplitude sweep combined with hysteresis loops in PFM. This combined experimental and theoretical study verifies that the method can be used as a new tool to differentiate the ferroelectric effect from the other factors that contribute to the EM response.

  8. Estimating standard errors in feature network models.

    PubMed

    Frank, Laurence E; Heiser, Willem J

    2007-05-01

    Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.

  9. Multistability and instability analysis of recurrent neural networks with time-varying delays.

    PubMed

    Zhang, Fanghai; Zeng, Zhigang

    2018-01-01

    This paper provides new theoretical results on the multistability and instability analysis of recurrent neural networks with time-varying delays. It is shown that such n-neuronal recurrent neural networks have exactly [Formula: see text] equilibria, [Formula: see text] of which are locally exponentially stable and the others unstable, where k0 is a nonnegative integer such that k0 ≤ n. By using the combination method of two different divisions, recurrent neural networks can be shown to possess more dynamic properties. This method improves and extends existing results in the literature. Finally, a numerical example is provided to show the superiority and effectiveness of the presented results. Copyright © 2017 Elsevier Ltd. All rights reserved.
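    Multistability of recurrent networks is easy to see in the simplest scalar case: x' = −x + w·tanh(x) has one equilibrium for w ≤ 1 and three for w > 1. The sketch below counts equilibria by sign changes on a grid; it is a toy illustration only, not the paper's delayed n-neuron construction.

```python
import math

def equilibria_1d(w, grid=20001, lo=-5.0, hi=5.0):
    """Count equilibria of the scalar delay-free recurrent network
    x' = -x + w*tanh(x) by sign changes of f(x) = -x + w*tanh(x)
    on a uniform grid (one equilibrium for w <= 1, three for w > 1)."""
    f = lambda x: -x + w * math.tanh(x)
    xs = [lo + (hi - lo) * i / (grid - 1) for i in range(grid)]
    count = 0
    prev = f(xs[0])
    for x in xs[1:]:
        cur = f(x)
        # count a root when f hits zero on the grid or changes sign
        if prev == 0.0 or prev * cur < 0.0:
            count += 1
        prev = cur
    return count
```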

  10. Which stocks are profitable? A network method to investigate the effects of network structure on stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Luo, Peng; Sun, Bianxia; Wang, Huaiqing

    2015-10-01

    According to asset pricing theory, a stock's expected returns are determined by its exposure to systematic risk. In this paper, we propose a new method for analyzing the interaction effects among industries and stocks on stock returns. We construct a complex network based on correlations of abnormal stock returns and use centrality and modularity, two popular measures in social science, to determine the effect of interconnections on industry and stock returns. Supported by previous studies, our findings indicate that a relationship exists between inter-industry closeness and industry returns and between stock centrality and stock returns. The theoretical and practical contributions of these findings are discussed.
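    The network construction step, linking stocks whose abnormal returns correlate strongly and scoring each stock by centrality, can be sketched as follows; the threshold and data are illustrative, not the paper's specification.

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length return series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def degree_centrality(returns, threshold=0.5):
    """Link two stocks when |correlation of their returns| exceeds the
    threshold; degree centrality = links / (n - 1). A toy version of
    the correlation-network construction described above."""
    names = list(returns)
    deg = {name: 0 for name in names}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(pearson(returns[a], returns[b])) > threshold:
                deg[a] += 1
                deg[b] += 1
    n = len(names)
    return {name: d / (n - 1) for name, d in deg.items()}
```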

  11. Numerical analysis for trajectory controllability of a coupled multi-order fractional delay differential system via the shifted Jacobi method

    NASA Astrophysics Data System (ADS)

    Priya, B. Ganesh; Muthukumar, P.

    2018-02-01

    This paper deals with the trajectory controllability for a class of multi-order fractional linear systems subject to a constant delay in state vector. The solution for the coupled fractional delay differential equation is established by the Mittag-Leffler function. The necessary and sufficient condition for the trajectory controllability is formulated and proved by the generalized Gronwall's inequality. The approximate trajectory for the proposed system is obtained through the shifted Jacobi operational matrix method. The numerical simulation of the approximate solution shows the theoretical results. Finally, some remarks and comments on the existing results of constrained controllability for the fractional dynamical system are also presented.
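    The Mittag-Leffler function through which the solution is expressed has a direct series definition; a minimal truncated-series sketch (the truncation length is an illustrative choice):

```python
import math

def mittag_leffler(alpha, beta, z, terms=50):
    """Truncated series E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta).
    Keep alpha*terms + beta below ~170 so math.gamma does not overflow."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(terms))
```

Familiar special cases serve as checks: E_{1,1}(z) = exp(z) and E_{2,1}(z²) = cosh(z).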

  12. Medication competency of nurses according to theoretical and drug calculation online exams: A descriptive correlational study.

    PubMed

    Sneck, Sami; Saarnio, Reetta; Isola, Arja; Boigu, Risto

    2016-01-01

    Medication administration is an important task of registered nurses. According to previous studies, nurses lack theoretical knowledge and drug calculation skills, and knowledge-based mistakes do occur in clinical practice. Finnish health care organizations started to develop systematic verification processes for medication competence at the end of the last decade. No studies have yet been made of nurses' theoretical knowledge and drug calculation skills based on these online exams. The aim of this study was to describe the medication competence of Finnish nurses according to theoretical and drug calculation exams. A descriptive correlational design was adopted. Participants and settings: All nurses who participated in the online exam in three Finnish hospitals between 1.1.2009 and 31.05.2014 were selected for the study (n=2479). Quantitative methods such as Pearson's chi-squared tests, analysis of variance (ANOVA) with post hoc Tukey tests, and Pearson's correlation coefficient were used to test for relationships between dependent and independent variables. The majority of nurses mastered the theoretical knowledge needed in medication administration, but 5% of the nurses struggled to pass the drug calculation exam. Theoretical knowledge and drug calculation skills were better in acute care units than in other units, and younger nurses achieved better results in both exams than their older colleagues. The differences found in this study were statistically significant but not large. Nevertheless, even the smallest deficiency in theoretical knowledge and drug calculation skills should be addressed. It is important to identify the nurses who struggle in the exams and to plan targeted educational interventions to support them. The next step is to study whether verification of medication competence has an effect on patient safety. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. The structural, connectomic and network covariance of the human brain.

    PubMed

    Irimia, Andrei; Van Horn, John D

    2013-02-01

    Though it is widely appreciated that complex structural, functional and morphological relationships exist between distinct areas of the human cerebral cortex, the extent to which such relationships coincide remains insufficiently understood. Here we determine the extent to which correlations between brain regions are modulated by structural, connectomic or network-theoretic properties, using a structural neuroimaging data set of magnetic resonance imaging (MRI) and diffusion tensor imaging (DTI) volumes acquired from N=110 healthy human adults. To identify the linear relationships between all available pairs of regions, we use canonical correlation analysis to test whether a statistically significant correlation exists between each pair of cortical parcels as quantified via structural, connectomic or network-theoretic measures. In addition, we investigate (1) how each group of canonical variables (whether structural, connectomic or network-theoretic) contributes to the overall correlation and (2) whether each individual variable makes a significant contribution to the test of the omnibus null hypothesis according to which no correlation between regions exists across subjects. We find that, although region-to-region correlations are extensively modulated by structural and connectomic measures, there are appreciable differences in how these two groups of measures drive inter-regional correlation patterns. Additionally, our results indicate that the network-theoretic properties of the cortex are strong modulators of region-to-region covariance. Our findings are useful for understanding the structural and connectomic relationships between various parts of the brain, and can inform theoretical and computational models of cortical information processing. Published by Elsevier Inc.

  14. Towards Noise Tomography and Passive Monitoring Using Distributed Acoustic Sensing

    NASA Astrophysics Data System (ADS)

    Paitz, P.; Fichtner, A.

    2017-12-01

    Distributed Acoustic Sensing (DAS) has the potential to revolutionize the field of seismic data acquisition. Thanks to their cost-effectiveness, fiber-optic cables may be capable of complementing conventional geophones and seismometers by filling a niche of applications that utilize large amounts of data. DAS may therefore serve as an additional tool to investigate the internal structure of the Earth and its changes over time, on scales ranging from hydrocarbon or geothermal reservoirs to the entire globe. Additional potential lies in the large fiber networks already deployed for telecommunication purposes, which could serve as distributed seismic antennas. We investigate theoretically how ambient noise tomography may be used with DAS data. For this we extend the theory of seismic interferometry to the measurement of strain. With numerical 2D finite-difference examples we investigate the impact of source and receiver effects. We study the effect of heterogeneous source distributions and of cable orientation by assessing similarities and differences with respect to the Green's function. We also compare the interferometric waveforms obtained from strain interferometry to displacement interferometric wave fields obtained with existing methods. Intermediate results show that the obtained interferometric waveforms can be connected to the Green's functions and provide consistent information about the propagation medium. These simulations will be extended to reservoir-scale subsurface structures, and future work will include the application of the theory to real-data examples. The presented research represents the early stage of a combination of theoretical investigations, numerical simulations and real-world data applications. We will evaluate the current potential and shortcomings of DAS in reservoir monitoring and seismology, with a long-term vision of global seismic tomography utilizing DAS data from existing fiber-optic cable networks.
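    The core idea of the ambient noise interferometry mentioned above, that cross-correlating two noise records retrieves the inter-station traveltime, can be sketched numerically. All parameters here (trace length, shift) are illustrative and not from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, shift = 2048, 5  # shift = traveltime between the two sensors, in samples
    noise = rng.standard_normal(n + shift)

    # Two "stations" recording the same ambient noise, one delayed by `shift` samples
    trace_a = noise[:n]
    trace_b = noise[shift:shift + n]

    # Cross-correlating the two records; the peak lag recovers the traveltime,
    # which is the essence of retrieving the Green's function from ambient noise
    xcorr = np.correlate(trace_a, trace_b, mode="full")
    lags = np.arange(-(n - 1), n)
    peak_lag = lags[np.argmax(xcorr)]  # expected to equal `shift`
    ```

    Extending this from displacement to strain records, as the abstract describes, changes the measured quantity but not the cross-correlation principle.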

  15. Transition properties of the Be-like Kα X-ray from Mg IX

    NASA Astrophysics Data System (ADS)

    Hu, Feng; Zhang, Shufang; Sun, Yan; Mei, Maofei; Sang, Cuicui; Yang, Jiamin

    2017-12-01

    Energy levels among the lowest 40 fine-structure levels in Be-like Mg IX are calculated using the GRASP2K code. The wavelengths, oscillator strengths, radiative rates and lifetimes for all possible Kα transitions have been calculated using the multiconfiguration Dirac-Fock method. The accuracy of the results is assessed through extensive comparisons with existing laboratory measurements and theoretical results. The present data can be used reliably for many purposes, such as line identification in observed spectra and the modelling and diagnostics of magnesium plasmas.

  16. Asymptotic behaviors of a cell-to-cell HIV-1 infection model perturbed by white noise

    NASA Astrophysics Data System (ADS)

    Liu, Qun

    2017-02-01

    In this paper, we analyze a mathematical model of cell-to-cell HIV-1 infection of CD4+ T cells subject to stochastic perturbations. First, we show that there exists a unique global positive solution of the system for any positive initial value. Then, using Lyapunov analysis methods, we study the asymptotic properties of this solution. Moreover, we discuss whether the system has a stationary distribution and whether it possesses the ergodic property. Numerical simulations are presented to illustrate the theoretical results.
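    Numerical simulations of such white-noise-perturbed models are typically run with the Euler-Maruyama scheme, sketched below. The one-dimensional logistic drift with multiplicative noise is a hypothetical stand-in for the paper's cell-to-cell infection dynamics, chosen only to illustrate the scheme:

    ```python
    import random

    def euler_maruyama(x0, r, sigma, dt, steps, seed=1):
        """Simulate dX = r*X*(1-X) dt + sigma*X dW with the Euler-Maruyama scheme.
        Illustrative one-dimensional stand-in for the stochastic infection model."""
        rng = random.Random(seed)
        x, path = x0, [x0]
        for _ in range(steps):
            dW = rng.gauss(0.0, dt ** 0.5)
            x = x + r * x * (1.0 - x) * dt + sigma * x * dW
            x = max(x, 0.0)  # multiplicative noise keeps X >= 0 in theory;
                             # clamp to guard against discretisation error
            path.append(x)
        return path

    # With sigma = 0 the scheme reduces to plain Euler and the trajectory
    # converges to the deterministic equilibrium X = 1
    det = euler_maruyama(0.1, r=1.0, sigma=0.0, dt=0.01, steps=2000)
    ```

    With sigma > 0 one would rerun many sample paths and inspect their empirical distribution, which is how a stationary distribution is usually illustrated numerically.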

  17. Report of the Terrestrial Bodies Science Working Group. Volume 9: Complementary research and development

    NASA Technical Reports Server (NTRS)

    Fanale, F. P.; Kaula, W. M.; Mccord, T. B.; Trombka, J. L.

    1977-01-01

    Topics discussed include the need for: the conception and development of a wide spectrum of experiments, instruments, and vehicles in order to derive the proper return from an exploration program; the effective use of alternative methods of data acquisition involving ground-based, airborne and near-Earth orbital techniques to supplement spacecraft missions; and continued reduction and analysis of existing data, including laboratory and theoretical studies, in order to benefit fully from experiments and to build on past programs toward a logical and efficient exploration of the solar system.

  18. Spectral representation of the three-body Coulomb problem. II. Autoionizing doubly excited states of unnatural parity in helium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eiglsperger, Johannes; Piraux, Bernard; Madronero, Javier

    2010-04-15

    A spectral approach of configuration interaction type is used to evaluate energies and widths for a wide range of singlet and triplet P^e resonance states of helium up to the eighth single ionization threshold. While the present data are in excellent agreement with existing theoretical results (below the N=3-5 ionization thresholds) obtained within an explicitly correlated approach, there are substantial differences with the energies, the widths, and the number of resonances obtained with the stabilization method.

  19. The Hubbard Model and Piezoresistivity

    NASA Astrophysics Data System (ADS)

    Celebonovic, V.; Nikolic, M. G.

    2018-02-01

    Piezoresistivity was discovered in the nineteenth century, and numerous applications of this phenomenon exist today. The aim of the present paper is to explore the possibility of applying the Hubbard model to theoretical work on piezoresistivity. The results are encouraging, in the sense that numerical values of the strain gauge factor obtained using the Hubbard model agree with results obtained by other methods. The calculation is simplified by the fact that it uses results for the electrical conductivity of 1D systems previously obtained within the Hubbard model by one of the present authors.

  20. Bifurcation Analysis and Chaos Control in a Modified Finance System with Delayed Feedback

    NASA Astrophysics Data System (ADS)

    Yang, Jihua; Zhang, Erli; Liu, Mei

    2016-06-01

    We investigate the effect of delayed feedback on a finance system that describes the time variation of the interest rate, with a view to establishing fiscal policy. By local stability analysis, we theoretically prove the existence of Hopf and Hopf-zero bifurcations. Using the normal form method and center manifold theory, we determine the stability and direction of a bifurcating periodic solution. Finally, we give some numerical solutions, which indicate that when the delay passes through certain critical values, chaotic oscillation is converted into a stable equilibrium or a periodic orbit.

  1. A sampling framework for incorporating quantitative mass spectrometry data in protein interaction analysis.

    PubMed

    Tucker, George; Loh, Po-Ru; Berger, Bonnie

    2013-10-04

    Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to high false-positive and false-negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.
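    A minimal sketch of the sampling idea, mapping spectral counts to detection probabilities and aggregating over an ensemble of sampled binary outcomes, might look like the following. The saturating count-to-probability form and all numbers are assumptions for illustration, not the paper's noise model:

    ```python
    import random

    def detection_prob(count, k=2.0):
        """Map a spectral count to a detection probability (hypothetical
        saturating form; the paper's actual noise model is more sophisticated)."""
        return count / (count + k)

    def sample_outcomes(counts, n_samples=1000, seed=7):
        """Draw an ensemble of alternative binary outcomes and aggregate by
        averaging, mimicking the sampling framework described above."""
        rng = random.Random(seed)
        freq = [0.0] * len(counts)
        for _ in range(n_samples):
            for i, c in enumerate(counts):
                if rng.random() < detection_prob(c):
                    freq[i] += 1.0 / n_samples
        return freq

    # Interactions with higher spectral counts appear in more sampled outcomes
    freq = sample_outcomes([0, 1, 10, 100])
    ```

    In the real framework each sampled outcome would be fed to an existing binary PPI-inference method and the inferred networks aggregated, rather than averaging the outcomes directly.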

  2. Advanced ab initio relativistic calculations of transition probabilities for some O I and O III emission lines

    NASA Astrophysics Data System (ADS)

    Nguyen, T. V. B.; Chantler, C. T.; Lowe, J. A.; Grant, I. P.

    2014-06-01

    This work presents new ab initio relativistic calculations, using the multiconfiguration Dirac-Hartree-Fock method, of some O I and O III transition lines detected in B-type and Wolf-Rayet stars. Our results are the first to be presented in both the length and velocity gauges, with excellent gauge convergence. Compared to previous experimental and theoretical uncertainties of up to 50 per cent, our accuracies appear to be in the range 0.33-5.60 per cent, with gauge convergence up to 0.6 per cent. Similarly impressive convergence of the calculated energies is also shown. Two sets of theoretical computations are compared with earlier tabulated measurements. Excellent agreement is obtained with one set of transitions, but an interesting and consistent discrepancy exists between the current work and the prior literature, deserving of future experimental study.

  3. Noninformative prior in the quantum statistical model of pure states

    NASA Astrophysics Data System (ADS)

    Tanaka, Fuyuhiko

    2012-06-01

    In the present paper, we consider a suitable definition of a noninformative prior on the quantum statistical model of pure states. While the full pure-states model is invariant under unitary rotation and admits the Haar measure, restricted models, which we often see in quantum channel estimation and quantum process tomography, have less symmetry and no compelling rationale for any particular choice. We adopt a game-theoretic approach that is applicable to classical Bayesian statistics and yields a noninformative prior for a general class of probability distributions. We define the quantum detection game and show that noninformative priors exist for a general class of pure-states models. Theoretically, this provides one way to represent ignorance about a given quantum system with partial information. Practically, our method proposes a default distribution on the model so that the Bayesian technique can be used in quantum-state tomography with a small sample.

  4. Thermomechanical effect of pulse-periodic laser radiation on cartilaginous and eye tissues

    NASA Astrophysics Data System (ADS)

    Baum, O. I.; Zheltov, G. I.; Omelchenko, A. I.; Romanov, G. S.; Romanov, O. G.; Sobol, E. N.

    2013-08-01

    This paper is devoted to theoretical and experimental studies into the thermomechanical action of laser radiation on biological tissues. The thermal stresses and strains developing in biological tissues under the effect of pulse-periodic laser radiation are theoretically modeled for a wide range of laser pulse durations. The models constructed allow one to calculate the magnitude of pressures developing in cartilaginous and eye tissues exposed to laser radiation and predict the evolution of cavitation phenomena occurring therein. The calculation results agree well with experimental data on the growth of pressure and deformations, as well as the dynamics of formation of gas bubbles, in the laser-affected tissues. Experiments on the effect of laser radiation on the trabecular region of the eye in minipigs demonstrated that there existed optimal laser irradiation regimens causing a substantial increase in the hydraulic permeability of the radiation-exposed tissue, which can be used to develop a novel glaucoma treatment method.

  5. Response simulation and theoretical calibration of a dual-induction resistivity LWD tool

    NASA Astrophysics Data System (ADS)

    Xu, Wei; Ke, Shi-Zhen; Li, An-Zong; Chen, Peng; Zhu, Jun; Zhang, Wei

    2014-03-01

    In this paper, responses of a new dual-induction resistivity logging-while-drilling (LWD) tool in 3D inhomogeneous formation models are simulated by the vector finite element method (VFEM), the influences of the borehole, invaded zone, surrounding strata, and tool eccentricity are analyzed, and calibration loop parameters and calibration coefficients of the LWD tool are discussed. The results show that the tool has a greater depth of investigation than that of the existing electromagnetic propagation LWD tools and is more sensitive to azimuthal conductivity. Both deep and medium induction responses have linear relationships with the formation conductivity, considering optimal calibration loop parameters and calibration coefficients. Due to the different depths of investigation and resolution, deep induction and medium induction are affected differently by the formation model parameters, thereby having different correction factors. The simulation results can provide theoretical references for the research and interpretation of the dual-induction resistivity LWD tools.

  6. Meteor showers associated with 2003EH1

    NASA Astrophysics Data System (ADS)

    Babadzhanov, P. B.; Williams, I. P.; Kokhirova, G. I.

    2008-06-01

    Using the Everhart RADAU19 numerical integration method, the orbital evolution of the near-Earth asteroid 2003EH1 is investigated. This asteroid belongs to the Amor group and is moving on a comet-like orbit. The integrations are performed over one cycle of variation of the perihelion argument ω. Over such a cycle, the orbit intersects that of the Earth at eight different values of ω. The orbital parameters are different at each of these intersections, so a meteoroid stream surrounding such an orbit can produce eight different meteor showers, one at each crossing. The geocentric radiants and velocities of the eight theoretical meteor showers associated with these crossing points are determined. Using published data, observed meteor showers are identified with each of the theoretically predicted showers. The character of the orbit and the existence of observed meteor showers associated with 2003EH1 confirm the supposition that this object is an extinct comet.

  7. Correlation estimation and performance optimization for distributed image compression

    NASA Astrophysics Data System (ADS)

    He, Zhihai; Cao, Lei; Cheng, Hui

    2006-01-01

    Correlation estimation plays a critical role in resource allocation and rate control for distributed data compression. A Wyner-Ziv encoder for distributed image compression is often considered as a lossy source encoder followed by a lossless Slepian-Wolf encoder. The source encoder consists of spatial transform, quantization, and bit plane extraction. In this work, we find that Gray code, which has been extensively used in digital modulation, is able to significantly improve the correlation between the source data and its side information. Theoretically, we analyze the behavior of Gray code within the context of distributed image compression. Using this theoretical model, we are able to efficiently allocate the bit budget and determine the code rate of the Slepian-Wolf encoder. Our experimental results demonstrate that the Gray code, coupled with accurate correlation estimation and rate control, significantly improves the picture quality, by up to 4 dB, over the existing methods for distributed image compression.
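    The Gray-code property exploited above, that adjacent integers differ in exactly one bit, so the bit planes of nearby quantization indices stay strongly correlated with the side information, is easy to verify directly:

    ```python
    def to_gray(n):
        """Convert a binary integer to its Gray-code representation."""
        return n ^ (n >> 1)

    def from_gray(g):
        """Invert the Gray code by XOR-ing together all right shifts of g."""
        n = g
        mask = g >> 1
        while mask:
            n ^= mask
            mask >>= 1
        return n

    # Adjacent integers (e.g. neighbouring quantizer indices) differ in exactly
    # one bit of their Gray codes, unlike plain binary (compare 7 = 0111 and
    # 8 = 1000, which differ in all four bits)
    codes = [to_gray(n) for n in range(16)]
    ```

    This single-bit-change property is why Gray-coded bit planes of a source and its slightly different side information agree far more often than plain binary bit planes, improving the correlation the Slepian-Wolf encoder can exploit.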

  8. Dependence of synergy current driven by lower hybrid wave and electron cyclotron wave on the frequency and parallel refractive index of electron cyclotron wave for Tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, J.; Chen, S. Y., E-mail: sychen531@163.com; Tang, C. J.

    2014-01-15

    The physical mechanism of the synergy current driven by the lower hybrid wave (LHW) and electron cyclotron wave (ECW) in tokamaks is investigated using theoretical analysis and simulation methods in the present paper. Research shows that the synergy relationship between the two waves in velocity space strongly depends on the frequency ω and parallel refractive index N_∥ of the ECW. For a given spectrum of the LHW, the parameter range of the ECW in which the synergy current exists can be predicted by theoretical analysis, and these results are consistent with the simulation results. It is shown that the synergy effect is mainly caused by the electrons accelerated by both the ECW and the LHW, and that the acceleration of these electrons requires an overlap of the resonance regions of the two waves in velocity space.

  9. Precision measurement of transition matrix elements via light shift cancellation.

    PubMed

    Herold, C D; Vaidya, V D; Li, X; Rolston, S L; Porto, J V; Safronova, M S

    2012-12-14

    We present a method for accurate determination of atomic transition matrix elements at the 10^-3 level. Measurements of the ac Stark (light) shift around "magic-zero" wavelengths, where the light shift vanishes, provide precise constraints on the matrix elements. We make the first measurement of the 5s-6p matrix elements in rubidium by measuring the light shift around the 421 and 423 nm zeros through diffraction of a condensate off a sequence of standing-wave pulses. In conjunction with existing theoretical and experimental data, we find 0.3235(9) ea_0 and 0.5230(8) ea_0 for the 5s-6p_1/2 and 5s-6p_3/2 elements, respectively, an order of magnitude more accurate than the best theoretical values. This technique can provide needed, accurate matrix elements for many atoms, including those used in atomic clocks, tests of fundamental symmetries, and quantum information.

  10. The estimation of convective rainfall by area integrals. I - The theoretical and empirical basis. II - The height-area rainfall threshold (HART) method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Daniel; Short, David A.; Atlas, David

    1990-01-01

    A theory is developed which establishes the basis for the use of rainfall areas within preset thresholds as a measure of either the instantaneous areawide rain rate of convective storms or the total volume of rain from an individual storm over its lifetime. The method is based upon the existence of a well-behaved pdf of rain rate, either from the many storms at one instant or from a single storm during its life. The generality of the instantaneous areawide method was examined by applying it to quantitative radar data sets from the GARP Atlantic Tropical Experiment, South Africa, Texas, and Darwin (Australia). It is shown that the pdf's developed for each of these areas are consistent with the theory.
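    The area-threshold idea can be illustrated with synthetic fields: when rain rates follow a fixed, well-behaved pdf (an exponential family is used below purely as a stand-in, not the distribution assumed in the paper), the fractional area above a threshold rises and falls together with the areal mean rain rate:

    ```python
    import random

    def rain_field(scale, size, seed):
        """Synthetic convective rain-rate field: exponentially distributed
        rates, a stand-in for the well-behaved pdf assumed by the theory."""
        rng = random.Random(seed)
        return [rng.expovariate(1.0 / scale) for _ in range(size)]

    def fraction_above(field, tau):
        """Fractional area exceeding the rain-rate threshold tau."""
        return sum(1 for r in field if r > tau) / len(field)

    tau = 5.0  # mm/h, illustrative threshold
    fields = [rain_field(scale, 20000, seed=i) for i, scale in enumerate((2.0, 4.0, 8.0))]
    means = [sum(f) / len(f) for f in fields]
    fracs = [fraction_above(f, tau) for f in fields]
    ```

    Rainier fields here have both a larger areal mean and a larger fraction above the threshold, which is the monotone relationship the area-integral estimators rely on.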

  11. Identification of Successive ``Unobservable'' Cyber Data Attacks in Power Systems Through Matrix Decomposition

    NASA Astrophysics Data System (ADS)

    Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.

    2016-11-01

    This paper presents a new framework for identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that relies purely on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification of successive unobservable cyber attacks as the decomposition of a matrix into a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide theoretical guarantees for the data identification. Numerical experiments on actual PMU data from the Central New York power system and on synthetic data are conducted to verify the effectiveness of the proposed method.
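    The convex matrix-decomposition formulation itself is beyond a short sketch, but the low-rank property it leverages can be illustrated with a simpler subspace-projection check on synthetic data (this is not the paper's method): columns whose residual outside the learned signal subspace is large are flagged as attacked time instants.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic PMU-like data: 40 channels driven by 3 latent modes (low rank)
    modes = rng.standard_normal((3, 200))
    mixing = rng.standard_normal((40, 3))
    clean = mixing @ modes

    # An attacker corrupts a few columns (time instants) of the measurements
    attacked = clean.copy()
    attacked_cols = [50, 51, 52]
    attacked[:, attacked_cols] += rng.standard_normal((40, 3)) * 5.0 + 10.0

    # Basis for the low-rank signal subspace, learned from historical clean data
    U, _, _ = np.linalg.svd(clean, full_matrices=False)
    basis = U[:, :3]

    # Columns with a large residual after projection onto the subspace are
    # flagged as potential attacks
    residual = attacked - basis @ (basis.T @ attacked)
    col_norms = np.linalg.norm(residual, axis=0)
    flagged = sorted(np.argsort(col_norms)[-3:].tolist())
    ```

    The paper's convex decomposition is more powerful because it recovers the low-rank and column-sparse parts jointly, without assuming a clean historical training matrix as this sketch does.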

  12. Link prediction based on nonequilibrium cooperation effect

    NASA Astrophysics Data System (ADS)

    Li, Lanxi; Zhu, Xuzhen; Tian, Hui

    2018-04-01

    Link prediction in complex networks has become a common focus of many researchers, but most existing methods concentrate on common neighbors and rarely consider the degree heterogeneity of the two endpoints. Node degree represents the importance or status of an endpoint. We describe large degree heterogeneity as a nonequilibrium between nodes. This nonequilibrium facilitates stable cooperation between endpoints, so that two endpoints with large degree heterogeneity tend to connect stably; we name this phenomenon the nonequilibrium cooperation effect. This paper therefore proposes a link prediction method based on the nonequilibrium cooperation effect to improve accuracy. Theoretical analysis is presented first, and experiments are then performed on 12 real-world networks to compare our indices with mainstream methods through numerical analysis.
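    One illustrative way to fold degree heterogeneity into a neighbor-based index (an assumed form for illustration, not necessarily the paper's definition) is to weight the common-neighbor count by a normalized degree gap:

    ```python
    def heterogeneity_score(adj, x, y):
        """Common-neighbour count weighted by the degree heterogeneity of the
        endpoints (illustrative index, not the paper's exact definition)."""
        cn = len(adj[x] & adj[y])
        kx, ky = len(adj[x]), len(adj[y])
        h = abs(kx - ky) / (kx + ky)  # 0 for equal degrees, -> 1 for extremes
        return cn * (1.0 + h)

    # Tiny toy network: hub 0 connected to everyone, plus a few peripheral links
    adj = {
        0: {1, 2, 3, 4, 5},
        1: {0, 2},
        2: {0, 1, 3},
        3: {0, 2, 4},
        4: {0, 3},
        5: {0},
    }
    s_hub = heterogeneity_score(adj, 0, 1)   # heterogeneous pair (degrees 5 and 2)
    s_peer = heterogeneity_score(adj, 1, 4)  # similar-degree pair (degrees 2 and 2)
    ```

    With one common neighbor each, the heterogeneous pair scores higher than the equal-degree pair, which is the qualitative behaviour the nonequilibrium cooperation effect posits.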

  13. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    NASA Astrophysics Data System (ADS)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data, where the unknown source term is to be determined from additional boundary conditions. Unlike the existing methods found in the literature, which usually employ a first-order-in-time gradient-like system (such as the steepest descent methods) to numerically solve the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order-in-time dissipative gradient-like system and a dynamically selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.

  14. Mono-isotope Prediction for Mass Spectra Using Bayes Network.

    PubMed

    Li, Hui; Liu, Chunmei; Rwebangira, Mugizi Robert; Burge, Legand

    2014-12-01

    Mass spectrometry is one of the most widely utilized methods to study protein functions and components. The challenge of mono-isotope pattern recognition from large-scale protein mass spectral data requires computational algorithms and tools to speed up the analysis and improve the analytic results. We utilized a naïve Bayes network as the classifier, with the assumption that the selected features are independent, to predict mono-isotope patterns from mass spectrometry data. Mono-isotopes detected from validated theoretical spectra were used as prior information in the Bayes method. Three main features extracted from the dataset were employed as independent variables in our model. The application of the proposed algorithm to the public Mo dataset demonstrates that our naïve Bayes classifier is advantageous over existing methods in both accuracy and sensitivity.
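    A minimal Gaussian naïve Bayes classifier of the kind described, with features assumed independent given the class, can be written from scratch. The two features and all values below are invented stand-ins for real spectral features:

    ```python
    import math
    from collections import defaultdict

    class GaussianNB:
        """Minimal Gaussian naive Bayes: features assumed independent given
        the class, as in the mono-isotope predictor described above."""
        def fit(self, X, y):
            groups = defaultdict(list)
            for row, label in zip(X, y):
                groups[label].append(row)
            self.stats, self.priors = {}, {}
            for label, rows in groups.items():
                stats = []
                for col in zip(*rows):
                    mu = sum(col) / len(col)
                    var = max(sum((v - mu) ** 2 for v in col) / len(col), 1e-9)
                    stats.append((mu, var))
                self.stats[label] = stats
                self.priors[label] = len(rows) / len(X)
            return self

        def predict(self, x):
            best, best_lp = None, -math.inf
            for label, stats in self.stats.items():
                lp = math.log(self.priors[label])  # log prior
                for v, (mu, var) in zip(x, stats):  # add per-feature log likelihoods
                    lp += -0.5 * math.log(2 * math.pi * var) - (v - mu) ** 2 / (2 * var)
                if lp > best_lp:
                    best, best_lp = label, lp
            return best

    # Hypothetical features (e.g. an intensity ratio and an m/z spacing)
    X = [[1.0, 0.1], [1.1, 0.2], [0.9, 0.15], [3.0, 1.0], [3.2, 1.1], [2.8, 0.9]]
    y = ["mono", "mono", "mono", "other", "other", "other"]
    clf = GaussianNB().fit(X, y)
    ```

    The paper's classifier additionally incorporates priors from validated theoretical spectra; here the priors are simply the class frequencies in the training data.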

  15. Electrospray ionization time-of-flight mass spectrum analysis method of polyaluminum chloride flocculants.

    PubMed

    Feng, Chenghong; Bi, Zhe; Tang, Hongxiao

    2015-01-06

    Electrospray mass spectrometry has been reported as a novel technique for Al species identification, but to date the working mechanism is not clear and no unanimous method exists for spectrum analysis of traditional Al salt flocculants, let alone of polyaluminum chloride (PAC) flocculants. This paper therefore introduces a novel theoretical calculation method to identify Al species from a mass spectrum, based on deducing the changes in m/z (mass-to-charge ratio) and the molecular formulas of oligomers in five typical PAC flocculants. The use of reference chemical species was specifically proposed in the method to guarantee the uniqueness of the assigned species. The charge and mass reduction of the Al cluster was found to proceed by hydrolysis, gasification, and change of hydroxyl on the oxy bridge. The novel method was validated both qualitatively and quantitatively by comparing its results to those obtained with 27Al NMR spectroscopy.

  16. Possibility of determination of the level of antioxidants in human body using spectroscopic methods

    NASA Astrophysics Data System (ADS)

    Timofeeva, E.; Gorbunova, E.

    2016-08-01

    In this work, the processes of antioxidant defence against aggressive free radicals in the human body were investigated theoretically, and the existing methods for diagnosing oxidative stress and disturbances of antioxidant activity were reviewed. The kinetics of free radical reactions in the oxidation of luminol, and the interaction of antioxidants (such as chlorophyll in the multicomponent system of plant leaves, and ubiquinone) with UV radiation, were also investigated experimentally by spectroscopic methods. The results showed that this method is effective for recording the luminescence of antioxidants, free radicals, chemiluminescent reactions and fluorescence. In addition, these results reveal new opportunities for studying antioxidant activity and antioxidant balance in a multicomponent system by resolving the features of the individual components in the spectral composition. The creation of a quality-control method for drugs required for oxidative stress diagnosis is a promising direction for the further development of this work.

  17. A Behavior-Analytic Account of Motivational Interviewing

    ERIC Educational Resources Information Center

    Christopher, Paulette J.; Dougher, Michael J.

    2009-01-01

    Several published reports have now documented the clinical effectiveness of motivational interviewing (MI). Despite its effectiveness, there are no generally accepted or empirically supported theoretical accounts of its effects. The theoretical accounts that do exist are mentalistic, descriptive, and not based on empirically derived behavioral…

  18. Electron Energy Deposition in Atomic Nitrogen

    DTIC Science & Technology

    1987-10-06

    known theoretical results, and their relative accuracy in comparison to existing measurements and calculations is given elsewhere. 20 2.1 The Source Term...with the proper choice of parameters, reduces to well-known theoretical results. 20 Table 2 gives the parameters for collisional excitation of the...calculations of McGuire 36 and experimental measurements of Brook et al. 37 Additional theoretical and experimental results are discussed in detail elsewhere

  19. Lithography Radiation Effects Study.

    DTIC Science & Technology

    1984-11-01

    Electronic Industries, 7317 South Washington Avenue, Edina, Minnesota 5;435, 1972) A-2-8 15. J. H. Scofield, Theoretical Photoionization Cross Sections...excitation of a given shell is proportional to the photoionization cross-section of the shell. Theoretical photoionization cross-sections as a...Ti (Kα = 4.51 keV), 500 A and for Cr (Kα = 5.41 keV), 580 A. Comparison of these data with existing theoretical models was not carried out. The

  20. MIDER: Network Inference with Mutual Information Distance and Entropy Reduction

    PubMed Central

    Villaverde, Alejandro F.; Ross, John; Morán, Federico; Banga, Julio R.

    2014-01-01

    The prediction of links among variables from a given dataset is a task referred to as network inference or reverse engineering. It is an open problem in bioinformatics and systems biology, as well as in other areas of science. Information theory, which uses concepts such as mutual information, provides a rigorous framework for addressing it. While a number of information-theoretic methods are already available, most of them focus on a particular type of problem, introducing assumptions that limit their generality. Furthermore, many of these methods lack a publicly available implementation. Here we present MIDER, a method for inferring network structures with information-theoretic concepts. It consists of two steps: first, it provides a representation of the network in which the distance among nodes indicates their statistical closeness. Second, it refines the prediction of the existing links to distinguish between direct and indirect interactions and to assign directionality. The method accepts as input time-series data related to some quantitative features of the network nodes (such as concentrations, if the nodes are chemical species). It takes into account time delays between variables, and allows choosing among several definitions and normalizations of mutual information. It is general purpose: it may be applied to any type of network, cellular or otherwise. A Matlab implementation including source code and data is freely available (http://www.iim.csic.es/~gingproc/mider.html). The performance of MIDER has been evaluated on seven different benchmark problems that cover the main types of cellular networks, including metabolic, gene regulatory, and signaling. Comparisons with state-of-the-art information-theoretic methods have demonstrated the competitive performance of MIDER, as well as its versatility. Its use does not demand any a priori knowledge from the user; the default settings and the adaptive nature of the method provide good results for a wide range of problems without requiring tuning. PMID:24806471

  1. MIDER: network inference with mutual information distance and entropy reduction.

    PubMed

    Villaverde, Alejandro F; Ross, John; Morán, Federico; Banga, Julio R

    2014-01-01

    The prediction of links among variables from a given dataset is a task referred to as network inference or reverse engineering. It is an open problem in bioinformatics and systems biology, as well as in other areas of science. Information theory, which uses concepts such as mutual information, provides a rigorous framework for addressing it. While a number of information-theoretic methods are already available, most of them focus on a particular type of problem, introducing assumptions that limit their generality. Furthermore, many of these methods lack a publicly available implementation. Here we present MIDER, a method for inferring network structures with information-theoretic concepts. It consists of two steps: first, it provides a representation of the network in which the distance among nodes indicates their statistical closeness. Second, it refines the prediction of the existing links to distinguish between direct and indirect interactions and to assign directionality. The method accepts as input time-series data related to some quantitative features of the network nodes (e.g. concentrations, if the nodes are chemical species). It takes into account time delays between variables, and allows choosing among several definitions and normalizations of mutual information. It is general purpose: it may be applied to any type of network, cellular or otherwise. A Matlab implementation including source code and data is freely available (http://www.iim.csic.es/~gingproc/mider.html). The performance of MIDER has been evaluated on seven different benchmark problems that cover the main types of cellular networks, including metabolic, gene regulatory, and signaling. Comparisons with state-of-the-art information-theoretic methods have demonstrated the competitive performance of MIDER, as well as its versatility. Its use does not demand any a priori knowledge from the user; the default settings and the adaptive nature of the method provide good results for a wide range of problems without requiring tuning.
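    MIDER's first step, placing nodes by statistical closeness, rests on turning mutual information into a distance. A minimal sketch of that idea, assuming numpy; the histogram binning and the particular metric d(X, Y) = H(X, Y) - I(X; Y) are illustrative choices, not necessarily MIDER's defaults:

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (in nats) from a histogram of counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mi_distance(x, y, bins=8):
    """Histogram estimates of H(X), H(Y), H(X,Y); returns (MI, distance).

    Uses the metric d(X, Y) = H(X, Y) - I(X; Y). Both the binning and
    the distance definition are illustrative, not MIDER's exact ones.
    """
    cxy, _, _ = np.histogram2d(x, y, bins=bins)
    hx = entropy(cxy.sum(axis=1))    # marginal over y -> H(X)
    hy = entropy(cxy.sum(axis=0))    # marginal over x -> H(Y)
    hxy = entropy(cxy.ravel())       # joint entropy H(X, Y)
    mi = hx + hy - hxy
    return mi, hxy - mi

rng = np.random.default_rng(0)
x = rng.normal(size=2000)
y = x + 0.1 * rng.normal(size=2000)   # strongly coupled with x
z = rng.normal(size=2000)             # independent of x
mi_xy, d_xy = mi_distance(x, y)
mi_xz, d_xz = mi_distance(x, z)
```

Coupled series come out with high mutual information and a small distance, independent ones the opposite; MIDER's second step then prunes indirect links and assigns direction.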

  2. Local production of medical technologies and its effect on access in low and middle income countries: a systematic review of the literature

    PubMed Central

    Kaplan, Warren Allan; Ritz, Lindsay Sarah; Vitello, Marie

    2011-01-01

    Objectives: The objective of this study was to assess the existing theoretical and empirical literature examining the link between "local production" of pharmaceuticals and medical devices and increased local access to these products. Our preliminary hypothesis is that studies showing a robust relationship between local production and access to medical products are sparse, at best. Methods: An extensive literature search was conducted using a wide variety of databases and search terms intending to capture as many different aspects of this issue as possible. The results of the search were reviewed and categorized according to their relevance to the research question. The literature was also reviewed to determine the rigor used to examine the effects of local production and what implications these experiences hold for other developing countries. Results: Literature addressing the benefits of local production and the link between it and access to medical products is sparse, mainly descriptive and lacking empirical evidence. Of the literature we reviewed that addressed comparative economics and strategic planning of multinational and domestic firms, there are few dealing with emerging markets and lower-middle income countries and even fewer that compare local biomedical producers with multinational corporations in terms of a reasonable metric. What comparisons exist mainly relate to prices of local versus foreign/multinational produced medicines. Conclusions: An assessment of the existing theoretical and empirical literature examining the link between "local production" of pharmaceuticals and medical devices and increased local access to these products reveals a paucity of literature explicitly dealing with this issue. Of the literature that does exist, methods used to date are insufficient to prove a robust relationship between local production of medical products and access to these products. There are mixed messages from various studies, and although the studies may correctly depict specific situations in specific countries with reference to specific products, such evidence cannot be generalized. Our review strongly supports the need for further research in understanding the dynamic link between local production and access to medical products. PMID:23093883

  3. Projection-free approximate balanced truncation of large unstable systems

    NASA Astrophysics Data System (ADS)

    Flinois, Thibault L. B.; Morgans, Aimee S.; Schmid, Peter J.

    2015-08-01

    In this article, we show that the projection-free, snapshot-based, balanced truncation method can be applied directly to unstable systems. We prove that even for unstable systems, the unmodified balanced proper orthogonal decomposition algorithm theoretically yields a converged transformation that balances the Gramians (including the unstable subspace). We then apply the method to a spatially developing unstable system and show that it results in reduced-order models of similar quality to the ones obtained with existing methods. Due to the unbounded growth of unstable modes, a practical restriction on the final impulse response simulation time appears, which can be adjusted depending on the desired order of the reduced-order model. Recommendations are given to further reduce the cost of the method if the system is large and to improve the performance of the method if it does not yield acceptable results in its unmodified form. Finally, the method is applied to the linearized flow around a cylinder at Re = 100 to show that it can accurately reproduce impulse responses for more realistic unstable large-scale systems in practice. The well-established approximate balanced truncation numerical framework therefore can be safely applied to unstable systems without any modifications. Additionally, balanced reduced-order models can readily be obtained even for large systems, where the computational cost of existing methods is prohibitive.
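    Once the two Gramians are in hand (however they were obtained, snapshot-based or otherwise), the balancing transformation itself is standard square-root algebra. A sketch under the assumption that both Gramians are symmetric positive definite and small enough to factor directly; here they are simply random SPD matrices, not Gramians computed from a flow solver:

```python
import numpy as np

def balancing_transform(Wc, Wo):
    """Square-root balancing: given controllability/observability
    Gramians (SPD), return T, Tinv, hsv such that Tinv @ Wc @ Tinv.T
    and T.T @ Wo @ T both equal diag(hsv), the Hankel singular values."""
    Lc = np.linalg.cholesky(Wc)
    Lo = np.linalg.cholesky(Wo)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)
    T = Lc @ Vt.T / np.sqrt(s)          # columns scaled by s^(-1/2)
    Tinv = (U / np.sqrt(s)).T @ Lo.T
    return T, Tinv, s

rng = np.random.default_rng(1)
n = 6
A = rng.normal(size=(n, n)); Wc = A @ A.T + n * np.eye(n)
B = rng.normal(size=(n, n)); Wo = B @ B.T + n * np.eye(n)
T, Tinv, hsv = balancing_transform(Wc, Wo)
Wc_b = Tinv @ Wc @ Tinv.T               # balanced Gramians
Wo_b = T.T @ Wo @ T
```

Truncating the columns of T (and rows of Tinv) past the first few Hankel singular values then yields the reduced-order model; the abstract's point is that the Gramians themselves can come from snapshot data even when the system is unstable.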

  4. Theoretical methods for estimating moments of inertia of trees and boles.

    Treesearch

    John A. Sturos

    1973-01-01

    Presents a theoretical method for estimating the mass moments of inertia of full trees and boles about a transverse axis. Estimates from the theoretical model compared closely with experimental data on aspen and red pine trees obtained in the field by the pendulum method. The theoretical method presented may be used to estimate the mass moments of inertia and other...
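    The field pendulum method mentioned above presumably uses the standard compound-pendulum relation T = 2*pi*sqrt(I_pivot/(m*g*d)) together with the parallel-axis theorem. A sketch of the back-calculation it implies; the numbers below are hypothetical illustrations, not data from the study:

```python
import math

def inertia_from_pendulum(mass_kg, d_m, period_s, g=9.81):
    """Compound-pendulum estimate of mass moment of inertia.

    mass_kg  : mass of the suspended tree or bole
    d_m      : distance from the pivot to the centre of mass
    period_s : measured small-oscillation period
    Returns (I about the pivot, I about the centre of mass), from
    T = 2*pi*sqrt(I_pivot/(m*g*d)) and the parallel-axis theorem.
    """
    I_pivot = mass_kg * g * d_m * period_s**2 / (4 * math.pi**2)
    I_cm = I_pivot - mass_kg * d_m**2
    return I_pivot, I_cm

# Hypothetical numbers, for illustration only:
I_p, I_c = inertia_from_pendulum(mass_kg=450.0, d_m=4.0, period_s=4.5)
```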

  5. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    PubMed

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties including the existence of an efficient solution in each iteration and the theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem when achieving comparable log-likelihood on test data.
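    The computational advantage of the low-rank-plus-diagonal form comes from the Woodbury identity, which inverts M = D + U Uᵀ using only a k x k solve instead of an n x n one. A small sketch of the generic identity; this is not COP's greedy update itself:

```python
import numpy as np

def woodbury_inverse(d, U):
    """Invert M = diag(d) + U @ U.T via the Woodbury identity:
    M^-1 = D^-1 - D^-1 U (I_k + U^T D^-1 U)^-1 U^T D^-1.
    Only a k x k system is solved; the dense n x n inverse is
    materialized here purely so it can be checked against M."""
    Dinv_U = U / d[:, None]                   # D^-1 U, O(n*k)
    K = np.eye(U.shape[1]) + U.T @ Dinv_U     # k x k capacitance matrix
    return np.diag(1.0 / d) - Dinv_U @ np.linalg.solve(K, Dinv_U.T)

rng = np.random.default_rng(2)
n, k = 200, 3
d = rng.uniform(1.0, 2.0, size=n)             # diagonal part
U = rng.normal(size=(n, k))                   # rank-k part
M = np.diag(d) + U @ U.T
Minv = woodbury_inverse(d, U)
```

In practice one would keep the inverse in factored form rather than forming the dense matrix; the point is that each rank-one component added by a greedy scheme only costs a small update.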

  6. Three dimensional dust-acoustic solitary waves in an electron depleted dusty plasma with two-superthermal ion-temperature

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borhanian, J.; Shahmansouri, M.

    2013-01-15

    A theoretical investigation is carried out to study the existence and characteristics of propagation of dust-acoustic (DA) waves in an electron-depleted dusty plasma with two-temperature ions, which are modeled by kappa distribution functions. A three-dimensional cylindrical Kadomtsev-Petviashvili equation governing evolution of small but finite amplitude DA waves is derived by means of a reductive perturbation method. The influence of physical parameters on solitary wave structure is examined. Furthermore, the energy integral equation is used to study the existence domains of the localized structures. It is found that the present model can be employed to describe the existence of positive as well as negative polarity DA solitary waves by selecting special values for parameters of the system, e.g., superthermal index of cold and/or hot ions, cold to hot ion density ratio, and hot to cold ion temperature ratio. This model may be useful to understand the excitation of nonlinear DA waves in astrophysical objects.

  7. Exploring a taxonomy for aggression against women: can it aid conceptual clarity?

    PubMed

    Cook, Sarah; Parrott, Dominic

    2009-01-01

    The assessment of aggression against women is demanding primarily because assessment strategies do not share a common language to describe reliably the wide range of forms of aggression women experience. The lack of a common language impairs efforts to describe these experiences, understand causes and consequences of aggression against women, and develop effective intervention and prevention efforts. This review accomplishes two goals. First, it applies a theoretically and empirically based taxonomy to behaviors assessed by existing measurement instruments. Second, it evaluates whether the taxonomy provides a common language for the field. Strengths of the taxonomy include its ability to describe and categorize all forms of aggression found in existing quantitative measures. The taxonomy also classifies numerous examples of aggression discussed in the literature but notably absent from quantitative measures. Although we use existing quantitative measures as a starting place to evaluate the taxonomy, its use is not limited to quantitative methods. Implications for theory, research, and practice are discussed.

  8. Using Intervention Mapping for child development and wellbeing programs in early childhood education and care settings.

    PubMed

    O'Connor, Amanda; Blewitt, Claire; Nolan, Andrea; Skouteris, Helen

    2018-06-01

    Supporting children's social and emotional learning benefits all elements of children's development and has been associated with positive mental health and wellbeing, development of values and life skills. However, literature relating to the creation of interventions designed for use within the early childhood education and care settings to support children's social and emotional skills and learning is lacking. Intervention Mapping (IM) is a systematic intervention development framework, utilising principles centred on participatory co-design methods, multiple theoretical approaches and existing literature to enable effective decision-making during the development process. Early childhood pedagogical programs are also shaped by these principles; however, educators tend to draw on implicit knowledge when working with families. IM offers this sector the opportunity to formally incorporate theoretical, evidence-based research into the development of early childhood education and care social and emotional interventions. Emerging literature indicates IM is useful for designing health and wellbeing interventions for children within early childhood education and care settings. Considering the similar underlying principles of IM, existing applications within early childhood education and care and development of interventions beyond health behaviour change, it is recommended IM be utilised to design early childhood education and care interventions focusing on supporting children's social and emotional development.

  9. Development and Validation of the Scan of Postgraduate Educational Environment Domains (SPEED): A Brief Instrument to Assess the Educational Environment in Postgraduate Medical Education

    PubMed Central

    Schönrock-Adema, Johanna; Visscher, Maartje; Raat, A. N. Janet; Brand, Paul L. P.

    2015-01-01

    Introduction: Current instruments to evaluate the postgraduate medical educational environment lack theoretical frameworks and are relatively long, which may reduce response rates. We aimed to develop and validate a brief instrument that, based on a solid theoretical framework for educational environments, solicits resident feedback to screen the postgraduate medical educational environment quality. Methods: Stepwise, we developed a screening instrument, using existing instruments to assess educational environment quality and adopting a theoretical framework that defines three educational environment domains: content, atmosphere and organization. First, items from relevant existing instruments were collected and, after deleting duplicates and items not specifically addressing educational environment, grouped into the three domains. In a Delphi procedure, the item list was reduced to a set of items considered most important and comprehensively covering the three domains. These items were triangulated against the results of semi-structured interviews with 26 residents from three teaching hospitals to achieve face validity. This draft version of the Scan of Postgraduate Educational Environment Domains (SPEED) was administered to residents in a general and university hospital and further reduced and validated based on the data collected. Results: Two hundred twenty-three residents completed the 43-item draft SPEED. We used half of the dataset for item reduction, and the other half for validating the resulting SPEED (15 items, 5 per domain). Internal consistencies were high. Correlations between domain scores in the draft and brief versions of SPEED were high (>0.85) and highly significant (p<0.001). Domain score variance of the draft instrument was explained for ≥80% by the items representing the domains in the final SPEED. Conclusions: The SPEED comprehensively covers the three educational environment domains defined in the theoretical framework. Because of its validity and brevity, the SPEED is promising as a useful and easily applicable tool to regularly screen educational environment quality in postgraduate medical education. PMID:26413836

  10. Subsampled Hessian Newton Methods for Supervised Learning.

    PubMed

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
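    The baseline being improved on can be sketched as follows: a full-data gradient, a Hessian formed on a random subsample, and a backtracking step to guarantee descent. This is the plain subsampled Hessian Newton idea, without the paper's two-dimensional subproblem refinement; the data, parameters and helper names are illustrative:

```python
import numpy as np

def loss_grad(w, X, y, reg=1e-2):
    """Regularized logistic loss and gradient; labels y are in {-1, +1}."""
    z = np.clip(y * (X @ w), -30, 30)        # clip for numerical safety
    loss = np.mean(np.log1p(np.exp(-z))) + 0.5 * reg * w @ w
    sig = 1.0 / (1.0 + np.exp(z))            # sigma(-z)
    grad = -(X.T @ (y * sig)) / len(y) + reg * w
    return loss, grad

def subsampled_newton(X, y, iters=20, sample=0.2, reg=1e-2, seed=0):
    """Newton iterations whose Hessian is estimated on a random subsample."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    m = max(int(sample * n), p)
    w = np.zeros(p)
    for _ in range(iters):
        loss, g = loss_grad(w, X, y, reg)
        idx = rng.choice(n, size=m, replace=False)
        Xs = X[idx]                          # Hessian uses m << n rows
        s = 1.0 / (1.0 + np.exp(-np.clip(Xs @ w, -30, 30)))
        H = (Xs.T * (s * (1 - s))) @ Xs / m + reg * np.eye(p)
        step = np.linalg.solve(H, g)
        t = 1.0                              # backtracking keeps descent
        while loss_grad(w - t * step, X, y, reg)[0] > loss and t > 1e-8:
            t *= 0.5
        w = w - t * step
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = np.sign(X @ rng.normal(size=10) + 0.1 * rng.normal(size=2000))
w = subsampled_newton(X, y)
final_loss, final_grad = loss_grad(w, X, y)
```

The point the abstract makes is that the cheap subsampled direction can be a poor one; their fix adjusts it via a small two-dimensional subproblem per iteration rather than the simple halving used here.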

  11. Principles of assessing bacterial susceptibility to antibiotics using the agar diffusion method.

    PubMed

    Bonev, Boyan; Hooper, James; Parisot, Judicaël

    2008-06-01

    The agar diffusion assay is one method for quantifying the ability of antibiotics to inhibit bacterial growth. Interpretation of results from this assay relies on model-dependent analysis, which is based on the assumption that antibiotics diffuse freely in the solid nutrient medium. In many cases, this assumption may be incorrect, which leads to significant deviations of the predicted behaviour from the experiment and to inaccurate assessment of bacterial susceptibility to antibiotics. We sought a theoretical description of the agar diffusion assay that takes into consideration loss of antibiotic during diffusion and provides higher accuracy of the MIC determined from the assay. We propose a new theoretical framework for analysis of agar diffusion assays. MIC was determined by this technique for a number of antibiotics and analysis was carried out using both the existing free diffusion and the new dissipative diffusion models. A theory for analysis of antibiotic diffusion in solid media is described, in which we consider possible interactions of the test antibiotic with the solid medium or partial antibiotic inactivation during diffusion. This is particularly relevant to the analysis of diffusion of hydrophobic or amphipathic compounds. The model is based on a generalized diffusion equation, which includes the existing theory as a special case and contains an additional, dissipative term. Analysis of agar diffusion experiments using the new model allows significantly more accurate interpretation of experimental results and determination of MICs. The model has more general validity and is applicable to analysis of other dissipative processes, for example to antigen diffusion and to calculations of substrate load in affinity purification.
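    The generalized diffusion equation described, free diffusion plus a dissipative loss term, can be sketched in one dimension as dc/dt = D*d2c/dx2 - k*c. The real assay geometry is radial and D, k below are illustrative values only, not fitted ones:

```python
import numpy as np

def dissipative_diffusion(c0, D, k, dx, dt, steps):
    """Explicit finite differences for dc/dt = D*d2c/dx2 - k*c:
    free diffusion plus first-order antibiotic loss, with no-flux
    (reflecting) boundaries. Stability needs D*dt/dx**2 <= 0.5."""
    c = c0.astype(float).copy()
    r = D * dt / dx**2
    assert r <= 0.5, "explicit scheme would be unstable"
    for _ in range(steps):
        lap = np.empty_like(c)
        lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
        lap[0] = c[1] - c[0]                 # reflecting ends
        lap[-1] = c[-2] - c[-1]
        c = c + r * lap - dt * k * c
    return c

nx, dx, dt = 201, 0.1, 0.004
D, k = 1.0, 0.5
c0 = np.zeros(nx)
c0[nx // 2] = 1.0 / dx                       # unit dose in the central well
c_free = dissipative_diffusion(c0, D, 0.0, dx, dt, steps=500)   # t = 2
c_diss = dissipative_diffusion(c0, D, k, dx, dt, steps=500)
mass_free = c_free.sum() * dx
mass_diss = c_diss.sum() * dx
```

With k = 0 the total amount of antibiotic is conserved; with k > 0 it decays as exp(-k*t), which is what shifts the predicted inhibition-zone radius and hence the MIC read off from the assay.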

  12. The application of epidemiology in aquatic animal health - opportunities and challenges.

    PubMed

    Peeler, Edmund J; Taylor, Nicholas G H

    2011-08-11

    Over recent years the growth in aquaculture, accompanied by the emergence of new and transboundary diseases, has stimulated epidemiological studies of aquatic animal diseases. Great potential exists for both observational and theoretical approaches to investigate the processes driving emergence but, to date, compared to terrestrial systems, relatively few studies exist in aquatic animals. Research using risk methods has assessed routes of introduction of aquatic animal pathogens to facilitate safe trade (e.g. import risk analyses) and support biosecurity. Epidemiological studies of risk factors for disease in aquaculture (most notably Atlantic salmon farming) have effectively supported control measures. Methods developed for terrestrial livestock diseases (e.g. risk-based surveillance) could improve the capacity of aquatic animal surveillance systems to detect disease incursions and emergence. The study of disease in wild populations presents many challenges and the judicious use of theoretical models offers some solutions. Models, parameterised from observational studies of host pathogen interactions, have been used to extrapolate estimates of impacts on the individual to the population level. These have proved effective in estimating the likely impact of parasite infections on wild salmonid populations in Switzerland and Canada (where the importance of farmed salmon as a reservoir of infection was investigated). A lack of data is often the key constraint in the application of new approaches to surveillance and modelling. The need for epidemiological approaches to protect aquatic animal health will inevitably increase in the face of the combined challenges of climate change, increasing anthropogenic pressures, limited water sources and the growth in aquaculture.

  13. The theoretical cognitive process of visualization for science education.

    PubMed

    Mnguni, Lindelani E

    2014-01-01

    The use of visual models such as pictures, diagrams and animations in science education is increasing. This is because of the complex nature associated with the concepts in the field. Students, especially entrant students, often report misconceptions and learning difficulties associated with various concepts especially those that exist at a microscopic level, such as DNA, the gene and meiosis as well as those that exist in relatively large time scales such as evolution. However, the role of visual literacy in the construction of knowledge in science education has not been investigated much. This article explores the theoretical process of visualization answering the question "how can visual literacy be understood based on the theoretical cognitive process of visualization in order to inform the understanding, teaching and studying of visual literacy in science education?" Based on various theories on cognitive processes during learning for science and general education the author argues that the theoretical process of visualization consists of three stages, namely, Internalization of Visual Models, Conceptualization of Visual Models and Externalization of Visual Models. The application of this theoretical cognitive process of visualization and the stages of visualization in science education are discussed.

  14. Effective Floquet Hamiltonian theory of multiple-quantum NMR in anisotropic solids involving quadrupolar spins: Challenges and Perspectives

    NASA Astrophysics Data System (ADS)

    Ganapathy, Vinay; Ramachandran, Ramesh

    2017-10-01

    The response of a quadrupolar nucleus (nuclear spin with I > 1/2) to an oscillating radio-frequency pulse/field is delicately dependent on the ratio of the quadrupolar coupling constant to the amplitude of the pulse in addition to its duration and oscillating frequency. Consequently, analytic description of the excitation process in the density operator formalism has remained less transparent within existing theoretical frameworks. As an alternative, the utility of the "concept of effective Floquet Hamiltonians" is explored in the present study to explicate the nuances of the excitation process in multilevel systems. Employing spin I = 3/2 as a case study, a unified theoretical framework for describing the excitation of multiple-quantum transitions in static isotropic and anisotropic solids is proposed within the framework of perturbation theory. The challenges resulting from the anisotropic nature of the quadrupolar interactions are addressed within the effective Hamiltonian framework. The possible role of the various interaction frames on the convergence of the perturbation corrections is discussed along with a proposal for a "hybrid method" for describing the excitation process in anisotropic solids. Employing suitable model systems, the validity of the proposed hybrid method is substantiated through a rigorous comparison between simulations emerging from exact numerical and analytic methods.

  15. MRF energy minimization and beyond via dual decomposition.

    PubMed

    Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios

    2011-03-01

    This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. Such a framework exploits the powerful technique of Dual Decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP-relaxations to MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis on the bounds related with the different algorithms derived from our framework and experimental results/comparisons using synthetic and real data for a variety of tasks in computer vision demonstrate the extreme potentials of our approach.
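    The decomposition idea can be sketched on a toy problem: a 3-node chain split into its two edges, each subproblem solved exactly, with a projected-subgradient update on the dual variables of the shared node. This is a minimal illustration under simplifying assumptions, far from the engineered solvers the paper builds:

```python
import itertools
import numpy as np

def solve_chain(unary, pair):
    """Brute-force minimum-energy labeling of a tiny chain MRF."""
    n, L = unary.shape
    best_e, best_x = np.inf, None
    for x in itertools.product(range(L), repeat=n):
        e = sum(unary[i, x[i]] for i in range(n))
        e += sum(pair[i][x[i], x[i + 1]] for i in range(n - 1))
        if e < best_e:
            best_e, best_x = e, x
    return best_e, best_x

def dual_decomposition_bound(unary, pair, iters=300):
    """Split a 3-node chain into subproblems A = (0,1) and B = (1,2).
    Node 1 is shared: its unary is halved and shifted by +lam / -lam,
    so for every lam the subproblem minima sum to a lower bound on the
    true MAP energy; subgradient ascent tightens that bound."""
    L = unary.shape[1]
    lam = np.zeros(L)
    best = -np.inf
    for t in range(1, iters + 1):
        uA = np.vstack([unary[0], unary[1] / 2 + lam])
        uB = np.vstack([unary[1] / 2 - lam, unary[2]])
        eA, xA = solve_chain(uA, [pair[0]])
        eB, xB = solve_chain(uB, [pair[1]])
        best = max(best, eA + eB)
        g = np.zeros(L)                  # subgradient w.r.t. lam
        g[xA[1]] += 1.0
        g[xB[0]] -= 1.0
        lam += g / t                     # diminishing step sizes
    return best

rng = np.random.default_rng(3)
unary = rng.normal(size=(3, 4))
pair = [rng.normal(size=(4, 4)), rng.normal(size=(4, 4))]
opt, _ = solve_chain(unary, pair)
bound = dual_decomposition_bound(unary, pair)
```

On a tree the underlying LP relaxation is tight, so the bound approaches the exact MAP energy; on loopy graphs it is in general a strict lower bound, which is exactly what the framework's theoretical analysis characterizes.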

  16. Interpersonal communication as an agent of normative influence: a mixed method study among the urban poor in India.

    PubMed

    Rimal, Rajiv N; Sripad, Pooja; Speizer, Ilene S; Calhoun, Lisa M

    2015-08-12

    Although social norms are thought to play an important role in couples' reproductive decisions, only limited theoretical or empirical guidance exists on how the underlying process works. Using the theory of normative social behavior (TNSB), through a mixed-method design, we investigated the role played by injunctive norms and interpersonal discussion in the relationship between descriptive norms and use of modern contraceptive methods among the urban poor in India. Data from a household survey (N = 11,811) were used to test the underlying theoretical propositions, and focus group interviews among men and women were then conducted to obtain more in-depth knowledge about decision-making processes related to modern contraceptive use. Spousal influence and interpersonal communication emerged as key factors in decision-making, waning in the later years of marriage, and they also moderated the influence of descriptive norms on behaviors. Norms around contraceptive use, which varied by parity, are rapidly changing with the country's urbanization and increased access to health information. Open interpersonal discussion, community norms, and perspectives are integral in enabling women and couples to use modern family planning to meet their current fertility desires and warrant sensitivity in the design of family planning policy and programs.

  17. Inversion of Surface-wave Dispersion Curves due to Low-velocity-layer Models

    NASA Astrophysics Data System (ADS)

    Shen, C.; Xia, J.; Mi, B.

    2016-12-01

    A successful inversion relies on exact forward modeling methods. It is a key step to accurately calculate multi-mode dispersion curves of a given model in high-frequency surface-wave (Rayleigh wave and Love wave) methods. For normal models (shear (S)-wave velocity increasing with depth), their theoretical dispersion curves completely match the dispersion spectrum that is generated based on wave equation. For models containing a low-velocity-layer, however, phase velocities calculated by existing forward-modeling algorithms (e.g. Thomson-Haskell algorithm, Knopoff algorithm, fast vector-transfer algorithm and so on) fail to be consistent with the dispersion spectrum at a high frequency range. They will approach a value close to the surface-wave velocity of the low-velocity-layer under the surface layer, rather than that of the surface layer when their corresponding wavelengths are short enough. This phenomenon conflicts with the characteristics of surface waves, which results in an erroneous inverted model. By comparing the theoretical dispersion curves with simulated dispersion energy, we proposed a direct and essential solution to accurately compute surface-wave phase velocities due to low-velocity-layer models. Based on the proposed forward modeling technique, we can achieve correct inversion for these types of models. Several synthetic examples proved the effectiveness of our method.

  18. Excitation and Ionization Cross Sections for Electron-Beam Energy Deposition in High Temperature Air

    DTIC Science & Technology

    1987-07-09

    are given and compared to existing experimental results or other theoretical approaches. This information can readily be used as input for a deposition...of the doubly-differential, singly-differential and total ionization cross sections which subsequently served to guide theoretical calculations on...coworkers have been leaders in developing a theoretical base for studying electron production and energy deposition in atmospheric gases such as He, N2

  19. Obtaining tight bounds on higher-order interferences with a 5-path interferometer

    NASA Astrophysics Data System (ADS)

    Kauten, Thomas; Keil, Robert; Kaufmann, Thomas; Pressl, Benedikt; Brukner, Časlav; Weihs, Gregor

    2017-03-01

    Within the established theoretical framework of quantum mechanics, interference always occurs between pairs of paths through an interferometer. Higher order interferences with multiple constituents are excluded by Born’s rule and can only exist in generalized probabilistic theories. Thus, high-precision experiments searching for such higher order interferences are a powerful method to distinguish between quantum mechanics and more general theories. Here, we perform such a test in an optical multi-path interferometer, which avoids crucial systematic errors, has access to the entire phase space and is more stable than previous experiments. Our results are in accordance with quantum mechanics and rule out the existence of higher order interference terms in optical interferometry to an extent that is more than four orders of magnitude smaller than the expected pairwise interference, refining previous bounds by two orders of magnitude.
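    Born's rule kills the third-order term identically, which is easy to check numerically: with path amplitudes a, b, c, the Sorkin combination of the seven opening patterns cancels exactly, while pairwise interference survives. A small sketch with arbitrary random amplitudes:

```python
import numpy as np

def born_probability(amps, subset):
    """Detection probability when only the paths in `subset` are open,
    per Born's rule: P = |sum of the open paths' amplitudes|^2."""
    return abs(sum(amps[i] for i in subset)) ** 2

def sorkin_I3(amps):
    """Third-order (Sorkin) interference term for three paths; Born's
    rule forces it to vanish, while pairwise terms generally do not."""
    P = lambda *s: born_probability(amps, s)
    return (P(0, 1, 2) - P(0, 1) - P(0, 2) - P(1, 2)
            + P(0) + P(1) + P(2))

rng = np.random.default_rng(4)
amps = rng.normal(size=3) + 1j * rng.normal(size=3)  # random path amplitudes
I3 = sorkin_I3(amps)
I2 = (born_probability(amps, (0, 1))
      - born_probability(amps, (0,))
      - born_probability(amps, (1,)))                # pairwise interference
```

The experiment's figure of merit is essentially a measured I3 normalized by the expected pairwise term I2; generalized probabilistic theories beyond quantum mechanics would allow I3 to be nonzero.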

  20. Enumerating virus-like particles in an optically concentrated suspension by fluorescence correlation spectroscopy.

    PubMed

    Hu, Yi; Cheng, Xuanhong; Daniel Ou-Yang, H

    2013-01-01

    Fluorescence correlation spectroscopy (FCS) is one of the most sensitive methods for enumerating low concentration nanoparticles in a suspension. However, biological nanoparticles such as viruses often exist at a concentration much lower than the FCS detection limit. While optically generated trapping potentials are shown to effectively enhance the concentration of nanoparticles, feasibility of FCS for enumerating field-enriched nanoparticles requires understanding of the nanoparticle behavior in the external field. This paper reports an experimental study that combines optical trapping and FCS to examine existing theoretical predictions of particle concentration. Colloidal suspensions of polystyrene (PS) nanospheres and HIV-1 virus-like particles are used as model systems. Optical trapping energies and statistical analysis are used to discuss the applicability of FCS for enumerating nanoparticles in a potential well produced by a force field.
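    The enumeration principle behind FCS is that the zero-lag amplitude of the intensity correlation is the inverse of the mean occupancy of the focal volume, for Poisson number fluctuations. A toy sketch with ideal, uncorrelated Poisson frames and noise-free detection; the brightness and concentration values are arbitrary assumptions:

```python
import numpy as np

def fcs_amplitude(intensity):
    """Zero-lag fluorescence correlation amplitude
    G(0) = <dI^2> / <I>^2; for Poisson occupancy of the focal volume
    this equals 1/<N>, which is how FCS counts particles."""
    dI = intensity - intensity.mean()
    return np.mean(dI ** 2) / intensity.mean() ** 2

rng = np.random.default_rng(5)
mean_n = 0.5                        # mean particles in the focal volume
brightness = 1000.0                 # photons per particle per bin (assumed)
n = rng.poisson(mean_n, size=200_000)
intensity = brightness * n          # noise-free detection, for simplicity
N_est = 1.0 / fcs_amplitude(intensity)
```

Optical trapping raises the local concentration, i.e. the mean occupancy, which lowers G(0); the paper's question is whether this relation still holds once the external trapping potential biases the number fluctuations.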

  1. Stability and global Hopf bifurcation in a delayed food web consisting of a prey and two predators

    NASA Astrophysics Data System (ADS)

    Meng, Xin-You; Huo, Hai-Feng; Zhang, Xiao-Bing

    2011-11-01

    This paper is concerned with a predator-prey system with a Holling type II functional response and delays due to hunting and gestation. By regarding the sum of the delays as the bifurcation parameter, the local stability of the positive equilibrium and the existence of a Hopf bifurcation are investigated. Explicit formulas determining the properties of the Hopf bifurcation are obtained using the normal form method and the center manifold theorem. Special attention is paid to the global continuation of the local Hopf bifurcation. Using a global Hopf bifurcation result of Wu [Wu JH. Symmetric functional differential equations and neural networks with memory, Trans Amer Math Soc 1998;350:4799-4838] for functional differential equations, we show the global existence of periodic solutions. Finally, several numerical simulations illustrating the theoretical analysis are given.
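
    The destabilizing role of a delay can be seen in a much simpler relative of this food web, the delayed logistic (Hutchinson) equation dx/dt = r·x(t)(1 − x(t−τ)), whose positive equilibrium loses stability through a Hopf bifurcation once r·τ exceeds π/2. A minimal Euler-integration sketch (parameters are illustrative, not taken from the paper):

    ```python
    # Euler simulation of dx/dt = r*x(t)*(1 - x(t - tau)): the simplest model
    # in which a delay destabilizes a positive equilibrium via Hopf bifurcation.
    def delayed_logistic(r=1.0, tau=2.0, dt=0.01, t_end=100.0, x0=0.5):
        lag = int(round(tau / dt))       # delay expressed in time steps
        history = [x0] * (lag + 1)       # constant initial history on [-tau, 0]
        xs = []
        for _ in range(int(t_end / dt)):
            x_now = history[-1]
            x_delayed = history[-1 - lag]
            history.append(x_now + dt * r * x_now * (1.0 - x_delayed))
            xs.append(history[-1])
        return xs

    xs = delayed_logistic()              # r*tau = 2 > pi/2: oscillatory regime
    tail = xs[len(xs) // 2:]
    crossings = sum(1 for a, b in zip(tail, tail[1:])
                    if (a - 1.0) * (b - 1.0) < 0)
    print(crossings)  # repeated crossings of x = 1: a sustained oscillation
    ```

    Below the threshold (e.g. r·τ = 0.5) the same code settles onto the equilibrium x = 1 instead.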

  2. The Educational (Im)possibility for Dietetics: A Poststructural Discourse Analysis

    ERIC Educational Resources Information Center

    Gingras, Jacqui

    2009-01-01

    Inquiring into the theoretical underpinnings of dietetic curriculum provides a means for further understanding who dietitians are (identity) and what dietitians do (performativity). Since dietetic curriculum exists as a structural influence on the dietetic student identity, it is worth inquiring into how such a structure is theoretically informed,…

  3. Designing Educative Curriculum Materials: A Theoretically and Empirically Driven Process

    ERIC Educational Resources Information Center

    Davis, Elizabeth A.; Palincsar, Annemarie Sullivan; Arias, Anna Maria; Bismack, Amber Schultz; Marulis, Loren M.; Iwashyna, Stefanie K.

    2014-01-01

    In this article, the authors argue for a design process in the development of educative curriculum materials that is theoretically and empirically driven. Using a design-based research approach, they describe their design process for incorporating educative features intended to promote teacher learning into existing, high-quality curriculum…

  4. Thermoacoustics of solids: A pathway to solid state engines and refrigerators

    NASA Astrophysics Data System (ADS)

    Hao, Haitian; Scalo, Carlo; Sen, Mihir; Semperlotti, Fabio

    2018-01-01

    Thermoacoustic oscillations were one of the most exciting discoveries in the physics of fluids in the 19th century. Since their discovery, scientists have formulated a comprehensive theoretical explanation of the basic phenomenon, which has since found several practical applications in engineering devices. To date, all studies have concentrated on the thermoacoustics of fluid media, where this fascinating mechanism was exclusively believed to exist. Our study shows theoretical and numerical evidence of the existence of thermoacoustic instabilities in solid media. Although the underlying physical mechanism exhibits some interesting similarities with its counterpart in fluids, the theoretical framework highlights relevant differences that have important implications for the ability to trigger and sustain the thermoacoustic response. This mechanism could pave the way for the development of highly robust and reliable solid-state thermoacoustic engines and refrigerators.

  5. Propagating confined states in phase dynamics

    NASA Technical Reports Server (NTRS)

    Brand, Helmut R.; Deissler, Robert J.

    1992-01-01

    Theoretical treatment is given to the possibility of the existence of propagating confined states in the nonlinear phase equation by generalizing stationary confined states. The nonlinear phase equation is set forth for the case of propagating patterns with long-wavelength and low-frequency modulation. A large range of parameter values is shown to exist for propagating confined states: spatially localized regions that travel on a background with unique wavelengths. The theoretical phenomena are shown to correspond to such physical systems as spirals in Taylor instabilities, traveling waves in convective systems, and slot-convection phenomena in binary fluid mixtures.

  6. Existence of k⁻¹ power-law scaling in the equilibrium regions of wall-bounded turbulence explained by Heisenberg's eddy viscosity.

    PubMed

    Katul, Gabriel G; Porporato, Amilcare; Nikora, Vladimir

    2012-12-01

    The existence of a "-1" power-law scaling at low wavenumbers in the longitudinal velocity spectrum of wall-bounded turbulence has been explained by multiple mechanisms; however, experimental support has not been uniform across laboratory studies. This letter shows that Heisenberg's eddy viscosity approach can provide a theoretical framework that bridges these multiple mechanisms and explains the elusiveness of the "-1" power law in some experiments. Novel theoretical outcomes are conjectured about the role of intermittency and very-large-scale motions in modifying the k⁻¹ scaling.

  7. Reconstruction of Vectorial Acoustic Sources in Time-Domain Tomography

    PubMed Central

    Xia, Rongmin; Li, Xu; He, Bin

    2009-01-01

    A new theory is proposed for the reconstruction of a curl-free vector field whose divergence serves as the acoustic source. The theory is applied to reconstruct vector acoustic sources from the scalar acoustic signals measured on a surface enclosing the source area. It is shown that, under certain conditions, the scalar acoustic measurements can be vectorized according to the known measurement geometry and subsequently be used to reconstruct the original vector field. Theoretically, this method extends the application domain of the existing acoustic reciprocity principle from a scalar field to a vector field, indicating that the stimulating vectorial source and the transmitted acoustic pressure vector (acoustic pressure vectorized according to a certain measurement geometry) are interchangeable. Computer simulation studies were conducted to evaluate the proposed theory, and the numerical results suggest that reconstruction of a vector field using the proposed theory is not sensitive to variation in the detecting distance. The present theory may be applied to magnetoacoustic tomography with magnetic induction (MAT-MI) for reconstructing current distributions from acoustic measurements. A simulation of MAT-MI shows that, compared to existing methods, the present method gives an accurate estimate of the source current distribution and a better conductivity reconstruction. PMID:19211344

  8. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    NASA Astrophysics Data System (ADS)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on a real-parameter representation with a variable selection pressure and a variable probability of mutation is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method, which provides estimates of air-breathing, mixing, and isentropic flow performance, serves as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to within 1% to 9% of numerically simulated values, depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10⁶. The method is also shown to identify beneficial values for particular alleles when they exist, while handling cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.
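
    The ingredients named above (real-parameter chromosomes, a tunable selection pressure, and a per-gene mutation probability) can be sketched compactly. The sphere function below is a stand-in fitness, not the exchange-inlet performance model; the tournament size k plays the role of the selection pressure:

    ```python
    import random

    def sphere(x):
        """Stand-in fitness to minimize (global optimum at the origin)."""
        return sum(v * v for v in x)

    def evolve(dim=3, pop_size=40, gens=120, k=3, p_mut=0.2, seed=0):
        rng = random.Random(seed)
        pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
        for _ in range(gens):
            def select():                 # k-way tournament: larger k = more pressure
                return min(rng.sample(pop, k), key=sphere)
            children = []
            while len(children) < pop_size:
                p1, p2 = select(), select()
                alpha = rng.random()      # blend crossover of real-valued genes
                child = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
                for i in range(dim):      # gaussian mutation with probability p_mut
                    if rng.random() < p_mut:
                        child[i] += rng.gauss(0.0, 0.3)
                children.append(child)
            pop = children
        return min(pop, key=sphere)

    best = evolve()
    print(sphere(best))  # near the global minimum at the origin
    ```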

  9. The method of pulsed x-ray detection with a diode laser.

    PubMed

    Liu, Jun; Ouyang, Xiaoping; Zhang, Zhongbing; Sheng, Liang; Chen, Liang; Tan, Xinjian; Weng, Xiufeng

    2016-12-01

    A new class of pulsed X-ray detection methods, based on sensing carrier changes in a diode laser cavity, is presented and demonstrated. Proof-of-principle experiments on detecting a pulsed X-ray temporal profile were performed with a diode laser having a multiple-quantum-well active layer. The results show that the method can capture the temporal profile of a pulsed X-ray source. By analyzing the carrier rate equation, we predict a minimum value for the pre-bias current of the diode laser; in experiments this minimum lies near the threshold current of the diode laser chip, in general agreement with the theoretical analysis. The relative sensitivity is estimated at about 3.3 × 10⁻¹⁷ C·cm². A response time scale of about 10 ps is obtained from both rate-equation and Monte Carlo analyses.

  10. Airfoil self-noise and prediction

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Pope, D. Stuart; Marcolini, Michael A.

    1989-01-01

    A prediction method is developed for the self-generated noise of an airfoil blade encountering smooth flow. The prediction methods for the individual self-noise mechanisms are semiempirical and are based on previous theoretical studies and data obtained from tests of two- and three-dimensional airfoil blade sections. The self-noise mechanisms are due to specific boundary-layer phenomena, that is, the boundary-layer turbulence passing the trailing edge, separated-boundary-layer and stalled flow over an airfoil, vortex shedding due to laminar boundary layer instabilities, vortex shedding from blunt trailing edges, and the turbulent vortex flow existing near the tip of lifting blades. The predictions are compared successfully with published data from three self-noise studies of different airfoil shapes. An application of the prediction method is reported for a large scale-model helicopter rotor, and the predictions compared well with experimental broadband noise measurements. A computer code of the method is given.

  11. Neural network regulation driven by autonomous neural firings

    NASA Astrophysics Data System (ADS)

    Cho, Myoung Won

    2016-07-01

    Biological neurons fire spontaneously due to the presence of a noisy current. Such autonomous firings may provide a driving force for network formation because synaptic connections can be modified by neural firings. Here, we study the effect of autonomous firings on network formation. Under temporally asymmetric Hebbian learning, bidirectional connections easily lose their balance and become unidirectional. Defining the differences between reciprocal connections as new variables, we can express the learning dynamics in analogy with interacting Ising spins in magnetism. We present a theoretical method to estimate the interaction between the new variables in a neural system. We apply the method to some network systems and identify tendencies of autonomous neural network regulation.
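
    The loss of reciprocal balance can be caricatured with just two weights and a positive feedback between the weight difference and the likely spike order. This is an illustrative toy, not the paper's model; the feedback gain beta and learning rate A are invented parameters:

    ```python
    import random

    rng = random.Random(1)
    w_ab, w_ba = 0.5, 0.5          # reciprocal synaptic weights, initially balanced
    w_max, A, beta = 1.0, 0.02, 0.5

    for _ in range(5000):
        # Autonomous (noisy) firing: the stronger direction is more likely to
        # produce a pre-before-post spike pair (positive feedback).
        p_ab_first = min(1.0, max(0.0, 0.5 + beta * (w_ab - w_ba)))
        if rng.random() < p_ab_first:
            w_ab = min(w_max, w_ab + A)   # potentiate a -> b
            w_ba = max(0.0, w_ba - A)     # depress b -> a (temporal asymmetry)
        else:
            w_ba = min(w_max, w_ba + A)
            w_ab = max(0.0, w_ab - A)

    print(w_ab, w_ba)  # one direction wins: the pair has become unidirectional
    ```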

  12. Calculation of total and ionization cross sections for electron scattering by primary benzene compounds

    NASA Astrophysics Data System (ADS)

    Singh, Suvam; Naghma, Rahla; Kaur, Jaspreet; Antony, Bobby

    2016-07-01

    The total and ionization cross sections for electron scattering by benzene, halobenzenes, toluene, aniline, and phenol are reported over a wide energy domain. The multi-scattering-centre spherical complex optical potential method has been employed to find the total elastic and inelastic cross sections. The total ionization cross section is estimated from the total inelastic cross section using the complex scattering potential-ionization contribution method. In the present article, the first theoretical calculations of electron-impact total and ionization cross sections have been performed for most of these targets, which have numerous practical applications. Reasonable agreement with existing experimental observations is obtained for all the targets reported here, especially for the total cross section.

  13. Interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Bryce, Jeffrey W.

    1987-01-01

    Existing interior noise reduction techniques for aircraft fuselages perform reasonably well at higher frequencies but are inadequate at low frequencies, particularly with respect to the low blade-passage harmonics with high forcing levels found in propeller aircraft. A method is studied in which aircraft fuselages are lined with panels alternately tuned to frequencies above and below the frequency that must be attenuated. Adjacent panels oscillate at equal amplitude, giving equal acoustic source strength, but with opposite phase. Provided these adjacent panels are acoustically compact, the resulting cancellation causes the interior acoustic modes to be cut off, and therefore to be nonpropagating and evanescent. This interior noise reduction method, called Alternate Resonance Tuning (ART), is being investigated theoretically and experimentally. Progress to date is discussed.

  14. Innovation and design approaches within prospective ergonomics.

    PubMed

    Liem, André; Brangier, Eric

    2012-01-01

    In this conceptual article, the topic of "Prospective Ergonomics" is discussed within the context of innovation, design thinking, and design processes and methods. Design thinking is essentially a human-centred innovation process that emphasises observation, collaboration, interpretation, visualisation of ideas, rapid concept prototyping, and concurrent business analysis, which ultimately influences innovation and business strategy. The objective of this project is to develop a roadmap for innovation, involving consumers, designers, and business people in an integrative process, which can be applied to product, service, and business design. A theoretical structure comprising innovation perspectives (1), worldviews supported by rationalist-historicist and empirical-idealistic dimensions (2), and models of "design" reasoning (3) precedes the development and classification of existing methods as well as the introduction of new ones.

  15. A method to model latent heat for transient analysis using NASTRAN

    NASA Technical Reports Server (NTRS)

    Harder, R. L.

    1982-01-01

    A sample heat transfer analysis is demonstrated which includes the heat of fusion. The method can be used to analyze a system with nonconstant specific heat. The enthalpy is introduced as an independent degree of freedom at each node. The user input consists of a curve of temperature as a function of enthalpy, which may include a constant temperature phase change. The basic NASTRAN heat transfer capability is used to model the effects of latent heat with existing direct matrix output and nonlinear load data cards. Although some user care is required, the numerical stability of the integration is quite good when the given recommendations are followed. The theoretical equations used and the NASTRAN techniques are shown.

  16. Building child trauma theory from longitudinal studies: a meta-analysis.

    PubMed

    Alisic, Eva; Jongmans, Marian J; van Wesel, Floryt; Kleber, Rolf J

    2011-07-01

    Many children are exposed to traumatic events, with potentially serious psychological and developmental consequences. Therefore, understanding development of long-term posttraumatic stress in children is essential. We aimed to contribute to child trauma theory by focusing on theory use and theory validation in longitudinal studies. Forty studies measuring short-term predictors and long-term posttraumatic stress symptoms were identified and coded for theoretical grounding, sample characteristics, and correlational effect sizes. Explicit theoretical frameworks were present in a minority of the studies. Important predictors of long-term posttraumatic stress were symptoms of acute and short-term posttraumatic stress, depression, anxiety, and parental posttraumatic stress. Female gender, injury severity, duration of hospitalization, and elevated heart rate shortly after hospitalization yielded small effect sizes. Age, minority status, and socioeconomic status were not significantly related to long-term posttraumatic stress reactions. Since many other variables were not studied frequently enough to compute effect sizes, existing theoretical frameworks could only be partially confirmed or falsified. Child trauma theory-building can be facilitated by development of encouraging journal policies, the use of comparable methods, and more intense collaboration. Copyright © 2011 Elsevier Ltd. All rights reserved.
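
    When correlational effect sizes from comparable studies are pooled, as in this meta-analysis, the standard fixed-effect recipe is Fisher's r-to-z transform with inverse-variance weights (n − 3 per study). The correlations and sample sizes below are illustrative, not values from this review:

    ```python
    import math

    def pool_correlations(rs, ns):
        """Fixed-effect pooled correlation via Fisher's r-to-z transform."""
        zs = [math.atanh(r) for r in rs]
        ws = [n - 3 for n in ns]                     # inverse-variance weights
        z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
        return math.tanh(z_bar)                      # back-transform to r

    r_pooled = pool_correlations([0.45, 0.30, 0.52], [80, 120, 60])
    print(r_pooled)  # a sample-size-weighted compromise between the correlations
    ```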

  17. A theoretical model of speed-dependent steering torque for rolling tyres

    NASA Astrophysics Data System (ADS)

    Wei, Yintao; Oertel, Christian; Liu, Yahui; Li, Xuebing

    2016-04-01

    It is well known that the tyre steering torque is highly dependent on the tyre rolling speed. In limiting cases, e.g. during a parking manoeuvre, the steering torque approaches its maximum; with increasing tyre speed, the steering torque decreases rapidly. Accurate modelling of the speed-dependent behaviour of the tyre steering torque is a key factor in calibrating the electric power steering (EPS) system and tuning the handling performance of vehicles. However, no satisfactory theoretical model can be found in the existing literature to explain this phenomenon. This paper proposes a new theoretical framework to model this important tyre behaviour, which includes three key factors: (1) tyre three-dimensional transient rolling kinematics with turn-slip; (2) dynamical force and moment generation; and (3) the mixed Lagrange-Euler method for solving the contact deformation. A nonlinear finite-element code has been developed to implement the proposed approach. The main mechanism for the speed-dependent steering torque is found to be turn-slip-related kinematics. This paper provides a theory to explain the complex mechanism of tyre steering torque generation, which helps in understanding the speed-dependent tyre steering torque, tyre road feel, and EPS calibration.

  18. Defining Interdisciplinary Research: Conclusions from a Critical Review of the Literature

    PubMed Central

    Aboelela, Sally W; Larson, Elaine; Bakken, Suzanne; Carrasquillo, Olveen; Formicola, Allan; Glied, Sherry A; Haas, Janet; Gebbie, Kristine M

    2007-01-01

    Objective To summarize findings from a systematic exploration of existing literature and views regarding interdisciplinarity, to discuss themes and components of such work, and to propose a theoretically based definition of interdisciplinary research. Data Sources/Study Setting Two major data sources were used: interviews with researchers from various disciplines, and a systematic review of the education, business, and health care literature from January 1980 through January 2005. Study Design Systematic review of literature, one-on-one interviews, field test (survey). Data Collection/Extraction Methods We reviewed 14 definitions of interdisciplinarity, the characteristics of 42 interdisciplinary research publications from multiple fields of study, and 14 researcher interviews to arrive at a preliminary definition of interdisciplinary research. That definition was then field tested by 12 individuals with interdisciplinary research experience, and their responses incorporated into the definition of interdisciplinary research proposed in this paper. Principal Findings Three key definitional characteristics were identified: the qualitative mode of research (and its theoretical underpinnings), existence of a continuum of synthesis among disciplines, and the desired outcome of the interdisciplinary research. Conclusion Existing literature from several fields did not provide a definition for interdisciplinary research of sufficient specificity to facilitate activities such as identification of the competencies, structure, and resources needed for health care and health policy research. This analysis led to the proposed definition, which is designed to aid decision makers in funding agencies/program committees and researchers to identify and take full advantage of the interdisciplinary approach, and to serve as a basis for competency-based formalized training to provide researchers with interdisciplinary skills. PMID:17355595

  19. Linear Transforms for Fourier Data on the Sphere: Application to High Angular Resolution Diffusion MRI of the Brain

    PubMed Central

    Haldar, Justin P.; Leahy, Richard M.

    2013-01-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
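
    For the classical FRT, the spherical-harmonic implementation mentioned above is especially simple: by the Funk-Hecke theorem, the transform multiplies each degree-l coefficient by 2πP_l(0), which vanishes for odd l. A sketch of that diagonal action (the input coefficients are placeholders):

    ```python
    import math

    def legendre_at_zero(l):
        """P_l(0) = (-1)^(l/2) * (l-1)!! / l!! for even l, and 0 for odd l."""
        if l % 2 == 1:
            return 0.0
        num = math.prod(range(1, l, 2))      # (l-1)!! (empty product = 1)
        den = math.prod(range(2, l + 1, 2))  # l!!
        return (-1) ** (l // 2) * num / den

    def frt_coefficients(sh_coeffs):
        """Apply the FRT to {degree l: [coefficients]} in the SH basis."""
        return {l: [2.0 * math.pi * legendre_at_zero(l) * c for c in cs]
                for l, cs in sh_coeffs.items()}

    out = frt_coefficients({0: [1.0], 1: [0.3, -0.2, 0.1], 2: [0.5]})
    print(out)  # degree 0 scaled by 2*pi, odd degrees annihilated, degree 2 by -pi
    ```

    The annihilation of odd degrees is one reason generalizations such as the FRACT are attractive: they can retain information the plain FRT discards.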

  20. When can a woman resume or initiate contraception after taking emergency contraceptive pills? A systematic review.

    PubMed

    Salcedo, Jennifer; Rodriguez, Maria I; Curtis, Kathryn M; Kapp, Nathalie

    2013-05-01

    Hormonal emergency contraception can postpone ovulation, making a woman vulnerable to pregnancy later in the same cycle. However, concern exists as to whether concurrently administered emergency contraceptive pills (ECP) and other hormonal methods of contraception may reduce the effectiveness of either medication. A systematic review of the literature using PubMed and the Cochrane databases was performed to identify articles concerning the resumption or initiation of regular contraception within the same cycle as ECP use. We searched for articles in any language published between 1980 and April 2012, covering all emergency contraceptive pill methods available in the USA. The search strategy identified 184 articles in the PubMed and Cochrane databases, of which none met the inclusion criteria. The drug manufacturer advises continuation or initiation of routine contraception as soon as possible after use of ulipristal acetate, with concomitant use of a reliable barrier method until the next menses. However, a theoretical concern exists that, given ulipristal acetate's function as a selective progesterone receptor modulator, coadministration of a progestin could decrease its effectiveness as an emergency contraceptive. Initiation of hormonal contraception following levonorgestrel or the Yuzpe regimen for emergency contraception carries no similar concern for decreased method effectiveness. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering

    PubMed Central

    Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani

    2012-01-01

    Debye summation, which involves the summation of sinc functions of the distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide-angle X-ray scattering (SAXS/WAXS), and small-angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM) and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translations of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small-angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders-of-magnitude improvement in computational complexity over existing methods, while maintaining the prescribed accuracy. PMID:22707386
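
    The direct quadratic-cost baseline that the hierarchical algorithm accelerates is short to state. A sketch with a hypothetical three-atom "molecule" (coordinates and form factors are placeholders):

    ```python
    import math

    def sinc(x):
        """Unnormalized sinc, with the removable singularity at 0 filled in."""
        return 1.0 if x == 0.0 else math.sin(x) / x

    def debye_intensity(q, coords, f):
        """Direct O(N^2) Debye sum: I(q) = sum_ij f_i * f_j * sinc(q * r_ij)."""
        total = 0.0
        for i, ci in enumerate(coords):
            for j, cj in enumerate(coords):
                total += f[i] * f[j] * sinc(q * math.dist(ci, cj))
        return total

    coords = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (0.0, 2.0, 0.0)]
    f = [1.0, 1.0, 2.0]
    print(debye_intensity(0.0, coords, f))  # q -> 0 limit equals (sum f)^2 = 16
    ```

    The double loop over atom pairs is exactly the N² cost that makes this baseline impractical for large molecules.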

  2. Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy

    NASA Technical Reports Server (NTRS)

    Ford, G. E.

    1986-01-01

    To characterize and quantify the performance of the Landsat Thematic Mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated, and the accuracy of geometric-error correction in TM images has been analyzed. Theoretical evaluations and comparisons of existing methods for the design of linear transformations for dimensionality reduction are presented. These methods include the discrete Karhunen-Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), the Thematic Mapper (TM) Tasseled Cap linear transformation, and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed: the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Versions of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class-conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that, in terms of probability of classification error and the percentage of cumulative eigenvalues, the six reflective bands of the TM data require only a three-dimensional feature space. It is also shown experimentally that, for the proposed methods, the classes with high weights show the expected improvements in class-conditional probability-of-error estimates.
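
    The common core of the KL-expansion approach compared above is diagonalizing the band covariance and keeping the leading axes. A toy sketch with two synthetic, strongly correlated "bands" standing in for TM channels (not real Landsat data):

    ```python
    import math
    import random

    rng = random.Random(0)
    n = 2000
    base = [rng.gauss(0.0, 1.0) for _ in range(n)]          # shared scene signal
    band1 = [b + 0.1 * rng.gauss(0.0, 1.0) for b in base]   # band = signal + noise
    band2 = [b + 0.1 * rng.gauss(0.0, 1.0) for b in base]

    def cov(u, v):
        mu, mv = sum(u) / n, sum(v) / n
        return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (n - 1)

    c11, c22, c12 = cov(band1, band1), cov(band2, band2), cov(band1, band2)
    # Eigenvalues of the 2x2 band covariance matrix in closed form.
    tr, det = c11 + c22, c11 * c22 - c12 * c12
    lam1 = tr / 2 + math.sqrt(tr * tr / 4 - det)
    explained = lam1 / tr
    print(explained)  # the leading KL axis carries nearly all the variance
    ```

    The same variance concentration, at six-band scale, is why the TM's reflective bands reduce to a three-dimensional feature space.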

  3. Retinal artery-vein classification via topology estimation

    PubMed Central

    Estrada, Rolando; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Tomasi, Carlo; Farsiu, Sina

    2015-01-01

    We propose a novel, graph-theoretic framework for distinguishing arteries from veins in a fundus image. We make use of the underlying vessel topology to better classify small and midsized vessels. We extend our previously proposed tree topology estimation framework by incorporating expert, domain-specific features to construct a simple, yet powerful global likelihood model. We efficiently maximize this model by iteratively exploring the space of possible solutions consistent with the projected vessels. We tested our method on four retinal datasets and achieved classification accuracies of 91.0%, 93.5%, 91.7%, and 90.9%, outperforming existing methods. Our results show the effectiveness of our approach, which is capable of analyzing the entire vasculature, including peripheral vessels, in wide field-of-view fundus photographs. This topology-based method is a potentially important tool for diagnosing diseases with retinal vascular manifestation. PMID:26068204

  4. A multi-level systems perspective for the science of team science.

    PubMed

    Börner, Katy; Contractor, Noshir; Falk-Krzesinski, Holly J; Fiore, Stephen M; Hall, Kara L; Keyton, Joann; Spring, Bonnie; Stokols, Daniel; Trochim, William; Uzzi, Brian

    2010-09-15

    This Commentary describes recent research progress and professional developments in the study of scientific teamwork, an area of inquiry termed the "science of team science" (SciTS, pronounced "sahyts"). It proposes a systems perspective that incorporates a mixed-methods approach to SciTS that is commensurate with the conceptual, methodological, and translational complexities addressed within the SciTS field. The theoretically grounded and practically useful framework is intended to integrate existing and future lines of SciTS research to facilitate the field's evolution as it addresses key challenges spanning macro, meso, and micro levels of analysis.

  5. Three waves for quantum gravity

    NASA Astrophysics Data System (ADS)

    Calmet, Xavier; Latosh, Boris

    2018-03-01

    Using effective field theoretical methods, we show that besides the already observed gravitational waves, quantum gravity predicts two further massive classical fields, leading to two new massive waves. We set a limit on the masses of these new modes using data from the Eöt-Wash experiment. We point out that the existence of these new states is a model-independent prediction of quantum gravity. We then explain how these new classical fields could impact astrophysical processes, in particular the binary inspirals of neutron stars or black holes. We calculate the emission rate of these new states in binary-inspiral astrophysical processes.

  6. Slow-wave propagation on monolithic microwave integrated circuits with layered and non-layered structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tzuang, C.K.C.

    1986-01-01

    Various MMIC (monolithic microwave integrated circuit) planar waveguides have shown the possible existence of slow-wave propagation. In many practical applications of these slow-wave circuits, the semiconductor devices have nonuniform material properties that may affect the slow-wave propagation. In the first part of the dissertation, the effects of the nonuniform material properties are studied by a finite-element method. In addition, the transient pulse excitations of these slow-wave circuits are also of great theoretical and practical interest. In the second part, a time-domain analysis of a slow-wave coplanar waveguide is presented.

  7. The BAPE 2 balloon-borne CO2

    NASA Technical Reports Server (NTRS)

    Degnan, J. J.; Walker, H. E.; Peruso, C. J.; Johnson, E. H.; Klein, B. J.; Mcelroy, J. H.

    1972-01-01

    The systems and techniques which were utilized in the experiment to establish an air-to-ground CO2 laser heterodyne link are described along with the successes and problems encountered when the heterodyne receiver and laser transmitter package were removed from the controlled environment of the laboratory. Major topics discussed include: existing systems and the underlying principles involved in their operation; experimental techniques and optical alignment methods which were found to be useful; theoretical calculations of signal strengths expected under a variety of test conditions and in actual flight; and the experimental results including problems encountered and their possible solutions.

  8. Testing a Theoretical Model of Perceived Self-Efficacy for Cancer-Related Fatigue Self-Management and Optimal Physical Functional Status

    PubMed Central

    Hoffman, Amy J.; von Eye, Alexander; Gift, Audrey G.; Given, Barbara A.; Given, Charles W.; Rothert, Marilyn

    2009-01-01

    Background: Critical gaps exist in the understanding of cancer symptoms, particularly cancer-related fatigue (CRF). Existing theories and models do not examine the key role perceived self-efficacy (PSE) plays in a person's ability to manage symptoms. Objectives: To test the hypothesis that physical functional status (PFS) is predicted by patient characteristics, CRF, other symptoms, and PSE for fatigue self-management in persons with cancer. Methods: This study is a secondary analysis of baseline data from two randomized controlled trials. The combined data set includes 298 subjects who were undergoing a course of chemotherapy. Key variables included physiological and contextual patient characteristics, the severity of CRF and other symptoms, PSE, and PFS. Path analysis examined the relationships among the variables in the proposed theoretical model. Results: Persons with cancer reported CRF as the most prevalent symptom among a mean of 7.4 other concurrent symptoms. The severity of CRF had both direct and indirect effects on PFS: CRF had a direct adverse impact on PFS (t = -7.02) and an indirect adverse effect through the severity of the other symptoms (t = 9.69), which also adversely impacted PFS (t = -2.71). Consistent with the proposed theoretical model, PSE had a positive effect on the PFS (t = 2.87) of persons with cancer while serving as a mediator between CRF severity and PFS. Discussion: Cancer-related fatigue is prevalent and related to the presence of other symptoms, and PSE for fatigue self-management is an important factor influencing CRF and PFS. A foundation is provided for future intervention studies to increase PSE to achieve optimal PFS in persons with cancer. PMID:19092553

  9. A graph-theoretic approach for inparalog detection.

    PubMed

    Tremblay-Savard, Olivier; Swenson, Krister M

    2012-01-01

    Understanding the history of a gene family that evolves through duplication, speciation, and loss is a fundamental problem in comparative genomics. Features such as function, position, and structural similarity between genes are intimately connected to this history; relationships between genes such as orthology (genes related through a speciation event) or paralogy (genes related through a duplication event) are usually correlated with these features. For example, recent work has shown that in human and mouse there is a strong connection between function and inparalogs, the paralogs that were created since the speciation event separating the human and mouse lineages. Methods exist for detecting inparalogs that either use information from only two species, or consider a set of species but rely on clustering methods. In this paper we present a graph-theoretic approach for finding lower bounds on the number of inparalogs for a given set of species; we pose an edge covering problem on the similarity graph and give an efficient 2/3-approximation as well as a faster heuristic. Since the physical position of inparalogs corresponding to recent speciations is not likely to have changed since the duplication, we also use our predictions to estimate the types of duplications that have occurred in some vertebrates and Drosophila.
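    As a rough illustration of the two-species setting the abstract mentions (not the paper's graph-covering method or its 2/3-approximation), the classic rule flags a gene as an inparalog candidate when its best within-species similarity exceeds its best cross-species similarity. The data layout (`sim` dict, `species` map) and all gene names below are hypothetical:

```python
def detect_inparalogs(sim, species):
    """Naive two-species inparalog detection: gene g is an inparalog
    candidate if its best within-species similarity exceeds its best
    cross-species similarity. sim[(g, h)] is a symmetric similarity score;
    species[g] maps each gene to its species label."""
    genes = list(species)

    def best(g, same_species):
        scores = [sim.get((g, h), sim.get((h, g), 0.0))
                  for h in genes
                  if h != g and (species[h] == species[g]) == same_species]
        return max(scores, default=0.0)

    return {g for g in genes if best(g, True) > best(g, False)}

# Toy example: a1/a2 are recent duplicates in species A; b1 is their ortholog in B.
species = {"a1": "A", "a2": "A", "b1": "B"}
sim = {("a1", "a2"): 0.9, ("a1", "b1"): 0.6, ("a2", "b1"): 0.5}
inpars = detect_inparalogs(sim, species)  # a1 and a2 duplicated after the A/B split
```

    The paper's contribution replaces this pairwise rule with an edge covering formulation over many species, which yields provable lower bounds rather than heuristic calls.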

  10. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most existing systems are based on the generalized cross-correlation method and assume Gaussianity of the source. It has been shown that the distribution of speech captured with far-field microphones varies strongly, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate with the underlying assumption for the speech distribution, while also being subject to multi-path reflections and competing background noise. This paper investigates the effect upon TDE of modeling the source signal with different speech-based distributions. An information-theoretical TDE method that indirectly encapsulates higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian-distributed source has been replaced by a generalized Gaussian distribution, which allows the problem to be evaluated under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on this criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
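    For reference, the univariate closed form mentioned in the abstract is standard: a generalized Gaussian with scale α and shape β, f(x) ∝ exp(-(|x|/α)^β), has differential entropy h = 1/β + ln(2αΓ(1/β)/β). A minimal sketch (the TDE criterion itself is not reproduced here):

```python
import math

def gg_entropy(alpha, beta):
    """Differential entropy (nats) of a univariate generalized Gaussian
    with scale alpha and shape beta: h = 1/beta + ln(2*alpha*Gamma(1/beta)/beta)."""
    return (1.0 / beta + math.log(2.0 * alpha)
            + math.lgamma(1.0 / beta) - math.log(beta))

# Sanity check: beta = 2 with alpha = sigma*sqrt(2) is the Gaussian case,
# whose entropy is 0.5 * ln(2*pi*e*sigma**2).
sigma = 1.5
h_gg = gg_entropy(sigma * math.sqrt(2.0), 2.0)
h_gauss = 0.5 * math.log(2.0 * math.pi * math.e * sigma**2)

# beta = 1 with alpha = b is the Laplacian case, entropy 1 + ln(2b).
h_laplace = gg_entropy(2.0, 1.0)
```

    Varying β between the Gaussian (β = 2) and Laplacian (β = 1) endpoints is what lets the paper sweep over speech-shaped distributions with one formula.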

  11. [Research on optimization of mathematical model of flow injection-hydride generation-atomic fluorescence spectrometry].

    PubMed

    Cui, Jian; Zhao, Xue-Hong; Wang, Yan; Xiao, Ya-Bing; Jiang, Xue-Hui; Dai, Li

    2014-01-01

    Flow injection-hydride generation-atomic fluorescence spectrometry is widely used in the health, environmental, geological and metallurgical fields owing to its high sensitivity, wide measurement range and fast analytical speed. However, optimizing the method is difficult because many parameters affect sensitivity and broadening, and optimal conditions are generally sought through repeated experiments. The present paper proposes a mathematical model relating the parameters to the sensitivity and broadening coefficients, derived from the law of conservation of mass according to the characteristics of the hydride chemical reaction and the composition of the system; the model proved accurate when theoretical simulations were compared with experimental results for an arsanilic acid standard solution. Finally, the paper provides a relation map between the parameters and the sensitivity/broadening coefficients and concludes that GLS volume, carrier solution flow rate and sample loop volume are the factors most affecting the sensitivity and broadening coefficients. Optimizing these three factors with the relation map improved the relative sensitivity by a factor of 2.9 and reduced the relative broadening to 0.76 of its original value. This model can provide theoretical guidance for the optimization of the experimental conditions.

  12. Convex Banding of the Covariance Matrix

    PubMed Central

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings. PMID:28042189
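    The tapering operation the abstract describes can be illustrated with a hard-banded (non-adaptive) taper: zero every entry of the sample covariance more than a fixed bandwidth off the diagonal. This is a simplified stand-in, not the paper's convex, data-adaptive estimator:

```python
import numpy as np

def band_taper(S, bandwidth):
    """Hard-band a covariance estimate S for variables with a known ordering:
    entries more than `bandwidth` positions off the diagonal are set to zero.
    The paper's convex banding estimator instead learns a Toeplitz,
    sparsely-banded taper from the data."""
    p = S.shape[0]
    offsets = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    return S * (offsets <= bandwidth)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))   # n = 200 samples, p = 8 ordered variables
S = np.cov(X, rowvar=False)         # sample covariance
S_banded = band_taper(S, bandwidth=2)
```

    Choosing the bandwidth is exactly what the convex formulation automates; with a fixed bandwidth the estimator cannot adapt when the truth is only approximately banded.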

  13. Food insecurity: A concept analysis

    PubMed Central

    Schroeder, Krista; Smaldone, Arlene

    2015-01-01

    Aim: To report an analysis of the concept of food insecurity, in order to 1) propose a theoretical model of food insecurity useful to nursing and 2) discuss its implications for nursing practice, nursing research, and health promotion. Background: Forty-eight million Americans are food insecure. As food insecurity is associated with multiple negative health effects, nursing intervention is warranted. Design: Concept analysis. Data sources: A literature search was conducted in May 2014 in Scopus and MEDLINE using the exploded term "food insecur*." No year limit was placed. Government websites and popular media were searched to ensure a full understanding of the concept. Review methods: Iterative analysis, using the Walker and Avant method. Results: Food insecurity is defined by an uncertain ability or inability to procure food, an inability to procure enough food, being unable to live a healthy life, and feeling unsatisfied. A proposed theoretical model of food insecurity, adapted from the Socio-Ecological Model, identifies three layers of food insecurity (individual, community, society), with potential for nursing impact at each level. Conclusion: Nurses must work to fight food insecurity. There exists a potential for nursing impact that is currently unrealized. Nursing impact can be guided by a new conceptual model, Food Insecurity within the Nursing Paradigm. PMID:25612146

  14. Convex Banding of the Covariance Matrix.

    PubMed

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.

  15. An Information-Theoretical Approach to Image Resolution Applied to Neutron Imaging Detectors Based Upon Individual Discriminator Signals

    NASA Astrophysics Data System (ADS)

    Clergeau, Jean-François; Ferraton, Matthieu; Guérard, Bruno; Khaplanov, Anton; Piscitelli, Francesco; Platz, Martin; Rigal, Jean-Marie; Van Esch, Patrick; Daullé, Thibault

    2017-01-01

    1D or 2D neutron position-sensitive detectors with individual wire or strip readout using discriminators have the advantage of being able to treat several neutron impacts partially overlapping in time, hence reducing global dead time. A single neutron impact usually gives rise to several discriminator signals. In this paper, we introduce an information-theoretical definition of image resolution. Two point-like spots of neutron impacts separated by a given distance act as a source of information (each neutron hit belongs to one spot or the other), and the detector plus signal treatment is regarded as an imperfect communication channel that transmits this information. The maximal mutual information obtained from this channel as a function of the distance between the spots makes it possible to define a calibration-independent measure of position resolution. We then apply this measure to quantify the position resolution of different algorithms treating these individual discriminator signals which can be implemented in firmware. The method is then applied to different detectors existing at the ILL. Center-of-gravity methods usually improve the position resolution over best-wire algorithms, which are the standard way of treating these signals.
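    The two-spot channel idea can be sketched numerically: a binary source (which spot fired), a Gaussian point-spread standing in for the detector response, and I(X;Y) = H(Y) - H(Y|X) evaluated on a discretized readout. The Gaussian blur and all parameter values are illustrative assumptions, not the paper's detector model:

```python
import numpy as np

def two_spot_mutual_info(distance, sigma=1.0, span=8.0, nbins=200):
    """Mutual information (bits) between a binary source (which of two spots,
    `distance` apart, emitted the neutron) and a position readout blurred by
    a Gaussian point-spread of width `sigma`."""
    x = np.linspace(-span, span, nbins)

    def blurred(mu):
        p = np.exp(-0.5 * ((x - mu) / sigma) ** 2)
        return p / p.sum()            # conditional readout distribution p(y|x)

    def h_bits(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    p0, p1 = blurred(-distance / 2), blurred(+distance / 2)
    py = 0.5 * (p0 + p1)              # marginal over the readout
    return h_bits(py) - 0.5 * (h_bits(p0) + h_bits(p1))
```

    The curve rises from 0 bits (spots indistinguishable) toward the 1-bit ceiling as the separation grows, and the distance at which it saturates serves as the calibration-independent resolution measure.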

  16. Structural Optimization of Triboelectric Nanogenerator for Harvesting Water Wave Energy.

    PubMed

    Jiang, Tao; Zhang, Li Min; Chen, Xiangyu; Han, Chang Bao; Tang, Wei; Zhang, Chi; Xu, Liang; Wang, Zhong Lin

    2015-12-22

    Ocean waves are one of the most abundant energy sources on earth, but harvesting such energy is rather challenging due to various limitations of current technologies. Recently, networks formed by triboelectric nanogenerators (TENGs) have been proposed as a promising technology for harvesting water wave energy. In this work, a basic unit for the TENG network was studied and optimized: a box structure whose walls are TENGs, each built from a wavy-structured Cu-Kapton-Cu film and two FEP thin films, with a metal ball enclosed inside. By combining theoretical calculations and experimental studies, the output performance of the TENG unit was investigated for various structural parameters, such as the size, mass, or number of the metal balls. Theoretically, the output characteristics of the TENG during its collision with the ball were numerically calculated by the finite element method and an interpolation method, revealing an optimum ball size or mass that maximizes output power and electric energy. Moreover, the theoretical results were well verified by the experimental tests. The present work could provide guidance for the structural optimization of wavy-structured TENGs for effectively harvesting water wave energy toward the dream of large-scale blue energy.

  17. A Hyperspherical Adaptive Sparse-Grid Method for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.; ...

    2015-06-24

    This study proposes and analyzes a hyperspherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hypersurface of an N-dimensional discontinuous quantity of interest, by virtue of a hyperspherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyperspherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. In addition, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  18. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max D.

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided as are several numerical examples that illustrate the effectiveness of the approach.

  19. Universal Bilingualism.

    ERIC Educational Resources Information Center

    Roeper, Thomas

    1999-01-01

    Suggests that a narrow kind of bilingualism exists within every language and is present whenever two properties exist in a language that are not statable within a single grammar. This theoretical bilingualism is defined in terms of the minimalist theory of syntax presented by Chomsky (1995). (Author/VWL)

  20. The Status of the Concept of "Phoneme" in Psycholinguistics

    ERIC Educational Resources Information Center

    Uppstad, Per Henning; Tonnessen, Finn Egil

    2010-01-01

    The notion of the phoneme counts as a break-through of modern theoretical linguistics in the early twentieth century. It paved the way for descriptions of distinctive features at different levels in linguistics. Although it has since then had a turbulent existence across altering theoretical positions, it remains a powerful concept of a…

  1. Tax Law System

    ERIC Educational Resources Information Center

    Tsindeliani, Imeda A.

    2016-01-01

    The article considers current theoretical problems of the subject and system of tax law in Russia. Theoretical approaches to determining the nature of separate institutes of tax law are presented. The existence of a pandect system in the construction of tax law as a sub-branch of Russian financial law is substantiated. The goal of the…

  2. A Model for Designing Peer-Initiated Activities to Promote Racial Awareness and an Appreciation of Differences.

    ERIC Educational Resources Information Center

    Mann, Barbara A.; Moser, Rita M.

    1991-01-01

    Presents a theoretical framework suggesting ways to design peer intervention programs and group existing programs. Suggests criteria for effective racial awareness programs, discussing examples of successful college prejudice activities. Notes diversity education efforts are most successful when based on a theoretical model that recognizes the…

  3. Seven Basic Steps to Solving Ethical Dilemmas in Special Education: A Decision-Making Framework

    ERIC Educational Resources Information Center

    Stockall, Nancy; Dennis, Lindsay R.

    2015-01-01

    This article presents a seven-step framework for decision making to solve ethical issues in special education. The authors developed the framework from the existing literature and theoretical frameworks of justice, critique, care, and professionalism. The authors briefly discuss each theoretical framework and then describe the decision-making…

  4. Proposing a Theoretical Framework for Digital Age Youth Information Behavior Building upon Radical Change Theory

    ERIC Educational Resources Information Center

    Koh, Kyungwon

    2011-01-01

    Contemporary young people are engaged in a variety of information behaviors, such as information seeking, using, sharing, and creating. The ways youth interact with information have transformed in the shifting digital information environment; however, relatively little empirical research exists and no theoretical framework adequately explains…

  5. Chinese Learning Styles: Blending Confucian and Western Theories

    ERIC Educational Resources Information Center

    Corcoran, Charles

    2014-01-01

    The multitude of philosophies that currently exists in workforce education in China makes it difficult to decide on a singular theoretical foundation. Therefore, it seems most prudent to begin with those theories that align with Confucian values as well as include humanistic, pragmatist, behaviorist, and other elements. Such a theoretical base,…

  6. Meta-Theoretical Contributions to the Constitution of a Model-Based Didactics of Science

    NASA Astrophysics Data System (ADS)

    Ariza, Yefrin; Lorenzano, Pablo; Adúriz-Bravo, Agustín

    2016-10-01

    There is nowadays consensus in the community of didactics of science (i.e. science education understood as an academic discipline) regarding the need to include the philosophy of science in didactical research, science teacher education, curriculum design, and the practice of science education in all educational levels. Some authors have identified an ever-increasing use of the concept of `theoretical model', stemming from the so-called semantic view of scientific theories. However, it can be recognised that, in didactics of science, there are over-simplified transpositions of the idea of model (and of other meta-theoretical ideas). In this sense, contemporary philosophy of science is often blurred or distorted in the science education literature. In this paper, we address the discussion around some meta-theoretical concepts that are introduced into didactics of science due to their perceived educational value. We argue for the existence of a `semantic family', and we characterise four different versions of semantic views existing within the family. In particular, we seek to contribute to establishing a model-based didactics of science mainly supported in this semantic family.

  7. Space-Related Applications of Intelligent Control: Which Algorithm to Choose? (Theoretical Analysis of the Problem)

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik

    1996-01-01

    For a space mission to be successful it is vitally important to have a good control strategy. For example, with the Space Shuttle it is necessary to guarantee the success and smoothness of docking, the smoothness and fuel efficiency of trajectory control, etc. For an automated planetary mission it is important to control the spacecraft's trajectory, and after that, to control the planetary rover so that it would be operable for the longest possible period of time. In many complicated control situations, traditional methods of control theory are difficult or even impossible to apply. In general, in uncertain situations, where no routine methods are directly applicable, we must rely on the creativity and skill of the human operators. In order to simulate these experts, an intelligent control methodology must be developed. The research objectives of this project were: to analyze existing control techniques; to find out which of these techniques is the best with respect to the basic optimality criteria (stability, smoothness, robustness); and, if for some problems, none of the existing techniques is satisfactory, to design new, better intelligent control techniques.

  8. Basic Brackets of a 2D Model for the Hodge Theory Without its Canonical Conjugate Momenta

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Gupta, S.; Malik, R. P.

    2016-06-01

    We deduce the canonical brackets for a two (1+1)-dimensional (2D) free Abelian 1-form gauge theory by exploiting the beauty and strength of the continuous symmetries of a Becchi-Rouet-Stora-Tyutin (BRST) invariant Lagrangian density that respects, in totality, six continuous symmetries. These symmetries entail upon this model to become a field theoretic example of Hodge theory. Taken together, these symmetries enforce the existence of exactly the same canonical brackets amongst the creation and annihilation operators that are found to exist within the standard canonical quantization scheme. These creation and annihilation operators appear in the normal mode expansion of the basic fields of this theory. In other words, we provide an alternative to the canonical method of quantization for our present model of Hodge theory where the continuous internal symmetries play a decisive role. We conjecture that our method of quantization is valid for a class of field theories that are tractable physical examples for the Hodge theory. This statement is true in any arbitrary dimension of spacetime.

  9. Coarse-Graining Polymer Field Theory for Fast and Accurate Simulations of Directed Self-Assembly

    NASA Astrophysics Data System (ADS)

    Liu, Jimmy; Delaney, Kris; Fredrickson, Glenn

    To design effective manufacturing processes using polymer directed self-assembly (DSA), the semiconductor industry benefits greatly from having a complete picture of stable and defective polymer configurations. Field-theoretic simulations are an effective way to study these configurations and predict defect populations. Self-consistent field theory (SCFT) is a particularly successful theory for studies of DSA. Although other models exist that are faster to simulate, these models are phenomenological or derived through asymptotic approximations, often leading to a loss of accuracy relative to SCFT. In this study, we employ our recently-developed method to produce an accurate coarse-grained field theory for diblock copolymers. The method uses a force- and stress-matching strategy to map output from SCFT simulations into parameters for an optimized phase field model. This optimized phase field model is just as fast as existing phenomenological phase field models, but makes more accurate predictions of polymer self-assembly, both in bulk and in confined systems. We study the performance of this model under various conditions, including its predictions of domain spacing, morphology and defect formation energies. Samsung Electronics.

  10. Mechanism of Flutter: A Theoretical and Experimental Investigation of the Flutter Problem

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore; Garrick, I E

    1940-01-01

    The results of the basic flutter theory originally devised in 1934 and published as NACA Technical Report no. 496 are presented in a simpler and more complete form convenient for further studies. The paper attempts to facilitate the judgement of flutter problems by a systematic survey of the theoretical effects of the various parameters. A large number of experiments were conducted on cantilever wings, with and without ailerons, in the NACA high-speed wind tunnel for the purpose of verifying the theory and to study its adaptability to three-dimensional problems. The experiments included studies on wing taper ratios, nacelles, attached floats, and external bracings. The essential effects in the transition to the three-dimensional problem have been established. Of particular interest is the existence of specific flutter modes as distinguished from ordinary vibration modes. It is shown that there exists a remarkable agreement between theoretical and experimental results.

  11. Global analysis of seasonal streamflow predictability using an ensemble prediction system and observations from 6192 small catchments worldwide

    NASA Astrophysics Data System (ADS)

    van Dijk, Albert I. J. M.; Peña-Arancibia, Jorge L.; Wood, Eric F.; Sheffield, Justin; Beck, Hylke E.

    2013-05-01

    Ideally, a seasonal streamflow forecasting system would ingest skilful climate forecasts and propagate these through calibrated hydrological models initialized with observed catchment conditions. At global scale, practical problems exist in each of these aspects. For the first time, we analyzed theoretical and actual skill in bimonthly streamflow forecasts from a global ensemble streamflow prediction (ESP) system. Forecasts were generated six times per year for 1979-2008 by an initialized hydrological model and an ensemble of 1° resolution daily climate estimates for the preceding 30 years. A post-ESP conditional sampling method was applied to 2.6% of forecasts, based on predictive relationships between precipitation and 1 of 21 climate indices prior to the forecast date. Theoretical skill was assessed against a reference run with historic forcing. Actual skill was assessed against streamflow records for 6192 small (<10,000 km2) catchments worldwide. The results show that initial catchment conditions provide the main source of skill. Post-ESP sampling enhanced skill in equatorial South America and Southeast Asia, particularly in terms of tercile probability skill, due to the persistence and influence of the El Niño Southern Oscillation. Actual skill was on average 54% of theoretical skill but considerably more for selected regions and times of year. The realized fraction of the theoretical skill probably depended primarily on the quality of precipitation estimates. Forecast skill could be predicted as the product of theoretical skill and historic model performance. Increases in seasonal forecast skill are likely to require improvement in the observation of precipitation and initial hydrological conditions.

  12. Robustness of continuous-time adaptive control algorithms in the presence of unmodeled dynamics

    NASA Technical Reports Server (NTRS)

    Rohrs, C. E.; Valavani, L.; Athans, M.; Stein, G.

    1985-01-01

    This paper examines the robustness properties of existing adaptive control algorithms to unmodeled plant high-frequency dynamics and unmeasurable output disturbances. It is demonstrated that there exist two infinite-gain operators in the nonlinear dynamic system which determines the time-evolution of output and parameter errors. The pragmatic implication of the existence of such infinite-gain operators is that: (1) sinusoidal reference inputs at specific frequencies and/or (2) sinusoidal output disturbances at any frequency (including dc) can cause the loop gain to increase without bound, thereby exciting the unmodeled high-frequency dynamics and yielding an unstable control system. Hence, it is concluded that existing adaptive control algorithms, as presented in the literature referenced in this paper, cannot be used with confidence in practical designs where the plant contains unmodeled dynamics, because instability is likely to result. Further understanding is required to ascertain how the currently implemented adaptive systems differ from the theoretical systems studied here and how further theoretical development can improve the robustness of adaptive controllers.

  13. Communication: Electron ionization of DNA bases.

    PubMed

    Rahman, M A; Krishnakumar, E

    2016-04-28

    No reliable experimental data exist for the partial and total electron ionization cross sections of DNA bases, which are crucial for modeling radiation damage in the genetic material of living cells. We have measured a complete set of absolute partial electron ionization cross sections up to 500 eV for DNA bases for the first time by using the relative flow technique. These partial cross sections are summed to obtain total ion cross sections for all four bases, which are compared with the existing theoretical calculations and the only set of measured absolute cross sections. Our measurements clearly resolve the existing discrepancy between the theoretical and experimental results, thereby providing for the first time reliable numbers for partial and total ion cross sections for these molecules. The results of the fragmentation analysis of adenine support the theory of its formation in space.

  14. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advances in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs in the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high-dimensional, nonlinear, self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns in the social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, the CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and a foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high-dimensional, nonlinear conceptual model of the complex dynamics of human systems.

  15. Synthesis of 2-(bis(cyanomethyl)amino)-2-oxoethyl methacrylate monomer molecule and its characterization by experimental and theoretical methods

    NASA Astrophysics Data System (ADS)

    Sas, E. B.; Cankaya, N.; Kurt, M.

    2018-06-01

    In this work, the 2-(bis(cyanomethyl)amino)-2-oxoethyl methacrylate monomer has been newly synthesized and characterized both experimentally and theoretically. Experimentally, it has been characterized by FT-IR, FT-Raman, and 1H and 13C NMR spectroscopy. The theoretical calculations have been performed with density functional theory (DFT) using the B3LYP method. The scaled theoretical wavenumbers have been assigned based on the total energy distribution (TED). The electronic properties of the monomer have been computed with the time-dependent TD-DFT/B3LYP/6-311++G(d,p) method. The experimental results have been compared with the theoretical values, and both the experimental and theoretical methods show that the monomer is consistent with the literature.

  16. Nuclear tetrahedral symmetry: possibly present throughout the periodic table.

    PubMed

    Dudek, J; Goźdź, A; Schunck, N; Miśkiewicz, M

    2002-06-24

    More than half a century after the fundamental spherical shell structure in nuclei was established, theoretical predictions indicated that shell gaps comparable to, or even stronger than, those at spherical shapes may exist. Group-theoretical analysis supported by realistic mean-field calculations indicates that the corresponding nuclei are characterized by the TD(d) ("double-tetrahedral") symmetry group. The strong shell-gap structure is enhanced by the existence of the four-dimensional irreducible representations of TD(d); it can be seen as a geometrical effect that does not depend on a particular realization of the mean field. Possibilities of discovering the TD(d) symmetry in experiment are discussed.

  17. Newer developments on self-modeling curve resolution implementing equality and unimodality constraints.

    PubMed

    Beyramysoltan, Samira; Abdollahi, Hamid; Rajkó, Róbert

    2014-05-27

    Analytical self-modeling curve resolution (SMCR) methods resolve data sets to a range of feasible solutions using only non-negativity constraints. The Lawton-Sylvestre method was the first direct method for analyzing a two-component system; it was generalized as the Borgen plot for determining the feasible regions of three-component systems. A geometrical view appears to be required when considering curve resolution methods, because the complicated (purely algebraic) conceptions stalled the general study of Borgen's work for 20 years. Rajkó and István revised and elucidated the principles of the existing theory of SMCR methods and subsequently introduced computational geometry tools for developing an algorithm to draw Borgen plots for three-component systems. These developments are theoretical inventions, and the formulations cannot always be given in closed form or in a regularized formalism, especially for geometric descriptions; this is why several algorithms had to be developed even for the theoretical deductions and determinations. In this study, analytical SMCR methods are revised and described using simple concepts. The details of a drawing algorithm for a developmental type of Borgen plot are given. Additionally, for the first time in the literature, equality and unimodality constraints are successfully implemented in the Lawton-Sylvestre method. To this end, a new state-of-the-art procedure is proposed to impose an equality constraint in Borgen plots. Two- and three-component HPLC-DAD data sets were simulated and analyzed by the new analytical curve resolution methods with and without additional constraints. Detailed descriptions and explanations are given based on the obtained abstract spaces. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Estimation of critical supersaturation solubility ratio for predicting diameters of dry particles prepared by air-jet atomization of solutions.

    PubMed

    Sapra, Mahak; Ugrani, Suraj; Mayya, Y S; Venkataraman, Chandra

    2017-08-15

    Air-jet atomization of solutions into droplets followed by controlled drying is increasingly being used to produce nanoparticles for drug delivery applications. Nanoparticle size is an important parameter that influences the stability, bioavailability and efficacy of the drug. In the air-jet atomization technique, dry particle diameters are generally predicted using solute diffusion models built on the key concept of a critical supersaturation solubility ratio (Sc) that dictates the point of crust formation within the droplet. As no reliable method exists to determine this quantity, the present study proposes an aerosol-based method to determine Sc for a given solute-solvent system and set of process conditions. The feasibility has been demonstrated by conducting experiments for stearic acid in ethanol and in chloroform, as well as for the anti-tubercular drug isoniazid in ethanol. Sc values were estimated by combining the experimentally observed particle and droplet diameters with simulations from a solute diffusion model. Important findings of the study were: (i) the measured droplet diameters systematically decreased with increasing precursor concentration; (ii) the estimated Sc values were 9.3±0.7, 13.3±2.4 and 18±0.8 for stearic acid in chloroform, stearic acid in ethanol and isoniazid in ethanol, respectively; (iii) the experimental results pointed to the correct interfacial tension pre-factor to be used in theoretical estimates of Sc; and (iv) the results showed consistent evidence for the existence of an induction time delay between the attainment of the theoretical Sc and crust formation. The proposed approach has been validated by testing its predictive power for a challenge concentration against experimental data. The study not only advances the spray-drying technique by establishing an aerosol-based approach to determine Sc, but also throws considerable light on the interfacial processes responsible for solid-phase formation in a rapidly supersaturating system. Until satisfactory theoretical formulae for predicting Sc are developed, the present approach appears to offer the best option for engineering nanoparticle size through solute diffusion models. Copyright © 2017 Elsevier Inc. All rights reserved.
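    As context for how such models bound the answer, the simplest (compact-sphere) mass-balance estimate of dry particle diameter from droplet diameter and precursor concentration is sketched below; crust formation at Sc, the subject of the paper, causes departures from this limit. All parameter values are illustrative, not taken from the study.

    ```python
    def dry_diameter(d_droplet, c0, rho_p):
        """Mass-balance estimate of dry particle diameter.

        Assumes all solute in a droplet of diameter d_droplet (m) at
        concentration c0 (g/cm^3) collapses into a dense sphere of
        density rho_p (g/cm^3).  Crust formation at the critical
        supersaturation ratio Sc yields larger, possibly hollow,
        particles; this is only the compact-sphere limit.
        """
        return d_droplet * (c0 / rho_p) ** (1.0 / 3.0)

    # Illustrative case: a 10 micron droplet of a 10 mg/mL solution,
    # unit particle density -> roughly a 2.15 micron compact particle.
    d_dry = dry_diameter(10e-6, 0.01, 1.0)
    ```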

  19. Application of tabu search to deterministic and stochastic optimization problems

    NASA Astrophysics Data System (ADS)

    Gurtuna, Ozgur

    During the past two decades, advances in computer science and operations research have resulted in many new optimization methods for tackling complex decision-making problems. One such method, tabu search, forms the basis of this thesis. Tabu search is a very versatile optimization heuristic that can be used for solving many different types of optimization problems. Another research area, real options, has also gained considerable momentum during the last two decades. Real options analysis is emerging as a robust and powerful method for tackling decision-making problems under uncertainty. Although the theoretical foundations of real options are well established and significant progress has been made on the theory side, applications are lagging behind. A strong emphasis on practical applications and a multidisciplinary approach form the basic rationale of this thesis. The fundamental concepts and ideas behind tabu search and real options are investigated in order to provide a concise overview of the theory supporting both fields. This theoretical overview feeds into the design and development of algorithms that are used to solve three different problems. The first problem examined is deterministic: finding the optimal servicing tours that minimize the energy and/or duration of missions for servicing satellites in Earth orbit. Due to the nature of the space environment, this problem is modeled as a time-dependent, moving-target optimization problem. Two solution methods are developed: an exhaustive method for smaller problem instances, and a method based on tabu search for larger ones. The second and third problems are related to decision-making under uncertainty. In the second problem, tabu search and real options are investigated together within the context of a stochastic optimization problem: option valuation. By merging tabu search and Monte Carlo simulation, a new method for studying options, the Tabu Search Monte Carlo (TSMC) method, is developed. The theoretical underpinnings of the TSMC method and the flow of the algorithm are explained, and its performance is compared to other existing methods for financial option valuation. In the third and final problem, the TSMC method is used to determine the conditions of feasibility for hybrid electric vehicles and fuel cell vehicles. There are many uncertainties related to the technologies and markets associated with new-generation passenger vehicles. These uncertainties are analyzed in order to determine the conditions under which new-generation vehicles can compete with established technologies.
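    As background to the heuristic at the core of the thesis, the basic tabu-search loop (greedy move selection, a tabu list with fixed tenure, and an aspiration criterion) can be sketched on a toy max-cut instance. The graph, tenure, and iteration budget below are illustrative choices, not values from the thesis.

    ```python
    def cut_value(edges, x):
        """Number of edges crossing the partition encoded by bit vector x."""
        return sum(1 for u, v in edges if x[u] != x[v])

    def tabu_max_cut(edges, n, tenure=2, n_iter=20):
        """Minimal tabu search for max-cut with a single-vertex-flip
        neighbourhood.  Deterministic: ties are broken by vertex index."""
        x = [0] * n
        best_x, best_val = x[:], cut_value(edges, x)
        tabu_until = {}  # vertex -> iteration until which flipping it is tabu
        for it in range(n_iter):
            move, move_val = None, None
            for v in range(n):
                y = x[:]
                y[v] ^= 1
                val = cut_value(edges, y)
                # Skip tabu moves unless they beat the best solution
                # found so far (aspiration criterion).
                if tabu_until.get(v, -1) > it and val <= best_val:
                    continue
                if move is None or val > move_val:
                    move, move_val = v, val
            if move is None:
                break  # every move is tabu and none aspirates
            x[move] ^= 1
            tabu_until[move] = it + tenure
            if move_val > best_val:
                best_x, best_val = x[:], move_val
        return best_x, best_val

    # 4-cycle: the optimal cut separates opposite vertices (cut value 4).
    edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
    best_x, best_val = tabu_max_cut(edges, 4)
    ```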

  20. Mathematical interpretation of Brownian motor model: Limit cycles and directed transport phenomena

    NASA Astrophysics Data System (ADS)

    Yang, Jianqiang; Ma, Hong; Zhong, Suchuang

    2018-03-01

    In this article, we first suggest that the attractor of the Brownian motor model is one of the reasons for the directed transport of the Brownian particle. We take the classical Smoluchowski-Feynman (SF) ratchet model as an example to investigate the relationship between limit cycles and the directed transport of the Brownian particle. We study the existence and variation of the limit cycles of the SF ratchet model under changing parameters through mathematical methods. The influences of these parameters on the directed transport of a Brownian particle are then analyzed through numerical simulations. Reasonable mathematical explanations for the directed transport of the Brownian particle in the SF ratchet model are formulated on the basis of the existence and variation of the limit cycles and the numerical simulations. These explanations provide a theoretical basis for applying these theories in physics, biology, chemistry, and engineering.
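    For intuition, the deterministic overdamped dynamics underlying such ratchet models can be integrated directly and the displacement inspected for parameter-dependent transport. The asymmetric potential, drive amplitude, and frequency below are illustrative stand-ins, not the parameters analyzed in the article.

    ```python
    import numpy as np

    def simulate_ratchet(A=1.2, omega=0.5, dt=1e-3, n_steps=50_000, x0=0.0):
        """Euler integration of an overdamped particle in an asymmetric
        (ratchet) potential V(x) = -sin(x) - 0.25*sin(2x), rocked by a
        zero-mean periodic force A*cos(omega*t)."""
        x = x0
        traj = np.empty(n_steps + 1)
        traj[0] = x
        for n in range(n_steps):
            t = n * dt
            force = np.cos(x) + 0.5 * np.cos(2.0 * x)  # -V'(x)
            x += dt * (force + A * np.cos(omega * t))
            traj[n + 1] = x
        return traj

    traj = simulate_ratchet()
    # Net drift per unit time; sweeping A or omega reveals parameter
    # windows with nonzero mean displacement, reflecting the attractors
    # (limit cycles) discussed in the article.
    mean_velocity = (traj[-1] - traj[0]) / (len(traj) * 1e-3)
    ```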

  1. Studies of Neutron-Induced Fission of 235U, 238U, and 239Pu

    NASA Astrophysics Data System (ADS)

    Duke, Dana; TKE Team

    2014-09-01

    A Frisch-gridded ionization chamber and the double-energy (2E) analysis method were used to study mass yield distributions and the average total kinetic energy (TKE) release from neutron-induced fission of 235U, 238U, and 239Pu. Despite decades of fission research, little or no TKE data exist for high incident neutron energies. Additional average TKE information at incident neutron energies relevant to defense- and energy-related applications will provide a valuable observable for benchmarking simulations. The data can also be used as inputs to theoretical fission models. The Los Alamos Neutron Science Center - Weapons Neutron Research (LANSCE-WNR) facility provides a neutron beam spanning thermal energies to hundreds of MeV, well suited to filling the gaps in existing data and exploring fission behavior in the fast-neutron region. The results of the studies on 238U, 235U, and 239Pu will be presented. LA-UR-14-24921.

  2. The ferromagnetic-spin glass transition in PdMn alloys: symmetry breaking of ferromagnetism and spin glass studied by a multicanonical method.

    PubMed

    Kato, Tomohiko; Saita, Takahiro

    2011-03-16

    The magnetism of Pd(1-x)Mn(x) is investigated theoretically. A localized spin model for Mn spins that interact with short-range antiferromagnetic interactions and long-range ferromagnetic interactions via itinerant d electrons is set up, with no adjustable parameters. A multicanonical Monte Carlo simulation, combined with a procedure of symmetry breaking, is employed to discriminate between the ferromagnetic and spin glass orders. The transition temperature and the low-temperature phase are determined from the temperature variation of the specific heat and the probability distributions of the ferromagnetic order parameter and the spin glass order parameter at different concentrations. The calculation results reveal that only the ferromagnetic phase exists at x < 0.02, that only the spin glass phase exists at x > 0.04, and that the two phases coexist at intermediate concentrations. This result agrees semi-quantitatively with experimental results.

  3. Theory-Guided Selection of Discrimination Measures for Racial/Ethnic Health Disparities Research among Older Adults

    PubMed Central

    Thrasher, Angela D.; Clay, Olivio J.; Ford, Chandra L.; Stewart, Anita L.

    2013-01-01

    Objectives Discrimination may contribute to health disparities among older adults. Existing measures of perceived discrimination have provided important insights but may have limitations when used in studies of older adults. This paper illustrates the process of assessing the appropriateness of existing measures for theory-based research on perceived discrimination and health. Methods First we describe three theoretical frameworks that are relevant to the study of perceived discrimination and health – stress-process models, life course models, and the Public Health Critical Race praxis. We then review four widely-used measures of discrimination, comparing their content and describing how well they address key aspects of each theory, and discussing potential areas of modification. Discussion Using theory to guide measure selection can help improve understanding of how perceived discrimination may contribute to racial/ethnic health disparities among older adults. PMID:22451527

  4. JLab Measurements of the He 3 Form Factors at Large Momentum Transfers

    DOE PAGES

    Camsonne, A.; Katramatou, A. T.; Olson, M.; ...

    2017-10-19

    The charge and magnetic form factors, F_C and F_M, respectively, of 3He are extracted in the kinematic range 25 fm^-2 ≤ Q^2 ≤ 61 fm^-2 from elastic electron scattering by detecting 3He recoil nuclei and scattered electrons in coincidence with the two High Resolution Spectrometers of the Hall A Facility at Jefferson Lab. The measurements find evidence for the existence of a second diffraction minimum for the magnetic form factor at Q^2 = 49.3 fm^-2 and for the charge form factor at Q^2 = 62.0 fm^-2. Both minima are predicted to exist in the Q^2 range accessible by this Jefferson Lab experiment. The data are in qualitative agreement with theoretical calculations based on realistic interactions and accurate methods for solving the three-body nuclear problem.

  5. JLab Measurements of the He 3 Form Factors at Large Momentum Transfers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Camsonne, A.; Katramatou, A. T.; Olson, M.

    The charge and magnetic form factors, F_C and F_M, respectively, of 3He are extracted in the kinematic range 25 fm^-2 ≤ Q^2 ≤ 61 fm^-2 from elastic electron scattering by detecting 3He recoil nuclei and scattered electrons in coincidence with the two High Resolution Spectrometers of the Hall A Facility at Jefferson Lab. The measurements find evidence for the existence of a second diffraction minimum for the magnetic form factor at Q^2 = 49.3 fm^-2 and for the charge form factor at Q^2 = 62.0 fm^-2. Both minima are predicted to exist in the Q^2 range accessible by this Jefferson Lab experiment. The data are in qualitative agreement with theoretical calculations based on realistic interactions and accurate methods for solving the three-body nuclear problem.

  6. Grain formation in astronomical systems: A critical review of condensation processes

    NASA Technical Reports Server (NTRS)

    Donn, B.

    1978-01-01

    An analysis is presented of the assumptions and applicability of the three theoretical methods for calculating condensation in cosmic clouds where no pre-existing nuclei exist. The three procedures are: thermodynamic equilibrium calculations, nucleation theory, and a kinetic treatment that takes into account the characteristics of each individual collision. Thermodynamics provides detailed results on the temperature and composition of the condensate, provided the system attains equilibrium. Because of the cosmic-abundance mixture of elements, large supersaturations in some cases, and low pressures, equilibrium is not expected in astronomical clouds. Nucleation theory, a combination of thermodynamics and kinetics, has the limitations of each scheme. Kinetics, not requiring equilibrium, avoids nearly all the thermodynamic difficulties but requires detailed knowledge of many reactions that thermodynamics avoids. It appears to be the only valid way to treat grain formation in space. A review of experimental studies is given.

  7. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    PubMed

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

    In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Within the framework of Filippov solutions, boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is employed to establish the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, a suitable Lyapunov functional and new testable algebraic criteria are derived to ensure the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. The Objective Borderline method (OBM): a probability-based model for setting up an objective pass/fail cut-off score in medical programme assessments.

    PubMed

    Shulruf, Boaz; Turner, Rolf; Poole, Phillippa; Wilkinson, Tim

    2013-05-01

    The decision to pass or fail a medical student is a 'high stakes' one. The aim of this study is to introduce and demonstrate the feasibility and practicality of a new objective standard-setting method for determining the pass/fail cut-off score from borderline grades. Three methods for setting pass/fail cut-off scores were compared: the Regression Method, the Borderline Group Method, and the new Objective Borderline Method (OBM). Using Year 5 students' OSCE results from one medical school, we established the pass/fail cut-off scores by the three methods above. The comparison indicated that the pass/fail cut-off scores generated by the OBM were similar to those generated by the more established methods (0.840 ≤ r ≤ 0.998; p < .0001). Based on theoretical and empirical analysis, we suggest that the OBM has advantages over existing methods in that it combines objectivity, realism, and a robust empirical basis and, no less importantly, is simple to use.
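    Of the comparison methods, the Borderline Group Method has a particularly simple form: the cut-off is a central score of the examinees the judges rated borderline. A minimal sketch, using the median as the central measure (the paper's exact computational details, and the OBM itself, are not reproduced here):

    ```python
    import statistics

    def borderline_group_cutoff(scores, grades, borderline="borderline"):
        """Borderline Group Method: the pass/fail cut-off is taken as the
        median checklist score of examinees rated 'borderline'."""
        group = [s for s, g in zip(scores, grades) if g == borderline]
        if not group:
            raise ValueError("no borderline-rated examinees")
        return statistics.median(group)

    # Hypothetical OSCE station: three examinees rated borderline,
    # so the cut-off is the median of their scores.
    cut = borderline_group_cutoff(
        [55, 60, 65, 70, 80],
        ["fail", "borderline", "borderline", "borderline", "pass"],
    )
    ```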

  9. Microarray missing data imputation based on a set theoretic framework and biological knowledge.

    PubMed

    Gan, Xiangchao; Liew, Alan Wee-Chung; Yan, Hong

    2006-01-01

    Gene expressions measured using microarrays usually suffer from the missing value problem. However, many data analysis methods require a complete data matrix. Although existing missing value imputation algorithms have shown good performance in dealing with missing values, they also have their limitations. For example, some algorithms perform well only when strong local correlation exists in the data, while others provide the best estimates when the data are dominated by global structure. In addition, these algorithms do not take any biological constraint into account in their imputation. In this paper, we propose a set theoretic framework based on projection onto convex sets (POCS) for missing data imputation. POCS allows us to incorporate different types of a priori knowledge about missing values into the estimation process. The main idea of POCS is to formulate every piece of prior knowledge as a corresponding convex set and then use a convergence-guaranteed iterative procedure to obtain a solution in the intersection of all these sets. In this work, we design several convex sets that take into consideration the biological characteristics of the data: the first set mainly exploits the local correlation structure among genes in microarray data, while the second set captures the global correlation structure among arrays. The third set (actually a series of sets) exploits the biological phenomenon of synchronization loss in microarray experiments. In cyclic systems, synchronization loss is a common phenomenon, and we construct a series of sets based on this phenomenon for our POCS imputation algorithm. Experiments show that our algorithm can achieve a significant reduction in error compared to the KNNimpute, SVDimpute and LSimpute methods.
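    The POCS iteration pattern can be sketched with two deliberately simple convex sets: data consistency on the observed entries and a box constraint on values. The paper's actual sets (local gene correlation, global array correlation, synchronization loss) are richer and are not reproduced here.

    ```python
    import numpy as np

    def pocs_impute(X, lo=0.0, hi=1.0, n_iter=25):
        """Impute missing (NaN) entries by alternating projections onto
        two convex sets:
          C1 = {Z : Z agrees with X on the observed entries}  (affine)
          C2 = {Z : lo <= Z_ij <= hi for all i, j}            (box)
        A point in the intersection satisfies all prior knowledge at once.
        """
        observed = ~np.isnan(X)
        col_means = np.nanmean(X, axis=0)   # simple initial guess
        Z = np.where(observed, X, col_means)
        for _ in range(n_iter):
            Z = np.clip(Z, lo, hi)              # project onto C2
            Z = np.where(observed, X, Z)        # project onto C1
        return Z

    # Hypothetical 2x2 expression matrix with one missing value.
    X = np.array([[0.2, np.nan],
                  [0.8, 0.4]])
    Z = pocs_impute(X)
    ```

    With richer sets (e.g. an affine set encoding correlation structure), each projection is replaced accordingly while the alternating loop stays the same.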

  10. Double Stimulation in Strategic Concept Formation: An Activity-Theoretical Analysis of Business Planning in a Small Technology Firm

    ERIC Educational Resources Information Center

    Virkkunen, Jaakko; Ristimaki, Paivi

    2012-01-01

    In this article, we study the relationships between culturally existing general strategy concepts and a small information and communication technology firm's specific strategic challenge in its management team's search for a new strategy concept. We apply three theoretical ideas of cultural historical activity theory: (a) the idea of double…

  11. The Black-White-Other Test Score Gap: Academic Achievement among Mixed Race Adolescents. Institute for Policy Research Working Paper.

    ERIC Educational Resources Information Center

    Herman, Melissa R.

    This paper describes the achievement patterns of a sample of 1,492 multiracial high school students and examines how their achievement fits into existing theoretical models that explain monoracial differences in achievement. These theoretical models include status attainment, parenting style, oppositional culture, and educational attitudes. The…

  12. A comparison of experimental and theoretical results for leakage, pressure gradients, and rotordynamic coefficients for tapered annular gas seal

    NASA Technical Reports Server (NTRS)

    Elrod, D. A.; Childs, D. W.

    1986-01-01

    A brief review of current annular seal theory and a discussion of the predicted effect on stiffness of tapering the seal stator are presented. An outline of Nelson's analytical-computational method for determining rotordynamic coefficients for annular compressible-flow seals is included. Modifications to increase the maximum rotor speed of an existing air-seal test apparatus at Texas A&M University are described. Experimental results, including leakage, entrance-loss coefficients, pressure distributions, and normalized rotordynamic coefficients, are presented for four convergent-tapered, smooth-rotor, smooth-stator seals. A comparison of the test results shows that an inlet-to-exit clearance ratio of 1.5 to 2.0 provides the maximum direct stiffness, a clearance ratio of 2.5 provides the greatest stability, and a clearance ratio of 1.0 provides the least stability. The experimental results agree well with theoretical results from Nelson's analysis. Test results for cross-coupled stiffness show less sensitivity to fluid prerotation than predicted.

  13. Price game and chaos control among three oligarchs with different rationalities in property insurance market.

    PubMed

    Ma, Junhai; Zhang, Junling

    2012-12-01

    Combining this with the actual competition in the Chinese property insurance market and assuming that property insurance companies take marginal utility maximization as the basis of decision-making when they play price games, we first established a price game model with three oligarchs of different rationalities. Then, we discussed the existence and stability of the equilibrium points. Third, we studied the theoretical value of the Lyapunov exponent at the Nash equilibrium point and its change with the main parameters through numerical simulation of the system, examining bifurcations, chaotic attractors, and so on. Finally, we analyzed the influence that changes in different parameters have on the profits and utilities of the oligarchs and their corresponding competitive advantage. Based on this, we used the variable feedback control method to control the chaos of the system and stabilized the chaotic state back to the Nash equilibrium point. The results have significant theoretical and practical application value.
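    The stabilization idea can be illustrated on a generic chaotic map. The sketch below is a simple linear feedback toward the fixed point, applied to the logistic map rather than the authors' insurance-market model; gain and parameter values are illustrative.

    ```python
    def logistic(x, r=3.9):
        """Logistic map, chaotic at r = 3.9."""
        return r * x * (1.0 - x)

    def controlled_step(x, k=0.7, r=3.9):
        """One step of a feedback-controlled map,
            x_{n+1} = (1 - k) * f(x_n) + k * x_n,
        which pulls the orbit toward the (unchanged) fixed point
        x* = 1 - 1/r.  Locally stable when |(1-k) f'(x*) + k| < 1;
        for r = 3.9, k = 0.7 the multiplier is 0.13."""
        return (1.0 - k) * logistic(x, r) + k * x

    # Uncontrolled (k = 0) the orbit is chaotic; with feedback it
    # settles onto the fixed point.
    x = 0.3
    for _ in range(300):
        x = controlled_step(x)
    ```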

  14. A mechanically driven form of Kirigami as a route to 3D mesostructures in micro/nanomembranes.

    PubMed

    Zhang, Yihui; Yan, Zheng; Nan, Kewang; Xiao, Dongqing; Liu, Yuhao; Luan, Haiwen; Fu, Haoran; Wang, Xizhu; Yang, Qinglin; Wang, Jiechen; Ren, Wen; Si, Hongzhi; Liu, Fei; Yang, Lihen; Li, Hejun; Wang, Juntong; Guo, Xuelin; Luo, Hongying; Wang, Liang; Huang, Yonggang; Rogers, John A

    2015-09-22

    Assembly of 3D micro/nanostructures in advanced functional materials has important implications across broad areas of technology. Existing approaches are compatible, however, only with narrow classes of materials and/or 3D geometries. This paper introduces ideas for a form of Kirigami that allows precise, mechanically driven assembly of 3D mesostructures of diverse materials from 2D micro/nanomembranes with strategically designed geometries and patterns of cuts. Theoretical and experimental studies demonstrate applicability of the methods across length scales from macro to nano, in materials ranging from monocrystalline silicon to plastic, with levels of topographical complexity that significantly exceed those that can be achieved using other approaches. A broad set of examples includes 3D silicon mesostructures and hybrid nanomembrane-nanoribbon systems, including heterogeneous combinations with polymers and metals, with critical dimensions that range from 100 nm to 30 mm. A 3D mechanically tunable optical transmission window provides an application example of this Kirigami process, enabled by theoretically guided design.

  15. An information-theoretic approach for the evaluation of surrogate endpoints based on causal inference.

    PubMed

    Alonso, Ariel; Van der Elst, Wim; Molenberghs, Geert; Buyse, Marc; Burzykowski, Tomasz

    2016-09-01

    In this work a new metric of surrogacy, the so-called individual causal association (ICA), is introduced using information-theoretic concepts and a causal inference model for a binary surrogate and true endpoint. The ICA has a simple and appealing interpretation in terms of uncertainty reduction and, in some scenarios, it seems to provide a more coherent assessment of the validity of a surrogate than existing measures. The identifiability issues are tackled using a two-step procedure. In the first step, the region of the parametric space of the distribution of the potential outcomes, compatible with the data at hand, is geometrically characterized. Further, in a second step, a Monte Carlo approach is proposed to study the behavior of the ICA on the previous region. The method is illustrated using data from the Collaborative Initial Glaucoma Treatment Study. A newly developed and user-friendly R package Surrogate is provided to carry out the evaluation exercise. © 2016, The International Biometric Society.
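    The information-theoretic ingredient of such a metric is the mutual information between two binary endpoints, interpreted as uncertainty reduction. A minimal helper for a hypothetical 2x2 joint distribution (the ICA itself, defined on potential outcomes, is not reproduced here):

    ```python
    import math

    def mutual_information(p):
        """Mutual information (in bits) between two binary variables
        with joint distribution p[s][t], s, t in {0, 1}.  Measures how
        much observing the surrogate S reduces uncertainty about the
        true endpoint T."""
        ps = [p[0][0] + p[0][1], p[1][0] + p[1][1]]   # marginal of S
        pt = [p[0][0] + p[1][0], p[0][1] + p[1][1]]   # marginal of T
        mi = 0.0
        for s in (0, 1):
            for t in (0, 1):
                if p[s][t] > 0.0:
                    mi += p[s][t] * math.log2(p[s][t] / (ps[s] * pt[t]))
        return mi

    # Independent endpoints carry no information about each other;
    # perfectly concordant endpoints share one full bit.
    mi_indep = mutual_information([[0.25, 0.25], [0.25, 0.25]])
    mi_perfect = mutual_information([[0.5, 0.0], [0.0, 0.5]])
    ```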

  16. Nanoscale Imaging of Light-Matter Coupling Inside Metal-Coated Cavities with a Pulsed Electron Beam.

    PubMed

    Moerland, Robert J; Weppelman, I Gerward C; Scotuzzi, Marijke; Hoogenboom, Jacob P

    2018-05-02

    Many applications in (quantum) nanophotonics rely on controlling light-matter interaction through strong, nanoscale modification of the local density of states (LDOS). All-optical techniques probing emission dynamics in active media are commonly used to measure the LDOS and benchmark experimental performance against theoretical predictions. However, metal coatings needed to obtain strong LDOS modifications in, for instance, nanocavities, are incompatible with all-optical characterization. So far, no reliable method exists to validate theoretical predictions. Here, we use subnanosecond pulses of focused electrons to penetrate the metal and excite a buried active medium at precisely defined locations inside subwavelength resonant nanocavities. We reveal the spatial layout of the spontaneous-emission decay dynamics inside the cavities with deep-subwavelength detail, directly mapping the LDOS. We show that emission enhancement converts to inhibition despite an increased number of modes, emphasizing the critical role of optimal emitter location. Our approach yields fundamental insight in dynamics at deep-subwavelength scales for a wide range of nano-optical systems.

  17. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials.

    PubMed

    Tsyshevsky, Roman V; Sharia, Onise; Kuklja, Maija M

    2016-02-19

    This review presents a concept which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community, we are still far from having a comprehensive molecular theory of detonation initiation in a widely agreed-upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to designing and testing the theory, or at least some of its fundamental building blocks. In this review, we analyzed a set of selected experimental and theoretical articles, augmented by our own first-principles modeling and simulations, to reveal new trends in energetic materials and to refine known correlations between their structures, properties, and functions. Our consideration is intentionally limited to thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.

  18. Ontological addiction theory: Attachment to me, mine, and I.

    PubMed

    Van Gordon, William; Shonin, Edo; Diouri, Sofiane; Garcia-Campayo, Javier; Kotera, Yasuhiro; Griffiths, Mark D

    2018-06-07

    Background: Ontological addiction theory (OAT) is a novel metaphysical model of psychopathology. It posits that human beings are prone to forming implausible beliefs concerning the way they think they exist, and that these beliefs can become addictive, leading to functional impairments and mental illness. The theoretical underpinnings of OAT derive from the Buddhist philosophical perspective that all phenomena, including the self, do not manifest inherently or independently. Aims and methods: This paper outlines the theoretical foundations of OAT along with indicative supportive empirical evidence from studies evaluating meditation awareness training, as well as studies investigating non-attachment, emptiness, compassion, and loving-kindness. Results: OAT provides a novel perspective on addiction, the factors that underlie mental illness, and how beliefs concerning selfhood are shaped and reified. Conclusion: In addition to continuing to test the underlying assumptions of OAT, future empirical research needs to determine how ontological addiction fits with extant theories of self, reality, and suffering, as well as with more established models of addiction.

  19. HERAFitter: Open source QCD fit project

    DOE PAGES

    Alekhin, S.; Behnke, O.; Belov, P.; ...

    2015-07-01

    HERAFitter is an open-source package that provides a framework for the determination of the parton distribution functions (PDFs) of the proton and for many different kinds of analyses in Quantum Chromodynamics (QCD). It encodes results from a wide range of experimental measurements in lepton-proton deep inelastic scattering and proton-proton (proton-antiproton) collisions at hadron colliders. These are complemented with a variety of theoretical options for calculating PDF-dependent cross section predictions corresponding to the measurements. The framework covers a large number of the existing methods and schemes used for PDF determination. The data and theoretical predictions are brought together through numerous methodological options for carrying out PDF fits, with plotting tools to help visualise the results. While primarily based on the approach of collinear factorisation, HERAFitter also provides facilities for fits of dipole models and transverse-momentum dependent PDFs. The package can be used to study the impact of new precise measurements from hadron colliders. This paper describes the general structure of HERAFitter and its wide choice of options.

  20. Molecular Theory of Detonation Initiation: Insight from First Principles Modeling of the Decomposition Mechanisms of Organic Nitro Energetic Materials

    DOE PAGES

    Tsyshevsky, Roman; Sharia, Onise; Kuklja, Maija

    2016-02-19

    Our review presents a concept which assumes that thermal decomposition processes play a major role in defining the sensitivity of organic energetic materials to detonation initiation. As a science and engineering community, we are still far from having a comprehensive molecular detonation initiation theory in a widely agreed upon form. However, recent advances in experimental and theoretical methods allow for a constructive and rigorous approach to designing and testing the theory, or at least some of its fundamental building blocks. In this review, we analyzed a set of select experimental and theoretical articles, augmented by our own first principles modeling and simulations, to reveal new trends in energetic materials and to refine existing correlations between their structures, properties, and functions. Lastly, our consideration is intentionally limited to the processes of thermally stimulated chemical reactions at the earliest stage of decomposition of molecules and materials containing defects.

  1. A mechanically driven form of Kirigami as a route to 3D mesostructures in micro/nanomembranes

    PubMed Central

    Zhang, Yihui; Yan, Zheng; Nan, Kewang; Xiao, Dongqing; Liu, Yuhao; Luan, Haiwen; Fu, Haoran; Wang, Xizhu; Yang, Qinglin; Wang, Jiechen; Ren, Wen; Si, Hongzhi; Liu, Fei; Yang, Lihen; Li, Hejun; Wang, Juntong; Guo, Xuelin; Luo, Hongying; Wang, Liang; Huang, Yonggang; Rogers, John A.

    2015-01-01

    Assembly of 3D micro/nanostructures in advanced functional materials has important implications across broad areas of technology. Existing approaches are compatible, however, only with narrow classes of materials and/or 3D geometries. This paper introduces ideas for a form of Kirigami that allows precise, mechanically driven assembly of 3D mesostructures of diverse materials from 2D micro/nanomembranes with strategically designed geometries and patterns of cuts. Theoretical and experimental studies demonstrate applicability of the methods across length scales from macro to nano, in materials ranging from monocrystalline silicon to plastic, with levels of topographical complexity that significantly exceed those that can be achieved using other approaches. A broad set of examples includes 3D silicon mesostructures and hybrid nanomembrane–nanoribbon systems, including heterogeneous combinations with polymers and metals, with critical dimensions that range from 100 nm to 30 mm. A 3D mechanically tunable optical transmission window provides an application example of this Kirigami process, enabled by theoretically guided design. PMID:26372959

  2. Theoretical Investigations on the Influence of Artificially Altered Rock Mass Properties on Mechanical Excavation

    NASA Astrophysics Data System (ADS)

    Hartlieb, Philipp; Bock, Stefan

    2018-03-01

    This study presents a theoretical analysis of the influence of the rock mass rating on the cutting performance of roadheaders. Existing performance prediction models are assessed for their suitability for forecasting the influence of pre-damaging the rock mass with alternative methods, such as lasers or microwaves, prior to the mechanical excavation process. The RMCR model was ultimately chosen because it is the only reported model incorporating a range of rock mass properties into its calculations. The results show that even very tough rocks could be mechanically excavated if the occurrence, orientation and condition of joints are favourable for the cutting process. The calculated improvements in the cutting rate (m³/h) are up to 350% for the most favourable cases. In the case of microwave irradiation of hard rocks with a UCS of 200 MPa, a performance improvement of 120% can be achieved with as little as an extra 0.7 kWh/m³ (= 1% more energy) compared to cutting alone.

  3. Analysis of Nonplanar Wing-tip-mounted Lifting Surfaces on Low-speed Airplanes

    NASA Technical Reports Server (NTRS)

    Vandam, C. P.; Roskam, J.

    1983-01-01

    Nonplanar wing-tip-mounted lifting surfaces substantially reduce lift-induced drag. Winglets, which are small, nearly vertical, winglike surfaces, are an example of these devices. To achieve a reduction in lift-induced drag, winglets produce significant side forces. Consequently, these surfaces can seriously affect airplane lateral-directional aerodynamic characteristics. Therefore, the effects of nonplanar wing-tip-mounted surfaces on the lateral-directional stability and control of low-speed general aviation airplanes were studied. The study consists of a theoretical and an experimental, in-flight investigation. The experimental investigation involves flight tests of winglets on an agricultural airplane. Results of these tests demonstrate the significant influence of winglets on airplane lateral-directional aerodynamic characteristics. It is shown that good correlations exist between experimental data and theoretically predicted results. In addition, a lifting-surface method was used to perform a parametric study of the effects of various winglet parameters on the lateral-directional stability derivatives of general-aviation-type wings.

  4. Price game and chaos control among three oligarchs with different rationalities in property insurance market

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Zhang, Junling

    2012-12-01

    Drawing on actual competition in the Chinese property insurance market, and assuming that property insurance companies take marginal utility maximization as the basis of decision-making when they play price games, we first established a price game model with three oligarchs who have different rationalities. Then, we discussed the existence and stability of the equilibrium points. Third, we studied the theoretical value of the Lyapunov exponent at the Nash equilibrium point and how it changes with the main parameters, through numerical simulation of the system's bifurcations, chaotic attractors, and so on. Finally, we analyzed the influence that changes of different parameters have on the profits and utilities of the oligarchs and their corresponding competitive advantage. Based on this, we used the variable feedback control method to control the chaos of the system and to stabilize the chaotic state back to the Nash equilibrium point. The results have significant theoretical and practical application value.

  5. A constrained registration problem based on Ciarlet-Geymonat stored energy

    NASA Astrophysics Data System (ADS)

    Derfoul, Ratiba; Le Guyader, Carole

    2014-03-01

    In this paper, we address the issue of designing a theoretically well-motivated registration model capable of handling large deformations and including geometrical constraints, namely landmark points to be matched, in a variational framework. Since the theory of linear elasticity is unsuitable in this case, as it assumes small strains and the validity of Hooke's law, the introduced functional is based on nonlinear elasticity principles. More precisely, the shapes to be matched are viewed as Ciarlet-Geymonat materials. We demonstrate the existence of minimizers of the related functional minimization problem and prove a convergence result when the number of geometric constraints increases. We then describe and analyze a numerical method of resolution based on the introduction of an associated decoupled problem under inequality constraint, in which an auxiliary variable simulates the Jacobian matrix of the deformation field. A theoretical result of Γ-convergence is established. We then provide preliminary 2D results of the proposed matching model for the registration of mouse brain gene expression data to a neuroanatomical mouse atlas.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yihui; Yan, Zheng; Nan, Kewang

    Assembly of 3D micro/nanostructures in advanced functional materials has important implications across broad areas of technology. Existing approaches are compatible, however, only with narrow classes of materials and/or 3D geometries. This article introduces ideas for a form of Kirigami that allows precise, mechanically driven assembly of 3D mesostructures of diverse materials from 2D micro/nanomembranes with strategically designed geometries and patterns of cuts. Theoretical and experimental studies demonstrate applicability of the methods across length scales from macro to nano, in materials ranging from monocrystalline silicon to plastic, with levels of topographical complexity that significantly exceed those that can be achieved using other approaches. A broad set of examples includes 3D silicon mesostructures and hybrid nanomembrane-nanoribbon systems, including heterogeneous combinations with polymers and metals, with critical dimensions that range from 100 nm to 30 mm. Lastly, a 3D mechanically tunable optical transmission window provides an application example of this Kirigami process, enabled by theoretically guided design.

  7. Designing Hyperchaotic Cat Maps With Any Desired Number of Positive Lyapunov Exponents.

    PubMed

    Hua, Zhongyun; Yi, Shuang; Zhou, Yicong; Li, Chengqing; Wu, Yue

    2018-02-01

    Generating chaotic maps with the dynamics expected by users is a challenging topic. Utilizing the inherent relation between the Lyapunov exponents (LEs) of the Cat map and its associated Cat matrix, this paper proposes a simple but efficient method to construct an n-dimensional (n-D) hyperchaotic Cat map (HCM) with any desired number of positive LEs. The method first generates two basic n-D Cat matrices iteratively and then constructs the final n-D Cat matrix by performing a similarity transformation on one basic n-D Cat matrix by the other. Given any number of positive LEs, it can generate an n-D HCM with the desired hyperchaotic complexity. Two illustrative examples of n-D HCMs were constructed to show the effectiveness of the proposed method, and to verify the inherent relation between the LEs and the Cat matrix. Theoretical analysis proves that the parameter space of the generated HCM is very large. Performance evaluations show that, compared with existing methods, the proposed method can construct n-D HCMs with lower computation complexity and that their outputs demonstrate strong randomness and complex ergodicity.
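    Since a Cat matrix is integer-valued and unimodular, its Lyapunov exponents are the logarithms of the moduli of its eigenvalues, so in the 2-D case they sum to zero and one is positive in the chaotic regime. A minimal 2-D sketch of this relation (not the paper's n-D similarity-transformation construction; the generalized Cat matrix and parameter names are illustrative):

```python
import math

def cat_les(a=1, b=1):
    # Generalized 2-D Cat matrix [[1, a], [b, 1 + a*b]]; det = 1, so the
    # two Lyapunov exponents ln|eigenvalue| sum to zero, and the larger
    # one is positive whenever |trace| > 2 (chaotic regime).
    trace = 2 + a * b
    det = 1.0
    disc = math.sqrt(trace * trace - 4 * det)
    eigs = [(trace + disc) / 2, (trace - disc) / 2]
    return [math.log(abs(e)) for e in eigs]
```

    For the classical Arnold map (a = b = 1) this gives LEs of roughly ±0.962.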

  8. Sensing Methods for Detecting Analog Television Signals

    NASA Astrophysics Data System (ADS)

    Rahman, Mohammad Azizur; Song, Chunyi; Harada, Hiroshi

    This paper introduces a unified method of spectrum sensing for all existing analog television (TV) signals, including NTSC, PAL and SECAM. We propose a correlation-based method (CBM) with a single reference signal for sensing any analog TV signal. In addition, we also propose an improved energy detection method. The CBM approach has been implemented in a hardware prototype specially designed for participating in the Singapore TV white space (WS) test trial conducted by the Infocomm Development Authority (IDA) of the Singapore government. Analytical and simulation results of the CBM method are presented in the paper, as well as hardware testing results for sensing various analog TV signals. Both AWGN and fading channels are considered. It is shown that the theoretical results closely match those from simulations. Sensing performance of the hardware prototype is also presented in a fading environment by using a fading simulator. We present the performance of the proposed techniques in terms of probability of false alarm, probability of detection, sensing time, etc., together with a comparative study of the various techniques.

  9. A Theoretical Framework for the Associations between Identity and Psychopathology

    ERIC Educational Resources Information Center

    Klimstra, Theo A.; Denissen, Jaap J. A.

    2017-01-01

    Identity research largely emerged from clinical observations. Decades of empirical work advanced the field in refining existing approaches and adding new approaches. Furthermore, the existence of linkages of identity with psychopathology is now well established. Unfortunately, both the directionality of effects between identity aspects and…

  10. Sprint performance and mechanical outputs computed with an iPhone app: Comparison with existing reference methods.

    PubMed

    Romero-Franco, Natalia; Jiménez-Reyes, Pedro; Castaño-Zambudio, Adrián; Capelo-Ramírez, Fernando; Rodríguez-Juan, Juan José; González-Hernández, Jorge; Toscano-Bendala, Francisco Javier; Cuadrado-Peñafiel, Víctor; Balsalobre-Fernández, Carlos

    2017-05-01

    The purpose of this study was to assess the validity and reliability of sprint performance outcomes measured with an iPhone application (named MySprint) and existing field methods (i.e. timing photocells and radar gun). To do this, 12 highly trained male sprinters performed 6 maximal 40-m sprints during a single session, which were simultaneously timed using 7 pairs of timing photocells, a radar gun and a newly developed iPhone app based on high-speed video recording. Several split times as well as mechanical outputs computed from the model proposed by Samozino et al. [(2015). A simple method for measuring power, force, velocity properties, and mechanical effectiveness in sprint running. Scandinavian Journal of Medicine & Science in Sports. https://doi.org/10.1111/sms.12490] were then measured by each system, and values were compared for validity and reliability purposes. First, there was an almost perfect correlation between the values of time for each split of the 40-m sprint measured with MySprint and the timing photocells (r = 0.989-0.999, standard error of estimate = 0.007-0.015 s, intraclass correlation coefficient (ICC) = 1.0). Second, almost perfect associations were observed for the maximal theoretical horizontal force (F0), the maximal theoretical velocity (V0), the maximal power (Pmax) and the mechanical effectiveness (DRF, the decrease in the ratio of horizontal force with increasing velocity) measured with the app and the radar gun (r = 0.974-0.999, ICC = 0.987-1.00). Finally, when analysing the performance outputs of the six different sprints of each athlete, almost identical levels of reliability were observed, as revealed by the coefficient of variation (MySprint: CV = 0.027-0.14%; reference systems: CV = 0.028-0.11%). The results of the present study show that sprint performance can be evaluated in a valid and reliable way using a novel iPhone app.
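    The mechanical outputs named above follow from Samozino et al.'s mono-exponential velocity model, v(t) = V0·(1 − exp(−t/τ)). A hedged sketch of that computation (the mass, V0 and τ values in the example are illustrative, and air resistance, which the full method accounts for, is neglected here):

```python
def sprint_profile(mass_kg, v0, tau):
    # Under v(t) = v0 * (1 - exp(-t / tau)), the initial acceleration is
    # v0 / tau, so the maximal theoretical horizontal force is m * v0 / tau.
    # Assuming a linear force-velocity relationship, peak power is F0*V0/4.
    f0 = mass_kg * v0 / tau    # maximal theoretical horizontal force (N)
    pmax = f0 * v0 / 4.0       # maximal power (W)
    return f0, v0, pmax
```

    For example, a 75 kg sprinter with V0 = 9.5 m/s and τ = 1.2 s yields F0 ≈ 594 N and Pmax ≈ 1410 W.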

  11. A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder diagnosis.

    PubMed

    Han, Ying; Lu, Zhenyu; Du, Zhenguang; Luo, Qi; Chen, Sheng

    2018-05-01

    Bipolar disorder is often misdiagnosed as unipolar depression in clinical practice. The main reason is that, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been successfully used to construct a unified mathematical inference model for bipolar disorder clinical diagnosis. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. Thus, in this paper, a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis is developed. Compared with the existing method, the new one is more comprehensive. Its merits are as follows: First, multi-criteria group decision making is introduced into bipolar disorder diagnosis to account for different symptoms and multiple doctors' opinions. Second, the discreet diagnosis principle is adopted by the revised TOPSIS method. Last but not least, a YinYang bipolar fuzzy cognitive map is provided for understanding the interrelations among symptoms. The illustrated case demonstrates the feasibility, validity, and necessity of the theoretical results obtained. Moreover, the comparison analysis demonstrates that the diagnosis result is more accurate when interrelations among symptoms are considered in the proposed method. In conclusion, the main contribution of this paper is to provide a comprehensive mathematical approach to improve the accuracy of bipolar disorder clinical diagnosis, in which both bipolarity and complexity are considered. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Methods to calibrate and scale axial distances in confocal microscopy as a function of refractive index.

    PubMed

    Besseling, T H; Jose, J; Van Blaaderen, A

    2015-02-01

    Accurate distance measurement in 3D confocal microscopy is important for quantitative analysis, volume visualization and image restoration. However, axial distances can be distorted by both the point spread function (PSF) and a refractive-index mismatch between the sample and the immersion liquid, effects which are difficult to separate. Additionally, accurate calibration of axial distances in confocal microscopy remains cumbersome, although several high-end methods exist. In this paper we present two methods to calibrate axial distances in 3D confocal microscopy that are both accurate and easily implemented. With these methods, we measured axial scaling factors as a function of refractive-index mismatch for high-aperture confocal microscopy imaging. We found that our scaling factors are almost completely linearly dependent on refractive index and in good agreement with theoretical predictions that take the full vectorial properties of light into account. There was, however, a strong deviation from the theoretical predictions of (high-angle) geometrical optics, which predict much lower scaling factors. As an illustration, we measured the PSF of a correctly calibrated point-scanning confocal microscope and showed that a nearly index-matched, micron-sized spherical object is still significantly elongated by this PSF, which signifies that care has to be taken when determining axial calibration or axial scaling using such particles. © 2014 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of the Royal Microscopical Society.
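    For orientation, the lowest-order (paraxial) estimate of the axial scaling factor is simply the ratio of the sample and immersion refractive indices, which already exhibits the near-linear index dependence reported above; the measured high-aperture factors differ from it, and high-angle geometrical optics deviates strongly. A sketch under the paraxial assumption (not the authors' calibration method):

```python
def paraxial_axial_scaling(n_sample, n_immersion):
    # Paraxial approximation: apparent axial displacements of the stage map
    # to true depths in the sample via the factor n_sample / n_immersion.
    return n_sample / n_immersion
```

    E.g. imaging a watery sample (n ≈ 1.33) with oil immersion (n ≈ 1.518) compresses apparent axial distances by a factor of about 0.88 in this approximation.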

  13. W-Boson Production in Association with a Jet at Next-to-Next-to-Leading Order in Perturbative QCD

    NASA Astrophysics Data System (ADS)

    Boughezal, Radja; Focke, Christfried; Liu, Xiaohui; Petriello, Frank

    2015-08-01

    We present the complete calculation of W-boson production in association with a jet in hadronic collisions through next-to-next-to-leading order (NNLO) in perturbative QCD. To cancel infrared divergences, we discuss a new subtraction method that exploits the fact that the N-jettiness event-shape variable fully captures the singularity structure of QCD amplitudes with final-state partons. This method holds for processes with an arbitrary number of jets and is easily implemented into existing frameworks for higher-order calculations. We present initial phenomenological results for W+jet production at the LHC. The NNLO corrections are small and lead to a significantly reduced theoretical error, opening the door to precision measurements in the W+jet channel at the LHC.

  14. Adaptive backstepping control of train systems with traction/braking dynamics and uncertain resistive forces

    NASA Astrophysics Data System (ADS)

    Song, Qi; Song, Y. D.; Cai, Wenchuan

    2011-09-01

    Although the backstepping control design approach has been widely utilised in many practical systems, little effort has been made to apply this useful method to train systems. The main purpose of this paper is to apply this popular control design technique to the speed and position tracking control of high-speed trains. By integrating adaptive control with backstepping control, we develop a control scheme that is able to address not only the traction and braking dynamics ignored in most existing methods, but also the uncertain friction and aerodynamic drag forces arising from uncertain resistance coefficients. As such, the resultant control algorithms are able to achieve high-precision train position and speed tracking under varying railway operating conditions, as validated by theoretical analysis and numerical simulations.

  15. Inferring Markov chains: Bayesian estimation, model comparison, entropy rate, and out-of-class modeling.

    PubMed

    Strelioff, Christopher C; Crutchfield, James P; Hübler, Alfred W

    2007-07-01

    Markov chains are a natural and well understood tool for describing one-dimensional patterns in time or space. We show how to infer kth-order Markov chains, for arbitrary k, from finite data by applying Bayesian methods to both parameter estimation and model-order selection. Extending existing results for multinomial models of discrete data, we connect inference to statistical mechanics through information-theoretic (type theory) techniques. We establish a direct relationship between Bayesian evidence and the partition function which allows for straightforward calculation of the expectation and variance of the conditional relative entropy and the source entropy rate. Finally, we introduce a method that uses finite data-size scaling with model-order comparison to infer the structure of out-of-class processes.
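    The parameter-estimation step can be sketched for the first-order case: with a symmetric Dirichlet prior on each row of the transition matrix, the posterior-mean transition probabilities are smoothed frequency counts. A minimal sketch under that assumption (function and parameter names are illustrative; the paper treats arbitrary order k and performs model-order selection via Bayesian evidence):

```python
from collections import defaultdict

def posterior_mean_transitions(seq, alphabet, alpha=1.0):
    # Posterior-mean transition probabilities for a 1st-order Markov chain
    # under a symmetric Dirichlet(alpha) prior on each row: each observed
    # transition count is smoothed by alpha before normalizing.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    k = len(alphabet)
    probs = {}
    for a in alphabet:
        total = sum(counts[a].values())
        probs[a] = {b: (counts[a][b] + alpha) / (total + k * alpha)
                    for b in alphabet}
    return probs
```

    With alpha = 1 this reduces to Laplace (add-one) smoothing; each row sums to one by construction.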

  16. Ultrasmooth Patterned Metals for Plasmonics and Metamaterials

    NASA Astrophysics Data System (ADS)

    Nagpal, Prashant; Lindquist, Nathan C.; Oh, Sang-Hyun; Norris, David J.

    2009-07-01

    Surface plasmons are electromagnetic waves that can exist at metal interfaces because of coupling between light and free electrons. Restricted to travel along the interface, these waves can be channeled, concentrated, or otherwise manipulated by surface patterning. However, because surface roughness and other inhomogeneities have so far limited surface-plasmon propagation in real plasmonic devices, simple high-throughput methods are needed to fabricate high-quality patterned metals. We combined template stripping with precisely patterned silicon substrates to obtain ultrasmooth pure metal films with grooves, bumps, pyramids, ridges, and holes. Measured surface-plasmon propagation lengths on the resulting surfaces approach theoretical values for perfectly flat films. With the use of our method, we demonstrated structures that exhibit Raman scattering enhancements above 10⁷ for sensing applications and multilayer films for optical metamaterials.

  17. Apparatus and method for phase fronts based on superluminal polarization current

    DOEpatents

    Singleton, John [Los Alamos, NM]; Ardavan, Houshang [Cambridge, GB]; Ardavan, Arzhang [Cambridge, GB]

    2012-02-28

    An apparatus and method for a radiation source involving phase fronts emanating from an accelerated, oscillating polarization current whose distribution pattern moves superluminally (that is, faster than light in vacuo). Theoretical predictions and experimental measurements using an existing prototype superluminal source show that the phase fronts from such a source can be made very complex. Consequently, it will be very difficult for an aircraft imaged by such radiation to detect where the radiation has come from. Moreover, the complexity of the phase fronts makes it almost impossible for electronics on an aircraft to synthesize a rogue reflection. A simple directional antenna and timing system should, on the other hand, be sufficient for the radar operators to locate the aircraft, given knowledge of their own source's speed and modulation pattern.

  18. Adaptive Synchronization of Fractional Order Complex-Variable Dynamical Networks via Pinning Control

    NASA Astrophysics Data System (ADS)

    Ding, Da-Wei; Yan, Jie; Wang, Nian; Liang, Dong

    2017-09-01

    In this paper, the synchronization of fractional order complex-variable dynamical networks is studied using an adaptive pinning control strategy based on close center degree. Some effective criteria for global synchronization of fractional order complex-variable dynamical networks are derived based on Lyapunov stability theory. From the theoretical analysis, one concludes that under appropriate conditions, the complex-variable dynamical networks can realize global synchronization by using the proper adaptive pinning control method. Meanwhile, we succeed in solving the problem of how much coupling strength should be applied to ensure the synchronization of the fractional order complex networks. Therefore, compared with the existing results, the synchronization method in this paper is more general and convenient. This result extends the synchronization condition of real-variable dynamical networks to the complex-valued field, which makes our research more practical. Finally, two simulation examples show that the derived theoretical results are valid and the proposed adaptive pinning method is effective. Supported by the National Natural Science Foundation of China under Grant No. 61201227, the National Natural Science Foundation of China Guangdong Joint Fund under Grant No. U1201255, the Natural Science Foundation of Anhui Province under Grant No. 1208085MF93, the 211 Innovation Team of Anhui University under Grant Nos. KJTD007A and KJTD001B, and the Chinese Scholarship Council.

  19. Limitations of poster presentations reporting educational innovations at a major international medical education conference

    PubMed Central

    Gordon, Morris; Darbyshire, Daniel; Saifuddin, Aamir; Vimalesvaran, Kavitha

    2013-01-01

    Background: In most areas of medical research, the label of ‘quality’ is associated with well-accepted standards. Whilst its interpretation in the field of medical education is contentious, there is agreement on the key elements required when reporting novel teaching strategies. We set out to assess whether these features had been fulfilled by poster presentations at a major international medical education conference. Methods: Such posters were analysed in four key areas: reporting of theoretical underpinning, explanation of instructional design methods, descriptions of the resources needed for introduction, and the offering of materials to support dissemination. Results: Three hundred and twelve posters were reviewed, with 170 suitable for analysis. Forty-one percent described their methods of instruction or innovation design. Thirty-three percent gave details of equipment, and 29% of studies described resources that may be required for delivering such an intervention. Further resources to support dissemination of their innovation were offered by 36%. Twenty-three percent described the theoretical underpinning or conceptual frameworks upon which their work was based. Conclusions: These findings suggest that posters presenting educational innovation are currently limited in what they offer to educators. Presenters should seek to enhance their reporting of these crucial aspects by employing existing published guidance, and organising committees may wish to consider explicitly requesting such information at the time of initial submission. PMID:24199272

  20. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter for determining actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) and are shown to be consistent with the DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the current method's analysis of rainfall-runoff data. The WLS method is more robust than the OLS method and generates results that are more precise in the presence of heteroscedastic residuals in the rainfall-runoff data.
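    The regression idea behind the EIA estimate can be sketched as a least-squares fit of event runoff depth against rainfall depth, with the slope read as the EIA fraction for the category of events whose runoff comes only from connected impervious surfaces (a minimal OLS sketch; the paper's WLS variant additionally weights events to handle heteroscedastic residuals):

```python
def ols_eia_fraction(rainfall, runoff):
    # Ordinary least-squares fit of runoff = f * rainfall + b over a set of
    # storm events; the slope f is interpreted as the EIA fraction.
    n = len(rainfall)
    mx = sum(rainfall) / n
    my = sum(runoff) / n
    sxx = sum((x - mx) ** 2 for x in rainfall)
    sxy = sum((x - mx) * (y - my) for x, y in zip(rainfall, runoff))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

    Fitting perfectly proportional events, e.g. rainfall depths [10, 20, 30] mm against runoff depths [2, 4, 6] mm, returns a slope (EIA fraction) of 0.2.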

  1. Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.

    PubMed

    Breitmeyer, Bruno G

    2015-09-01

    Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods act at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods, together with additional reasonable theoretical considerations, suggest the following tentative hierarchy. At the highest level in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression fall at progressively lower levels, while unconscious processing at the lowest levels is indexed by eye-based binocular-rivalry suppression. Although the unconscious processing levels indexed by additional blinding methods are yet to be determined, tentative placements are given: lower in the hierarchy for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and higher for attentional blinding effects beyond the level indexed by the attentional blink. The full mapping of the levels in this functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. A New Improved Threshold Segmentation Method for Scanning Images of Reservoir Rocks Considering Pore Fractal Characteristics

    NASA Astrophysics Data System (ADS)

    Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua

    Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of the rocks and the fractal characteristics of the pore structures, a new improved image segmentation method is proposed that uses the calculated porosity of the core images as a constraint to obtain the best threshold. The results of comparative analysis show that the porosity method can, in theory, segment images best, but its actual segmentation effect deviates from the real situation. Due to the heterogeneity and isolated pores of cores, the porosity method, which takes the experimental porosity of the whole core as the criterion, cannot achieve the desired segmentation effect. In contrast, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images, segmenting each image according to its own calculated porosity. Moreover, basing the segmentation on calculated rather than measured porosity also greatly saves manpower and material resources, especially for tight rocks.
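
The porosity-constrained threshold choice described above can be sketched as follows. The synthetic two-phase image and its intensity parameters are assumptions for illustration, not data from the study: the threshold is simply the grayscale level whose segmented pore fraction best matches the porosity computed for that image.

```python
import numpy as np

def porosity_constrained_threshold(image, target_porosity):
    """Choose the grayscale threshold whose pore fraction (pores darker than
    grains) best matches the porosity calculated for this image."""
    thresholds = np.arange(1, 255)
    pore_fraction = np.array([(image < t).mean() for t in thresholds])
    return int(thresholds[np.argmin(np.abs(pore_fraction - target_porosity))])

# Synthetic two-phase core image: dark pores (~50) in a bright matrix (~200),
# with 20% porosity
rng = np.random.default_rng(1)
pores = rng.random((128, 128)) < 0.20
img = np.where(pores,
               rng.normal(50.0, 10.0, (128, 128)),
               rng.normal(200.0, 10.0, (128, 128))).clip(0, 255)

t = porosity_constrained_threshold(img, target_porosity=0.20)
binary = img < t                     # True = pore
print(t, binary.mean())
```

The recovered threshold falls between the two intensity modes, and the segmented pore fraction matches the imposed porosity constraint.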

  3. Analysis of cigarette purchase task instrument data with a left-censored mixed effects model.

    PubMed

    Liao, Wenjie; Luo, Xianghua; Le, Chap T; Chu, Haitao; Epstein, Leonard H; Yu, Jihnhee; Ahluwalia, Jasjit S; Thomas, Janet L

    2013-04-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. Although a purchase task instrument such as the cigarette purchase task (CPT) provides a comprehensive and inexpensive way to assess various aspects of a drug's RRE, conventional statistical methods that simply ignore the extra zeros or missing values in the data, or replace them with arbitrarily small consumption values (for example, 0.001), may not be adequate. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method, and future directions of research are also discussed.
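
A minimal sketch of the left-censoring idea (fixed effects only, not the paper's full mixed effects model): recorded zeros are treated as observations censored at a lower limit, contributing the normal CDF at that limit rather than the density to the likelihood. The simulated data and parameter values are assumptions for illustration.

```python
import numpy as np
from scipy import optimize, stats

def censored_negloglik(params, y, lower):
    """Left-censored normal likelihood: observed values contribute the density,
    values at the censoring limit contribute the CDF at that limit."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    cens = y <= lower
    ll = stats.norm.logpdf(y[~cens], mu, sigma).sum()
    ll += cens.sum() * stats.norm.logcdf(lower, mu, sigma)
    return -ll

# Simulated latent log-consumption, censored at zero (the recorded "zeros")
rng = np.random.default_rng(2)
latent = rng.normal(1.0, 2.0, size=2000)
limit = 0.0
y = np.maximum(latent, limit)

res = optimize.minimize(censored_negloglik, x0=[0.0, 0.0], args=(y, limit),
                        method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
naive_mu = y.mean()                  # ignoring censoring biases the mean upward
print(mu_hat, sigma_hat, naive_mu)
```

The censored MLE recovers the latent mean and spread, while the naive sample mean that treats the zeros at face value is biased upward.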

  4. Did you have an impact? A theory-based method for planning and evaluating knowledge-transfer and exchange activities in occupational health and safety.

    PubMed

    Kramer, Desré M; Wells, Richard P; Carlan, Nicolette; Aversa, Theresa; Bigelow, Philip P; Dixon, Shane M; McMillan, Keith

    2013-01-01

    Few evaluation tools are available to assess knowledge-transfer and exchange interventions. The objective of this paper is to develop and demonstrate a theory-based knowledge-transfer and exchange method of evaluation (KEME) that synthesizes 3 theoretical frameworks: the promoting action on research implementation of health services (PARiHS) model, the transtheoretical model of change, and a model of knowledge use. It proposes a new term, keme, to mean a unit of evidence-based transferable knowledge. The usefulness of the evaluation method is demonstrated with 4 occupational health and safety knowledge transfer and exchange (KTE) implementation case studies that are based upon the analysis of over 50 pre-existing interviews. The evaluation model has enabled us to better understand stakeholder feedback, frame our interpretation, and perform a more comprehensive evaluation of the knowledge-use outcomes of our KTE efforts.

  5. Visual attention capacity: a review of TVA-based patient studies.

    PubMed

    Habekost, Thomas; Starrfelt, Randi

    2009-02-01

    Psychophysical studies have identified two distinct limitations of visual attention capacity: processing speed and apprehension span. Using a simple test, these cognitive factors can be analyzed with Bundesen's Theory of Visual Attention (TVA). The method has strong specificity and sensitivity, and its measurements are highly reliable. As the method is theoretically founded, it also has high validity. TVA-based assessment has recently been used to investigate a broad range of neuropsychological and neurological conditions. We present the method, including the experimental paradigm and practical guidelines for patient testing, and review existing TVA-based patient studies organized by lesion anatomy. Lesions in three anatomical regions affect visual capacity: the parietal lobes, the frontal cortex and basal ganglia, and extrastriate cortex. Visual capacity thus depends on large, bilaterally distributed anatomical networks that include several regions outside the visual system. The two visual capacity parameters are functionally separable, but seem to rely on largely overlapping brain areas.

  6. On coupling fluid plasma and kinetic neutral physics models

    DOE PAGES

    Joseph, I.; Rensink, M. E.; Stotler, D. P.; ...

    2017-03-01

    The coupled fluid plasma and kinetic neutral physics equations are analyzed through theory and simulation of benchmark cases. It is shown that coupling methods that do not treat the coupling rates implicitly are restricted to short time steps for stability. Fast charge exchange, ionization and recombination coupling rates exist, even after constraining the solution by requiring that the neutrals are at equilibrium. For explicit coupling, the present implementation of Monte Carlo correlated sampling techniques does not allow for complete convergence in slab geometry. For the benchmark case, residuals decay with particle number and increase with grid size, indicating that they scale in a manner that is similar to the theoretical prediction for nonlinear bias error. Progress is reported on implementation of a fully implicit Jacobian-free Newton–Krylov coupling scheme. The present block Jacobi preconditioning method is still sensitive to time step, and methods that better precondition the coupled system are under investigation.
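
The time-step restriction of explicit coupling can be illustrated with a single stiff relaxation equation dy/dt = -k (y - y_eq), a toy stand-in for a fast charge-exchange rate (the numbers are illustrative, not from the paper): forward Euler is stable only for dt < 2/k, while backward Euler converges for any dt.

```python
def explicit_euler(y0, k, y_eq, dt, steps):
    # y_{n+1} = y_n - dt * k * (y_n - y_eq); stable only when dt < 2/k
    y = y0
    for _ in range(steps):
        y = y - dt * k * (y - y_eq)
    return y

def implicit_euler(y0, k, y_eq, dt, steps):
    # y_{n+1} = (y_n + dt*k*y_eq) / (1 + dt*k); unconditionally stable
    y = y0
    for _ in range(steps):
        y = (y + dt * k * y_eq) / (1.0 + dt * k)
    return y

k, y_eq = 100.0, 1.0            # fast coupling rate and equilibrium value
print(explicit_euler(5.0, k, y_eq, 0.005, 200))  # dt < 2/k: relaxes to y_eq
print(explicit_euler(5.0, k, y_eq, 0.1, 50))     # dt > 2/k: oscillates and diverges
print(implicit_euler(5.0, k, y_eq, 0.1, 50))     # same large dt: still converges
```

The explicit error amplification factor is (1 - dt*k), so a large coupling rate forces dt below 2/k; the implicit factor 1/(1 + dt*k) damps the error for every positive dt.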

  7. Analysis of Cigarette Purchase Task Instrument Data with a Left-Censored Mixed Effects Model

    PubMed Central

    Liao, Wenjie; Luo, Xianghua; Le, Chap; Chu, Haitao; Epstein, Leonard H.; Yu, Jihnhee; Ahluwalia, Jasjit S.; Thomas, Janet L.

    2015-01-01

    The drug purchase task is a frequently used instrument for measuring the relative reinforcing efficacy (RRE) of a substance, a central concept in psychopharmacological research. While a purchase task instrument, such as the cigarette purchase task (CPT), provides a comprehensive and inexpensive way to assess various aspects of a drug’s RRE, conventional statistical methods that simply ignore the extra zeros or missing values in the data, or replace them with arbitrarily small consumption values (e.g. 0.001), may not be adequate. We applied the left-censored mixed effects model to CPT data from a smoking cessation study of college students and demonstrated its superiority over the existing methods with simulation studies. Theoretical implications of the findings, limitations of the proposed method and future directions of research are also discussed. PMID:23356731

  8. Invariant and partially-invariant solutions of the equations describing a non-stationary and isentropic flow for an ideal and compressible fluid in (3 + 1) dimensions

    NASA Astrophysics Data System (ADS)

    Grundland, A. M.; Lalague, L.

    1996-04-01

    This paper presents a new method of constructing certain classes of solutions of a system of partial differential equations (PDEs) describing the non-stationary and isentropic flow of an ideal compressible fluid. A generalization of the symmetry reduction method to the case of partially-invariant solutions (PISs) has been formulated. We present a new algorithm for constructing PISs and discuss in detail the necessary conditions for the existence of non-reducible PISs. All these solutions have the defect structure 0305-4470/29/8/019/img1 and are computed from four-dimensional symmetric subalgebras. These theoretical considerations are illustrated by several examples. Finally, some new classes of invariant solutions obtained by the symmetry reduction method are included. These solutions represent central, conical, rational, spherical, cylindrical and non-scattering double waves.

  9. On the importance of cotranscriptional RNA structure formation

    PubMed Central

    Lai, Daniel; Proctor, Jeff R.; Meyer, Irmtraud M.

    2013-01-01

    The expression of genes, both coding and noncoding, can be significantly influenced by RNA structural features of their corresponding transcripts. There is by now mounting experimental and some theoretical evidence that structure formation in vivo starts during transcription and that this cotranscriptional folding determines the functional RNA structural features that are being formed. Several decades of research in bioinformatics have resulted in a wide range of computational methods for predicting RNA secondary structures. Almost all state-of-the-art methods in terms of prediction accuracy, however, completely ignore the process of structure formation and focus exclusively on the final RNA structure. This review hopes to bridge this gap. We summarize the existing evidence for cotranscriptional folding and then review the different, currently used strategies for RNA secondary-structure prediction. Finally, we propose a range of ideas on how state-of-the-art methods could be potentially improved by explicitly capturing the process of cotranscriptional structure formation. PMID:24131802

  10. Finding the truth in the noise - potentials and limitations of big ecological datasets for new knowledge generation

    NASA Astrophysics Data System (ADS)

    Kutsch, Werner Leo

    2016-04-01

    Nowadays, technical possibilities in Earth Observation provide enormous amounts of data, opening great opportunities to review existing ecological theories and develop new ones. Several examples are shown in the presentation in order to discuss the potentials and limitations of the underlying concepts and to provide feedback to the large infrastructures carrying out ecological observations or experiments. Since different ecological questions or theoretical approaches require different methods, data interoperability and co-location are practical challenges. Nevertheless, we also have to learn that not every method is applicable in all ecosystems and that data have to be critically scrutinized before we can be sure of drawing sound ecological conclusions. This is time-consuming and often frustrating, since we may learn that we have invested a great deal of work and money in building infrastructure at a site that is not suitable for the method.

  11. Complex dark-field contrast and its retrieval in x-ray phase contrast imaging implemented with Talbot interferometry.

    PubMed

    Yang, Yi; Tang, Xiangyang

    2014-10-01

    Under the existing theoretical framework of x-ray phase contrast imaging methods implemented with Talbot interferometry, the dark-field contrast refers to the reduction in interference fringe visibility due to small-angle x-ray scattering by the subpixel microstructures of an object to be imaged. This study investigates how an object's subpixel microstructures can also affect the phase of the intensity oscillations. Instead of assuming that the object's subpixel microstructures are distributed in space randomly, the authors' theoretical derivation starts by assuming that an object's attenuation projection and phase shift vary at a characteristic size that is not smaller than the period of the analyzer grating G₂ and a characteristic length dc. Based on the paraxial Fresnel-Kirchhoff theory, analytic formulae characterizing the zeroth- and first-order Fourier coefficients of the x-ray irradiance recorded at each detector cell are derived. The concept of complex dark-field contrast is then introduced to quantify the influence of the object's microstructures on both the interference fringe visibility and the phase of the intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues and high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. Through a computer simulation study with a specially designed numerical phantom, the authors evaluate and validate the derived analytic formulae and the proposed retrieval method. Both the theoretical analysis and the computer simulation study show that the effect of an object's subpixel microstructures on the x-ray phase contrast imaging method implemented with Talbot interferometry can be fully characterized by a complex dark-field contrast. The imaginary part of the complex dark-field contrast quantifies the influence of the object's subpixel microstructures on the phase of the intensity oscillations. Furthermore, at relatively high energies, it can be retrieved for imaging in soft tissues with a method based on the phase-attenuation duality. The analytic formulae derived in this work to characterize the complex dark-field contrast in the x-ray phase contrast imaging method implemented with Talbot interferometry are significant, and may initiate more activity in the research and development of x-ray differential phase contrast imaging for extensive biomedical applications.

  12. Communication: Electron ionization of DNA bases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rahman, M. A.; Krishnakumar, E., E-mail: ekkumar@tifr.res.in

    2016-04-28

    No reliable experimental data exist for the partial and total electron ionization cross sections of DNA bases, which are crucial for modeling radiation damage in the genetic material of living cells. We have measured a complete set of absolute partial electron ionization cross sections up to 500 eV for DNA bases for the first time by using the relative flow technique. These partial cross sections are summed to obtain total ion cross sections for all four bases and are compared with the existing theoretical calculations and the only set of measured absolute cross sections. Our measurements clearly resolve the existing discrepancy between the theoretical and experimental results, thereby providing for the first time reliable numbers for partial and total ion cross sections for these molecules. The results of the fragmentation analysis of adenine support the theory of its formation in space.

  13. Roles Played by Electrostatic Waves in Producing Radio Emissions

    NASA Technical Reports Server (NTRS)

    Cairns, Iver H.

    2000-01-01

    Processes in which electromagnetic radiation is produced directly or indirectly via intermediate waves are reviewed. It is shown that strict theoretical constraints exist for electrons to produce nonthermal levels of radiation directly by the Cerenkov or cyclotron resonances. In contrast, indirect emission processes in which intermediary plasma waves are converted into radiation are often favored on general and specific grounds. Four classes of mechanisms involving the conversion of electrostatic waves into radiation are linear mode conversion, hybrid linear/nonlinear mechanisms, nonlinear wave-wave and wave-particle processes, and radiation from localized wave packets. These processes are reviewed theoretically and observational evidence summarized for their occurrence. Strong evidence exists that specific nonlinear wave processes and mode conversion can explain quantitatively phenomena involving type III solar radio bursts and ionospheric emissions. On the other hand, no convincing evidence exists that magnetospheric continuum radiation is produced by mode conversion instead of nonlinear wave processes. Further research on these processes is needed.

  14. Experimental investigation of leaky lamb modes by an optically induced grating.

    PubMed

    Van de Rostyne, Kris; Glorieux, Christ; Gao, Weimin; Lauriks, Walter; Thoen, Jan

    2002-09-01

    By removing the symmetry of a free plate configuration, fluid loading significantly modifies the nature of acoustic waves travelling along a plate, and it even gives rise to new acoustic modes. We present theoretical predictions for the existence, dispersive behavior, and spatial distribution of leaky Lamb waves in a fluid-loaded film. Although Lamb modes are often investigated by studying the radiated fluid waves resulting from their leakage, here their properties are assessed by detecting the wave displacements directly using laser beam deflection. By using crossed laser beam excitation, the detection and analysis of the different modes is done at a fixed wavelength, allowing one to verify the existence, the velocity, and the damping of each predicted mode in a simple and unambiguous way. Our theoretical predictions for the nature of the modes in a water-loaded Plexiglas film, including parts of looping modes, are experimentally confirmed.

  15. Using experimental studies and theoretical calculations to analyze the molecular mechanism of coumarin, p-hydroxybenzoic acid, and cinnamic acid

    NASA Astrophysics Data System (ADS)

    Hsieh, Tiane-Jye; Su, Chia-Ching; Chen, Chung-Yi; Liou, Chyong-Huey; Lu, Li-Hwa

    2005-05-01

    Three natural products, coumarin (1), p-hydroxybenzoic acid (2), and trans-cinnamic acid (3), were isolated from indigenous cinnamon, and their structures, including relative stereochemistry, were elucidated on the basis of spectroscopic data and theoretical calculations. Their stereochemical structures were determined by NMR spectroscopy, mass spectrometry, and X-ray crystallography. The complex of p-hydroxybenzoic acid with water is reported to show the existence of two hydrogen bonds, formed between the water molecule and two hydrogen-accepting carbonyl oxygens of p-hydroxybenzoic acid. This two-hydrogen-bond intermolecular interaction in the water/p-hydroxybenzoic acid model system was investigated. An experimental study and a theoretical analysis using the B3LYP/6-31G* method in the GAUSSIAN-03 package were conducted on the three natural products, with the theoretical results supplemented by experimental data. Optimal geometric structures of the three compounds were also determined, and the calculated molecular mechanics compared quite well with the experimental data. The ionization potentials, highest occupied molecular orbital energies, lowest unoccupied molecular orbital energies, energy gaps, heats of formation, atomization energies, and vibrational frequencies of the compounds were also calculated. The calculations show that the three natural products are stable molecules, and they characterize their reactivity and various other physical properties. The study also provides an explicit understanding of the stereochemical structures and thermodynamic properties of the three natural products.

  16. Linear transforms for Fourier data on the sphere: application to high angular resolution diffusion MRI of the brain.

    PubMed

    Haldar, Justin P; Leahy, Richard M

    2013-05-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.
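
The FRT's action in a spherical-harmonic basis can be sketched compactly: the transform is diagonal there, scaling every degree-l coefficient by 2πP_l(0), so odd degrees vanish. The coefficient values below are arbitrary illustrative inputs; the FRACT of the paper replaces this kernel with a Funk-Radon-plus-cosine weighting, which is not reproduced here.

```python
import numpy as np
from scipy.special import eval_legendre

def frt_sh(coeffs, degrees):
    """Funk-Radon Transform in the spherical-harmonic basis: the FRT is
    diagonal there, scaling each degree-l coefficient by 2*pi*P_l(0)."""
    return 2.0 * np.pi * eval_legendre(degrees, 0.0) * coeffs

# Arbitrary even-degree coefficients (odd degrees vanish for antipodally
# symmetric diffusion signals)
degrees = np.array([0, 2, 4])
coeffs = np.array([1.0, 0.5, 0.25])
out = frt_sh(coeffs, degrees)
print(out)   # scaled by 2*pi*P_l(0): P_0(0)=1, P_2(0)=-1/2, P_4(0)=3/8
```

Working in the spherical-harmonic basis is what makes such transforms cheap to apply: one multiplication per coefficient instead of a numerical great-circle integral.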

  17. Elastic solitons in delaminated bars: splitting leads to fission

    NASA Astrophysics Data System (ADS)

    Samsonov, A. M.; Dreiden, G. V.; Khusnutdinova, K. R.; Semenova, I. V.

    2008-06-01

    Recent theoretical and successful experimental studies have confirmed the existence and demonstrated the main properties of bulk strain solitary waves in nonlinearly elastic solid waveguides. Our current research is devoted to nonlinear wave processes in layered elastic waveguides with inhomogeneities modelling delamination. We present the first theoretical and experimental results showing the influence of delamination on the parameters of the longitudinal strain solitary wave.

  18. A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events

    NASA Astrophysics Data System (ADS)

    Kholodovsky, V.

    2017-12-01

    Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As the climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often used independently to model those phenomena. To better assess the performance of climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used for comparing mean states in most cases lack an adequate theoretical justification for benchmarking extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying the threshold distribution seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.

  19. Pinning synchronization of delayed complex dynamical networks with nonlinear coupling

    NASA Astrophysics Data System (ADS)

    Cheng, Ranran; Peng, Mingshu; Yu, Weibin

    2014-11-01

    In this paper, we find that complex networks with the Watts-Strogatz or scale-free BA random topological architecture can be synchronized more easily by pin-controlling fewer nodes than regular systems. Theoretical analysis is included by means of Lyapunov functions and linear matrix inequalities (LMI) to make all nodes reach complete synchronization. Numerical examples are also provided to illustrate the importance of our theoretical analysis, which implies that there exists a gap between the theoretical prediction and numerical results about the minimum number of pinning controlled nodes.
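
A minimal numerical sketch of the pinning idea on a linear, diffusively coupled network (a toy stand-in: the paper treats delayed, nonlinearly coupled networks with LMI-based Lyapunov analysis, none of which is reproduced here): controlling a single node of a ring drags every node to the target state through the coupling.

```python
import numpy as np

def simulate_pinning(adj, pinned, target, c=1.0, g=5.0, dt=0.01, steps=5000):
    """Diffusively coupled linear network: only the `pinned` nodes receive the
    control -g*(x_i - target); coupling drags every other node along."""
    n = adj.shape[0]
    lap = np.diag(adj.sum(axis=1)) - adj       # graph Laplacian
    x = np.arange(n, dtype=float)              # spread-out initial states
    for _ in range(steps):
        control = np.zeros(n)
        control[pinned] = -g * (x[pinned] - target)
        x = x + dt * (-c * lap @ x + control)
    return x

# Ring of 6 nodes, pinning a single node
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0

x_final = simulate_pinning(A, pinned=[0], target=2.0)
print(x_final)                                 # every node near the target 2.0
```

Without the pinning term, the Laplacian dynamics would only reach consensus at the average of the initial states; pinning one node shifts the whole network to the prescribed target.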

  20. Positron follow-up in liquid water: I. A new Monte Carlo track-structure code.

    PubMed

    Champion, C; Le Loirec, C

    2006-04-07

    When biological matter is irradiated by charged particles, a wide variety of interactions occur, which lead to a deep modification of the cellular environment. To understand the fine structure of the microscopic distribution of energy deposits, Monte Carlo event-by-event simulations are particularly suitable. However, the development of these track-structure codes needs accurate interaction cross sections for all the electronic processes: ionization, excitation, positronium formation and even elastic scattering. Under these conditions, we have recently developed a Monte Carlo code for positrons in water, the latter being commonly used to simulate the biological medium. All the processes are studied in detail via theoretical differential and total cross-section calculations performed by using partial wave methods. Comparisons with existing theoretical and experimental data in terms of stopping powers, mean energy transfers and ranges show very good agreements. Moreover, thanks to the theoretical description of positronium formation, we have access, for the first time, to the complete kinematics of the electron capture process. Then, the present Monte Carlo code is able to describe the detailed positronium history, which will provide useful information for medical imaging (like positron emission tomography) where improvements are needed to define with the best accuracy the tumoural volumes.
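
Event-by-event sampling of the kind used in such track-structure codes can be sketched as follows; the cross-section values are invented placeholders, not the positron-water data of the paper. A free-flight length is drawn from an exponential distribution set by the total cross section, and the interaction channel is then chosen in proportion to the partial cross sections.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented placeholder partial cross sections for a single positron energy
# (arbitrary units): ionization, excitation, positronium formation, elastic
channels = ["ionization", "excitation", "Ps formation", "elastic"]
sigma = np.array([4.0, 2.0, 1.0, 3.0])
sigma_tot = sigma.sum()
n_density = 1.0                                # scatterer number density

def next_event():
    """Free-flight length ~ Exp(mean 1/(n*sigma_tot)); the interaction channel
    is chosen with probability sigma_i / sigma_tot."""
    step = rng.exponential(1.0 / (n_density * sigma_tot))
    ch = rng.choice(len(sigma), p=sigma / sigma_tot)
    return step, channels[ch]

events = [next_event() for _ in range(20000)]
mean_path = float(np.mean([s for s, _ in events]))
frac_ion = float(np.mean([c == "ionization" for _, c in events]))
print(mean_path, frac_ion)   # ~1/(n*sigma_tot) and ~sigma_ion/sigma_tot
```

Repeating this draw until the particle's energy is exhausted, with energy-dependent cross-section tables, is the skeleton of an event-by-event track-structure simulation.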

  1. Crystal structure, vibrational and theoretical studies of bis(4-amino-1,2,4-triazolium) hexachloridostannate(IV)

    NASA Astrophysics Data System (ADS)

    Daszkiewicz, Marek; Marchewka, Mariusz K.

    2012-06-01

    The X-ray structure of a new hybrid organic-inorganic compound, bis(4-amino-1,2,4-triazolium) hexachloridostannate(IV), [1t(4at)]2SnCl6 (space group P1̄), was determined. The crystal structure of 4-amino-1,2,4-triazole (space group Pbca) was reinvestigated, and a non-planar orientation of the NH2 group was found. The geometry of the amino group does not change significantly upon protonation. The route of protonation of 4-aminotriazole and the tautomer equilibrium constants for the cationic forms were studied theoretically by means of the B3LYP/6-31G* method. The most stable monoprotonated species is 1H-trans-4-amino-1,2,4-triazole, 1t(4at)+, whereas the final product of the protonation route is 12(4at)2+. Potential Energy Distribution (PED) analysis was carried out for two conformers, 1c(4at)+ and 1t(4at)+. Very good agreement between theoretical and experimental frequencies was achieved owing to the very weak interactions existing in [1t(4at)]2SnCl6. Infrared and Raman bands were assigned on the basis of the PED analysis. Comparison of the vibrational spectra of [1t(4at)]2SnCl6 and [1t(4at)]Cl indicates significantly weaker intermolecular interactions in the former compound.

  2. A Multi-Level Systems Perspective for the Science of Team Science

    PubMed Central

    Börner, Katy; Contractor, Noshir; Falk-Krzesinski, Holly J.; Fiore, Stephen M.; Hall, Kara L.; Keyton, Joann; Spring, Bonnie; Stokols, Daniel; Trochim, William; Uzzi, Brian

    2012-01-01

    This Commentary describes recent research progress and professional developments in the study of scientific teamwork, an area of inquiry termed the “science of team science” (SciTS, pronounced “sahyts”). It proposes a systems perspective that incorporates a mixed-methods approach to SciTS that is commensurate with the conceptual, methodological, and translational complexities addressed within the SciTS field. The theoretically grounded and practically useful framework is intended to integrate existing and future lines of SciTS research to facilitate the field’s evolution as it addresses key challenges spanning macro, meso, and micro levels of analysis. PMID:20844283

  3. Fluid/Solid Boundary Conditions in Non-Isothermal Systems

    NASA Technical Reports Server (NTRS)

    Rosner, Daniel E.

    1999-01-01

    The existing theoretical research concerned with thermal creep at fluid/solid interfaces is briefly reviewed, and the importance of microgravity-based experimental data is then discussed. It is noted that the ultimate goal of this research is a rational molecular level theory that predicts the dependence of a dimensionless thermal creep coefficient, Ctc, on relevant dimensionless parameters describing the way fluid molecules interact with the solid surface and how they interact among themselves. The discussion covers thermophoresis of isolated solid spheres and aggregates in gases; solid sphere thermophoresis in liquids and dense vapors; thermophoresis of small immiscible liquid droplets; and applications of the direct simulation Monte Carlo method.

  4. Experimental study on slow flexural waves around the defect modes in a phononic crystal beam using fiber Bragg gratings

    NASA Astrophysics Data System (ADS)

    Chuang, Kuo-Chih; Zhang, Zhi-Qiang; Wang, Hua-Xin

    2016-12-01

    This work experimentally studies the influence of point defect modes on the group velocity of flexural waves in a phononic crystal Timoshenko beam. Using the transfer matrix method with a supercell technique, the band structures and the group velocities around the defect modes are obtained theoretically. In particular, to demonstrate the existence of the localized defect modes inside the band gaps, a high-sensitivity fiber Bragg grating sensing system is set up and the displacement transmittance is measured. Slow propagation of flexural waves via defect coupling in the phononic crystal beam is then demonstrated experimentally with Hanning-windowed tone burst excitations.

  5. E-documentation as a process management tool for nursing care in hospitals.

    PubMed

    Rajkovic, Uros; Sustersic, Olga; Rajkovic, Vladislav

    2009-01-01

    Appropriate documentation plays a key role in process management in nursing care. It includes holistic data management based on the patient's data along the clinical path. We developed an e-documentation model that follows the process method of work in nursing care. It assesses the patient's status on the basis of Henderson's theoretical model of 14 basic living activities and is aligned with internationally recognized nursing classifications. E-documentation development requires reengineering of existing documentation and facilitates process reengineering. A prototype e-nursing documentation solution, already under testing at the university medical centres in Ljubljana and Maribor, is described.

  6. Emotion Generation and Emotion Regulation: One or Two Depends on Your Point of View

    PubMed Central

    Gross, James J.; Barrett, Lisa Feldman

    2010-01-01

    Emotion regulation has the odd distinction of being a wildly popular construct whose scientific existence is in considerable doubt. In this article, we discuss the confusion about whether emotion generation and emotion regulation can and should be distinguished from one another. We describe a continuum of perspectives on emotion, and highlight how different (often mutually incompatible) perspectives on emotion lead to different views about whether emotion generation and emotion regulation can be usefully distinguished. We argue that making differences in perspective explicit serves the function of allowing researchers with different theoretical commitments to collaborate productively despite seemingly insurmountable differences in terminology and methods. PMID:21479078

  7. A corrugated perfect magnetic conductor surface supporting spoof surface magnon polaritons.

    PubMed

    Liu, Liang-liang; Li, Zhuo; Gu, Chang-qing; Ning, Ping-ping; Xu, Bing-zheng; Niu, Zhen-yi; Zhao, Yong-jiu

    2014-05-05

    In this paper, we demonstrate that spoof surface magnon polaritons (SSMPs) can propagate along a corrugated perfect magnetic conductor (PMC) surface. By the duality theorem, the surface electromagnetic modes on corrugated PMC surfaces are shown to be transverse electric (TE) modes, in contrast to the transverse magnetic (TM) modes of spoof surface plasmon polaritons (SSPPs) excited on corrugated perfect electric conductor surfaces. Theoretical deduction through the modal expansion method and simulation results clearly verify that SSMPs share the same dispersion relationship as SSPPs. It is worth noting that this metamaterial should exhibit properties and potential applications similar to those of SSPPs in a large number of areas.
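    For orientation, the textbook effective-medium dispersion for spoof surface plasmons on a perfectly conducting comb (groove width a, period d, depth h) can be evaluated directly; the paper's PMC/TE case is the dual of this TM relation. The geometry values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Textbook spoof-SPP dispersion sketch (effective-medium approximation):
#   kx = k0 * sqrt(1 + (a/d)**2 * tan(k0*h)**2)
a, d, h = 0.4, 1.0, 1.0                  # illustrative groove geometry
k0 = np.linspace(0.01, 1.5, 500)         # free-space wavenumber, below pi/(2h)
kx = k0 * np.sqrt(1 + (a / d) ** 2 * np.tan(k0 * h) ** 2)

assert np.all(kx >= k0)                  # bound modes lie right of the light line
assert kx[-1] / k0[-1] > 5               # dispersion flattens near k0*h = pi/2
```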

  8. Free energy landscapes of short peptide chains using adaptively biased molecular dynamics

    NASA Astrophysics Data System (ADS)

    Karpusenka, Vadzim; Babin, Volodymyr; Roland, Christopher; Sagui, Celeste

    2009-03-01

    We present the results of a computational study of the free energy landscapes of short polypeptide chains, as a function of several reaction coordinates meant to distinguish between several known types of helices. The free energy landscapes were calculated using the recently developed adaptively biased molecular dynamics method, followed up with equilibrium ``umbrella correction'' runs. Specific polypeptides investigated include small chains of pure and mixed alanine, glutamate, leucine, lysine and methionine (all amino acids with strong helix-forming propensities), as well as glycine and proline (which have low helix-forming propensities), and tyrosine, serine and arginine. Our results are consistent with the existing experimental and other theoretical evidence.

  9. Breaking Megrelishvili protocol using matrix diagonalization

    NASA Astrophysics Data System (ADS)

    Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio

    2018-03-01

    In this article we conduct a theoretical security analysis of Megrelishvili protocol—a linear algebra-based key agreement between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP) as a mathematical problem that strongly relates to the security of Megrelishvili protocol. In particular, we investigate the asymptotic upper bounds for the running time and memory requirement of the MVMP that involves a diagonalizable public matrix. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all previously existing algorithms. We also find an important counterintuitive result: the utilization of a primitive matrix in Megrelishvili protocol makes the protocol more vulnerable to attacks.
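    The speed-up that diagonalization buys can be sketched generically (an illustration of the underlying linear-algebra fact, not the authors' attack): once M = P D P⁻¹, any power Mⁿ reduces to powering the eigenvalues.

```python
import numpy as np

# Generic sketch: powers of a diagonalizable matrix via eigendecomposition.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))          # stand-in for a public matrix
v = rng.standard_normal(4)               # stand-in for a public vector
n = 12

evals, P = np.linalg.eig(M)              # M = P D P^{-1}
Mn_fast = (P @ np.diag(evals ** n) @ np.linalg.inv(P)).real
Mn_slow = np.linalg.matrix_power(M, n)   # reference: repeated multiplication

assert np.allclose(v @ Mn_fast, v @ Mn_slow)
```

One eigendecomposition replaces the chain of full matrix products, which is the source of the asymptotic advantage the abstract describes.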

  10. Gesellschaft fuer angewandte Mathematik und Mechanik, Scientific Annual Meeting, Universitaet Stuttgart, Federal Republic of Germany, Apr. 13-17, 1987, Reports

    NASA Astrophysics Data System (ADS)

    Recent experimental, theoretical, and numerical investigations of problems in applied mechanics are discussed in reviews and reports. The fields covered include vibration and stability; the mechanics of elastic and plastic materials; fluid mechanics; the numerical treatment of differential equations; finite and boundary elements; optimization, decision theory, stochastics, and actuarial analysis; applied analysis and mathematical physics; and numerical analysis. Reviews are presented on mathematical applications of geometric-optics methods, biomechanics and implant technology, vibration theory in engineering, the stiffness and strength of damaged materials, and the existence of slow steady flows of viscoelastic fluids of integral type.

  11. FOR LOVE OR REWARD? CHARACTERISING PREFERENCES FOR GIVING TO PARENTS IN AN EXPERIMENTAL SETTING*

    PubMed Central

    Porter, Maria; Adams, Abi

    2017-01-01

    Understanding the motivations behind intergenerational transfers is an important and active research area in economics. The existence and responsiveness of familial transfers have consequences for the design of intra- and intergenerational redistributive programmes, particularly as such programmes may crowd out private transfers amongst altruistic family members. Yet, despite theoretical and empirical advances in this area, significant gaps in our knowledge remain. In this article, we advance the current literature by shedding light on both the motivation for providing intergenerational transfers, and on the nature of preferences for such giving behaviour, by using experimental techniques and revealed preference methods. PMID:29151611

  12. A method for digital image registration using a mathematical programming technique

    NASA Technical Reports Server (NTRS)

    Yao, S. S.

    1973-01-01

    A new algorithm based on a nonlinear programming technique to correct the geometrical distortions of one digital image with respect to another is discussed. This algorithm promises to be superior to existing ones in that it is capable of treating localized differential scaling, translational and rotational errors over the whole image plane. A series of piece-wise 'rubber-sheet' approximations are used, constrained in such a manner that a smooth approximation over the entire image can be obtained. The theoretical derivation is included. The result of using the algorithm to register four channel S065 Apollo IX digitized photography over Imperial Valley, California, is discussed in detail.

  13. Amplified total internal reflection: theory, analysis, and demonstration of existence via FDTD.

    PubMed

    Willis, Keely J; Schneider, John B; Hagness, Susan C

    2008-02-04

    The explanation of wave behavior upon total internal reflection from a gainy medium has defied consensus for 40 years. We examine this question using both the finite-difference time-domain (FDTD) method and theoretical analyses. FDTD simulations of a localized wave impinging on a gainy half space are based directly on Maxwell's equations and make no underlying assumptions. They reveal that amplification occurs upon total internal reflection from a gainy medium; conversely, amplification does not occur for incidence below the critical angle. Excellent agreement is obtained between the FDTD results and an analytical formulation that employs a new branch cut in the complex "propagation-constant" plane.

  14. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan B., E-mail: wollaber@lanl.go; Larsen, Edward W., E-mail: edlarsen@umich.ed

    2011-02-20

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used 'Implicit Monte Carlo' (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or 'Semi-Analog Monte Carlo' (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ≤ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.

  15. A linear stability analysis for nonlinear, grey, thermal radiative transfer problems

    NASA Astrophysics Data System (ADS)

    Wollaber, Allan B.; Larsen, Edward W.

    2011-02-01

    We present a new linear stability analysis of three time discretizations and Monte Carlo interpretations of the nonlinear, grey thermal radiative transfer (TRT) equations: the widely used “Implicit Monte Carlo” (IMC) equations, the Carter Forest (CF) equations, and the Ahrens-Larsen or “Semi-Analog Monte Carlo” (SMC) equations. Using a spatial Fourier analysis of the 1-D Implicit Monte Carlo (IMC) equations that are linearized about an equilibrium solution, we show that the IMC equations are unconditionally stable (undamped perturbations do not exist) if α, the IMC time-discretization parameter, satisfies 0.5 < α ⩽ 1. This is consistent with conventional wisdom. However, we also show that for sufficiently large time steps, unphysical damped oscillations can exist that correspond to the lowest-frequency Fourier modes. After numerically confirming this result, we develop a method to assess the stability of any time discretization of the 0-D, nonlinear, grey, thermal radiative transfer problem. Subsequent analyses of the CF and SMC methods then demonstrate that the CF method is unconditionally stable and monotonic, but the SMC method is conditionally stable and permits unphysical oscillatory solutions that can prevent it from reaching equilibrium. This stability theory provides new conditions on the time step to guarantee monotonicity of the IMC solution, although they are likely too conservative to be used in practice. Theoretical predictions are tested and confirmed with numerical experiments.
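    The flavor of this stability result can be reproduced with a scalar analogue (our construction, not the paper's TRT analysis): the θ-weighted scheme for du/dt = -λu is unconditionally stable for θ ∈ (0.5, 1], yet its amplification factor turns negative for large time steps, producing damped sign-flipping oscillations akin to the unphysical oscillations described above.

```python
# Scalar theta-scheme analogue (illustrative; not the IMC/TRT equations).
# For du/dt = -lam*u, one step maps u -> g*u with amplification factor
#   g = (1 - (1-theta)*lam*dt) / (1 + theta*lam*dt).
def amp_factor(theta, lam, dt):
    return (1 - (1 - theta) * lam * dt) / (1 + theta * lam * dt)

for dt in (0.1, 1.0, 10.0, 100.0):
    assert abs(amp_factor(0.6, 1.0, dt)) < 1   # stable for any step size
assert amp_factor(0.6, 1.0, 100.0) < 0         # but oscillatory when dt is large
```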

  16. Complex amplitude reconstruction for dynamic beam quality M2 factor measurement with self-referencing interferometer wavefront sensor.

    PubMed

    Du, Yongzhao; Fu, Yuqing; Zheng, Lixin

    2016-12-20

    A real-time complex amplitude reconstruction method for determining the dynamic beam quality M2 factor based on a Mach-Zehnder self-referencing interferometer wavefront sensor is developed. By using the proposed complex amplitude reconstruction method, full characterization of the laser beam, including amplitude (intensity profile) and phase information, can be reconstructed from a single interference pattern with the Fourier fringe pattern analysis method in a one-shot measurement. With the reconstructed complex amplitude, the beam fields at any position z along the propagation direction can be obtained using diffraction integral theory. The beam quality M2 factor of the dynamic beam is then calculated according to the method specified in Standard ISO11146. The feasibility of the proposed method is demonstrated with theoretical analysis and experiment, including static and dynamic beam processes. The method is simple and fast, operates without moving parts, and allows investigation of laser beams under conditions that are inaccessible to existing methods.
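    One ingredient of any ISO 11146-based M² measurement is the second-moment (D4σ) beam width at each plane z. A minimal sketch on a synthetic Gaussian profile follows (illustrative only; the fringe-analysis reconstruction itself is not reproduced here).

```python
import numpy as np

# Second-moment (D4sigma) width of a synthetic 1-D Gaussian intensity profile.
# For I ~ exp(-2 x^2 / w^2) the second moment is sigma^2 = w^2/4, so the
# D4sigma width 4*sigma recovers the familiar 1/e^2 diameter 2w.
x = np.linspace(-5e-3, 5e-3, 2001)       # metres, uniform grid
w = 1e-3                                 # 1/e^2 radius of the test beam
I = np.exp(-2 * (x / w) ** 2)

x0 = (x * I).sum() / I.sum()             # intensity centroid
var = ((x - x0) ** 2 * I).sum() / I.sum()
d4sigma = 4 * np.sqrt(var)

assert abs(d4sigma - 2 * w) / (2 * w) < 1e-3
```

Repeating this at several planes z and fitting the ISO 11146 hyperbola to d4sigma(z) yields M².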

  17. An extended GS method for dense linear systems

    NASA Astrophysics Data System (ADS)

    Niki, Hiroshi; Kohno, Toshiyuki; Abe, Kuniyoshi

    2009-09-01

    Davey and Rosindale [K. Davey, I. Rosindale, An iterative solution scheme for systems of boundary element equations, Internat. J. Numer. Methods Engrg. 37 (1994) 1399-1411] derived the GSOR method, which uses an upper triangular matrix [Omega] in order to solve dense linear systems. By applying functional analysis, the authors presented an expression for the optimum [Omega]. Moreover, Davey and Bounds [K. Davey, S. Bounds, A generalized SOR method for dense linear systems of boundary element equations, SIAM J. Comput. 19 (1998) 953-967] also introduced further interesting results. In this note, we employ a matrix analysis approach to investigate these schemes, and derive theorems that compare these schemes with existing preconditioners for dense linear systems. We show that the convergence rate of the Gauss-Seidel method with preconditioner PG is superior to that of the GSOR method. Moreover, we define some splittings associated with the iterative schemes. Some numerical examples are reported to confirm the theoretical analysis. We show that the EGS method with preconditioner produces an extremely small spectral radius in comparison with the other schemes considered.
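    As a concrete baseline for the schemes compared in this note, plain (unpreconditioned) Gauss-Seidel can be sketched in a few lines; the matrix below is a made-up diagonally dominant example, not a boundary element system.

```python
import numpy as np

# Plain Gauss-Seidel sketch (generic illustration; not the GSOR/EGS schemes).
# With the splitting A = L* + U (L* lower triangular incl. the diagonal),
# the iteration matrix is T = -L*^{-1} U and the method converges iff the
# spectral radius of T is < 1, which strict diagonal dominance guarantees.
rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, (6, 6))
A += 8 * np.eye(6)                       # make A strictly diagonally dominant
b = rng.uniform(-1, 1, 6)

Lstar = np.tril(A)                       # lower triangle including diagonal
U = np.triu(A, 1)
T = -np.linalg.solve(Lstar, U)
rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1                           # convergence guaranteed here

x = np.zeros(6)
for _ in range(200):
    x = np.linalg.solve(Lstar, b - U @ x)
assert np.allclose(A @ x, b)
```

The preconditioners discussed in the note aim precisely at shrinking this spectral radius further.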

  18. Limits in the evolution of biological form: a theoretical morphologic perspective.

    PubMed

    McGhee, George R

    2015-12-06

    Limits in the evolution of biological form can be empirically demonstrated by using theoretical morphospace analyses, and actual analytic examples are given for univalved ammonoid shell form, bivalved brachiopod shell form and helical bryozoan colony form. Limits in the evolution of form in these animal groups can be shown to be due to functional and developmental constraints on possible evolutionary trajectories in morphospace. Future evolutionary-limit research is needed to analyse the possible existence of temporal constraint in the evolution of biological form on Earth, and in the search for the possible existence of functional alien life forms on Titan and Triton that are developmentally impossible for Earth life.

  19. Clarifying values: an updated review

    PubMed Central

    2013-01-01

    Background Consensus guidelines have recommended that decision aids include a process for helping patients clarify their values. We sought to examine the theoretical and empirical evidence related to the use of values clarification methods in patient decision aids. Methods Building on the International Patient Decision Aid Standards (IPDAS) Collaboration’s 2005 review of values clarification methods in decision aids, we convened a multi-disciplinary expert group to examine key definitions, decision-making process theories, and empirical evidence about the effects of values clarification methods in decision aids. To summarize the current state of theory and evidence about the role of values clarification methods in decision aids, we undertook a process of evidence review and summary. Results Values clarification methods (VCMs) are best defined as methods to help patients think about the desirability of options or attributes of options within a specific decision context, in order to identify which option he/she prefers. Several decision making process theories were identified that can inform the design of values clarification methods, but no single “best” practice for how such methods should be constructed was determined. Our evidence review found that existing VCMs were used for a variety of different decisions, rarely referenced underlying theory for their design, but generally were well described in regard to their development process. Listing the pros and cons of a decision was the most common method used. The 13 trials that compared decision support with or without VCMs reached mixed results: some found that VCMs improved some decision-making processes, while others found no effect. Conclusions Values clarification methods may improve decision-making processes and potentially more distal outcomes. 
However, the small number of evaluations of VCMs and, where evaluations exist, the heterogeneity in outcome measures makes it difficult to determine their overall effectiveness or the specific characteristics that increase effectiveness. PMID:24625261

  20. Radiative Transfer of Solar Light in Dense Complex Media : Theoretical and Experimental Achievements by the Planetary Community

    NASA Astrophysics Data System (ADS)

    Doute, S.; Schmitt, B.

    2004-05-01

    Visible and near infrared imaging spectroscopy is one of the key techniques to detect, map and characterize mineral and volatile species existing at the surface of the planets. Indeed the chemical composition, granularity, texture, physical state, etc., of the materials determine the existence and morphology of the absorption bands. However, the development of quantitative methods to analyze reflectance spectra requires mastering some very challenging physics: the reflection of solar light by densely packed, absorbent and highly scattering materials that usually present remarkable structural complexity at different spatial scales. Volume scattering of photons depends on many parameters, such as the intrinsic optical properties, shapes, sizes and packing density of the mineral or icy grains forming the natural media. Their discontinuous and stochastic nature plays a great role, especially for reflection and shading by the top few grains of the surface. Over several decades, the planetary community has developed increasingly sophisticated tools to handle this problem of radiative transfer in dense complex media in order to fulfill its needs. Analytical functions with a small number of non-physical adjustable parameters were first proposed to reproduce the photometry of the planets and satellites. Then reflectance models were built by implementing methods of radiative transfer in continuously absorbing and scattering media. These methods rest on a number of very restrictive hypotheses, e.g. low particle density and scattering treated in the far-field approximation. Most of these assumptions do not hold when treating planetary regoliths or volatile deposits. In addition, the classical methods completely bypass effects due to the constructive interference of scattered waves at backscattering or specular geometries (e.g. the opposition effect). Different, sometimes competing, approaches have been proposed to overcome some of these limitations. 
In particular, Monte Carlo ray-tracing simulations have recently been carried out to investigate properties of particulate media that are traditionally ignored or crudely treated: packing density, micro-roughness, etc. The efforts of the community to address the latter problems are not only theoretical but also experimental, with the development of several dedicated goniometers.

  1. Evolutionary conceptual analysis: faith community nursing.

    PubMed

    Ziebarth, Deborah

    2014-12-01

    The aim of the study was to report an evolutionary concept analysis of faith community nursing (FCN). FCN is a source of healthcare delivery in the USA which has grown in comprehensiveness and complexity. With increasing healthcare cost and a focus on access and prevention, FCN has extended beyond the physical walls of the faith community building. Faith communities and healthcare organizations invest in FCN and standardized training programs exist. Using Rodgers' evolutionary analysis, the literature was examined for antecedents, attributes, and consequences of the concept. This design allows for understanding the historical and social nature of the concept and how it changes over time. A search of databases using the keywords FCN, faith community nurse, parish nursing, and parish nurse was done. The concept of FCN was explored using research and theoretical literature. A theoretical definition and model were developed with relevant implications. The search results netted a sample of 124 reports of research and theoretical articles from multiple disciplines: medicine, education, religion and philosophy, international health, and nursing. Theoretical definition: FCN is a method of healthcare delivery that is centered in a relationship between the nurse and client (client as person, family, group, or community). The relationship occurs in an iterative motion over time when the client seeks or is targeted for wholistic health care with the goal of optimal wholistic health functioning. Faith integrating is a continuous occurring attribute. Health promoting, disease managing, coordinating, empowering and accessing health care are other essential attributes. All essential attributes occur with intentionality in a faith community, home, health institution and other community settings with fluidity as part of a community, national, or global health initiative. 
A new theoretical definition and corresponding conceptual model of FCN provides a basis for future nursing knowledge and model-based applications for evidence-based practice and research.

  2. Conceptualizing and Measuring Working Memory and its Relationship to Aphasia

    PubMed Central

    Wright, Heather Harris; Fergadiotis, Gerasimos

    2011-01-01

    Background General agreement exists in the literature that individuals with aphasia can exhibit a working memory deficit that contributes to their language processing impairments. Though conceptualized within different working memory frameworks, researchers have suggested that individuals with aphasia have limited working memory capacity, impaired attention-control processes as well as impaired inhibitory mechanisms. However, across studies investigating working memory ability in individuals with aphasia, different measures have been used to quantify their working memory ability and identify the relationship between working memory and language performance. Aims The primary objectives of this article are to (1) review current working memory theoretical frameworks, (2) review tasks used to measure working memory, and (3) discuss findings from studies that have investigated working memory as they relate to language processing in aphasia. Main Contribution Though findings have been consistent across studies investigating working memory ability in individuals with aphasia, discussion of how working memory is conceptualized and defined is often missing, as is discussion of results within a theoretical framework. This is critical, as working memory is conceptualized differently across the different theoretical frameworks. They differ in explaining what limits capacity and the source of individual differences as well as how information is encoded, maintained, and retrieved. When test methods are considered within a theoretical framework, specific hypotheses can be tested and stronger conclusions that are less susceptible to different interpretations can be made. Conclusions Working memory ability has been investigated in numerous studies with individuals with aphasia. 
To better understand the underlying cognitive constructs that contribute to the language deficits exhibited by individuals with aphasia, future investigations should operationally define the cognitive constructs of interest and discuss findings within theoretical frameworks. PMID:22639480

  3. Interaction of eta mesons with nuclei.

    PubMed

    Kelkar, N G; Khemchandani, K P; Upadhyay, N J; Jain, B K

    2013-06-01

    Back in the mid-1980s, a new branch of investigation related to the interaction of eta mesons with nuclei came into existence. It started with the theoretical prediction of possible exotic states of eta mesons and nuclei bound by the strong interaction and later developed into an extensive experimental program to search for such unstable states as well as understand the underlying interaction via eta-meson producing reactions. The vast literature of experimental as well as theoretical works that studied various aspects of eta-producing reactions such as the π(+)n → ηp, pd → (3)Heη, p (6)Li → (7)Be η and γ (3)He → η X, to name a few, had but one objective in mind: to understand the eta-nucleon (ηN) and hence the η-nucleus interaction which could explain the production data and confirm the existence of some η-mesic nuclei. In spite of these efforts, there remain uncertainties in the knowledge of the ηN and hence the η-nucleus interaction. Therefore, this review is an attempt to bind together the findings in these works and draw some global and specific conclusions which can be useful for future explorations. The ηN scattering length (which represents the strength of the η-nucleon interaction) using different theoretical models and analyzing the data on η production in pion, photon and proton induced reactions was found to be spread out in a wide range, namely, 0.18 ≤ Re aηN ≤ 1.03 fm and 0.16 ≤ Im aηN ≤ 0.49 fm. Theoretical searches of heavy η-mesic nuclei based on η-nucleus optical potentials and lighter ones based on Faddeev type few-body approaches predict the existence of several quasibound and resonant states. Although some hints of η-mesic states such as η-mesic (3)He and (25)Mg do exist from previous experiments, the promise of clearer signals for the existence of η-mesic nuclei lies in the experiments to be performed at the J-PARC, MAMI and COSY facilities in the near future. This review is aimed at giving an overall status of these efforts.

  4. Scaling of spectra in grid turbulence with a mean cross-stream temperature gradient

    NASA Astrophysics Data System (ADS)

    Bahri, Carla; Arwatz, Gilad; Mueller, Michael E.; George, William K.; Hultmark, Marcus

    2014-11-01

    Scaling of grid turbulence with a constant mean cross-stream temperature gradient is investigated using a combination of theoretical predictions, DNS, and experimental data. Conditions for self-similarity of the governing equations and the scalar spectrum are investigated, which reveals necessary conditions for self-similarity to exist. These conditions provide a theoretical framework for scaling of the temperature spectrum as well as the temperature flux spectrum. One necessary condition, predicted by the theory, is that the characteristic length scale describing the scalar spectrum must vary as √t for a self-similar solution to exist. In order to investigate this, T-NSTAP sensors, specially designed for temperature measurements at high frequencies, were deployed in a heated passive-grid turbulence setup together with conventional cold wires, and complementary DNS calculations were performed to complete the experimental data. These data are used to compare the behavior of different length scales and validate the theoretical predictions.
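    The √t criterion lends itself to a simple numerical sanity check (synthetic data; the paper infers the scaling from theory and measurement, not from a fit like this one): a power-law length scale shows up as a slope of 1/2 on log-log axes.

```python
import numpy as np

# Synthetic check: a length scale l(t) = A*sqrt(t) must give slope 1/2
# in a log-log fit of l against t.
t = np.linspace(1.0, 100.0, 200)
l = 0.7 * np.sqrt(t)                     # A = 0.7 is an arbitrary amplitude
slope, _ = np.polyfit(np.log(t), np.log(l), 1)
assert abs(slope - 0.5) < 1e-10
```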

  5. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  6. Case Mis-Conceptualization in Psychological Treatment: An Enduring Clinical Problem.

    PubMed

    Ridley, Charles R; Jeffrey, Christina E; Roberson, Richard B

    2017-04-01

    Case conceptualization, an integral component of mental health treatment, aims to facilitate therapeutic gains by formulating a clear picture of a client's psychological presentation. However, despite numerous attempts to improve this clinical activity, it remains unclear how well existing methods achieve their purported purpose. Case formulation is inconsistently defined in the literature and implemented in practice, with many methods varying in complexity, theoretical grounding, and empirical support. In addition, many of the methods demand a precise clinical acumen that is easily influenced by judgmental and inferential errors. These errors occur regardless of clinicians' level of training or amount of clinical experience. Overall, the lack of a consensus definition, a diversity of methods, and susceptibility of clinicians to errors are manifestations of the state of crisis in case conceptualization. This article, the 2nd in a series of 5 on thematic mapping, argues the need for more reliable and valid models of case conceptualization. © 2017 Wiley Periodicals, Inc.

  7. Rolling Element Bearing Stiffness Matrix Determination (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Y.; Parker, R.

    2014-01-01

    Current theoretical bearing models differ in their stiffness estimates because of different model assumptions. In this study, a finite element/contact mechanics model is developed for rolling element bearings with the focus of obtaining accurate bearing stiffness for a wide range of bearing types and parameters. A combined surface integral and finite element method is used to solve for the contact mechanics between the rolling elements and races. This model captures the time-dependent characteristics of the bearing contact due to the orbital motion of the rolling elements. A numerical method is developed to determine the full bearing stiffness matrix corresponding to two radial, one axial, and two angular coordinates; the rotation about the shaft axis is free by design. This proposed stiffness determination method is validated against experiments in the literature and compared to existing analytical models and widely used advanced computational methods. The fully-populated stiffness matrix demonstrates the coupling between bearing radial, axial, and tilting bearing deflections.

  8. A direct method of extracting surface recombination velocity from an electron beam induced current line scan

    NASA Astrophysics Data System (ADS)

    Ong, Vincent K. S.

    1998-04-01

    The extraction of diffusion length and surface recombination velocity in a semiconductor with the use of an electron beam induced current line scan has traditionally been done by fitting the line scan to complicated theoretical equations. It was recently shown that a much simpler equation is sufficient for the extraction of diffusion length. The linearization coefficient is the only variable that needs to be adjusted in the curve fitting process. However, complicated equations are still necessary for the extraction of surface recombination velocity. It is shown in this article that it is indeed possible to extract surface recombination velocity with a simple equation, using only one variable, the linearization coefficient. An intuitive explanation of the reasoning behind the method is given. The accuracy of the method was verified with the use of three-dimensional computer simulation, and was found to be even slightly better than that of the best existing method.

  9. Calibrated Multivariate Regression with Application to Neural Semantic Basis Discovery.

    PubMed

    Liu, Han; Wang, Lie; Zhao, Tuo

    2015-08-01

    We propose a calibrated multivariate regression method named CMR for fitting high dimensional multivariate regression models. Compared with existing methods, CMR calibrates regularization for each regression task with respect to its noise level so that it simultaneously attains improved finite-sample performance and tuning insensitivity. Theoretically, we provide sufficient conditions under which CMR achieves the optimal rate of convergence in parameter estimation. Computationally, we propose an efficient smoothed proximal gradient algorithm with a worst-case numerical rate of convergence O(1/ε), where ε is a pre-specified accuracy of the objective function value. We conduct thorough numerical simulations to illustrate that CMR consistently outperforms other high dimensional multivariate regression methods. We also apply CMR to solve a brain activity prediction problem and find that it is as competitive as a handcrafted model created by human experts. The R package camel implementing the proposed method is available on the Comprehensive R Archive Network http://cran.r-project.org/web/packages/camel/.
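    The abstract alone does not specify CMR's smoothed proximal gradient algorithm in detail; as a rough, uncalibrated stand-in for the underlying idea, plain proximal gradient descent on a row-sparse (group lasso) multitask regression objective can be sketched as follows. The function names, step size, and the omission of CMR's per-task calibration and smoothing are all illustrative assumptions, not the paper's method:

    ```python
    import numpy as np

    def prox_group_rows(B, t):
        """Row-wise group soft-threshold: prox of t * sum_j ||row_j(B)||_2."""
        norms = np.linalg.norm(B, axis=1, keepdims=True)
        scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
        return B * scale

    def multitask_prox_grad(X, Y, lam, n_iter=500):
        """Proximal gradient (ISTA) for the simplified objective
        min_B 0.5*||Y - X B||_F^2 + lam * sum_j ||row_j(B)||_2."""
        d = X.shape[1]
        B = np.zeros((d, Y.shape[1]))
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L with L = ||X||_2^2
        for _ in range(n_iter):
            grad = X.T @ (X @ B - Y)
            B = prox_group_rows(B - step * grad, step * lam)
        return B
    ```

    With a row-sparse ground truth, rows of the estimate corresponding to irrelevant predictors are driven exactly to zero, which is the joint variable-selection effect the multitask penalty is designed to produce.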

  10. A recursive method for calculating the total number of spanning trees and its applications in self-similar small-world scale-free network models

    NASA Astrophysics Data System (ADS)

    Ma, Fei; Su, Jing; Yao, Bing

    2018-05-01

    The problem of determining and calculating the number of spanning trees of any finite graph (model) is a great challenge and has been studied in various fields, such as discrete applied mathematics, theoretical computer science, physics, and chemistry. In this paper, motivated by the many real-life systems and artificial networks that are built from combinations of simpler, smaller elements (components), we first discuss some useful network operations, including link operations and merge operations, for designing more realistic and complicated network models. Secondly, we present a method for computing the total number of spanning trees. As an accessible example, we apply this method to spaces of trees and cycles, respectively, and our results suggest that it is indeed well suited to such models. To reflect wider practical applications and potential theoretical significance, we study the enumeration method in some existing scale-free network models. In addition, we set up a class of new models displaying the scale-free feature, that is, following a power-law degree distribution P(k) ∝ k^(-γ), where γ is the degree exponent. Based on detailed calculation, the degree exponent γ of our deterministic scale-free models satisfies γ > 3. In the rest of our discussion, we not only calculate analytically the average path length, which indicates that our models have the small-world property prevalent in many complex systems, but also derive the number of spanning trees by means of the recursive method described in this paper, which shows that our method is convenient for studying these models.
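    The paper's model-specific recursive scheme is not reproducible from the abstract, but spanning-tree counts for any small graph can be computed directly via Kirchhoff's matrix-tree theorem, which such a recursion can be checked against. A minimal sketch (the function name and example graph are illustrative):

    ```python
    import numpy as np

    def spanning_tree_count(adj):
        """Count spanning trees of a connected graph via Kirchhoff's
        matrix-tree theorem: the count equals any cofactor of the
        graph Laplacian L = D - A."""
        A = np.asarray(adj, dtype=float)
        L = np.diag(A.sum(axis=1)) - A
        minor = L[1:, 1:]          # delete row 0 and column 0
        return int(round(np.linalg.det(minor)))

    # Cycle C4 has 4 spanning trees (remove any one of its 4 edges).
    C4 = [[0, 1, 0, 1],
          [1, 0, 1, 0],
          [0, 1, 0, 1],
          [1, 0, 1, 0]]
    print(spanning_tree_count(C4))  # -> 4
    ```

    The determinant-based count grows only polynomially in cost with graph size, but for the self-similar models above a recursion across construction generations avoids building the full Laplacian at each step.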

  11. What we know about the purpose, theoretical foundation, scope and dimensionality of existing self-management measurement tools: A scoping review.

    PubMed

    Packer, Tanya L; Fracini, America; Audulv, Åsa; Alizadeh, Neda; van Gaal, Betsie G I; Warner, Grace; Kephart, George

    2018-04-01

    To identify self-report, self-management measures for adults with chronic conditions, and describe their purpose, theoretical foundation, dimensionality (multi versus uni), and scope (generic versus condition specific). A search of four databases (8479 articles) resulted in a scoping review of 28 self-management measures. Although authors identified tools as measures of self-management, wide variation in constructs measured, purpose, and theoretical foundations existed. Subscales on 13 multidimensional tools collectively measure domains of self-management relevant to clients, however no one tool's subscales cover all domains. Viewing self-management as a complex, multidimensional whole, demonstrated that existing measures assess different, related aspects of self-management. Activities and social roles, though important to patients, are rarely measured. Measures with capacity to quantify and distinguish aspects of self-management may promote tailored patient care. In selecting tools for research or assessment, the reason for development, definitions, and theories underpinning the measure should be scrutinized. Our ability to measure self-management must be rigorously mapped to provide comprehensive and system-wide care for clients with chronic conditions. Viewing self-management as a complex whole will help practitioners to understand the patient perspective and their contribution in supporting each individual patient. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. Gender Inequality in British and German Universities

    ERIC Educational Resources Information Center

    Pritchard, Rosalind

    2007-01-01

    Gender inequality exists within higher education in the UK and Germany. In the UK only 15.3% of professors in pre- and post-1992 universities were women (2003), whilst in Germany only 8.6% attained the highest grade of professorship (2003). The research uses existing data sets combined with theoretical constructs to investigate the reasons for…

  13. Peer Commentaries on Roeper's "Universal Bilingualism."

    ERIC Educational Resources Information Center

    Ayoun, Dalila; Haider, Hubert; Hawkins, Roger; Hulk, Aafke; Meechan, Marjory; O'Neil, Wayne; Yang, Charles D.

    1999-01-01

    Seven peer commentaries are included in response to an article on the notion that a narrow kind of bilingualism exists within every language and is present whenever two properties exist in a language that are not statable within a single grammar. This theoretical bilingualism is defined in terms of the minimalist theory of syntax presented by…

  14. The Great Fallacy of the H Plus Ion and the True Nature of H30 Plus.

    ERIC Educational Resources Information Center

    Giguere, Paul A.

    1979-01-01

    Experimental and theoretical data are presented which verify the existence of the hydronium ion. This existence was confirmed directly by x-ray and neutron diffraction in hydrochloric acid. Recommended is the abandonment of the erroneous hydrogen ion formulation and of names such as proton hydrate. (BT)

  15. An Analysis of the Community College Concept in the Socialist Republic of Viet Nam

    ERIC Educational Resources Information Center

    Epperson, Cynthia K.

    2010-01-01

    The purpose of this study was to discover if core characteristics exist forming a Vietnamese community college model and to determine if the characteristics would explain the model. This study utilized three theoretical orientations while reviewing the existing literature, formulating the research questions, examining the data and drawing…

  16. Discussion of production logging as an integral part of horizontal-well transient-pressure test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Babu, D.K.; Odeh, A.S.

    1994-09-01

    Ahmed and Badry discussed the identification of flow regimes for a horizontal well. The well produces from an infinitely extending slab-like reservoir of finite thickness. The system has top and bottom boundaries. Reference 1 indicates the possible existence of two early radial-flow periods and illustrates them in figures. Kuchuk et al. and Daviau give the theoretical basis for the existence of such flow regimes. The flow is essentially 2D and in vertical planes. The authors agree that a second early radial-flow period could exist from a strictly theoretical viewpoint. However, certain important physical constraints, which were not explicitly mentioned in the above works, must be met before it can occur and for a reliable and valid analysis of the pressure data. The authors will show that the second early radial-flow regime could exist only if the well were extremely close to a no-flow boundary, and they quantify extremely close. Hence, an engineer must use extreme caution in conducting pressure analysis on the basis of a second early radial-flow regime.

  17. Limitations of poster presentations reporting educational innovations at a major international medical education conference.

    PubMed

    Gordon, Morris; Darbyshire, Daniel; Saifuddin, Aamir; Vimalesvaran, Kavitha

    2013-02-19

    In most areas of medical research, the label of 'quality' is associated with well-accepted standards. Whilst its interpretation in the field of medical education is contentious, there is agreement on the key elements required when reporting novel teaching strategies. We set out to assess if these features had been fulfilled by poster presentations at a major international medical education conference. Such posters were analysed in four key areas: reporting of theoretical underpinning, explanation of instructional design methods, descriptions of the resources needed for introduction, and the offering of materials to support dissemination. Three hundred and twelve posters were reviewed with 170 suitable for analysis. Forty-one percent described their methods of instruction or innovation design. Thirty-three percent gave details of equipment, and 29% of studies described resources that may be required for delivering such an intervention. Further resources to support dissemination of their innovation were offered by 36%. Twenty-three percent described the theoretical underpinning or conceptual frameworks upon which their work was based. These findings suggest that posters presenting educational innovation are currently limited in what they offer to educators. Presenters should seek to enhance their reporting of these crucial aspects by employing existing published guidance, and organising committees may wish to consider explicitly requesting such information at the time of initial submission.

  18. Induction of therapeutic hypothermia by pharmacological modulation of temperature-sensitive TRP channels: theoretical framework and practical considerations.

    PubMed

    Feketa, Viktor V; Marrelli, Sean P

    2015-01-01

    Therapeutic hypothermia has emerged as a remarkably effective method of neuroprotection from ischemia and is being increasingly used in clinics. Accordingly, it is also a subject of considerable attention from a basic scientific research perspective. One of the fundamental problems, with which current studies are concerned, is the optimal method of inducing hypothermia. This review seeks to provide a broad theoretical framework for approaching this problem, and to discuss how a novel promising strategy of pharmacological modulation of the thermosensitive ion channels fits into this framework. Various physical, anatomical, physiological and molecular aspects of thermoregulation, which provide the foundation for this text, have been comprehensively reviewed and will not be discussed exhaustively here. Instead, the first part of the current review, which may be helpful for a broader readership outside of thermoregulation research, will build on this existing knowledge to outline possible opportunities and research directions aimed at controlling body temperature. The second part, aimed at a more specialist audience, will highlight the conceptual advantages and practical limitations of novel molecular agents targeting thermosensitive Transient Receptor Potential (TRP) channels in achieving this goal. Two particularly promising members of this channel family, namely TRP melastatin 8 (TRPM8) and TRP vanilloid 1 (TRPV1), will be discussed in greater detail.

  19. A hierarchy of effective teaching and learning to acquire competence in evidenced-based medicine

    PubMed Central

    Khan, Khalid S; Coomarasamy, Arri

    2006-01-01

    Background A variety of methods exists for teaching and learning evidence-based medicine (EBM). However, there is much debate about the effectiveness of various EBM teaching and learning activities, resulting in a lack of consensus as to what methods constitute the best educational practice. There is a need for a clear hierarchy of educational activities to effectively impart and acquire competence in EBM skills. This paper develops such a hierarchy based on current empirical and theoretical evidence. Discussion EBM requires that health care decisions be based on the best available valid and relevant evidence. To achieve this, teachers delivering EBM curricula need to inculcate amongst learners the skills to gain, assess, apply, integrate and communicate new knowledge in clinical decision-making. Empirical and theoretical evidence suggests that there is a hierarchy of teaching and learning activities in terms of their educational effectiveness: Level 1, interactive and clinically integrated activities; Level 2(a), interactive but classroom based activities; Level 2(b), didactic but clinically integrated activities; and Level 3, didactic, classroom or standalone teaching. Summary All health care professionals need to understand and implement the principles of EBM to improve care of their patients. Interactive and clinically integrated teaching and learning activities provide the basis for the best educational practice in this field. PMID:17173690

  20. Limitations of poster presentations reporting educational innovations at a major international medical education conference.

    PubMed

    Gordon, Morris; Darbyshire, Daniel; Saifuddin, Aamir; Vimalesvaran, Kavitha

    2013-01-01

    In most areas of medical research, the label of 'quality' is associated with well-accepted standards. Whilst its interpretation in the field of medical education is contentious, there is agreement on the key elements required when reporting novel teaching strategies. We set out to assess if these features had been fulfilled by poster presentations at a major international medical education conference. Such posters were analysed in four key areas: reporting of theoretical underpinning, explanation of instructional design methods, descriptions of the resources needed for introduction, and the offering of materials to support dissemination. Three hundred and twelve posters were reviewed with 170 suitable for analysis. Forty-one percent described their methods of instruction or innovation design. Thirty-three percent gave details of equipment, and 29% of studies described resources that may be required for delivering such an intervention. Further resources to support dissemination of their innovation were offered by 36%. Twenty-three percent described the theoretical underpinning or conceptual frameworks upon which their work was based. These findings suggest that posters presenting educational innovation are currently limited in what they offer to educators. Presenters should seek to enhance their reporting of these crucial aspects by employing existing published guidance, and organising committees may wish to consider explicitly requesting such information at the time of initial submission.

  1. Dynamical mean-field theoretical approach to explore the temperature-dependent magnetization in Ta-doped TiO2

    NASA Astrophysics Data System (ADS)

    Majidi, M. A.; Umar, A. S.; Rusydi, A.

    2017-04-01

    TiO2 has, in recent years, become a hot subject as it holds promise for spintronic applications. A recent experimental study on anatase Ti1-x Ta x O2 (x ~ 0.05) thin films shows that the system changes from non-magnetic to ferromagnetic due to Ti vacancies that are formed when a small percentage of Ti atoms are substituted by Ta. Motivated by those results, which reveal the ferromagnetic phase at room temperature, we conduct a theoretical study on the temperature-dependent magnetization and the Curie temperature of that system. We hypothesize that when several Ti vacancies are formed in the system, each of them induces a local magnetic moment, and such moments then couple to each other through the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction, forming a ferromagnetic order. To study the temperature dependence of the magnetization and predict the Curie temperature, we construct a tight-binding based Hamiltonian for this system and use the method of dynamical mean-field theory to perform calculations for various temperatures. Our work is still preliminary. The model and method may need further improvement to be consistent with known existing facts. We present our preliminary results to show how the present model works.

  2. Strongly Interacting Matter at Very High Energy Density

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLerran, L.

    2011-06-05

    The authors discuss the study of matter at very high energy density. In particular: what are the scientific questions; what are the opportunities to make significant progress in the study of such matter; and what facilities are now or might be available in the future to answer the scientific questions? The theoretical and experimental study of new forms of high energy density matter is still very much a 'wild west' field. There is much freedom for developing new concepts which can have order-one effects on the way we think about such matter. It is also a largely 'lawless' field, in that concepts and methods are being developed as new information is generated. There is also great possibility for new experimental discovery. Most of the exciting results from RHIC experiments were unanticipated. The methods used for studying various effects like flow, jet quenching, the ridge, two-particle correlations etc. were developed as experiments evolved. I believe this will continue to be the case at LHC and as we use existing and proposed accelerators to turn theoretical conjecture into tangible reality. At some point this will no doubt evolve into a precision science, and that will make the field more respectable, but for my taste, the 'wild west' times are the most fun.

  3. Semi-analytical and Numerical Studies on the Flattened Brazilian Splitting Test Used for Measuring the Indirect Tensile Strength of Rocks

    NASA Astrophysics Data System (ADS)

    Huang, Y. G.; Wang, L. G.; Lu, Y. L.; Chen, J. R.; Zhang, J. H.

    2015-09-01

    Based on two-dimensional elasticity theory, this study established a mechanical model under chordally opposing distributed compressive loads, in order to perfect the theoretical foundation of the flattened Brazilian splitting test used for measuring the indirect tensile strength of rocks. The stress superposition method was used to obtain approximate analytic solutions for the stress components inside the flattened Brazilian disk. These analytic solutions were then verified through a comparison with the numerical results of the finite element method (FEM). Based on the theoretical derivation, this research carried out a contrastive study of the effect of the flattened loading angle on the stress value and stress concentration degree inside the disk. The results showed that the stress concentration degree near the loading point and the ratio of compressive to tensile stress inside the disk decreased dramatically as the flattened loading angle increased, avoiding crushing failure near the loading point of Brazilian disk specimens. However, the tensile stress value and the tensile region were only slightly reduced with the increase of the flattened loading angle. Furthermore, this study found that the optimal flattened loading angle was 20°-30°; flattened loading angles that were too large or too small made it difficult to guarantee the central tensile splitting failure principle of the Brazilian splitting test. According to the Griffith strength failure criterion, the formula for the indirect tensile strength of rocks was derived theoretically. This study obtained a theoretical indirect tensile strength that closely coincided with existing experimental results. Finally, this paper simulated the fracture evolution process of rocks under different loading angles using the finite element software ANSYS. The modeling results showed that the flattened Brazilian splitting test using the optimal loading angle could guarantee tensile splitting failure initiated by a central crack.

  4. Acceptability of healthcare interventions: an overview of reviews and development of a theoretical framework.

    PubMed

    Sekhon, Mandeep; Cartwright, Martin; Francis, Jill J

    2017-01-26

    It is increasingly acknowledged that 'acceptability' should be considered when designing, evaluating and implementing healthcare interventions. However, the published literature offers little guidance on how to define or assess acceptability. The purpose of this study was to develop a multi-construct theoretical framework of acceptability of healthcare interventions that can be applied to assess prospective (i.e. anticipated) and retrospective (i.e. experienced) acceptability from the perspective of intervention deliverers and recipients. Two methods were used to select the component constructs of acceptability. 1) An overview of reviews was conducted to identify systematic reviews that claim to define, theorise or measure acceptability of healthcare interventions. 2) Principles of inductive and deductive reasoning were applied to theorise the concept of acceptability and develop a theoretical framework. Steps included (1) defining acceptability; (2) describing its properties and scope and (3) identifying component constructs and empirical indicators. Of the 43 reviews included in the overview, none explicitly theorised or defined acceptability. Measures used to assess acceptability focused on behaviour (e.g. dropout rates) (23 reviews), affect (i.e. feelings) (5 reviews), cognition (i.e. perceptions) (7 reviews) or a combination of these (8 reviews). From the methods described above we propose a definition: Acceptability is a multi-faceted construct that reflects the extent to which people delivering or receiving a healthcare intervention consider it to be appropriate, based on anticipated or experienced cognitive and emotional responses to the intervention. The theoretical framework of acceptability (TFA) consists of seven component constructs: affective attitude, burden, perceived effectiveness, ethicality, intervention coherence, opportunity costs, and self-efficacy.
Despite frequent claims that healthcare interventions have assessed acceptability, it is evident that acceptability research could be more robust. The proposed definition of acceptability and the TFA can inform assessment tools and evaluations of the acceptability of new or existing interventions.

  5. The Influence of Theoretical Mathematical Foundations on Teaching and Learning: A Case Study of Whole Numbers in Elementary School

    ERIC Educational Resources Information Center

    Chambris, Christine

    2018-01-01

    This paper examines the existence and impact of theoretical mathematical foundations on the teaching and learning of whole numbers in elementary school in France. It shows that the study of the New Math reform--which was eventually itself replaced in the longer term--provides some keys to understanding the influence of mathematical theories on…

  6. Lazy collaborative filtering for data sets with missing values.

    PubMed

    Ren, Yongli; Li, Gang; Zhang, Jun; Zhou, Wanlei

    2013-12-01

    As one of the biggest challenges in research on recommender systems, the data sparsity issue is mainly caused by the fact that users tend to rate a small proportion of items from the huge number of available items. This issue becomes even more problematic for the neighborhood-based collaborative filtering (CF) methods, as there are even lower numbers of ratings available in the neighborhood of the query item. In this paper, we aim to address the data sparsity issue in the context of neighborhood-based CF. For a given query (user, item), a set of key ratings is first identified by taking the historical information of both the user and the item into account. Then, an auto-adaptive imputation (AutAI) method is proposed to impute the missing values in the set of key ratings. We present a theoretical analysis to show that the proposed imputation method effectively improves the performance of the conventional neighborhood-based CF methods. The experimental results show that our new method of CF with AutAI outperforms six existing recommendation methods in terms of accuracy.
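    The paper's auto-adaptive imputation (AutAI) is not specified in the abstract; a minimal illustration of the general idea (neighborhood-based CF made workable on sparse data by imputing missing ratings before computing similarities) can use naive item-mean imputation as a stand-in. The function name, the cosine-similarity choice, and the imputation rule are all illustrative assumptions:

    ```python
    import numpy as np

    def item_cf_predict(R, user, item, k=2):
        """Item-based CF on a ratings matrix with missing values (np.nan).
        Missing entries are filled with each item's mean rating -- a naive
        stand-in for the paper's auto-adaptive imputation (AutAI)."""
        R = np.asarray(R, dtype=float)
        col_means = np.nanmean(R, axis=0)
        filled = np.where(np.isnan(R), col_means, R)
        # Cosine similarity between the query item and every other item.
        q = filled[:, item]
        sims = filled.T @ q / (np.linalg.norm(filled, axis=0)
                               * np.linalg.norm(q) + 1e-12)
        sims[item] = -np.inf                 # exclude the item itself
        neighbours = np.argsort(sims)[-k:]   # top-k most similar items
        w = sims[neighbours]
        # Similarity-weighted average of the user's (filled) ratings.
        return float(w @ filled[user, neighbours] / w.sum())
    ```

    The point the paper makes is that the quality of this imputation step dominates accuracy in sparse regimes, which is why a single global or per-item fill value is a weak baseline compared with an adaptive one.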

  7. Wavefront reconstruction method based on wavelet fractal interpolation for coherent free space optical communication

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng

    2018-03-01

    Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in low homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed after a self-similarity analysis of wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is performed for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is performed to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.

  8. An Analysis of Periodic Components in BL Lac Object S5 0716 +714 with MUSIC Method

    NASA Astrophysics Data System (ADS)

    Tang, J.

    2012-01-01

    Multiple signal classification (MUSIC) algorithms are introduced for estimating the period of variation of BL Lac objects. The principle of the MUSIC spectral analysis method and a theoretical analysis of the frequency-spectrum resolution using analog signals are included. From the literature, we have collected extensive effective observation data of the BL Lac object S5 0716+714 in the V, R, and I bands from 1994 to 2008. The light variation periods of S5 0716+714 are obtained by means of the MUSIC spectral analysis method and the periodogram spectral analysis method. There exist two major periods: (3.33±0.08) years and (1.24±0.01) years for all bands. The period estimate from the MUSIC spectral analysis method is compared with that from the periodogram spectral analysis method. MUSIC is a super-resolution algorithm that works with small data lengths and could be used to detect the period of variation of weak signals.
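    A generic MUSIC pseudospectrum for a uniformly sampled time series can be sketched as below; this is the textbook algorithm, not the paper's implementation, and the correlation-matrix order heuristic (m = 4p) and demo parameters are illustrative assumptions. A real sinusoid contributes two complex exponentials, hence p = 2 in the demo:

    ```python
    import numpy as np

    def music_spectrum(x, p, freqs):
        """MUSIC pseudospectrum of a 1-D signal.
        x     : uniformly spaced samples
        p     : assumed number of complex sinusoids (signal-subspace dim)
        freqs : normalised frequencies (cycles/sample) to evaluate
        """
        m = 4 * p                               # correlation-matrix order (heuristic)
        N = len(x)
        # Sample correlation matrix from overlapping length-m snapshots.
        snaps = np.array([x[i:i + m] for i in range(N - m + 1)])
        R = snaps.conj().T @ snaps / snaps.shape[0]
        w, V = np.linalg.eigh(R)                # eigenvalues in ascending order
        En = V[:, : m - p]                      # noise subspace
        spec = []
        for f in freqs:
            a = np.exp(2j * np.pi * f * np.arange(m))   # steering vector
            spec.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(spec)

    # Demo: a lightly noisy cosine at 0.1 cycles/sample.
    rng = np.random.default_rng(1)
    n = np.arange(200)
    x = np.cos(2 * np.pi * 0.1 * n) + 0.01 * rng.normal(size=n.size)
    grid = np.linspace(0.02, 0.48, 461)
    f_hat = grid[np.argmax(music_spectrum(x, p=2, freqs=grid))]
    ```

    The pseudospectrum peaks where the steering vector is nearly orthogonal to the noise subspace, which is what gives MUSIC its super-resolution behaviour on short records relative to the periodogram.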

  9. Which came first, people or pollution? A review of theory and evidence from longitudinal environmental justice studies

    NASA Astrophysics Data System (ADS)

    Mohai, Paul; Saha, Robin

    2015-12-01

    A considerable number of quantitative analyses have been conducted in the past several decades that demonstrate the existence of racial and socioeconomic disparities in the distribution of a wide variety of environmental hazards. The vast majority of these have been cross-sectional, snapshot studies employing data on hazardous facilities and population characteristics at only one point in time. Although some limited hypotheses can be tested with cross-sectional data, fully understanding how present-day disparities come about requires longitudinal analyses that examine the demographic characteristics of sites at the time of facility siting and track demographic changes after siting. Relatively few such studies exist and those that do exist have often led to confusing and contradictory findings. In this paper we review the theoretical arguments, methods, findings, and conclusions drawn from existing longitudinal environmental justice studies. Our goal is to make sense of this literature and to identify the direction future research should take in order to resolve confusion and arrive at a clearer understanding of the processes and contributory factors by which present-day racial and socioeconomic disparities in the distribution of environmental hazards have come about. Such understandings also serve as an important step in identifying appropriate and effective societal responses to ameliorate environmental disparities.

  10. Ionization potential for the 1s{sup 2}2s{sup 2} of berylliumlike systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, K.T.; Zhu, X.W.; Wang, Z.W.

    1993-05-01

    The 1s{sup 2}2s{sup 2} ground state energies of berylliumlike systems are calculated with a full-core plus correlation method. A partial saturation of basis functions method is used to extrapolate a better nonrelativistic energy. The 1s{sup 2}2s{sup 2} ionization potentials are calculated by including the relativistic corrections, mass polarization and QED effects. These results are compared with the existing theoretical and experimental data in the literature. The predicted BeI, CIII, NIV, and OV ionization potentials are within the quoted experimental error. Our result for FVI, 1267606.7 cm{sup -1}, supports the recent experiment of Engstrom, 1267606(2) cm{sup -1}, over the datum in the existing data tables. The predicted specific mass polarization contribution to the ionization potential for BeI, 0.00688 a.u., agrees with the 0.00674(100) a.u. from the experiment of Wen. Using the calculated results of Z=4-10, 15, and 20, we extrapolated the results for other Z systems up to Z=25, for which the ionization potentials are not explicitly computed.

  11. Observations of the Geometry of Horizon-Based Optical Navigation

    NASA Technical Reports Server (NTRS)

    Christian, John; Robinson, Shane

    2016-01-01

    NASA's Orion Project has sparked a renewed interest in horizon-based optical navigation (OPNAV) techniques for spacecraft in the Earth-Moon system. Some approaches have begun to explore the geometry of horizon-based OPNAV and exploit the fact that it is a conic section problem. Therefore, the present paper focuses more deeply on understanding and leveraging the various geometric interpretations of horizon-based OPNAV. These results provide valuable insight into the fundamental workings of OPNAV solution methods, their convergence properties, and associated estimate covariance. Most importantly, the geometry and transformations uncovered in this paper lead to a simple and non-iterative solution to the generic horizon-based OPNAV problem. This represents a significant theoretical advancement over existing methods. Thus, we find that a clear understanding of geometric relationships is central to the prudent design, use, and operation of horizon-based OPNAV techniques.

  12. Differential phase-shift keying and channel equalization in free space optical communication system

    NASA Astrophysics Data System (ADS)

    Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Wan, Xiongfeng; Xu, Chenlu

    2018-01-01

    We present the performance benefits of differential phase-shift keying (DPSK) modulation in eliminating the influence of atmospheric turbulence, especially for coherent free space optical (FSO) communication at a high communication rate. An analytic expression for the detected signal is derived, from which the homodyne detection efficiency is calculated to indicate the performance of wavefront compensation. Because laser pulses always suffer from atmospheric scattering by clouds, intersymbol interference (ISI) in a high-speed FSO communication link is analyzed. Correspondingly, a channel equalization method, a binormalized modified constant modulus algorithm based on set-membership filtering (SM-BNMCMA), is proposed to solve the ISI problem. Finally, through comparison with existing channel equalization methods, its benefits in both ISI elimination and convergence speed are verified. The research findings have theoretical significance for high-speed FSO communication systems.
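The constant-modulus family of blind equalizers that SM-BNMCMA extends can be illustrated with the classical CMA update; the following is a toy baseline sketch with a made-up two-tap channel and step size, not the authors' set-membership algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# BPSK symbols through a mildly dispersive channel (hypothetical taps)
symbols = rng.choice([-1.0, 1.0], size=5_000)
received = np.convolve(symbols, [1.0, 0.35], mode="full")[: len(symbols)]

taps, mu, R2 = 7, 0.005, 1.0          # R2 = E[|a|^4]/E[|a|^2] = 1 for BPSK
w = np.zeros(taps)
w[taps // 2] = 1.0                    # centre-spike initialisation

disp = []
for n in range(taps, len(received)):
    x = received[n - taps:n][::-1]    # regressor (most recent sample first)
    y = w @ x                         # equalizer output
    w -= mu * (y * y - R2) * y * x    # CMA stochastic-gradient update
    disp.append((y * y - R2) ** 2)    # dispersion (cost being minimised)

early, late = np.mean(disp[:500]), np.mean(disp[-500:])
print(f"dispersion: first 500 = {early:.4f}, last 500 = {late:.4f}")
```

The set-membership variant in the paper updates only when the output error exceeds a bound, which is what yields the faster convergence the authors report.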

  13. Reciprocal Allocation Method in Service Departments. The Case of a Production Enterprise

    NASA Astrophysics Data System (ADS)

    Papaj, Ewelina

    2017-12-01

    The main aim of this article is to indicate the role of the reciprocal allocation method in the process of cost calculation. In today's companies, which often take very complex organisational forms, service departments are of great importance. Yet in the management accounting processes that lead to identifying product cost, service departments' costs are often treated as being of minor importance. This article sets out to prove that service departments' costs and their reliable settlement are a desirable source of information about the products. The work consists of two parts. The first features theoretical considerations and a critical analysis of the subject literature. In the latter part, the calculation of service departments' costs based on reciprocal services is presented for a production enterprise in the chemical industry.

  14. Critical review and hydrologic application of threshold detection methods for the generalized Pareto (GP) distribution

    NASA Astrophysics Data System (ADS)

    Mamalakis, Antonios; Langousis, Andreas; Deidda, Roberto

    2016-04-01

    Estimation of extreme rainfall from data constitutes one of the most important issues in statistical hydrology, as it is associated with the design of hydraulic structures and flood water management. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a generalized Pareto (GP) distribution model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches, such as non-parametric methods intended to locate the changing point between extreme and non-extreme regions of the data, graphical methods where one studies the dependence of GP distribution parameters (or related metrics) on the threshold level u, and Goodness of Fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GP distribution model is applicable. In this work, we review representative methods for GP threshold detection, discuss fundamental differences in their theoretical bases, and apply them to 1714 daily rainfall records from the NOAA-NCDC open-access database, with more than 110 years of data. We find that non-parametric methods intended to locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while methods based on asymptotic properties of the upper distribution tail lead to unrealistically high threshold and shape parameter estimates. The latter is justified by theoretical arguments, and it is especially the case in rainfall applications, where the shape parameter of the GP distribution is low, i.e. on the order of 0.1 ÷ 0.2. Better performance is demonstrated by graphical methods and GoF metrics that rely on pre-asymptotic properties of the GP distribution.
    For daily rainfall, we find that GP threshold estimates range between 2÷12 mm/d with a mean value of 6.5 mm/d, while the existence of quantization in the empirical records, as well as variations in their size, constitute the two most important factors that may significantly affect the accuracy of the obtained results. Acknowledgments: The research project was implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and co-financed by the European Social Fund (ESF) and the Greek State. The work conducted by Roberto Deidda was funded under the Sardinian Regional Law 7/2007 (funding call 2013).
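The peaks-over-threshold step at the core of the methods reviewed above can be sketched with synthetic data; this is a minimal illustration of fitting a GP distribution to excesses over a chosen threshold, with made-up data and an arbitrary threshold, not the paper's workflow:

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

# Synthetic heavy-tailed "daily rainfall" drawn from a GP distribution,
# so excesses over any threshold are again GP with the same shape
data = genpareto.rvs(c=0.15, scale=8.0, size=20_000, random_state=rng)

u = 6.5  # illustrative threshold (mm/d)
excesses = data[data > u] - u

# Fit a GP model to the excesses, with location fixed at zero
shape, loc, scale = genpareto.fit(excesses, floc=0)
print(f"shape xi = {shape:.3f}, scale sigma = {scale:.3f}")
```

By the threshold-stability property of the GP distribution, the fitted shape should recover the generating value (0.15 here) regardless of the threshold, which is the diagnostic the graphical methods above exploit.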

  15. Characterizing the In-Phase Reflection Bandwidth Theoretical Limit of Artificial Magnetic Conductors With a Transmission Line Model

    NASA Technical Reports Server (NTRS)

    Xie, Yunsong; Fan, Xin; Chen, Yunpeng; Wilson, Jeffrey D.; Simons, Rainee N.; Xiao, John Q.

    2013-01-01

    We validate through simulation and experiment that artificial magnetic conductors (AMCs) can be well characterized by a transmission line model. The theoretical bandwidth limit of the in-phase reflection can be expressed in terms of the effective RLC parameters from the surface patch and the properties of the substrate. It is found that the existence of effective inductive components will reduce the in-phase reflection bandwidth of the AMC. Furthermore, we propose design strategies to optimize AMC structures with an in-phase reflection bandwidth closer to the theoretical limit.

  16. Crystal study and econometric model

    NASA Technical Reports Server (NTRS)

    1975-01-01

    An econometric model was developed that can be used to predict demand and supply figures for crystals over a time horizon roughly concurrent with that of NASA's Space Shuttle Program - that is, 1975 through 1990. The model includes an equation to predict the impact on investment in the crystal-growing industry. Actually, two models are presented. The first is a theoretical model which follows rather strictly the standard theoretical economic concepts involved in supply and demand analysis, and a modified version of the model was developed which, though not quite as theoretically sound, was testable utilizing existing data sources.

  17. An investigation of interface transferring mechanism of surface-bonded fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Wu, Rujun; Fu, Kunkun; Chen, Tian

    2017-08-01

    Surface-bonded fiber Bragg grating sensors have been widely used to measure strain in materials. The presence of the sensor, however, alters the strain distribution of the host material, which may reduce strain measurement accuracy. To improve the measurement accuracy, a theoretical model of strain transfer from the host material to the optical fiber was developed, incorporating the influence of the fiber Bragg grating sensor. Subsequently, theoretical predictions were validated by comparison with data from finite element analysis and an existing experiment [F. Ansari and Y. Libo, J. Eng. Mech. 124(4), 385-394 (1998)]. Finally, the effect of sensor parameters on the average strain transfer rate was discussed.

  18. Maximum likelihood estimation of protein kinetic parameters under weak assumptions from unfolding force spectroscopy experiments

    NASA Astrophysics Data System (ADS)

    Aioanei, Daniel; Samorì, Bruno; Brucale, Marco

    2009-12-01

    Single molecule force spectroscopy (SMFS) is extensively used to characterize the mechanical unfolding behavior of individual protein domains under applied force by pulling chimeric polyproteins consisting of identical tandem repeats. Constant velocity unfolding SMFS data can be employed to reconstruct the protein unfolding energy landscape and kinetics. The methods applied so far require the specification of a single stretching force increase function, either theoretically derived or experimentally inferred, which must then be assumed to accurately describe the entirety of the experimental data. The very existence of a suitable optimal force model, even in the context of a single experimental data set, is still questioned. Herein, we propose a maximum likelihood (ML) framework for the estimation of protein kinetic parameters which can accommodate all the established theoretical force increase models. Our framework does not presuppose the existence of a single force characteristic function. Rather, it can be used with a heterogeneous set of functions, each describing the protein behavior in the stretching time range leading to one rupture event. We propose a simple way of constructing such a set of functions via piecewise linear approximation of the SMFS force vs time data and we prove the suitability of the approach both with synthetic data and experimentally. Additionally, when the spontaneous unfolding rate is the only unknown parameter, we find a correction factor that eliminates the bias of the ML estimator while also reducing its variance. Finally, we investigate which of several time-constrained experiment designs leads to better estimators.
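In the constant-force limiting case of the framework above, rupture is a Poisson process and the ML estimate of the unfolding rate reduces to the reciprocal mean lifetime. A toy sketch under that simplifying assumption (the paper's estimator handles general piecewise-linear force ramps):

```python
import numpy as np

rng = np.random.default_rng(42)

k_true = 2.0  # spontaneous unfolding rate (1/s), hypothetical value

# Simulated dwell times before unfolding at constant force:
# exponentially distributed with mean 1/k
lifetimes = rng.exponential(1.0 / k_true, size=5_000)

# ML estimate for the exponential model: k_hat = n / sum(t_i)
k_hat = len(lifetimes) / lifetimes.sum()
print(f"estimated rate: {k_hat:.3f} 1/s")
```

The piecewise-linear force functions proposed in the paper generalize this likelihood to time-varying rates, one force segment per rupture event.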

  19. Recommendations for choosing an analysis method that controls Type I error for unbalanced cluster sample designs with Gaussian outcomes.

    PubMed

    Johnson, Jacqueline L; Kreidler, Sarah M; Catellier, Diane J; Murray, David M; Muller, Keith E; Glueck, Deborah H

    2015-11-30

    We used theoretical and simulation-based approaches to study Type I error rates for one-stage and two-stage analytic methods for cluster-randomized designs. The one-stage approach uses the observed data as outcomes and accounts for within-cluster correlation using a general linear mixed model. The two-stage model uses the cluster specific means as the outcomes in a general linear univariate model. We demonstrate analytically that both one-stage and two-stage models achieve exact Type I error rates when cluster sizes are equal. With unbalanced data, an exact size α test does not exist, and Type I error inflation may occur. Via simulation, we compare the Type I error rates for four one-stage and six two-stage hypothesis testing approaches for unbalanced data. With unbalanced data, the two-stage model, weighted by the inverse of the estimated theoretical variance of the cluster means, and with variance constrained to be positive, provided the best Type I error control for studies having at least six clusters per arm. The one-stage model with Kenward-Roger degrees of freedom and unconstrained variance performed well for studies having at least 14 clusters per arm. The popular analytic method of using a one-stage model with denominator degrees of freedom appropriate for balanced data performed poorly for small sample sizes and low intracluster correlation. Because small sample sizes and low intracluster correlation are common features of cluster-randomized trials, the Kenward-Roger method is the preferred one-stage approach. Copyright © 2015 John Wiley & Sons, Ltd.
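The two-stage approach described above, in its simplest unweighted form with balanced clusters (where the abstract notes the test is exact), can be sketched as a small null simulation. This is an illustrative sketch, not the authors' variance-weighted estimator:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_reps, clusters_per_arm, cluster_size, icc_sd = 2000, 6, 10, 0.5

rejections = 0
for _ in range(n_reps):
    # Null data: no treatment effect; random cluster intercepts
    # induce within-cluster correlation
    def arm():
        effects = rng.normal(0, icc_sd, clusters_per_arm)
        obs = effects[:, None] + rng.normal(0, 1, (clusters_per_arm, cluster_size))
        return obs.mean(axis=1)          # stage 1: collapse to cluster means
    _, p = ttest_ind(arm(), arm())       # stage 2: t-test on the means
    rejections += p < 0.05

print(f"empirical Type I error: {rejections / n_reps:.3f}")
```

With equal cluster sizes the empirical rejection rate sits at the nominal 0.05; the paper's comparison concerns what happens when cluster sizes are unbalanced and this exactness is lost.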

  20. Alcohol Warning Label Awareness and Attention: A Multi-method Study.

    PubMed

    Pham, Cuong; Rundle-Thiele, Sharyn; Parkinson, Joy; Li, Shanshi

    2018-01-01

    Evaluation of alcohol warning labels requires careful consideration to ensure that research captures more than awareness, given that labels may not be prominent enough to attract attention. This study investigates attention to current in-market alcohol warning labels and examines whether attention can be enhanced through theoretically informed design. Attention scores obtained through self-report methods are compared to objective measures (eye-tracking). A multi-method experimental design was used, delivering four conditions: control, colour, size, and colour and size. The first study (n = 559) involved a self-report survey to measure attention. The second study (n = 87) utilized eye-tracking to measure fixation count, fixation duration and time to first fixation. Analysis of Variance (ANOVA) was utilized. Eye-tracking identified that 60% of participants looked at the current in-market alcohol warning label while 81% looked at the optimized design (larger and red). In line with observed attention, self-reported attention increased for the optimized design. The current study casts doubt on dominant practices (largely self-report) that have been used to evaluate alcohol warning labels. Awareness cannot be used in isolation to assess warning label effectiveness in cases where attention does not occur 100% of the time. Mixed methods permit objective data collection methodologies to be triangulated with surveys to assess warning label effectiveness. Attention should be incorporated as a measure in warning label effectiveness evaluations. Colour and size changes to the existing Australian warning labels, aided by theoretically informed design, increased attention. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.

  1. Masked Phonological Priming Effects in English: Are They Real? Do They Matter?

    ERIC Educational Resources Information Center

    Rastle, Kathleen; Brysbaert, Marc

    2006-01-01

    For over 15 years, masked phonological priming effects have been offered as evidence that phonology plays a leading role in visual word recognition. The existence of these effects--along with their theoretical implications--has, however, been disputed. The authors present three sources of evidence relevant to an assessment of the existence and…

  2. Road safety performance indicators for the interurban road network.

    PubMed

    Yannis, George; Weijermars, Wendy; Gitelman, Victoria; Vis, Martijn; Chaziris, Antonis; Papadimitriou, Eleonora; Azevedo, Carlos Lima

    2013-11-01

    Various road safety performance indicators (SPIs) have been proposed for different road safety research areas, mainly as regards driver behaviour (e.g. seat belt use, alcohol, drugs, etc.) and vehicles (e.g. passive safety); however, no SPIs for the road network and design have been developed. The objective of this research is the development of an SPI for the road network, to be used as a benchmark for cross-region comparisons. The developed SPI essentially makes a comparison of the existing road network to the theoretically required one, defined as one which meets some minimum requirements with respect to road safety. This paper presents a theoretical concept for the determination of this SPI as well as a translation of this theory into a practical method. Also, the method is applied in a number of pilot countries namely the Netherlands, Portugal, Greece and Israel. The results show that the SPI could be efficiently calculated in all countries, despite some differences in the data sources. In general, the calculated overall SPI scores were realistic and ranged from 81 to 94%, with the exception of Greece where the SPI was relatively lower (67%). However, the SPI should be considered as a first attempt to determine the safety level of the road network. The proposed method has some limitations and could be further improved. The paper presents directions for further research to further develop the SPI. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Synchronization of two homodromy rotors installed on a double vibro-body in a coupling vibration system.

    PubMed

    Fang, Pan; Hou, Yongjun; Nan, Yanghai

    2015-01-01

    A new mechanism is proposed to implement synchronization of two unbalanced rotors in a vibration system consisting of a double vibro-body, two induction motors and spring foundations. The coupling relationship between the vibro-bodies is ascertained by applying the Laplace transformation to the dynamics equations of the system obtained with Lagrange's equations. An analytical approach, the average method of modified small parameters, is employed to study the synchronization characteristics of the two unbalanced rotors, which reduces the problem to the existence and stability of zero solutions of the non-dimensional differential equations for the angular velocity disturbance parameters. By assuming disturbance parameters that approach zero, the synchronization condition for the two rotors is obtained: the absolute value of the residual torque between the two motors must be equal to or less than the maximum of their coupling torques. Meanwhile, the stability criterion of synchronization is derived with the Routh-Hurwitz method, and the region of the stable phase difference is determined. Finally, computer simulations are performed to verify the approximate analytical solution for the stable phase difference between the two unbalanced rotors, and the results of the theoretical computation are in accordance with those of the simulations. In summary, only if the parameters of the vibration system satisfy the synchronization condition and the stability criterion can the two unbalanced rotors achieve synchronized operation.
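The Routh-Hurwitz step used to derive the stability criterion can be illustrated generically; the following is a textbook first-column Routh-array test on example polynomials, not the paper's specific characteristic polynomial:

```python
def routh_stable(coeffs):
    """Return True if all roots of the polynomial (descending-order
    coefficients, positive leading coefficient) lie in the open left
    half-plane, by checking the first column of the Routh array.
    Assumes no zero pivots arise along the way."""
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    width = len(rows[0])
    for r in rows:                       # pad both seed rows to equal width
        r += [0.0] * (width - len(r))
    for _ in range(len(coeffs) - 2):     # build the remaining rows
        prev2, prev1 = rows[-2], rows[-1]
        new = [(prev1[0] * prev2[j + 1] - prev2[0] * prev1[j + 1]) / prev1[0]
               for j in range(width - 1)] + [0.0]
        rows.append(new)
    return all(row[0] > 0 for row in rows)

# (s + 1)^3 = s^3 + 3s^2 + 3s + 1: all roots at -1, stable
print(routh_stable([1, 3, 3, 1]))    # True
# s^3 + s^2 + s + 3 violates a1*a2 > a0*a3, hence unstable
print(routh_stable([1, 1, 1, 3]))    # False
```

A sign change anywhere in the first column signals a right-half-plane root, which is the condition the paper's stability region is built from.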

  5. Theoretical frameworks used to discuss ethical issues in private physiotherapy practice and proposal of a new ethical tool.

    PubMed

    Drolet, Marie-Josée; Hudon, Anne

    2015-02-01

    In the past, several researchers in the field of physiotherapy have asserted that physiotherapy clinicians rarely use ethical knowledge to solve ethical issues raised by their practice. Does this assertion still hold true? Do the theoretical frameworks used by researchers and clinicians allow them to analyze thoroughly the ethical issues they encounter in their everyday practice? In our quest for answers, we conducted a literature review and analyzed the ethical theoretical frameworks used by physiotherapy researchers and clinicians to discuss the ethical issues raised by private physiotherapy practice. Our final analysis corpus consisted of thirty-nine texts. Our main finding is that researchers and clinicians in physiotherapy rarely use ethical knowledge to analyze the ethical issues raised in their practice and that gaps exist in the theoretical frameworks currently used to analyze these issues. Consequently, we developed, for ethical analysis, a four-part prism which we have called the Quadripartite Ethical Tool (QET). This tool can be incorporated into existing theoretical frameworks to enable professionals to integrate ethical knowledge into their ethical analyses. The innovative particularity of the QET is that it encompasses three ethical theories (utilitarianism, deontologism, and virtue ethics) and axiological ontology (professional values) and also draws on both deductive and inductive approaches. It is our hope that this new tool will help researchers and clinicians integrate ethical knowledge into their analysis of ethical issues and contribute to fostering ethical analyses that are grounded in relevant philosophical and axiological foundations.

  6. Excited Negative Ions and Molecules and Negative Ion Production

    DTIC Science & Technology

    1992-01-01

    theoretically to have negative electron affinities, analogous to the rare gases. Then, Froese Fischer et al. found theoretically that Ca- exists... AD-A247 017. Final Report, January 1992: EXCITED NEGATIVE IONS AND MOLECULES AND NEGATIVE ION PRODUCTION. James R. Peterson, Senior Staff...

  7. Bias correction for selecting the minimal-error classifier from many machine learning models.

    PubMed

    Ding, Ying; Tang, Shaowu; Liao, Serena G; Jia, Jia; Oesterreich, Steffi; Lin, Yan; Tseng, George C

    2014-11-15

    Supervised machine learning is commonly applied in genomic research to construct a classifier from training data that is generalizable to independent testing data. When test datasets are not available, cross-validation is commonly used to estimate the error rate. Many machine learning methods are available, and it is well known that no universally best method exists. It has been common practice to apply many machine learning methods and report the one that produces the smallest cross-validation error rate. Theoretically, such a procedure produces a selection bias. Consequently, many clinical studies with moderate sample sizes (e.g. n = 30-60) risk reporting a falsely small cross-validation error rate that could not be validated later in independent cohorts. In this article, we illustrate the probabilistic framework of the problem and explore its statistical and asymptotic properties. We propose a new bias correction method based on learning curve fitting by inverse power law (IPL) and compare it with three existing methods: nested cross-validation, weighted mean correction and the Tibshirani-Tibshirani procedure. All methods were compared on simulated datasets, five moderate-size real datasets and two large breast cancer datasets. The results show that IPL outperforms the other methods in bias correction with smaller variance, and it has the additional advantage of extrapolating error estimates for larger sample sizes, a practical feature for recommending whether more samples should be recruited to improve the classifier. An R package 'MLbias' and all source files are publicly available at tsenglab.biostat.pitt.edu/software.htm. Contact: ctseng@pitt.edu. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
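The selection bias described above is easy to reproduce: on data with no signal, the minimum cross-validation error across many candidate models is optimistically biased below the true 50% error rate. A schematic illustration of the bias itself (not the proposed IPL correction):

```python
import numpy as np

rng = np.random.default_rng(7)
n_sims, n_models, n_samples = 1000, 20, 40

# On null data each model's CV error estimate is ~ Binomial(n, 0.5) / n;
# reporting the minimum across models is what biases the estimate down.
errors = rng.binomial(n_samples, 0.5, size=(n_sims, n_models)) / n_samples
reported = errors.min(axis=1)

print(f"true error 0.500, mean reported minimum {reported.mean():.3f}")
```

With 20 candidate models and n = 40, the reported minimum sits well below 0.5 on average, which is the falsely small error rate the abstract warns about.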

  8. Factors influencing tests of auditory processing: a perspective on current issues and relevant concerns.

    PubMed

    Cacace, Anthony T; McFarland, Dennis J

    2013-01-01

    Tests of auditory perception, such as those used in the assessment of central auditory processing disorders ([C]APDs), represent a domain in audiological assessment where measurement of this theoretical construct is often confounded by nonauditory abilities due to methodological shortcomings. These confounds include the effects of cognitive variables such as memory and attention and suboptimal testing paradigms, including the use of verbal reproduction as a form of response selection. We argue that these factors need to be controlled more carefully and/or modified so that their impact on tests of auditory and visual perception is only minimal. To advocate for a stronger theoretical framework than currently exists and to suggest better methodological strategies to improve assessment of auditory processing disorders (APDs). Emphasis is placed on adaptive forced-choice psychophysical methods and the use of matched tasks in multiple sensory modalities to achieve these goals. Together, this approach has potential to improve the construct validity of the diagnosis, enhance and develop theory, and evolve into a preferred method of testing. Examination of methods commonly used in studies of APDs. Where possible, currently used methodology is compared to contemporary psychophysical methods that emphasize computer-controlled forced-choice paradigms. In many cases, the procedures used in studies of APD introduce confounding factors that could be minimized if computer-controlled forced-choice psychophysical methods were utilized. Ambiguities of interpretation, indeterminate diagnoses, and unwanted confounds can be avoided by minimizing memory and attentional demands on the input end and precluding the use of response-selection strategies that use complex motor processes on the output end. 
Advocated are the use of computer-controlled forced-choice psychophysical paradigms in combination with matched tasks in multiple sensory modalities to enhance the prospect of obtaining a valid diagnosis. American Academy of Audiology.

  9. Research Developments in Li-Paczyński Novae (II): Observational Aspect

    NASA Astrophysics Data System (ADS)

    Shan-qin, Wang; Zi-gao, Dai; Xue-feng, Wu

    2016-10-01

    Since the LP-Nova models were proposed and short gamma-ray burst (SGRB) afterglows were confirmed, researchers have actively searched for evidence of the existence of LP-Novae among the optical (or near-infrared) counterparts of SGRBs. In this paper, we first summarize the observational progress made before 2012 in Section 2. In Sections 3 and 4, we respectively introduce the basic properties of GRBs 130603B and 060614, as well as the theoretical interpretation of their near-infrared (NIR) counterparts, whose NIR excess may be a signature of the existence of LP-Novae. In Section 5, we describe the basic properties of GRB 080503 and the theoretical interpretation of its optical and X-ray counterparts; the late re-brightening of its optical and X-ray light curves is explained as radiation from ejecta heated by a magnetar formed after the neutron star merger (merger-nova radiation). If these interpretations of the SGRB-associated optical and infrared counterparts are correct, they may provide the first series of direct evidence that SGRBs and some special LGRBs originate from compact star mergers. Besides LP-Novae (and merger-novae), the high-speed orbital motion before a compact star merger, and the merger itself, will produce strong gravitational-wave bursts (GWBs). In the coming era of gravitational wave detection, theoretical and observational studies of the electromagnetic counterparts of compact star mergers will receive more and more attention. Owing to the large uncertainty in GWB localization, LP-Novae associated with GWBs can serve as the best candidates for the precise localization of GWBs. Fast-developing high-cadence, wide-field optical-NIR surveys will enable effective searches for LP-Novae and similar phenomena, and will complement the detection and study of gravitational waves.
    Therefore, in the last section we present methods for future detections of LP-Novae and the prospects for their multi-messenger observation.

  10. Theoretical study of the gas-phase structures of sodiated and cesiated leucine and isoleucine: zwitterionic structure disfavored in kinetic method experiments.

    PubMed

    Rozman, Marko

    2005-10-01

    The most stable charge-solvated (CS) and zwitterionic (ZW) structures of sodiated and cesiated leucine and isoleucine were studied by density functional theory methods. According to the Boltzmann distribution in gas phase, both forms of LeuNa+ and IleNa+ exist, but in LeuCs+ and IleCs+, the ZW forms are dominant. Results for the sodiated compounds are consistent with the relationship found between decrease in relative stability of CS versus ZW form and aliphatic amino acid side chain length. The observed degeneracy in energy for IleNa+ conformers is at odds with kinetic method results. Additional calculations showed that kinetic method structural determinations for IleNa+ do not reflect relative order of populations in the lowest energy conformers. Since complexation of cationized amino acids into ion-bound dimers disfavors ZW structure by approximately 8 kJ mol(-1), it is suggested that for energy close conformers of sodium-cationized amino acids, the kinetic method may not be reliable for structural determinations. Copyright (c) 2005 John Wiley & Sons, Ltd.
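The gas-phase Boltzmann populations invoked above follow directly from relative conformer energies; a generic sketch with hypothetical energy values (not the paper's DFT results):

```python
import math

def boltzmann_populations(rel_energies_kj, T=298.15):
    """Fractional populations from relative energies in kJ/mol."""
    RT = 8.314462618e-3 * T                   # gas constant (kJ/mol/K) * T
    weights = [math.exp(-e / RT) for e in rel_energies_kj]
    z = sum(weights)                          # partition function
    return [w / z for w in weights]

# Hypothetical CS vs ZW forms separated by 8 kJ/mol (an illustrative gap,
# matching the dimer destabilization scale quoted in the abstract)
pops = boltzmann_populations([0.0, 8.0])
print([f"{p:.3f}" for p in pops])
```

An 8 kJ/mol gap at room temperature concentrates roughly 96% of the population in the lower conformer, which is why near-degenerate conformers are the problematic case for kinetic method determinations.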

  11. Is Best-Worst Scaling Suitable for Health State Valuation? A Comparison with Discrete Choice Experiments.

    PubMed

    Krucien, Nicolas; Watson, Verity; Ryan, Mandy

    2017-12-01

    Health utility indices (HUIs) are widely used in economic evaluation. The best-worst scaling (BWS) method is being used to value dimensions of HUIs; however, little is known about the properties of this method. This paper investigates the validity of the BWS method for developing HUIs, comparing it to another ordinal valuation method, the discrete choice experiment (DCE). Using a parametric approach, we find a low level of concordance between the two methods, with evidence of preference reversals. BWS responses are subject to decision biases, with significant effects on individuals' preferences. Non-parametric tests indicate that BWS data have lower stability, monotonicity and continuity compared with DCE data, suggesting that BWS provides lower-quality data. As a consequence, for both theoretical and technical reasons, practitioners should be cautious both about using the BWS method to measure health-related preferences and about using HUIs based on BWS data. Given existing evidence, the DCE method appears to be the better method, at least because its limitations (and measurement properties) have been extensively researched. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Who's in and why? A typology of stakeholder analysis methods for natural resource management.

    PubMed

    Reed, Mark S; Graves, Anil; Dandy, Norman; Posthumus, Helena; Hubacek, Klaus; Morris, Joe; Prell, Christina; Quinn, Claire H; Stringer, Lindsay C

    2009-04-01

    Stakeholder analysis means many things to different people. Various methods and approaches have been developed in different fields for different purposes, leading to confusion over the concept and practice of stakeholder analysis. This paper asks how and why stakeholder analysis should be conducted for participatory natural resource management research. This is achieved by reviewing the development of stakeholder analysis in business management, development and natural resource management. The normative and instrumental theoretical basis for stakeholder analysis is discussed, and a stakeholder analysis typology is proposed. This consists of methods for: i) identifying stakeholders; ii) differentiating between and categorising stakeholders; and iii) investigating relationships between stakeholders. The range of methods that can be used to carry out each type of analysis is reviewed. These methods and approaches are then illustrated through a series of case studies funded through the Rural Economy and Land Use (RELU) programme. These case studies show the wide range of participatory and non-participatory methods that can be used, and discuss some of the challenges and limitations of existing methods for stakeholder analysis. The case studies also propose new tools and combinations of methods that can more effectively identify and categorise stakeholders and help understand their inter-relationships.

  13. Proton Magnetic Form Factor from Existing Elastic e-p Cross Section Data

    NASA Astrophysics Data System (ADS)

    Ou, Longwu; Christy, Eric; Gilad, Shalev; Keppel, Cynthia; Schmookler, Barak; Wojtsekhowski, Bogdan

    2015-04-01

    The proton magnetic form factor GMp, in addition to being an important benchmark for all cross section measurements in hadron physics, provides critical information on proton structure. Extraction of GMp from e-p cross section data is complicated by two-photon exchange (TPE) effects, for which available calculations still have large theoretical uncertainties. Studies of TPE contributions to e-p scattering have observed no nonlinear effects in Rosenbluth separations. Recent theoretical investigations show that the TPE correction goes to 0 as ɛ approaches 1, where ɛ is the virtual photon polarization parameter. In this talk, existing e-p elastic cross section data are reanalyzed by extrapolating the reduced cross section to ɛ approaching 1. Existing polarization transfer data, which are expected to be relatively immune to TPE effects, are used to produce a ratio of electric and magnetic form factors. The extrapolated reduced cross section and polarization transfer ratio are then used to calculate GEp and GMp at different Q2 values.
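
    In the one-photon-exchange approximation the reduced cross section is linear in ɛ, σ_R = ɛ·GE² + τ·GM² with τ = Q²/4M², so the ɛ→1 extrapolation combined with a polarization-transfer ratio can be sketched numerically. The code below is an illustrative reconstruction with synthetic numbers, not the talk's actual data or analysis; `gm_from_extrapolation` and all values are hypothetical.

    ```python
    import numpy as np

    # One-photon-exchange Rosenbluth form: sigma_R(eps) = eps*GE^2 + tau*GM^2,
    # with tau = Q^2 / (4*M_p^2).  Illustrative sketch, not the talk's data.
    M_P = 0.938  # proton mass, GeV

    def gm_from_extrapolation(eps, sigma_r, q2, ge_over_gm):
        """Extract GM from a linear extrapolation of the reduced cross
        section to eps -> 1, combined with a polarization-transfer ratio
        r = GE/GM (assumed relatively immune to TPE effects)."""
        tau = q2 / (4.0 * M_P**2)
        # linear fit sigma_R = a*eps + b, then evaluate at eps = 1
        a, b = np.polyfit(eps, sigma_r, 1)
        sigma_at_1 = a + b
        # sigma_R(1) = GE^2 + tau*GM^2 = (r^2 + tau) * GM^2
        return np.sqrt(sigma_at_1 / (ge_over_gm**2 + tau))

    # synthetic data generated from known GE, GM to check the inversion
    q2, ge, gm = 1.0, 0.3, 0.8
    tau = q2 / (4.0 * M_P**2)
    eps = np.linspace(0.2, 0.95, 8)
    sigma_r = eps * ge**2 + tau * gm**2
    print(gm_from_extrapolation(eps, sigma_r, q2, ge / gm))  # recovers ~0.8
    ```

    Because the synthetic σ_R is exactly linear in ɛ, the fit and inversion recover the input GM; real data would carry statistical and TPE-related systematic uncertainties.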

  14. "We're not short of people telling us what the problems are. We're short of people telling us what to do": An appraisal of public policy and mental health

    PubMed Central

    Petticrew, Mark; Platt, Stephen; McCollam, Allyson; Wilson, Sarah; Thomas, Sian

    2008-01-01

    Background There is sustained interest in public health circles in assessing the effects of policies on health and health inequalities. We report on the theory, methods and findings of a project which involved an appraisal of current Scottish policy with respect to its potential impacts on mental health and wellbeing. Methods We developed a method of assessing the degree of alignment between Government policies and the 'evidence base', involving: reviewing theoretical frameworks; analysis of policy documents, and nineteen in-depth interviews with policymakers which explored influences on, and barriers to cross-cutting policymaking and the use of research evidence in decisionmaking. Results Most policy documents did not refer to mental health; however most referred indirectly to the determinants of mental health and well-being. Unsurprisingly research evidence was rarely cited; this was more common in health policy documents. The interviews highlighted the barriers to intersectoral policy making, and pointed to the relative value of qualitative and quantitative research, as well as to the imbalance of evidence between "what is known" and "what is to be done". Conclusion Healthy public policy depends on effective intersectoral working between government departments, along with better use of research evidence to identify policy impacts. This study identified barriers to both these. We also demonstrated an approach to rapidly appraising the mental health effects of mainly non-health sector policies, drawing on theoretical understandings of mental health and its determinants, research evidence and policy documents. In the case of the social determinants of health, we conclude that an evidence-based approach to policymaking and to policy appraisal requires drawing strongly upon existing theoretical frameworks, as well as upon research evidence, but that there are significant practical barriers and disincentives. PMID:18793414

  15. Determination of wind tunnel constraint effects by a unified pressure signature method. Part 2: Application to jet-in-crossflow

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Sampath, S.; Phillips, C. G.

    1981-01-01

    The development of an improved jet-in-crossflow model for estimating wind tunnel blockage and angle-of-attack interference is described. Experiments showed that the simpler existing models fall seriously short of representing far-field flows properly. A new, vortex-source-doublet (VSD) model was therefore developed which employs curved trajectories and experimentally-based singularity strengths. The new model is consistent with existing and new experimental data and it predicts tunnel wall (i.e. far-field) pressures properly. It is implemented as a preprocessor to the wall-pressure-signature-based tunnel interference predictor. The supporting experiments and theoretical studies revealed some new results. Comparative flow field measurements with 1-inch "free-air" and 3-inch impinging jets showed that vortex penetration into the flow, in diameters, was almost unaltered until 'hard' impingement occurred. In modeling impinging cases, a 'plume redirection' term was introduced which is apparently absent in previous models. The effects of this term were found to be very significant.

  16. Empirical Equation Based Chirality (n, m) Assignment of Semiconducting Single Wall Carbon Nanotubes from Resonant Raman Scattering Data

    PubMed Central

    Arefin, Md Shamsul

    2012-01-01

    This work presents a technique for the chirality (n, m) assignment of semiconducting single wall carbon nanotubes by solving a set of empirical equations of the tight binding model parameters. The empirical equations of the nearest neighbor hopping parameters, relating the term (2n − m) to the first and second optical transition energies of the semiconducting single wall carbon nanotubes, are also proposed. They provide almost the same level of accuracy for lower and higher diameter nanotubes. An algorithm is presented to determine the chiral index (n, m) of any unknown semiconducting tube by solving these empirical equations using values of the radial breathing mode frequency and the first or second optical transition energy from resonant Raman spectroscopy. In this paper, the chirality of 55 semiconducting nanotubes is assigned using the first and second optical transition energies. Unlike existing methods of chirality assignment, this technique does not require graphical comparison or pattern recognition between existing experimental and theoretical Kataura plots. PMID:28348319
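
    The first stage of such an assignment can be illustrated in a simplified form: the radial breathing mode frequency fixes the tube diameter through the commonly cited empirical relation d ≈ 248/ω_RBM (d in nm, ω in cm⁻¹), and candidate semiconducting (n, m) pairs are those whose geometric diameter matches it. This sketch omits the optical-transition-energy equations the paper uses to pin down a unique (n, m); the function names, tolerance and index cutoff are hypothetical.

    ```python
    import math

    A_CC = 0.246  # graphene lattice constant, nm

    def diameter(n, m):
        """Nanotube diameter in nm from the chiral indices (n, m)."""
        return A_CC / math.pi * math.sqrt(n * n + n * m + m * m)

    def semiconducting_candidates(omega_rbm, tol=0.03, a_rbm=248.0):
        """List semiconducting (n, m) whose diameter matches the radial
        breathing mode frequency omega_rbm (cm^-1) via the commonly used
        empirical relation d = A / omega_rbm (A ~ 248 nm*cm^-1).  A final
        assignment would additionally require matching a measured optical
        transition energy, as in the paper."""
        d_target = a_rbm / omega_rbm
        out = []
        for n in range(1, 30):
            for m in range(0, n + 1):
                if (n - m) % 3 == 0:      # metallic/quasi-metallic: skip
                    continue
                if abs(diameter(n, m) - d_target) < tol:
                    out.append((n, m))
        return out

    print(semiconducting_candidates(248.0))  # candidates with d near 1.0 nm
    ```

    For ω_RBM = 248 cm⁻¹ (d ≈ 1.0 nm) several semiconducting candidates survive, e.g. (13, 0) and (8, 7), which is exactly why a second observable is needed to resolve the ambiguity.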

  17. CCOMP: An efficient algorithm for complex roots computation of determinantal equations

    NASA Astrophysics Data System (ADS)

    Zouros, Grigorios P.

    2018-01-01

    In this paper a free Python algorithm, entitled CCOMP (Complex roots COMPutation), is developed for the efficient computation of complex roots of determinantal equations inside a prescribed complex domain. The key to the method presented is the efficient determination of the candidate points inside the domain in whose close neighborhood a complex root may lie. Once these points are detected, the algorithm proceeds to a two-dimensional minimization problem with respect to the minimum modulus eigenvalue of the system matrix. At the core of CCOMP are three sub-algorithms whose tasks are the efficient estimation of the minimum modulus eigenvalues of the system matrix inside the prescribed domain, the efficient computation of candidate points which guarantee the existence of minima, and finally, the computation of minima via bound constrained minimization algorithms. Theoretical results and heuristics support the development and the performance of the algorithm, which is discussed in detail. CCOMP supports general complex matrices, and its efficiency, applicability and validity are demonstrated on a variety of microwave applications.
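
    A minimal sketch of the candidate-then-minimize structure, assuming a user-supplied matrix function A(z): scan the modulus of the smallest-modulus eigenvalue over a grid, keep grid-local minima as candidate points, and refine each candidate by two-dimensional minimization over (Re z, Im z). This is not the CCOMP code itself (which adds the efficiency machinery and heuristics the abstract describes), only an illustration of the idea.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def min_mod_eig(A):
        """Modulus of the smallest-modulus eigenvalue of A."""
        return np.min(np.abs(np.linalg.eigvals(A)))

    def complex_roots(matrix_fn, re_lim, im_lim, n_grid=40, tol=1e-6):
        """Toy version of the CCOMP idea: scan |lambda_min| of A(z) on a
        grid over the rectangle re_lim x im_lim, take grid-local minima
        as candidates, then refine each by 2-D minimization.  A refined
        minimum with value ~0 marks a root of det A(z) = 0."""
        xs = np.linspace(*re_lim, n_grid)
        ys = np.linspace(*im_lim, n_grid)
        F = np.array([[min_mod_eig(matrix_fn(x + 1j * y)) for x in xs]
                      for y in ys])
        roots = []
        for i in range(1, n_grid - 1):
            for j in range(1, n_grid - 1):
                if F[i, j] == F[i - 1:i + 2, j - 1:j + 2].min():
                    f = lambda p: min_mod_eig(matrix_fn(p[0] + 1j * p[1]))
                    res = minimize(f, [xs[j], ys[i]], method="Nelder-Mead",
                                   options={"xatol": 1e-10, "fatol": 1e-10})
                    z = complex(res.x[0], res.x[1])
                    if res.fun < tol and not any(abs(z - r) < 1e-4 for r in roots):
                        roots.append(z)
        return roots

    # det A(z) = z^2 - 1, so the roots inside the box are z = -1 and z = +1
    A = lambda z: np.array([[z, 1.0], [1.0, z]])
    print(sorted(complex_roots(A, (-2, 2), (-1, 1)), key=lambda z: z.real))
    ```

    The grid scan plays the role of CCOMP's candidate-detection sub-algorithms, and the Nelder-Mead refinement stands in for its bound-constrained minimization stage.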

  18. An H(∞) control approach to robust learning of feedforward neural networks.

    PubMed

    Jing, Xingjian

    2011-09-01

    A novel H(∞) robust control approach is proposed in this study to deal with the learning problems of feedforward neural networks (FNNs). The analysis and design of a desired weight update law for the FNN is transformed into a robust controller design problem for a discrete dynamic system in terms of the estimation error. The drawbacks of some existing learning algorithms can therefore be revealed, especially when the output data change rapidly with respect to the input or are corrupted by noise. Based on this approach, the optimal learning parameters can be found by utilizing linear matrix inequality (LMI) optimization techniques to achieve a predefined H(∞) "noise" attenuation level. Several existing BP-type algorithms are shown to be special cases of the new H(∞)-learning algorithm. Theoretical analysis and several examples are provided to show the advantages of the new method. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. What procedure to choose while designing a fuzzy control? Towards mathematical foundations of fuzzy control

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik YA.; Quintana, Chris; Lea, Robert

    1991-01-01

    Fuzzy control has been successfully applied in industrial systems. However, there is some caution in using it. The reason is that it is based on quite reasonable ideas, but each of these ideas can be implemented in several different ways, and depending on which implementation is chosen, different results are achieved. Some implementations lead to high-quality control, others do not. And since there are no theoretical methods for choosing the implementation, the basic way to choose one at present is experimental. But if one chooses a method that works well for several examples, there is no guarantee that it will work well in all of them. Hence the caution. A theoretical basis for choosing fuzzy control procedures is provided. In order to choose a procedure that transforms fuzzy knowledge into a control, one needs, first, to choose a membership function for each of the fuzzy terms that the experts use; second, to choose operations on uncertainty values that correspond to 'and' and 'or'; and third, once a membership function for the control is obtained, one must defuzzify it, that is, somehow generate the value of the control u that will actually be used. A general approach that helps make all these choices is described: namely, it is proved that under reasonable assumptions membership functions should be linear or fractionally linear, defuzzification must be described by a centroid rule, and all possible 'and' and 'or' operations are characterized. Thus, a theoretical explanation of the existing semi-heuristic choices is given, and a basis for further research on optimal fuzzy control is formulated.
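
    The centroid defuzzification rule singled out by the analysis has a direct numerical form: the crisp control value is the center of mass of the output membership function. A minimal sketch (the triangular membership function below is an arbitrary example, not from the paper):

    ```python
    import numpy as np

    def centroid_defuzzify(u, mu):
        """Centroid rule: the crisp control value is the center of mass
        of the output membership function mu sampled on the grid u."""
        return float(np.sum(u * mu) / np.sum(mu))

    # symmetric triangular membership function centered at u = 2 on [0, 4]
    u = np.linspace(0.0, 4.0, 401)
    mu = np.maximum(0.0, 1.0 - np.abs(u - 2.0))
    print(centroid_defuzzify(u, mu))  # -> 2.0, the centroid of a symmetric set
    ```

    For an asymmetric membership function the centroid shifts toward the heavier side, which is exactly the averaging behavior the centroid rule is chosen for.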

  20. Describing three-class task performance: three-class linear discriminant analysis and three-class ROC analysis

    NASA Astrophysics Data System (ADS)

    He, Xin; Frey, Eric C.

    2007-03-01

    Binary ROC analysis has solid decision-theoretic foundations and a close relationship to linear discriminant analysis (LDA). In particular, for the case of Gaussian equal covariance input data, the area under the ROC curve (AUC) value has a direct relationship to the Hotelling trace. Many attempts have been made to extend binary classification methods to multi-class. For example, Fukunaga extended binary LDA to obtain multi-class LDA, which uses the multi-class Hotelling trace as a figure-of-merit, and we have previously developed a three-class ROC analysis method. This work explores the relationship between conventional multi-class LDA and three-class ROC analysis. First, we developed a linear observer, the three-class Hotelling observer (3-HO). For Gaussian equal covariance data, the 3-HO provides equivalent performance to the three-class ideal observer and, under less strict conditions, maximizes the signal to noise ratio for classification of all pairs of the three classes simultaneously. The 3-HO templates are not the eigenvectors obtained from multi-class LDA. Second, we show that the three-class Hotelling trace, which is the figure-of-merit in the conventional three-class extension of LDA, has significant limitations. Third, we demonstrate that, under certain conditions, there is a linear relationship between the eigenvectors obtained from multi-class LDA and 3-HO templates. We conclude that the 3-HO based on decision theory has advantages both in its decision theoretic background and in the usefulness of its figure-of-merit. Additionally, there exists the possibility of interpreting the two linear features extracted by the conventional extension of LDA from a decision theoretic point of view.
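
    For the Gaussian equal-covariance binary case that anchors this analysis, the Hotelling template and its separation SNR have closed forms: w = S⁻¹(μ₁ − μ₂) and SNR² = (μ₁ − μ₂)ᵀ S⁻¹ (μ₁ − μ₂). The sketch below shows only this binary building block; the paper's 3-HO, which handles all three class pairs simultaneously, is not reproduced here, and the numbers are arbitrary.

    ```python
    import numpy as np

    def hotelling_template(mu1, mu2, cov):
        """Binary Hotelling observer template w = S^-1 (mu1 - mu2)."""
        return np.linalg.solve(cov, mu1 - mu2)

    def hotelling_snr2(mu1, mu2, cov):
        """Squared class-separation SNR, (mu1-mu2)^T S^-1 (mu1-mu2)."""
        d = mu1 - mu2
        return float(d @ np.linalg.solve(cov, d))

    mu1 = np.array([1.0, 0.0])
    mu2 = np.array([0.0, 1.0])
    cov = np.array([[1.0, 0.5], [0.5, 1.0]])

    w = hotelling_template(mu1, mu2, cov)
    # with equal covariance, the template output w @ x is the optimal
    # linear discriminant between the two Gaussian classes
    print(w, hotelling_snr2(mu1, mu2, cov))
    ```

    For these inputs the template is [2, -2] and SNR² = 4; the AUC-Hotelling connection mentioned in the abstract follows because, for equal-covariance Gaussians, AUC is a monotone function of this SNR.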

  1. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges for the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions about the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
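
    The first stage of this pipeline, a pairwise mutual-information measure of nonlinear dependence, can be sketched with a plug-in histogram estimator. The paper's own estimator and the DP clustering step are not reproduced here; bin count, sample size and the test variables are arbitrary. The example shows why MI is preferred over correlation: y = x² is strongly dependent on x yet almost uncorrelated with it.

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Plug-in estimate of I(X;Y) in nats from a 2-D histogram, used
        here as a pairwise nonlinear-dependence measure between variables."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy /= pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    x = rng.normal(size=20000)
    y = x**2                        # nonlinear, deterministic dependence
    z = rng.normal(size=20000)      # independent of x

    print(np.corrcoef(x, y)[0, 1])  # near 0: linear correlation misses it
    print(mutual_information(x, y)) # clearly positive
    print(mutual_information(x, z)) # near 0 for independent variables
    ```

    A clustering step would then group variables whose pairwise MI is high, which is the role the Dirichlet process models play in the paper.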

  2. Antileishmanial activity study and theoretical calculations for 4-amino-1,2,4-triazole derivatives

    NASA Astrophysics Data System (ADS)

    Süleymanoğlu, Nevin; Ünver, Yasemin; Ustabaş, Reşat; Direkel, Şahin; Alpaslan, Gökhan

    2017-09-01

    4-amino-1,2,4-triazole derivatives 4-amino-1-((5-mercapto-1,3,4-oxadiazole-2-yl)methyl)-3-(thiophene-2-ylmethyl)-1H-1,2,4-triazole-5(4H)-one (1) and 4-amino-1-((4-amino-5-mercapto-4H-1,2,4-triazole-3-yl)methyl)-3-(thiophene-2-ylmethyl)-1H-1,2,4-triazole-5(4H)-one (2) were studied theoretically by the Density Functional Theory (DFT) method with the 6-311++G(d,p) basis set, and structural and some spectroscopic parameters were determined. Significant differences between the experimental and calculated values of vibrational frequencies and chemical shifts were explained by the presence of intermolecular (S–H⋯O and S–H⋯N type) hydrogen bonds in the structures. The Molecular Electrostatic Potential (MEP) maps obtained at B3LYP/6-311++G(d,p) support the existence of hydrogen bonds. The compounds were tested against Leishmania infantum promastigotes by the microdilution broth assay with Alamar Blue dye. The antileishmanial activity of 4-amino-1,2,4-triazole derivative (2) is remarkable.

  3. Manifestation of quark clusters in the emission of cumulative protons in the experiment on the fragmentation of carbon ions

    NASA Astrophysics Data System (ADS)

    Abramov, B. M.; Alekseev, P. N.; Borodin, Yu. A.; Bulychjov, S. A.; Dukhovskoy, I. A.; Krutenkova, A. P.; Kulikov, V. V.; Martemyanov, M. A.; Matsyuk, M. A.; Turdakina, E. N.; Khanov, A. I.

    2013-06-01

    The proton yields at an angle of 3.5° have been measured in the FRAGM experiment on the fragmentation of carbon ions with the energies T 0 = 0.6, 0.95, and 2.0 GeV/nucleon on a beryllium target at the heavy-ion accelerator complex TWAC (terawatt accumulator, Institute for Theoretical and Experimental Physics). The data are represented in the form of the dependences of the invariant cross section for proton yield on the cumulative variable x in the range of 0.9 < x < 2.4. This invariant cross section varies within six orders of magnitude. The proton spectra have been analyzed within the theoretical approach of the fragmentation of quark clusters with the fragmentation functions obtained in the quark-gluon string model. The probabilities of the existence of six- and nine-quark clusters in the carbon nuclei are estimated as 8-12 and 0.2-0.6%, respectively. The results are compared to the estimates of quark effects obtained by other methods.

  4. A mechanically driven form of Kirigami as a route to 3D mesostructures in micro/nanomembranes

    DOE PAGES

    Zhang, Yihui; Yan, Zheng; Nan, Kewang; ...

    2015-09-08

    Assembly of 3D micro/nanostructures in advanced functional materials has important implications across broad areas of technology. Existing approaches are compatible, however, only with narrow classes of materials and/or 3D geometries. This article introduces ideas for a form of Kirigami that allows precise, mechanically driven assembly of 3D mesostructures of diverse materials from 2D micro/nanomembranes with strategically designed geometries and patterns of cuts. Theoretical and experimental studies demonstrate applicability of the methods across length scales from macro to nano, in materials ranging from monocrystalline silicon to plastic, with levels of topographical complexity that significantly exceed those that can be achieved using other approaches. A broad set of examples includes 3D silicon mesostructures and hybrid nanomembrane-nanoribbon systems, including heterogeneous combinations with polymers and metals, with critical dimensions that range from 100 nm to 30 mm. Lastly, a 3D mechanically tunable optical transmission window provides an application example of this Kirigami process, enabled by theoretically guided design.

  5. Recovery of time-dependent volatility in option pricing model

    NASA Astrophysics Data System (ADS)

    Deng, Zui-Cha; Hon, Y. C.; Isakov, V.

    2016-11-01

    In this paper we investigate an inverse problem of determining the time-dependent volatility from observed market prices of options with different strikes. Due to the nonlinearity and sparsity of observations, an analytical solution to the problem is generally not available. Numerical approximation is also difficult to obtain using most of the existing numerical algorithms. Based on our recent theoretical results, we apply the linearisation technique to convert the problem into an inverse source problem from which recovery of the unknown volatility function can be achieved. Two kinds of strategies, namely the integral equation method and Landweber iterations, are adopted to obtain a stable numerical solution to the inverse problem. Both theoretical analysis and numerical examples confirm that the proposed approaches are effective. The work described in this paper was partially supported by a grant from the Research Grant Council of the Hong Kong Special Administrative Region (Project No. CityU 101112) and grants from the NNSF of China (Nos. 11261029, 11461039), and NSF grants DMS 10-08902 and 15-14886 and by the Emylou Keith and Betty Dutcher Distinguished Professorship at Wichita State University (USA).
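
    Of the two strategies named, the Landweber iteration has a particularly compact form for a linearised problem Ax = b: x_{k+1} = x_k + ω Aᵀ(b − A x_k), convergent for 0 < ω < 2/‖A‖². Below is a toy sketch on a small well-posed linear system; the paper's linearised volatility operator is of course different, and for genuinely ill-posed data the iteration count would act as the regularization parameter (early stopping).

    ```python
    import numpy as np

    def landweber(A, b, n_iter=500, omega=None):
        """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k),
        a classical regularizing scheme for linear inverse problems."""
        if omega is None:
            omega = 1.0 / np.linalg.norm(A, 2) ** 2   # inside (0, 2/||A||^2)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = x + omega * A.T @ (b - A @ x)
        return x

    A = np.array([[2.0, 1.0], [1.0, 3.0], [0.0, 1.0]])
    x_true = np.array([1.0, -1.0])
    b = A @ x_true
    print(landweber(A, b))  # converges toward x_true = [1, -1]
    ```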

  6. Temperature-strain discrimination in distributed optical fiber sensing using phase-sensitive optical time-domain reflectometry.

    PubMed

    Lu, Xin; Soto, Marcelo A; Thévenaz, Luc

    2017-07-10

    A method based on coherent Rayleigh scattering that distinctly evaluates temperature and strain is proposed and experimentally demonstrated for distributed optical fiber sensing. Combining conventional phase-sensitive optical time-domain reflectometry (ϕOTDR) and ϕOTDR-based birefringence measurements, independent distributed temperature and strain profiles are obtained along a polarization-maintaining fiber. A theoretical analysis, supported by experimental data, indicates that the proposed system for temperature-strain discrimination is intrinsically better conditioned than an equivalent existing approach that combines classical Brillouin sensing with Brillouin dynamic gratings. This is due to the higher sensitivity of coherent Rayleigh scattering compared to Brillouin scattering, thus offering better performance and lower temperature-strain uncertainties in the discrimination. Compared to the Brillouin-based approach, the ϕOTDR-based system proposed here requires access to only one fiber end and a much simpler experimental layout. Experimental results validate the full discrimination of temperature and strain along a 100 m-long elliptical-core polarization-maintaining fiber with measurement uncertainties of ~40 mK and ~0.5 με, respectively. These values agree very well with the theoretically expected measurand resolutions.
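
    The discrimination step itself reduces to inverting a 2x2 sensitivity matrix that maps (ΔT, Δε) to the two measured frequency shifts; the conditioning of that matrix is what the "intrinsically better conditioned" claim refers to, since it controls how measurement noise amplifies into temperature-strain uncertainty. A sketch with hypothetical coefficients (not the paper's calibration):

    ```python
    import numpy as np

    def discriminate(measurements, K):
        """Recover (delta_T, delta_eps) from two frequency-shift
        measurements by inverting the 2x2 sensitivity matrix K.
        K and the numbers below are illustrative only."""
        return np.linalg.solve(K, measurements)

    # hypothetical sensitivity matrix: rows = the two measurement types,
    # columns = temperature and strain coefficients
    K = np.array([[1.3, 0.05],
                  [-0.7, 0.9]])

    # the condition number of K bounds the noise amplification in the
    # recovered (temperature, strain) pair; lower is better
    print(np.linalg.cond(K))

    dT, deps = discriminate(K @ np.array([2.0, 10.0]), K)
    print(dT, deps)  # recovers the simulated (2.0, 10.0) input
    ```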

  7. Lα and Mαβ X-ray production cross-sections of Bi by 6-30 keV electron impact

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Xu, M. X.; Yuan, Y.; Wu, Y.; Qian, Z. C.; Chang, C. H.; Mei, C. S.; Zhu, J. J.; Moharram, K.

    2017-12-01

    In this paper, the Lα and Mαβ X-ray production cross-sections for Bi impacted by 6-30 keV electrons have been measured. The experiments were performed in a Scanning Electron Microscope equipped with a silicon drift detector. A thin film on a thick C substrate and a thin film deposited on a self-supporting thin C film were both used as targets to allow a comparison. For the thick carbon substrate target, the Monte Carlo method was used to eliminate the contribution of backscattered particles. The measured data are compared with the DWBA theoretical model and with experimental results in the literature. The experimental data for the two targets agree within reasonable limits. The DWBA theoretical model gives a good fit to the experimental data for both the L- and M-shells. In addition, we analyze the reasons why discrepancies exist between our measurements and the experimental results in the literature.

  8. Experimental study of the energy dependence of the total cross section for the 6He + natSi and 9Li + natSi reactions

    NASA Astrophysics Data System (ADS)

    Sobolev, Yu. G.; Penionzhkevich, Yu. E.; Aznabaev, D.; Zemlyanaya, E. V.; Ivanov, M. P.; Kabdrakhimova, G. D.; Kabyshev, A. M.; Knyazev, A. G.; Kugler, A.; Lashmanov, N. A.; Lukyanov, K. V.; Maj, A.; Maslov, V. A.; Mendibayev, K.; Skobelev, N. K.; Slepnev, R. S.; Smirnov, V. V.; Testov, D.

    2017-11-01

    New experimental measurements of the total reaction cross sections for the 6He + natSi and 9Li + natSi processes in the energy range of 5 to 40 A MeV are presented. A modified transmission method based on high-efficiency detection of prompt n-γ radiation has been used in the experiment. A bump is observed for the first time in the energy dependence σR(E) at E ˜ 10-30 A MeV for the 9Li + natSi reaction, and the existence of the bump in σR(E) at E ˜ 10-20 A MeV, first observed in standard transmission experiments, is experimentally confirmed for the 6He + natSi reaction. Theoretical analysis of the measured 6He + natSi and 9Li + natSi reaction cross sections is performed within the microscopic double folding model. Disagreement is observed between the experimental and theoretical cross sections in the region of the bump at energies of 10 to 20 A MeV, which requires further study.

  9. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges for the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information-theoretic perspective that does not require assumptions about the data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with a group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  10. Integrated experimental and theoretical approach for the structural characterization of Hg2+ aqueous solutions

    NASA Astrophysics Data System (ADS)

    D'Angelo, Paola; Migliorati, Valentina; Mancini, Giordano; Barone, Vincenzo; Chillemi, Giovanni

    2008-02-01

    The structural and dynamic properties of the solvated Hg2+ ion in aqueous solution have been investigated by a combined experimental-theoretical approach employing x-ray absorption spectroscopy and molecular dynamics (MD) simulations. This method allows one to perform a quantitative analysis of the x-ray absorption near-edge structure (XANES) spectra of ionic solutions using a proper description of the thermal and structural fluctuations. XANES spectra have been computed starting from the MD trajectory, without carrying out any minimization in the structural parameter space. The XANES experimental data are accurately reproduced by a first-shell heptacoordinated cluster only if the second hydration shell is included in the calculations. These results confirm at the same time the existence of a sevenfold first hydration shell for the Hg2+ ion in aqueous solution and the reliability of the potentials used in the MD simulations. The combination of MD and XANES is found to be very helpful to get important new insights into the quantitative estimation of structural properties of disordered systems.

  11. TDDFT calculations and photoacoustic spectroscopy experiments used to identify phenolic acid functional biomolecules in Brazilian tropical fruits in natura

    NASA Astrophysics Data System (ADS)

    Lourenço Neto, M.; Agra, K. L.; Suassuna Filho, J.; Jorge, F. E.

    2018-03-01

    Time-dependent density functional theory (TDDFT) calculations of electronic transitions have been widely used to determine molecular structures. The excitation wavelengths and oscillator strengths obtained with the hybrid exchange-correlation functional B3LYP in conjunction with the ADZP basis set are employed to simulate the UV-Vis spectra of eight phenolic acids. Experimental and theoretical UV-Vis spectra reported previously in the literature are compared with our results. The fast, sensitive and non-destructive technique of photoacoustic spectroscopy (PAS) is used to determine the UV-Vis spectra of four Brazilian tropical fresh fruits in natura. The PAS and TDDFT results are then used, for the first time, to investigate and identify the presence of phenolic acids in the fruits studied in this work. This theoretical method combined with this experimental technique proves to be a powerful and cheap tool to detect the existence of phenolic acids in fruits, vegetables, cereals, and grains. Comparison with high performance liquid chromatography results, when available, is also carried out.

  12. The Four Faces of Competition: The Development of the Multidimensional Competitive Orientation Inventory

    PubMed Central

    Orosz, Gábor; Tóth-Király, István; Büki, Noémi; Ivaskevics, Krisztián; Bőthe, Beáta; Fülöp, Márta

    2018-01-01

    To date, no short scale with an established factor structure exists that can assess individual differences in competition. The aim of the present study was to uncover and operationalize the facets of competitive orientations with theoretical underpinning and strong psychometric properties. A total of 2676 respondents were recruited for four studies. The items were constructed based on qualitative research in different cultural contexts. A combined method of exploratory structural equation modeling (ESEM) and confirmatory factor analysis (CFA) was employed. ESEM resulted in a four-factor structure of competitive orientations, and this structure was supported by a series of CFAs on different comprehensive samples. The Multidimensional Competitive Orientation Inventory (MCOI) included 12 items and four factors: hypercompetitive orientation, self-developmental competitive orientation, anxiety-driven competition avoidance, and lack of interest toward competition. Strong gender invariance was established. The four facets of competition have differentiated relationship patterns with adaptive and maladaptive personality and motivational constructs. The MCOI can assess the adaptive and maladaptive facets of competitive orientations with a short, reliable, valid and theoretically underpinned multidimensional measure. PMID:29872415

  13. Combination of real options and game-theoretic approach in investment analysis

    NASA Astrophysics Data System (ADS)

    Arasteh, Abdollah

    2016-09-01

    Investments in technology create a large amount of capital investment by major companies. Assessing such investment projects is identified as critical to the efficient allocation of resources. Viewing investment projects as real options, this paper develops a method for assessing technology investment decisions in the joint presence of uncertainty and competition. It combines game-theoretic models of strategic market interactions with a real options approach. Several key characteristics underlie the model. First, our study shows how investment strategies depend on competitive interactions. Under the force of competition, firms hurry to exercise their options early. The resulting "hurry equilibrium" destroys the option value of waiting and leads to aggressive investment behavior. Second, we obtain optimal investment policies and critical investment thresholds. This suggests that integration will be unavoidable in some information product markets. The model generates some new intuitions about the forces that shape market behavior as observed in the information technology industry. It can be used to specify optimal investment policies for technology innovations and adoptions, multistage R&D, and investment projects in information technology.

  14. 3-Amino-1,2,4-triazolium ion in [24(3at)]Cl and [24(3at)]2SnCl6·H2O. Comparative X-ray, vibrational and theoretical studies

    NASA Astrophysics Data System (ADS)

    Daszkiewicz, Marek; Marchewka, Mariusz K.

    2012-09-01

    Crystal structures of 3-amino-1,2,4-triazolium chloride and bis(3-amino-1,2,4-triazolium) hexachloridostannate monohydrate were determined by means of X-ray single crystal diffraction. The route of protonation of the organic molecule and the tautomer equilibrium constants for the cationic forms were calculated using the B3LYP/6-31G* method. The most stable protonated species is the 2,4-H2-3-amino-1,2,4-triazolium ion, 24(3at)+. Very good agreement between theoretical and experimental frequencies was achieved due to the very weak interactions existing in the studied compounds. Significantly weaker intermolecular interactions are found in [24(3at)]2SnCl6·H2O than in [24(3at)]Cl. The differences in the strength of interactions are manifested in red and blue shifts for the stretching and bending motions, respectively. PED calculations show that for the 24(3at)+ ion the stretching motion of the two Nring–H bonds is independent, whereas the bending is coupled.

  15. High-Contrast Gratings based Spoof Surface Plasmons

    NASA Astrophysics Data System (ADS)

    Li, Zhuo; Liu, Liangliang; Xu, Bingzheng; Ning, Pingping; Chen, Chen; Xu, Jia; Chen, Xinlei; Gu, Changqing; Qing, Quan

    2016-02-01

    In this work, we explore the existence of spoof surface plasmons (SSPs) supported by deep-subwavelength high-contrast gratings (HCGs) on a perfect electric conductor plane. The dispersion relation of the HCG-based SSPs is derived analytically by combining multimode network theory with a rigorous mode-matching method; it has nearly the same form as, and can be reduced to, that of the SSPs arising from deep-subwavelength metallic gratings (MGs). Numerical simulations validate the analytical dispersion relation, and an effective-medium approximation is also presented that yields the same analytical dispersion formula. This work sets up a unified theoretical framework for SSPs and opens up new vistas in surface plasmon optics.
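
The metallic-grating limit that the HCG dispersion reduces to is the well-known result for a 1D array of grooves (width a, period d, depth h) in a perfect conductor: β = k₀·sqrt(1 + (a/d)²·tan²(k₀h)). A sketch of that limiting formula, with illustrative microwave-band dimensions (not taken from this paper):

```python
import math

def ssp_beta(freq_hz, a, d, h):
    """Propagation constant of spoof surface plasmons on a 1D groove
    array in a PEC (groove width a, period d, depth h), using the
    standard metallic-grating dispersion relation
        beta = k0 * sqrt(1 + (a/d)^2 * tan(k0*h)^2).
    Valid in the effective-medium regime d << wavelength."""
    c = 299792458.0
    k0 = 2 * math.pi * freq_hz / c
    return k0 * math.sqrt(1 + (a / d) ** 2 * math.tan(k0 * h) ** 2)

f = 10e9  # 10 GHz, illustrative
beta = ssp_beta(f, a=1e-3, d=2e-3, h=3e-3)
k0 = 2 * math.pi * f / 299792458.0
# beta > k0: the mode is slower than light and bound to the surface.
```

As the frequency approaches the groove resonance k₀h → π/2, tan(k₀h) diverges and β grows without bound, giving the asymptotic "plasmon-like" flattening of the dispersion curve.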

  16. Slender-Body Theory Based On Approximate Solution of the Transonic Flow Equation

    NASA Technical Reports Server (NTRS)

    Spreiter, John R.; Alksne, Alberta Y.

    1959-01-01

    Approximate solutions of the nonlinear equations of the small-disturbance theory of transonic flow are found for the pressure distribution on pointed slender bodies of revolution, for flows with free-stream Mach number 1 and for flows that are either purely subsonic or purely supersonic. These results are obtained by applying a method based on local linearization that was introduced recently in the analysis of similar problems in two-dimensional flows. The theory is developed for bodies of arbitrary shape, and specific results are given for cone-cylinders and for parabolic-arc bodies at zero angle of attack. All results are compared either with existing theoretical results or with experimental data.

  17. Generation of Microbubbles with Applications to Industry and Medicine

    NASA Astrophysics Data System (ADS)

    Rodríguez-Rodríguez, Javier; Sevilla, Alejandro; Martínez-Bazán, Carlos; Gordillo, José Manuel

    2015-01-01

    We provide a comprehensive and systematic description of the diverse microbubble generation methods recently developed to satisfy emerging technological, pharmaceutical, and medical demands. We first introduce a theoretical framework unifying the physics of bubble formation in the wide variety of existing types of generators. These devices are then classified according to the way the bubbling process is controlled: outer liquid flows (e.g., coflows, cross flows, and flow-focusing flows), acoustic forcing, and electric fields. We also address modern techniques developed to produce bubbles coated with surfactants and liquid shells. The stringent requirements to precisely control the bubbling frequency, the bubble size, and the properties of the coating make microfluidics the natural choice to implement such techniques.

  18. Subcopula-based measure of asymmetric association for contingency tables.

    PubMed

    Wei, Zheng; Kim, Daeyoung

    2017-10-30

    For the analysis of a two-way contingency table, a new asymmetric association measure is developed. The proposed method uses the subcopula-based regression between the discrete variables to measure the asymmetric predictive powers of the variables of interest. Unlike the existing measures of asymmetric association, the subcopula-based measure is insensitive to the number of categories in a variable, and thus, the magnitude of the proposed measure can be interpreted as the degree of asymmetric association in the contingency table. The theoretical properties of the proposed subcopula-based asymmetric association measure are investigated. We illustrate the performance and advantages of the proposed measure using simulation studies and real data examples. Copyright © 2017 John Wiley & Sons, Ltd.
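
For context on what "asymmetric association" means for a contingency table, a classical example of such a measure (one of the existing, category-count-sensitive measures the subcopula approach improves on — not the authors' measure) is the Goodman–Kruskal tau, the proportional reduction in prediction error for one variable given the other:

```python
def goodman_kruskal_tau(table):
    """Goodman-Kruskal tau for predicting the column variable from the
    row variable of a two-way contingency table (list of rows of counts).
    Asymmetric: tau(rows->cols) != tau(cols->rows) in general."""
    n = sum(sum(row) for row in table)
    col_tot = [sum(row[j] for row in table) for j in range(len(table[0]))]
    # Prediction error using column marginals only.
    err_marginal = 1.0 - sum((c / n) ** 2 for c in col_tot)
    # Expected prediction error given the row category.
    err_cond = 0.0
    for row in table:
        r = sum(row)
        if r:
            err_cond += (r / n) * (1.0 - sum((x / r) ** 2 for x in row))
    return (err_marginal - err_cond) / err_marginal

t = [[30, 10],
     [10, 30]]
tau = goodman_kruskal_tau(t)  # knowing the row cuts the error by 25%
```

Swapping the roles of rows and columns generally gives a different value, which is exactly the directional predictive-power idea that the subcopula-based measure formalizes while removing the dependence on the number of categories.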

  19. On the Correct Analysis of the Foundations of Theoretical Physics

    NASA Astrophysics Data System (ADS)

    Kalanov, Temur Z.

    2007-04-01

    The problem of truth in science -- the most urgent problem of our time -- is discussed. A correct theoretical analysis of the foundations of theoretical physics is proposed. The principle of the unity of formal logic and rational dialectics is the methodological basis of the analysis. The main result is as follows: the generally accepted foundations of theoretical physics (i.e., Newtonian mechanics, Maxwell's electrodynamics, thermodynamics, statistical physics and physical kinetics, the theory of relativity, and quantum mechanics) contain a set of logical errors. These errors are explained by the existence of a global cause: they are a collateral and inevitable result of the inductive way of cognizing Nature, i.e., of moving from the formation of separate concepts to the formation of a system of concepts. Consequently, theoretical physics has entered a deep crisis. This means that physics as a science of phenomena is giving way to a science of essence (information). Acknowledgment: The books ``Surprises in Theoretical Physics'' (1979) and ``More Surprises in Theoretical Physics'' (1991) by Sir Rudolf Peierls stimulated my 25-year work.

  20. The Role of Trait Emotional Intelligence in Academic Performance: Theoretical Overview and Empirical Update.

    PubMed

    Perera, Harsha N

    2016-01-01

    Considerable debate still exists among scholars over the role of trait emotional intelligence (TEI) in academic performance. The dominant theoretical position is that TEI should be orthogonal or only weakly related to achievement; yet, there are strong theoretical reasons to believe that TEI plays a key role in performance. The purpose of the current article is to provide (a) an overview of the possible theoretical mechanisms linking TEI with achievement and (b) an update on empirical research examining this relationship. To elucidate these theoretical mechanisms, the overview draws on multiple theories of emotion and regulation, including TEI theory, social-functional accounts of emotion, and the expectancy-value and psychobiological models of emotion and regulation. Although these theoretical accounts variously emphasize different variables as focal constructs, when taken together, they provide a comprehensive picture of the possible mechanisms linking TEI with achievement. In this regard, the article redresses the problem of vaguely specified theoretical links currently hampering progress in the field. The article closes with a consideration of directions for future research.
