Sample records for theoretical method based

  1. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  2. Synthesis of 2-(bis(cyanomethyl)amino)-2-oxoethyl methacrylate monomer molecule and its characterization by experimental and theoretical methods

    NASA Astrophysics Data System (ADS)

    Sas, E. B.; Cankaya, N.; Kurt, M.

    2018-06-01

    In this work, the monomer 2-(bis(cyanomethyl)amino)-2-oxoethyl methacrylate was newly synthesized and characterized both experimentally and theoretically. Experimentally, it was characterized by FT-IR, FT-Raman, and 1H and 13C NMR spectroscopy. The theoretical calculations were performed with Density Functional Theory (DFT) using the B3LYP method. The scaled theoretical wavenumbers were assigned on the basis of the total energy distribution (TED). Electronic properties of the monomer were computed using the time-dependent TD-DFT/B3LYP/6-311++G(d,p) method. The experimental results were compared with the theoretical values, and both the experimental and theoretical methods support the reported characterization of the monomer.

  3. Ordering of the O-O stretching vibrational frequencies in ozone

    NASA Technical Reports Server (NTRS)

    Scuseria, Gustavo E.; Lee, Timothy J.; Scheiner, Andrew C.; Schaefer, Henry F., III

    1989-01-01

    The ordering of nu1 and nu3 for O3 is incorrectly predicted by most theoretical methods, including some very high-level methods. The first systematic electron correlation method based on a single reference configuration to solve this problem is the coupled-cluster single and double excitation (CCSD) method. However, a relatively large basis set, triple zeta plus double polarization, is required. Comparison with other theoretical methods is made.

  4. Establishment and validation for the theoretical model of the vehicle airbag

    NASA Astrophysics Data System (ADS)

    Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao

    2015-05-01

    The current design and optimization of the occupant restraint system (ORS) are based on numerous physical tests and mathematical simulations. Although these two methods are effective and accurate, they are too time-consuming and complex for the concept design phase of the ORS, so a fast and direct design and optimization method is needed at that stage. Since the airbag system is a crucial part of the ORS, this paper establishes a theoretical model of the vehicle airbag to clarify the interaction between occupants and airbags, and builds on it a fast design and optimization method for airbags in the concept design phase. First, a theoretical expression of the simplified mechanical relationship between the airbag's design parameters and the occupant response is developed from classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to describe that relationship. Using MATLAB, an iterative algorithm with discrete variables is applied to solve the proposed theoretical model for random inputs within a given scope. Validations with MADYMO prove the validity and accuracy of the model for two principal design parameters, the inflated gas mass and the vent diameter, within their regular ranges. This research contributes to a deeper comprehension of the occupant-airbag interaction, provides a fast design and optimization method for the airbag's principal parameters in the concept design phase, and supplies ranges of the airbag's initial design parameters for the subsequent CAE simulations and physical tests.
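    The momentum-theorem and ideal-gas reasoning described in this abstract can be illustrated with a toy time-stepping calculation. This is a hedged sketch, not the paper's model: all parameter values (bag volume, contact area, vent area, occupant mass) are invented for illustration, and the bag volume and gas temperature are held constant for simplicity.

```python
import math

# Hypothetical toy model of occupant-airbag interaction (not the authors'
# exact equations): the ideal gas law gives bag pressure, the momentum theorem
# gives the occupant deceleration, and a vent bleeds gas mass each time step.
R = 287.0        # specific gas constant of air, J/(kg*K)
T = 300.0        # assumed constant gas temperature, K
V = 0.06         # assumed constant bag volume, m^3
A_contact = 0.08 # occupant-bag contact area, m^2 (assumed)
A_vent = 3e-4    # vent area, m^2 (a principal design parameter)
m_gas0 = 0.09    # inflated gas mass, kg (the other principal design parameter)
m_occ = 35.0     # effective occupant mass, kg (assumed)
v0 = 10.0        # impact velocity, m/s
p_atm = 101325.0

def simulate(m_gas, a_vent, dt=1e-4, t_end=0.08):
    v, m = v0, m_gas
    peak_acc = 0.0
    for _ in range(int(t_end / dt)):
        p = m * R * T / V                      # ideal gas law
        f = max(p - p_atm, 0.0) * A_contact    # net force on occupant
        acc = f / m_occ
        peak_acc = max(peak_acc, acc)
        v -= acc * dt                          # momentum theorem, discretized
        rho = m / V
        # simple orifice outflow through the vent
        m -= rho * a_vent * math.sqrt(2 * max(p - p_atm, 0.0) / rho) * dt
        m = max(m, 1e-6)
        if v <= 0:
            break
    return v, peak_acc

v_end, peak = simulate(m_gas0, A_vent)
print(f"residual velocity {v_end:.2f} m/s, peak deceleration {peak:.0f} m/s^2")
```

    Sweeping `m_gas0` and `A_vent` in such a model is the kind of fast concept-phase exploration the abstract describes, before committing to CAE simulation.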

  5. Theoretical Principles to Guide the Teaching of Adjectives to Children Who Struggle With Word Learning: Synthesis of Experimental and Naturalistic Research With Principles of Learning Theory.

    PubMed

    Ricks, Samantha L; Alt, Mary

    2016-07-01

    The purpose of this tutorial is to provide clinicians with a theoretically motivated and evidence-based approach to teaching adjectives to children who struggle with word learning. Given that there are almost no treatment studies to guide this topic, we have synthesized findings from experimental and theoretical literature to come up with a principles-based approach to treatment. We provide a sample lesson plan, incorporating our 3 theoretical principles, and describe the materials chosen and methods used during treatment and assessment. This approach is theoretically motivated, but it needs to be empirically tested.

  6. Eclecticism as the foundation of meta-theoretical, mixed methods and interdisciplinary research in social sciences.

    PubMed

    Kroos, Karmo

    2012-03-01

    This article examines the value of "eclecticism" as the foundation of meta-theoretical, mixed methods and interdisciplinary research in social sciences. On the basis of the analysis of the historical background of the concept, it is first suggested that eclecticism-based theoretical scholarship in social sciences could benefit from the more systematic research method that has been developed for synthesizing theoretical works under the name metatheorizing. Second, it is suggested that the mixed methods community could base its research approach on philosophical eclecticism instead of pragmatism because the basic idea of eclecticism is much more in sync with the nature of the combined research tradition. Finally, the Kuhnian frame is used to support the argument for interdisciplinary research and, hence, eclecticism in social sciences (rather than making an argument against multiple paradigms). More particularly, it is suggested that integrating the different (inter)disciplinary traditions and schools into one is not necessarily desirable at all in social sciences because of the complexity and openness of the research field. If it is nevertheless attempted, experience in economics suggests that paradigmatic unification comes at a high price.

  7. Changes in Teaching Efficacy during a Professional Development School-Based Science Methods Course

    ERIC Educational Resources Information Center

    Swars, Susan L.; Dooley, Caitlin McMunn

    2010-01-01

    This mixed methods study offers a theoretically grounded description of a field-based science methods course within a Professional Development School (PDS) model (i.e., PDS-based course). The preservice teachers' (n = 21) experiences within the PDS-based course prompted significant changes in their personal teaching efficacy, with the…

  8. A new frequency matching technique for FRF-based model updating

    NASA Astrophysics Data System (ADS)

    Yang, Xiuming; Guo, Xinglin; Ouyang, Huajiang; Li, Dongsheng

    2017-05-01

    Frequency Response Function (FRF) residues have been widely used to update finite element models. They are a form of original measurement information, with the advantages of rich data and freedom from extraction errors. However, like other sensitivity-based methods, an FRF-based identification method must face the ill-conditioning problem, which is even more serious here since the sensitivity of the FRF in the vicinity of a resonance is much greater than elsewhere. Furthermore, for a given measured frequency, directly using the theoretical FRF at that frequency may lead to a huge difference between the theoretical FRF and the corresponding experimental FRF, which amplifies the effects of measurement errors and damping. Hence, in the solution process, selecting the appropriate frequency at which to evaluate the theoretical FRF in every iteration of the sensitivity-based approach is an effective way to improve the robustness of an FRF-based algorithm. A primary tool for such frequency selection, based on the correlation of FRFs, is the Frequency Domain Assurance Criterion. This paper presents a new frequency selection method that directly finds the frequency minimizing the difference in order of magnitude between the theoretical and experimental FRFs. A simulated truss structure is used to compare the performance of different frequency selection methods. For realism, it is assumed that not all the degrees of freedom (DoFs) are available for measurement. The minimum number of DoFs required by each approach to correctly update the analytical model is used as the criterion for comparison.
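    The frequency-selection idea, finding the frequency that minimizes the order-of-magnitude difference between theoretical and experimental FRFs, can be sketched as follows. This is an illustrative reading of the abstract, not the authors' code; the single-DoF FRF and all numbers are assumptions.

```python
import numpy as np

# Hedged sketch of the frequency-selection idea: for a measured FRF value,
# search candidate model frequencies and pick the one whose theoretical FRF
# magnitude minimizes |log10|H_theo| - log10|H_exp||.
def select_frequency(freqs, H_theo, H_exp_value):
    gap = np.abs(np.log10(np.abs(H_theo)) - np.log10(np.abs(H_exp_value)))
    return freqs[np.argmin(gap)]

# Toy single-DoF FRF: H(w) = 1 / (k - w^2 m + i w c)
m, c, k = 1.0, 0.4, 100.0
freqs = np.linspace(1.0, 20.0, 2000)
H_theo = 1.0 / (k - freqs**2 * m + 1j * freqs * c)

# Pretend the "experimental" FRF comes from a slightly stiffer structure
# (k = 110), measured at 9.5 rad/s.
H_exp = 1.0 / (110.0 - 9.5**2 * 1.0 + 1j * 9.5 * c)
w_matched = select_frequency(freqs, H_theo, H_exp)
print(f"matched model frequency: {w_matched:.2f} rad/s")
```

    In an actual sensitivity-based update this selection would be repeated at every iteration, replacing the naive choice of evaluating the theoretical FRF at the measured frequency itself.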

  9. Theory and applications of structured light single pixel imaging

    NASA Astrophysics Data System (ADS)

    Stokoe, Robert J.; Stockton, Patrick A.; Pezeshki, Ali; Bartels, Randy A.

    2018-02-01

    Many single-pixel imaging techniques have been developed in recent years. Though their methods of image acquisition vary considerably, they share unifying features that make a general analysis possible. Furthermore, the methods developed thus far are based on intuitive processes that enable simple and physically motivated reconstruction algorithms; however, this approach may not leverage the full potential of single-pixel imaging. We present a general theoretical framework for single-pixel imaging based on frame theory, which enables general, mathematically rigorous analysis. We apply our framework to existing single-pixel imaging techniques and provide a foundation for developing more advanced methods of image acquisition and reconstruction. The proposed frame-theoretic framework results in improved noise robustness and decreased acquisition time, and it can take advantage of special properties of the specimen under study. By building on this framework, new methods of imaging with a single-element detector can be developed to realize the full potential of single-pixel imaging.
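    The frame-theoretic view can be made concrete with a minimal sketch: stacking the illumination patterns as rows of a matrix A makes each single-pixel detector reading an inner product, and (in the noiseless, overcomplete case) the canonical dual frame, computable via the pseudo-inverse, reconstructs the scene. The pattern ensemble and sizes below are assumptions, not the paper's.

```python
import numpy as np

# Minimal sketch of frame-based single-pixel imaging (illustrative, not the
# paper's algorithms): each measurement is an inner product of the scene with
# a structured illumination pattern; with the patterns stacked as rows of A,
# the frame-theoretic reconstruction applies the canonical dual frame, i.e.
# the Moore-Penrose pseudo-inverse, to the measurement vector.
rng = np.random.default_rng(0)
n = 64                      # number of scene pixels (8x8 image, flattened)
m = 96                      # number of patterns; m >= n gives an overcomplete frame
scene = rng.random(n)
A = rng.choice([0.0, 1.0], size=(m, n))   # binary structured-light patterns
y = A @ scene                             # single-pixel detector readings
scene_hat = np.linalg.pinv(A) @ y         # dual-frame reconstruction
print("max reconstruction error:", np.max(np.abs(scene_hat - scene)))
```

    The noise-robustness claims in the abstract correspond, in this picture, to how well-conditioned the frame operator A.T @ A is for a given pattern ensemble.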

  10. A Unifying Framework for Causal Analysis in Set-Theoretic Multimethod Research

    ERIC Educational Resources Information Center

    Rohlfing, Ingo; Schneider, Carsten Q.

    2018-01-01

    The combination of Qualitative Comparative Analysis (QCA) with process tracing, which we call set-theoretic multimethod research (MMR), is steadily becoming more popular in empirical research. Despite the fact that both methods have an elective affinity based on set theory, it is not obvious how a within-case method operating in a single case and a…

  11. A new theoretical approach to analyze complex processes in cytoskeleton proteins.

    PubMed

    Li, Xin; Kolomeisky, Anatoly B

    2014-03-20

    Cytoskeleton proteins are filament structures that support a large number of important biological processes. These dynamic biopolymers exist in nonequilibrium conditions stimulated by hydrolysis chemical reactions in their monomers. Current theoretical methods provide a comprehensive picture of biochemical and biophysical processes in cytoskeleton proteins. However, the description is only qualitative under biologically relevant conditions because utilized theoretical mean-field models neglect correlations. We develop a new theoretical method to describe dynamic processes in cytoskeleton proteins that takes into account spatial correlations in the chemical composition of these biopolymers. Our approach is based on analysis of probabilities of different clusters of subunits. It allows us to obtain exact analytical expressions for a variety of dynamic properties of cytoskeleton filaments. By comparing theoretical predictions with Monte Carlo computer simulations, it is shown that our method provides a fully quantitative description of complex dynamic phenomena in cytoskeleton proteins under all conditions.

  12. The successful merger of theoretical thermochemistry with fragment-based methods in quantum chemistry.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-12-16

    CONSPECTUS: Quantum chemistry and electronic structure theory have proven to be essential tools to the experimental chemist, in terms of both a priori predictions that pave the way for designing new experiments and rationalizing experimental observations a posteriori. Translating the well-established success of electronic structure theory in obtaining the structures and energies of small chemical systems to increasingly larger molecules is an exciting and ongoing central theme of research in quantum chemistry. However, the prohibitive computational scaling of highly accurate ab initio electronic structure methods poses a fundamental challenge to this research endeavor. This scenario necessitates an indirect fragment-based approach wherein a large molecule is divided into small fragments and is subsequently reassembled to compute its energy accurately. In our quest to further reduce the computational expense associated with the fragment-based methods and overall enhance the applicability of electronic structure methods to large molecules, we realized that the broad ideas involved in a different area, theoretical thermochemistry, are transferable to the area of fragment-based methods. This Account focuses on the effective merger of these two disparate frontiers in quantum chemistry and how new concepts inspired by theoretical thermochemistry significantly reduce the total number of electronic structure calculations needed to be performed as part of a fragment-based method without any appreciable loss of accuracy. Throughout, the generalized connectivity based hierarchy (CBH), which we developed to solve a long-standing problem in theoretical thermochemistry, serves as the linchpin in this merger. 
The accuracy of our method is based on two strong foundations: (a) the apt utilization of systematic and sophisticated error-canceling schemes via CBH that result in an optimal cutting scheme at any given level of fragmentation and (b) the use of a less expensive second layer of electronic structure method to recover all the missing long-range interactions in the parent large molecule. Overall, the work featured here dramatically decreases the computational expense and empowers the execution of very accurate ab initio calculations (gold-standard CCSD(T)) on large molecules and thereby facilitates sophisticated electronic structure applications to a wide range of important chemical problems.

  13. Development Mechanism of an Integrated Model for Training of a Specialist and Conceptual-Theoretical Activity of a Teacher

    ERIC Educational Resources Information Center

    Marasulov, Akhmat; Saipov, Amangeldi; ?rymbayeva, Kulimkhan; Zhiyentayeva, Begaim; Demeuov, Akhan; Konakbaeva, Ulzhamal; Bekbolatova, Akbota

    2016-01-01

    The aim of the study is to examine the methodological-theoretical construction bases for development mechanism of an integrated model for a specialist's training and teacher's conceptual-theoretical activity. Using the methods of generalization of teaching experience, pedagogical modeling and forecasting, the authors determine the urgent problems…

  14. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
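    The Poisson log-likelihood at the heart of the stated equivalence can be sketched directly. This is a generic LNP likelihood, hedged: the exponential nonlinearity, the filter, and the simulation settings are illustrative choices, not those of the paper.

```python
import numpy as np

# Illustrative sketch of the LNP log-likelihood underlying the MID equivalence
# (estimator details are in the paper): for spike counts r_t and rates
# lambda_t = f(w . x_t), the Poisson log-likelihood (dropping terms that do
# not depend on the model) is sum_t [ r_t * log(lambda_t * dt) - lambda_t * dt ].
def lnp_poisson_loglik(w, X, r, dt, f=np.exp):
    lam = f(X @ w)                       # linear filter, then nonlinearity
    return np.sum(r * np.log(lam * dt) - lam * dt)

rng = np.random.default_rng(1)
T, d = 5000, 4
X = rng.standard_normal((T, d))          # stimulus segments
w_true = np.array([1.0, -0.5, 0.0, 0.3])
dt = 0.01
lam_true = np.exp(X @ w_true)
r = rng.poisson(lam_true * dt)           # simulated spike counts

ll_true = lnp_poisson_loglik(w_true, X, r, dt)
ll_bad = lnp_poisson_loglik(w_true + 0.5, X, r, dt)
print(f"log-likelihood at true filter: {ll_true:.1f}, at perturbed filter: {ll_bad:.1f}")
```

    Maximizing this quantity over w is, per the paper's result, what MID effectively does when the empirical single-spike information is used as the objective.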

  15. A convenient method for preparation of 2-amino-4,6-diphenylnicotinonitrile using HBF4 as an efficient catalyst via an anomeric based oxidation: A joint experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Zolfigol, Mohammad Ali; Kiafar, Mahya; Yarie, Meysam; Taherpour, Avat(Arman); Fellowes, Thomas; Nicole Hancok, Amber; Yari, Ako

    2017-06-01

    Experimental and computational studies in the synthesis of 2-amino-4,6-diphenylnicotinonitrile using HBF4 as an oxidizing promoter catalyst under mild and solvent free conditions were carried out. The suggested anomeric based oxidation (ABO) mechanism is supported by experimental and theoretical evidence. The theoretical study shows that the intermediate isomers with 5R- and 5S- chiral positions have suitable structures for the aromatization through an anomeric based oxidation in the final step of the mechanistic pathway.

  16. Theoretical and experimental research on laser-beam homogenization based on metal gauze

    NASA Astrophysics Data System (ADS)

    Liu, Libao; Zhang, Shanshan; Wang, Ling; Zhang, Yanchao; Tian, Zhaoshuo

    2018-03-01

    Homogenization of CO2 laser heating by means of a metal gauze is studied theoretically and experimentally. The light-field distribution of an expanded beam passing through the metal gauze was numerically calculated with diffractive optical theory; comparison with the case without the gauze shows that the method is effective. Experimentally, using a 30 W DC-discharge laser as the source and a concave lens to expand the beam, beam intensity distributions recorded on thermal paper were compared with and without the metal gauze, and complementary measurements were made with a thermal imager. The experimental results agree with the theoretical calculation, and together they show that the homogeneity of CO2 laser heating can be enhanced by a metal gauze.

  17. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, covering both spectral and radiometric calibration. The spectral calibration proceeds as follows: first, the relationship between the stepping motor's step number and the transmitted wavelength is derived by theoretical calculation, including a non-linearity correction for the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for the radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.
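    The step-to-wavelength mapping with a non-linearity correction can be sketched as a polynomial fit to reference lines, validated at the 3.39 μm and 10.69 μm laser wavelengths mentioned in the abstract. All calibration points below are invented for illustration; the paper's actual coefficients and correction procedure are not reproduced.

```python
import numpy as np

# Hedged sketch of the spectral-calibration step: map stepping-motor step
# number to transmitted wavelength with a low-order polynomial that absorbs
# the LVF non-linearity, fit it to reference lines, then check the steps that
# should transmit known laser wavelengths.
steps_ref = np.array([100, 400, 800, 1300, 1900])        # motor steps (assumed)
waves_ref = np.array([2.5, 4.05, 6.1, 8.55, 11.5])       # reference wavelengths, um (assumed)
coef = np.polyfit(steps_ref, waves_ref, 2)               # linear term + quadratic correction

def step_to_wavelength(step):
    return np.polyval(coef, step)

for lam in (3.39, 10.69):
    # invert the fit numerically: find the step that should transmit lam
    grid = np.arange(0, 2001)
    s = grid[np.argmin(np.abs(step_to_wavelength(grid) - lam))]
    err = abs(step_to_wavelength(s) - lam) / lam
    print(f"{lam} um -> step {s}, relative error {err:.2e}")
```

    A line-to-line correction, as in the abstract, would then adjust this theoretical curve using measured positions of known spectral lines.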

  18. Storytelling as an Instructional Method: Definitions and Research Questions

    ERIC Educational Resources Information Center

    Andrews, Dee H.; Hull, Thomas D.; Donahue, Jennifer A.

    2009-01-01

    This paper discusses the theoretical and empirical foundations of the use of storytelling in instruction. The definition of "story" is given and four instructional methods are identified related to storytelling: case-based, narrative-based, scenario-based, and problem-based instruction. The article provides descriptions of the four…

  19. Moiré deflectometry-based position detection for optical tweezers.

    PubMed

    Khorshad, Ali Akbar; Reihani, S Nader S; Tavassoly, Mohammad Taghi

    2017-09-01

    Optical tweezers have proven to be indispensable tools for pico-Newton range force spectroscopy. A quadrant photodiode (QPD) positioned at the back focal plane of an optical tweezers' condenser is commonly used for locating the trapped object. In this Letter, for the first time, to the best of our knowledge, we introduce a moiré pattern-based detection method for optical tweezers. We show, both theoretically and experimentally, that this detection method could provide considerably better position sensitivity compared to the commonly used detection systems. For instance, position sensitivity for a trapped 2.17 μm polystyrene bead is shown to be 71% better than the commonly used QPD-based detection method. Our theoretical and experimental results are in good agreement.

  20. Nursing management of sensory overload in psychiatry – Theoretical densification and modification of the framework model

    PubMed

    Scheydt, Stefan; Needham, Ian; Behrens, Johann

    2017-01-01

    Background: Within the scope of a research project on sensory overload and stimulus regulation, a theoretical framework model of the nursing care of patients with sensory overload in psychiatry was developed. In a second step, this model was to be theoretically densified and, if necessary, modified. Aim: Empirical verification as well as modification, enhancement, and theoretical densification of the framework model of the nursing care of patients with sensory overload in psychiatry. Method: Analysis of 8 expert interviews using the summarizing and structuring content-analysis methods of Meuser and Nagel (2009) and Mayring (2010). Results: The previously developed framework model (Scheydt et al., 2016b) was empirically verified, theoretically densified, and extended by one category (perception modulation). Four categories of nursing care for patients with sensory overload in inpatient psychiatry can thus be described: removal from stimuli, modulation of environmental factors, perception modulation, and support for self-help and coping. Conclusions: Based on this methodological approach, a relatively well-saturated, credible conceptualization of a theoretical model describing the nursing care of patients with sensory overload in inpatient psychiatry could be worked out. In further steps, these measures have to be further developed, implemented, and evaluated with regard to their efficacy.

  1. Research on the thickness control method of workbench oil film based on theoretical model

    NASA Astrophysics Data System (ADS)

    Pei, Tang; Lin, Lin; Liu, Ge; Yu, Liping; Xu, Zhen; Zhao, Di

    2018-06-01

    To improve the adjustability of the workbench oil film thickness, we designed a software system to control the oil film thickness based on the Siemens 840D sl CNC system and set up an experimental platform. A regulation scheme for the oil film thickness based on a theoretical model is proposed, and its accuracy and feasibility are proved by the experimental results. The experiments verify that the method meets the demands of workbench oil-film thickness control and is simple and efficient with high control precision. This supplies reliable theoretical support for the development of an active control system for the workbench oil film.

  2. Subgrade evaluation based on theoretical concepts.

    DOT National Transportation Integrated Search

    1971-01-01

    Evaluations of pavement soil subgrades for the purpose of design are mostly based on empirical methods such as the CBR, California soil resistance method, etc. The need for the application of theory and the evaluation of subgrade strength in terms of...

  3. Analysis and development of adjoint-based h-adaptive direct discontinuous Galerkin method for the compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Cheng, Jian; Yue, Huiqiang; Yu, Shengjiao; Liu, Tiegang

    2018-06-01

    In this paper, an adjoint-based high-order h-adaptive direct discontinuous Galerkin method is developed and analyzed for the two-dimensional steady-state compressible Navier-Stokes equations. Particular emphasis is devoted to the analysis of adjoint consistency for three different direct discontinuous Galerkin discretizations: the original direct discontinuous Galerkin method (DDG), the direct discontinuous Galerkin method with interface correction (DDG(IC)), and the symmetric direct discontinuous Galerkin method (SDDG). Theoretical analysis shows that the extra interface-correction term adopted in the DDG(IC) and SDDG methods plays a key role in preserving adjoint consistency. To be specific, for the model problem considered in this work, we prove that the original DDG method is not adjoint consistent, while the DDG(IC) and SDDG methods can be adjoint consistent with appropriate treatment of boundary conditions and correct modifications of the underlying output functionals. The performance of the three DDG methods is carefully investigated and evaluated through typical test cases. Based on the theoretical analysis, an adjoint-based h-adaptive DDG(IC) method is further developed and evaluated; numerical experiments show its potential in adjoint-based adaptation for simulating compressible flows.

  4. A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing

    2018-01-01

    To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most operational performance factors into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as the research objects; the ranking obtained by the proposed method was 4# > 6# > 1# > 5# > 2# > 3#, completely in conformity with the theoretical ranking, which indicates that the reliability and effectiveness of the EM-PCA method are high. The method can guide state comparisons among different units and support wind farm operational assessment.
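    The entropy-weighting half of an EM-PCA style assessment can be sketched as follows. The indicator matrix is fabricated for illustration (constructed so the ranking comes out 4 > 6 > 1 > 5 > 2 > 3), and the PCA stage is omitted.

```python
import numpy as np

# Sketch of the entropy-weight step: normalize the decision matrix, compute
# each indicator's information entropy, and weight indicators by (1 - entropy)
# so that indicators that discriminate more between turbines get larger
# weights; turbines are then ranked by their weighted scores.
X = np.array([                  # rows: 6 wind turbines, cols: 3 benefit indicators
    [0.82, 0.70, 0.65],
    [0.55, 0.52, 0.40],
    [0.48, 0.50, 0.38],
    [0.95, 0.90, 0.90],
    [0.60, 0.55, 0.45],
    [0.90, 0.85, 0.80],
])
P = X / X.sum(axis=0)                                   # column-normalize
k = 1.0 / np.log(X.shape[0])
E = -k * np.sum(P * np.log(P), axis=0)                  # entropy per indicator
w = (1.0 - E) / np.sum(1.0 - E)                         # entropy weights
scores = X @ w
ranking = np.argsort(-scores) + 1                       # 1-based turbine indices
print("weights:", np.round(w, 3), "ranking:", ranking)
```

    In the hybrid model, PCA would additionally decorrelate the indicators before (or after) weighting; the entropy step alone already rewards indicators with high discriminating power.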

  5. Qualitative methods: beyond the cookbook.

    PubMed

    Harding, G; Gantley, M

    1998-02-01

    Qualitative methods are increasingly in vogue in health services research (HSR). Such research, however, has often utilized, uncritically, a 'cookbook' of methods for data collection and common-sense principles for data analysis. This paper argues that qualitative HSR benefits from recognizing and drawing upon the theoretical principles underlying qualitative data collection and analysis. A distinction is drawn between problem-orientated and theory-orientated research in order to illustrate how problem-orientated research would benefit from the introduction of theoretical perspectives that develop the knowledge base of health services research.

  6. Sport fishing: a comparison of three indirect methods for estimating benefits.

    Treesearch

    Darrell L. Hueth; Elizabeth J. Strong; Roger D. Fight

    1988-01-01

    Three market-based methods for estimating values of sport fishing were compared by using a common data base. The three approaches were the travel-cost method, the hedonic travel-cost method, and the household-production method. A theoretical comparison of the resulting values showed that the results were not fully comparable in several ways. The comparison of empirical...

  7. Iso standardization of theoretical activity evaluation method for low and intermediate level activated waste generated at nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Makoto Kashiwagi; Garamszeghy, Mike; Lantes, Bertrand

    Disposal of low- and intermediate-level activated waste generated at nuclear power plants is being planned or carried out in many countries. The radioactivity concentrations and/or total quantities of long-lived, difficult-to-measure nuclides (DTM nuclides), such as C-14, Ni-63, Nb-94, and α-emitting nuclides, are often restricted by the safety case for a final repository as determined by each country's safety regulations, and these concentrations or amounts are required to be known and declared. With respect to waste contaminated by contact with process water, the scaling factor method (SF method), which is empirically based on sampling and analysis data, has been applied as an important method for determining concentrations of DTM nuclides. This method was standardized by the International Organization for Standardization (ISO) and published in 2007 as ISO 21238, 'Scaling factor method to determine the radioactivity of low and intermediate level radioactive waste packages generated at nuclear power plants' [1]. However, for activated metal waste with comparatively high radioactivity concentrations, such as reactor control rods and internal structures, direct sampling and radiochemical analysis of the DTM nuclides are limited by access to the material and potentially high personnel radiation exposure. In this case, theoretical calculation methods, combined with empirical methods based on remote radiation surveys, need to be used to best advantage for determining the disposal inventory of DTM nuclides while minimizing exposure to radiation workers. Pursuant to this objective, a standard for the theoretical evaluation of the radioactivity concentrations of DTM nuclides in activated waste is in process through ISO TC85/SC5 (ISO Technical Committee 85: Nuclear energy, nuclear technologies, and radiological protection; Subcommittee 5: Nuclear fuel cycle).
    The project team for this ISO standard was formed in 2011 and is composed of experts from 11 countries. The project team has been conducting technical discussions on theoretical methods for determining radioactivity concentrations, and has developed the draft International Standard ISO 16966, 'Theoretical activation calculation method to evaluate the radioactivity of activated waste generated at nuclear reactors' [2]. This paper describes the international standardization process developed by the ISO project team, and outlines two theoretical activity evaluation methods: the point method and the range method.

  8. Study on the millimeter-wave scale absorber based on the Salisbury screen

    NASA Astrophysics Data System (ADS)

    Yuan, Liming; Dai, Fei; Xu, Yonggang; Zhang, Yuan

    2018-03-01

    In order to address the problem of the millimeter-wave scale absorber, a Salisbury screen absorber is employed and designed on the basis of its reflection loss (RL). By optimizing parameters including the sheet resistance of the resistive surface layer and the permittivity and thickness of the grounded dielectric layer, the RL of the Salisbury screen absorber can be made identical to that of the theoretical scale absorber. An example is given to verify the effectiveness of the method: a Salisbury screen absorber is designed by the proposed method and compared with the theoretical scale absorber. Plate models and tri-corner reflector (TCR) models are then constructed from the design result, and their scattering properties are simulated with FEKO. The results reveal that the deviation between the designed Salisbury screen absorber and the theoretical scale absorber falls within the tolerance of radar cross section (RCS) measurement. The work in this paper has theoretical and practical significance for the electromagnetic measurement of models with large scale ratios.
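    The design variables named in the abstract (sheet resistance, spacer permittivity, spacer thickness) enter through the standard transmission-line model of a Salisbury screen, which can be sketched as follows; the numbers are illustrative, not the paper's optimized values.

```python
import numpy as np

# Standard textbook transmission-line model of a Salisbury screen: a resistive
# sheet of sheet resistance Rs sits on a grounded dielectric spacer of
# thickness d and relative permittivity eps_r. The grounded spacer acts as a
# short-circuited stub; its admittance adds to the sheet admittance, and the
# reflection loss follows from the reflection coefficient against free space.
c0 = 3e8
eta0 = 376.73                           # free-space impedance, ohms

def reflection_loss_db(f, Rs=377.0, d=1e-3, eps_r=3.0):
    lam = c0 / f
    beta = 2 * np.pi * np.sqrt(eps_r) / lam
    z_d = eta0 / np.sqrt(eps_r)                 # spacer characteristic impedance
    z_stub = 1j * z_d * np.tan(beta * d)        # short-circuited stub
    y_in = 1.0 / Rs + 1.0 / z_stub
    gamma = (1.0 / y_in - eta0) / (1.0 / y_in + eta0)
    return 20 * np.log10(np.abs(gamma))

f = np.linspace(20e9, 60e9, 401)        # millimeter-wave band
rl = reflection_loss_db(f)
f_res = f[np.argmin(rl)]
print(f"deepest absorption near {f_res/1e9:.1f} GHz, RL = {rl.min():.1f} dB")
```

    Matching such an RL curve to that of the theoretical scale absorber, by tuning `Rs`, `d`, and `eps_r`, is the optimization step the abstract describes.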

  9. On the design of a hierarchical SS7 network: A graph theoretical approach

    NASA Astrophysics Data System (ADS)

    Krauss, Lutz; Rufa, Gerhard

    1994-04-01

    This contribution is concerned with the design of Signaling System No. 7 networks based on graph theoretical methods. A hierarchical network topology is derived by combining the advantages of a hierarchical network structure with the realization of node-disjoint routes between nodes of the network. By using specific features of this topology, we develop an algorithm to construct circle-free routing data and to ensure bidirectionality, even in failure situations. The methods described are based on the requirement that the network topology, as well as the routing data, may be easily changed.

  10. An Information-Theoretic-Cluster Visualization for Self-Organizing Maps.

    PubMed

    Brito da Silva, Leonardo Enzo; Wunsch, Donald C

    2018-06-01

    Improved data visualization will be a significant tool to enhance cluster analysis. In this paper, an information-theoretic method for cluster visualization using self-organizing maps (SOMs) is presented. The information-theoretic visualization (IT-vis) has the same structure as the unified distance matrix, but instead of depicting Euclidean distances between adjacent neurons, it displays the similarity between the distributions associated with adjacent neurons. Each SOM neuron has an associated subset of the data set whose cardinality controls the granularity of the IT-vis and with which the first- and second-order statistics are computed and used to estimate their probability density functions. These are used to calculate the similarity measure, based on Renyi's quadratic cross entropy and cross information potential (CIP). The introduced visualizations combine the low computational cost and kernel estimation properties of the representative CIP and the data structure representation of a single-linkage-based grouping algorithm to generate an enhanced SOM-based visualization. The visual quality of the IT-vis is assessed by comparing it with other visualization methods for several real-world and synthetic benchmark data sets. Thus, this paper also contains a significant literature survey. The experiments demonstrate the cluster-revealing capabilities of the IT-vis, in which cluster boundaries are sharply captured. Additionally, the information-theoretic visualizations are used to perform clustering of the SOM. Compared with other methods, IT-vis of large SOMs yielded the best results in this paper, for which the quality of the final partitions was evaluated using external validity indices.
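    The similarity measure named in the abstract can be illustrated with a small sketch: the cross information potential (CIP) of two sample sets is the average pairwise Gaussian kernel evaluation, and Renyi's quadratic cross entropy is its negative logarithm. This 1-D toy version uses hypothetical data; the paper's IT-vis applies the idea to the multivariate data subsets of adjacent SOM neurons:

```python
import math

def cross_information_potential(x, y, sigma=1.0):
    """CIP of two 1-D sample sets: mean pairwise Gaussian kernel value.
    The effective bandwidth is sigma*sqrt(2) (convolution of two kernels)."""
    s = sigma * math.sqrt(2.0)
    norm = 1.0 / (s * math.sqrt(2.0 * math.pi))
    total = sum(math.exp(-((a - b) ** 2) / (2.0 * s * s)) for a in x for b in y)
    return norm * total / (len(x) * len(y))

def renyi_quadratic_cross_entropy(x, y, sigma=1.0):
    # Similar distributions -> high CIP -> low cross entropy
    return -math.log(cross_information_potential(x, y, sigma))

near = renyi_quadratic_cross_entropy([0.0, 0.1, 0.2], [0.1, 0.2, 0.3])
far = renyi_quadratic_cross_entropy([0.0, 0.1, 0.2], [5.0, 5.1, 5.2])
```

Adjacent neurons whose sample distributions overlap give a low cross entropy; a sharp jump in the measure marks a cluster boundary.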

  11. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information into an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
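    For a single mean constraint, the minimum-relative-entropy solution referred to in the abstract takes the standard exponential-tilting form: weights proportional to exp(λx_i), with λ chosen so the weighted mean matches the forecast. A sketch under that assumption (member values are hypothetical; the real MRE-update admits richer forecast constraints):

```python
import math

def mre_weights(values, target_mean, lo=-50.0, hi=50.0, iters=200):
    """Minimum relative entropy update of uniform ensemble weights under a
    mean constraint: w_i proportional to exp(lam * x_i), lam by bisection."""
    def weighted_mean(lam):
        w = [math.exp(lam * v) for v in values]
        return sum(wi * v for wi, v in zip(w, values)) / sum(w)

    for _ in range(iters):  # lam -> weighted mean is monotone increasing
        mid = 0.5 * (lo + hi)
        if weighted_mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Tilt a 5-member "climatological" ensemble toward a wetter forecast mean
members = [1.0, 2.0, 3.0, 4.0, 5.0]
weights = mre_weights(members, target_mean=3.8)
```

Because the tilting is exponential, the update adds exactly the information needed to satisfy the constraint and no more, which is the defining property of the MRE-update.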

  12. Response mechanism for surface acoustic wave gas sensors based on surface-adsorption.

    PubMed

    Liu, Jiansheng; Lu, Yanyan

    2014-04-16

    A theoretical model is established to describe the response mechanism of surface acoustic wave (SAW) gas sensors based on physical adsorption on the detector surface. Wohltjen's method is utilized to describe the relationship between the sensor output (frequency shift of the SAW oscillator) and the mass loaded on the detector surface. The Brunauer-Emmett-Teller (BET) formula and its improved form are introduced to depict the adsorption behavior of gas on the detector surface. By combining the two methods, we obtain a theoretical model for the response mechanism of SAW gas sensors. By using a commercial SAW gas chromatography (GC) analyzer, an experiment is performed to measure the frequency shifts caused by different concentrations of dimethyl methylphosphonate (DMMP). The parameters in the model are given by fitting the experimental results and the theoretical curve agrees well with the experimental data.
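    The model structure described, a BET isotherm feeding a mass-proportional frequency shift, can be sketched as follows. The lumped constant k is hypothetical and stands in for the monolayer mass and the Wohltjen mass-sensitivity coefficient of an actual device:

```python
def bet_coverage(x, c):
    """BET isotherm: adsorbed amount relative to monolayer capacity.
    x is relative pressure p/p0 (0 <= x < 1); c is the BET constant."""
    return c * x / ((1.0 - x) * (1.0 - x + c * x))

def saw_frequency_shift(x, c, k=1200.0):
    """Frequency shift (Hz) taken proportional to the adsorbed mass; the
    lumped constant k (hypothetical) stands for monolayer mass times the
    mass sensitivity of the SAW oscillator."""
    return k * bet_coverage(x, c)

# Coverage exceeds a monolayer as relative pressure grows
low = saw_frequency_shift(0.2, c=100.0)
high = saw_frequency_shift(0.4, c=100.0)
```

Fitting k and c to measured frequency shifts at known analyte concentrations is the calibration step the abstract describes.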

  13. Exchange coupling and magnetic anisotropy of exchanged-biased quantum tunnelling single-molecule magnet Ni3Mn2 complexes using theoretical methods based on Density Functional Theory.

    PubMed

    Gómez-Coca, Silvia; Ruiz, Eliseo

    2012-03-07

    The magnetic properties of a new family of single-molecule magnet Ni(3)Mn(2) complexes were studied using theoretical methods based on Density Functional Theory (DFT). The first part of this study is devoted to analysing the exchange coupling constants, focusing on the intramolecular as well as the intermolecular interactions. The calculated intramolecular J values were in excellent agreement with the experimental data, which show that all the couplings are ferromagnetic, leading to an S = 7 ground state. The intermolecular interactions were investigated because the two complexes studied do not show tunnelling at zero magnetic field. Usually, this exchange-biased quantum tunnelling is attributed to the presence of intermolecular interactions calculated with the help of theoretical methods. The results indicate the presence of weak intermolecular antiferromagnetic couplings that cannot explain the ferromagnetic value found experimentally for one of the systems. In the second part, the goal is to analyse magnetic anisotropy through the calculation of the zero-field splitting parameters (D and E), using DFT methods including the spin-orbit effect.

  14. Blade resonance parameter identification based on the tip-timing method without a once-per-revolution sensor

    NASA Astrophysics Data System (ADS)

    Guo, Haotian; Duan, Fajie; Zhang, Jilong

    2016-01-01

    Blade tip-timing is the most effective method for online blade vibration measurement in turbomachinery. In this article a synchronous resonance vibration measurement method for blades based on tip-timing is presented. This method requires no once-per-revolution sensor, which makes it more generally applicable where such a sensor is difficult to install, especially for the high-pressure rotors of dual-rotor engines. Only three casing-mounted probes are required to identify the engine order, amplitude, natural frequency and damping coefficient of the blade. A method is developed to identify the blade to which a tip-timing datum belongs without a once-per-revolution sensor. Theoretical analyses of resonance parameter measurement are presented. The theoretical error of the method is investigated and corrected. Experiments are conducted and the results indicate that blade resonance parameter identification is achieved without a once-per-revolution sensor.

  15. A theoretical method for the analysis and design of axisymmetric bodies. [flow distribution and incompressible fluids

    NASA Technical Reports Server (NTRS)

    Beatty, T. D.

    1975-01-01

    A theoretical method is presented for the computation of the flow field about an axisymmetric body operating in a viscous, incompressible fluid. A potential flow method was used to determine the inviscid flow field and to yield the boundary conditions for the boundary layer solutions. Boundary layer effects, in the form of displacement thickness and empirically modeled separation streamlines, are accounted for in subsequent potential flow solutions. This procedure is repeated until the solutions converge. An empirical method was used to determine base drag, allowing configuration drag to be computed.

  16. The uses and limitations of the square‐root‐impedance method for computing site amplification

    USGS Publications Warehouse

    Boore, David

    2013-01-01

    The square‐root‐impedance (SRI) method is a fast way of computing approximate site amplification that does not depend on the details of the velocity models. The SRI method underestimates the peak response of models with large impedance contrasts near their base, but the amplifications for those models are often close to or equal to the root mean square of the theoretical full resonant (FR) response of the higher modes. On the other hand, for velocity models made up of gradients, with no significant impedance changes across small ranges of depth, the SRI method systematically underestimates the theoretical FR response over a wide frequency range. For commonly used gradient models for generic rock sites, the SRI method underestimates the FR response by about 20%–30%. Notwithstanding the persistent underestimation of amplifications from theoretical FR calculations, however, amplifications from the SRI method may often provide more useful estimates of amplifications than the FR method, because the SRI amplifications are not sensitive to details of the models and will not exhibit the many peaks and valleys characteristic of theoretical full resonant amplifications (jaggedness sometimes not seen in amplifications based on averages of site response from multiple recordings at a given site). The lack of sensitivity to details of the velocity models also makes the SRI method useful in comparing the response of various velocity models, in spite of any systematic underestimation of the response. The quarter‐wavelength average velocity, which is fundamental to the SRI method, is useful by itself in site characterization, and as such, is the fundamental parameter used to characterize the site response in a number of recent ground‐motion prediction equations.
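    The quarter-wavelength average velocity underlying the SRI method can be sketched directly: for each frequency, find the depth reached in a quarter period, average velocity and density over that depth, and take the square root of the impedance ratio. A minimal sketch (the reference velocity and density are hypothetical source-region values, and the layered model is assumed deep enough for the target frequency):

```python
def qwl_amplification(layers, f, ref_vel=3500.0, ref_rho=2.72):
    """Square-root-impedance site amplification at frequency f (Hz).
    layers: (thickness_m, vs_m_per_s, density) tuples from the surface down;
    the model is assumed to reach the quarter-wavelength depth."""
    target_t = 1.0 / (4.0 * f)      # quarter-period vertical travel time
    depth = t = mass = 0.0
    for h, vs, rho in layers:
        dt = h / vs
        if t + dt >= target_t:      # quarter-wavelength depth is in this layer
            dz = (target_t - t) * vs
            depth += dz
            mass += dz * rho
            t = target_t
            break
        depth += h
        t += dt
        mass += h * rho
    avg_vs = depth / t              # quarter-wavelength average velocity
    avg_rho = mass / depth
    return (ref_rho * ref_vel / (avg_rho * avg_vs)) ** 0.5

# Soft 30 m layer over stiffer material: stronger amplification at high f
model = [(30.0, 300.0, 1.8), (970.0, 2000.0, 2.4)]
a_5hz = qwl_amplification(model, 5.0)
a_05hz = qwl_amplification(model, 0.5)
```

Because the amplification depends only on averaged properties down to the quarter-wavelength depth, the curve is smooth, which is exactly why it lacks the resonant peaks and valleys of the FR response.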

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yuan, Chris, E-mail: cyuan@uwm.edu; Wang, Endong; Zhai, Qiang

    Temporal homogeneity of inventory data is one of the major problems in life cycle assessment (LCA). Addressing temporal homogeneity of life cycle inventory data is important in reducing the uncertainties and improving the reliability of LCA results. This paper attempts to present a critical review and discussion on the fundamental issues of temporal homogeneity in conventional LCA and propose a theoretical framework for temporal discounting in LCA. Theoretical perspectives for temporal discounting in life cycle inventory analysis are discussed first based on the key elements of a scientific mechanism for temporal discounting. Then generic procedures for performing temporal discounting in LCA are derived and proposed based on the nature of the LCA method and the identified key elements of a scientific temporal discounting method. A five-step framework is proposed and reported in detail based on the technical methods and procedures needed to perform a temporal discounting in life cycle inventory analysis. Challenges and possible solutions are also identified and discussed for the technical procedure and scientific accomplishment of each step within the framework. - Highlights: • A critical review of the temporal homogeneity problem of life cycle inventory data • A theoretical framework for performing temporal discounting on inventory data • Methods provided to accomplish each step of the temporal discounting framework.

  18. Theoretical and experimental physical methods of neutron-capture therapy

    NASA Astrophysics Data System (ADS)

    Borisov, G. I.

    2011-09-01

    This review is based to a substantial degree on our priority developments and research at the IR-8 reactor of the Russian Research Centre Kurchatov Institute. New theoretical and experimental methods of neutron-capture therapy are developed and applied in practice; these are: A general analytical and semi-empiric theory of neutron-capture therapy (NCT) based on classical neutron physics and its main sections (elementary theories of moderation, diffusion, reflection, and absorption of neutrons) rather than on methods of mathematical simulation. The theory is, first of all, intended for practical application by physicists, engineers, biologists, and physicians. This theory can be mastered by anyone with a higher education of almost any kind and minimal experience in operating a personal computer.

  19. Stability of Castering Wheels for Aircraft Landing Gears

    NASA Technical Reports Server (NTRS)

    Kantrowitz, Arthur

    1940-01-01

    A theoretical study was made of the shimmy of castering wheels. The theory is based on the discovery of a phenomenon called kinematic shimmy. Experimental checks, use being made of a model having low-pressure tires, are reported and the applicability of the results to full scale is discussed. Theoretical methods of estimating the spindle viscous damping and the spindle solid friction necessary to avoid shimmy are given. A new method of avoiding shimmy -- lateral freedom -- is introduced.

  20. Theoretical validation for changing magnetic fields of systems of permanent magnets of drum separators

    NASA Astrophysics Data System (ADS)

    Lozovaya, S. Y.; Lozovoy, N. M.; Okunev, A. N.

    2018-03-01

    This article is devoted to the theoretical validation of the change in magnetic fields created by the permanent magnet systems of the drum separators. In the article, using the example of a magnetic separator for enrichment of highly magnetic ores, the method of analytical calculation of the magnetic fields of systems of permanent magnets based on the Biot-Savart-Laplace law, the equivalent solenoid method, and the superposition principle of fields is considered.

  1. Layover and shadow detection based on distributed spaceborne single-baseline InSAR

    NASA Astrophysics Data System (ADS)

    Huanxin, Zou; Bin, Cai; Changzhou, Fan; Yun, Ren

    2014-03-01

    Distributed spaceborne single-baseline InSAR is an effective technique for obtaining high-quality Digital Elevation Models. Layover and shadow are ubiquitous phenomena in SAR images because of the geometric relations of SAR imaging. In the signal processing of single-baseline InSAR, the phase singularity of layover and shadow makes the phase difficult to filter and unwrap. This paper analyzes the geometric and signal models of layover and shadow fields. Based on the interferometric signal autocorrelation matrix, the paper proposes a signal number estimation method based on information-theoretic criteria to distinguish layover and shadow from normal InSAR fields. The effectiveness and practicability of the proposed method are validated by simulation experiments and theoretical analysis.
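    Signal number estimation by information-theoretic criteria is classically done with the Wax-Kailath MDL criterion applied to the eigenvalues of a sample covariance matrix (here, the interferometric autocorrelation matrix). A sketch under that assumption, with hypothetical eigenvalues; a layover cell would show more than one dominant eigenvalue, a shadow cell none:

```python
import math

def mdl_signal_count(eigs, n_snapshots):
    """Wax-Kailath MDL estimate of the number of signals from the
    eigenvalues of a sample covariance matrix (sorted descending)."""
    p = len(eigs)
    best_k, best_val = 0, float("inf")
    for k in range(p):
        tail = eigs[k:]                      # presumed noise eigenvalues
        m = len(tail)
        geo = math.exp(sum(math.log(e) for e in tail) / m)
        ari = sum(tail) / m
        val = (-n_snapshots * m * math.log(geo / ari)
               + 0.5 * k * (2 * p - k) * math.log(n_snapshots))
        if val < best_val:
            best_val, best_k = val, k
    return best_k

# Two dominant eigenvalues over a flat noise floor -> two signal components,
# as would occur where layover mixes several scattering contributions
k = mdl_signal_count([9.0, 4.0, 0.1, 0.1, 0.1, 0.1], n_snapshots=200)
```

The likelihood term rewards treating the tail as equal noise eigenvalues, while the penalty term grows with the assumed number of signals, so no threshold has to be hand-tuned.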

  2. Response Mechanism for Surface Acoustic Wave Gas Sensors Based on Surface-Adsorption

    PubMed Central

    Liu, Jiansheng; Lu, Yanyan

    2014-01-01

    A theoretical model is established to describe the response mechanism of surface acoustic wave (SAW) gas sensors based on physical adsorption on the detector surface. Wohltjen's method is utilized to describe the relationship between the sensor output (frequency shift of the SAW oscillator) and the mass loaded on the detector surface. The Brunauer-Emmett-Teller (BET) formula and its improved form are introduced to depict the adsorption behavior of gas on the detector surface. By combining the two methods, we obtain a theoretical model for the response mechanism of SAW gas sensors. By using a commercial SAW gas chromatography (GC) analyzer, an experiment is performed to measure the frequency shifts caused by different concentrations of dimethyl methylphosphonate (DMMP). The parameters in the model are given by fitting the experimental results and the theoretical curve agrees well with the experimental data. PMID:24743157

  3. Theoretical analysis of stack gas emission velocity measurement by optical scintillation

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Dong, Feng-Zhong; Ni, Zhi-Bo; Pang, Tao; Zeng, Zong-Yong; Wu, Bian; Zhang, Zhi-Rong

    2014-04-01

    Theoretical analysis for an online measurement of the stack gas flow velocity based on the optical scintillation method with a structure of two parallel optical paths is performed. The causes of optical scintillation in a stack are first introduced. Then, the principle of flow velocity measurement and its mathematical expression based on cross correlation of the optical scintillation are presented. The field test results show that the flow velocity measured by the proposed technique in this article is consistent with the value tested by the Pitot tube. It verifies the effectiveness of this method. Finally, by use of the structure function of logarithmic light intensity fluctuations, the theoretical explanation of optical scintillation spectral characteristic in low frequency is given. The analysis of the optical scintillation spectrum provides the basis for the measurement of the stack gas flow velocity and particle concentration simultaneously.
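    The cross-correlation velocimetry principle in this abstract reduces to finding the transit-time lag between the two parallel optical paths and dividing the path separation by it. A sketch with synthetic scintillation signals (the sampling interval and path separation are hypothetical):

```python
import math

def transit_time_velocity(upper, lower, dt, path_sep):
    """Flow velocity from two scintillation records: pick the lag that
    maximizes the (unnormalized) cross-correlation, then v = separation/lag."""
    n = len(upper)
    best_lag, best_corr = 1, float("-inf")
    for lag in range(1, n // 2):
        corr = sum(a * b for a, b in zip(upper[:n - lag], lower[lag:]))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return path_sep / (best_lag * dt)

# Synthetic check: the downstream signal is the upstream one delayed 5 samples
upper = [math.sin(0.3 * i) + 0.5 * math.sin(1.1 * i) for i in range(200)]
lower = [0.0] * 5 + upper[:-5]
v = transit_time_velocity(upper, lower, dt=0.001, path_sep=0.05)  # 5 ms lag
```

In practice the signals would be mean-removed and the correlation normalized, but the lag-picking logic is the same.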

  4. Description and Recognition of the Concept of Social Capital in Higher Education System

    ERIC Educational Resources Information Center

    Tonkaboni, Forouzan; Yousefy, Alireza; Keshtiaray, Narges

    2013-01-01

    The current research is intended to describe and recognize the concept of social capital in higher education, based on a theoretical method with a descriptive-analytical approach. Description and recognition of the data, gathered from theoretical and experimental studies, indicated that social capital is one of the most important indices for…

  5. Performance Templates and the Regulation of Learning

    ERIC Educational Resources Information Center

    Lyons, Paul

    2009-01-01

    Purpose: The purpose of this paper is to provide a detailed, theoretical underpinning for the training and performance improvement method: performance template (P-T). The efficacy of P-T, with limitations, has been demonstrated in this journal and in others. However, the theoretical bases of the P-T approach had not been well-developed. The other…

  6. Neuroethics and animals: methods and philosophy.

    PubMed

    Takala, Tuija; Häyry, Matti

    2014-04-01

    This article provides an overview of the six other contributions in the Neuroethics and Animals special section. In addition, it discusses the methodological and theoretical problems of interdisciplinary fields. The article suggests that interdisciplinary approaches without established methodological and theoretical bases are difficult to assess scientifically. This might cause these fields to expand without actually advancing.

  7. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  8. Transonic pressure measurements and comparison of theory to experiment for an arrow-wing configuration. Volume 1: Experimental data report, base configuration and effects of wing twist and leading-edge configuration. [wind tunnel tests, aircraft models

    NASA Technical Reports Server (NTRS)

    Manro, M. E.; Manning, K. J. R.; Hallstaff, T. H.; Rogers, J. T.

    1975-01-01

    A wind tunnel test of an arrow-wing-body configuration consisting of flat and twisted wings, as well as a variety of leading- and trailing-edge control surface deflections, was conducted at Mach numbers from 0.4 to 1.1 to provide an experimental pressure data base for comparison with theoretical methods. Theory-to-experiment comparisons of detailed pressure distributions were made using current state-of-the-art attached and separated flow methods. The purpose of these comparisons was to delineate conditions under which these theories are valid for both flat and twisted wings and to explore the use of empirical methods to correct the theoretical methods where theory is deficient.

  9. Design and performance analysis of gas and liquid radial turbines

    NASA Astrophysics Data System (ADS)

    Tan, Xu

    In the first part of the research, pumps running in reverse as turbines are studied. This work uses experimental data from a wide range of pumps representing centrifugal pump configurations in terms of specific speed. Based on specific speed and specific diameter, an accurate correlation is developed to predict the performance at the best efficiency point of a centrifugal pump in its turbine mode of operation. The proposed prediction method yields very good results compared to previous attempts. The present method is compared to nine previous methods found in the literature. The comparison results show that the method proposed in this paper is the most accurate. The proposed method can be further complemented and supplemented by future tests to increase its accuracy. The proposed method is meaningful because it is based on both specific speed and specific diameter. The second part of the research is focused on the design and analysis of a radial gas turbine. The specification of the turbine is obtained from a solar biogas hybrid system. The system is theoretically analyzed and constructed based on the purchased compressor. Theoretical analysis results in a specification of 100 lb/min mass flow, 900 °C inlet total temperature and 1.575 atm inlet total pressure. The 1-D and 3-D geometry of the rotor is generated based on Aungier's method. 1-D loss model analysis and 3-D CFD simulations are performed to examine the performance of the rotor. The total-to-total efficiency of the rotor is more than 90%. With the help of CFD analysis, modifications to the preliminary design yielded optimized aerodynamic performance. Finally, a theoretical performance analysis of the hybrid system is performed with the designed turbine.
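    The correlation itself is not given in the abstract, but the two quantities it is built on, dimensionless specific speed and specific diameter at the best efficiency point, have standard definitions that can be sketched (the BEP data below are hypothetical):

```python
import math

def specific_speed(omega, q, head, g=9.81):
    """Dimensionless specific speed at the best efficiency point (BEP)."""
    return omega * math.sqrt(q) / (g * head) ** 0.75

def specific_diameter(d, q, head, g=9.81):
    """Dimensionless specific diameter at the BEP."""
    return d * (g * head) ** 0.25 / math.sqrt(q)

# Hypothetical BEP data: 1450 rpm, 0.05 m^3/s, 20 m head, 0.25 m impeller
omega = 1450.0 * 2.0 * math.pi / 60.0   # shaft speed in rad/s
ns = specific_speed(omega, 0.05, 20.0)
ds = specific_diameter(0.25, 0.05, 20.0)
```

A correlation in the (ns, ds) plane is attractive precisely because both coordinates are dimensionless, so pumps of different sizes and speeds collapse onto one chart.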

  10. Flight-Test Evaluation of Flutter-Prediction Methods

    NASA Technical Reports Server (NTRS)

    Lind, Rick; Brenner, Marty

    2003-01-01

    The flight-test community routinely spends considerable time and money to determine a range of flight conditions, called a flight envelope, within which an aircraft is safe to fly. The cost of determining a flight envelope could be greatly reduced if there were a method of safely and accurately predicting the speed associated with the onset of an instability called flutter. Several methods have been developed with the goal of predicting flutter speeds to improve the efficiency of flight testing. These methods include (1) data-based methods, in which one relies entirely on information obtained from the flight tests and (2) model-based approaches, in which one relies on a combination of flight data and theoretical models. The data-driven methods include one based on extrapolation of damping trends, one that involves an envelope function, one that involves the Zimmerman-Weissenburger flutter margin, and one that involves a discrete-time auto-regressive model. An example of a model-based approach is that of the flutterometer. These methods have all been shown to be theoretically valid and have been demonstrated on simple test cases; however, until now, they have not been thoroughly evaluated in flight tests. An experimental apparatus called the Aerostructures Test Wing (ATW) was developed to test these prediction methods.

  11. Accurate airway centerline extraction based on topological thinning using graph-theoretic analysis.

    PubMed

    Bian, Zijian; Tan, Wenjun; Yang, Jinzhu; Liu, Jiren; Zhao, Dazhe

    2014-01-01

    The quantitative analysis of the airway tree is of critical importance in the CT-based diagnosis and treatment of common pulmonary diseases. The extraction of the airway centerline is a precursor to identifying the airway hierarchical structure, measuring geometrical parameters, and guiding visualized detection. Traditional methods suffer from extra branches and circles due to incomplete segmentation results, which induce false analysis in applications. This paper proposes an automatic and robust centerline extraction method for the airway tree. First, the centerline is located based on the topological thinning method; border voxels are deleted symmetrically and iteratively to preserve topological and geometrical properties. Second, the structural information is generated using graph-theoretic analysis. Then inaccurate circles are removed with a distance weighting strategy, and extra branches are pruned according to clinical anatomic knowledge. The centerline region without false appendices is eventually determined after the described phases. Experimental results show that the proposed method identifies more than 96% of branches, keeps consistency across different cases, and achieves a superior circle-free structure and centrality.

  12. The Use of a Corpus in Contrastive Studies.

    ERIC Educational Resources Information Center

    Filipovic, Rudolf

    1973-01-01

    Before beginning the Serbocroatian-English Contrastive Project, it was necessary to determine whether to base the analysis on a corpus or on native intuitions. It seemed that the best method would combine the theoretical and the empirical. A translation method based on a corpus of text was adopted. The Brown University "Standard Sample of…

  13. The crystallographic, spectroscopic and theoretical studies on (E)-2-(((4-chlorophenyl)imino)methyl)-5-(diethylamino)phenol and (E)-2-(((3-chlorophenyl)imino)methyl)-5-(diethylamino)phenol molecules

    NASA Astrophysics Data System (ADS)

    Demirtaş, Güneş; Dege, Necmi; Ağar, Erbil; Uzun, Sümeyye Gümüş

    2018-01-01

    Two new salicylideneaniline (SA) derivative compounds, (E)-2-(((4-chlorophenyl)imino)methyl)-5-(diethylamino)phenol, compound (I), and (E)-2-(((3-chlorophenyl)imino)methyl)-5-(diethylamino)phenol, compound (II), have been synthesized and characterized by single crystal X-ray diffraction, IR spectroscopy, 1H NMR, 13C NMR and theoretical methods. Both compounds, which are Schiff base derivatives, are isomers of each other. While compound (I) crystallizes in the centrosymmetric monoclinic space group P21/c, compound (II) crystallizes in the orthorhombic space group P212121. The theoretical parameters of the molecules have been calculated by using Hartree-Fock (HF) and density functional theory (DFT/B3LYP) with the 6-31G(d,p) basis set. These theoretical parameters have been compared with the experimental parameters obtained by XRD. The experimental geometries of the compounds have been superimposed on the theoretical geometries calculated by the HF and DFT methods. Furthermore, theoretical IR calculations, molecular electrostatic potential (MEP) maps and frontier molecular orbitals have been created for the compounds.

  14. Validation of the theoretical domains framework for use in behaviour change and implementation research

    PubMed Central

    2012-01-01

    Background An integrative theoretical framework, developed for cross-disciplinary implementation and other behaviour change research, has been applied across a wide range of clinical situations. This study tests the validity of this framework. Methods Validity was investigated by behavioural experts sorting 112 unique theoretical constructs using closed and open sort tasks. The extent of replication was tested by Discriminant Content Validation and Fuzzy Cluster Analysis. Results There was good support for a refinement of the framework comprising 14 domains of theoretical constructs (average silhouette value 0.29): ‘Knowledge’, ‘Skills’, ‘Social/Professional Role and Identity’, ‘Beliefs about Capabilities’, ‘Optimism’, ‘Beliefs about Consequences’, ‘Reinforcement’, ‘Intentions’, ‘Goals’, ‘Memory, Attention and Decision Processes’, ‘Environmental Context and Resources’, ‘Social Influences’, ‘Emotions’, and ‘Behavioural Regulation’. Conclusions The refined Theoretical Domains Framework has a strengthened empirical base and provides a method for theoretically assessing implementation problems, as well as professional and other health-related behaviours as a basis for intervention development. PMID:22530986

  15. Theoretical and Experimental Studies of the Electro-Optic Effect: Toward a Microscopic Understanding.

    DTIC Science & Technology

    1981-08-01

    The electro-optic effect is investigated both theoretically and experimentally. The theoretical approach is based upon W.A. Harrison's 'Bond-Orbital Model'. The separate electronic and lattice contributions to the second-order, electro-optic susceptibility are examined within the context of this model and formulae which can accommodate any crystal structure are presented. In addition, a method for estimating the lattice response to a low frequency (dc) electric field is outlined. Finally, experimental measurements of the electro-

  16. An Exploration of E-Learning Benefits for Saudi Arabia: Toward Policy Reform

    ERIC Educational Resources Information Center

    Alrashidi, Abdulaziz

    2013-01-01

    Purpose: The purpose of this study was to examine policies and solutions addressing (a) improving education for citizens of the Kingdom of Saudi Arabia and (b) providing alternative instructional delivery methods, including e-learning for those living in remote areas. Theoretical Framework: The theoretical framework of this study was based on the…

  17. Transactors, Transformers and Beyond. A Multi-Method Development of a Theoretical Typology of Leadership.

    ERIC Educational Resources Information Center

    Pearce, Craig L.; Sims, Henry P., Jr.; Cox, Jonathan F.; Ball, Gail; Schnell, Eugene; Smith, Ken A.; Trevino, Linda

    2003-01-01

    To extend the transactional-transformational model of leadership, four theoretical behavioral types of leadership were developed based on literature review and data from studies of executive behavior (n=253) and subordinate attitudes (n=208). Confirmatory factor analysis of a third data set (n=702) support the existence of four leadership types:…

  18. The development of an adolescent smoking cessation intervention--an Intervention Mapping approach to planning.

    PubMed

    Dalum, Peter; Schaalma, Herman; Kok, Gerjo

    2012-02-01

    The objective of this project was to develop a theory- and evidence-based adolescent smoking cessation intervention using both new and existing materials. We used the Intervention Mapping framework for planning health promotion programmes. Based on a needs assessment, we identified important and changeable determinants of cessation behaviour, specified change objectives for the intervention programme, selected theoretical change methods for accomplishing intervention objectives and finally operationalized change methods into practical intervention strategies. We found that guided practice, modelling, self-monitoring, coping planning, consciousness raising, dramatic relief and decisional balance were suitable methods for adolescent smoking cessation. We selected behavioural journalism, guided practice and Motivational Interviewing as strategies in our intervention. Intervention Mapping helped us to develop a systematic adolescent smoking cessation intervention with a clear link between behavioural goals, theoretical methods, practical strategies and materials and with a strong focus on implementation and recruitment. This paper does not present evaluation data.

  19. Wavelength selection for portable noninvasive blood component measurement system based on spectral difference coefficient and dynamic spectrum

    NASA Astrophysics Data System (ADS)

    Feng, Ximeng; Li, Gang; Yu, Haixia; Wang, Shaohui; Yi, Xiaoqing; Lin, Ling

    2018-03-01

    Noninvasive blood component analysis by spectroscopy has been a hotspot in biomedical engineering in recent years. Dynamic spectrum provides an excellent idea for noninvasive blood component measurement, but studies have been limited to the application of broadband light sources and high-resolution spectroscopy instruments. In order to remove redundant information, a more effective wavelength selection method is presented in this paper. In contrast to many common wavelength selection methods, this method is based on the underlying sensing mechanism, which gives it a clear physical interpretation and allows it to avoid noise from the acquisition system. The spectral difference coefficient was theoretically proved to have guiding significance for wavelength selection. After theoretical analysis, a multi-band spectral difference coefficient-wavelength selection method combined with the dynamic spectrum was proposed. An experimental analysis based on clinical trial data from 200 volunteers was conducted to illustrate the effectiveness of this method. The extreme learning machine was used to develop the calibration models between the dynamic spectrum data and hemoglobin concentration. The experimental results show that the prediction precision of hemoglobin concentration using the multi-band spectral difference coefficient-wavelength selection method is higher than that of other methods.
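The calibration step described above, an extreme learning machine regressing hemoglobin concentration from selected-wavelength features, can be sketched as follows. The clinical data are not public, so the features, target, sample count, and hidden-layer width here are simulated stand-ins, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: 200 subjects, 8 selected wavelengths.
n_subjects, n_wavelengths = 200, 8
X = rng.normal(size=(n_subjects, n_wavelengths))   # selected-wavelength features
y = X @ rng.normal(size=n_wavelengths) + 0.01 * rng.normal(size=n_subjects)

def elm_fit(X, y, n_hidden=50):
    """Extreme learning machine: random hidden layer, least-squares readout."""
    W = rng.normal(size=(X.shape[1], n_hidden))    # fixed random input weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

model = elm_fit(X, y)
pred = elm_predict(X, *model)
```

Only the output weights are trained, which is what makes ELM calibration fast compared with backpropagation.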

  20. Learning outcomes of "The Oncology Patient" study among nursing students: A comparison of teaching strategies.

    PubMed

    Roca, Judith; Reguant, Mercedes; Canet, Olga

    2016-11-01

    Teaching strategies are essential in order to facilitate meaningful learning and the development of high-level thinking skills in students. To compare three teaching methodologies (problem-based learning, case-based teaching and traditional methods) in terms of the learning outcomes achieved by nursing students. This quasi-experimental research was carried out in the Nursing Degree programme in a group of 74 students who explored the subject of The Oncology Patient through the aforementioned strategies. A performance test was applied based on Bloom's Revised Taxonomy. A significant correlation was found between the intragroup theoretical and theoretical-practical dimensions. Likewise, intergroup differences were related to each teaching methodology. Hence, significant differences were estimated between the traditional methodology (x̄ = 9.13), case-based teaching (x̄ = 12.96) and problem-based learning (x̄ = 14.84). Problem-based learning was shown to be the most successful learning method, followed by case-based teaching and the traditional methodology. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Travel into a fairy land: a critique of modern qualitative and mixed methods psychologies.

    PubMed

    Toomela, Aaro

    2011-03-01

    In this article modern qualitative and mixed methods approaches are criticized from the standpoint of structural-systemic epistemology. It is suggested that modern qualitative methodologies suffer from several fallacies: some are grounded on inherently contradictory epistemology; others ask scientific questions only after the methods have been chosen, conduct studies inductively so that not only answers but even questions are supposed to be discovered, do not create artificial situations and constraints on study-situations, are adevelopmental by nature, study not external things and phenomena but symbols and representations (often the object of study turns out to be the researcher rather than the researched), rely on ambiguous data interpretation methods based to a large degree on feelings and opinions, aim to understand the unique, which is theoretically impossible, or have theoretical problems with sampling. Any one of these fallacies would be sufficient to exclude any possibility of achieving structural-systemic understanding of the studied things and phenomena. It also turns out that modern qualitative methodologies share several fallacies with the quantitative methodology. Therefore mixed methods approaches are not able to overcome the fundamental difficulties that characterize the constituent methods taken separately. It is proposed that the structural-systemic methodology that dominated psychological thought in pre-WWII continental Europe is philosophically and theoretically better grounded than the other methodologies that can be distinguished in psychology today. Future psychology should be based on structural-systemic methodology.

  2. Joint research effort on vibrations of twisted plates, phase 1: Final results

    NASA Technical Reports Server (NTRS)

    Kielb, R. E.; Leissa, A. W.; Macbain, J. C.; Carney, K. S.

    1985-01-01

    The complete theoretical and experimental results of the first phase of a joint government/industry/university research study on the vibration characteristics of twisted cantilever plates are given. The study is conducted to generate an experimental data base and to compare many different theoretical methods with each other and with the experimental results. Plates with aspect ratios, thickness ratios, and twist angles representative of current gas turbine engine blading are investigated. The theoretical results are generated by numerous finite element, shell, and beam analysis methods. The experimental results are obtained by precision matching a set of twisted plates and testing them at two laboratories. The second and final phase of the study will concern the effects of rotation.

  3. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    NASA Astrophysics Data System (ADS)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
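The CHO statistic itself is a standard quadratic form: detectability SNR² = Δv̄ᵀ K⁻¹ Δv̄, where Δv̄ is the mean channel-output difference between signal-present and signal-absent images and K is the channel covariance. A small self-contained sketch, with invented 1-D "images" and random channels standing in for the paper's MAP reconstructions and frequency channels:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: 1-D "images" with a Gaussian signal and random channels.
n_pix, n_channels, n_samples = 64, 4, 500
signal = np.exp(-0.5 * ((np.arange(n_pix) - 32) / 3.0) ** 2)
U = rng.normal(size=(n_pix, n_channels))    # hypothetical channel matrix

g0 = rng.normal(size=(n_samples, n_pix))    # signal-absent images
g1 = g0 + signal                            # signal-present images

v0, v1 = g0 @ U, g1 @ U                     # channel outputs
dv = v1.mean(axis=0) - v0.mean(axis=0)      # mean channel difference
K = 0.5 * (np.cov(v0.T) + np.cov(v1.T))     # pooled channel covariance

w = np.linalg.solve(K, dv)                  # Hotelling template in channel space
snr2 = dv @ w                               # CHO detectability (SNR^2)
```

The paper's contribution is computing the mean and covariance entering this form theoretically rather than from sample images, which is what makes the evaluation cheap.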

  4. Two Improved Access Methods on Compact Binary (CB) Trees.

    ERIC Educational Resources Information Center

    Shishibori, Masami; Koyama, Masafumi; Okada, Makoto; Aoe, Jun-ichi

    2000-01-01

    Discusses information retrieval and the use of binary trees as a fast access method for search strategies such as hashing. Proposes new methods based on compact binary trees that provide faster access and more compact storage, explains the theoretical basis, and confirms the validity of the methods through empirical observations. (LRW)

  5. Improved omit set displacement recoveries in dynamic analysis

    NASA Technical Reports Server (NTRS)

    Allen, Tom; Cook, Greg; Walls, Bill

    1993-01-01

    Two related methods for improving the dependent (OMIT set) displacements after performing a Guyan reduction are presented. The theoretical bases for the methods are derived. The NASTRAN DMAP ALTERs used to implement the methods in a NASTRAN execution are described. Data are presented that verify the methods and the NASTRAN DMAP ALTERs.
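As a toy illustration of the Guyan reduction that both methods start from (not the NASTRAN DMAP implementation itself), consider a three-DOF spring chain; the partition into retained (ASET) and omitted (OSET) degrees of freedom and the matrices are invented for the example:

```python
import numpy as np

# Small spring-mass chain as a stand-in structure (values are illustrative).
K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
M = np.eye(3)

a = [0]        # retained (ASET) dof
o = [1, 2]     # omitted (OSET) dof

Koo = K[np.ix_(o, o)]
Koa = K[np.ix_(o, a)]
Go = -np.linalg.solve(Koo, Koa)        # static condensation matrix
T = np.vstack([np.eye(len(a)), Go])    # Guyan transformation

Kr = T.T @ K @ T                       # reduced stiffness
Mr = T.T @ M @ T                       # reduced mass

ua = np.array([1.0])                   # ASET displacement from reduced solve
uo = Go @ ua                           # static recovery of OMIT displacements
```

The improved recoveries described in the abstract correct this purely static back-substitution for inertia effects; that refinement is not reproduced here.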

  6. Theoretical geology

    NASA Astrophysics Data System (ADS)

    Mikeš, Daniel

    2010-05-01

    Theoretical geology Present day geology is mostly empirical in nature. I claim that geology is by nature complex and that the empirical approach is bound to fail. Let us consider the input to be the set of ambient conditions and the output to be the sedimentary rock record. I claim that the output can only be deduced from the input if the relation from input to output is known. The fundamental question is therefore the following: can one predict the output from the input, i.e. can one predict the behaviour of a sedimentary system? If one can, then the empirical/deductive method has a chance; if one cannot, then that method is bound to fail. The fundamental problem to solve is therefore the following: how to predict the behaviour of a sedimentary system? It is interesting to observe that this question is never asked, and many a study is conducted by the empirical/deductive method; it seems that the empirical method has been accepted as appropriate without question. It is, however, easy to argue that a sedimentary system is by nature complex, that several input parameters vary at the same time, and that they can create similar output in the rock record. It follows trivially from these first principles that in such a case the deductive solution cannot be unique. At the same time, several geological methods depart precisely from the assumption that one particular variable is the dictator/driver and that the others are constant, even though the data do not support such an assumption. The method of "sequence stratigraphy" is a typical example of such a dogma. It can easily be argued that all interpretation resulting from a method built on uncertain or wrong assumptions is erroneous. Still, this method has survived for many years, notwithstanding all the criticism it has received. This is just one example from the present day geological world, and it is not unique.
Even the alternative methods criticising sequence stratigraphy actually depart from the same erroneous assumptions and do not solve the fundamental issue that lies at the base of the problem. This problem is straightforward and obvious: a sedimentary system is inherently four-dimensional (3 spatial dimensions + 1 temporal dimension). Any method using an inferior number of dimensions is bound to fail to describe the evolution of a sedimentary system. It is indicative of the present day geological world that such fundamental issues are overlooked; the only explanation one can point to is the so-called "rationality" of today's society. Simple common sense leads us to the conclusion that in this case the empirical method is bound to fail and the only method that can solve the problem is the theoretical approach. This reasoning is completely trivial for the traditional exact sciences like physics and mathematics and for applied sciences like engineering, but not for geology, a science that was traditionally descriptive and jumped to empirical science, skipping the stage of theoretical science. I argue that the gap of theoretical geology is left open and needs to be filled. Every discipline in geology lacks a theoretical base. This base can only be provided by the theoretical/inductive approach and cannot possibly be provided by the empirical/deductive approach. Once a critical mass of geologists realises this flaw in today's geology, we can start solving the fundamental problems in geology.

  7. Research on theoretical optimization and experimental verification of minimum resistance hull form based on Rankine source method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-Ji; Zhang, Zhu-Xin

    2015-09-01

    To obtain a low-resistance, high-efficiency, energy-saving ship, a minimum total resistance hull form design method is studied based on the potential flow theory of wave-making resistance, considering the effects of tail viscous separation. With the sum of wave resistance and viscous resistance as the objective function and the parameters of a B-Spline function as design variables, mathematical models are built using the Nonlinear Programming Method (NLP), ensuring the basic displacement limit and accounting for rear viscous separation. We developed ship lines optimization procedures with intellectual property rights. Series60 is used as the parent ship in the optimization design to obtain a theoretically improved ship (Series60-1). Drag tests of the improved ship (Series60-1) were then conducted to obtain the actual minimum total resistance hull form.

  8. Transmedia Teaching Framework: From Group Projects to Curriculum Development

    ERIC Educational Resources Information Center

    Reid, James; Gilardi, Filippo

    2016-01-01

    This paper describes an innovative project-based learning framework theoretically based on the ideas of Transmedia Storytelling, Participatory Cultures and Multiple Intelligences that can be integrated into the flipped classroom method, and practically addressed using Content-Based Instruction (CBI) and Project-Based Learning (PBL) approaches.…

  9. Wave transmission approach based on modal analysis for embedded mechanical systems

    NASA Astrophysics Data System (ADS)

    Cretu, Nicolae; Nita, Gelu; Ioan Pop, Mihail

    2013-09-01

    An experimental method for determining the phase velocity in small solid samples is proposed. The method is based on measuring the resonant frequencies of a binary or ternary solid elastic system comprising the small sample of interest and a gauge material of manageable size. The wave transmission matrix of the combined system is derived and the theoretical values of its eigenvalues are used to determine the expected eigenfrequencies that, equated with the measured values, allow for the numerical estimation of the phase velocities in both materials. The known phase velocity of the gauge material is then used to assess the accuracy of the method. Using computer simulation and the experimental values for phase velocities, the theoretical values for the eigenfrequencies of the eigenmodes of the embedded elastic system are obtained to validate the method. We conclude that the proposed experimental method may be reliably used to determine the elastic properties of small solid samples whose geometries do not allow a direct measurement of their resonant frequencies.

  10. The Systematic Development of an Internet-Based Smoking Cessation Intervention for Adults.

    PubMed

    Dalum, Peter; Brandt, Caroline Lyng; Skov-Ettrup, Lise; Tolstrup, Janne; Kok, Gerjo

    2016-07-01

    Objectives The objective of this project was to determine whether intervention mapping is a suitable strategy for developing an Internet- and text message-based smoking cessation intervention. Method We used the Intervention Mapping framework for planning health promotion programs. After a needs assessment, we identified important changeable determinants of cessation behavior, specified objectives for the intervention, selected theoretical methods for meeting our objectives, and operationalized change methods into practical intervention strategies. Results We found that "social cognitive theory," the "transtheoretical model/stages of change," "self-regulation theory," and "appreciative inquiry" were relevant theories for smoking cessation interventions. From these theories, we selected modeling/behavioral journalism, feedback, planning coping responses/if-then statements, gain frame/positive imaging, consciousness-raising, helping relationships, stimulus control, and goal-setting as suitable methods for an Internet- and text-based adult smoking cessation program. Furthermore, we identified computer tailoring as a useful strategy for adapting the intervention to individual users. Conclusion The Intervention Mapping method, with a clear link between behavioral goals, theoretical methods, and practical strategies and materials, proved useful for systematic development of a digital smoking cessation intervention for adults. © 2016 Society for Public Health Education.

  11. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  12. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.
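As a rough illustration of the histogram-of-oriented-gradients idea underlying WRHOG, together with the normalized-correlation matching step, consider the unweighted sketch below. The robustness weighting that distinguishes WRHOG, and the VO decomposition feeding it, are not reproduced; the image and bin count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((32, 32))                    # stand-in for a structure layer

gy, gx = np.gradient(img)
mag = np.hypot(gx, gy)                        # gradient magnitudes
ang = np.mod(np.arctan2(gy, gx), np.pi)       # orientations folded to [0, pi)

n_bins = 9
bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
hist = hist / (np.linalg.norm(hist) + 1e-12)  # L2-normalized descriptor

def ncc(a, b):
    """Normalized correlation coefficient used for matching descriptors."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Matching two palmprints then reduces to comparing their descriptors with `ncc` and thresholding the score.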

  13. The Schwinger Variational Method

    NASA Technical Reports Server (NTRS)

    Huo, Winifred M.

    1995-01-01

    Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andronov, V.A.; Zhidov, I.G.; Meskov, E.E.

    The report presents the basic results of calculational, theoretical, and experimental efforts in the study of Rayleigh-Taylor, Kelvin-Helmholtz, and Richtmyer-Meshkov instabilities and the turbulent mixing caused by their evolution. Since the late forties the VNIIEF has been conducting these investigations. This report is based on data published at different times in Russian and foreign journals. The first part of the report deals with the calculational and theoretical techniques currently applied to the description of hydrodynamic instabilities, as well as with the results of several individual problems and their comparison with experiment. These methods can be divided into two types: direct numerical simulation methods and phenomenological methods. The first type includes regular 2D and 3D gasdynamical techniques as well as techniques based on the small perturbation approximation and on the incompressible liquid approximation. The second type comprises techniques based on various phenomenological turbulence models. The second part of the report describes the experimental methods and cites the experimental results of Rayleigh-Taylor and Richtmyer-Meshkov instability studies as well as of turbulent mixing. The applied methods were based on thin-film gaseous models, jelly models and liquid layer models. The research was done for plane and cylindrical geometries. As drivers, shock tubes of different designs were used, as well as gaseous explosive mixtures, compressed air and electric wire explosions. The experimental results were applied in calibrating the calculational-theoretical techniques. The authors did not aim at covering all VNIIEF research done in this field of science. To a great extent the choice of material depended on the personal contribution of the authors to these studies.

  15. Use of an expert system data analysis manager for space shuttle main engine test evaluation

    NASA Technical Reports Server (NTRS)

    Abernethy, Ken

    1988-01-01

    The ability to articulate, collect, and automate the application of the expertise needed for the analysis of space shuttle main engine (SSME) test data would be of great benefit to NASA liquid rocket engine experts. This paper describes a project whose goal is to build a rule-based expert system which incorporates such expertise. Experiential expertise, collected directly from the experts currently involved in SSME data analysis, is used to build a rule base to identify engine anomalies similar to those analyzed previously. Additionally, an alternate method of expertise capture is being explored. This method would generate rules inductively based on calculations made using a theoretical model of the SSME's operation. The latter rules would be capable of diagnosing anomalies which may not have appeared before, but whose effects can be predicted by the theoretical model.

  16. Caprylate Salts Based on Amines as Volatile Corrosion Inhibitors for Metallic Zinc: Theoretical and Experimental Studies.

    PubMed

    Valente, Marco A G; Teixeira, Deiver A; Azevedo, David L; Feliciano, Gustavo T; Benedetti, Assis V; Fugivara, Cecílio S

    2017-01-01

    The interaction of volatile corrosion inhibitors (VCI), caprylate salt derivatives from amines, with zinc metallic surfaces is assessed by density functional theory (DFT) computer simulations, electrochemical impedance (EIS) measurements and humid chamber tests. The results obtained by the different methods were compared, and linear correlations were obtained between theoretical and experimental data. The correlations between experimental and theoretical results showed that the molecular size is the determining factor in the inhibition efficiency. The models used and experimental results indicated that dicyclohexylamine caprylate is the most efficient inhibitor.

  17. [Theoretical and methodological bases for formation of future drivers 'readiness to application of physical-rehabilitation technologies].

    PubMed

    Yemets, Anatoliy V; Donchenko, Viktoriya I; Scrinick, Eugenia O

    2018-01-01

    Introduction: The experimental work is aimed at introducing theoretical and methodological foundations into the professional training of the future doctor. The aim: To identify the dynamics of quantitative and qualitative indicators of the readiness of a specialist in medicine. Materials and methods: The article presents the course and results of experimental work on the conditions for forming the readiness of future specialists in medicine. Results: Methodical bases for studying the disciplines of the general-practice and specialized professional stages of the experimental training of future physicians have been worked out. Conclusions: The training is developed taking into account the peculiarities of preparing future physicians, with materials for the various stages of experimental implementation in the educational process of higher medical educational institutions.

  18. Deterministic convergence of chaos injection-based gradient method for training feedforward neural networks.

    PubMed

    Zhang, Huisheng; Zhang, Ying; Xu, Dongpo; Liu, Xiaodong

    2015-06-01

    It has been shown that, by adding a chaotic sequence to the weight update during the training of neural networks, the chaos injection-based gradient method (CIBGM) is superior to the standard backpropagation algorithm. This paper presents a theoretical convergence analysis of CIBGM for training feedforward neural networks. We consider both batch learning and online learning. Under mild conditions, we prove weak convergence, i.e., the training error tends to a constant and the gradient of the error function tends to zero. Moreover, the strong convergence of CIBGM is also obtained with the help of an extra condition. The theoretical results are substantiated by a simulation example.
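A minimal sketch of chaos injection in gradient training, using a logistic-map sequence added to the weight update and annealed over time. The network, task, learning rate, and annealing schedule are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy regression task and a one-hidden-layer network.
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

W1 = rng.normal(scale=0.5, size=(2, 8))
w2 = rng.normal(scale=0.5, size=8)

def loss_and_grads(W1, w2):
    H = np.tanh(X @ W1)                     # hidden activations
    err = H @ w2 - y
    gw2 = H.T @ err / len(y)
    gW1 = X.T @ ((err[:, None] * w2) * (1.0 - H**2)) / len(y)
    return 0.5 * np.mean(err**2), gW1, gw2

eta, alpha, z = 0.2, 0.02, 0.7              # learning rate, chaos scale, map state
losses = []
for t in range(500):
    z = 4.0 * z * (1.0 - z)                 # logistic map in its chaotic regime
    c = alpha * 0.99**t * (z - 0.5)         # annealed chaotic injection term
    L, gW1, gw2 = loss_and_grads(W1, w2)
    losses.append(L)
    W1 -= eta * gW1 - c                     # gradient step plus chaos injection
    w2 -= eta * gw2 - c
```

Annealing the injected term is what allows the weak-convergence result: the perturbation helps escape flat regions early but vanishes in the limit.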

  19. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis

    PubMed Central

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks which currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization. PMID:29377956

  20. An EGR performance evaluation and decision-making approach based on grey theory and grey entropy analysis.

    PubMed

    Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan

    2018-01-01

    Exhaust gas recirculation (EGR) is one of the main methods of reducing NOx emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks which currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate the feasibility of the proposed method for providing theoretical support and a reference for further EGR optimization.
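The grey relational and entropy-weighting steps can be sketched as follows; the decision matrix (rows as candidate EGR rates, columns as normalized performance indices) and the distinguishing coefficient are invented for illustration:

```python
import numpy as np

# Hypothetical alternatives scored on three normalized indices
# (e.g. NOx, fuel consumption, soot); values are illustrative only.
data = np.array([[0.9, 0.6, 0.7],
                 [0.7, 0.8, 0.9],
                 [0.6, 0.9, 0.5]])
ref = data.max(axis=0)                    # ideal reference sequence

delta = np.abs(data - ref)
rho = 0.5                                 # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

# Entropy-based weights: less uniform (more informative) columns weigh more.
p = xi / xi.sum(axis=0)
entropy = -(p * np.log(p)).sum(axis=0) / np.log(len(data))
w = (1 - entropy) / (1 - entropy).sum()

grade = xi @ w                            # grey relational grade per alternative
best = int(np.argmax(grade))              # index of the preferred alternative
```

The alternative with the highest grade is the recommended operating point; the paper's multi-objective grey situation decision layer, which builds the decision matrix from engine data, sits upstream of this step.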

  1. Content based image retrieval for matching images of improvised explosive devices in which snake initialization is viewed as an inverse problem

    NASA Astrophysics Data System (ADS)

    Acton, Scott T.; Gilliam, Andrew D.; Li, Bing; Rossi, Adam

    2008-02-01

    Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture (wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach to match annotated printed circuit board images for which no schematic or connectivity information is available. The shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.

  2. The Effectiveness of Teaching Methods Used in Graphic Design Pedagogy in Both Analogue and Digital Education Systems

    ERIC Educational Resources Information Center

    Alhajri, Salman

    2016-01-01

    Purpose: this paper investigates the effectiveness of teaching methods used in graphic design pedagogy in both analogue and digital education systems. Methodology and approach: the paper is based on theoretical study using a qualitative, case study approach. Comparison between the digital teaching methods and traditional teaching methods was…

  3. Study on some useful Operators for Graph-theoretic Image Processing

    NASA Astrophysics Data System (ADS)

    Moghani, Ali; Nasiri, Parviz

    2010-11-01

    In this paper we describe a human-perception-based approach to pixel color segmentation, applied to color reconstruction by a numerical method associated with a graph-theoretic image processing algorithm, typically in grayscale. Fuzzy sets defined on the Hue, Saturation and Value components of the HSV color space provide a fuzzy logic model that aims to follow the human intuition of color classification.
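    The fuzzy HSV classification idea above can be sketched with triangular membership functions on the hue circle; the class names and breakpoints below are illustrative assumptions, not the rule base used in the paper:

    ```python
    def tri(x, a, b, c):
        """Triangular fuzzy membership over [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    def classify_hue(h):
        """Assign a hue (in degrees) to the class with the highest membership.
        Red wraps around 0/360, so it gets two overlapping triangles."""
        memberships = {
            "red": max(tri(h, -60, 0, 60), tri(h, 300, 360, 420)),
            "green": tri(h, 60, 120, 180),
            "blue": tri(h, 180, 240, 300),
        }
        return max(memberships, key=memberships.get)

    print(classify_hue(10), classify_hue(130), classify_hue(250))
    # red green blue
    ```

    A full model would add Saturation and Value memberships (e.g., to separate grays and blacks) and combine them with fuzzy AND rules.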

  4. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E.L.; Seong, J.C.; Steinwand, D.R.; Finn, M.P.

    2002-01-01

    This research aims at building a decision support system (DSS) for selecting an optimum projection considering various factors, such as pixel size, areal extent, number of categories, spatial pattern of categories, resampling methods, and error correction methods. Specifically, this research will investigate three goals theoretically and empirically and, using the already developed empirical base of knowledge with these results, develop an expert system for map projection of raster data for regional and global database modeling. The three theoretical goals are as follows: (1) The development of a dynamic projection that adjusts projection formulas for latitude on the basis of raster cell size to maintain equal-sized cells. (2) The investigation of the relationships between the raster representation and the distortion of features, number of categories, and spatial pattern. (3) The development of an error correction and resampling procedure that is based on error analysis of raster projection.
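    The first theoretical goal, adjusting the grid with latitude to maintain equal-sized cells, can be illustrated with a simple spherical-Earth sketch; the cell size, Earth radius, and rounding rule are illustrative assumptions, not the projection formulas developed in the study:

    ```python
    import math

    def cells_per_row(lat_deg, cell_km=1.0, radius_km=6371.0):
        """Number of raster cells in one latitude row so that each cell spans
        roughly cell_km of ground distance (spherical Earth, illustrative)."""
        row_length_km = 2 * math.pi * radius_km * math.cos(math.radians(lat_deg))
        return max(1, round(row_length_km / cell_km))

    print(cells_per_row(0))    # cells around the full equatorial circumference
    print(cells_per_row(60))   # about half as many: cos(60 deg) = 0.5
    ```

    Shrinking the number of cells per row toward the poles is what keeps ground-distance cell size roughly constant, instead of letting fixed-angle cells degenerate at high latitudes.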

  5. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter.

    PubMed

    Choi, Jihoon; Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-09-13

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected.
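    The generalised cross-correlation estimator described above can be sketched as follows; a regularised PHAT-style weight stands in for the paper's modified ML prefilter, and the sampling rate, regularisation factor, and synthetic test signals are illustrative assumptions:

    ```python
    import numpy as np

    def gcc_delay(x, y, fs, gamma=0.01):
        """Estimate the time difference between sensor signals x and y via
        generalised cross-correlation with a regularised spectral weight."""
        n = len(x) + len(y)
        X = np.fft.rfft(x, n=n)
        Y = np.fft.rfft(y, n=n)
        csd = X * np.conj(Y)                      # cross-spectral density
        w = 1.0 / (np.abs(csd) + gamma * np.abs(csd).max())
        cc = np.fft.irfft(csd * w, n=n)           # weighted cross-correlation
        lag = int(np.argmax(np.abs(cc)))
        if lag > n // 2:                          # unwrap the circular lag
            lag -= n
        return -lag / fs                          # positive when y lags x

    fs = 1000.0
    rng = np.random.default_rng(0)
    s = rng.standard_normal(2048)
    d = 37                                        # true delay in samples
    x = s
    y = np.concatenate((np.zeros(d), s))[: len(s)]
    print(gcc_delay(x, y, fs))                    # ≈ 0.037 s
    ```

    With two vibration sensors bracketing a leak, the estimated time difference combines with the pipe's wave propagation speed to locate the leak along the pipe.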

  6. Leak Detection and Location of Water Pipes Using Vibration Sensors and Modified ML Prefilter

    PubMed Central

    Shin, Joonho; Song, Choonggeun; Han, Suyong; Park, Doo Il

    2017-01-01

    This paper proposes a new leak detection and location method based on vibration sensors and generalised cross-correlation techniques. Considering the estimation errors of the power spectral densities (PSDs) and the cross-spectral density (CSD), the proposed method employs a modified maximum-likelihood (ML) prefilter with a regularisation factor. We derive a theoretical variance of the time difference estimation error through summation in the discrete-frequency domain, and find the optimal regularisation factor that minimises the theoretical variance in practical water pipe channels. The proposed method is compared with conventional correlation-based techniques via numerical simulations using a water pipe channel model, and it is shown through field measurement that the proposed modified ML prefilter outperforms conventional prefilters for the generalised cross-correlation. In addition, we provide a formula to calculate the leak location using the time difference estimate when different types of pipes are connected. PMID:28902154

  7. Rethinking the Elementary Science Methods Course: A Case for Content, Pedagogy, and Informal Science Education.

    ERIC Educational Resources Information Center

    Kelly, Janet

    2000-01-01

    Indicates the importance of preparing prospective elementary science teachers using different methods. Presents the theoretical and practical rationale for developing a constructivist-based elementary science methods course. Discusses the impact that student knowledge and understanding of science, as well as student attitudes, have on…

  8. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework

    PubMed Central

    Lüdecke, Daniel

    2014-01-01

    Introduction Health care providers seek to improve patient-centred care. Due to fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms that ensure sufficient consideration of patient centredness. Theory and methods Seventeen qualitative interviews were conducted in hospitals of metropolitan areas in northern Germany. The documentary method, embedded into a systems theoretical framework, was used to describe and analyse the data and to provide an insight into the specific perception of organisational behaviour in integrated care. Results The findings suggest that integrated care partnerships rely on networks based on professional autonomy in the context of reliability. The relationships of network partners are heavily based on informality. This correlates with a systems theoretical conception of organisations, which are assumed autonomous in their decision-making. Conclusion and discussion Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended. PMID:25411573

  9. Development of Support Service for Prevention and Recovery from Dementia and Science of Lethe

    NASA Astrophysics Data System (ADS)

    Otake, Mihoko

    The purpose of this study is to explore a service design method through the development of a support service for the prevention of and recovery from dementia, towards a science of lethe. We designed and implemented a conversation support service via the coimagination method, based on the multiscale service design method; both were proposed by the author. The multiscale service model consists of tool, event, human, network, style and rule, and service elements at different scales are developed according to the model. Interactive conversation supported by the coimagination method activates cognitive functions so as to prevent the progress of dementia. This paper proposes theoretical bases for the science of lethe: first, the relationship between the coimagination method and three cognitive functions (division of attention, planning, and episodic memory) which decline in mild cognitive impairment; second, a thought state transition model during conversation, which describes cognitive enhancement via interactive communication; third, a Set Theoretical Measure of Interaction, proposed for evaluating the effectiveness of conversation for cognitive enhancement. A simulation result suggests that ideas which cannot be explored by each speaker alone are explored during interactive conversation. Finally, the coimagination method is compared with reminiscence therapy, and the possibility of combining them is discussed.

  10. Reducing the time-lag between onset of chest pain and seeking professional medical help: a theory-based review

    PubMed Central

    2013-01-01

    Background Research suggests that there are a number of factors which can be associated with delay in a patient seeking professional help following chest pain, including demographic and social factors. These factors may have an adverse impact on the efficacy of interventions which to date have had limited success in improving patient action times. Theory-based methods of review are becoming increasingly recognised as important additions to conventional systematic review methods. They can be useful to gain additional insights into the characteristics of effective interventions by uncovering complex underlying mechanisms. Methods This paper describes the further analysis of research papers identified in a conventional systematic review of published evidence. The aim of this work was to investigate the theoretical frameworks underpinning studies exploring the issue of why people having a heart attack delay seeking professional medical help. The study used standard review methods to identify papers meeting the inclusion criterion, and carried out a synthesis of data relating to theoretical underpinnings. Results Thirty-six papers from the 53 in the original systematic review referred to a particular theoretical perspective, or contained data which related to theoretical assumptions. The most frequently mentioned theory was the self-regulatory model of illness behaviour. Papers reported the potential significance of aspects of this model including different coping mechanisms, strategies of denial and varying models of treatment seeking. Studies also drew attention to the potential role of belief systems, applied elements of attachment theory, and referred to models of maintaining integrity, ways of knowing, and the influence of gender. Conclusions The review highlights the need to examine an individual’s subjective experience of and response to health threats, and confirms the gap between knowledge and changed behaviour. Interventions face key challenges if they are to influence patient perceptions regarding seriousness of symptoms; varying processes of coping; and obstacles created by patient perceptions of their role and responsibilities. A theoretical approach to review of these papers provides additional insight into the assumptions underpinning interventions, and illuminates factors which may impact on their efficacy. The method thus offers a useful supplement to conventional systematic review methods. PMID:23388093

  11. It's All in the Mind--Program It for Success.

    ERIC Educational Resources Information Center

    Jacover, Neal

    1980-01-01

    A combination of Eastern philosophy and cybernetics leads to a method of improving athletic skills (especially basketball) which is based on the theoretical basis of Maltz's philosophy of successful goal attainment. The method is relevant to the total educational process and to counselors. (SB)

  12. Validation of the theoretical domains framework for use in behaviour change and implementation research.

    PubMed

    Cane, James; O'Connor, Denise; Michie, Susan

    2012-04-24

    An integrative theoretical framework, developed for cross-disciplinary implementation and other behaviour change research, has been applied across a wide range of clinical situations. This study tests the validity of this framework. Validity was investigated by behavioural experts sorting 112 unique theoretical constructs using closed and open sort tasks. The extent of replication was tested by Discriminant Content Validation and Fuzzy Cluster Analysis. There was good support for a refinement of the framework comprising 14 domains of theoretical constructs (average silhouette value 0.29): 'Knowledge', 'Skills', 'Social/Professional Role and Identity', 'Beliefs about Capabilities', 'Optimism', 'Beliefs about Consequences', 'Reinforcement', 'Intentions', 'Goals', 'Memory, Attention and Decision Processes', 'Environmental Context and Resources', 'Social Influences', 'Emotions', and 'Behavioural Regulation'. The refined Theoretical Domains Framework has a strengthened empirical base and provides a method for theoretically assessing implementation problems, as well as professional and other health-related behaviours as a basis for intervention development.

  13. Energetics of protein-DNA interactions.

    PubMed

    Donald, Jason E; Chen, William W; Shakhnovich, Eugene I

    2007-01-01

    Protein-DNA interactions are vital for many processes in living cells, especially transcriptional regulation and DNA modification. To further our understanding of these important processes on the microscopic level, it is necessary that theoretical models describe the macromolecular interaction energetics accurately. While several methods have been proposed, there has not been a careful comparison of how well the different methods are able to predict biologically important quantities such as the correct DNA binding sequence, total binding free energy and free energy changes caused by DNA mutation. In addition to carrying out the comparison, we present two important theoretical models developed initially in protein folding that have not yet been tried on protein-DNA interactions. In the process, we find that the results of these knowledge-based potentials show a strong dependence on the interaction distance and the derivation method. Finally, we present a knowledge-based potential that gives comparable or superior results to the best of the other methods, including the molecular mechanics force field AMBER99.

  14. Theory of viscous transonic flow over airfoils at high Reynolds number

    NASA Technical Reports Server (NTRS)

    Melnik, R. E.; Chow, R.; Mead, H. R.

    1977-01-01

    This paper considers viscous flows with unseparated turbulent boundary layers over two-dimensional airfoils at transonic speeds. Conventional theoretical methods are based on boundary layer formulations which do not account for the effect of the curved wake and static pressure variations across the boundary layer in the trailing edge region. In this investigation an extended viscous theory is developed that accounts for both effects. The theory is based on a rational analysis of the strong turbulent interaction at airfoil trailing edges. The method of matched asymptotic expansions is employed to develop formal series solutions of the full Reynolds equations in the limit of Reynolds numbers tending to infinity. Procedures are developed for combining the local trailing edge solution with numerical methods for solving the full potential flow and boundary layer equations. Theoretical results indicate that conventional boundary layer methods account for only about 50% of the viscous effect on lift, the remaining contribution arising from wake curvature and normal pressure gradient effects.

  15. Applications of graph theory in protein structure identification

    PubMed Central

    2011-01-01

    There is a growing interest in the identification of proteins on the proteome-wide scale. Among the different kinds of protein structure identification methods, graph-theoretic methods are especially incisive. Due to their lower costs, higher effectiveness and many other advantages, they have drawn increasing attention from researchers. Specifically, graph-theoretic methods have been widely used in homology identification, side-chain cluster identification, peptide sequencing and so on. This paper reviews several methods in solving protein structure identification problems using graph theory. We mainly introduce classical methods and mathematical models including homology modeling based on clique finding, identification of side-chain clusters in protein structures upon graph spectrum, and de novo peptide sequencing via tandem mass spectrometry using the spectrum graph model. In addition, concluding remarks and future priorities of each method are given. PMID:22165974

  16. EXPERIMENTAL AND THEORETICAL EVALUATIONS OF OBSERVATIONAL-BASED TECHNIQUES

    EPA Science Inventory

    Observational Based Methods (OBMs) can be used by EPA and the States to develop reliable ozone controls approaches. OBMs use actual measured concentrations of ozone, its precursors, and other indicators to determine the most appropriate strategy for ozone control. The usual app...

  17. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    NASA Astrophysics Data System (ADS)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but with competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  18. The lack of theoretical support for using person trade-offs in QALY-type models.

    PubMed

    Østerdal, Lars Peter

    2009-10-01

    Considerable support for the use of person trade-off methods to assess the quality-adjustment factor in quality-adjusted life years (QALY) models has been expressed in the literature. The WHO has occasionally used similar methods to assess the disability weights for calculation of disability-adjusted life years (DALYs). This paper discusses the theoretical support for the use of person trade-offs in QALY-type measurement of (changes in) population health. It argues that measures of this type based on such quality-adjustment factors almost always violate the Pareto principle, and so lack normative justification.

  19. Theoretical investigation of gas-surface interactions

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.

    1989-01-01

    Four reprints are presented from four projects which are to be published in a refereed journal. Two are of interest to us and are presented herein. One is a description of a very detailed theoretical study of four anionic hydrogen bonded complexes. The other is a detailed study of the first generally reliable diagnostic for determining the quality of results that may be expected from single reference based electron correlation methods.

  10. Nurses' adherence to the Kangaroo Care Method: support for nursing care management

    PubMed Central

    da Silva, Laura Johanson; Leite, Josete Luzia; Scochi, Carmen Gracinda Silvan; da Silva, Leila Rangel; da Silva, Thiago Privado

    2015-01-01

    OBJECTIVE: construct an explanatory theoretical model about nurses' adherence to the Kangaroo Care Method at the Neonatal Intensive Care Unit, based on the meanings and interactions for care management. METHOD: qualitative research, based on the reference framework of the Grounded Theory. Eight nurses were interviewed at a Neonatal Intensive Care Unit in the city of Rio de Janeiro. The comparative analysis of the data comprised the phases of open, axial and selective coding. A theoretical conditional-causal model was constructed. RESULTS: four main categories emerged that composed the analytic paradigm: Giving one's best to the Kangaroo Method; Working with the complexity of the Kangaroo Method; Finding (de)motivation to apply the Kangaroo Method; and Facing the challenges for the adherence to and application of the Kangaroo Method. CONCLUSIONS: the central phenomenon revealed that each nurse and team professional has a role of multiplying values and practices that may or may not be constructive, potentially influencing the (dis)continuity of the Kangaroo Method at the Neonatal Intensive Care Unit. The findings can be used to outline management strategies that go beyond the courses and training and guarantee the strengthening of the care model. PMID:26155013

  1. Signal-to-noise ratio comparison of encoding methods for hyperpolarized noble gas MRI

    NASA Technical Reports Server (NTRS)

    Zhao, L.; Venkatesh, A. K.; Albert, M. S.; Panych, L. P.

    2001-01-01

    Some non-Fourier encoding methods such as wavelet and direct encoding use spatially localized bases. The spatial localization feature of these methods enables optimized encoding for improved spatial and temporal resolution during dynamically adaptive MR imaging. These spatially localized bases, however, have inherently reduced image signal-to-noise ratio compared with Fourier or Hadamard encoding for proton imaging. Hyperpolarized noble gases, on the other hand, have quite different MR properties compared to protons, primarily the nonrenewability of the signal. It could be expected, therefore, that the characteristics of image SNR with respect to encoding method will also be very different for hyperpolarized noble gas MRI than for proton MRI. In this article, hyperpolarized noble gas image SNRs of different encoding methods are compared theoretically using a matrix description of the encoding process. It is shown that image SNR for hyperpolarized noble gas imaging is maximized for any orthonormal encoding method. Methods are then proposed for designing RF pulses to achieve normalized encoding profiles using Fourier, Hadamard, wavelet, and direct encoding methods for hyperpolarized noble gases. Theoretical results are confirmed with hyperpolarized noble gas MRI experiments. Copyright 2001 Academic Press.
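    The central linear-algebra claim, that any orthonormal encoding yields the same reconstructed noise level, can be checked with a small matrix demo; this sketch ignores the flip-angle bookkeeping specific to hyperpolarized gases, and the basis size is an illustrative assumption:

    ```python
    import numpy as np

    def noise_gain(E):
        """Per-voxel noise variance after decoding y = E @ x + n with
        x_hat = pinv(E) @ y, assuming unit-variance white measurement noise."""
        P = np.linalg.pinv(E)
        return np.diag(P @ P.conj().T).real

    n = 8
    H = np.array([[1.0]])
    while H.shape[0] < n:                    # Sylvester Hadamard construction
        H = np.block([[H, H], [H, -H]])
    H /= np.sqrt(n)                          # normalise rows -> orthonormal

    print(noise_gain(np.eye(n)))             # pixel-by-pixel encoding: all ones
    print(noise_gain(H))                     # normalised Hadamard: all ones too
    ```

    For any orthonormal E, pinv(E) = Eᵀ and Eᵀ E = I, so the decoded noise variance is identical across encodings; non-orthonormal localized bases inflate some diagonal entries above 1, which is the proton-imaging SNR penalty the abstract refers to.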

  2. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. The reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequences should remain fixed, and the image restoration means is the inverse operation of an ill-posed problem without fixed rules. The super-resolution reconstruction ability for infrared images, the algorithm's application area, and the stability of the reconstruction algorithm are therefore limited. To this end, we propose a super-resolution reconstruction method based on compressed sensing in this paper. In the method, we select a Toeplitz matrix as the measurement matrix and realize it by the phase mask method. We study the complementary matching pursuit algorithm and select it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we make use of an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. The final image contrast and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to have wide application prospects.
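    The recovery step can be sketched with plain orthogonal matching pursuit, a simple stand-in for the paper's complementary matching pursuit; the Gaussian measurement matrix, sizes, and sparsity level below are illustrative assumptions (the paper instead uses a Toeplitz matrix realized by a phase mask):

    ```python
    import numpy as np

    def omp(A, y, k):
        """Orthogonal matching pursuit: greedily recover a k-sparse x
        from compressed measurements y = A @ x."""
        residual = y.copy()
        support = []
        for _ in range(k):
            j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching atom
            if j not in support:
                support.append(j)
            # Re-solve least squares on the current support, update residual
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[support] = coef
        return x_hat

    rng = np.random.default_rng(1)
    n, m, k = 64, 48, 3                            # signal, measurements, sparsity
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
    x_true = np.zeros(n)
    x_true[[5, 20, 40]] = [1.0, -2.0, 1.5]         # sparse "scene"
    x_hat = omp(A, A @ x_true, k)
    print(np.linalg.norm(x_hat - x_true))
    ```

    The same recovery logic applies per block when the area focal plane array acquires several coded measurements in one shot.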

  3. Designing a Double-Pole Nanoscale Relay Based on a Carbon Nanotube: A Theoretical Study

    NASA Astrophysics Data System (ADS)

    Mu, Weihua; Ou-Yang, Zhong-can; Dresselhaus, Mildred S.

    2017-08-01

    We theoretically investigate a novel and powerful double-pole nanoscale relay based on a carbon nanotube, one of the nanoelectromechanical switches able to work under strong nuclear radiation, and analyze the physical mechanism of the stages of its operation, including "pull in," "connection," and "pull back," as well as the key factors influencing the efficiency of the device. We explicitly provide analytical expressions for the two important operation voltages, V_pull-in and V_pull-back, thereby clearly showing their dependence on the material properties and geometry of the present device, using an analytical method from basic physics that avoids complex numerical calculations. Our method is easy to use in preparing a design guide for fabricating the present device and other nanoelectromechanical devices.
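    For orientation, the classic lumped parallel-plate model gives a pull-in voltage of the same qualitative form as such analytical expressions; the formula and the parameter values below are a textbook stand-in, not the paper's nanotube-specific result:

    ```python
    import math

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def pull_in_voltage(k, gap, area):
        """Lumped-model electrostatic pull-in estimate:
        V_pi = sqrt(8 k g^3 / (27 eps0 A)),
        with spring constant k (N/m), gap g (m), electrode area A (m^2)."""
        return math.sqrt(8.0 * k * gap ** 3 / (27.0 * EPS0 * area))

    # Illustrative numbers only: stiffer springs and wider gaps raise V_pi
    print(pull_in_voltage(k=0.1, gap=100e-9, area=1e-14))
    ```

    The cubic gap dependence and inverse area dependence explain why geometry dominates the design trade-offs for operating voltage in such switches.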

  4. On event-based optical flow detection

    PubMed Central

    Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko

    2015-01-01

    Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high dynamic range, and sparse sensing. This stands in contrast to whole-image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection ranging from gradient-based methods through plane-fitting to filter-based methods and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane-like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter-based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion related activations. PMID:25941470
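    The plane-fitting approach, fitting t = a·x + b·y + c to a local event cloud and reading the flow off the gradient of t, can be sketched as follows; the synthetic moving edge and its speed are illustrative assumptions:

    ```python
    import numpy as np

    def plane_fit_flow(events):
        """Fit t = a*x + b*y + c to an event cloud of rows (x, y, t) by least
        squares and return the normal-flow vector v = grad(t) / |grad(t)|^2."""
        x, y, t = events.T
        A = np.column_stack([x, y, np.ones_like(x)])
        (a, b, _), *_ = np.linalg.lstsq(A, t, rcond=None)
        g2 = a * a + b * b
        return np.array([a, b]) / g2

    # Synthetic edge moving along +x at 5 px/s: the pixel at x fires at t = x / 5
    xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
    events = np.column_stack([xs.ravel(), ys.ravel(), xs.ravel() / 5.0])
    print(plane_fit_flow(events))  # ≈ [5, 0]
    ```

    In a real AER stream the fit is done over a small spatio-temporal neighbourhood with robust outlier rejection, since events from other structures violate the single-plane assumption.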

  5. Caprylate Salts Based on Amines as Volatile Corrosion Inhibitors for Metallic Zinc: Theoretical and Experimental Studies

    PubMed Central

    Valente, Marco A. G.; Teixeira, Deiver A.; Azevedo, David L.; Feliciano, Gustavo T.; Benedetti, Assis V.; Fugivara, Cecílio S.

    2017-01-01

    The interaction of volatile corrosion inhibitors (VCI), caprylate salt derivatives from amines, with zinc metallic surfaces is assessed by density functional theory (DFT) computer simulations, electrochemical impedance (EIS) measurements and humid chamber tests. The results obtained by the different methods were compared, and linear correlations were obtained between theoretical and experimental data. The correlations between experimental and theoretical results showed that the molecular size is the determining factor in the inhibition efficiency. The models used and experimental results indicated that dicyclohexylamine caprylate is the most efficient inhibitor. PMID:28620602

  6. Forcing scheme analysis for the axisymmetric lattice Boltzmann method under incompressible limit.

    PubMed

    Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Chen, Jie; Yin, Linmao; Chew, Jia Wei

    2017-04-01

    Because the standard lattice Boltzmann (LB) method is proposed for Cartesian Navier-Stokes (NS) equations, additional source terms are necessary in the axisymmetric LB method for representing the axisymmetric effects. Therefore, the accuracy and applicability of the axisymmetric LB models depend on the forcing schemes adopted for discretization of the source terms. In this study, three forcing schemes, namely, the trapezium rule based scheme, the direct forcing scheme, and the semi-implicit centered scheme, are analyzed theoretically by investigating their derived macroscopic equations in the diffusive scale. Particularly, the finite difference interpretation of the standard LB method is extended to the LB equations with source terms, and then the accuracy of different forcing schemes is evaluated for the axisymmetric LB method. Theoretical analysis indicates that the discrete lattice effects arising from the direct forcing scheme are part of the truncation error terms and thus would not affect the overall accuracy of the standard LB method with general force term (i.e., only the source terms in the momentum equation are considered), but lead to incorrect macroscopic equations for the axisymmetric LB models. On the other hand, the trapezium rule based scheme and the semi-implicit centered scheme both have the advantage of avoiding the discrete lattice effects and recovering the correct macroscopic equations. Numerical tests applied for validating the theoretical analysis show that both the numerical stability and the accuracy of the axisymmetric LB simulations are affected by the direct forcing scheme, which indicate that forcing schemes free of the discrete lattice effects are necessary for the axisymmetric LB method.

  7. Analysis and testing of a new method for drop size measurement using laser scatter interferometry

    NASA Technical Reports Server (NTRS)

    Bachalo, W. D.; Houser, M. J.

    1984-01-01

    Research was conducted on a laser light scatter detection method for measuring the size and velocity of spherical particles. The method is based upon the measurement of the interference fringe pattern produced by spheres passing through the intersection of two laser beams. A theoretical analysis of the method was carried out using the geometrical optics theory. Experimental verification of the theory was obtained by using monodisperse droplet streams. Several optical configurations were tested to identify all of the parametric effects upon the size measurements. Both off-axis forward and backscatter light detection were utilized. Simulated spray environments and fuel spray nozzles were used in the evaluation of the method. The measurements of the monodisperse drops showed complete agreement with the theoretical predictions. The method was demonstrated to be independent of the beam intensity and extinction resulting from the surrounding drops. Signal processing concepts were considered and a method was selected for development.

  8. Evidence-Based Administration for Decision Making in the Framework of Knowledge Strategic Management

    ERIC Educational Resources Information Center

    Del Junco, Julio Garcia; Zaballa, Rafael De Reyna; de Perea, Juan Garcia Alvarez

    2010-01-01

    Purpose: This paper seeks to present a model based on evidence-based administration (EBA), which aims to facilitate the creation, transformation and diffusion of knowledge in learning organizations. Design/methodology/approach: A theoretical framework is proposed based on EBA and the case method. Accordingly, an empirical study was carried out in…

  9. Individual behavioral phenotypes: an integrative meta-theoretical framework. Why "behavioral syndromes" are not analogs of "personality".

    PubMed

    Uher, Jana

    2011-09-01

    Animal researchers are increasingly interested in individual differences in behavior. Their interpretation as meaningful differences in behavioral strategies stable over time and across contexts, adaptive, heritable, and acted upon by natural selection has triggered new theoretical developments. However, the analytical approaches used to explore behavioral data still address population-level phenomena, and statistical methods suitable to analyze individual behavior are rarely applied. I discuss fundamental investigative principles and analytical approaches to explore whether, in what ways, and under which conditions individual behavioral differences are actually meaningful. I elaborate the meta-theoretical ideas underlying common theoretical concepts and integrate them into an overarching meta-theoretical and methodological framework. This unravels commonalities and differences, and shows that assumptions of analogy to concepts of human personality are not always warranted and that some theoretical developments may be based on methodological artifacts. Yet, my results also highlight possible directions for new theoretical developments in animal behavior research. Copyright © 2011 Wiley Periodicals, Inc.

  10. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics

    PubMed Central

    2012-01-01

Background Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered a powerful and versatile representation formalism and have gained growing consideration, especially from the image processing and computer vision community. Methods The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. Results The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The complexity of the employed graph matching has been reduced compared to state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. Conclusion The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for use in applications requiring Content-Based Image Retrieval (CBIR) in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. Virtual Slides The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923. PMID:23035717
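As an illustration of the general idea only (a toy sketch, not the paper's algorithm): attributed region-adjacency graphs, a node-compatibility pre-requisite criterion that prunes the search, and a tree search scoring preserved adjacencies. All graphs, labels, and the scoring rule below are hypothetical.

```python
# Toy attributed graphs: each node carries a tissue-class label; edge sets
# record region adjacency. Labels and structure are invented for illustration.
G1 = {"a": "nucleus", "b": "stroma", "c": "lumen"}
E1 = {("a", "b"), ("b", "c")}
G2 = {"x": "nucleus", "y": "stroma", "z": "fat"}
E2 = {("x", "y")}

def compatible(u, v):
    # Pre-requisite criterion: only nodes with equal labels may be matched,
    # which prunes the matching tree before any structural comparison.
    return G1[u] == G2[v]

def best_match(assigned, remaining):
    # Depth-first tree search over node assignments; the score counts
    # adjacencies of G1 preserved under the mapping into G2.
    if not remaining:
        return sum(1 for (u1, u2) in E1
                   if (assigned.get(u1), assigned.get(u2)) in E2
                   or (assigned.get(u2), assigned.get(u1)) in E2)
    u = remaining[0]
    best = best_match(assigned, remaining[1:])  # option: leave u unmatched
    for v in G2:
        if v not in assigned.values() and compatible(u, v):
            assigned[u] = v
            best = max(best, best_match(assigned, remaining[1:]))
            del assigned[u]
    return best

score = best_match({}, list(G1))  # best number of preserved adjacencies
```

Here the search matches a→x and b→y (preserving one adjacency), while "c" is never matched because no node of G2 carries a compatible label.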

  11. Approximation of reliability of direct genomic breeding values

    USDA-ARS?s Scientific Manuscript database

    Two methods to efficiently approximate theoretical genomic reliabilities are presented. The first method is based on the direct inverse of the left hand side (LHS) of mixed model equations. It uses the genomic relationship matrix for a small subset of individuals with the highest genomic relationshi...

  12. Approaching the Limit in Atomic Spectrochemical Analysis.

    ERIC Educational Resources Information Center

    Hieftje, Gary M.

    1982-01-01

    To assess the ability of current analytical methods to approach the single-atom detection level, theoretical and experimentally determined detection levels are presented for several chemical elements. A comparison of these methods shows that the most sensitive atomic spectrochemical technique currently available is based on emission from…

  13. Space shuttle booster multi-engine base flow analysis

    NASA Technical Reports Server (NTRS)

    Tang, H. H.; Gardiner, C. R.; Anderson, W. A.; Navickas, J.

    1972-01-01

A comprehensive review of currently available techniques pertinent to several prominent aspects of the base thermal problem of the space shuttle booster is given along with a brief review of experimental results. A tractable engineering analysis, capable of predicting the power-on base pressure, base heating, and other base thermal environmental conditions, such as base gas temperature, is presented and used for an analysis of various space shuttle booster configurations. The analysis consists of a rational combination of theoretical treatments of the prominent flow interaction phenomena in the base region. These theories consider jet mixing, plume flow, axisymmetric flow effects, base injection, recirculating flow dynamics, and various modes of heat transfer. Such effects as initial boundary layer expansion at the nozzle lip, reattachment, recompression, choked vent flow, and nonisoenergetic mixing processes are included in the analysis. A unified method was developed and programmed to numerically obtain compatible solutions for the various flow field components in both flight and ground test conditions. A preliminary prediction for a 12-engine space shuttle booster base thermal environment was obtained for a typical trajectory history. Theoretical predictions were also obtained for some clustered-engine experimental conditions. Results indicate good agreement between the data and theoretical predictions.

  14. The construction of arbitrary order ERKN methods based on group theory for solving oscillatory Hamiltonian systems with applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Lijie, E-mail: bxhanm@126.com; Wu, Xinyuan, E-mail: xywu@nju.edu.cn

In general, extended Runge–Kutta–Nyström (ERKN) methods are more effective than traditional Runge–Kutta–Nyström (RKN) methods in dealing with oscillatory Hamiltonian systems. However, the theoretical analysis for ERKN methods, such as the order conditions, the symplectic conditions and the symmetric conditions, becomes much more complicated than that for RKN methods. Therefore, it is a bottleneck to construct high-order ERKN methods efficiently. In this paper, we first establish the ERKN group Ω for ERKN methods and the RKN group G for RKN methods, respectively. We then rigorously show that ERKN methods are a natural extension of RKN methods, that is, there exists an epimorphism η of the ERKN group Ω onto the RKN group G. This epimorphism gives a global insight into the structure of the ERKN group by the analysis of its kernel and the corresponding RKN group G. Meanwhile, we establish a particular mapping φ of G into Ω so that each image element is an ideal representative element of the congruence class in Ω. Furthermore, an elementary theoretical analysis shows that this map φ can preserve many structure-preserving properties, such as the order, the symmetry and the symplecticity. From the epimorphism η together with its section φ, we may gain knowledge about the structure of the ERKN group Ω via the RKN group G. In light of the theoretical analysis of this paper, we obtain high-order structure-preserving ERKN methods in an effective way for solving oscillatory Hamiltonian systems. Numerical experiments are carried out and the results are very promising, which strongly support our theoretical analysis presented in this paper.
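Schematically, the algebraic relationship stated in the abstract can be written as:

```latex
\eta : \Omega \twoheadrightarrow G, \qquad
\varphi : G \longrightarrow \Omega, \qquad
\eta \circ \varphi = \mathrm{id}_G ,
```

so that every RKN method is the image of a congruence class of ERKN methods, and the section φ selects a representative of that class which preserves order, symmetry, and symplecticity.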

  15. Perinatal Bereavement: A Principle-based Concept Analysis

    PubMed Central

    FENSTERMACHER, Kimberly; HUPCEY, Judith E.

    2013-01-01

Aim This paper is a report of an analysis of the concept of perinatal bereavement. Background The concept of perinatal bereavement emerged in the scientific literature during the 1970s. Perinatal bereavement is a practice-based concept, although it is not well defined in the scientific literature and is often intermingled with the concepts of mourning and grief. Design Concept Analysis. Data Sources Using the term ‘perinatal bereavement’ and limits of only English and human, PubMed and CINAHL were searched to yield 278 available references dating from 1974 – 2011. Articles specific to the experience of perinatal bereavement were reviewed. The final data set was 143 articles. Review Methods The methods of principle-based concept analysis were used. Results reveal conceptual components (antecedents, attributes and outcomes) which are delineated to create a theoretical definition of perinatal bereavement. Results The concept is epistemologically immature, with few explicit definitions to describe the phenomenon. Inconsistency in conceptual meaning threatens the construct validity of measurement tools for perinatal bereavement and contributes to incongruent theoretical definitions. This has implications for both nursing science (how the concept is studied and theoretically integrated) and clinical practice (timing and delivery of support interventions). Conclusions Perinatal bereavement is a multifaceted global phenomenon that follows perinatal loss. Lack of conceptual clarity and lack of a clearly articulated conceptual definition impede the synthesis and translation of research findings into practice. A theoretical definition of perinatal bereavement is offered as a platform for researchers to advance the concept through research and theory development. PMID:23458030

  16. Mapping functional connectivity

    Treesearch

    Peter Vogt; Joseph R. Ferrari; Todd R. Lookingbill; Robert H. Gardner; Kurt H. Riitters; Katarzyna Ostapowicz

    2009-01-01

    An objective and reliable assessment of wildlife movement is important in theoretical and applied ecology. The identification and mapping of landscape elements that may enhance functional connectivity is usually a subjective process based on visual interpretations of species movement patterns. New methods based on mathematical morphology provide a generic, flexible,...

  17. Design and simulation of GaN based Schottky betavoltaic nuclear micro-battery.

    PubMed

    San, Haisheng; Yao, Shulin; Wang, Xiang; Cheng, Zaijun; Chen, Xuyuan

    2013-10-01

The current paper presents a theoretical analysis of a Ni-63 nuclear micro-battery based on a wide-band gap semiconductor GaN thin-film covered with thin Ni/Au films to form a Schottky barrier for carrier separation. The total energy deposition in GaN was calculated using Monte Carlo methods by taking into account the full beta spectral energy, which provided an optimal design for the Schottky barrier width. The calculated results show that an 8 μm thick Schottky barrier can collect about 95% of the incident beta particle energy. Considering the actual limitations of the current GaN growth technique, a Fe-doped compensation technique by the MOCVD method can be used to realize n-type GaN with a carrier concentration of 1×10^15 cm^-3, by which a GaN based Schottky betavoltaic micro-battery can achieve an energy conversion efficiency of 2.25% based on the theoretical calculations of semiconductor device physics. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Coupling of ultrasound-assisted extraction and expanded bed adsorption for simplified medicinal plant processing and its theoretical model: extraction and enrichment of ginsenosides from Radix Ginseng as a case study.

    PubMed

    Mi, Jianing; Zhang, Min; Zhang, Hongyang; Wang, Yuerong; Wu, Shikun; Hu, Ping

    2013-02-01

A highly efficient and environmentally friendly method for the preparation of ginsenosides from Radix Ginseng by coupling ultrasound-assisted extraction with expanded bed adsorption is described. Based on the optimal extraction conditions screened by response surface methodology, ginsenosides were extracted and adsorbed, then eluted by the two-step elution protocol. The comparison between the coupled ultrasound-assisted extraction with expanded bed adsorption method and the conventional method showed that the former was better in both process efficiency and greenness. The process efficiency and energy efficiency of the coupled method were 1.4-fold and 18.5-fold those of the conventional method, while the environmental cost and CO2 emission of the conventional method were 12.9-fold and 17.0-fold those of the new method. Furthermore, a theoretical model for the extraction of the targets was derived. The results revealed that the theoretical model suitably described the process of preparing ginsenosides by the coupled ultrasound-assisted extraction and expanded bed adsorption system. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Theoretical estimates of exposure timescales of protein binding sites on DNA regulated by nucleosome kinetics.

    PubMed

    Parmar, Jyotsana J; Das, Dibyendu; Padinhateeri, Ranjith

    2016-02-29

    It is being increasingly realized that nucleosome organization on DNA crucially regulates DNA-protein interactions and the resulting gene expression. While the spatial character of the nucleosome positioning on DNA has been experimentally and theoretically studied extensively, the temporal character is poorly understood. Accounting for ATPase activity and DNA-sequence effects on nucleosome kinetics, we develop a theoretical method to estimate the time of continuous exposure of binding sites of non-histone proteins (e.g. transcription factors and TATA binding proteins) along any genome. Applying the method to Saccharomyces cerevisiae, we show that the exposure timescales are determined by cooperative dynamics of multiple nucleosomes, and their behavior is often different from expectations based on static nucleosome occupancy. Examining exposure times in the promoters of GAL1 and PHO5, we show that our theoretical predictions are consistent with known experiments. We apply our method genome-wide and discover huge gene-to-gene variability of mean exposure times of TATA boxes and patches adjacent to TSS (+1 nucleosome region); the resulting timescale distributions have non-exponential tails. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
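The distinction the abstract draws between static occupancy and exposure timescales can be made concrete with a two-state caricature (rates below are hypothetical, and the paper's model couples many nucleosomes): scaling both rates leaves the static exposed fraction unchanged while changing the continuous-exposure timescale tenfold.

```python
def site_stats(k_occlude, k_expose):
    """Two-state binding site: exposed <-> occluded with first-order rates (1/s).

    The static picture sees only the equilibrium exposed fraction; the kinetic
    picture sees how long each continuous exposure window lasts.
    """
    exposed_fraction = k_expose / (k_expose + k_occlude)  # static occupancy view
    mean_exposure_time = 1.0 / k_occlude                  # kinetic view, seconds
    return exposed_fraction, mean_exposure_time

# Same static exposure fraction, tenfold different exposure timescales:
frac_slow, t_slow = site_stats(k_occlude=2.0, k_expose=0.5)
frac_fast, t_fast = site_stats(k_occlude=20.0, k_expose=5.0)
```

A transcription factor that needs a sustained window of access would see these two sites very differently even though a static occupancy map cannot distinguish them.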

  20. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    NASA Astrophysics Data System (ADS)

    Figueroa, Aldo; Meunier, Patrice; Cuevas, Sergio; Villermaux, Emmanuel; Ramos, Eduardo

    2014-01-01

We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134-172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal Probability Density Functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, predicts scalar PDFs in agreement with numerical and experimental results. This model also indicates that the scalar PDFs are asymptotically close to log-normal at late stages, except for the large concentration levels, which correspond to low stretching factors.

  1. [Health assessment and economic assessment in health: introduction to the debate on the points of intersection].

    PubMed

    Sancho, Leyla Gomes; Dain, Sulamis

    2012-03-01

The study aims to infer the existence of a continuum between Health Assessment and Economic Assessment in Health, by highlighting points of intersection of these forms of appraisal. To achieve this, a review of the theoretical foundations, methods and approaches of both forms of assessment was conducted. It was based on the theoretical model of health evaluation as reported by Hartz et al. and economic assessment in health approaches reported by Brouwer et al. It was seen that there is a continuum between the theoretical model of evaluative research and the extrawelfarist approach for economic assessment in health, and between the normative theoretical model for health assessment and the welfarist approaches for economic assessment in health. However, in practice the assessment is still conducted using the normative theoretical model and with a welfarist approach.

  2. Waveguide-type optical circuits for recognition of optical 8QAM-coded label

    NASA Astrophysics Data System (ADS)

    Surenkhorol, Tumendemberel; Kishikawa, Hiroki; Goto, Nobuo; Gonchigsumlaa, Khishigjargal

    2017-10-01

Optical signal processing is expected to be applied in network nodes. In photonic routers, label recognition is one of the important functions. We have previously studied label recognition methods for on-off keying, binary phase-shift keying, quadrature phase-shift keying, and 16 quadrature amplitude modulation-coded labels. We propose a method based on waveguide circuits to recognize an optical eight quadrature amplitude modulation (8QAM)-coded label by simple passive optical signal processing. The recognition performance of the proposed method is theoretically analyzed and numerically simulated by the finite difference beam propagation method. The noise tolerance is discussed, and bit-error rate against optical signal-to-noise ratio is evaluated. The scalability of the proposed method is also discussed theoretically for two-symbol length 8QAM-coded labels.

  3. Estimating 3D positions and velocities of projectiles from monocular views.

    PubMed

    Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P

    2009-05-01

    In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
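The forward model such an estimator inverts is simple: ballistic motion in 3D projected through a pinhole camera. A minimal sketch follows (the focal length, initial position, velocity, and frame rate are all hypothetical; the paper's robust optimization formulation is not reproduced here):

```python
G = 9.81      # gravitational acceleration, m/s^2
F = 500.0     # hypothetical focal length, in pixels

def position(p0, v0, t):
    """3D projectile position under gravity (drag neglected)."""
    x0, y0, z0 = p0
    vx, vy, vz = v0
    return (x0 + vx * t, y0 + vy * t - 0.5 * G * t * t, z0 + vz * t)

def project(p):
    """Pinhole projection to image coordinates (u, v); z is the depth axis."""
    x, y, z = p
    return (F * x / z, F * y / z)

# Synthetic monocular observations of a projectile sampled at 30 fps.
# A nonlinear least-squares estimator would search for the (p0, v0) whose
# projected trajectory reproduces these image points.
obs = [project(position((0.0, 2.0, 10.0), (1.0, 3.0, 0.5), k / 30.0))
       for k in range(10)]
```

The ambiguity the paper analyzes arises because many (p0, v0) pairs can project to similar image tracks; the gravity constraint is what makes a unique solution possible under the stated minimum conditions.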

  4. Ottawa Model of Implementation Leadership and Implementation Leadership Scale: mapping concepts for developing and evaluating theory-based leadership interventions

    PubMed Central

    Gifford, Wendy; Graham, Ian D; Ehrhart, Mark G; Davies, Barbara L; Aarons, Gregory A

    2017-01-01

Purpose Leadership in health care is instrumental to creating a supportive organizational environment and positive staff attitudes for implementing evidence-based practices to improve patient care and outcomes. The purpose of this study is to demonstrate the alignment of the Ottawa Model of Implementation Leadership (O-MILe), a theoretical model for developing implementation leadership, with the Implementation Leadership Scale (ILS), an empirically validated tool for measuring implementation leadership. A secondary objective is to describe the methodological process for aligning concepts of a theoretical model with an independently established measurement tool for evaluating theory-based interventions. Methods Modified template analysis was conducted to deductively map items of the ILS onto concepts of the O-MILe. An iterative process was used in which the model and scale developers (n=5) appraised the relevance, conceptual clarity, and fit of each ILS item with the O-MILe concepts through individual feedback and group discussions until consensus was reached. Results All 12 items of the ILS correspond to at least one O-MILe concept, demonstrating compatibility of the ILS as a measurement tool for the O-MILe theoretical constructs. Conclusion The O-MILe provides a theoretical basis for developing implementation leadership, and the ILS is a compatible tool for measuring leadership based on the O-MILe. Used together, the O-MILe and ILS provide an evidence- and theory-based approach for developing and measuring leadership for implementing evidence-based practices in health care. Template analysis offers a convenient approach for determining the compatibility of independently developed evaluation tools to test theoretical models. PMID:29355212

  5. Theoretical analysis of a method for extracting the phase of a phase-amplitude modulated signal generated by a direct-modulated optical injection-locked semiconductor laser

    NASA Astrophysics Data System (ADS)

    Lee, Hwan; Cho, Jun-Hyung; Sung, Hyuk-Kee

    2017-05-01

    The phase modulation (PM) and amplitude modulation (AM) of optical signals can be achieved using a direct-modulated (DM) optical injection-locked (OIL) semiconductor laser. We propose and theoretically analyze a simple method to extract the phase component of a PM signal produced by a DM-OIL semiconductor laser. The pure AM component of the combined PM-AM signal can be isolated by square-law detection in a photodetector and can then be used to compensate for the PM-AM signal based on an optical homodyne method. Using the AM compensation technique, we successfully developed a simple and cost-effective phase extraction method applicable to the PM-AM optical signal of a DM-OIL semiconductor laser.
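As a minimal numeric sketch (not the authors' homodyne compensation scheme), the reason square-law detection isolates the AM component is that the photocurrent depends only on |E|^2, so the phase term cancels:

```python
import cmath

def pm_am_field(amplitude, phase):
    """Complex optical field carrying both AM (amplitude) and PM (phase)."""
    return amplitude * cmath.exp(1j * phase)

def square_law_detect(field):
    """Photodetector output is proportional to |E|^2, discarding the phase."""
    return abs(field) ** 2

# Two fields with identical AM but different PM give the same detector output,
# which is why the detected signal can serve as a pure AM reference.
i1 = square_law_detect(pm_am_field(0.8, 0.3))
i2 = square_law_detect(pm_am_field(0.8, 1.7))
```

This isolated AM reference is the quantity that, per the abstract, can then be used to compensate the combined PM-AM signal so that the phase component can be extracted.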

  6. A theoretical signal processing framework for linear diffusion MRI: Implications for parameter estimation and experiment design.

    PubMed

    Varadarajan, Divya; Haldar, Justin P

    2017-11-01

    The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
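The Fourier relationship the framework builds on can be illustrated with a toy 1-D example (an illustrative discretization, not the paper's estimator): sampling "q-space" amounts to taking the DFT of a discretized propagator, and a linear inverse-DFT reconstruction recovers it exactly when sampling is complete.

```python
import cmath

def dft(x):
    """Forward discrete Fourier transform: the 'measurement' operator."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    """Linear Fourier reconstruction: the inverse DFT."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

# Toy 1-D "ensemble average propagator": probabilities over displacements.
eap = [0.05, 0.20, 0.50, 0.20, 0.05]
signal = dft(eap)         # fully sampled diffusion data
recovered = idft(signal)  # linear estimate of the EAP
```

The framework's contribution is precisely to characterize what happens to such linear estimates when q-space is sampled incompletely or nonuniformly, where exact recovery no longer holds.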

  7. Low-loss electron energy loss spectroscopy: An atomic-resolution complement to optical spectroscopies and application to graphene

    DOE PAGES

    Kapetanakis, Myron; Zhou, Wu; Oxley, Mark P.; ...

    2015-09-25

Photon-based spectroscopies have played a central role in exploring the electronic properties of crystalline solids and thin films. They are a powerful tool for probing the electronic properties of nanostructures, but they are limited by lack of spatial resolution. On the other hand, electron-based spectroscopies, e.g., electron energy loss spectroscopy (EELS), are now capable of subangstrom spatial resolution. Core-loss EELS, a spatially resolved analog of x-ray absorption, has been used extensively in the study of inhomogeneous complex systems. In this paper, we demonstrate that low-loss EELS in an aberration-corrected scanning transmission electron microscope, which probes low-energy excitations, combined with a theoretical framework for simulating and analyzing the spectra, is a powerful tool to probe low-energy electron excitations with atomic-scale resolution. The theoretical component of the method combines density functional theory–based calculations of the excitations with dynamical scattering theory for the electron beam. We apply the method to monolayer graphene in order to demonstrate that atomic-scale contrast is inherent in low-loss EELS even in a perfectly periodic structure. The method is a complement to optical spectroscopy as it probes transitions entailing momentum transfer. The theoretical analysis identifies the spatial and orbital origins of excitations, holding the promise of ultimately becoming a powerful probe of the structure and electronic properties of individual point and extended defects in both crystals and inhomogeneous complex nanostructures. The method can be extended to probe magnetic and vibrational properties with atomic resolution.

  8. Step-scan T cell-based differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) for detection of ambient air contaminants

    NASA Astrophysics Data System (ADS)

    Liu, Lixian; Mandelis, Andreas; Huan, Huiting; Melnikov, Alexander

    2016-10-01

A step-scan differential Fourier transform infrared photoacoustic spectroscopy (DFTIR-PAS) system using a commercial FTIR spectrometer was developed theoretically and experimentally for air contaminant monitoring. The configuration comprises two identical, small-size and low-resonance-frequency T cells satisfying the conflicting requirements of low chopping frequency and limited space in the sample compartment. Carbon dioxide (CO2) IR absorption spectra were used to demonstrate the capability of the DFTIR-PAS method to detect ambient pollutants. A linear amplitude response to CO2 concentrations from 100 to 10,000 ppmv was observed, leading to a theoretical detection limit of 2 ppmv. The differential mode was able to suppress the coherent noise, giving the DFTIR-PAS method a better signal-to-noise ratio and lower theoretical detection limit than the single mode. The results indicate that it is possible to use step-scan DFTIR-PAS with T cells as a quantitative method for high sensitivity analysis of ambient contaminants.
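The reported linear amplitude response implies a standard calibration workflow. A sketch with hypothetical numbers follows: the calibration points, the blank-noise level, and the 3σ/slope detection-limit convention are all assumptions, chosen only to land near the reported 2 ppmv figure.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibration: CO2 concentration (ppmv) vs. PA amplitude (a.u.).
conc = [100, 1000, 5000, 10000]
amp = [0.01, 0.10, 0.50, 1.00]
slope, intercept = linear_fit(conc, amp)

sigma_blank = 6.7e-5                       # assumed rms noise of a blank run
detection_limit = 3 * sigma_blank / slope  # 3-sigma criterion, in ppmv
```

With these assumed values the slope is 1e-4 a.u./ppmv and the detection limit comes out near 2 ppmv; lowering the coherent noise (as the differential mode does) lowers sigma_blank and hence the limit directly.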

  9. On the calculation of the complex wavenumber of plane waves in rigid-walled low-Mach-number turbulent pipe flows

    NASA Astrophysics Data System (ADS)

    Weng, Chenyang; Boij, Susann; Hanifi, Ardeshir

    2015-10-01

A numerical method for calculating the wavenumbers of axisymmetric plane waves in rigid-walled low-Mach-number turbulent flows is proposed, which is based on solving the linearized Navier-Stokes equations with an eddy-viscosity model. In addition, theoretical models for the wavenumbers are reviewed, and the main effects (the viscothermal effects, the mean flow convection and refraction effects, the turbulent absorption, and the moderate compressibility effects) which may influence the sound propagation are discussed. Compared to the theoretical models, the proposed numerical method has the advantage of potentially including more effects in the computed wavenumbers. The numerical results of the wavenumbers are compared with the reviewed theoretical models, as well as experimental data from the literature. This comparison shows that the proposed numerical method can give satisfactory prediction of both the real part (phase shift) and the imaginary part (attenuation) of the measured wavenumbers, especially when the refraction effects or the turbulent absorption effects become important.

  10. Surface Segregation in Multicomponent Systems: Modeling of Surface Alloys and Alloy Surfaces

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Ferrante, John; Noebe, Ronald D.; Good, Brian; Honecy, Frank S.; Abel, Phillip

    1999-01-01

The study of surface segregation, although of great technological importance, has been largely restricted to experimental work due to limitations associated with theoretical methods. However, recent improvements in both first-principles and semi-empirical methods are opening the door to an array of new possibilities for surface scientists. We apply one of these techniques, the Bozzolo, Ferrante and Smith (BFS) method for alloys, which is particularly suitable for complex systems, to several aspects of the computational modeling of surfaces and segregation, including alloy surface segregation, structure and composition of alloy surfaces, and the formation of surface alloys. We conclude with the study of complex NiAl-based binary, ternary and quaternary thin films (with Ti, Cr and Cu additions to NiAl). Differences and similarities between bulk and surface compositions are discussed, illustrated by the results of Monte Carlo simulations. For some binary and ternary cases, the theoretical predictions are compared to experimental results, highlighting the accuracy and value of this developing theoretical tool.
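The Monte Carlo side of such studies can be caricatured in a few lines. This toy Metropolis model is not the BFS energetics: the segregation energy, temperature, and lattice sizes are invented numbers, and it only shows the generic effect that a favorable surface energy enriches the surface layer relative to the bulk.

```python
import random, math

random.seed(0)
E_SEG, KT, STEPS = -0.2, 0.05, 20000  # hypothetical energy (eV), kT, MC steps
surface = [0] * 20                    # surface sites: 1 = solute, 0 = solvent
bulk = [1] * 10 + [0] * 90            # bulk sites: 10 solute atoms to place

def energy(site_is_surface, occ):
    # Solute atoms lower the energy by |E_SEG| when sitting in the surface layer.
    return E_SEG if (site_is_surface and occ == 1) else 0.0

for _ in range(STEPS):
    i, j = random.randrange(len(surface)), random.randrange(len(bulk))
    # Energy change for swapping the occupants of surface site i and bulk site j.
    dE = (energy(True, bulk[j]) + energy(False, surface[i])) - \
         (energy(True, surface[i]) + energy(False, bulk[j]))
    if dE <= 0 or random.random() < math.exp(-dE / KT):
        surface[i], bulk[j] = bulk[j], surface[i]

surface_frac = sum(surface) / len(surface)  # solute fraction in surface layer
bulk_frac = sum(bulk) / len(bulk)           # solute fraction in bulk
```

After equilibration the surface solute fraction far exceeds the bulk fraction, the qualitative signature of segregation that realistic energetics like BFS quantify per element and per face.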

  11. Action First--Understanding Follows: An Expansion of Skills-Based Training Using Action Method.

    ERIC Educational Resources Information Center

    Martin, Colin

    1988-01-01

    This paper discusses the concept of training trainers in the skills they need to perform competently as trainers and how they follow their skills mastery with discussion on their new theoretical insight. Moreno's action method (psychodrama, sociodrama, sociometry, and role training) is the model used. (JOW)

  12. An Ecological Approach to the On-Line Assessment of Problem-Solving Paths: Principles and Applications.

    ERIC Educational Resources Information Center

    Shaw, Robert E.; And Others

    1997-01-01

    Proposes a theoretical framework for designing online-situated assessment tools for multimedia instructional systems. Uses a graphic method based on ecological psychology to monitor student performance through a learning activity. Explores the method's feasibility in case studies describing instructional systems teaching critical-thinking and…

  13. Numerical Calabi-Yau metrics

    NASA Astrophysics Data System (ADS)

    Douglas, Michael R.; Karp, Robert L.; Lukic, Sergio; Reinbacher, René

    2008-03-01

    We develop numerical methods for approximating Ricci flat metrics on Calabi-Yau hypersurfaces in projective spaces. Our approach is based on finding balanced metrics and builds on recent theoretical work by Donaldson. We illustrate our methods in detail for a one parameter family of quintics. We also suggest several ways to extend our results.
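In one common convention, the balanced metric underlying this approach arises as a fixed point of Donaldson's T-operator acting on Hermitian metrics $G$ on the space of sections $H^0(X, L^k)$ with basis $\{s_\alpha\}$ (written schematically here):

```latex
T(G)_{\alpha\bar\beta}
  = \frac{N}{\operatorname{vol}(X)}
    \int_X \frac{s_\alpha\, \bar s_\beta}
               {\sum_{\gamma,\delta} G^{\gamma\bar\delta}\, s_\gamma\, \bar s_\delta}\, d\mu,
\qquad
T(G) = G \ \text{(balanced condition)},
```

and the iteration $G_{n+1} = T(G_n)$ converges to the balanced metric, whose associated Fubini-Study-type Kähler metric approximates the Ricci-flat metric as $k$ grows.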

  14. A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods

    ERIC Educational Resources Information Center

    Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan

    2008-01-01

    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…
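One of the theoretically based measures mentioned above can be sketched concretely: the Bayesian Information Criterion, a large-sample approximation related to the Bayes factor, trades goodness of fit against parameter count. The likelihood values below are hypothetical.

```python
import math

def bic(log_likelihood, k, n):
    """Bayesian Information Criterion: -2 log L + k log n; lower is better."""
    return -2.0 * log_likelihood + k * math.log(n)

# Hypothetical fits to n = 100 observations: model B fits slightly better
# but spends four extra free parameters, so BIC prefers the simpler model A.
n = 100
bic_a = bic(log_likelihood=-120.0, k=2, n=n)
bic_b = bic(log_likelihood=-118.0, k=6, n=n)
```

The penalty term k·log(n) is what distinguishes such criteria from raw fit measures; Bayes factors and minimum description length make the same fit-versus-complexity trade-off on different theoretical grounds.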

  15. Algorithms that Defy the Gravity of Learning Curve

    DTIC Science & Technology

    2017-04-28

three nearest neighbour-based anomaly detectors, i.e., an ensemble of nearest neighbours, a recent nearest neighbour-based ensemble method called iNNE ... streams. Note that the change in sample size does not alter the geometrical data characteristics discussed here. 3.1 Experimental Methodology ... need to be answered. 3.6 Comparison with conventional ensemble methods Given the theoretical results, the third aim of this project (i.e., identify the

  16. Theoretical Background and Prognostic Modeling for Benchmarking SHM Sensors for Composite Structures

    DTIC Science & Technology

    2010-10-01

minimum flaw size can be detected by the existing SHM-based monitoring methods. Sandwich panels with foam, WebCore and honeycomb structures were considered for use in this study. Whether it be hat stiffened, corrugated sandwich, honeycomb sandwich, or foam filled sandwich, all composite structures have one basic handicap in ... Eigenmode frequency

  17. Heterogeneity among violence-exposed women: applying person-oriented research methods.

    PubMed

    Nurius, Paula S; Macy, Rebecca J

    2008-03-01

    Variability of experience and outcomes among violence-exposed people poses considerable challenges to developing effective prevention and treatment protocols. To address these needs, the authors present an approach to research and a class of methodologies referred to as person oriented. Person-oriented tools support assessment of meaningful patterns among people that distinguish one group from another, subgroups for whom different interventions are indicated. The authors review the conceptual base of person-oriented methods, outline their distinction from more familiar variable-oriented methods, present descriptions of selected methods as well as empirical applications of person-oriented methods germane to violence exposure, and conclude with discussion of implications for future research and translation between research and practice. The authors focus on violence against women as a population, drawing on stress and coping theory as a theoretical framework. However, person-oriented methods hold utility for investigating diversity among violence-exposed people's experiences and needs across populations and theoretical foundations.
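    Person-oriented analyses typically group whole response profiles rather than single variables (e.g. via cluster or latent class analysis). A minimal k-means sketch, purely illustrative and not the authors' procedure, shows the idea of recovering subgroups from multivariate profiles:

```python
import random

def kmeans(profiles, k, n_iter=50, seed=0):
    """Minimal k-means: group people by their whole response profiles
    (tuples of scores), illustrating person-oriented subgrouping."""
    rng = random.Random(seed)
    centers = rng.sample(profiles, k)
    for _ in range(n_iter):
        groups = [[] for _ in range(k)]
        for p in profiles:
            # assign each profile to its nearest centre (squared distance)
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # recompute each centre as the mean profile of its group
        centers = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return groups
```

On two well-separated profile clusters this recovers the two subgroups regardless of the random start.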

  18. Theoretical Calculations of Atomic Data for Spectroscopy

    NASA Technical Reports Server (NTRS)

    Bautista, Manuel A.

    2000-01-01

    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and assess the reliability of spectral models based on those data.

  19. [The triad configuration, humanist-existential-personal: a theoretical and methodological approach to psychiatric and mental health nursing].

    PubMed

    Vietta, E P

    1995-01-01

    The author establishes a research line based on a theoretical-methodological referential for the qualitative investigation of psychiatric nursing and mental health. Aspects of humanist and existential philosophies and personalism were evaluated integrating them in a unique perspective. In order to maintain the scientific method of research in this referential the categorization process which will be adopted in this kind of investigation was explained.

  20. Lateral Load Capacity of Piles: A Comparative Study Between Indian Standards and Theoretical Approach

    NASA Astrophysics Data System (ADS)

    Jayasree, P. K.; Arun, K. V.; Oormila, R.; Sreelakshmi, H.

    2018-05-01

    As per Indian Standards, laterally loaded piles are usually analysed using the method adopted by IS 2911-2010 (Part 1/Section 2). Practising engineers, however, are of the opinion that the IS method is very conservative in design. This work aims at determining the extent to which the conventional IS design approach is conservative. This is done through a comparative study between the IS approach and a theoretical model based on Vesic's equation. Bore log details for six different bridges were collected from the Kerala Public Works Department. Cast-in-situ fixed-head piles embedded in three soil conditions, both end-bearing and friction piles, were considered and analyzed separately. Piles were also modelled in STAAD.Pro software based on the IS approach and the results were validated using the Matlock and Reese (In Proceedings of fifth international conference on soil mechanics and foundation engineering, 1961) equation. The results were presented as the percentage variation in values of bending moment and deflection obtained by the different methods. The results obtained from the mathematical model based on Vesic's equation were compared with those obtained as per the IS approach, and the IS method was found to be uneconomical and conservative.

  1. On the road to metallic nanoparticles by rational design: bridging the gap between atomic-level theoretical modeling and reality by total scattering experiments

    NASA Astrophysics Data System (ADS)

    Prasai, Binay; Wilson, A. R.; Wiley, B. J.; Ren, Y.; Petkov, Valeri

    2015-10-01

    The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level is scrutinized, shown to be insufficient, and improved by a pragmatic approach involving straightforward experiments. In particular, 4 to 6 nm silica-supported Au100-xPdx (x = 30, 46 and 58) NPs explored for catalytic applications are characterized structurally by total scattering experiments, including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the Sutton-Chen (SC) method, the archetype of current theoretical modeling. Models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm water-dispersed Au particles explored for bio-medical applications and 16 nm hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when ``tuned up'' against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs.
Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design. Electronic supplementary information (ESI) available: XRD patterns, TEM and 3D structure modelling methodology. See DOI: 10.1039/c5nr04678e

  2. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

    In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated based on the maximum Shapley entropy principle; the Choquet integral is then introduced to calculate the comprehensive evaluation value of each city from the bottom up. Finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, which provides theoretical support for the sustainable development path and reform direction of resource-based cities.
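    The bottom-up aggregation step can be sketched as a discrete Choquet integral with respect to a capacity (fuzzy measure). In the sketch below the capacity is a toy additive one supplied by the caller, standing in for the paper's Shapley-entropy-derived measure:

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of attribute scores with respect to a
    capacity mu (a set function with mu(empty set) = 0 and mu(all) = 1).

    Hedged sketch: the paper derives mu from the maximum Shapley entropy
    principle; here mu is any callable on frozensets of attributes."""
    # Sort attributes by ascending score: x_(1) <= x_(2) <= ...
    attrs = sorted(scores, key=scores.get)
    total, prev = 0.0, 0.0
    for i, a in enumerate(attrs):
        # weight each score increment by mu of the coalition still "above" it
        total += (scores[a] - prev) * mu(frozenset(attrs[i:]))
        prev = scores[a]
    return total

# Toy additive capacity: the Choquet integral then reduces to a weighted
# mean, which makes the computation easy to check by hand.
weights = {"economy": 0.5, "environment": 0.3, "society": 0.2}
additive_mu = lambda coalition: sum(weights[a] for a in coalition)
score = choquet_integral({"economy": 0.6, "environment": 0.8, "society": 0.4},
                         additive_mu)  # 0.5*0.6 + 0.3*0.8 + 0.2*0.4 = 0.62
```

A non-additive capacity would instead reward (or penalize) specific coalitions of indicators, which is the point of using the Choquet integral over a plain weighted sum.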

  3. Identification of source velocities on 3D structures in non-anechoic environments: Theoretical background and experimental validation of the inverse patch transfer functions method

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; Totaro, N.; Guyader, J.-L.

    2010-08-01

    In noise control, identification of the source velocity field remains a major problem open to investigation. Consequently, methods such as nearfield acoustical holography (NAH), principal source projection, the inverse frequency response function and hybrid NAH have been developed. However, these methods require free field conditions that are often difficult to achieve in practice. This article presents an alternative method known as inverse patch transfer functions, designed to identify source velocities and developed in the framework of the European SILENCE project. This method is based on the definition of a virtual cavity, the double measurement of the pressure and particle velocity fields on the aperture surfaces of this volume, divided into elementary areas called patches and the inversion of impedances matrices, numerically computed from a modal basis obtained by FEM. Theoretically, the method is applicable to sources with complex 3D geometries and measurements can be carried out in a non-anechoic environment even in the presence of other stationary sources outside the virtual cavity. In the present paper, the theoretical background of the iPTF method is described and the results (numerical and experimental) for a source with simple geometry (two baffled pistons driven in antiphase) are presented and discussed.

  4. Optical properties of LiGaS2: an ab initio study and spectroscopic ellipsometry measurement

    NASA Astrophysics Data System (ADS)

    Atuchin, V. V.; Lin, Z. S.; Isaenko, L. I.; Kesler, V. G.; Kruchinin, V. N.; Lobanov, S. I.

    2009-11-01

    Electronic and optical properties of lithium thiogallate crystal, LiGaS2, have been investigated by both experimental and theoretical methods. The plane-wave pseudopotential method based on density functional theory (DFT) has been used for band structure calculations. The electronic parameters of Ga 3d orbitals have been corrected by the DFT+U method to be consistent with those measured with x-ray photoemission spectroscopy. The evolution of optical constants of LiGaS2 over a wide spectral range was determined by first-principles theory, and the dispersion curves were compared with optical parameters obtained by spectroscopic ellipsometry in the photon energy range 1.2-5.0 eV. Good agreement has been achieved between theoretical and experimental results.

  5. A parametric method for determining the number of signals in narrow-band direction finding

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Fuhrmann, Daniel R.

    1991-08-01

    A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
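    For reference, the eigenvalue-based baseline that this parametric method is compared against can be sketched with the classic MDL criterion (as in Wax and Kailath, 1985); this is the baseline, not the proposed method:

```python
import numpy as np

def mdl_num_signals(eigvals, n_snapshots):
    """Classic eigenvalue-based MDL estimate of the number of sources
    (Wax & Kailath style): pick k minimizing a likelihood term on the
    p-k smallest eigenvalues plus a complexity penalty."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p = lam.size
    crit = []
    for k in range(p):
        tail = lam[k:]
        # log of (geometric mean / arithmetic mean) of the smallest eigenvalues;
        # zero when they are all equal, i.e. a pure-noise subspace
        log_ratio = np.log(tail).mean() - np.log(tail.mean())
        crit.append(-n_snapshots * (p - k) * log_ratio
                    + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(crit))
```

With two dominant eigenvalues above a flat noise floor, the criterion picks k = 2; the paper's approach replaces the eigenvalues with log-likelihood-derived quantities to improve on exactly this detector.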

  6. Evaluation of the communication quality of free-space laser communication based on the power-in-the-bucket method.

    PubMed

    Yin, Xianghui; Wang, Rui; Wang, Shaoxin; Wang, Yukun; Jin, Chengbin; Cao, Zhaoliang; Xuan, Li

    2018-02-01

    Atmospheric turbulence seriously affects the quality of free-space laser communication. The Strehl ratio (SR) is used to evaluate the effect of atmospheric turbulence on the receiving energy of free-space laser communication systems. However, the SR method does not consider the area of the laser-receiving end face. In this study, the power-in-the-bucket (PIB) method is demonstrated to accurately evaluate the effect of turbulence on the receiving energy. A theoretical equation is first obtained to calculate PIB. Simulated and experimental validations are then performed to verify the effectiveness of the theoretical equation. This work may provide effective guidance for the design and evaluation of free-space laser communication systems.
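    On a sampled intensity grid, the PIB metric itself reduces to the fraction of received power inside the bucket. The sketch below assumes a square grid and a centred circular aperture, and omits the turbulence statistics that the paper's theoretical equation models:

```python
import numpy as np

def power_in_bucket(intensity, bucket_radius, dx=1.0):
    """Fraction of received power falling inside a circular 'bucket' of
    radius bucket_radius centred on the grid (hedged sketch; grid spacing
    dx and the centred aperture are illustrative assumptions)."""
    n = intensity.shape[0]
    c = (n - 1) / 2.0
    y, x = np.indices(intensity.shape)
    r = np.hypot((x - c) * dx, (y - c) * dx)
    return intensity[r <= bucket_radius].sum() / intensity.sum()
```

Unlike the Strehl ratio, which compares on-axis peak intensities, this quantity grows monotonically with the aperture radius, which is exactly why it accounts for the receiving end-face area.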

  7. An Introduction to the BFS Method and Its Use to Model Binary NiAl Alloys

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Noebe, Ronald D.; Ferrante, J.; Amador, C.

    1998-01-01

    We introduce the Bozzolo-Ferrante-Smith (BFS) method for alloys as a computationally efficient tool for aiding in the process of alloy design. An intuitive description of the BFS method is provided, followed by a formal discussion of its implementation. The method is applied to the study of the defect structure of NiAl binary alloys. The groundwork is laid for a detailed progression to higher order NiAl-based alloys linking theoretical calculations and computer simulations based on the BFS method and experimental work validating each step of the alloy design process.

  8. Robust and accurate vectorization of line drawings.

    PubMed

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust, with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vectors' parameters is enabled by explicitly computing their feasibility domains. A theoretical performance analysis and an expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
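    The random-sampling segmentation step is in the spirit of RANSAC. A minimal RANSAC line fit, assuming 2D skeleton points and an inlier tolerance `tol` (both illustrative choices, not the paper's exact procedure), looks like this:

```python
import math
import random

def ransac_line(points, n_iter=200, tol=1.5, seed=0):
    """RANSAC-style line fit: repeatedly pick two points, count the points
    within distance tol of the line through them, keep the best-supported
    inlier set. A hedged stand-in for the random-sampling segmentation."""
    rng = random.Random(seed)
    best = (0, None)
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        a, b = y2 - y1, x1 - x2          # line: a*x + b*y + c = 0
        c = -(a * x1 + b * y1)
        norm = math.hypot(a, b)
        if norm == 0:                    # degenerate sample, skip
            continue
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= tol]
        if len(inliers) > best[0]:
            best = (len(inliers), inliers)
    return best[1]
```

On a noisy skeleton this isolates one straight primitive at a time; the surviving outliers are then fed back in to extract the next primitive.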

  9. A guide to using the Theoretical Domains Framework of behaviour change to investigate implementation problems.

    PubMed

    Atkins, Lou; Francis, Jill; Islam, Rafat; O'Connor, Denise; Patey, Andrea; Ivers, Noah; Foy, Robbie; Duncan, Eilidh M; Colquhoun, Heather; Grimshaw, Jeremy M; Lawton, Rebecca; Michie, Susan

    2017-06-21

    Implementing new practices requires changes in the behaviour of relevant actors, and this is facilitated by understanding of the determinants of current and desired behaviours. The Theoretical Domains Framework (TDF) was developed by a collaboration of behavioural scientists and implementation researchers who identified theories relevant to implementation and grouped constructs from these theories into domains. The collaboration aimed to provide a comprehensive, theory-informed approach to identify determinants of behaviour. The first version was published in 2005, and a subsequent version following a validation exercise was published in 2012. This guide offers practical guidance for those who wish to apply the TDF to assess implementation problems and support intervention design. It presents a brief rationale for using a theoretical approach to investigate and address implementation problems, summarises the TDF and its development, and describes how to apply the TDF to achieve implementation objectives. Examples from the implementation research literature are presented to illustrate relevant methods and practical considerations. Researchers from Canada, the UK and Australia attended a 3-day meeting in December 2012 to build an international collaboration among researchers and decision-makers interested in the advancing use of the TDF. The participants were experienced in using the TDF to assess implementation problems, design interventions, and/or understand change processes. This guide is an output of the meeting and also draws on the authors' collective experience. Examples from the implementation research literature judged by authors to be representative of specific applications of the TDF are included in this guide. 
We explain and illustrate methods, with a focus on qualitative approaches, for selecting and specifying target behaviours key to implementation, selecting the study design, deciding the sampling strategy, developing study materials, collecting and analysing data, and reporting findings of TDF-based studies. Areas for development include methods for triangulating data (e.g. from interviews, questionnaires and observation) and methods for designing interventions based on TDF-based problem analysis. We offer this guide to the implementation community to assist in the application of the TDF to achieve implementation objectives. Benefits of using the TDF include the provision of a theoretical basis for implementation studies, good coverage of potential reasons for slow diffusion of evidence into practice and a method for progressing from theory-based investigation to intervention.

  10. Valence electronic properties of porphyrin derivatives.

    PubMed

    Stenuit, G; Castellarin-Cudia, C; Plekan, O; Feyer, V; Prince, K C; Goldoni, A; Umari, P

    2010-09-28

    We present a combined experimental and theoretical investigation of the valence electronic structure of porphyrin-derived molecules. The valence photoemission spectra of the free-base tetraphenylporphyrin and of the octaethylporphyrin molecule were measured using synchrotron radiation and compared with theoretical spectra calculated using the GW method and the density-functional method within the generalized gradient approximation. Only the GW results could reproduce the experimental data. We found that the contribution to the orbital energies due to electronic correlations has the same linear behavior in both molecules, with larger deviations in the vicinity of the HOMO level. This shows the importance of adequate treatment of electronic correlations in these organic systems.

  11. Guiding principles for vortex flow controls

    NASA Technical Reports Server (NTRS)

    Wu, J. Z.; Wu, J. M.

    1991-01-01

    In the practice of vortex flow control, the most important consideration is that the persistency and obstinacy of a concentrated vortex depend on its stability and dissipation. In this paper, the modern nonlinear stability theory for circulation-preserving flows is summarized, and the dissipation for general viscous flows is analyzed in terms of the evolution of total enstrophy. These analyses provide a theoretical base for understanding the relevant physics of vortex flows, and lead to some guiding principles and methods for their control. Case studies taken from various theoretical and/or experimental works on vortex control, due to the present authors as well as others, confirm the feasibility of the recommended principles and methods.

  12. Breaking the diffraction barrier using coherent anti-Stokes Raman scattering difference microscopy.

    PubMed

    Wang, Dong; Liu, Shuanglong; Chen, Yue; Song, Jun; Liu, Wei; Xiong, Maozhen; Wang, Guangsheng; Peng, Xiao; Qu, Junle

    2017-05-01

    We propose a method to improve the resolution of coherent anti-Stokes Raman scattering (CARS) microscopy and present a theoretical model. The proposed method, coherent anti-Stokes Raman scattering difference microscopy (CARS-D), is based on the intensity difference between two differently acquired images: one is the conventional CARS image, and the other is obtained when the sample is illuminated by a doughnut-shaped spot. The final super-resolution CARS-D image is constructed by intensity subtraction of these two images. A subtractive factor relates the two images, and the theoretical model sets this factor to obtain the best imaging effect.

  13. IMMAN: free software for information theory-based chemometric analysis.

    PubMed

    Urias, Ricardo W Pino; Barigye, Stephen J; Marrero-Ponce, Yovani; García-Jacas, César R; Valdes-Martiní, José R; Perez-Gimenez, Facundo

    2015-05-01

    The features and theoretical background of a new and free computational program for chemometric analysis denominated IMMAN (acronym for Information theory-based CheMoMetrics ANalysis) are presented. This is multi-platform software developed in the Java programming language, designed with a remarkably user-friendly graphical interface for the computation of a collection of information-theoretic functions adapted for rank-based unsupervised and supervised feature selection tasks. A total of 20 feature selection parameters are presented, with the unsupervised and supervised frameworks represented by 10 approaches in each case. Several information-theoretic parameters traditionally used as molecular descriptors (MDs) are adapted for use as unsupervised rank-based feature selection methods. On the other hand, a generalization scheme for the previously defined differential Shannon's entropy is discussed, as well as the introduction of Jeffreys information measure for supervised feature selection. Moreover, well-known information-theoretic feature selection parameters, such as information gain, gain ratio, and symmetrical uncertainty are incorporated to the IMMAN software ( http://mobiosd-hub.com/imman-soft/ ), following an equal-interval discretization approach. IMMAN offers data pre-processing functionalities, such as missing values processing, dataset partitioning, and browsing. Moreover, single parameter or ensemble (multi-criteria) ranking options are provided. Consequently, this software is suitable for tasks like dimensionality reduction, feature ranking, as well as comparative diversity analysis of data matrices. Simple examples of applications performed with this program are presented. A comparative study between IMMAN and WEKA feature selection tools using the Arcene dataset was performed, demonstrating similar behavior. 
In addition, it is revealed that the use of IMMAN unsupervised feature selection methods improves the performance of both IMMAN and WEKA supervised algorithms. Graphic representation for Shannon's distribution of MD calculating software.
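    One of the supervised ranking criteria mentioned, information gain, is straightforward to compute for a discretized feature; the sketch below is a generic implementation, not IMMAN's code:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - H(Y|X) for a discretized feature X and class
    labels Y -- one of the supervised feature-ranking criteria listed."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond
```

Features are then ranked by this score (or by gain ratio / symmetrical uncertainty, which normalize it) after the equal-interval discretization step the abstract describes.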

  14. A new fictitious domain approach for Stokes equation

    NASA Astrophysics Data System (ADS)

    Yang, Min

    2017-10-01

    The purpose of this paper is to present a new fictitious domain approach based on Nitsche's method combined with a penalty method for the Stokes equation. This method allows for an easy and flexible handling of the geometrical aspects. Stability and an a priori error estimate are proved. Finally, a numerical experiment is provided to verify the theoretical findings.

  15. Acceptance- versus Change-Based Pain Management: The Role of Psychological Acceptance

    ERIC Educational Resources Information Center

    Blacker, Kara J.; Herbert, James D.; Forman, Evan M.; Kounios, John

    2012-01-01

    This study compared two theoretically opposed strategies for acute pain management: an acceptance-based and a change-based approach. These two strategies were compared in a within-subjects design using the cold pressor test as an acute pain induction method. Participants completed a baseline pain tolerance assessment followed by one of the two…

  16. Pilot-Testing CATCH Early Childhood: A Preschool-Based Healthy Nutrition and Physical Activity Program

    ERIC Educational Resources Information Center

    Sharma, Shreela; Chuang, Ru-Jye; Hedberg, Ann Marie

    2011-01-01

    Background: The literature on theoretically-based programs targeting healthy nutrition and physical activity in preschools is scarce. Purpose: To pilot test CATCH Early Childhood (CEC), a preschool-based nutrition and physical activity program among children ages three to five in Head Start. Methods: The study was conducted in two Head Start…

  17. Theoretical methods for estimating moments of inertia of trees and boles.

    Treesearch

    John A. Sturos

    1973-01-01

    Presents a theoretical method for estimating the mass moments of inertia of full trees and boles about a transverse axis. Estimates from the theoretical model compared closely with experimental data on aspen and red pine trees obtained in the field by the pendulum method. The theoretical method presented may be used to estimate the mass moments of inertia and other...
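    The pendulum method referred to infers the moment of inertia from a measured swing period: suspend the tree or bole from a pivot at distance d from its centre of gravity, time the period T, then apply the physical-pendulum relation and the parallel-axis theorem. A sketch (small-angle approximation; names are illustrative):

```python
import math

def inertia_from_pendulum(mass, period, pivot_to_cg):
    """Mass moment of inertia about the centre of gravity from a
    physical-pendulum swing test: I_pivot = T^2 * m * g * d / (4*pi^2),
    then the parallel-axis theorem shifts it back to the cg."""
    g = 9.81  # m/s^2
    i_pivot = period**2 * mass * g * pivot_to_cg / (4 * math.pi**2)
    return i_pivot - mass * pivot_to_cg**2
```

As a check, a slender rod of mass m and length L pivoted at one end should come out at m*L^2/12 about its centre, and an ideal point mass at 0.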

  18. Automatic Syllabification in English: A Comparison of Different Algorithms

    ERIC Educational Resources Information Center

    Marchand, Yannick; Adsett, Connie R.; Damper, Robert I.

    2009-01-01

    Automatic syllabification of words is challenging, not least because the syllable is not easy to define precisely. Consequently, no accepted standard algorithm for automatic syllabification exists. There are two broad approaches: rule-based and data-driven. The rule-based method effectively embodies some theoretical position regarding the…

  19. Toward an Instructionally Oriented Theory of Example-Based Learning

    ERIC Educational Resources Information Center

    Renkl, Alexander

    2014-01-01

    Learning from examples is a very effective means of initial cognitive skill acquisition. There is an enormous body of research on the specifics of this learning method. This article presents an instructionally oriented theory of example-based learning that integrates theoretical assumptions and findings from three research areas: learning from…

  20. Strength of single-pole utility structures

    Treesearch

    Ronald W. Wolfe

    2006-01-01

    This section presents three basic methods for deriving and documenting R_n as an LTL value along with the coefficient of variation (COV_R) for single-pole structures. These include the following: 1. An empirical analysis based primarily on tests of full-sized poles. 2. A theoretical analysis of mechanics-based models used in...

  1. On Anticipatory Development of Dual Education Based on the Systemic Approach

    ERIC Educational Resources Information Center

    Alshynbayeva, Zhuldyz; Sarbassova, Karlygash; Galiyeva, Temir; Kaltayeva, Gulnara; Bekmagambetov, Aidos

    2016-01-01

    The article addresses separate theoretical and methodical aspects of the anticipatory development of dual education in the Republic of Kazakhstan based on the systemic approach. It states the need to develop orientating basis of prospective professional activities in students. We define the concepts of anticipatory cognition and anticipatory…

  2. Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data

    NASA Astrophysics Data System (ADS)

    Mora, P.; Spies, M.

    2018-05-01

    We investigate theoretically and with synthetic data the performance of several inversion methods to infer a residual stress state from ultrasonic surface wave dispersion data. We show that, for relevant materials, this particular problem may expose undesired behaviors in methods that can otherwise be reliably applied to infer other properties. We focus on two methods, one based on a Taylor expansion, and another based on a piecewise linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. At the same time, we show that the other method can successfully provide performance that only weakly depends on the material.
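    The SVD-regularized inversion can be sketched as a truncated-SVD least-squares solve of a linearized forward model G m = d, where small singular values that amplify data noise are discarded (the matrix and truncation rank below are illustrative, not the paper's model):

```python
import numpy as np

def tsvd_solve(G, d, rank):
    """Truncated-SVD regularized solution of G m = d: keep only the `rank`
    largest singular values, zeroing the noise-amplifying directions."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = np.where(np.arange(s.size) < rank, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ d))
```

Choosing the truncation rank trades resolution of the stress profile against noise sensitivity, which is the regularization knob the piecewise-linear method relies on.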

  3. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    NASA Technical Reports Server (NTRS)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combinations. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. The theoretical method, program capabilities, input format, output variables, and program job control set-up are summarized. Three test cases are presented as guides for potential users of the code.

  4. Determining similarity in histological images using graph-theoretic description and matching methods for content-based image retrieval in medical diagnostics.

    PubMed

    Sharma, Harshita; Alekseychuk, Alexander; Leskovsky, Peter; Hellwich, Olaf; Anand, R S; Zerbe, Norman; Hufnagl, Peter

    2012-10-04

    Computer-based analysis of digitalized histological images has been gaining increasing attention, due to their extensive use in research and routine practice. The article aims to contribute towards the description and retrieval of histological images by employing a structural method using graphs. Due to their expressive ability, graphs are considered as a powerful and versatile representation formalism and have obtained a growing consideration especially by the image processing and computer vision community. The article describes a novel method for determining similarity between histological images through graph-theoretic description and matching, for the purpose of content-based retrieval. A higher order (region-based) graph-based representation of breast biopsy images has been attained and a tree-search based inexact graph matching technique has been employed that facilitates the automatic retrieval of images structurally similar to a given image from large databases. The results obtained and evaluation performed demonstrate the effectiveness and superiority of graph-based image retrieval over a common histogram-based technique. The employed graph matching complexity has been reduced compared to the state-of-the-art optimal inexact matching methods by applying a pre-requisite criterion for matching of nodes and a sophisticated design of the estimation function, especially the prognosis function. The proposed method is suitable for the retrieval of similar histological images, as suggested by the experimental and evaluation results obtained in the study. It is intended for the use in Content Based Image Retrieval (CBIR)-requiring applications in the areas of medical diagnostics and research, and can also be generalized for retrieval of different types of complex images. The virtual slide(s) for this article can be found here: http://www.diagnosticpathology.diagnomx.eu/vs/1224798882787923.

  5. From lists of behaviour change techniques (BCTs) to structured hierarchies: comparison of two methods of developing a hierarchy of BCTs.

    PubMed

    Cane, James; Richardson, Michelle; Johnston, Marie; Ladha, Ruhina; Michie, Susan

    2015-02-01

    Behaviour change technique (BCT) Taxonomy v1 is a hierarchically grouped, consensus-based taxonomy of 93 BCTs for reporting intervention content. To enhance the use and understanding of BCTs, the aims of the present study were to (1) quantitatively examine the 'bottom-up' hierarchical structure of Taxonomy v1, (2) identify whether BCTs can be reliably mapped to theoretical domains using a 'top-down' theoretically driven approach, and (3) identify any overlap between the 'bottom-up' and 'top-down' groupings. The 'bottom-up' structure was examined for higher-order groupings using a dendrogram derived from hierarchical cluster analysis. For the theory-based 'top-down' structure, 18 experts sorted BCTs into 14 theoretical domains. Discriminant Content Validity was used to identify groupings, and chi-square tests and Pearson's residuals were used to examine the overlap between groupings. Behaviour change techniques relating to 'Reward and Punishment' and 'Cues and Cue Responses' were perceived as markedly different to other BCTs. Fifty-nine of the BCTs were reliably allocated to 12 of the 14 theoretical domains; 47 were significant and 12 were of borderline significance. Thirty-four of 208 'bottom-up' × 'top-down' pairings showed greater overlap than expected by chance. However, only six combinations achieved satisfactory evidence of similarity. The moderate overlap between the groupings indicates some tendency to implicitly conceptualize BCTs in terms of the same theoretical domains. Understanding the nature of the overlap will aid the conceptualization of BCTs in terms of theory and application. Further research into different methods of developing a hierarchical taxonomic structure of BCTs for international, interdisciplinary work is now required. Statement of contribution What is already known on this subject? Behaviour change interventions are effective in improving health care and health outcomes. 
The 'active' components of these interventions are behaviour change techniques, and 93 have been identified. Taxonomies of behaviour change techniques require structure to enable potential applications. What does this study add? This study identifies groups of BCTs to aid the recall of BCTs for intervention coding and design. It compares two methods of grouping, 'bottom-up' and theory-based 'top-down', and finds a moderate overlap. Building on the identified BCT groups, it examines relationships between theoretical domains and BCTs. © 2014 The British Psychological Society.
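
    The 'bottom-up' step, hierarchical cluster analysis, can be sketched with a toy single-linkage agglomeration. The BCT names and dissimilarities below are invented for illustration; the study clustered expert sort data over all 93 BCTs.

```python
# Toy 'bottom-up' grouping: single-linkage agglomerative clustering over a
# hypothetical BCT dissimilarity matrix (all numbers are made up).
bcts = ["reward", "punishment", "cue", "prompt"]
dist = {frozenset(p): d for p, d in [
    (("reward", "punishment"), 0.2), (("reward", "cue"), 0.9),
    (("reward", "prompt"), 0.8), (("punishment", "cue"), 0.9),
    (("punishment", "prompt"), 0.9), (("cue", "prompt"), 0.3)]}

def single_linkage(items, d):
    """Repeatedly merge the two closest clusters; return the merge order."""
    clusters = [frozenset([x]) for x in items]
    merges = []
    while len(clusters) > 1:
        a, b = min(((a, b) for i, a in enumerate(clusters)
                    for b in clusters[i + 1:]),
                   key=lambda ab: min(d[frozenset((x, y))]
                                      for x in ab[0] for y in ab[1]))
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        merges.append(sorted(a | b))
    return merges

merges = single_linkage(bcts, dist)
print(merges[0])  # ['punishment', 'reward'] merge first (closest pair)
```

    Cutting the resulting dendrogram at a chosen height yields the higher-order groupings.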

  6. Developing Theoretically Based and Culturally Appropriate Interventions to Promote Hepatitis B Testing in 4 Asian American Populations, 2006–2011

    PubMed Central

    Bastani, Roshan; Glenn, Beth A.; Taylor, Victoria M.; Nguyen, Tung T.; Stewart, Susan L.; Burke, Nancy J.; Chen, Moon S.

    2014-01-01

    Introduction Hepatitis B infection is 5 to 12 times more common among Asian Americans than in the general US population and is the leading cause of liver disease and liver cancer among Asians. The purpose of this article is to describe the step-by-step approach that we followed in community-based participatory research projects in 4 Asian American groups, conducted from 2006 through 2011 in California and Washington state to develop theoretically based and culturally appropriate interventions to promote hepatitis B testing. We provide examples to illustrate how intervention messages addressing identical theoretical constructs of the Health Behavior Framework were modified to be culturally appropriate for each community. Methods Intervention approaches included mass media in the Vietnamese community, small-group educational sessions at churches in the Korean community, and home visits by lay health workers in the Hmong and Cambodian communities. Results Use of the Health Behavior Framework allowed a systematic approach to intervention development across populations, resulting in 4 different culturally appropriate interventions that addressed the same set of theoretical constructs. Conclusions The development of theory-based health promotion interventions for different populations will advance our understanding of which constructs are critical to modify specific health behaviors. PMID:24784908

  7. Theoretical study of local structure for Ni2+ ions at tetragonal sites in K2ZnF4:Ni2+ system.

    PubMed

    Wang, Su-Juan; Kuang, Xiao-Yu; Lu, Cheng

    2008-12-15

    A theoretical method for studying the local lattice structure of Ni2+ ions in the (NiF6)4- coordination complex is presented. Using the ligand-field model, formulas relating the microscopic spin Hamiltonian parameters to the crystal structure parameters are derived. Based on these formulas, the 45 × 45 complete energy matrices for d8 (d2) configuration ions in a tetragonal ligand field are constructed. By diagonalizing the complete energy matrices, the local distortion structure parameters (R⊥ and R∥) of Ni2+ ions in the K2ZnF4:Ni2+ system have been investigated. The theoretical results accord well with the experimental values. Moreover, to elucidate the detailed physical and chemical properties of fluoroperovskite crystals, theoretical values of the g factor of the K2ZnF4:Ni2+ system at 78 and 290 K are reported for the first time.
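
    The diagonalization step can be illustrated on a toy 2 × 2 block; the actual study constructs and diagonalizes 45 × 45 complete energy matrices. The field parameters and matrix elements below are hypothetical.

```python
import math

# Toy 2x2 ligand-field block (energies in arbitrary units); values are
# illustrative only, not the article's d8 matrix elements.
Dq, Ds = 10.0, 2.0          # cubic and tetragonal field parameters (toy)
H = [[8 * Dq, Ds],
     [Ds, -2 * Dq]]

def eig2(h):
    """Eigenvalues of a real symmetric 2x2 matrix via the quadratic formula."""
    a, b, d = h[0][0], h[0][1], h[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

lo, hi = eig2(H)
print(lo, hi)  # the level splitting grows with the off-diagonal coupling Ds
```

    In the full treatment the eigenvalues are matched to observed transitions to extract the distortion parameters.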

  8. Theoretical study of local structure for Ni2+ ions at tetragonal sites in K2ZnF4:Ni2+ system

    NASA Astrophysics Data System (ADS)

    Wang, Su-Juan; Kuang, Xiao-Yu; Lu, Cheng

    2008-12-01

    A theoretical method for studying the local lattice structure of Ni2+ ions in the (NiF6)4- coordination complex is presented. Using the ligand-field model, formulas relating the microscopic spin Hamiltonian parameters to the crystal structure parameters are derived. Based on these formulas, the 45 × 45 complete energy matrices for d8 (d2) configuration ions in a tetragonal ligand field are constructed. By diagonalizing the complete energy matrices, the local distortion structure parameters (R⊥ and R∥) of Ni2+ ions in the K2ZnF4:Ni2+ system have been investigated. The theoretical results accord well with the experimental values. Moreover, to elucidate the detailed physical and chemical properties of fluoroperovskite crystals, theoretical values of the g factor of the K2ZnF4:Ni2+ system at 78 and 290 K are reported for the first time.

  9. The Teamwork Assessment Scale: A Novel Instrument to Assess Quality of Undergraduate Medical Students' Teamwork Using the Example of Simulation-based Ward-Rounds

    PubMed Central

    Kiesewetter, Jan; Fischer, Martin R.

    2015-01-01

    Background: Simulation-based teamwork trainings are considered a powerful training method to advance teamwork, which is becoming more relevant in medical education. The measurement of teamwork is of high importance, and several instruments have been developed for various medical domains to meet this need. To our knowledge, no theoretically based and easy-to-use measurement instrument has been published or developed specifically for simulation-based teamwork trainings of medical students. Internist ward-rounds serve as an important example of teamwork in medicine. Purposes: The purpose of this study was to provide a validated, theoretically based instrument that is easy to use. Furthermore, this study aimed to identify if and when rater scores relate to performance. Methods: Based on a theoretical framework for teamwork behaviour, items covering four teamwork components (Team Coordination, Team Cooperation, Information Exchange, Team Adjustment Behaviours) were developed. In study one, three ward-round scenarios, simulated by 69 students, were videotaped and rated independently by four trained raters. The instrument was tested for its psychometric properties and factorial structure. In study two, the instrument was tested for construct validity against an external criterion with a second set of 100 students and four raters. Results: In study one, the factorial structure matched the theoretical components but was unable to separate Information Exchange and Team Cooperation. The preliminary version showed adequate psychometric properties (Cronbach's α=.75). In study two, physician raters' scores proved more reliable than those of student raters. Furthermore, a close correlation between the scale and clinical performance as an external criterion was shown (r=.64), and the sufficient psychometric properties were replicated (Cronbach's α=.78). 
Conclusions: The validation supports the use of the teamwork assessment scale by physicians to reliably measure teamwork in undergraduate medical simulation-based ward-round trainings. Further studies are needed to verify the applicability of the instrument. PMID:26038684
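
    The reliability statistic reported above, Cronbach's α, can be computed directly from item scores. A minimal sketch with invented ratings (four items scored over five simulated ward-rounds; not the study's data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item score lists of equal length."""
    k, n = len(items), len(items[0])
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Hypothetical ratings: 4 teamwork items over 5 simulated ward-rounds.
scores = [[3, 4, 4, 2, 5],
          [3, 4, 5, 2, 4],
          [2, 4, 4, 3, 5],
          [3, 3, 4, 2, 5]]
print(round(cronbach_alpha(scores), 2))  # 0.93 for these consistent ratings
```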

  10. A Theoretical Review on Interfacial Thermal Transport at the Nanoscale.

    PubMed

    Zhang, Ping; Yuan, Peng; Jiang, Xiong; Zhai, Siping; Zeng, Jianhua; Xian, Yaoqi; Qin, Hongbo; Yang, Daoguo

    2018-01-01

    With the development of energy science and electronic technology, interfacial thermal transport has become a key issue for nanoelectronics, nanocomposites, energy transmission, and conservation, etc. The application of thermal interfacial materials and other physical methods can reliably improve the contact between joined surfaces and enhance interfacial thermal transport at the macroscale. With the growing importance of thermal management in micro/nanoscale devices, controlling and tuning the interfacial thermal resistance (ITR) at the nanoscale is an urgent task. This Review examines nanoscale interfacial thermal transport mainly from a theoretical perspective. Traditional theoretical models, multiscale models, and atomistic methodologies for predicting ITR are introduced. Based on the analysis and summary of the factors that influence ITR, new methods to control and reduce ITR at the nanoscale are described in detail. Furthermore, the challenges facing interfacial thermal management and the further progress required in this field are discussed. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Nature and magnitude of aromatic base stacking in DNA and RNA: Quantum chemistry, molecular mechanics, and experiment.

    PubMed

    Sponer, Jiří; Sponer, Judit E; Mládek, Arnošt; Jurečka, Petr; Banáš, Pavel; Otyepka, Michal

    2013-12-01

    Base stacking is a major interaction shaping and stabilizing nucleic acids. During the last decades, base stacking has been extensively studied by experimental and theoretical methods. Advanced quantum-chemical calculations have clarified that base stacking is a common interaction which, to a first approximation, can be described as a combination of the three most basic contributions to molecular interactions, namely, electrostatic interaction, London dispersion attraction, and short-range repulsion. There is no specific π-π energy term associated with the delocalized π electrons of the aromatic rings that cannot be described by these contributions. Base stacking can be reasonably approximated by simple molecular simulation methods based on well-calibrated common force fields, although the force fields do not include the nonadditivity of stacking, the anisotropy of dispersion interactions, and some other effects. However, the description of stacking association in the condensed phase, and understanding of the role of stacking in biomolecules, remain difficult problems, as the net base stacking forces always act in a complex and context-specific environment. Moreover, the stacking forces are balanced against many other energy contributions. Differences in the definition of stacking in experimental and theoretical studies are explained. Copyright © 2013 Wiley Periodicals, Inc.

  12. Examining Neuronal Connectivity and Its Role in Learning and Memory

    NASA Astrophysics Data System (ADS)

    Gala, Rohan

    Learning and long-term memory formation are accompanied with changes in the patterns and weights of synaptic connections in the underlying neuronal network. However, the fundamental rules that drive connectivity changes, and the precise structure-function relationships within neuronal networks remain elusive. Technological improvements over the last few decades have enabled the observation of large but specific subsets of neurons and their connections in unprecedented detail. Devising robust and automated computational methods is critical to distill information from ever-increasing volumes of raw experimental data. Moreover, statistical models and theoretical frameworks are required to interpret the data and assemble evidence into understanding of brain function. In this thesis, I first describe computational methods to reconstruct connectivity based on light microscopy imaging experiments. Next, I use these methods to quantify structural changes in connectivity based on in vivo time-lapse imaging experiments. Finally, I present a theoretical model of associative learning that can explain many stereotypical features of experimentally observed connectivity.

  13. Monte Carlo simulations guided by imaging to predict the in vitro ranking of radiosensitizing nanoparticles

    PubMed Central

    Retif, Paul; Reinhard, Aurélie; Paquot, Héna; Jouan-Hureaux, Valérie; Chateau, Alicia; Sancey, Lucie; Barberi-Heyob, Muriel; Pinel, Sophie; Bastogne, Thierry

    2016-01-01

    This article addresses the in silico–in vitro prediction issue of organometallic nanoparticles (NPs)-based radiosensitization enhancement. The goal was to carry out computational experiments to quickly identify efficient nanostructures and then to preferentially select the most promising ones for the subsequent in vivo studies. To this aim, this interdisciplinary article introduces a new theoretical Monte Carlo computational ranking method and tests it using 3 different organometallic NPs in terms of size and composition. While the ranking predicted in a classical theoretical scenario did not fit the reference results at all, in contrast, we showed for the first time how our accelerated in silico virtual screening method, based on basic in vitro experimental data (which takes into account the NPs cell biodistribution), was able to predict a relevant ranking in accordance with in vitro clonogenic efficiency. This corroborates the pertinence of such a prior ranking method that could speed up the preclinical development of NPs in radiation therapy. PMID:27920524
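
    The role of biodistribution weighting in such a Monte Carlo ranking can be caricatured in a few lines. Everything here is hypothetical (particle names, enhancement factors, uptake fractions); it only shows why a screening that accounts for cellular uptake can rank candidates differently from a classical scenario that ignores it.

```python
import random

random.seed(42)

# Hypothetical NP candidates: (name, local dose-enhancement factor when an
# event occurs near an internalised particle, cellular uptake fraction).
# None of these numbers come from the article.
candidates = [("NP-A", 1.8, 0.05), ("NP-B", 1.3, 0.40), ("NP-C", 2.5, 0.02)]

def mean_enhancement(factor, uptake, n=100_000):
    """Monte Carlo estimate of mean dose enhancement once the cell
    biodistribution (uptake) is taken into account."""
    total = 0.0
    for _ in range(n):
        total += factor if random.random() < uptake else 1.0
    return total / n

ranking = sorted(candidates, key=lambda c: mean_enhancement(c[1], c[2]),
                 reverse=True)
print([name for name, _, _ in ranking])  # NP-B leads despite its modest factor
```

    Ranked purely by per-particle enhancement (the classical scenario), NP-C would come first; weighting by uptake reverses the order.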

  14. Platform construction and extraction mechanism study of magnetic mixed hemimicelles solid-phase extraction

    NASA Astrophysics Data System (ADS)

    Xiao, Deli; Zhang, Chan; He, Jia; Zeng, Rong; Chen, Rong; He, Hua

    2016-12-01

    A simple, accurate and high-throughput pretreatment method would facilitate large-scale studies of trace analysis in complex samples. Magnetic mixed hemimicelles solid-phase extraction has the potential to become a key pretreatment method in biological, environmental and clinical research. However, a lack of experimental predictability and an unclear extraction mechanism limit the development of this promising method. Herein, this work seeks to establish theoretically based experimental designs for the extraction of trace analytes from complex samples using magnetic mixed hemimicelles solid-phase extraction. We selected three categories and six sub-types of compounds for a systematic comparative study of the extraction mechanism, and comprehensively illustrated the roles of the different forces (hydrophobic interaction, π-π stacking interactions, hydrogen-bonding interaction, electrostatic interaction) for the first time. Moreover, application guidelines for supporting materials, surfactants and sample matrices are also summarized. The extraction mechanism and platform established in this study make it promising for foreseeable and efficient pretreatment, under theoretically based experimental design, of trace analytes from environmental, biological and clinical samples.

  15. Monte Carlo simulations guided by imaging to predict the in vitro ranking of radiosensitizing nanoparticles.

    PubMed

    Retif, Paul; Reinhard, Aurélie; Paquot, Héna; Jouan-Hureaux, Valérie; Chateau, Alicia; Sancey, Lucie; Barberi-Heyob, Muriel; Pinel, Sophie; Bastogne, Thierry

    This article addresses the in silico-in vitro prediction issue of organometallic nanoparticles (NPs)-based radiosensitization enhancement. The goal was to carry out computational experiments to quickly identify efficient nanostructures and then to preferentially select the most promising ones for the subsequent in vivo studies. To this aim, this interdisciplinary article introduces a new theoretical Monte Carlo computational ranking method and tests it using 3 different organometallic NPs in terms of size and composition. While the ranking predicted in a classical theoretical scenario did not fit the reference results at all, in contrast, we showed for the first time how our accelerated in silico virtual screening method, based on basic in vitro experimental data (which takes into account the NPs cell biodistribution), was able to predict a relevant ranking in accordance with in vitro clonogenic efficiency. This corroborates the pertinence of such a prior ranking method that could speed up the preclinical development of NPs in radiation therapy.

  16. Analysis of single-degree-of-freedom piezoelectric energy harvester with stopper by incremental harmonic balance method

    NASA Astrophysics Data System (ADS)

    Zhao, Dan; Wang, Xiaoman; Cheng, Yuan; Liu, Shaogang; Wu, Yanhong; Chai, Liqin; Liu, Yang; Cheng, Qianju

    2018-05-01

    A piecewise-linear structure can effectively broaden the working frequency band of a piezoelectric energy harvester, and progress in its analysis can help move energy-harvesting devices toward practical use in powering microelectronic components. In this paper, the incremental harmonic balance (IHB) method is introduced to address the otherwise complicated and difficult analysis of the piezoelectric energy harvester. After the nonlinear dynamic equation of the single-degree-of-freedom piecewise-linear energy harvester is obtained by mathematical modeling and solved based on the IHB method, the theoretical amplitude-frequency curve of the open-circuit voltage is obtained. Under 0.2 g harmonic excitation, a piecewise-linear energy harvester is experimentally tested by unidirectional frequency-increasing scanning. The results demonstrate that the theoretical and experimental amplitudes have the same trend; the widths of the working band with high voltage output are 4.9 Hz and 4.7 Hz, respectively, a relative error of 4.08%, and the peak open-circuit voltages are 21.53 V and 18.25 V, respectively, a relative error of 15.23%. Since the theoretical values are consistent with the experimental results, the theoretical model and the incremental harmonic balance method used in this paper are suitable for solving single-degree-of-freedom piecewise-linear piezoelectric energy harvesters and can be applied to further parameter-optimized design.

  17. [Theoretical and methodological uses of research in Social and Human Sciences in Health].

    PubMed

    Deslandes, Suely Ferreira; Iriart, Jorge Alberto Bernstein

    2012-12-01

    The current article aims to map and critically reflect on the current theoretical and methodological uses of research in the subfield of social and human sciences in health. A convenience sample was used to select three Brazilian public health journals. Based on a reading of 1,128 abstracts published from 2009 to 2010, 266 articles were selected that presented the empirical base of research stemming from social and human sciences in health. The sample was classified thematically as "theoretical/methodological reference", "study type/methodological design", "analytical categories", "data production techniques", and "analytical procedures". We analyze the sample's emic categories, drawing on the authors' literal statements. All the classifications and respective variables were tabulated in Excel. Most of the articles were self-described as qualitative and used more than one data production technique. There was a wide variety of theoretical references, in contrast with the almost total predominance of a single type of data analysis (content analysis). In several cases, important gaps were identified in the exposition of the study methodology and in the instrumental use of qualitative research techniques and methods. However, the review did highlight some new objects of study and innovations in theoretical and methodological approaches.

  18. Resonance-Based Sparse Signal Decomposition and its Application in Mechanical Fault Diagnosis: A Review.

    PubMed

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-06-03

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction to RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to different optimization directions, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.

  19. Resonance-Based Sparse Signal Decomposition and Its Application in Mechanical Fault Diagnosis: A Review

    PubMed Central

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-01-01

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction to RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to different optimization directions, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, as well as corresponding instructional solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis. PMID:28587198

  20. MultivariateResidues: A Mathematica package for computing multivariate residues

    NASA Astrophysics Data System (ADS)

    Larsen, Kasper J.; Rietkerk, Robbert

    2018-01-01

    Multivariate residues appear in many different contexts in theoretical physics and algebraic geometry. In theoretical physics, for example, they give the proper definition of generalized-unitarity cuts, and they play a central role in the Grassmannian formulation of the S-matrix by Arkani-Hamed et al. In realistic cases their evaluation can be non-trivial. In this paper we provide a Mathematica package for the efficient evaluation of multivariate residues based on methods from computational algebraic geometry.
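
    As a textbook illustration of the object the package computes (a standard example, not drawn from the paper):

```latex
% Global residue of a simple rational two-form at the origin.
\[
\operatorname{Res}_{(0,0)}
\frac{\mathrm{d}z_1 \wedge \mathrm{d}z_2}{z_1\, z_2}
= \frac{1}{(2\pi i)^2}
\oint_{|z_1|=\epsilon} \oint_{|z_2|=\epsilon}
\frac{\mathrm{d}z_1\, \mathrm{d}z_2}{z_1 z_2} = 1 .
\]
% More generally, at a nondegenerate common zero z* of f = (f_1, f_2):
\[
\operatorname{Res}_{z^\ast}
\frac{h(z)\,\mathrm{d}z_1 \wedge \mathrm{d}z_2}{f_1(z)\, f_2(z)}
= \frac{h(z^\ast)}{\det\!\bigl(\partial f_i/\partial z_j\bigr)\big|_{z^\ast}} .
\]
```

    The degenerate cases, where the Jacobian vanishes, are where the computational algebraic geometry methods of the package become necessary.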

  1. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution.

    PubMed

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M; Bai, Ruibin

    2016-11-16

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research aim is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as the real observation conditions could differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes of the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition.
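
    The building block of such signal selection, forming integer combinations of the three carriers and examining their wavelengths, can be sketched as follows. The frequencies are the standard GPS L1/L2/L5 values; the (1, -1, 0) and (0, 1, -1) combinations shown are the classical wide-lane examples, not the paper's selected optima.

```python
C = 299_792_458.0                              # speed of light, m/s
F1, F2, F5 = 1575.42e6, 1227.60e6, 1176.45e6   # GPS L1/L2/L5 frequencies, Hz

def combined_wavelength(i, j, k):
    """Wavelength of the (i, j, k) integer combination of L1, L2, L5."""
    f = i * F1 + j * F2 + k * F5
    return C / f

wl = combined_wavelength(1, -1, 0)   # classical wide-lane, ~0.86 m
ewl = combined_wavelength(0, 1, -1)  # extra-wide-lane, ~5.86 m
print(round(wl, 3), round(ewl, 3))
```

    Longer combined wavelengths make the integer ambiguity easier to fix, at the price of amplified measurement noise, which is the trade-off the selection method balances.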

  2. A Theoretical and Empirical Integrated Method to Select the Optimal Combined Signals for Geometry-Free and Geometry-Based Three-Carrier Ambiguity Resolution

    PubMed Central

    Zhao, Dongsheng; Roberts, Gethin Wyn; Lau, Lawrence; Hancock, Craig M.; Bai, Ruibin

    2016-01-01

    Twelve GPS Block IIF satellites, out of the current constellation, can transmit three-frequency signals (L1, L2, L5). Taking advantage of these signals, Three-Carrier Ambiguity Resolution (TCAR) is expected to bring much benefit for ambiguity resolution. One research aim is to find the optimal combined signals for better ambiguity resolution in geometry-free (GF) and geometry-based (GB) mode. However, existing research selects the signals through either pure theoretical analysis or testing with simulated data, which might be biased, as the real observation conditions could differ from theoretical prediction or simulation. In this paper, we propose a theoretical and empirical integrated method, which first selects the possible optimal combined signals in theory and then refines these signals with real triple-frequency GPS data observed at eleven baselines of different lengths. An interpolation technique is also adopted in order to show changes of the AR performance with increasing baseline length. The results show that the AR success rate can be improved by 3% in GF mode and 8% in GB mode at certain intervals of baseline length. Therefore, TCAR can perform better by adopting the combined signals proposed in this paper when the baseline meets the length condition. PMID:27854324

  3. A rigorous and simpler method of image charges

    NASA Astrophysics Data System (ADS)

    Ladera, C. L.; Donoso, G.

    2016-07-01

    The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which image charges are to be placed, and where, to solve the given electrostatics problem. Here we present an alternative image charges method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The required uniqueness and, more importantly, the guessing are therefore both dispensed with altogether. Our two theoretical fundaments also allow the image charges method to be introduced in earlier physics courses for engineering and science students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
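
    The constant-potential boundary condition at the heart of the argument is easy to check numerically for the textbook case of a point charge above a grounded plane. A minimal sketch (the charge and distance values are arbitrary):

```python
import math

K = 8.9875517923e9   # Coulomb constant, N m^2 / C^2
q, d = 1e-9, 0.05    # real charge (C) at height d (m) above the plane z = 0

def potential(x, y, z):
    """Potential of charge q at (0, 0, d) plus its image -q at (0, 0, -d)."""
    r_plus = math.dist((x, y, z), (0, 0, d))
    r_minus = math.dist((x, y, z), (0, 0, -d))
    return K * q * (1 / r_plus - 1 / r_minus)

# The grounded conductor is an equipotential: V vanishes everywhere on z = 0.
print(potential(0.03, -0.02, 0.0))  # 0.0 (up to rounding)
```

    By symmetry the two distances are equal at every point of the plane, so the superposition cancels exactly, which is the geometric-symmetry argument the article builds on.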

  4. Reply to comment by Fred L. Ogden et al. on "Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response"

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2017-07-01

    Though Ogden et al. list several shortcomings of the original SCS-CN method, fit for purpose is a key consideration in hydrological modelling, as shown by the adoption of SCS-CN method in many design standards. The theoretical framework of Bartlett et al. [2016a] reveals a family of semidistributed models, of which the SCS-CN method is just one member. Other members include event-based versions of the Variable Infiltration Capacity (VIC) model and TOPMODEL. This general model allows us to move beyond the limitations of the original SCS-CN method under different rainfall-runoff mechanisms and distributions for soil and rainfall variability. Future research should link this general model approach to different hydrogeographic settings, in line with the call for action proposed by Ogden et al.
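
    For reference, the original SCS-CN member of this model family computes event runoff from rainfall depth and a curve number. A minimal sketch of the standard formula in SI units (not the generalized framework of Bartlett et al.):

```python
def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
    """Event runoff (mm) from the classical SCS-CN method.
    S is the potential maximum retention; Ia the initial abstraction."""
    s = 25400.0 / cn - 254.0   # SI (mm) form of S = 1000/CN - 10 (inches)
    ia = ia_ratio * s
    if p_mm <= ia:
        return 0.0             # all rainfall is abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(50.0, 80))   # moderate runoff for CN = 80
print(scs_cn_runoff(50.0, 100))  # CN = 100: all rainfall becomes runoff
```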

  5. High sensitivity phase retrieval method in grating-based x-ray phase contrast imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhao; Gao, Kun; Chen, Jian

    2015-02-15

    Purpose: Grating-based x-ray phase contrast imaging is considered one of the most promising techniques for future medical imaging. Many different methods have been developed to retrieve the phase signal, among which the phase stepping (PS) method is widely used. However, further practical implementations are hindered by its complex scanning mode and high radiation dose. In contrast, the reverse projection (RP) method is a novel fast and low-dose extraction approach. In this contribution, the authors present a quantitative analysis of the noise properties of the refraction signals retrieved by the two methods and compare their sensitivities. Methods: Using the error propagation formula, the authors analyze theoretically the signal-to-noise ratios (SNRs) of the refraction images retrieved by the two methods. Then, the sensitivities of the two extraction methods are compared under an identical exposure dose. Numerical experiments are performed to validate the theoretical results and provide some quantitative insight. Results: The SNRs of the two methods are both dependent on the system parameters, but in different ways. Comparison between their sensitivities reveals that for the refraction signal, the RP method possesses a higher sensitivity, especially in the case of high visibility and/or at the edge of the object. Conclusions: Compared with the PS method, the RP method has a superior sensitivity and provides refraction images with a higher SNR. Therefore, one can obtain highly sensitive refraction images in grating-based phase contrast imaging. This is very important for future preclinical and clinical implementations.
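
    The PS retrieval analysed above extracts the phase of the sinusoidal stepping curve recorded as one grating is stepped over its period; the refraction signal is proportional to the retrieved phase shift. A minimal noise-free sketch (all parameter values are illustrative):

```python
import cmath
import math

# Simulated phase-stepping curve: N intensity samples over one grating period.
N, a0, a1, phi_true = 8, 100.0, 30.0, 0.7   # offset, amplitude, phase (toy)
curve = [a0 + a1 * math.cos(2 * math.pi * k / N + phi_true) for k in range(N)]

# PS retrieval: the phase is the argument of the first Fourier coefficient.
c1 = sum(I * cmath.exp(-2j * math.pi * k / N)
         for k, I in enumerate(curve)) / N
phi_retrieved = cmath.phase(c1)
print(round(phi_retrieved, 6))  # recovers 0.7
```

    With photon noise added to `curve`, the variance of `phi_retrieved` follows the error-propagation analysis the article develops.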

  6. Bringing the Teacher into Teacher Preparation: Learning from Mentor Teachers in Joint Methods Activities

    ERIC Educational Resources Information Center

    Wood, Marcy B.; Turner, Erin E.

    2015-01-01

    Studies of mathematics teacher preparation frequently lament the divide between the more theoretically based university methods course and the practically grounded classroom field experience. In many instances, attempts to mediate this gap involve creating hybrid or third spaces, which seek to dissipate the differences in knowledge status as…

  7. Cost estimating methods for advanced space systems

    NASA Technical Reports Server (NTRS)

    Cyr, Kelley

    1988-01-01

    The development of parametric cost estimating methods for advanced space systems in the conceptual design phase is discussed. The process of identifying variables which drive cost and the relationship between weight and cost are discussed. A theoretical model of cost is developed and tested using a historical data base of research and development projects.
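A parametric cost estimating relationship of the kind described is commonly a power law in a cost driver such as weight, fit to historical projects. The sketch below is illustrative only: the data points, the power-law form, and the coefficients are assumptions, not the record's actual model.

```python
import numpy as np

# Hypothetical historical database: dry mass (kg) and development cost ($M).
weight = np.array([120.0, 340.0, 800.0, 1500.0, 2600.0])
cost = np.array([55.0, 120.0, 230.0, 390.0, 600.0])

# Fit the classic power-law cost estimating relationship cost = a * weight^b
# by ordinary least squares in log-log space.
b, log_a = np.polyfit(np.log(weight), np.log(cost), 1)
a = np.exp(log_a)

def estimate_cost(w):
    # Parametric cost estimate for a new system of mass w (kg).
    return a * w ** b
```

An exponent b below 1 reflects the economies of scale typically seen in such weight-based relationships.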

  8. Calculation of the bending stresses in helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    De Guillenchmidt, P

    1951-01-01

    A comparatively rapid method is presented for determining theoretically the bending stresses of helicopter rotor blades in forward flight. The method is based on the analysis of the properties of a vibrating beam, and its uniqueness lies in the simple solution of the differential equation which governs the motion of the bent blades.

  9. Teaching Inorganic Photophysics and Photochemistry with Three Ruthenium(II) Polypyridyl Complexes: A Computer-Based Exercise

    ERIC Educational Resources Information Center

    Garino, Claudio; Terenzi, Alessio; Barone, Giampaolo; Salassa, Luca

    2016-01-01

    Among computational methods, DFT (density functional theory) and TD-DFT (time-dependent DFT) are widely used in research to describe, "inter alia," the optical properties of transition metal complexes. Inorganic/physical chemistry courses for undergraduate students treat such methods, but quite often only from the theoretical point of…

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bogdanov, Yu. I., E-mail: bogdanov-yurii@inbox.ru; Avosopyants, G. V.; Belinskii, L. V.

    We describe a new method for reconstructing the quantum state of the electromagnetic field from the results of mutually complementary optical quadrature measurements. The method is based on the root approach, with displaced squeezed Fock states used as the basis. Theoretical analysis and numerical experiments demonstrate the considerable advantage of the developed tools over those described in the literature.

  11. Identification of the critical depth-of-cut through a 2D image of the cutting region resulting from taper cutting of brittle materials

    NASA Astrophysics Data System (ADS)

    Gu, Wen; Zhu, Zhiwei; Zhu, Wu-Le; Lu, Leyao; To, Suet; Xiao, Gaobo

    2018-05-01

    An automatic identification method for obtaining the critical depth-of-cut (DoC) of brittle materials with nanometric accuracy and sub-nanometric uncertainty is proposed in this paper. With this method, a two-dimensional (2D) microscopic image of the taper cutting region is captured and further processed by image analysis to extract the margin of generated micro-cracks in the imaging plane. Meanwhile, an analytical model is formulated to describe the theoretical curve of the projected cutting points on the imaging plane with respect to a specified DoC during the whole cutting process. By adopting differential evolution algorithm-based minimization, the critical DoC can be identified by minimizing the deviation between the extracted margin and the theoretical curve. The proposed method is demonstrated through both numerical simulation and experimental analysis. Compared with conventional 2D- and 3D-microscopic-image-based methods, determination of the critical DoC in this study uses the envelope profile rather than the onset point of the generated cracks, providing a more objective approach with smaller uncertainty.
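The identification step described, minimizing the deviation between an extracted crack margin and a theoretical projection curve with a differential evolution algorithm, can be sketched as follows. The quadratic curve model, the synthetic margin data, and the parameter bounds are hypothetical stand-ins for the paper's actual projection model.

```python
import numpy as np
from scipy.optimize import differential_evolution

def theoretical_curve(x, doc, k):
    # Hypothetical projected cutting-point curve for a given depth-of-cut.
    return doc - k * x**2

# Synthetic "extracted margin" generated from an assumed true critical DoC.
x = np.linspace(-1.0, 1.0, 81)
margin = theoretical_curve(x, 0.35, 0.2)

def deviation(params):
    # Sum of squared deviations between the margin and the theoretical curve.
    doc, k = params
    return np.sum((margin - theoretical_curve(x, doc, k)) ** 2)

result = differential_evolution(deviation, bounds=[(0.0, 1.0), (0.0, 1.0)], seed=1)
critical_doc = result.x[0]
```

With noise-free synthetic data the minimizer recovers the assumed critical DoC; on real images the objective would be evaluated against the extracted micro-crack envelope.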

  12. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    PubMed

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still obtain a state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as a special case of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
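Of the methods unified in this record, two-step kernel ridge regression has a particularly compact closed form: ridge regression is applied first over one set of objects and then over the other. A minimal sketch on generic kernel matrices (the variable names and demo data are illustrative, not from the paper):

```python
import numpy as np

def two_step_krr_fit(Kx, Ky, Y, lam_x=1.0, lam_y=1.0):
    # Two-step kernel ridge regression: ridge over the x-objects (rows),
    # then ridge over the y-objects (columns).
    nx, ny = Kx.shape[0], Ky.shape[0]
    A = np.linalg.solve(Kx + lam_x * np.eye(nx), Y)       # step 1: x-side
    A = np.linalg.solve(Ky + lam_y * np.eye(ny), A.T).T   # step 2: y-side
    return A  # dual coefficients

def two_step_krr_predict(Kx_new, Ky_new, A):
    # Pairwise prediction for new (x, y) object pairs.
    return Kx_new @ A @ Ky_new.T

# Demo on random full-rank kernels: with negligible regularization the
# method should interpolate the training labels.
rng = np.random.default_rng(0)
X, U = rng.standard_normal((5, 3)), rng.standard_normal((4, 3))
Kx = X @ X.T + 0.1 * np.eye(5)
Ky = U @ U.T + 0.1 * np.eye(4)
Y = rng.standard_normal((5, 4))
A = two_step_krr_fit(Kx, Ky, Y, lam_x=1e-9, lam_y=1e-9)
P = two_step_krr_predict(Kx, Ky, A)
```

The two solves make the squared-loss minimization explicit, which is the property the record's analysis builds on.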

  13. Recruiting for values in healthcare: a preliminary review of the evidence.

    PubMed

    Patterson, Fiona; Prescott-Clements, Linda; Zibarras, Lara; Edwards, Helena; Kerrin, Maire; Cousans, Fran

    2016-10-01

    Displaying compassion, benevolence and respect, and preserving the dignity of patients are important for any healthcare professional to ensure the provision of high quality care and patient outcomes. This paper presents a structured search and thematic review of the research evidence relating to values-based recruitment within healthcare. Several different databases, journals and government reports were searched to retrieve studies relating to values-based recruitment published between 1998 and 2013, both in healthcare settings and other occupational contexts. There is limited published research related to values-based recruitment directly, so the available theoretical context of values is explored alongside an analysis of the impact of value congruence. The implications for the design of selection methods to measure values are explored beyond the scope of the initial literature search. Research suggests some selection methods may be appropriate for values-based recruitment, such as situational judgment tests (SJTs), structured interviews and multiple-mini interviews (MMIs). Personality tests were also identified as having the potential to complement other methods (e.g. structured interviews), as part of a values-based recruitment agenda. Methods including personal statements, references and unstructured/'traditional' interviews were identified as inappropriate for values-based recruitment. Practical implications are discussed in the context of values-based recruitment in the healthcare context. Theoretical implications of our findings imply that prosocial implicit trait policies, which could be measured by selection tools such as SJTs and MMIs, may be linked to individuals' values via the behaviours individuals consider to be effective in given situations. However, further research is required to state this conclusively, and methods for values-based recruitment represent an exciting and relatively uncharted territory for further research.

  14. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment

    PubMed Central

    Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A.

    2008-01-01

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric-titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH) and dimethylsulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the Electrostatically Driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, MD simulations are run with the AMBER force field and the Generalized-Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethyl amine and propyl amine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method.
The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach, and the titration curve in water calculated using the MD-based approach, have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the non-aqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models. PMID:16509748

  15. Assessment of two theoretical methods to estimate potentiometric titration curves of peptides: comparison with experiment.

    PubMed

    Makowska, Joanna; Bagiñska, Katarzyna; Makowski, Mariusz; Jagielska, Anna; Liwo, Adam; Kasprzykowski, Franciszek; Chmurzyñski, Lech; Scheraga, Harold A

    2006-03-09

    We compared the ability of two theoretical methods of pH-dependent conformational calculations to reproduce experimental potentiometric titration curves of two models of peptides: Ac-K5-NHMe in 95% methanol (MeOH)/5% water mixture and Ac-XX(A)7OO-NH2 (XAO) (where X is diaminobutyric acid, A is alanine, and O is ornithine) in water, methanol (MeOH), and dimethyl sulfoxide (DMSO), respectively. The titration curve of the former was taken from the literature, and the curve of the latter was determined in this work. The first theoretical method involves a conformational search using the electrostatically driven Monte Carlo (EDMC) method with a low-cost energy function (ECEPP/3 plus the SRFOPT surface-solvation model, assuming that all titratable groups are uncharged) and subsequent reevaluation of the free energy at a given pH with the Poisson-Boltzmann equation, considering variable protonation states. In the second procedure, molecular dynamics (MD) simulations are run with the AMBER force field and the generalized Born model of electrostatic solvation, and the protonation states are sampled during constant-pH MD runs. In all three solvents, the first pKa of XAO is strongly downshifted compared to the value for the reference compounds (ethylamine and propylamine, respectively); the water and methanol curves have one, and the DMSO curve has two jumps characteristic of remarkable differences in the dissociation constants of acidic groups. The predicted titration curves of Ac-K5-NHMe are in good agreement with the experimental ones; better agreement is achieved with the MD-based method.
The titration curves of XAO in methanol and DMSO, calculated using the MD-based approach, trace the shape of the experimental curves, reproducing the pH jump, while those calculated with the EDMC-based approach and the titration curve in water calculated using the MD-based approach have smooth shapes characteristic of the titration of weak multifunctional acids with small differences between the dissociation constants. Nevertheless, quantitative agreement between theoretically predicted and experimental titration curves is not achieved in all three solvents even with the MD-based approach, which is manifested by a smaller pH range of the calculated titration curves with respect to the experimental curves. The poorer agreement obtained for water than for the nonaqueous solvents suggests a significant role of specific solvation in water, which cannot be accounted for by the mean-field solvation models.

  16. A comparative study of theoretical graph models for characterizing structural networks of human brain.

    PubMed

    Li, Xiaojin; Hu, Xintao; Jin, Changfeng; Han, Junwei; Liu, Tianming; Guo, Lei; Hao, Wei; Li, Lingjiang

    2013-01-01

    Previous studies have investigated both structural and functional brain networks via graph-theoretical methods. However, an important issue has not been adequately discussed: what is the optimal theoretical graph model for describing the structural networks of the human brain? In this paper, we perform a comparative study to address this problem. First, large-scale cortical regions of interest (ROIs) are localized by a recently developed and validated brain reference system, Dense Individualized Common Connectivity-based Cortical Landmarks (DICCCOL), to address the limitations in brain network ROI identification in previous studies. Then, we construct structural brain networks based on diffusion tensor imaging (DTI) data. Afterwards, the global and local graph properties of the constructed structural brain networks are measured using state-of-the-art graph analysis algorithms and tools and are compared with seven popular theoretical graph models. In addition, we compare the topological properties of the two graph models, the stickiness-index-based model (STICKY) and the scale-free gene duplication model (SF-GD), that show the highest similarity to the real structural brain networks in terms of global and local graph properties. Our experimental results suggest that, among the seven theoretical graph models compared in this study, the STICKY and SF-GD models perform best in characterizing the structural human brain network.
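The global and local graph properties referred to here typically include the average clustering coefficient (local) and characteristic path length (global), which are then compared across candidate graph models. A dependency-free sketch of the two metrics on a plain adjacency-set representation (the ring-lattice example is illustrative, not brain data):

```python
from collections import deque

def avg_clustering(adj):
    # adj maps node -> set of neighbours (undirected graph).
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # Count edges among the neighbours of v (each unordered pair once).
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def char_path_length(adj):
    # Mean shortest-path length over all reachable ordered pairs, via BFS.
    total = pairs = 0
    for s in adj:
        dist, queue = {s: 0}, deque([s])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Ring lattice on 10 nodes, each linked to its 2 nearest neighbours per side.
n = 10
ring = {v: {(v + d) % n for d in (-2, -1, 1, 2)} for v in range(n)}
```

For this ring lattice the clustering coefficient is exactly 0.5, a standard check for such implementations.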

  17. Investigating the Impact of a LEGO(TM)-Based, Engineering-Oriented Curriculum Compared to an Inquiry-Based Curriculum on Fifth Graders' Content Learning of Simple Machines

    ERIC Educational Resources Information Center

    Marulcu, Ismail

    2010-01-01

    This mixed method study examined the impact of a LEGO-based, engineering-oriented curriculum compared to an inquiry-based curriculum on fifth graders' content learning of simple machines. This study takes a social constructivist theoretical stance that science learning involves learning scientific concepts and their relations to each other. From…

  18. Use abstracted patient-specific features to assist an information-theoretic measurement to assess similarity between medical cases

    PubMed Central

    Cao, Hui; Melton, Genevieve B.; Markatou, Marianthi; Hripcsak, George

    2008-01-01

    Inter-case similarity metrics can potentially help find similar cases in a case base for evidence-based practice. While several methods to measure similarity between cases have been proposed, developing an effective means for measuring patient case similarity remains a challenging problem. We were interested in examining how abstraction could assist in computing case similarity. In this study, abstracted patient-specific features from medical records were used to improve an existing information-theoretic measurement. The developed metric, using a combination of abstracted disease, finding, procedure and medication features, achieved correlations between 0.6012 and 0.6940 with expert judgments. PMID:18487093
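Information-theoretic similarity measures of this kind are typically of the Lin type, where the similarity of two concepts depends on the information content of their least common subsumer in a hierarchy. A toy sketch (the concept hierarchy, frequencies, and names are invented for illustration; the paper's actual measure may differ):

```python
import math

# Toy concept hierarchy (child -> parent); None marks the root.
PARENT = {
    "disease": None,
    "cardiac_disease": "disease",
    "arrhythmia": "cardiac_disease",
    "heart_failure": "cardiac_disease",
    "renal_disease": "disease",
}
# Assumed corpus frequencies used to estimate p(concept); counts include
# all descendants, as in standard information-content estimation.
FREQ = {"disease": 100, "cardiac_disease": 40, "arrhythmia": 15,
        "heart_failure": 25, "renal_disease": 30}

def ancestors(c):
    out = set()
    while c is not None:
        out.add(c)
        c = PARENT[c]
    return out

def ic(c):
    # Information content: -log p(c), with the root as the normalizer.
    return -math.log(FREQ[c] / FREQ["disease"])

def lin_similarity(c1, c2):
    # Lin's measure: 2 * IC(least common subsumer) / (IC(c1) + IC(c2)).
    common = ancestors(c1) & ancestors(c2)
    ic_lcs = max(ic(c) for c in common)
    denom = ic(c1) + ic(c2)
    return 1.0 if denom == 0 else 2.0 * ic_lcs / denom
```

Two cardiac concepts score higher than a cardiac-renal pair because their subsumer is more informative, which is the behaviour an abstraction-aware case metric exploits.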

  19. Analysis of the tunable asymmetric fiber F-P cavity for fiber sensor edge-filter demodulation

    NASA Astrophysics Data System (ADS)

    Chen, Haitao; Liang, Youcheng

    2014-12-01

    An asymmetric fiber Fabry-Pérot (F-P) interferometric cavity with good linearity and wide dynamic range is designed based on the characteristic matrix theory of optical thin films; by choosing two different thin metallic layer materials, the asymmetric fiber F-P interferometric cavity is fabricated by depositing multilayer thin films on the optical fiber's end face. A demodulation method for the wavelength shift of a fiber Bragg grating (FBG) sensor based on the F-P cavity is demonstrated, and a theoretical formula is obtained. The experimental results coincide well with computational results obtained from the theoretical model.

  20. Halfway Houses for Alcohol Dependents: From Theoretical Bases to Implications for the Organization of Facilities

    PubMed Central

    Reis, Alessandra Diehl; Laranjeira, Ronaldo

    2008-01-01

    The purpose of this paper is to provide a narrative review of the concepts, history, functions, methods, development and theoretical bases for the use of halfway houses for patients with mental disorders, and their relevance to the construction of a network of care for chemical dependence. Despite its relevance, this theme is still infrequently explored in the national literature. The authors report international and national uses of this model and discuss its applicability to the continuity of services for alcohol dependents. The results suggest that this area needs more attention and interest in future research. PMID:19061008

  1. The in Silico Insight into Carbon Nanotube and Nucleic Acid Bases Interaction.

    PubMed

    Karimi, Ali Asghar; Ghalandari, Behafarid; Tabatabaie, Seyed Saleh; Farhadi, Mohammad

    2016-05-01

    To explore practical applications of carbon nanotubes (CNTs) in biomedical fields, the properties of their interactions with biomolecules must be revealed. In recent years, the interaction of CNTs with biomolecules has been an active research subject, and previous work has shown that CNTs have structural properties complementary to single-stranded DNA (ssDNA). In this study, the interactions between nucleic acid bases and a (4, 4) single-walled carbon nanotube (SCNT) were investigated through quantum mechanics (QM) calculations at the Hartree-Fock (HF) level of theory using the 6-31G basis set. Physical properties such as electronic energy, total dipole moment, charge distribution and binding energy of the nucleic acid base-SCNT interactions were evaluated. The guanine base was found to bind more strongly to the outer surface of the nanotube than the other bases, consistent with recent theoretical studies; in other words, the guanine-SCNT interaction has the lowest electronic energy, so that complex is the most stable. The calculations also illustrated that the SCNT interacts with nucleic acid bases noncovalently, because the charge distribution creates an electrostatic region at the interaction site. Consequently, the interaction of small-diameter SCNTs with nucleic acid bases is noncovalent. The results further revealed that small-diameter SCNTs, especially the (4, 4) SCNT, interacting with nucleic acid bases can be useful in practical biomedical applications such as detection and drug delivery.

  2. Molecular beam epitaxy growth method for vertical-cavity surface-emitting laser resonators based on substrate thermal emission

    NASA Astrophysics Data System (ADS)

    Talghader, J. J.; Hadley, M. A.; Smith, J. S.

    1995-12-01

    A molecular beam epitaxy growth monitoring method is developed for distributed Bragg reflectors and vertical-cavity surface-emitting laser (VCSEL) resonators. The wavelength of the substrate thermal emission that corresponds to the optical cavity resonant wavelength is selected by a monochromator and monitored during growth. This method allows VCSEL cavities of arbitrary design wavelength to be grown with a single control program. This letter also presents a theoretical model for the technique which is based on transmission matrices and simple thermal emission properties. Demonstrated reproducibility of the method is well within 0.1%.
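The transmission-matrix model mentioned can be sketched with the standard characteristic-matrix method for a quarter-wave distributed Bragg reflector at normal incidence. The refractive indices, pair count, and design wavelength below are illustrative values, not those of the letter:

```python
import numpy as np

def layer_matrix(n, d, lam):
    # Characteristic matrix of one dielectric layer at normal incidence.
    delta = 2.0 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def dbr_reflectance(lam, n1=3.0, n2=3.5, pairs=20,
                    lam0=980e-9, n_in=1.0, n_sub=3.5):
    # Quarter-wave stack: each layer is lam0 / (4 n) thick at design lam0.
    M = np.eye(2, dtype=complex)
    for _ in range(pairs):
        M = M @ layer_matrix(n1, lam0 / (4 * n1), lam)
        M = M @ layer_matrix(n2, lam0 / (4 * n2), lam)
    B, C = M @ np.array([1.0, n_sub])
    r = (n_in * B - C) / (n_in * B + C)
    return abs(r) ** 2

peak = dbr_reflectance(980e-9)          # at the design wavelength
detuned = dbr_reflectance(1.5 * 980e-9) # well outside the stopband
```

Reflectance peaks at the design wavelength and drops off the stopband, which is what makes the cavity-resonance wavelength observable in the substrate thermal emission.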

  3. Examinations of electron temperature calculation methods in Thomson scattering diagnostics.

    PubMed

    Oh, Seungtae; Lee, Jong Ha; Wi, Hanmin

    2012-10-01

    The electron temperature from a Thomson scattering diagnostic is derived through indirect calculation based on a theoretical model. A χ-square test is commonly used in the calculation, and the reliability of the calculation method depends strongly on the noise level of the input signals. In simulations, noise effects on the χ-square test are examined, and a scale factor test is proposed as an alternative method.
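The χ-square calculation described can be illustrated with a deliberately simplified spectral model. The channel response exp(-c_i/Te) below is a stand-in for the real relativistic spectral model used in Thomson scattering analysis; the channel constants and noise levels are invented:

```python
import numpy as np

# Assumed spectral-channel constants (eV); stand-ins for real filter responses.
C = np.array([50.0, 150.0, 400.0])

def channel_model(te):
    # Fraction of scattered light reaching each channel at temperature te.
    return np.exp(-C / te)

def fit_te_chisq(measured, sigma, te_grid):
    # Chi-square test over a Te grid; the overall amplitude enters linearly,
    # so it is solved analytically at each trial Te.
    best_te, best_chi2 = None, np.inf
    for te in te_grid:
        f = channel_model(te)
        amp = np.sum(measured * f / sigma**2) / np.sum(f**2 / sigma**2)
        chi2 = np.sum(((measured - amp * f) / sigma) ** 2)
        if chi2 < best_chi2:
            best_te, best_chi2 = te, chi2
    return best_te, best_chi2

# Noise-free demo: signals generated at Te = 200 eV should be recovered.
measured = 3.0 * channel_model(200.0)
te_hat, chi2 = fit_te_chisq(measured, sigma=np.full(3, 0.01),
                            te_grid=np.linspace(50.0, 500.0, 451))
```

Adding noise to `measured` and repeating the fit shows how the noise level of the input signals degrades the reliability of the χ-square estimate, which is the effect the record examines.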

  4. Free boundary skin current MHD (magnetohydrodynamic) equilibria

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reusch, M.F.

    1988-02-01

    Function-theoretic methods in the complex plane are used to develop simple parametric hodograph formulae which generate sharp-boundary equilibria of arbitrary shape. The related method of Gorenflo and Merkel is discussed. A numerical technique for the construction of solutions, based on one of these methods, is presented. A study is made of the bifurcations of an equilibrium of general form. 28 refs., 9 figs.

  5. UNO DMRG CASCI calculations of effective exchange integrals for m-phenylene-bis-methylene spin clusters

    NASA Astrophysics Data System (ADS)

    Kawakami, Takashi; Sano, Shinsuke; Saito, Toru; Sharma, Sandeep; Shoji, Mitsuo; Yamada, Satoru; Takano, Yu; Yamanaka, Shusuke; Okumura, Mitsutaka; Nakajima, Takahito; Yamaguchi, Kizashi

    2017-09-01

    Theoretical examinations of the ferromagnetic coupling in the m-phenylene-bis-methylene molecule and its oligomers were carried out. These systems are good candidates for exchange-coupled systems to investigate strong electronic correlations. We studied effective exchange integrals (J), which indicate the magnetic coupling between interacting spins in these species. First, theoretical calculations based on a broken-symmetry single-reference (SR) procedure, i.e. the UHF, UMP2, UMP4, UCCSD(T) and UB3LYP methods, were carried out with a GAUSSIAN program code. From these results, the J value from the UHF method was largely positive because of the strong ferromagnetic spin-polarisation effect; the UCCSD(T) and UB3LYP methods corrected this overestimation by including dynamical electron correlation. Next, the magnetic coupling among these spins was studied using the CAS-based procedure of symmetry-adapted multireference methods. The UNO DMRG CASCI (UNO, unrestricted natural orbital; DMRG, density matrix renormalisation group; CASCI, complete active space configuration interaction) method was mainly employed with a combination of the ORCA and BLOCK program codes. DMRG CASCI calculations in valence electron counting, including all orbitals up to full-valence CI, provided the most reliable results and support the UB3LYP method for extended systems.

  6. Theoretical investigation of metal magnetic memory testing technique for detection of magnetic flux leakage signals from buried defect

    NASA Astrophysics Data System (ADS)

    Xu, Kunshan; Qiu, Xingqi; Tian, Xiaoshuai

    2018-01-01

    The metal magnetic memory testing (MMMT) technique has been extensively applied in various fields because of its unique advantages of easy operation, low cost and high efficiency. However, very limited theoretical research has been conducted on the application of MMMT to buried defects. To promote study in this area, the equivalent magnetic charge method is employed to establish a self-magnetic flux leakage (SMFL) model of a buried defect. Theoretical results based on the established model successfully capture the basic characteristics of the SMFL signals of buried defects, as confirmed by experiment. In particular, the newly developed model can calculate the buried depth of a defect from the SMFL signals obtained in testing. The results show that the new model can successfully assess the characteristics of buried defects, which is valuable for the application of MMMT in non-destructive testing.

  7. Comparison of holographic and field theoretic complexities for time dependent thermofield double states

    NASA Astrophysics Data System (ADS)

    Yang, Run-Qiu; Niu, Chao; Zhang, Cheng-Yong; Kim, Keun-Young

    2018-02-01

    We compute the time-dependent complexity of thermofield double states using four different proposals: two holographic proposals based on the "complexity-action" (CA) and "complexity-volume" (CV) conjectures, and two quantum field theoretic proposals based on the Fubini-Study metric (FS) and Finsler geometry (FG). We find that the four proposals exhibit both similarities and differences, which will be useful for deepening our understanding of complexity and sharpening its definition. In particular, at early times the complexity increases linearly in the CV and FG proposals, decreases linearly in the FS proposal, and does not change in the CA proposal. In the late-time limit, the CA, CV and FG proposals all show that the growth rate is 2E/(πℏ), saturating Lloyd's bound, while the FS proposal shows a growth rate of zero. It seems that the holographic CV conjecture and the field theoretic FG method are the more closely correlated pair.

  8. Measuring housing quality in the absence of a monetized real estate market.

    PubMed

    Rindfuss, Ronald R; Piotrowski, Martin; Thongthai, Varachai; Prasartkul, Pramote

    2007-03-01

    Measuring housing quality or value or both has been a weak component of demographic and development research in less developed countries that lack an active real estate (housing) market. We describe a new method based on a standardized subjective rating process. It is designed to be used in settings that do not have an active, monetized housing market. The method is applied in an ongoing longitudinal study in north-east Thailand and could be straightforwardly used in many other settings. We develop a conceptual model of the process whereby households come to reside in high-quality or low-quality housing units. We use this theoretical model in conjunction with longitudinal data to show that the new method of measuring housing quality behaves as theoretically expected, thus providing evidence of face validity.

  9. Design rules for biomolecular adhesion: lessons from force measurements.

    PubMed

    Leckband, Deborah

    2010-01-01

    Cell adhesion to matrix, other cells, or pathogens plays a pivotal role in many processes in biomolecular engineering. Early macroscopic methods of quantifying adhesion led to the development of quantitative models of cell adhesion and migration. The more recent use of sensitive probes to quantify the forces that alter or manipulate adhesion proteins has revealed much greater functional diversity than was apparent from population average measurements of cell adhesion. This review highlights theoretical and experimental methods that identified force-dependent molecular properties that are central to the biological activity of adhesion proteins. Experimental and theoretical methods emphasized in this review include the surface force apparatus, atomic force microscopy, and vesicle-based probes. Specific examples given illustrate how these tools have revealed unique properties of adhesion proteins and their structural origins.

  10. On the exactness of effective Floquet Hamiltonians employed in solid-state NMR spectroscopy

    NASA Astrophysics Data System (ADS)

    Garg, Rajat; Ramachandran, Ramesh

    2017-05-01

    Development of theoretical models based on analytic theory has remained an active pursuit in molecular spectroscopy for its utility both in the design of experiments as well as in the interpretation of spectroscopic data. In particular, the role of "Effective Hamiltonians" in the evolution of theoretical frameworks is well known across all forms of spectroscopy. Nevertheless, a constant revalidation of the approximations employed in the theoretical frameworks is necessitated by the constant improvements on the experimental front in addition to the complexity posed by the systems under study. Here in this article, we confine our discussion to the derivation of effective Floquet Hamiltonians based on the contact transformation procedure. While the importance of the effective Floquet Hamiltonians in the qualitative description of NMR experiments has been realized in simpler cases, its extension in quantifying spectral data deserves a cautious approach. With this objective, the validity of the approximations employed in the derivation of the effective Floquet Hamiltonians is re-examined through a comparison with exact numerical methods under differing experimental conditions. The limitations arising from the existing analytic methods are outlined along with remedial measures for improving the accuracy of the derived effective Floquet Hamiltonians.

  11. Chaotic advection at large Péclet number: Electromagnetically driven experiments, numerical simulations, and theoretical predictions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Figueroa, Aldo; Meunier, Patrice; Villermaux, Emmanuel

    2014-01-15

    We present a combination of experiment, theory, and modelling on laminar mixing at large Péclet number. The flow is produced by oscillating electromagnetic forces in a thin electrolytic fluid layer, leading to oscillating dipoles, quadrupoles, octupoles, and disordered flows. The numerical simulations are based on the Diffusive Strip Method (DSM), which was recently introduced (P. Meunier and E. Villermaux, "The diffusive strip method for scalar mixing in two-dimensions," J. Fluid Mech. 662, 134–172 (2010)) to solve the advection-diffusion problem by combining Lagrangian techniques and theoretical modelling of the diffusion. Numerical simulations obtained with the DSM are in reasonable agreement with quantitative dye visualization experiments of the scalar fields. A theoretical model based on log-normal probability density functions (PDFs) of stretching factors, characteristic of homogeneous turbulence in the Batchelor regime, allows one to predict the PDFs of the scalar in agreement with numerical and experimental results. This model also indicates that the PDFs of the scalar are asymptotically close to log-normal at late stages, except for the large concentration levels which correspond to low stretching factors.

  12. Magnetoelastic Effect-Based Transmissive Stress Detection for Steel Strips: Theory and Experiment

    PubMed Central

    Zhang, Qingdong; Su, Yuanxiao; Zhang, Liyuan; Bi, Jia; Luo, Jiang

    2016-01-01

    To address the deficiencies of traditional stress detection methods for steel strips in industrial production, this paper proposes a non-contact stress detection scheme based on the magnetoelastic effect. A theoretical model of transmission-type stress detection is established, in which the output voltage and the tested stress obey a linear relation. Then, a stress detection device is built for the experiment, and Q235 steel under uniaxial tension is tested as an example. The result shows that the output voltage rises linearly with increasing tensile stress, consistent with the theoretical prediction. To ensure the accuracy of the stress detection method in actual application, temperature compensation, magnetic shielding and other key technologies are investigated to reduce the interference of external factors, such as environment temperature and the surrounding magnetic field. The present research establishes the theoretical and experimental foundations for the magnetic stress detection system, which can be used for online non-contact monitoring of strip flatness-related stress (tension distribution or longitudinal residual stress) in the steel strip rolling process, the quality evaluation of strip flatness after rolling, the life and safety assessment of metal construction and other industrial production links. PMID:27589742
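    Because the model predicts a linear voltage-stress relation, calibration reduces to a least-squares line fit that can then be inverted to read stress from a measured voltage. A minimal sketch with entirely hypothetical calibration numbers (the paper's actual sensitivities are not reproduced here):

    ```python
    def fit_line(x, y):
        """Ordinary least-squares fit y ~ a*x + b; returns (slope, intercept)."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return a, my - a * mx

    # Hypothetical calibration points: tensile stress (MPa) vs. output voltage (mV)
    stress_mpa = [0, 50, 100, 150, 200]
    voltage_mv = [12.0, 14.5, 17.0, 19.5, 22.0]
    slope, intercept = fit_line(stress_mpa, voltage_mv)

    # Invert the calibration to estimate an unknown stress from a measured voltage
    measured_mv = 18.0
    estimated_stress = (measured_mv - intercept) / slope   # MPa
    ```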

  13. Entering the Historical Problem Space: Whole-Class Text-Based Discussion in History Class

    ERIC Educational Resources Information Center

    Reisman, Abby

    2015-01-01

    Background/Context: The Common Core State Standards Initiative reveals how little we understand about the components of effective discussion-based instruction in disciplinary history. Although the case for classroom discussion as a core method for subject matter learning stands on stable theoretical and empirical ground, to date, none of the…

  14. Dewey's Concept of Experience for Inquiry-Based Landscape Drawing during Field Studies

    ERIC Educational Resources Information Center

    Tillmann, Alexander; Albrecht, Volker; Wunderlich, Jürgen

    2017-01-01

    The epistemological and educational philosophy of John Dewey is used as a theoretical basis to analyze processes of knowledge construction during geographical field studies. The experience of landscape drawing as a method of inquiry and a starting point for research-based learning is empirically evaluated. The basic drawing skills are acquired…

  15. Measuring Disorientation Based on the Needleman-Wunsch Algorithm

    ERIC Educational Resources Information Center

    Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel

    2015-01-01

    This study offers a new method to measure navigation disorientation in web-based systems, which are a powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
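    The alignment scoring at the heart of such an approach can be sketched in a few lines: Needleman-Wunsch global alignment compares a learner's visited-page sequence against an intended path. The page names and the scoring values (match +1, mismatch/gap -1) below are illustrative assumptions, not the study's parameters.

    ```python
    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        """Global alignment score between sequences a and b (Needleman-Wunsch)."""
        rows, cols = len(a) + 1, len(b) + 1
        # dp[i][j] = best score aligning a[:i] with b[:j]
        dp = [[0] * cols for _ in range(rows)]
        for i in range(1, rows):
            dp[i][0] = dp[i - 1][0] + gap
        for j in range(1, cols):
            dp[0][j] = dp[0][j - 1] + gap
        for i in range(1, rows):
            for j in range(1, cols):
                diag = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                dp[i][j] = max(diag, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
        return dp[-1][-1]

    # Compare a learner's visited-page sequence with the designer's intended path;
    # a lower score suggests a more disoriented navigation session.
    ideal = ["home", "unit1", "quiz1", "unit2"]
    actual = ["home", "unit1", "unit2", "quiz1", "unit2"]
    score = needleman_wunsch(ideal, actual)   # 4 matches, 1 gap -> 3
    ```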

  16. Statistical Measures for Usage-Based Linguistics

    ERIC Educational Resources Information Center

    Gries, Stefan Th.; Ellis, Nick C.

    2015-01-01

    The advent of usage-/exemplar-based approaches has resulted in a major change in the theoretical landscape of linguistics, but also in the range of methodologies that are brought to bear on the study of language acquisition/learning, structure, and use. In particular, methods from corpus linguistics are now frequently used to study distributional…

  17. An Empirical Typology of Residential Care/Assisted Living Based on a Four-State Study

    ERIC Educational Resources Information Center

    Park, Nan Sook; Zimmerman, Sheryl; Sloane, Philip D.; Gruber-Baldini, Ann L.; Eckert, J. Kevin

    2006-01-01

    Purpose: Residential care/assisted living describes diverse facilities providing non-nursing home care to a heterogeneous group of primarily elderly residents. This article derives typologies of assisted living based on theoretically and practically grounded evidence. Design and Methods: We obtained data from the Collaborative Studies of Long-Term…

  18. Why Problem-Based Learning Works: Theoretical Foundations

    ERIC Educational Resources Information Center

    Marra, Rose M.; Jonassen, David H.; Palmer, Betsy; Luft, Steve

    2014-01-01

    Problem-based learning (PBL) is an instructional method where student learning occurs in the context of solving an authentic problem. PBL was initially developed out of an instructional need to help medical school students learn their basic sciences knowledge in a way that would be more lasting while helping to develop clinical skills…

  19. Using Bourdieu’s Theoretical Framework to Examine How the Pharmacy Educator Views Pharmacy Knowledge

    PubMed Central

    2015-01-01

    Objective. To explore how different pharmacy educators view pharmacy knowledge within the United Kingdom MPharm program and to relate these findings to Pierre Bourdieu’s theoretical framework. Methods. Twelve qualitative interviews were conducted with 4 faculty members from 3 different types of schools of pharmacy in the United Kingdom: a newer school, an established teaching-based school, and an established research-intensive school. Selection was based on a representation of both science-based and practice-based disciplines, gender balance, and teaching experience. Results. The interview transcripts indicated how these members of the academic community describe knowledge. There was a polarization between science-based and practice-based educators in terms of Bourdieu’s description of field, species of capital, and habitus. Conclusion. A Bourdieusian perspective on the differences among faculty member responses supports our understanding of curriculum integration and offers some practical implications for the future development of pharmacy programs. PMID:26889065

  20. Comparison of Coarse-Grained Approaches in Predicting Polymer Nanocomposite Phase Behavior

    DOE PAGES

    Koski, Jason P.; Ferrier, Robert C.; Krook, Nadia M.; ...

    2017-11-02

    Because of the considerable parameter space, efficient theoretical and simulation methods are required to predict the morphology and guide experiments in polymer nanocomposites (PNCs). Unfortunately, theoretical and simulation methods are restricted in their ability to accurately map to experiments, owing to necessary approximations and numerical limitations. In this study, we provide direct comparisons of two recently developed coarse-grained approaches for modeling PNCs: polymer nanocomposite field theory (PNC-FT) and dynamic mean-field theory (DMFT). These methods are uniquely suited to efficiently capture the mesoscale phase behavior of PNCs in comparison to other theoretical and simulation frameworks. We demonstrate the ability of both methods to capture macrophase separation and describe the thermodynamics of PNCs. We systematically test how the nanoparticle morphology in PNCs is affected by a uniform probability distribution of grafting sites, common in field-based methods, versus random discrete grafting sites on the nanoparticle surface. We also analyze the accuracy of the mean-field approximation in capturing the phase behavior of PNCs. Moreover, the DMFT method introduces the ability to describe nonequilibrium phase behavior, whereas the PNC-FT method is strictly an equilibrium method. With the DMFT method we are able to show the evolution of nonequilibrium states toward their equilibrium state and give a qualitative assessment of the dynamics in these systems. These simulations are compared to experiments consisting of polystyrene-grafted gold nanorods in a poly(methyl methacrylate) matrix to ensure the model gives results that qualitatively agree with the experiments. This study reveals that nanoparticles in a relatively high matrix molecular weight are trapped in a nonequilibrium state and demonstrates the utility of the DMFT framework in capturing nonequilibrium phase behavior of PNCs.
In conclusion, both the PNC-FT and DMFT frameworks are promising methods to describe the thermodynamic and nonequilibrium phase behavior of PNCs.

  2. Some Basic Laws of Isotropic Turbulent Flow

    NASA Technical Reports Server (NTRS)

    Loitsianskii, L. G.

    1945-01-01

    An Investigation is made of the diffusion of artificially produced turbulence behind screens or other turbulence producers. The method is based on the author's concept of disturbance moment as a certain theoretically well-founded measure of turbulent disturbances.

  3. On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification

    NASA Astrophysics Data System (ADS)

    Aygün, Eser; Oommen, B. John; Cataltepe, Zehra

    Syntactic methods in pattern recognition have been used extensively in bioinformatics, in particular in the analysis of gene and protein expressions and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.
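    The edit-distance baseline that the OIT model replaces is the classic dynamic program over the three SID operations. A minimal sketch; the peptide fragments below are invented for illustration and are not from the paper's data.

    ```python
    def edit_distance(s, t):
        """Minimum number of substitutions, insertions and deletions (SID)
        needed to turn string s into string t (Levenshtein distance)."""
        prev = list(range(len(t) + 1))          # row for the empty prefix of s
        for i, cs in enumerate(s, 1):
            cur = [i]
            for j, ct in enumerate(t, 1):
                cur.append(min(prev[j] + 1,                 # deletion
                               cur[j - 1] + 1,              # insertion
                               prev[j - 1] + (cs != ct)))   # substitution / match
            prev = cur
        return prev[-1]

    # Distance between two short peptide fragments (single-letter amino acid codes):
    # one substitution (Y -> I) plus one insertion (Q) gives a distance of 2.
    d = edit_distance("MKTAYIAK", "MKTAIIAKQ")
    ```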

  4. Insight into the C-F bond mechanism of molecular analogs for antibacterial drug design.

    PubMed

    Liu, Junna; Lv, Biyu; Liu, Huaqing; Li, Xin; Yin, Weiping

    2018-06-01

    The activities of biological molecules usually rely on both intra-molecular and intermolecular interactions between their functional groups. These interactions include interionic attraction, van der Waals forces and the effect of geometry on the individual molecules, whether natural or synthetic. The purpose of this study was to evaluate the antibacterial activity of C-F bond compounds using a combination of experimental verification and theoretical calculation. We target insect natural products from the maggots of Chrysomyis megacephala Fabricius. Based on density functional theory (DFT) and the B3LYP method, a theoretical study of the C-F bond in fluoride was designed to explore the antibacterial structure-activity relationship of compounds 2 and 4. With progress in DFT, first-principles calculation based on DFT has gradually become a routine method for drug design, quantum chemistry and other scientific fields.

  5. Feature extraction of micro-motion frequency and the maximum wobble angle in a small range of missile warhead based on micro-Doppler effect

    NASA Astrophysics Data System (ADS)

    Li, M.; Jiang, Y. S.

    2014-11-01

    The micro-Doppler effect is induced by the micro-motion dynamics of the radar target itself or of any structure on the target. In this paper, a simplified cone-shaped model for a ballistic missile warhead with micro-nutation is established, and the theoretical formula of micro-nutation is derived. It is confirmed that the theoretical results are identical to simulation results obtained using the short-time Fourier transform. We then propose a new method for nutation period extraction via signature maximum energy fitting, based on empirical mode decomposition and the short-time Fourier transform. The maximum wobble angle is also extracted by a distance approximation approach valid for small wobble angles, combined with maximum likelihood estimation. Simulation studies show that these two feature extraction methods are both valid even at low signal-to-noise ratio.

  6. Linking theory with qualitative research through study of stroke caregiving families.

    PubMed

    Pierce, Linda L; Steiner, Victoria; Cervantez Thompson, Teresa L; Friedemann, Marie-Luise

    2014-01-01

    This theoretical article outlines the deliberate process of applying a qualitative data analysis method rooted in Friedemann's Framework of Systemic Organization through the study of a web-based education and support intervention for stroke caregiving families. Directed by Friedemann's framework, the analytic method involved developing, refining, and using a coding rubric to explore interactive patterns between caregivers and care recipients from this 3-month feasibility study using this education and support intervention. Specifically, data were gathered from the intervention's web-based discussion component between caregivers and the nurse specialist, as well as from telephone caregiver interviews. A theoretical framework guided the process of developing and refining this coding rubric for the purpose of organizing data; but, more importantly, guided the investigators' thought processes, allowing them to extract rich information from the data set, as well as synthesize this information to generate a broad understanding of the caring situation. © 2013 Association of Rehabilitation Nurses.

  7. Exponential stability of stochastic complex networks with multi-weights based on graph theory

    NASA Astrophysics Data System (ADS)

    Zhang, Chunmei; Chen, Tianrui

    2018-04-01

    In this paper, a novel approach to exponential stability of stochastic complex networks with multi-weights is investigated by means of the graph-theoretical method. New sufficient conditions are provided to ascertain the moment exponential stability and almost surely exponential stability of stochastic complex networks with multiple weights. It is noted that our stability results are closely related with multi-weights and the intensity of stochastic disturbance. Numerical simulations are also presented to substantiate the theoretical results.

  8. A scheme for parameterizing ice cloud water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1989-01-01

    A method for specifying ice water content in GCMs is developed, based on theory and in-cloud measurements. A theoretical development of the conceptual precipitation model is given, and the aircraft flights used to characterize the ice mass distribution in deep ice clouds are discussed. Ice water content values derived from the theoretical parameterization are compared with the measured values. The results demonstrate that a simple parameterization for atmospheric ice content can account for ice contents observed in several synoptic contexts.

  9. Theoretical gravity darkening as a function of optical depth. A first approach to fast rotating stars

    NASA Astrophysics Data System (ADS)

    Claret, A.

    2016-04-01

    Aims: Recent observations of very fast rotating stars show systematic deviations from the von Zeipel theorem and pose a challenge to the theory of gravity-darkening exponents (β1). In this paper, we present a new insight into the problem of temperature distribution over distorted stellar surfaces to try to reduce these discrepancies. Methods: We use a variant of the numerical method based on the triangles strategy, which we previously introduced, to evaluate the gravity-darkening exponents. The novelty of the present method is that the theoretical β1 is now computed as a function of the optical depth, that is, β1 ≡ β1(τ). The stellar evolutionary models, which are necessary to obtain the physical conditions of the stellar envelopes/atmospheres inherent to the numerical method, are computed via the code GRANADA. Results: When the resulting theoretical β1(τ) are compared with the best accurate data of very fast rotators, a good agreement for the six systems is simultaneously achieved. In addition, we derive an equation that relates the locus of constant convective efficiency in the Hertzsprung-Russell (HR) diagram with gravity-darkening exponents.

  10. [An anti-Taylor approach: the invention of a method for the cogovernance of health care institutions in order to produce freedom and compromise].

    PubMed

    Campos, G W

    1998-01-01

    This paper describes a new health care management method. A triangular confrontation system was constructed, based on a theoretical review, empirical facts observed from health services, and the researcher's knowledge, jointly analyzed. This new management model was termed 'health-team-focused collegiate management', entailing several original organizational concepts: production unity, matrix-based reference team, collegiate management system, cogovernance, and product/production interface.

  11. A biomechanical model for fibril recruitment: Evaluation in tendons and arteries.

    PubMed

    Bevan, Tim; Merabet, Nadege; Hornsby, Jack; Watton, Paul N; Thompson, Mark S

    2018-06-06

    Simulations of soft tissue mechanobiological behaviour are increasingly important for clinical prediction of aneurysm, tendinopathy and other disorders. Mechanical behaviour at low stretches is governed by fibril straightening, transitioning into load-bearing at the recruitment stretch and resulting in a tissue stiffening effect. Previous investigations have suggested theoretical relationships between stress-stretch measurements and the recruitment probability density function (PDF) but have neither derived these rigorously nor evaluated them experimentally. Other work has proposed image-based methods for measurement of recruitment but made use of arbitrary fibril critical straightness parameters. The aim of this work was to provide a sound theoretical basis for estimating the recruitment PDF from stress-stretch measurements and to evaluate this relationship using image-based methods, clearly motivating the choice of fibril critical straightness parameter in rat tail tendon and porcine artery. Rigorous derivation showed that the recruitment PDF may be estimated from the second stretch derivative of the first Piola-Kirchhoff tissue stress. Image-based fibril recruitment identified the fibril straightness parameter that maximised Pearson correlation coefficients (PCC) with estimated PDFs. Using these critical straightness parameters, the new method for estimating the recruitment PDF showed a PCC with image-based measures of 0.915 and 0.933 for tendons and arteries, respectively. This method may be used for accurate estimation of the fibril recruitment PDF in mechanobiological simulation, where fibril-level mechanical parameters are important for predicting cell behaviour. Copyright © 2018 Elsevier Ltd. All rights reserved.
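    The derived relationship, that the recruitment PDF is proportional to the second stretch derivative of the tissue stress, can be checked on a toy constitutive model. The sketch below assumes linear-elastic fibrils and a uniform recruitment density on an arbitrary stretch window [1.0, 1.2]; it illustrates the estimator, not the authors' implementation.

    ```python
    def stress(lam, k=1.0, lam_min=1.0, lam_max=1.2):
        """First Piola-Kirchhoff stress for linear-elastic fibrils with a uniform
        recruitment-stretch density rho = 1/(lam_max - lam_min). Each fibril
        recruited at lam_r contributes k*(lam - lam_r); integrate analytically."""
        rho = 1.0 / (lam_max - lam_min)
        if lam <= lam_min:
            return 0.0
        lam_r = min(lam, lam_max)   # last recruited fibril
        return k * rho * ((lam - lam_min) ** 2 - (lam - lam_r) ** 2) / 2.0

    def recruitment_pdf(lam, h=1e-4, k=1.0):
        """Estimate rho(lam) = (1/k) * d^2 sigma / d lambda^2 (central difference)."""
        return (stress(lam + h) - 2.0 * stress(lam) + stress(lam - h)) / (k * h * h)

    rho_mid = recruitment_pdf(1.1)   # inside the recruitment window -> ~5.0
    rho_out = recruitment_pdf(1.4)   # all fibrils recruited -> ~0.0
    ```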

  12. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    NASA Technical Reports Server (NTRS)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  13. [Learning strategies of autonomous medical students].

    PubMed

    Márquez U, Carolina; Fasce H, Eduardo; Ortega B, Javiera; Bustamante D, Carolina; Pérez V, Cristhian; Ibáñez G, Pilar; Ortiz M, Liliana; Espinoza P, Camila; Bastías V, Nancy

    2015-12-01

    Understanding how autonomous students are capable of regulating their own learning process is essential for developing self-directed teaching methods. The aim of this study was to understand how self-directed medical students approach learning at the medical school of the University of Concepción, Chile. A qualitative, descriptive study was performed according to Grounded Theory guidelines, following Strauss & Corbin. Twenty medical students were selected by the maximum variation sampling method. Data were collected through semi-structured thematic interviews; students were interviewed by researchers after an informed consent procedure. Data were analyzed by the open coding method using Atlas-ti 7.5.2 software. Self-directed learners were characterized by being good planners and managing their time correctly. Students performed a diligent selection of contents to study based on reliable literature sources, theoretical relevance and type of evaluation. They also emphasized the discussion of clinical cases, where theoretical contents can be applied. This modality allows them to gain a global view of theoretical contents, to verbalize knowledge and to obtain learning feedback. The learning process of autonomous students is intentional and planned.

  14. Velocity distribution of electrons in time-varying low-temperature plasmas: progress in theoretical procedures over the past 70 years

    NASA Astrophysics Data System (ADS)

    Makabe, Toshiaki

    2018-03-01

    Time-varying low-temperature plasmas sustained by electrical power at various frequencies have played a key role in the historical development of new technologies, such as gas lasers, ozonizers, micro display panels, dry processing of materials, and medical care, since World War II. Electrons in a time-modulated low-temperature plasma have a characteristic velocity spectrum, i.e. a velocity distribution dependent on the microscopic quantum characteristics of the feed gas molecule and on the external field strength and frequency. To solve and evaluate the time-varying velocity distribution, there are mainly two types of theoretical methods based on the classical, linear Boltzmann equation: the expansion method using orthogonal functions, and the procedure of non-expansional temporal evolution. Both methods have developed discontinuously and progressively in synchronization with those technological developments. In this review, we explore the historical development of theoretical procedures for evaluating the electron velocity distribution in a time-varying low-temperature plasma over the past 70 years.

  15. Novel optical gyroscope: proof of principle demonstration and future scope

    PubMed Central

    Srivastava, Shailesh; Rao D. S., Shreesha; Nandakumar, Hari

    2016-01-01

    We report the first proof-of-principle demonstration of the resonant optical gyroscope with reflector that we have recently proposed. The device is very different from traditional optical gyroscopes since it uses the inherent coupling between the clockwise and counterclockwise propagating waves to sense the rotation. Our demonstration confirms our theoretical analysis and simulations. We also demonstrate a novel method of biasing the gyroscope using orthogonal polarization states. The simplicity of the structure and the readout method, the theoretically predicted high sensitivities (better than 0.001 deg/hr), and the possibility of further performance enhancement using a related laser based active device, all have immense potential for attracting fresh research and technological initiatives. PMID:27694987

  16. Contact spectroscopy of high-temperature superconductors (Review). I - Physical and methodological principles of the contact spectroscopy of high-temperature superconductors. Experimental results for La(2-x)Sr(x)CuO4 and their discussion

    NASA Astrophysics Data System (ADS)

    Ianson, I. K.

    1991-03-01

    Research in the field of high-temperature superconductors based on methods of tunneling and microcontact spectroscopy is reviewed in a systematic manner. The theoretical principles of the methods are presented, and various types of contacts are described and classified. Attention is given to deviations of the measured volt-ampere characteristics from those predicted by simple theoretical models and from those observed for conventional superconductors. Results of measurements of the energy gap and the fine structure of volt-ampere characteristic derivatives are presented for La(2-x)Sr(x)CuO4.

  17. Validating Alternative Modes of Scoring for Coloured Progressive Matrices.

    ERIC Educational Resources Information Center

    Razel, Micha; Eylon, Bat-Sheva

    Conventional scoring of the Coloured Progressive Matrices (CPM) was compared with three methods of multiple weight scoring. The methods include: (1) theoretical weighting in which the weights were based on a theory of cognitive processing; (2) judged weighting in which the weights were given by a group of nine adult expert judges; and (3)…

  18. Type Theory, Computation and Interactive Theorem Proving

    DTIC Science & Technology

    2015-09-01

    postdoc Cody Roux, to develop new methods of verifying real-valued inequalities automatically. They developed a prototype implementation in Python [8] (an...he has developed new heuristic, geometric methods of verifying real-valued inequalities. A Python-based implementation has performed surprisingly...express complex mathematical and computational assertions. In this project, Avigad and Harper developed type-theoretic algorithms and formalisms that

  19. Alignment of Standards and Assessment: A Theoretical and Empirical Study of Methods for Alignment

    ERIC Educational Resources Information Center

    Nasstrom, Gunilla; Henriksson, Widar

    2008-01-01

    Introduction: In a standards-based school-system alignment of policy documents with standards and assessment is important. To be able to evaluate whether schools and students have reached the standards, the assessment should focus on the standards. Different models and methods can be used for measuring alignment, i.e. the correspondence between…

  20. Mindful Leaders in Highly Effective Schools: A Mixed-Method Application of Hoy's M-Scale

    ERIC Educational Resources Information Center

    Kearney, W. Sean; Kelsey, Cheryl; Herrington, David

    2013-01-01

    This article presents a mixed-method study utilizing teacher ratings of principal mindfulness from 149 public schools in Texas and follow-up qualitative data analysis through semi-structured interviews conducted with the top 10 percent of principals identified as mindful. This research is based on the theoretical framework of mindfulness as…

  1. An evidential link prediction method and link predictability based on Shannon entropy

    NASA Astrophysics Data System (ADS)

    Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong

    2017-09-01

    Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new link prediction method based on similarity and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is crucial to consider link predictability as an intrinsic property of the networks themselves. Hence, this paper proposes a new link prediction approach called the evidential measure (EM), based on Dempster-Shafer theory. Moreover, this paper proposes a new method to measure link predictability via local information and Shannon entropy.
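    The two ingredients, local similarity scores for candidate links and Shannon entropy over those scores, can be illustrated on a toy network. The common-neighbors score stands in for the nine measures compared in the paper, and normalizing scores into a distribution before taking the entropy is one simple reading of the predictability idea, not necessarily the authors' exact construction.

    ```python
    import math

    def common_neighbors(adj, u, v):
        """Local similarity: number of shared neighbors of u and v."""
        return len(adj[u] & adj[v])

    def shannon_entropy(probs):
        """H = -sum p*log2(p); higher entropy ~ lower link predictability."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Toy undirected network as adjacency sets
    adj = {
        "a": {"b", "c"},
        "b": {"a", "c", "d"},
        "c": {"a", "b", "e"},
        "d": {"b", "e"},
        "e": {"c", "d"},
    }

    # Score every currently-absent link by its similarity
    nodes = sorted(adj)
    candidates = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]
                  if v not in adj[u]]
    scores = {pair: common_neighbors(adj, *pair) for pair in candidates}

    # Normalize scores into a distribution and measure its Shannon entropy
    total = sum(scores.values())
    H = shannon_entropy([s / total for s in scores.values()])
    ```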

  2. Strengthening Theoretical Testing in Criminology Using Agent-based Modeling.

    PubMed

    Johnson, Shane D; Groff, Elizabeth R

    2014-07-01

    The Journal of Research in Crime and Delinquency (JRCD) has published important contributions to both criminological theory and associated empirical tests. In this article, we consider some of the challenges associated with traditional approaches to social science research and discuss a complementary approach that is gaining popularity, agent-based computational modeling, which may offer new opportunities to strengthen theories of crime and develop insights into phenomena of interest. Two literature reviews are completed. The aim of the first is to identify those articles published in JRCD that have been the most influential and to classify the theoretical perspectives taken. The second is intended to identify those studies that have used an agent-based model (ABM) to examine criminological theories and to identify which theories have been explored. Ecological theories of crime pattern formation have received the most attention from researchers using ABMs, but many other criminological theories are amenable to testing using such methods. Traditional methods of theory development and testing suffer from a number of potential issues that a more systematic use of ABMs (not without its own issues) may help to overcome. ABMs should become another method in the criminologist's toolbox to aid theory testing and falsification.

  3. Positron-alkali atom scattering

    NASA Technical Reports Server (NTRS)

    Mceachran, R. P.; Horbatsch, M.; Stauffer, A. D.; Ward, S. J.

    1990-01-01

    Positron-alkali atom scattering was recently investigated both theoretically and experimentally in the energy range from a few eV up to 100 eV. On the theoretical side calculations of the integrated elastic and excitation cross sections as well as total cross sections for Li, Na and K were based upon either the close-coupling method or the modified Glauber approximation. These theoretical results are in good agreement with experimental measurements of the total cross section for both Na and K. Resonance structures were also found in the L = 0, 1 and 2 partial waves for positron scattering from the alkalis. The structure of these resonances appears to be quite complex and, as expected, they occur in conjunction with the atomic excitation thresholds. Currently both theoretical and experimental work is in progress on positron-Rb scattering in the same energy range.

  4. Cosmological parameter estimation using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Prasad, J.; Souradeep, T.

    2014-03-01

    Constraining parameters of a theoretical model from observational data is an important exercise in cosmology. There are many theoretically motivated models which demand a greater number of cosmological parameters than the standard model of cosmology uses, and these make the problem of parameter estimation challenging. It is common practice to employ Bayesian formalism for parameter estimation, for which, in general, the likelihood surface is probed. For the standard cosmological model with six parameters, the likelihood surface is quite smooth and does not have local maxima, and sampling-based methods like the Markov Chain Monte Carlo (MCMC) method are quite successful. However, when there are a large number of parameters or the likelihood surface is not smooth, other methods may be more effective. In this paper, we demonstrate the application of another method, inspired by artificial intelligence and called Particle Swarm Optimization (PSO), for estimating cosmological parameters from Cosmic Microwave Background (CMB) data taken from the WMAP satellite.
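
    The PSO update described above can be sketched in a few lines. This is a minimal, generic textbook implementation, not the authors' code: the objective below is a toy quadratic standing in for a CMB likelihood surface, and the swarm settings (inertia w, acceleration constants c1 and c2, particle count) are conventional illustrative choices.

```python
import random

def pso(objective, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    """Minimize `objective` over a box via basic Particle Swarm Optimization."""
    dim = len(bounds)
    # Initialize particle positions uniformly inside the bounds, velocities at zero.
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # swarm-wide best
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Velocity: inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Move, clamping to the search box.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy "likelihood surface": a quadratic with its minimum at (0.3, 0.7),
# standing in for a chi-squared misfit between model and data.
random.seed(0)
best, best_val = pso(lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2,
                     bounds=[(0.0, 1.0), (0.0, 1.0)])
```

    Unlike MCMC, this returns only the best-fit point, not samples of the posterior, which is the trade-off the abstract alludes to.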

  5. A fast determination method for transverse relaxation of spin-exchange-relaxation-free magnetometer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Jixi, E-mail: lujixi@buaa.edu.cn; Qian, Zheng; Fang, Jiancheng

    2015-04-15

    We propose a fast and accurate determination method for the transverse relaxation of the spin-exchange-relaxation-free (SERF) magnetometer. This method is based on the measurement of magnetic resonance linewidth via a chirped magnetic field excitation and amplitude spectrum analysis. Compared with frequency sweeping via separate sinusoidal excitations, our method can determine the linewidth within only a few seconds while still obtaining good frequency resolution. Therefore, it can avoid drift error in long-term measurements and improve the accuracy of the determination. As the magnetic resonance frequency of the SERF magnetometer is very low, we include the effect of the negative resonance frequency caused by the chirp and achieve a coefficient of determination of the fit to the theoretical equation better than 0.998 with 95% confidence bounds. The experimental results are in good agreement with our theoretical analysis.

  6. Dakota, a multilevel parallel object-oriented framework for design optimization, parameter estimation, uncertainty quantification, and sensitivity analysis version 6.0 theory manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S

    The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the Dakota software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of Dakota-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of Dakota's iterative analysis capabilities.

  7. Assessing Repetitive Negative Thinking Using Categorical and Transdiagnostic Approaches: A Comparison and Validation of Three Polish Language Adaptations of Self-Report Questionnaires.

    PubMed

    Kornacka, Monika; Buczny, Jacek; Layton, Rebekah L

    2016-01-01

    Repetitive negative thinking (RNT) is a transdiagnostic process involved in the risk, maintenance, and relapse of serious conditions including mood disorders, anxiety, eating disorders, and addictions. Processing mode theory provides a theoretical model to assess, research, and treat RNT using a transdiagnostic approach. Clinical researchers also often employ categorical approaches to RNT, including a focus on depressive rumination or worry, for similar purposes. Three widely used self-report questionnaires have been developed to assess these related constructs: the Ruminative Response Scale (RRS), the Perseverative Thinking Questionnaire (PTQ), and the Mini-Cambridge Exeter Repetitive Thought Scale (Mini-CERTS). Yet these scales have not previously been used in conjunction, despite useful theoretical distinctions only available in Mini-CERTS. The present validation of the methods in a Polish-speaking population provides psychometric parameter estimates that contribute to current efforts to increase reliable replication of theoretical outcomes. Moreover, the following study aims to present particular characteristics and a comparison of the three methods. Although there has been some exploration of a categorical approach, a comparison of transdiagnostic methods is still lacking. These methods are particularly relevant for developing and evaluating theoretically based interventions like concreteness training, an emerging field of increasing interest, which can be used to address the maladaptive processing mode in RNT that can lead to depression and other disorders. Furthermore, the translation of these measures enables the examination of possible cross-cultural structural differences that may lead to important theoretical progress in the measurement and classification of RNT. The results support the theoretical hypothesis. As expected, the dimensions of brooding, general repetitive negative thinking, as well as abstract analytical thinking, can all be classified as unconstructive repetitive thinking. The particular characteristics of each scale and potential practical applications in clinical and research settings are discussed.

  8. Connectionist Interaction Information Retrieval.

    ERIC Educational Resources Information Center

    Dominich, Sandor

    2003-01-01

    Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and activation spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results as regards computational complexity and includes…

  9. The influence of anharmonic and solvent effects on the theoretical vibrational spectra of the guanine-cytosine base pairs in Watson-Crick and Hoogsteen configurations.

    PubMed

    Bende, Attila; Muntean, Cristina M

    2014-03-01

    The theoretical IR and Raman spectra of the guanine-cytosine DNA base pairs in Watson-Crick and Hoogsteen configurations were computed using the DFT method with the M06-2X meta-hybrid GGA exchange-correlation functional, including anharmonic corrections and solvent effects. The results for harmonic frequencies and their anharmonic corrections were compared with our previously calculated values obtained with the B3PW91 hybrid GGA functional. Significant differences were obtained for the anharmonic corrections calculated with the two different DFT functionals, especially for the stretching modes, while the corresponding harmonic frequencies did not differ considerably. For the Hoogsteen case, the H⁺ vibration between the G-C base pair can be characterized as an asymmetric Duffing oscillator, and therefore unrealistic anharmonic corrections were obtained for normal modes in which this proton vibration is involved. The spectral modifications due to the anharmonic corrections, solvent effects and the influence of the sugar-phosphate group for the Watson-Crick and Hoogsteen base pair configurations, respectively, were also discussed. For the Watson-Crick case, the influence of the stacking interaction on the theoretical IR and Raman spectra was also analyzed. Including the anharmonic correction in our normal mode analysis is essential if one wants to obtain correct assignments of the theoretical frequency values as compared with the experimental spectra.

  10. Information-theoretic indices usage for the prediction and calculation of octanol-water partition coefficient.

    PubMed

    Persona, Marek; Kutarov, Vladimir V; Kats, Boris M; Persona, Andrzej; Marczewska, Barbara

    2007-01-01

    The paper describes a new method for predicting the octanol-water partition coefficient, based on molecular graph theory. The results obtained using the new method correlate well with experimental values. These results were compared with the ones obtained by ten other structure-correlation methods. The comparison shows that graph theory can be very useful in structure-correlation research.

  11. Video- or text-based e-learning when teaching clinical procedures? A randomized controlled trial

    PubMed Central

    Buch, Steen Vigh; Treschow, Frederik Philip; Svendsen, Jesper Brink; Worm, Bjarne Skjødt

    2014-01-01

    Background and aims: This study investigated the effectiveness of two different levels of e-learning when teaching clinical skills to medical students. Materials and methods: Sixty medical students were included and randomized into two comparable groups. The groups were given either a video- or text/picture-based e-learning module and subsequently underwent both theoretical and practical examination. A follow-up test was performed 1 month later. Results: The students in the video group performed better than the illustrated text-based group in the practical examination, both in the primary test (P<0.001) and in the follow-up test (P<0.01). Regarding theoretical knowledge, no differences were found between the groups on the primary test, though the video group performed better on the follow-up test (P=0.04). Conclusion: Video-based e-learning is superior to illustrated text-based e-learning when teaching certain practical clinical skills. PMID:25152638

  12. Comparison of Pre-Service Physics Teachers' Conceptual Understanding of Dynamics in Model-Based Scientific Inquiry and Scientific Inquiry Environments

    ERIC Educational Resources Information Center

    Arslan Buyruk, Arzu; Ogan Bekiroglu, Feral

    2018-01-01

    The focus of this study was to evaluate the impact of model-based inquiry on pre-service physics teachers' conceptual understanding of dynamics. Theoretical framework of this research was based on models-of-data theory. True-experimental design using quantitative and qualitative research methods was carried out for this research. Participants of…

  13. A framework for comparing different image segmentation methods and its use in studying equivalences between level set and fuzzy connectedness frameworks

    PubMed Central

    Ciesielski, Krzysztof Chris; Udupa, Jayaram K.

    2011-01-01

    In the current vast image segmentation literature, there seems to be considerable redundancy among algorithms, while there is a serious lack of methods that would allow their theoretical comparison to establish their similarity, equivalence, or distinctness. In this paper, we make an attempt to fill this gap. To accomplish this goal, we argue that: (1) every digital segmentation algorithm A should have a well-defined continuous counterpart MA, referred to as its model, which constitutes an asymptotic of A when image resolution goes to infinity; (2) the equality of two such models MA and MA′ establishes a theoretical (asymptotic) equivalence of their digital counterparts A and A′. Such a comparison is of full theoretical value only when, for each involved algorithm A, its model MA is proved to be an asymptotic of A. So far, such proofs do not appear anywhere in the literature, even in the case of algorithms introduced as digitizations of continuous models, like level set segmentation algorithms. The main goal of this article is to explore a line of investigation for formally pairing digital segmentation algorithms with their asymptotic models, justifying such relations with mathematical proofs, and using the results to compare the segmentation algorithms in this general theoretical framework. As a first step towards this general goal, we prove here that the gradient-based thresholding model M∇ is the asymptotic for the fuzzy connectedness segmentation algorithm of Udupa and Samarasekera used with the gradient-based affinity A∇. We also argue that, in a sense, M∇ is the asymptotic for the original front propagation level set algorithm of Malladi, Sethian, and Vemuri, thus establishing a theoretical equivalence between these two specific algorithms. Experimental evidence of this last equivalence is also provided. PMID:21442014

  14. Net alkalinity and net acidity 1: Theoretical considerations

    USGS Publications Warehouse

    Kirby, C.S.; Cravotta, C.A.

    2005-01-01

    Net acidity and net alkalinity are widely used, poorly defined, and commonly misunderstood parameters for the characterization of mine drainage. The authors explain theoretical expressions of 3 types of alkalinity (caustic, phenolphthalein, and total) and acidity (mineral, CO2, and total). Except for rarely-invoked negative alkalinity, theoretically defined total alkalinity is closely analogous to measured alkalinity and presents few practical interpretation problems. Theoretically defined "CO2-acidity" is closely related to most standard titration methods with an endpoint pH of 8.3 used for determining acidity in mine drainage, but it is unfortunately named because CO2 is intentionally driven off during titration of mine-drainage samples. Using the proton condition/mass-action approach and employing graphs to illustrate speciation with changes in pH, the authors explore the concept of principal components and how to assign acidity contributions to aqueous species commonly present in mine drainage. Acidity is defined in mine drainage based on aqueous speciation at the sample pH and on the capacity of these species to undergo hydrolysis to pH 8.3. Application of this definition shows that the computed acidity in mg L^-1 as CaCO3 (based on pH and analytical concentrations of dissolved Fe(II), Fe(III), Mn, and Al in mg L^-1):

        acidity_calculated = 50{1000(10^-pH) + [2 Fe(II) + 3 Fe(III)]/56 + 2 Mn/55 + 3 Al/27}

    underestimates contributions from HSO4- and H+, but overestimates the acidity due to Fe3+ and Al3+. However, these errors tend to approximately cancel each other. It is demonstrated that "net alkalinity" is a valid mathematical construction based on theoretical definitions of alkalinity and acidity. Further, it is shown that, for most mine-drainage solutions, a useful net alkalinity value can be derived from: (1) alkalinity and acidity values based on aqueous speciation, (2) measured alkalinity minus calculated acidity, or (3) taking the negative of the value obtained in a standard method "hot peroxide" acidity titration, provided that labs report negative values. The authors recommend the third approach; i.e., net alkalinity = -Hot Acidity. © 2005 Elsevier Ltd. All rights reserved.
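
    The speciation-based acidity formula quoted in the abstract is straightforward to transcribe as code. This is an illustrative transcription only, assuming all concentrations are analytical totals in mg/L and keeping the atomic-mass divisors (56 for Fe, 55 for Mn, 27 for Al) from the quoted expression:

```python
def calculated_acidity(pH, fe2, fe3, mn, al):
    """Calculated acidity in mg/L as CaCO3, from pH and dissolved
    Fe(II), Fe(III), Mn, and Al concentrations (all in mg/L),
    following the formula quoted in the abstract."""
    return 50.0 * (1000.0 * 10.0 ** (-pH)       # free-proton term
                   + (2.0 * fe2 + 3.0 * fe3) / 56.0  # iron hydrolysis
                   + 2.0 * mn / 55.0                 # manganese
                   + 3.0 * al / 27.0)                # aluminum

# Hypothetical sample: pH 2 drainage with 56 mg/L Fe(II) and no other metals.
example = calculated_acidity(2.0, 56.0, 0.0, 0.0, 0.0)  # 50*(10 + 2) = 600
```

    Net alkalinity would then follow as measured alkalinity minus this calculated acidity, which is option (2) in the abstract.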

  15. Theoretical distribution of gutta-percha within root canals filled using cold lateral compaction based on numeric calculus.

    PubMed

    Min, Yi; Song, Ying; Gao, Yuan; Dummer, Paul M H

    2016-08-01

    This study aimed to present a new method based on numeric calculus to provide data on the theoretical volume ratio of voids when using the cold lateral compaction technique in canals with various diameters and tapers. Twenty-one simulated mathematical root canal models were created with different tapers and sizes of apical diameter, and were filled with defined sizes of standardized accessory gutta-percha cones. The areas of each master and accessory gutta-percha cone as well as the depth of their insertion into the canals were determined mathematically in Microsoft Excel. When the first accessory gutta-percha cone had been positioned, the residual area of void was measured. The areas of the residual voids were then measured repeatedly upon insertion of additional accessory cones until no more could be inserted in the canal. The volume ratio of voids was calculated through measurement of the volume of the root canal and mass of gutta-percha cones. The theoretical volume ratio of voids was influenced by the taper of the canal, the size of apical preparation and the size of accessory gutta-percha cones. Greater apical preparation size and larger taper together with the use of smaller accessory cones reduced the volume ratio of voids in the apical third. The mathematical model provided a precise method to determine the theoretical volume ratio of voids in root-filled canals when using cold lateral compaction.

  16. Case study and case-based research in emergency nursing and care: Theoretical foundations and practical application in paramedic pre-hospital clinical judgment and decision-making of patients with mental illness.

    PubMed

    Shaban, Ramon Z; Considine, Julie; Fry, Margaret; Curtis, Kate

    2017-02-01

    Generating knowledge through quality research is fundamental to the advancement of professional practice in emergency nursing and care. There are multiple paradigms, designs and methods available to researchers to respond to challenges in clinical practice. Systematic reviews, randomised control trials and other forms of experimental research are deemed the gold standard of evidence, but there are comparatively few such trials in emergency care. In some instances it is not possible or appropriate to undertake experimental research. When exploring new or emerging problems where there is limited evidence available, non-experimental methods are required and appropriate. This paper provides the theoretical foundations and an exemplar of the use of case study and case-based research to explore a new and emerging problem in the context of emergency care. It examines paramedics' pre-hospital clinical judgment and decision-making regarding patients with mental illness. Using an exemplar, the paper explores the theoretical foundations and conceptual frameworks of case study, it explains how cases are defined and the role of the researcher in this form of inquiry, it details important principles and the procedures for data gathering and analysis, and it demonstrates techniques to enhance trustworthiness and credibility of the research. Moreover, it provides theoretical and practical insights into using case study in emergency care. Copyright © 2017 College of Emergency Nursing Australasia. Published by Elsevier Ltd. All rights reserved.

  17. Self-homodyne free-space optical communication system based on orthogonally polarized binary phase shift keying.

    PubMed

    Cai, Guangyu; Sun, Jianfeng; Li, Guangyuan; Zhang, Guo; Xu, Mengmeng; Zhang, Bo; Yue, Chaolei; Liu, Liren

    2016-06-10

    A self-homodyne laser communication system based on orthogonally polarized binary phase shift keying is demonstrated. The working principles of this method and the structure of a transceiver are described using theoretical calculations. Moreover, the signal-to-noise ratio, sensitivity, and bit error rate are analyzed for the amplifier-noise-limited case. The reported experiment validates the feasibility of the proposed method and demonstrates its advantageous sensitivity as a self-homodyne communication system.

  18. Strain gauge using Si-based optical microring resonator.

    PubMed

    Lei, Longhai; Tang, Jun; Zhang, Tianen; Guo, Hao; Li, Yanna; Xie, Chengfeng; Shang, Chenglong; Bi, Yu; Zhang, Wendong; Xue, Chenyang; Liu, Jun

    2014-12-20

    This paper presents a strain gauge using the mechanical-optical coupling method. A Si-based optical microring resonator was employed as the sensing element, embedded on microcantilevers. The experimental results show that applying external strain triggers a clear redshift of the output resonant spectrum of the structure. A sensitivity of 93.72 pm/MPa was achieved, which was also verified using theoretical simulations. This paper provides what we believe is a new method to develop micro-opto-electromechanical system (MOEMS) sensors.

  19. Development of a Theoretically Based Treatment for Sentence Comprehension Deficits in Individuals with Aphasia

    ERIC Educational Resources Information Center

    Kiran, Swathi; Caplan, David; Sandberg, Chaleece; Levy, Joshua; Berardino, Alex; Ascenso, Elsa; Villard, Sarah; Tripodis, Yorghos

    2012-01-01

    Purpose: Two new treatments, 1 based on sentence to picture matching (SPM) and the other on object manipulation (OM), that train participants on the thematic roles of sentences using pictures or by manipulating objects were piloted. Method: Using a single-subject multiple-baseline design, sentence comprehension was trained on the affected sentence…

  20. Psychosocial constructs were not mediators of intervention effects for dietary and physical activity outcomes in a church-based lifestyle intervention: Delta Body and Soul III

    USDA-ARS?s Scientific Manuscript database

    Background: While using theory-based methods when designing and implementing behavioral health interventions is essential, it also has become increasingly important to evaluate an intervention’s theoretical basis. Such evaluations can be accomplished through the use of mediation analysis which can ...

  1. A Computer-Assisted Learning Model Based on the Digital Game Exponential Reward System

    ERIC Educational Resources Information Center

    Moon, Man-Ki; Jahng, Surng-Gahb; Kim, Tae-Yong

    2011-01-01

    The aim of this research was to construct a motivational model which would stimulate voluntary and proactive learning using digital game methods offering players more freedom and control. The theoretical framework of this research lays the foundation for a pedagogical learning model based on digital games. We analyzed the game reward system, which…

  2. Empirical Bayes methods for smoothing data and for simultaneous estimation of many parameters.

    PubMed Central

    Yanagimoto, T; Kashiwagi, N

    1990-01-01

    A recent successful development is a series of innovative statistical methods for smoothing data based on the empirical Bayes method. This paper emphasizes their practical usefulness in the medical sciences and their close theoretical relationship with the problem of simultaneous estimation of parameters depending on strata. The paper also presents two examples of analyzing epidemiological data obtained in Japan using the smoothing methods to illustrate their favorable performance. PMID:2148512
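
    As a flavor of the kind of smoothing the abstract describes, here is a minimal empirical-Bayes shrinkage sketch for per-stratum event rates. The method-of-moments estimator below is a generic textbook construction, not the authors' estimator, and the example counts and exposures are invented:

```python
def eb_shrink(counts, exposures):
    """Empirical Bayes smoothing of per-stratum rates: crude rates are
    shrunk toward the overall rate, with low-exposure strata (whose crude
    rates are noisiest) shrunk the most. Method-of-moments sketch."""
    overall = sum(counts) / sum(exposures)
    crude = [c / e for c, e in zip(counts, exposures)]
    # Method-of-moments estimate of the between-stratum variance:
    # observed variance of crude rates minus the average sampling variance.
    mean_e = sum(exposures) / len(exposures)
    var_crude = sum((r - overall) ** 2 for r in crude) / len(crude)
    between = max(var_crude - overall / mean_e, 1e-12)
    smoothed = []
    for c, e in zip(counts, exposures):
        w = between / (between + overall / e)   # weight on the crude rate
        smoothed.append(w * (c / e) + (1 - w) * overall)
    return smoothed

# Two hypothetical strata: a small one (2 events / 10 person-years) and a
# large one (50 events / 1000 person-years).
smoothed = eb_shrink([2, 50], [10, 1000])
```

    The small stratum's noisy crude rate (0.2) is pulled strongly toward the overall rate, while the large stratum's rate barely moves, which is exactly the "borrowing strength across strata" behavior the abstract refers to.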

  3. Spatiotemporal coding in the cortex: information flow-based learning in spiking neural networks.

    PubMed

    Deco, G; Schürmann, B

    1999-05-15

    We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that accounts for the experimentally observed fact that cortical neurons display synchronous firing for some stimuli and not for others. The principle postulates a nonparametric reconstruction method as the optimization criterion for learning the functional connectivity required to justify and explain synchronous firing for binding of features as a mechanism for spatiotemporal coding. This can be expressed in an information-theoretic way by maximizing the discrimination ability between different sensory inputs in minimal time.

  4. A game-theoretical pricing mechanism for multiuser rate allocation for video over WiMAX

    NASA Astrophysics Data System (ADS)

    Chen, Chao-An; Lo, Chi-Wen; Lin, Chia-Wen; Chen, Yung-Chang

    2010-07-01

    In multiuser rate allocation in a wireless network, strategic users can bias the rate allocation by misrepresenting their bandwidth demands to a base station, leading to an unfair allocation. Game-theoretical approaches have been proposed to address the unfair allocation problems caused by strategic users. However, existing approaches rely on a time-consuming iterative negotiation process. Besides, they cannot completely prevent unfair allocations caused by inconsistent strategic behaviors. To address these problems, we propose a Search Based Pricing Mechanism to reduce the communication time and to capture a user's strategic behavior. Our simulation results show that the proposed method significantly reduces the communication time and converges stably to an optimal allocation.

  5. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  6. On the road to metallic nanoparticles by rational design: bridging the gap between atomic-level theoretical modeling and reality by total scattering experiments.

    PubMed

    Prasai, Binay; Wilson, A R; Wiley, B J; Ren, Y; Petkov, Valeri

    2015-11-14

    The extent to which current theoretical modeling alone can reveal real-world metallic nanoparticles (NPs) at the atomic level is scrutinized and demonstrated to be insufficient, and it is shown how this can be improved by using a pragmatic approach involving straightforward experiments. In particular, silica-supported Au(100-x)Pd(x) (x = 30, 46 and 58) NPs 4 to 6 nm in size, explored for catalytic applications, are characterized structurally by total scattering experiments including high-energy synchrotron X-ray diffraction (XRD) coupled to atomic pair distribution function (PDF) analysis. Atomic-level models for the NPs are built by molecular dynamics simulations based on the Sutton-Chen (SC) method, an archetype of current theoretical modeling. Models are matched against independent experimental data and are demonstrated to be inaccurate unless their theoretical foundation, i.e. the SC method, is supplemented with basic yet crucial information on the length and strength of metal-to-metal bonds and, when necessary, structural disorder in the actual NPs studied. An atomic PDF-based approach for accessing such information and implementing it in theoretical modeling is put forward. For completeness, the approach is concisely demonstrated on 15 nm water-dispersed Au particles explored for bio-medical applications and 16 nm hexane-dispersed Fe48Pd52 particles explored for magnetic applications as well. It is argued that when "tuned up" against experiments relevant to metals and alloys confined to nanoscale dimensions, such as total scattering coupled to atomic PDF analysis, rather than by mere intuition and/or against data for the respective solids, atomic-level theoretical modeling can provide a sound understanding of the synthesis-structure-property relationships in real-world metallic NPs. Ultimately this can help advance nanoscience and technology a step closer to producing metallic NPs by rational design.

  7. Superresolution confocal technology for displacement measurements based on total internal reflection.

    PubMed

    Kuang, Cuifang; Ali, M Yakut; Hao, Xiang; Wang, Tingting; Liu, Xu

    2010-10-01

    In order to achieve higher axial resolution for displacement measurement, a novel method is proposed based on a total internal reflection filter and the confocal microscope principle. A theoretical analysis of the basic measurement principles is presented. The analysis reveals that the proposed confocal detection scheme is effective in greatly enhancing the resolution through the nonlinearity of the reflectance curve. In addition, a simple prototype system has been developed based on the theoretical analysis, and a series of experiments have been performed under laboratory conditions to verify the system's feasibility, accuracy, and stability. The experimental results demonstrate that the axial resolution in displacement measurements is better than 1 nm over a range of 200 nm, which is threefold better than what can be achieved using a plane reflector.

  8. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method.

    PubMed

    Gao, Mingzhong; Yu, Bin; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects; the failure mechanism of rectangular cavern wall rock differs significantly as a result of the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method always results in a long-winded computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficients and to determine the rectangular cavern wall rock deformation. With regard to the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and numerical solution exhibit identical trends, thereby demonstrating the method's validity. This method greatly improves the computing accuracy and reduces the difficulty in solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure.

  9. Crystal structure prediction supported by incomplete experimental data

    NASA Astrophysics Data System (ADS)

    Tsujimoto, Naoto; Adachi, Daiki; Akashi, Ryosuke; Todo, Synge; Tsuneyuki, Shinji

    2018-05-01

    We propose an efficient theoretical scheme for structure prediction based on the idea of combining methods that optimize agreement with theoretical calculations and experimental data simultaneously. In this scheme, we formulate a cost function as a weighted sum of interatomic potential energies and a penalty function defined with partial experimental data totally insufficient for conventional structure analysis. In particular, we define the cost function using "crystallinity", formulated with only the peak positions within a small range of the X-ray diffraction pattern. We apply this method to well-known polymorphs of SiO2 and C with up to 108 atoms in the simulation cell and show that it reproduces the correct structures efficiently from very limited diffraction-peak information. This scheme opens a new avenue for determining and predicting structures that are difficult to determine by conventional methods.
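
    The idea of a weighted energy-plus-penalty cost can be sketched in a few lines. This is an illustrative toy (not the authors' code): a 1D Lennard-Jones "crystal" stands in for the real potential, and nearest-neighbor spacings stand in for diffraction-peak positions; all names and parameters are assumptions.

```python
# Toy sketch of a combined cost function: interatomic potential energy plus a
# weighted penalty for disagreement with a few observed "peak" positions.

def pair_energy(positions):
    """Lennard-Jones energy of a 1D chain (reduced units)."""
    e = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            r = abs(positions[j] - positions[i])
            e += 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)
    return e

def peak_penalty(predicted, observed):
    """Squared distance from each observed peak to its nearest predicted peak."""
    return sum(min((p - q) ** 2 for p in predicted) for q in observed)

def cost(positions, observed_peaks, weight=10.0):
    # Stand-in for "crystallinity": compare nearest-neighbor spacings against
    # the (partial) list of observed peak positions.
    spacings = [positions[i + 1] - positions[i] for i in range(len(positions) - 1)]
    return pair_energy(positions) + weight * peak_penalty(spacings, observed_peaks)

ideal = [0.0, 1.12, 2.24]      # near the LJ equilibrium spacing 2**(1/6)
distorted = [0.0, 1.0, 2.5]
```

    A structure optimizer would then minimize `cost` over atomic positions; the penalty term steers the search toward structures consistent with the few observed peaks.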

  10. Impulsive control of stochastic systems with applications in chaos control, chaos synchronization, and neural networks.

    PubMed

    Li, Chunguang; Chen, Luonan; Aihara, Kazuyuki

    2008-06-01

    Real systems are often subject to both noise perturbations and impulsive effects. In this paper, we study the stability and stabilization of systems with both noise perturbations and impulsive effects. In other words, we generalize the impulsive control theory from the deterministic case to the stochastic case. The method is based on extending the comparison method to the stochastic case. The method presented in this paper is general and easy to apply. Theoretical results on both stability in the pth mean and stability with disturbance attenuation are derived. To show the effectiveness of the basic theory, we apply it to the impulsive control and synchronization of chaotic systems with noise perturbations, and to the stability of impulsive stochastic neural networks. Several numerical examples are also presented to verify the theoretical results.
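
    The stabilizing effect described above can be illustrated numerically. The sketch below is our toy example, not the paper's comparison-method analysis: geometric Brownian motion dx = a·x dt + s·x dW with a > 0 is unstable in the mean, but impulses x → b·x every T seconds with exp(a·T)·b < 1 stabilize it. All parameter values are illustrative.

```python
# Euler-Maruyama simulation of an unstable stochastic system with and without
# impulsive control; the sample mean of |x(t_end)| shows (in)stability.
import math, random

def mean_final_state(a=0.5, s=0.2, b=0.8, T=0.1, t_end=2.0,
                     dt=0.002, paths=200, impulses=True, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(paths):
        x, t, next_imp = 1.0, 0.0, T
        while t < t_end - 1e-12:
            dW = rng.gauss(0.0, math.sqrt(dt))
            x += a * x * dt + s * x * dW    # Euler-Maruyama step
            t += dt
            if impulses and t >= next_imp - 1e-12:
                x *= b                      # impulsive control action
                next_imp += T
        total += abs(x)
    return total / paths
```

    With these numbers the per-interval growth factor is exp(0.5·0.1)·0.8 ≈ 0.84 < 1, so the impulsively controlled mean decays even though the uncontrolled mean grows like exp(0.5·t).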

  11. Dimensions of vegetable parenting practices among preschoolers

    USDA-ARS?s Scientific Manuscript database

    The objective of this study was to determine the factor structure of 31 effective and ineffective vegetable parenting practices used by parents of preschool children based on three theoretically proposed factors: responsiveness, control, and structure. The methods employed included both corrected it...

  12. Theoretical Understanding the Relations of Melting-point Determination Methods from Gibbs Thermodynamic Surface and Applications on Melting Curves of Lower Mantle Minerals

    NASA Astrophysics Data System (ADS)

    Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.

    2016-12-01

    The melting temperatures of materials in the interior of the Earth have significant implications in many areas of geophysics. Direct calculation of the melting point by atomic simulation faces a substantial hysteresis problem. To overcome this hysteresis, a few independently founded melting-point determination methods are available, such as the free energy method, the two-phase or coexistence method, and the Z method. In this study, we provide a theoretical understanding of the relations among these methods from a geometrical perspective, based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then, combining it with experimental data and/or a previous melting-point determination method, we apply this model to derive the high-pressure melting curves of several lower mantle minerals with less computational effort than previous methods alone would require. In this way, some polyatomic minerals at extreme pressures that were previously almost intractable can now be calculated fully from first principles.
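
    The thermodynamic idea common to all these melting-point methods can be shown in a toy calculation (ours, not the paper's construction): at the melting temperature the solid and liquid free-energy branches G = H − T·S cross. The enthalpy and entropy values below are arbitrary illustrative numbers.

```python
# Find the melting temperature as the crossing of two free-energy branches.

H_SOLID, S_SOLID = -10.0, 1.0   # per-atom enthalpy and entropy of the solid
H_LIQ,   S_LIQ   =  -8.0, 2.0   # liquid has higher enthalpy and entropy

def g(H, S, T):
    return H - T * S

def melting_T(lo=0.1, hi=100.0, tol=1e-9):
    """Bisect on g_liquid(T) - g_solid(T); a sign change is assumed in [lo, hi]."""
    f = lambda T: g(H_LIQ, S_LIQ, T) - g(H_SOLID, S_SOLID, T)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    For this linear toy model the analytic answer is T_m = (H_liq − H_solid)/(S_liq − S_solid) = 2.0, which the bisection recovers.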

  13. Studies of EXAFSSpectra using Copper (II) Schiff Base complexes and Determination of Bond lengths Using Synchrotron Radiation

    NASA Astrophysics Data System (ADS)

    Mishra, A.; Vibhute, V.; Ninama, S.; Parsai, N.; Jha, S. N.; Sharma, P.

    2016-10-01

    X-ray absorption fine structure (XAFS) at the K-edge of copper has been studied in some copper (II) complexes with substituted anilines (2Cl, 4Br, 2NO2, 4NO2 and pure aniline) with o-PDA (orthophenylenediamine) as ligand. The X-ray absorption measurements were performed at the recently developed BL-8 dispersive EXAFS beamline at the 2.5 GeV Indus-2 Synchrotron Source at RRCAT, Indore, India. The data obtained were processed using the EXAFS data analysis program Athena. The graphical method gives useful information about the bond length and the environment of the absorbing atom. The theoretical bond lengths of the complexes were calculated by the interactive fitting of EXAFS using the fast Fourier inverse transformation (IFEFFIT) method, also called the Fourier transform method. The Lytle, Sayers and Stern method and Levy's method were used to determine the bond lengths of the studied complexes experimentally. The results of both methods have been compared with the theoretical IFEFFIT values.
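
    The Fourier-transform step that turns an EXAFS oscillation into a bond length can be shown schematically (this is not IFEFFIT): for a single scattering shell, χ(k) oscillates like sin(2kR), so the magnitude of its transform with respect to 2k peaks near the bond length R. The synthetic signal and grids below are illustrative.

```python
# Locate a bond length as the peak of the EXAFS Fourier-transform magnitude.
import math, cmath

R_TRUE = 2.0                                    # bond length in angstroms
ks = [2.0 + 0.02 * i for i in range(501)]       # k grid, 2..12 A^-1
chi = [math.sin(2.0 * k * R_TRUE) for k in ks]  # idealized single-shell EXAFS

def ft_magnitude(r):
    """|sum_k chi(k) * exp(-2i*k*r)| -- peaks where r matches the shell distance."""
    return abs(sum(c * cmath.exp(-2j * k * r) for c, k in zip(chi, ks)))

rs = [0.5 + 0.01 * i for i in range(351)]       # r grid, 0.5..4.0 A
r_peak = max(rs, key=ft_magnitude)
```

    Real analyses additionally window χ(k), weight it by k^n, and correct the peak position for the scattering phase shift, which this sketch omits.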

  14. Systematic development of a self-help and motivational enhancement intervention to promote sexual health in HIV-positive men who have sex with men.

    PubMed

    Van Kesteren, Nicole M C; Kok, Gerjo; Hospers, Harm J; Schippers, Jan; De Wildt, Wencke

    2006-12-01

    The objective of this study was to describe the application of a systematic process-Intervention Mapping-to developing a theory- and evidence-based intervention to promote sexual health in HIV-positive men who have sex with men (MSM). Intervention Mapping provides a framework that gives program planners a systematic method for decision-making in each phase of intervention development. In Step 1, we focused on the improvement of two health-promoting behaviors: satisfactory sexual functioning and safer sexual behavior. These behaviors were then linked with selected personal and external determinants, such as attitudes and social support, to produce a set of proximal program objectives. In Step 2, theoretical methods were identified to influence the proximal program objectives and were translated into practical strategies. Although theoretical methods were derived from various theories, self-regulation theory and a cognitive model of behavior change provided the main framework for selecting the intervention methods. The main strategies chosen were bibliotherapy (i.e., the use of written material to help people solve problems or change behavior) and motivational interviewing. In Step 3, the theoretical methods and practical strategies were applied in a program that comprised a self-help guide, a motivational interviewing session and a motivational interviewing telephone call, both delivered by specialist nurses in HIV treatment centers. In Step 4, implementation was anticipated by developing a linkage group to ensure involvement of program users in the planning process and conducting additional research to understand how to implement our program better. In Step 5, program evaluation was anticipated based on the planning process from the previous Intervention Mapping steps.

  15. Management of Teacher Scientific-Methodical Work in Vocational Educational Institutions on the Basis of Project-Target Approach

    ERIC Educational Resources Information Center

    Shakuto, Elena A.; Dorozhkin, Evgenij M.; Kozlova, Anastasia A.

    2016-01-01

    The relevance of the subject under analysis is determined by the lack of theoretical development of the problem of management of teacher scientific-methodical work in vocational educational institutions based upon innovative approaches in the framework of project paradigm. The purpose of the article is to develop and test a science-based…

  16. The global contribution of energy consumption by product exports from China.

    PubMed

    Tang, Erzi; Peng, Chong

    2017-06-01

    This paper presents a model to analyze the mechanism of the global contribution of energy usage by product exports. The theoretical analysis is based on the perspective that contributions should be estimated in relatively small sectors, in which production characteristics such as the productivity distribution of each sector can be considered. We then constructed a method to measure the global contribution of energy usage. A simple estimate of the global contribution is the ratio of goods export volume to GDP multiplied by total energy consumption, but this estimate understates the global contribution because it ignores the structure of energy consumption and product exports in China. According to our measurement method and based on the theoretical analysis, we calculated the global contribution of energy consumption by industrial manufactured product exports alone, at the level of smaller sectors within each industry or manufacturing sector. The results indicated that approximately 42% of the total energy usage of the whole economy of China in 2013 was contributed to foreign regions. Including primary product and service exports as well, the global contribution of energy consumption for China in 2013 by export was larger than 42% of total energy usage.
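
    Why the naive estimate understates the contribution is a matter of simple arithmetic, illustrated below with invented numbers (our toy example, not the paper's data): when energy-intensive sectors export a larger share of their output than the economy-wide average, sector-level weighting gives a larger exported-energy figure.

```python
# Naive vs sector-weighted estimate of energy consumption embodied in exports.

sectors = [
    # (sector, output, exports, energy_use) in arbitrary units
    ("steel",    100.0, 60.0, 300.0),   # energy-intensive, export-heavy
    ("textiles", 100.0, 40.0,  50.0),
    ("services", 300.0, 20.0,  50.0),
]

gdp          = sum(s[1] for s in sectors)
total_export = sum(s[2] for s in sectors)
total_energy = sum(s[3] for s in sectors)

naive = (total_export / gdp) * total_energy                  # ignores structure
sector_based = sum(e * (x / q) for _, q, x, e in sectors)    # weights each sector
```

    Here the naive estimate is 96 units while the sector-weighted one is about 203, because the steel sector exports 60% of an energy-intensive output.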

  17. Platform construction and extraction mechanism study of magnetic mixed hemimicelles solid-phase extraction

    PubMed Central

    Xiao, Deli; Zhang, Chan; He, Jia; Zeng, Rong; Chen, Rong; He, Hua

    2016-01-01

    Simple, accurate and high-throughput pretreatment methods would facilitate large-scale studies of trace analysis in complex samples. Magnetic mixed hemimicelles solid-phase extraction has the potential to become a key pretreatment method in biological, environmental and clinical research. However, the lack of experimental predictability and the unclear extraction mechanism limit the development of this promising method. Herein, this work tries to establish theory-based experimental designs for the extraction of trace analytes from complex samples using magnetic mixed hemimicelles solid-phase extraction. We selected three categories and six sub-types of compounds for a systematic comparative study of the extraction mechanism, and comprehensively illustrated the roles of the different forces (hydrophobic interaction, π-π stacking interaction, hydrogen-bonding interaction, electrostatic interaction) for the first time. Moreover, application guidelines for supporting materials, surfactants and sample matrices are summarized. The extraction mechanism and platform established in this study make it promising for predictable and efficient pretreatment, under theory-based experimental design, of trace analytes from environmental, biological and clinical samples. PMID:27924944

  18. ζ Oph and the weak-wind problem

    NASA Astrophysics Data System (ADS)

    Gvaramadze, V. V.; Langer, N.; Mackey, J.

    2012-11-01

    Mass-loss rate, Ṁ, is one of the key parameters affecting the evolution and observational manifestations of massive stars and their impact on the ambient medium. Despite its importance, there is a factor of ~100 discrepancy between empirical and theoretical Ṁ of late-type O dwarfs, the so-called weak-wind problem. In this Letter, we propose a simple novel method to constrain Ṁ of runaway massive stars through observation of their bow shocks and Strömgren spheres, which might be of decisive importance for resolving the weak-wind problem. Using this method, we found that Ṁ of the well-known runaway O9.5 V star ζ Oph is more than an order of magnitude higher than that derived from ultraviolet (UV) line fitting and a factor of 6-7 lower than those based on the theoretical recipe by Vink et al. and the Hα line. The discrepancy between Ṁ derived by our method and that based on UV lines would be even more severe if the stellar wind is clumpy. At the same time, our estimate of Ṁ agrees with that predicted by the moving reversing layer theory of Lucy.
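
    The momentum balance behind bow-shock estimates of Ṁ is compact enough to sketch: the wind ram pressure equals the ram pressure of the oncoming medium at the standoff distance, R0 = sqrt(Ṁ v_w / (4π ρ v*²)), which can be inverted for Ṁ. The relation is the standard analytic one, but all numerical values below are generic illustrative numbers, not those derived for ζ Oph in the Letter.

```python
# Invert the bow-shock standoff relation for the stellar mass-loss rate (cgs).
import math

M_SUN = 1.989e33    # g
YEAR  = 3.156e7     # s
M_H   = 1.673e-24   # g (hydrogen mass)
PC    = 3.086e18    # cm

def mdot_from_standoff(r0_cm, n_ism, v_star, v_wind):
    """Mass-loss rate from standoff distance, ISM density, and velocities (cgs)."""
    rho = n_ism * M_H
    return 4.0 * math.pi * rho * v_star**2 * r0_cm**2 / v_wind

# Illustrative runaway-star numbers: R0 = 0.1 pc, n = 1 cm^-3,
# v* = 30 km/s, v_wind = 1500 km/s.
mdot = mdot_from_standoff(0.1 * PC, 1.0, 30e5, 1500e5)
mdot_msun_yr = mdot * YEAR / M_SUN
```

    With these inputs the estimate lands at a few times 10^-9 solar masses per year, the order of magnitude typical of late-O dwarf winds in this debate.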

  19. Theoretical Analysis of Penalized Maximum-Likelihood Patlak Parametric Image Reconstruction in Dynamic PET for Lesion Detection.

    PubMed

    Yang, Li; Wang, Guobao; Qi, Jinyi

    2016-04-01

    Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
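
    The pixel-wise step of the indirect route can be sketched as follows. This is a generic illustration of Patlak analysis, not the authors' reconstruction code: each time-activity curve C_T(t) is fit to C_T(t) = Ki·∫Cp du + V·Cp(t), so regressing C_T/Cp on ∫Cp/Cp yields the slope Ki. The input function and parameter values are synthetic.

```python
# Patlak graphical analysis: recover (Ki, V) from a time-activity curve.
import math

def patlak_fit(t, cp, ct):
    """Ordinary least squares for (Ki, V) on the Patlak-transformed data."""
    icp, acc = [0.0], 0.0                 # cumulative integral of Cp (trapezoid)
    for i in range(1, len(t)):
        acc += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
        icp.append(acc)
    xs = [icp[i] / cp[i] for i in range(1, len(t))]   # skip the first sample
    ys = [ct[i] / cp[i] for i in range(1, len(t))]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    ki = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    v = (sy - ki * sx) / n
    return ki, v

# Synthetic noiseless data generated from the model itself, so the fit should
# recover Ki = 0.05 and V = 0.3 essentially exactly.
t  = [0.5 * i for i in range(1, 41)]
cp = [math.exp(-0.1 * u) + 0.2 for u in t]
icp, acc = [0.0], 0.0
for i in range(1, len(t)):
    acc += 0.5 * (cp[i] + cp[i - 1]) * (t[i] - t[i - 1])
    icp.append(acc)
ct = [0.05 * icp[i] + 0.3 * cp[i] for i in range(len(t))]
ki_hat, v_hat = patlak_fit(t, cp, ct)
```

    Direct reconstruction replaces this two-stage pipeline by estimating (Ki, V) from the sinogram data with the Patlak model embedded in the reconstruction itself.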

  20. A fractional Fourier transform analysis of a bubble excited by an ultrasonic chirp.

    PubMed

    Barlow, Euan; Mulholland, Anthony J

    2011-11-01

    The fractional Fourier transform is proposed here as a model based, signal processing technique for determining the size of a bubble in a fluid. The bubble is insonified with an ultrasonic chirp and the radiated pressure field is recorded. This experimental bubble response is then compared with a series of theoretical model responses to identify the most accurate match between experiment and theory which allows the correct bubble size to be identified. The fractional Fourier transform is used to produce a more detailed description of each response, and two-dimensional cross correlation is then employed to identify the similarities between the experimental response and each theoretical response. In this paper the experimental bubble response is simulated by adding various levels of noise to the theoretical model output. The method is compared to the standard technique of using time-domain cross correlation. The proposed method is shown to be far more robust at correctly sizing the bubble and can cope with much lower signal to noise ratios.

  1. Analysis of Drop Oscillations Excited by an Electrical Point Force in AC EWOD

    NASA Astrophysics Data System (ADS)

    Oh, Jung Min; Ko, Sung Hee; Kang, Kwan Hyoung

    2008-03-01

    Recently, a few researchers have reported the oscillation of a sessile drop in AC EWOD (electrowetting on dielectrics) and some of its consequences. The drop oscillation problem in AC EWOD is relevant to various electrowetting applications such as lab-on-a-chip (LOC) devices, liquid lenses, and electronic displays. However, no theoretical analysis of the problem has been attempted yet. In the present paper, we propose a theoretical model of the oscillation by applying conventional drop-oscillation analysis. The domain perturbation method is used to derive the shape-mode equations under the assumptions of weak viscous flow and small deformation. The Maxwell stress is exerted on the three-phase contact line of the droplet like a point force. This force is treated as a delta function and decomposed into the driving forces of each shape mode. The theoretical results for the shape and frequency responses are compared with experiments, showing qualitative agreement.

  2. Constrained orbital intercept-evasion

    NASA Astrophysics Data System (ADS)

    Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh

    2014-06-01

    An effective characterization of intercept-evasion confrontations in various space environments, and the derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms have to operate as components of satellite formations and/or systems while retaining the capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on the Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player, multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient, orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic yet robust and practical approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.

  3. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods are developed for approximating the natural modes of a class of vibration problems involving flexible beams with tip bodies, using subspaces of piecewise polynomial spline functions. An abstract operator-theoretic formulation of the eigenvalue problem is derived and its spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators, together with the approximation properties of interpolatory splines, is used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  4. Formulation analysis and computation of an optimization-based local-to-nonlocal coupling method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, Marta; Bochev, Pavel Blagoveston

    2017-01-01

    In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach casts the coupling of the models as a control problem in which the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of local-to-nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.

  5. Stokes vector based interpolation method to improve the efficiency of bio-inspired polarization-difference imaging in turbid media

    NASA Astrophysics Data System (ADS)

    Guan, Jinge; Ren, Wei; Cheng, Yaoyu

    2018-04-01

    We demonstrate an efficient polarization-difference imaging system for turbid conditions using the Stokes vector of light. The interaction of the scattered light with the polarizer is analyzed with the Stokes-Mueller formalism. An interpolation method is proposed to theoretically replace the mechanical rotation of the analyzer's polarization axis, and its performance is verified by experiments at different turbidity levels. We show that, compared with direct imaging, the Stokes-vector-based imaging method can effectively reduce the effect of light scattering and enhance image contrast.
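
    The reason Stokes components can substitute for analyzer rotation follows from the Mueller calculus: behind an ideal linear polarizer at angle θ, the transmitted intensity is I(θ) = ½(S0 + S1·cos 2θ + S2·sin 2θ), so the minimum and maximum intensities follow directly from (S1, S2). The sketch below is a generic illustration of that identity (the example Stokes vector is arbitrary), not the paper's interpolation scheme itself.

```python
# Analyzer extrema computed from Stokes components vs a brute-force angle sweep.
import math

def intensity(stokes, theta):
    s0, s1, s2, _ = stokes
    return 0.5 * (s0 + s1 * math.cos(2 * theta) + s2 * math.sin(2 * theta))

def extrema_from_stokes(stokes):
    s0, s1, s2, _ = stokes
    amp = math.hypot(s1, s2)
    return 0.5 * (s0 - amp), 0.5 * (s0 + amp)   # (I_min, I_max)

stokes = (1.0, 0.3, 0.4, 0.1)
i_min, i_max = extrema_from_stokes(stokes)
# brute-force check by sampling the analyzer angle finely
samples = [intensity(stokes, math.pi * i / 1800) for i in range(1800)]
```

    In practice this means a polarization-difference image can be formed from a few fixed-analyzer measurements that determine S0, S1, S2, with no moving parts.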

  6. Generalized theoretical method for the interaction between arbitrary nonuniform electric field and molecular vibrations: Toward near-field infrared spectroscopy and microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iwasa, Takeshi, E-mail: tiwasa@mail.sci.hokudai.ac.jp; Takenaka, Masato; Taketsugu, Tetsuya

    A theoretical method to compute infrared absorption spectra when a molecule is interacting with an arbitrary nonuniform electric field such as near-fields is developed and numerically applied to simple model systems. The method is based on the multipolar Hamiltonian, where the light-matter interaction is described by a spatial integral of the inner product of the molecular polarization and the applied electric field. The computation scheme is developed under the harmonic approximation for the molecular vibrations and within the framework of modern electronic structure calculations such as density functional theory. Infrared reflection absorption and near-field infrared absorption are considered as model systems. The obtained IR spectra successfully reflect the spatial structure of the applied electric field and the corresponding vibrational modes, demonstrating the applicability of the present method to the analysis of modern nanovibrational spectroscopy using near-fields. The present method can use arbitrary electric fields and thus can integrate two fields such as computational chemistry and electromagnetics.

  7. Generalized theoretical method for the interaction between arbitrary nonuniform electric field and molecular vibrations: Toward near-field infrared spectroscopy and microscopy.

    PubMed

    Iwasa, Takeshi; Takenaka, Masato; Taketsugu, Tetsuya

    2016-03-28

    A theoretical method to compute infrared absorption spectra when a molecule is interacting with an arbitrary nonuniform electric field such as near-fields is developed and numerically applied to simple model systems. The method is based on the multipolar Hamiltonian, where the light-matter interaction is described by a spatial integral of the inner product of the molecular polarization and the applied electric field. The computation scheme is developed under the harmonic approximation for the molecular vibrations and within the framework of modern electronic structure calculations such as density functional theory. Infrared reflection absorption and near-field infrared absorption are considered as model systems. The obtained IR spectra successfully reflect the spatial structure of the applied electric field and the corresponding vibrational modes, demonstrating the applicability of the present method to the analysis of modern nanovibrational spectroscopy using near-fields. The present method can use arbitrary electric fields and thus can integrate two fields such as computational chemistry and electromagnetics.

  8. Coupled-cluster based R-matrix codes (CCRM): Recent developments

    NASA Astrophysics Data System (ADS)

    Sur, Chiranjib; Pradhan, Anil K.

    2008-05-01

    We report the ongoing development of the new coupled-cluster R-matrix codes (CCRM) for treating electron-ion scattering and radiative processes within the framework of the relativistic coupled-cluster method (RCC), interfaced with the standard R-matrix methodology. The RCC method is size consistent and in principle equivalent to an all-order many-body perturbation theory. It is one of the most accurate many-body theories and has been applied to several systems. This project should enable the study of electron interactions with heavy atoms/ions, utilizing not only high-speed computing platforms but also an improved theoretical description of the relativistic and correlation effects in the target atoms/ions, as treated extensively within the RCC method. Here we present a comprehensive outline of the newly developed theoretical method and a schematic representation of the new suite of CCRM codes. We begin with the flowchart and a description of the various stages involved in this development, retaining notations and nomenclature for the different stages analogous to those of the standard R-matrix codes.

  9. Recent theoretical developments and experimental studies pertinent to vortex flow aerodynamics - With a view towards design

    NASA Technical Reports Server (NTRS)

    Lamar, J. E.; Luckring, J. M.

    1978-01-01

    A review is presented of recent progress in a research program directed towards the development of an improved vortex-flow technology base. It is pointed out that separation induced vortex-flows from the leading and side edges play an important role in the high angle-of-attack aerodynamic characteristics of a wide range of modern aircraft. In the analysis and design of high-speed aircraft, a detailed knowledge of this type of separation is required, particularly with regard to critical wind loads and the stability and performance at various off-design conditions. A description of analytical methods is presented. The theoretical methods employed are divided into two classes which are dependent upon the underlying aerodynamic assumptions. One conical flow method is considered along with three different nonconical flow methods. Comparisons are conducted between the described methods and available aerodynamic data. Attention is also given to a vortex flow drag study and a vortex flow wing design using suction analogy.

  10. Design of a rotary dielectric elastomer actuator using a topology optimization method based on pairs of curves

    NASA Astrophysics Data System (ADS)

    Wang, Nianfeng; Guo, Hao; Chen, Bicheng; Cui, Chaoyu; Zhang, Xianmin

    2018-05-01

    Dielectric elastomers (DE), known as electromechanical transducers, have been widely used in the field of sensors, generators, actuators and energy harvesting for decades. A large number of DE actuators, including bending actuators, linear actuators and rotational actuators, have been designed using experience-based methods. This paper proposes a new method for the design of DE actuators using a topology optimization method based on pairs of curves. First, theoretical modeling and optimization design are discussed, after which a rotary dielectric elastomer actuator is designed using this optimization method. Finally, experiments and comparisons between several DE actuators are made to verify the optimized result.

  11. Use of Mass- and Area-Dimensional Power Laws for Determining Precipitation Particle Terminal Velocities.

    NASA Astrophysics Data System (ADS)

    Mitchell, David L.

    1996-06-01

    Based on boundary layer theory and a comparison of empirical power laws relating the Reynolds and Best numbers, it was apparent that the primary variables governing a hydrometeor's terminal velocity were its mass, its area projected to the flow, and its maximum dimension. The dependence of terminal velocities on surface roughness appeared secondary, with surface roughness apparently changing significantly only during phase changes (i.e., ice to liquid). In the theoretical analysis, a new, comprehensive expression for the drag force, which is valid for both inertial and viscous-dominated flow, was derived.A hydrometeor's mass and projected area were simply and accurately represented in terms of its maximum dimension by using dimensional power laws. Hydrometeor terminal velocities were calculated by using mass- and area-dimensional power laws to parameterize the Best number, X. Using a theoretical relationship general for all particle types, the Reynolds number, Re, was then calculated from the Best number. Terminal velocities were calculated from Re.Alternatively, four Re-X power-law expressions were extracted from the theoretical Re-X relationship. These expressions collectively describe the terminal velocities of all ice particle types. These were parameterized using mass- and area-dimensional power laws, yielding four theoretically based power-law expressions predicting fall speeds in terms of ice particle maximum dimension. When parameterized for a given ice particle type, the theoretical fall speed power law can be compared directly with empirical fall speed-dimensional power laws in the literature for the appropriate Re range. This provides a means of comparing theory with observations.Terminal velocities predicted by this method were compared with fall speeds given by empirical fall speed expressions for the same ice particle type, which were curve fits to measured fall speeds. Such comparisons were done for nine types of ice particles. 
Fall speeds predicted by this method differed from those based on measurements by no more than 20%. The features that distinguish this method of determining fall speeds from others are that it does not represent particles as spheroids, it is general for any ice particle shape and size, it is conceptually and mathematically simple, it appears accurate, and it provides physical insight. This method also allows fall speeds to be determined from aircraft measurements of ice particle mass and projected area, rather than from direct fall speed measurements. This approach may be useful for the ice crystals characterizing cirrus clouds, for which direct fall speed measurements are difficult.
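
    The computational chain described above is short enough to sketch. The structure follows the abstract: mass- and area-dimensional power laws m = a·D^b and A = c·D^d, the Best number X = 2·m·g·D²/(ρ_air·A·ν²), a Re-X power law valid over a limited Re range, and v = Re·ν/D. All coefficient values below are hypothetical placeholders, not the fitted values from the paper.

```python
# Terminal velocity from mass- and area-dimensional power laws via the Best number.
import math

G       = 9.81      # m s^-2
RHO_AIR = 1.0       # kg m^-3, illustrative air density
NU      = 1.5e-5    # m^2 s^-1, kinematic viscosity of air

def terminal_velocity(D, a, b, c, d, alpha=0.1, beta=0.85):
    m = a * D**b                                     # mass-dimension power law
    A = c * D**d                                     # projected-area power law
    X = 2.0 * m * G * D**2 / (RHO_AIR * A * NU**2)   # Best number, X = Cd*Re^2
    Re = alpha * X**beta                             # hypothetical Re-X power law
    return Re * NU / D

# Hypothetical aggregate-like coefficients (SI units)
v1 = terminal_velocity(1e-3, a=0.01, b=2.1, c=0.2, d=1.9)
v2 = terminal_velocity(2e-3, a=0.01, b=2.1, c=0.2, d=1.9)
```

    Because the whole chain is power laws, the result is itself a power law in D, which is what permits direct comparison with empirical fall speed-dimension relations.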

  12. Aerosol characteristics inversion based on the improved lidar ratio profile with the ground-based rotational Raman-Mie lidar

    NASA Astrophysics Data System (ADS)

    Ji, Hongzhu; Zhang, Yinchao; Chen, Siying; Chen, He; Guo, Pan

    2018-06-01

    An iterative method, based on a derived inverse relationship between the atmospheric backscatter coefficient and the aerosol lidar ratio, is proposed to invert the lidar ratio profile and the aerosol extinction coefficient. The feasibility of this method is investigated theoretically and experimentally. Simulation results show that the inversion accuracy of the aerosol optical properties for the iterative method can be improved in the near-surface aerosol layer and in optically thick layers. Experimentally, as a result of reduced insufficiency and incoherence errors, aerosol optical properties with higher accuracy can be obtained in the near-surface region and in regions of numerical derivative distortion. In addition, the particle composition can be distinguished roughly based on the improved lidar ratio profile.

  13. Integrated primary care, the collaboration imperative inter-organizational cooperation in the integrated primary care field: a theoretical framework

    PubMed Central

    Valentijn, Pim P; Bruijnzeels, Marc A; de Leeuw, Rob J; Schrijvers, Guus J.P

    2012-01-01

    Purpose: Capacity problems and political pressures have led to a rapid change in the organization of primary care, from monodisciplinary small businesses to complex inter-organizational relationships. It is assumed that inter-organizational collaboration is the driving force behind integrated (primary) care. Despite the importance of collaboration and integration of services in primary care, there is no unambiguous definition of either concept. The purpose of this study is to examine and link the conceptualization and validation of the terms inter-organizational collaboration and integrated primary care using a theoretical framework. Theory: The theoretical framework is based on the complex collaboration process of negotiation among multiple stakeholder groups in primary care. Methods: A literature review of health sciences and business databases, and targeted grey literature sources. Based on the literature review, we operationalized the constructs of inter-organizational collaboration and integrated primary care in a theoretical framework. The framework is being validated in an explorative study of 80 primary care projects in the Netherlands. Results and conclusions: Integrated primary care is considered a multidimensional construct based on a continuum of integration, extending from segregation to integration. The synthesis of current theories and concepts of inter-organizational collaboration is insufficient to deal with the complexity of collaborative issues in primary care. One coherent, integrated theoretical framework was found that could make the complex collaboration process in primary care transparent. The theoretical framework presented in this study is a first step toward understanding the patterns of successful collaboration and integration in primary care services. These patterns can give insight into the organizational forms needed to create a well-functioning integrated (primary) care system that fits the local needs of a population. Preliminary data on the patterns of collaboration and integration will be presented.

  14. Categorical data processing for real estate objects valuation using statistical analysis

    NASA Astrophysics Data System (ADS)

    Parygin, D. S.; Malikov, V. P.; Golubev, A. V.; Sadovnikova, N. P.; Petrova, T. M.; Finogeev, A. G.

    2018-05-01

    Theoretical and practical approaches to the use of statistical methods for studying various properties of infrastructure objects are analyzed in the paper. Methods of forecasting the value of objects are considered. A method for coding categorical variables describing properties of real estate objects is proposed. The analysis of the results of modeling the price of real estate objects using regression analysis and an algorithm based on a comparative approach is carried out.
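The abstract does not reproduce the paper's specific coding scheme for categorical real-estate attributes, but the general step can be sketched with two common encodings: dummy (one-hot) coding and target-mean coding. The district names and prices below are purely illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical categorical attribute and toy (district, area_m2, price) data;
# these values are illustrative only, not from the paper.
districts = ["center", "suburb", "outskirts"]
rows = [("center", 50, 120.0), ("center", 70, 165.0),
        ("suburb", 50, 90.0), ("suburb", 80, 140.0),
        ("outskirts", 60, 75.0), ("outskirts", 90, 110.0)]

def one_hot(value, categories):
    """Dummy (0/1 indicator) coding of a categorical value."""
    return [1.0 if value == c else 0.0 for c in categories]

def target_encode(data):
    """Replace each category by the mean target (price) observed for it."""
    sums, counts = defaultdict(float), defaultdict(int)
    for district, _, price in data:
        sums[district] += price
        counts[district] += 1
    return {d: sums[d] / counts[d] for d in sums}
```

Either encoding turns the categorical column into numbers a regression model can consume; for example, `one_hot("suburb", districts)` yields `[0.0, 1.0, 0.0]`.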

  15. Methods for Evaluating Flammability Characteristics of Shipboard Materials

    DTIC Science & Technology

    1994-02-28

    … smoke optical properties; and (toxic) gas production rates. In general, the prediction of these full-scale burning characteristics requires … The ASTM Room/Corner Test Method can be used to calculate the heat release rate of a material based upon oxygen depletion calorimetry. … Clearly, more validation is required for the theoretical calculations. All are consistent in the use of calorimeter and UFT-type property data, and all show …

  16. Relationships between the decoupled and coupled transfer functions: Theoretical studies and experimental validation

    NASA Astrophysics Data System (ADS)

    Wang, Zengwei; Zhu, Ping; Liu, Zhao

    2018-01-01

    A generalized method for predicting the decoupled transfer functions based on in-situ transfer functions is proposed. The method allows predicting the decoupled transfer functions using coupled transfer functions, without disassembling the system. Two ways to derive relationships between the decoupled and coupled transfer functions are presented. Issues related to immeasurability of coupled transfer functions are also discussed. The proposed method is validated by numerical and experimental case studies.

  17. Portable apparatus with CRT display for nondestructive testing of concrete by the ultrasonic pulse method

    NASA Technical Reports Server (NTRS)

    Manta, G.; Gurau, Y.; Nica, P.; Facacaru, I.

    1974-01-01

    The development of methods for the nondestructive study of concrete structures is discussed. The nondestructive test procedure is based on the method of ultrasonic pulse transmission through the material. The measurements indicate that the elastic properties of concrete or other heterogeneous materials are a function of the rate of ultrasonic propagation. Diagrams of the test equipment are provided. Mathematical models are included to support the theoretical aspects.

  18. Free boundary skin current magnetohydrodynamic equilibria

    NASA Astrophysics Data System (ADS)

    Reusch, Michael F.

    1988-10-01

    Function theoretic methods in the complex plane are used to develop simple parametric hodograph formulas that generate sharp boundary equilibria of arbitrary shape. The related method of Gorenflo [Z. Angew. Math. Phys. 16, 279 (1965)] and Merkel (Ph.D. thesis, University of Munich, 1965) is discussed. A numerical technique for the construction of solutions, based on one of the methods, is presented. A study is made of the bifurcations of an equilibrium of general form.

  19. Stochastic symplectic and multi-symplectic methods for nonlinear Schrödinger equation with white noise dispersion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cui, Jianbo, E-mail: jianbocui@lsec.cc.ac.cn; Hong, Jialin, E-mail: hjl@lsec.cc.ac.cn; Liu, Zhihui, E-mail: liuzhihui@lsec.cc.ac.cn

    We indicate that the nonlinear Schrödinger equation with white noise dispersion possesses stochastic symplectic and multi-symplectic structures. Based on these structures, we propose the stochastic symplectic and multi-symplectic methods, which preserve the continuous and discrete charge conservation laws, respectively. Moreover, we show that the proposed methods are convergent with temporal order one in probability. Numerical experiments are presented to verify our theoretical results.

  20. Heuristic approach to capillary pressures averaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coca, B.P.

    1980-10-01

    Several methods are available to average capillary pressure curves. Among these are the J-curve and regression equations of the wetting-fluid saturation in porosity and permeability (capillary pressure held constant). While the regression equations seem completely empirical, the J-curve method seems to be theoretically sound because its expression is based on a relation between the average capillary radius and the permeability-porosity ratio. An analysis is given of each of these methods.
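The J-curve referred to here is the classical Leverett J-function, which normalizes capillary pressure by rock and fluid properties so that curves from different samples can be averaged. A minimal sketch, assuming consistent units (the sample values in the usage note are illustrative):

```python
import math

def leverett_j(pc, k, phi, sigma, theta=0.0):
    """Leverett J-function: J = Pc * sqrt(k/phi) / (sigma * cos(theta)).
    pc: capillary pressure, k: permeability, phi: porosity,
    sigma: interfacial tension, theta: contact angle in radians.
    Consistent units are assumed throughout."""
    return pc * math.sqrt(k / phi) / (sigma * math.cos(theta))

def average_j(samples):
    """Average the J-values of several core samples taken at the same
    wetting-fluid saturation; each sample is a (pc, k, phi, sigma) tuple."""
    js = [leverett_j(*s) for s in samples]
    return sum(js) / len(js)
```

For instance, `leverett_j(2.0, 0.04, 0.25, 1.0)` evaluates to 0.8 in these dimensionless toy units.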

  1. The cosmological lithium problem revisited

    NASA Astrophysics Data System (ADS)

    Bertulani, C. A.; Mukhamedzhanov, A. M.; Shubhchintak

    2016-07-01

    After a brief review of the cosmological lithium problem, we report a few recent attempts to find theoretical solutions by our group at Texas A&M University (Commerce & College Station). We discuss our studies on the theoretical description of electron screening, the possible existence of parallel universes of dark matter, and the use of non-extensive statistics during the Big Bang nucleosynthesis epoch. Last but not least, we discuss possible solutions within the nuclear physics realm. The impact of recent measurements of relevant nuclear reaction cross sections for Big Bang nucleosynthesis based on indirect methods is also assessed. Although our attempts may not be able to explain the observed discrepancies between theory and observations, they suggest theoretical developments that can also be useful for stellar nucleosynthesis.

  2. Compact SOI optimized slot microring coupled phase-shifted Bragg grating resonator for sensing

    NASA Astrophysics Data System (ADS)

    Zhao, Chao Ying; Zhang, Lei; Zhang, Cheng Mei

    2018-05-01

    We propose a novel sensor structure composed of a slot microring and a phase-shifted sidewall Bragg grating in a slot waveguide. We first present a theoretical analysis of transmission using the transfer matrix. Then, the mode-field distributions of the transmission spectrum obtained from 3D FDTD simulations demonstrate that our sensor exhibits a theoretical sensitivity of 297.13 nm/RIU, a minimum detection limit of 1.1 × 10⁻⁴ RIU, a maximum extinction ratio of 20 dB, a quality factor of 2 × 10³, and a compact footprint of 15 μm × 8.5 μm. Finally, the sensor's performance is simulated for NaCl solution.

  3. Electron Capture in Slow Collision of He^2++H : Revisited

    NASA Astrophysics Data System (ADS)

    Krstic, Ps

    2003-05-01

    Very early experimental data (Fite et al., Proc. R. Soc. A 268, 527 (1962)) for He^2++H, recent ORNL measurements for Ne^2+ + H, and our theoretical estimates suggest that the electron capture cross sections for these strongly exoergic collision systems drop more slowly toward low collision energies than expected from previous theories. We perform a theoretical study to establish and understand the true nature of this controversy. The calculations are based on the Hidden Crossings MOCC method, augmented with rotational and turning point effects.

  4. Generation of radially polarized beams based on thermal analysis of a working cavity.

    PubMed

    He, Guangyuan; Guo, Jing; Wang, Biao; Jiao, Zhongxing

    2011-09-12

    The laser oscillation and polarization behavior of a side-pumped Nd:YAG laser are studied theoretically and experimentally by a thermal model for a working cavity. We use this model along with the Magni method, which gives a new stability diagram, to show important characteristics of the resonator. High-power radially and azimuthally polarized laser beams are obtained with a Nd:YAG module in a plano-plano cavity. Special regions and thermal hysteresis loops are observed in the experiments, which are concordant with the theoretical predictions.

  5. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yueqi; Lava, Pascal; Reu, Phillip

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.

  6. Theoretical analysis on the measurement errors of local 2D DIC: Part I temporal and spatial uncertainty quantification of displacement measurements

    DOE PAGES

    Wang, Yueqi; Lava, Pascal; Reu, Phillip; ...

    2015-12-23

    This study presents a theoretical uncertainty quantification of displacement measurements by subset-based 2D-digital image correlation. A generalized solution to estimate the random error of displacement measurement is presented. The obtained solution suggests that the random error of displacement measurements is determined by the image noise, the summation of the intensity gradient in a subset, the subpixel part of displacement, and the interpolation scheme. The proposed method is validated with virtual digital image correlation tests.
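The stated dependence of the random displacement error on image noise and the summation of intensity gradients in the subset can be sketched as below. This is a hedged reading of the abstract's qualitative result; the interpolation-scheme and subpixel terms it mentions are ignored, and the sqrt(2) prefactor is an assumption of the sketch, not a value quoted in the abstract.

```python
import math

def dic_displacement_std(noise_std, grad_x):
    """Approximate 1-sigma random error of the x-displacement in subset-based
    DIC: std(u) ~ sqrt(2) * sigma_noise / sqrt(sum of squared intensity
    gradients over the subset).  Hedged simplification; interpolation and
    subpixel effects are neglected."""
    ssq = sum(g * g for g in grad_x)
    return math.sqrt(2.0) * noise_std / math.sqrt(ssq)
```

The form makes the abstract's point concrete: larger gradient content in the subset (a bigger `ssq`) or lower image noise reduces the random displacement error.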

  7. Suspended tungsten-based nanowires with enhanced mechanical properties grown by focused ion beam induced deposition

    NASA Astrophysics Data System (ADS)

    Córdoba, Rosa; Lorenzoni, Matteo; Pablo-Navarro, Javier; Magén, César; Pérez-Murano, Francesc; María De Teresa, José

    2017-11-01

    The implementation of three-dimensional (3D) nano-objects as building blocks for the next generation of electro-mechanical, memory and sensing nano-devices is at the forefront of technology. The direct writing of functional 3D nanostructures is made feasible by using a method based on focused ion beam induced deposition (FIBID). We use this technique to grow horizontally suspended tungsten nanowires and then study their nano-mechanical properties by the three-point bending method with atomic force microscopy. These measurements reveal that these nanowires exhibit a yield strength up to 12 times higher than that of bulk tungsten, and near the theoretical value of 0.1 times the Young’s modulus (E). We find a size dependence of E that is adequately described by a core-shell model, which has been confirmed by transmission electron microscopy and compositional analysis at the nanoscale. Additionally, we show that experimental resonance frequencies of suspended nanowires (in the MHz range) are in good agreement with theoretical values. These extraordinary mechanical properties are key to designing electro-mechanically robust nanodevices based on FIBID tungsten nanowires.

  8. Blood oxygenation and flow measurements using a single 720-nm tunable V-cavity laser.

    PubMed

    Feng, Yafei; Deng, Haoyu; Chen, Xin; He, Jian-Jun

    2017-08-01

    We propose and demonstrate a single-laser-based sensing method for measuring both blood oxygenation and microvascular blood flow. Based on the optimal wavelength range found from a theoretical analysis of differential-absorption-based blood oxygenation measurement, we designed and fabricated a 720-nm-band wavelength-tunable V-cavity laser. Without any grating or bandgap engineering, the laser has a wavelength tuning range of 14.1 nm. By using the laser emitting at 710.3 nm and 724.4 nm to measure the oxygenation and blood flow, we experimentally demonstrate the proposed method.
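Two-wavelength differential-absorption oximetry of this kind reduces to a 2×2 Beer-Lambert system for the oxygenated and deoxygenated hemoglobin contributions. A hedged sketch follows; the extinction coefficients and absorbances are made-up numbers (with the optical path length folded into the coefficients), not values from the paper.

```python
def oxygen_saturation(a1, a2, e_o1, e_d1, e_o2, e_d2):
    """Solve the 2x2 Beer-Lambert system
        a1 = e_o1*c_o + e_d1*c_d    (wavelength 1)
        a2 = e_o2*c_o + e_d2*c_d    (wavelength 2)
    for the HbO2 and Hb contributions c_o, c_d (path length folded into the
    e's), then return the saturation c_o / (c_o + c_d)."""
    det = e_o1 * e_d2 - e_o2 * e_d1
    c_o = (a1 * e_d2 - a2 * e_d1) / det   # Cramer's rule
    c_d = (e_o1 * a2 - e_o2 * a1) / det
    return c_o / (c_o + c_d)
```

With illustrative coefficients chosen so the system is well conditioned, a 70 % saturated sample round-trips correctly through the solver.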

  9. A method of designing smartphone interface based on the extended user's mental model

    NASA Astrophysics Data System (ADS)

    Zhao, Wei; Li, Fengmin; Bian, Jiali; Pan, Juchen; Song, Song

    2017-01-01

    The user's mental model is the core guiding theory of product design, especially for practical products. The essence of a practical product is a tool that users employ to meet their needs, and the most important feature of a tool is usability. The design method based on the user's mental model provides practical and feasible theoretical guidance for improving the usability of a product according to the user's awareness of things. In this paper, we propose a method of designing a smartphone interface based on an extended user's mental model, according to further research on user groups. This approach achieves personalized customization of the smartphone application interface and enhances the efficiency of application use.

  10. Derivation and application of an analytical rock displacement solution on rectangular cavern wall using the inverse mapping method

    PubMed Central

    Gao, Mingzhong; Qiu, Zhiqiang; Yin, Xiangang; Li, Shengwei; Liu, Qiang

    2017-01-01

    Rectangular caverns are increasingly used in underground engineering projects, and the failure mechanism of rectangular cavern wall rock is significantly affected by the cross-sectional shape and variations in wall stress distributions. However, the conventional computational method results in a lengthy computational process and multiple displacement solutions for the internal rectangular wall rock. This paper uses a Laurent series complex method to obtain a mapping function expression based on complex variable function theory and conformal transformation. This method is combined with the Schwarz-Christoffel method to calculate the mapping function coefficients and to determine the rectangular cavern wall rock deformation. Using the inverse mapping concept, the mapping relation between the polar coordinate system within plane ς and a corresponding unique plane coordinate point inside the cavern wall rock is discussed. The disadvantage of multiple solutions when mapping from the plane to the polar coordinate system is thereby addressed. This theoretical formula is used to calculate wall rock boundary deformation and displacement field nephograms inside the wall rock for a given cavern height and width. A comparison with ANSYS numerical software results suggests that the theoretical solution and numerical solution exhibit identical trends, demonstrating the method’s validity. This method greatly improves the computing accuracy and reduces the difficulty of solving for cavern boundary and internal wall rock displacements. The proposed method provides a theoretical guide for controlling cavern wall rock deformation failure. PMID:29155892

  11. Case-based learning facilitates critical thinking in undergraduate nutrition education: students describe the big picture.

    PubMed

    Harman, Tara; Bertrand, Brenda; Greer, Annette; Pettus, Arianna; Jennings, Jill; Wall-Bassett, Elizabeth; Babatunde, Oyinlola Toyin

    2015-03-01

    The vision of dietetics professions is based on interdependent education, credentialing, and practice. Case-based learning is a method of problem-based learning that is designed to heighten higher-order thinking. Case-based learning can assist students to connect education and specialized practice while developing professional skills for entry-level practice in nutrition and dietetics. This study examined student perspectives of their learning after immersion into case-based learning in nutrition courses. The theoretical frameworks of phenomenology and Bloom's Taxonomy of Educational Objectives triangulated the design of this qualitative study. Data were drawn from 426 written responses and three focus group discussions among 85 students from three upper-level undergraduate nutrition courses. Coding served to deconstruct the essence of respondent meaning given to case-based learning as a learning method. The analysis of the coding was the constructive stage that led to configuration of themes and theoretical practice pathways about student learning. Four leading themes emerged. Story or Scenario represents the ways that students described case-based learning, changes in student thought processes to accommodate case-based learning are illustrated in Method of Learning, higher cognitive learning that was achieved from case-based learning is represented in Problem Solving, and Future Practice details how students explained perceived professional competency gains from case-based learning. The skills that students acquired are consistent with those identified as essential to professional practice. In addition, the common concept of Big Picture was iterated throughout the themes and demonstrated that case-based learning prepares students for multifaceted problems that they are likely to encounter in professional practice. Copyright © 2015 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.

  12. Novel Maximum-based Timing Acquisition for Spread-Spectrum Communications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sibbetty, Taylor; Moradiz, Hussein; Farhang-Boroujeny, Behrouz

    This paper proposes and analyzes a new packet detection and timing acquisition method for spread spectrum systems. The proposed method provides an enhancement over the typical thresholding techniques that have been proposed for direct sequence spread spectrum (DS-SS). The effective implementation of thresholding methods typically requires accurate knowledge of the received signal-to-noise ratio (SNR), which is particularly difficult to estimate in spread spectrum systems. Instead, we propose a method which utilizes a consistency metric of the location of maximum samples at the output of a filter matched to the spread spectrum waveform to achieve acquisition, and does not require knowledge of the received SNR. Through theoretical study, we show that the proposed method offers a low probability of missed detection over a large range of SNR with a corresponding probability of false alarm far lower than other methods. Computer simulations that corroborate our theoretical results are also presented. Although our work here has been motivated by our previous study of a filter bank multicarrier spread-spectrum (FB-MC-SS) system, the proposed method is applicable to DS-SS systems as well.
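The consistency-of-argmax idea can be illustrated with a toy sketch: declare acquisition once the matched-filter peak location repeats across a few consecutive frames, with no SNR-dependent threshold involved. The frame layout, run length, and tolerance below are assumptions for illustration, not the paper's parameters.

```python
def acquire(frames, required=3, tol=1):
    """Declare acquisition when the index of the maximum sample is consistent
    (within `tol` samples) across `required` consecutive matched-filter output
    frames.  Returns the peak index on success, None otherwise."""
    run, last = 0, None
    for frame in frames:
        peak = max(range(len(frame)), key=frame.__getitem__)  # argmax
        if last is not None and abs(peak - last) <= tol:
            run += 1          # peak location agrees with the previous frame
        else:
            run = 1           # restart the consistency count
        if run >= required:
            return peak
        last = peak
    return None
```

Under noise alone the argmax wanders between frames and the run never completes, which is why no knowledge of the received SNR is needed to set a detection threshold.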

  13. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tuo, Rui; Wu, C. F. Jeff

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this line of study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
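The core of an L2-type calibration is to choose the model parameter minimizing the L2 distance between the computer model and a (pre-smoothed) estimate of the physical response. A toy grid-search stand-in for the paper's estimator, under assumed data and model:

```python
def l2_calibrate(xs, y_smooth, model, candidate_thetas):
    """Pick the parameter value minimizing the empirical L2 distance between
    the computer model and a smoothed estimate y_smooth of the physical
    response at design points xs.  Grid search is used here purely for
    illustration; the paper's estimator and its efficiency theory are not
    reproduced."""
    def l2_dist(theta):
        return sum((y_smooth[i] - model(x, theta)) ** 2
                   for i, x in enumerate(xs))
    return min(candidate_thetas, key=l2_dist)
```

With a toy linear model `model(x, t) = t*x` and noiseless data generated at `t = 2`, the grid search recovers `t = 2` from the candidate set.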

  15. An improved method for determination of refractive index of absorbing films: A simulation study

    NASA Astrophysics Data System (ADS)

    Özcan, Seçkin; Coşkun, Emre; Kocahan, Özlem; Özder, Serhat

    2017-02-01

    In this work, an improved version of the method presented by Gandhi is proposed for determining the refractive index of absorbing films. The method uses local maxima of consecutive interference orders in the transmittance spectrum. It is based on a minimization procedure that determines the interference order accurately using reasonable Cauchy parameters. The method was tested on a theoretically generated transmittance spectrum of an absorbing film, and the details of the minimization procedure are discussed.
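The role of consecutive interference maxima can be sketched from the standard condition 2nd = mλ: two adjacent maxima determine the order m (neglecting dispersion between them), after which the optical thickness follows. This is a simplified illustration; the paper's Cauchy-parameter minimization is not reproduced, and the wavelengths in the test are made-up numbers.

```python
def interference_order(lam_long, lam_short):
    """Order m at the longer-wavelength maximum of two adjacent transmittance
    maxima.  From 2*n*d = m*lambda with dispersion neglected between them:
        m * lam_long = (m + 1) * lam_short
        =>  m = lam_short / (lam_long - lam_short)."""
    return lam_short / (lam_long - lam_short)

def optical_thickness(n, lam_long, m):
    """Film thickness d from 2*n*d = m*lambda (same units as lam_long)."""
    return m * lam_long / (2.0 * n)
```

In practice m comes out non-integer because n varies with wavelength, which is exactly what the minimization over Cauchy parameters in the paper is meant to correct.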

  16. Modeling of the attenuation of stress waves in concrete based on the Rayleigh damping model using time-reversal and PZT transducers

    NASA Astrophysics Data System (ADS)

    Tian, Zhen; Huo, Linsheng; Gao, Weihang; Li, Hongnan; Song, Gangbing

    2017-10-01

    Wave-based concrete structural health monitoring has attracted much attention. A stress wave experiences significant attenuation in concrete; however, there is no unified method for predicting the attenuation coefficient of the stress wave. In this paper, a simple and effective absorption attenuation model of stress waves in concrete is developed based on the Rayleigh damping model, which indicates that the absorption attenuation coefficient of stress waves in concrete is directly proportional to the square of the stress wave frequency when the damping ratio is small. In order to verify the theoretical model, related experiments were carried out. During the experiments, a concrete beam was designed in which d33-mode piezoelectric smart aggregates were embedded to detect the propagation of stress waves. It is difficult to distinguish direct stress waves due to the complex propagation paths and the reflection and scattering of stress waves in concrete. Hence, as another innovation of this paper, a new method for computing the absorption attenuation coefficient based on the time-reversal method is developed. Due to the self-adaptive focusing properties of the time-reversal method, the time-reversed stress wave focuses and generates a peak value. The time-reversal method eliminates the adverse effects of multipaths, reflection, and scattering. The absorption attenuation coefficient is computed by analyzing the peak value changes of the time-reversal focused signal. Finally, the experimental results are found to be in good agreement with the theoretical model.
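The f² proportionality stated above follows from the stiffness-proportional term of the Rayleigh damping model (damping ratio ζ = βω/2) combined with the standard small-damping spatial attenuation relation α = ζω/c. A hedged sketch of that chain; β and the wave speed below are illustrative values, not the paper's measured parameters.

```python
import math

def attenuation_coeff(freq_hz, beta, wave_speed):
    """Spatial absorption attenuation coefficient (Np/m) under
    stiffness-proportional Rayleigh damping:
        zeta  = beta * omega / 2     (small damping ratio assumed)
        alpha = zeta * omega / c
    so alpha is proportional to the square of the frequency."""
    omega = 2.0 * math.pi * freq_hz
    zeta = beta * omega / 2.0
    return zeta * omega / wave_speed
```

Doubling the frequency quadruples the attenuation coefficient, which is the proportionality the abstract attributes to the small-damping Rayleigh model.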

  17. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on the conformal mapping (CM) method, for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The modulation function applied in the CM method transforms the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.

  18. Isoelectric Point, Electric Charge, and Nomenclature of the Acid-Base Residues of Proteins

    ERIC Educational Resources Information Center

    Maldonado, Andres A.; Ribeiro, Joao M.; Sillero, Antonio

    2010-01-01

    The main object of this work is to present the pedagogical usefulness of the theoretical methods, developed in this laboratory, for the determination of the isoelectric point (pI) and the net electric charge of proteins together with some comments on the naming of the acid-base residues of proteins. (Contains 8 figures and 4 tables.)
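Theoretical pI determination of the kind described here typically computes the net protein charge from Henderson-Hasselbalch fractions of the acid-base residues and locates the pH at which it vanishes. A minimal sketch, using one common textbook pKa set (published values differ slightly between sources, and the exact table used in the cited work is not reproduced):

```python
# Illustrative pKa values for the ionizable groups; NEGATIVE marks groups
# that carry charge -1 when deprotonated (the rest carry +1 when protonated).
PKA = {"Cterm": 3.65, "D": 3.90, "E": 4.07, "C": 8.18, "Y": 10.46,
       "H": 6.04, "K": 10.54, "R": 12.48, "Nterm": 8.20}
NEGATIVE = {"Cterm", "D", "E", "C", "Y"}

def net_charge(ph, counts):
    """Net charge at a given pH via Henderson-Hasselbalch fractions.
    counts: mapping of group name -> number of occurrences in the protein."""
    q = 0.0
    for group, n in counts.items():
        ratio = 10.0 ** (ph - PKA[group])     # [A-]/[HA]
        if group in NEGATIVE:
            q -= n * ratio / (1.0 + ratio)    # fraction deprotonated, -1 each
        else:
            q += n / (1.0 + ratio)            # fraction protonated, +1 each
    return q

def isoelectric_point(counts, lo=0.0, hi=14.0, tol=1e-6):
    """Bisection on pH: net charge decreases monotonically with pH, so the
    zero crossing is the isoelectric point."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if net_charge(mid, counts) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For a molecule with a single acidic and a single basic group, the computed pI reduces to the average of the two pKa values, a useful sanity check on the implementation.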

  19. A singlechip-computer-controlled conductivity meter based on conductance-frequency transformation

    NASA Astrophysics Data System (ADS)

    Chen, Wenxiang; Hong, Baocai

    2005-02-01

    A portable conductivity meter controlled by a single-chip computer was designed. The instrument uses the conductance-frequency transformation method to measure the conductivity of a solution, with simple and reliable circuitry. Another feature of the instrument is that temperature compensation is realised by changing the counting time of the timing counter. The theoretical basis and use of the temperature compensation are described.
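The measurement chain can be sketched as follows, under stated assumptions: the oscillator frequency is taken as proportional to the solution conductance, and the gate-time compensation uses a generic 2 %/°C linear temperature model. The proportionality constant, cell constant, and coefficient are illustrative, not the instrument's actual values.

```python
def conductivity_from_count(count, gate_time_s, cell_constant, k_freq):
    """Conductance-frequency transformation (hedged sketch): assuming the
    oscillator frequency is proportional to conductance, f = k_freq * G,
    counting pulses over a gate time yields G and hence conductivity."""
    freq = count / gate_time_s          # measured frequency, Hz
    conductance = freq / k_freq         # S, from the assumed f = k_freq * G
    return conductance * cell_constant  # conductivity for a given cell

def compensated_gate_time(base_gate_s, temp_c, alpha=0.02, ref_c=25.0):
    """Temperature compensation by adjusting the counting time: shorten the
    gate above the reference temperature so the accumulated count reflects
    the 25 degC value (illustrative 2 %/degC linear model)."""
    return base_gate_s / (1.0 + alpha * (temp_c - ref_c))
```

Scaling the counting window this way lets the firmware compensate in the time domain, with no analog trimming, which matches the abstract's description of the technique.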

  20. An Analysis of Categorical and Quantitative Methods for Planning Under Uncertainty

    PubMed Central

    Langlotz, Curtis P.; Shortliffe, Edward H.

    1988-01-01

    Decision theory and logical reasoning are both methods for representing and solving medical decision problems. We analyze the usefulness of these two approaches to medical therapy planning by establishing a simple correspondence between decision theory and non-monotonic logic, a formalization of categorical logical reasoning. The analysis indicates that categorical approaches to planning can be viewed as comprising two decision-theoretic concepts: probabilities (degrees of belief in planning hypotheses) and utilities (degrees of desirability of planning outcomes). We present and discuss examples of the following lessons from this decision-theoretic view of categorical (nonmonotonic) reasoning: (1) Decision theory and artificial intelligence techniques are intended to solve different components of the planning problem. (2) When considered in the context of planning under uncertainty, nonmonotonic logics do not retain the domain-independent characteristics of classical logical reasoning for planning under certainty. (3) Because certain nonmonotonic programming paradigms (e.g., frame-based inheritance, rule-based planning, protocol-based reminders) are inherently problem-specific, they may be inappropriate to employ in the solution of certain types of planning problems. We discuss how these conclusions affect several current medical informatics research issues, including the construction of “very large” medical knowledge bases.
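The two decision-theoretic ingredients named above, probabilities (degrees of belief) and utilities (degrees of desirability), combine in the standard expected-utility rule. A minimal sketch with made-up plan outcomes; the plan names and numbers are illustrative only.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one plan."""
    return sum(p * u for p, u in outcomes)

def best_plan(plans):
    """plans: dict mapping plan name -> outcome list.
    Choose the plan with maximum expected utility."""
    return max(plans, key=lambda name: expected_utility(plans[name]))
```

A categorical planner, by contrast, would commit to whichever plan its default rules select, which is the behavior the article reinterprets as implicit probability and utility assignments.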

  1. A study on the theoretical and practical accuracy of conoscopic holography-based surface measurements: toward image registration in minimally invasive surgery†

    PubMed Central

    Burgner, J.; Simpson, A. L.; Fitzpatrick, J. M.; Lathrop, R. A.; Herrell, S. D.; Miga, M. I.; Webster, R. J.

    2013-01-01

    Background: Registered medical images can assist with surgical navigation and enable image-guided therapy delivery. In soft tissues, surface-based registration is often used and can be facilitated by laser surface scanning. Tracked conoscopic holography (which provides distance measurements) has recently been proposed as a minimally invasive way to obtain surface scans. Moving this technique from concept to clinical use requires a rigorous accuracy evaluation, which is the purpose of our paper. Methods: We adapt recent non-homogeneous and anisotropic point-based registration results to provide a theoretical framework for predicting the accuracy of tracked distance measurement systems. Experiments are conducted on complex objects of defined geometry, an anthropomorphic kidney phantom, and a human cadaver kidney. Results: Experiments agree with model predictions, producing point RMS errors consistently < 1 mm, surface-based registration with mean closest-point error < 1 mm in the phantom, and an RMS target registration error of 0.8 mm in the human cadaver kidney. Conclusions: Tracked conoscopic holography is clinically viable; it enables minimally invasive surface scan accuracy comparable to current clinical methods that require open surgery. PMID:22761086

  2. A theoretical introduction to "combinatory SYBRGreen qPCR screening", a matrix-based approach for the detection of materials derived from genetically modified plants.

    PubMed

    Van den Bulcke, Marc; Lievens, Antoon; Barbau-Piednoir, Elodie; MbongoloMbella, Guillaume; Roosens, Nancy; Sneyers, Myriam; Casi, Amaya Leunda

    2010-03-01

    The detection of genetically modified (GM) materials in food and feed products is a complex multi-step analytical process involving screening, identification, and often quantification of the genetically modified organisms (GMO) present in a sample. "Combinatory qPCR SYBRGreen screening" (CoSYPS) is a matrix-based approach for determining the presence of GM plant materials in products. The CoSYPS decision-support system (DSS) interprets the analytical results of SYBRGREEN qPCR analysis based on four values: the C(t)- and T(m) values and the LOD and LOQ for each method. A theoretical explanation of the different concepts applied in CoSYPS analysis is given (GMO Universe, "Prime number tracing", matrix/combinatory approach) and documented using the RoundUp Ready soy GTS40-3-2 as an example. By applying a limited set of SYBRGREEN qPCR methods and through application of a newly developed "prime number"-based algorithm, the nature of subsets of corresponding GMO in a sample can be determined. Together, these analyses provide guidance for semi-quantitative estimation of GMO presence in a food and feed product.
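The "prime number tracing" idea can be illustrated with a toy sketch: assign a distinct prime to each screening element, so that the product of the primes of the detected elements encodes, and by unique factorization decodes, the subset present. The element names and prime assignments below are purely illustrative; the actual CoSYPS marker set and algorithm are described in the paper.

```python
# Hypothetical element-to-prime assignment for illustration only.
ELEMENT_PRIME = {"p35S": 2, "tNOS": 3, "pat": 5, "CP4-EPSPS": 7}

def combine(detected_elements):
    """Encode a collection of detected screening elements as a product of
    their assigned primes."""
    code = 1
    for element in detected_elements:
        code *= ELEMENT_PRIME[element]
    return code

def decode(code):
    """Recover the element subset by divisibility; the decoding is unique
    because integer factorization into primes is unique."""
    return {e for e, p in ELEMENT_PRIME.items() if code % p == 0}
```

A single integer can thus carry the full screening outcome of a sample, and any downstream step can recover exactly which elements were seen.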

  3. Theory and applications of refractive index-based optical microscopy to measure protein mass transfer in spherical adsorbent particles.

    PubMed

    Bankston, Theresa E; Stone, Melani C; Carta, Giorgio

    2008-04-25

    This work provides the theoretical foundation and a range of practical application examples of a recently developed method to measure protein mass transfer in adsorbent particles using refractive index-based optical microscopy. A ray-theoretic approach is first used to predict the behavior of light traveling through a particle during transient protein adsorption. When the protein concentration gradient in the particle is sharp, resulting in a steep refractive index gradient, the rays bend and intersect, thereby concentrating light in a sharp ring that marks the position of the adsorption front. This behavior is observed when mass transfer is dominated by pore diffusion and the adsorption isotherm is highly favorable. Applications to protein cation-exchange, hydrophobic interaction, and affinity adsorption are then considered using, as examples, the three commercial, agarose-based stationary phases SP-Sepharose-FF, Butyl Sepharose 4FF, and MabSelect. In all three cases, the method provides results that are consistent with measurements based on batch adsorption and previously published data confirming its utility for the determination of protein mass transfer kinetics under a broad range of practically relevant conditions.

  4. Brain, behaviour and mathematics: Are we using the right approaches? [review article]

    NASA Astrophysics Data System (ADS)

    Perez Velazquez, Jose Luis

    2005-12-01

    Mathematics is used in the biological sciences mostly as a quantifying tool, for it is the science of numbers after all. There is a long-standing interest in the application of mathematical methods and concepts to neuroscience in attempts to decipher brain activity. While there has been a very wide use of mathematical/physical methodologies, less effort has been made to formulate a comprehensive and integrative theory of brain function. This review concentrates on recent developments, uses and abuses of mathematical formalisms and techniques that are being applied in brain research, particularly the current trend of using dynamical system theory to unravel the global, collective dynamics of brain activity. It is worth emphasising that the theoretician-neuroscientist, eager to apply mathematical analysis to neuronal recordings, has to consider carefully some crucial anatomo-physiological assumptions that may not be as accurate as the specific methods require. On the other hand, the experimentalist neuro-physicist, with an inclination to implement mathematical thoughts in brain science, has to make an effort to comprehend the bases of the theoretical concepts that can be used as frameworks or as analysis methods for brain electrophysiological recordings, and to critically inspect the accuracy of the interpretations of the results on neurophysiological grounds. It is hoped that this brief overview of anatomical and physiological presumptions and their relation to theoretical paradigms will help clarify some particular points of interest in current trends in brain science, and may provoke further reflections on how certain or uncertain it is to conceptualise brain function based on these theoretical frameworks, if the physiological and experimental constraints are not as accurate as the models prescribe.

  5. A note on probabilistic models over strings: the linear algebra approach.

    PubMed

    Bouchard-Côté, Alexandre

    2013-12-01

    Probabilistic models over strings have played a key role in developing methods that take indels into consideration as phylogenetically informative events. There is an extensive literature on using automata and transducers on phylogenies to do inference on these probabilistic models, in which an important theoretical question is the complexity of computing the normalization of a class of string-valued graphical models. This question has been investigated using tools from combinatorics, dynamic programming, and graph theory, and has practical applications in Bayesian phylogenetics. In this work, we revisit this theoretical question from a different point of view, based on linear algebra. The main contribution is a set of results based on this linear algebra view that facilitate the analysis and design of inference algorithms on string-valued graphical models. As an illustration, we use this method to give a new elementary proof of a known result on the complexity of inference on the "TKF91" model, a well-known probabilistic model over strings. Compared to previous work, our proof method is easier to extend to other models, since it relies on a novel weak condition, triangular transducers, which is easy to establish in practice. The linear algebra view provides a concise way of describing transducer algorithms and their compositions, and opens the possibility of transferring fast linear algebra libraries (for example, based on GPUs), as well as low-rank matrix approximation methods, to string-valued inference problems.
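    The transfer-matrix identity underlying this linear algebra view can be sketched on a toy weighted automaton (all states and weights below are invented for illustration, not taken from the paper): the total weight over all strings is a geometric series of matrix powers, which sums in closed form when the spectral radius is below one.

```python
import numpy as np

# Toy two-state weighted automaton. With transition weights M (already summed
# over the alphabet), start vector and stop vector, the normalization over
# all strings is  Z = start^T (I - M)^{-1} stop,  valid when rho(M) < 1.
M = np.array([[0.2, 0.3],
              [0.1, 0.4]])
start = np.array([1.0, 0.0])
stop = np.array([0.3, 0.2])

assert np.abs(np.linalg.eigvals(M)).max() < 1.0  # geometric series converges

Z = start @ np.linalg.inv(np.eye(2) - M) @ stop
# Cross-check against an explicit truncation of  sum_k start^T M^k stop
Z_series = sum(start @ np.linalg.matrix_power(M, k) @ stop for k in range(200))
print(Z, Z_series)
```

The closed form is what makes fast linear algebra kernels applicable: the whole sum over an infinite string space reduces to one solve with (I - M).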

  6. Gradient descent for robust kernel-based regression

    NASA Astrophysics Data System (ADS)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on such a loss: the estimator needs to be globally optimal in the theoretical analysis, while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
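    A minimal sketch of the kind of algorithm analyzed, not the authors' code: the windowing function here is a Welsch-type exponential, and the kernel, data, step size and iteration budget are all assumptions made for illustration. Early stopping is realized as a fixed iteration count.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 1-D regression with a few gross outliers
X = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(60)
y[::15] += 3.0                         # inject outliers

K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * 0.05 ** 2))  # Gaussian kernel

sigma = 1.0                            # scale parameter of the robust loss
eta = 0.5 / 60                         # step size
alpha = np.zeros(60)
for _ in range(300):                   # fixed budget = early stopping
    r = K @ alpha - y
    # Welsch-type windowing: the gradient weight decays for large residuals,
    # so outliers barely influence the update
    w = np.exp(-r ** 2 / sigma ** 2)
    alpha -= eta * w * r

f_hat = K @ alpha
inlier = np.ones(60, dtype=bool)
inlier[::15] = False
print(np.mean(np.abs(f_hat[inlier] - y[inlier])))  # small on inliers
```

With σ chosen comparable to the inlier residual scale, the iterates fit the inliers while the outliers keep a residual near their injected offset, which is the robustness mechanism the paper quantifies.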

  7. The stability of the contact interface of cylindrical and spherical shock tubes

    NASA Astrophysics Data System (ADS)

    Crittenden, Paul E.; Balachandar, S.

    2018-06-01

    The stability of the contact interface for radial shock tubes is investigated as a model for explosive dispersal. The advection upstream splitting method with velocity and pressure diffusion (AUSM+-up) is used to solve for the radial base flow. To investigate the stability of the resulting contact interface, perturbed governing equations are derived assuming harmonic modes in the transverse directions. The perturbed harmonic flow is solved by assuming an initial disturbance and using a perturbed version of AUSM+-up derived in this paper. The intensity of the perturbation near the contact interface is computed and compared to theoretical results obtained by others. Despite the simplifying assumptions of the theoretical analysis, very good agreement is observed. Not only can the magnitude of the instability be predicted during the initial expansion, but also remarkably the agreement between the numerical and theoretical results can be maintained through the collision between the secondary shock and the contact interface. Since the theoretical results only depend upon the time evolution of the base flow, the stability of various modes could be quickly investigated without explicitly solving a system of partial differential equations for the perturbed flow.

  8. Analysis of collapse in flattening a micro-grooved heat pipe by lateral compression

    NASA Astrophysics Data System (ADS)

    Li, Yong; He, Ting; Zeng, Zhixin

    2012-11-01

    The collapse of thin-walled micro-grooved heat pipes is a common phenomenon in the tube flattening process, and it seriously degrades the heat transfer performance and appearance of the heat pipe. At present there is no good method for solving this problem. A new method, heating the heat pipe during flattening, is proposed to eliminate the collapse. The effectiveness of the proposed method is investigated through a theoretical model, a finite element (FE) analysis, and experiments. First, a theoretical model based on a deformation model of six plastic hinges and the Antoine equation of the working fluid is established to analyze the collapse of the thin walls at different temperatures. Then, FE simulations and experiments on the flattening process at different temperatures are carried out and compared with the theoretical model. Finally, the FE model is used to study the loads on the plates at different temperatures and heights of flattened heat pipes. The results of the theoretical model conform to those of the FE simulation and experiments in the flattened zone. The collapse occurs at room temperature; as the temperature increases, the collapse decreases and finally disappears at approximately 130 °C for various heights of flattened heat pipes. The loads on the moving plate increase as the temperature increases. Thus, a reasonable temperature for eliminating the collapse while keeping the load acceptable is approximately 130 °C. The advantage of the proposed method is that the collapse is reduced or eliminated by means of the thermal deformation characteristics of the heat pipe itself instead of by external support. As a result, the heat transfer efficiency of the heat pipe is raised.
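    As a rough illustration of why heating suppresses the collapse, the Antoine equation gives the internal vapor pressure of the working fluid that supports the tube wall during flattening. The sketch below uses standard Antoine constants for water, a common heat pipe working fluid; the paper's fluid and constants may differ.

```python
# Antoine equation:  log10(P[mmHg]) = A - B / (C + T[degC])
# Constants below are the widely tabulated values for water; they are an
# illustrative assumption, not taken from the paper.
A, B, C = 8.07131, 1730.63, 233.426

def vapor_pressure_kpa(t_celsius):
    """Saturation vapor pressure of water via the Antoine equation."""
    p_mmhg = 10 ** (A - B / (C + t_celsius))
    return p_mmhg * 0.133322  # mmHg -> kPa

for t in (20, 100, 130):
    print(t, "degC ->", round(vapor_pressure_kpa(t), 1), "kPa")
```

At room temperature the internal pressure is only a few kPa, so the thin wall collapses inward; near 130 °C the vapor pressure rises well above atmospheric, which is consistent with the paper's finding that the collapse disappears around that temperature.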

  9. Solidification kinetics of a Cu-Zr alloy: ground-based and microgravity experiments

    NASA Astrophysics Data System (ADS)

    Galenko, P. K.; Hanke, R.; Paul, P.; Koch, S.; Rettenmayr, M.; Gegner, J.; Herlach, D. M.; Dreier, W.; Kharanzhevski, E. V.

    2017-04-01

    Experimental and theoretical results obtained in the MULTIPHAS project (ESA-European Space Agency and DLR-German Aerospace Center) are critically discussed regarding the solidification kinetics of congruently melting and glass-forming Cu50Zr50 alloy samples. The samples are investigated during solidification using a containerless technique in the Electromagnetic Levitation Facility [1]. Applying elaborated methodologies for ground-based and microgravity experimental investigations [2], the kinetics of primary dendritic solidification is quantitatively evaluated. The Electromagnetic Levitator in microgravity (parabolic flights and on board the International Space Station) and the Electrostatic Levitator on the ground are employed. The solidification kinetics is determined using a high-speed camera and applying two evaluation methods: "Frame by Frame" (FFM) and "First Frame - Last Frame" (FLM). In the theoretical interpretation of the solidification experiments, special attention is given to the behavior of the cluster structure in Cu50Zr50 samples with increasing undercooling. Experimental results on solidification kinetics are interpreted using a theoretical model of diffusion-controlled dendrite growth.

  10. Theoretical studies on a new furazan compound bis[4-nitramino-furazanyl-3-azoxy]azofurazan (ADNAAF).

    PubMed

    Zheng, Chunmei; Chu, Yuting; Xu, Liwen; Wang, Fengyun; Lei, Wu; Xia, Mingzhu; Gong, Xuedong

    2016-06-01

    Bis[4-nitraminofurazanyl-3-azoxy]azofurazan (ADNAAF), synthesized in our previous work [1], contains four furazan units connected by azo and azoxy linkages. For further research, several theoretical characteristics were studied with the density functional theory (DFT) method. The optimized structures and the energy gaps between the HOMO and LUMO were studied at the B3LYP/6-311++G** level. The isodesmic reaction method was used for estimating the enthalpy of formation. The detonation performances were estimated with the Kamlet-Jacobs equations based on the predicted density and enthalpy of formation in the solid state. ADAAF was also calculated by the same method for comparison. It was found that the nitramino group of ADNAAF elongates the adjacent C-N bonds more than the amino group of ADAAF does. The gas-phase and solid-phase enthalpies of formation of ADNAAF are larger than those of ADAAF. The detonation performances of ADNAAF are better than those of ADAAF and RDX, and similar to those of HMX. The trigger bonds of ADNAAF are the N-N bonds in the nitramino groups, and the nitramino group is more active than the amino group (-NH2).
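    The Kamlet-Jacobs estimates mentioned above take a predicted loading density and heat of detonation and return detonation velocity and pressure. The sketch below uses illustrative RDX-like inputs, not the paper's computed values for ADNAAF:

```python
import math

# Kamlet-Jacobs equations:
#   D [km/s] = 1.01 * sqrt(phi) * (1 + 1.30 * rho0),  phi = N * sqrt(Mbar * Q)
#   P [GPa]  = 1.558 * rho0**2 * phi
# Inputs (assumed, RDX-like): N = moles of gaseous products per gram,
# Mbar = mean molar mass of those gases [g/mol], Q = heat of detonation
# [cal/g], rho0 = loading density [g/cm^3].
N, Mbar, Q, rho0 = 0.0338, 27.2, 1500.0, 1.80

phi = N * math.sqrt(Mbar * Q)
D = 1.01 * math.sqrt(phi) * (1 + 1.30 * rho0)   # detonation velocity
P = 1.558 * rho0 ** 2 * phi                     # detonation pressure
print(round(D, 2), "km/s,", round(P, 1), "GPa")
```

These RDX-like inputs reproduce the familiar ~8.8 km/s / ~35 GPa figures, which is the baseline against which ADNAAF is compared in the abstract.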

  11. Theoretical model of an optothermal microactuator directly driven by laser beams

    NASA Astrophysics Data System (ADS)

    Han, Xu; Zhang, Haijun; Xu, Rui; Wang, Shuying; Qin, Chun

    2015-07-01

    This paper proposes a novel method of optothermal microactuation based on single and dual laser beams (spots). The theoretical model of the optothermal temperature distribution of an expansion arm is established and simulated, indicating that the maximum temperature of the arm irradiated by dual laser spots, at the same laser power level, is much lower than that irradiated by one single spot, and thus the risk of burning out and damaging the optothermal microactuator (OTMA) can be effectively avoided. To verify the presented method, a 750 μm long OTMA with a 100 μm wide expansion arm is designed and microfabricated, and single/dual laser beams with a wavelength of 650 nm are adopted to carry out experiments. The experimental results showed that the optothermal deflection of the OTMA under the irradiation of dual laser spots is larger than that under the irradiation of a single spot with the same power, which is in accordance with theoretical prediction. This method of optothermal microactuation may expand the practical applications of microactuators, which serve as critical units in micromechanical devices and micro-opto-electro-mechanical systems (MOEMS).

  12. Vibrationally resolved photoelectron spectroscopy of electronic excited states of DNA bases: application to the ã state of thymine cation.

    PubMed

    Hochlaf, Majdi; Pan, Yi; Lau, Kai-Chung; Majdi, Youssef; Poisson, Lionel; Garcia, Gustavo A; Nahon, Laurent; Al Mogren, Muneerah Mogren; Schwell, Martin

    2015-02-19

    For fully understanding light-molecule interaction dynamics at short time scales, recent theoretical and experimental studies proved the importance of accurate characterizations not only of the ground state (D0) but also of the electronic excited states (e.g., D1) of molecules. While ground state investigations are currently straightforward, those of electronic excited states are not. Here, we characterized the à electronic state of the ionic thymine (T(+)) DNA base using explicitly correlated coupled cluster ab initio methods and state-of-the-art synchrotron-based electron/ion coincidence techniques. The experimental spectrum is composed of rich and long vibrational progressions corresponding to the population of the low-frequency modes of T(+)(Ã). This work challenges numerous previous studies carried out on DNA bases using common synchrotron- and VUV-based photoelectron spectroscopies. We hence provide a powerful theoretical and experimental framework for studying the electronic structure of ionized DNA bases that could be generalized to other medium-sized biologically relevant systems.

  13. Supercoherent states and physical systems

    NASA Technical Reports Server (NTRS)

    Fatyga, B. W.; Kostelecky, V. Alan; Nieto, Michael Martin; Truax, D. Rodney

    1992-01-01

    A method is developed for obtaining coherent states of a system admitting a supersymmetry. These states are called supercoherent states. The presented approach is based on an extension to supergroups of the usual group-theoretic approach. The example of the supersymmetric harmonic oscillator is discussed, thereby illustrating some of the attractive features of the method. Supercoherent states of an electron moving in a constant magnetic field are also described.

  14. Methods for estimating private forest ownership statistics: revised methods for the USDA Forest Service's National Woodland Owner Survey

    Treesearch

    Brenton J. ​Dickinson; Brett J. Butler

    2013-01-01

    The USDA Forest Service's National Woodland Owner Survey (NWOS) is conducted to better understand the attitudes and behaviors of private forest ownerships, which control more than half of US forestland. Inferences about the populations of interest should be based on theoretically sound estimation procedures. A recent review of the procedures disclosed an error in...

  15. Determination of hydroxide and carbonate contents of alkaline electrolytes containing zinc

    NASA Technical Reports Server (NTRS)

    Otterson, D. A.

    1975-01-01

    A method to prevent zinc interference with the titration of OH- and CO3-2 ions in alkaline electrolytes with standard acid is presented. The Ba-EDTA complex was tested and shown to prevent zinc interference with acid-base titrations without introducing other types of interference. Theoretical considerations indicate that this method can be used to prevent interference by other metals.

  16. Complex dark-field contrast and its retrieval in x-ray phase contrast imaging implemented with Talbot interferometry.

    PubMed

    Yang, Yi; Tang, Xiangyang

    2014-10-01

    Under the existing theoretical framework of x-ray phase contrast imaging methods implemented with Talbot interferometry, the dark-field contrast refers to the reduction in interference fringe visibility due to small-angle x-ray scattering by the subpixel microstructures of an object to be imaged. This study investigates how an object's subpixel microstructures can also affect the phase of the intensity oscillations. Instead of assuming that the object's subpixel microstructures are distributed randomly in space, the authors' theoretical derivation starts by assuming that the object's attenuation projection and phase shift vary at a characteristic size that is not smaller than the period of the analyzer grating G₂, while its subpixel microstructures vary at a characteristic length dc. Based on the paraxial Fresnel-Kirchhoff theory, analytic formulae characterizing the zeroth- and first-order Fourier coefficients of the x-ray irradiance recorded at each detector cell are derived. Then the concept of complex dark-field contrast is introduced to quantify the influence of the object's microstructures on both the interference fringe visibility and the phase of the intensity oscillations. A method based on the phase-attenuation duality that holds for soft tissues and high x-ray energies is proposed to retrieve the imaginary part of the complex dark-field contrast for imaging. Through a computer simulation study with a specially designed numerical phantom, the authors evaluate and validate the derived analytic formulae and the proposed retrieval method. Both the theoretical analysis and the computer simulation study show that the effect of an object's subpixel microstructures on the x-ray phase contrast imaging method implemented with Talbot interferometry can be fully characterized by a complex dark-field contrast, whose imaginary part quantifies the influence of the subpixel microstructures on the phase of the intensity oscillations. Furthermore, at relatively high energies it can be retrieved for imaging of soft tissues with a method based on the phase-attenuation duality. The analytic formulae derived in this work to characterize the complex dark-field contrast are of significance and may initiate more activities in the research and development of x-ray differential phase contrast imaging for extensive biomedical applications.
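    The zeroth- and first-order Fourier coefficients referred to above are, in practice, extracted from a phase-stepping scan at each detector cell: the mean carries absorption, the magnitude of the first harmonic carries fringe visibility (reduced by dark-field scattering), and its angle carries the phase of the intensity oscillation. A minimal sketch with synthetic numbers:

```python
import numpy as np

# Phase-stepping curve over one grating period, sampled at N steps:
#   I(k) = a0 * (1 + v * cos(2*pi*k/N + phi0))
# All values below are synthetic, chosen only to exercise the extraction.
N = 8
k = np.arange(N)
a0_true, v_true, phi_true = 100.0, 0.35, 0.6
I = a0_true * (1 + v_true * np.cos(2 * np.pi * k / N + phi_true))

c = np.fft.fft(I)
a0 = c[0].real / N              # zeroth-order coefficient (absorption signal)
a1 = 2 * np.abs(c[1]) / N       # first-order amplitude
phase = np.angle(c[1])          # phase of the intensity oscillation
visibility = a1 / a0            # fringe visibility
print(a0, visibility, phase)
```

The FFT recovers the injected parameters exactly because the cosine completes a whole number of periods over the N steps; real phase-stepping data add noise on top of this model.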

  17. MRF energy minimization and beyond via dual decomposition.

    PubMed

    Komodakis, Nikos; Paragios, Nikos; Tziritas, Georgios

    2011-03-01

    This paper introduces a new rigorous theoretical framework to address discrete MRF-based optimization in computer vision. The framework exploits the powerful technique of dual decomposition. It is based on a projected subgradient scheme that attempts to solve an MRF optimization problem by first decomposing it into a set of appropriately chosen subproblems, and then combining their solutions in a principled way. In order to determine the limits of this method, we analyze the conditions that these subproblems have to satisfy and demonstrate the extreme generality and flexibility of such an approach. We thus show that by appropriately choosing what subproblems to use, one can design novel and very powerful MRF optimization algorithms. For instance, in this manner we are able to derive algorithms that: 1) generalize and extend state-of-the-art message-passing methods, 2) optimize very tight LP relaxations of MRF optimization, and 3) take full advantage of the special structure that may exist in particular MRFs, allowing the use of efficient inference techniques such as, e.g., graph-cut-based methods. Theoretical analysis of the bounds related to the different algorithms derived from our framework, together with experimental results and comparisons using synthetic and real data for a variety of tasks in computer vision, demonstrates the potential of our approach.

  18. Developing Emotion-Based Case Formulations: A Research-Informed Method.

    PubMed

    Pascual-Leone, Antonio; Kramer, Ueli

    2017-01-01

    New research-informed methods for case conceptualization that cut across traditional therapy approaches are increasingly popular. This paper presents a trans-theoretical approach to case formulation based on research observations of emotion. The sequential model of emotional processing (Pascual-Leone & Greenberg, 2007) is a process research model that provides concrete markers for therapists to observe the emerging emotional development of their clients. We illustrate how clinicians can use this model to track change; it provides a 'clinical map' by which therapists may orient themselves in-session and plan treatment interventions. Emotional processing offers a trans-theoretical framework for therapists who wish to conduct emotion-based case formulations. First, we present criteria for why this research model translates well into practice. Second, two contrasting case studies are presented to demonstrate the method. The model bridges research with practice by using client emotion as an axis of integration. Key Practitioner Message: Process research on emotion can offer a template for therapists to make case formulations while using a range of treatment approaches. The sequential model of emotional processing provides a 'process map' of concrete markers for therapists to (1) observe the emerging emotional development of their clients, and (2) develop a treatment plan. Copyright © 2016 John Wiley & Sons, Ltd.

  19. Prediction of the air-water partition coefficient for perfluoro-2-methyl-3-pentanone using high-level Gaussian-4 composite theoretical methods.

    PubMed

    Rayne, Sierra; Forest, Kaya

    2014-09-19

    The air-water partition coefficient (Kaw) of perfluoro-2-methyl-3-pentanone (PFMP) was estimated using the G4MP2/G4 levels of theory and the SMD solvation model. A suite of 31 fluorinated compounds was employed to calibrate the theoretical method. Excellent agreement between experimental and directly calculated Kaw values was obtained for the calibration compounds. The PCM solvation model was found to yield unsatisfactory Kaw estimates for fluorinated compounds at both levels of theory. The HENRYWIN Kaw estimation program also exhibited poor Kaw prediction performance on the training set. Based on the resulting regression equation for the calibration compounds, the G4MP2-SMD method constrained the estimated Kaw of PFMP to the range 5-8 × 10⁻⁶ M atm⁻¹. The magnitude of this Kaw range indicates almost all PFMP released into the atmosphere or near the land-atmosphere interface will reside in the gas phase, with only minor quantities dissolved in the aqueous phase as the parent compound and/or its hydrate/hydrate conjugate base. Following discharge into aqueous systems not at equilibrium with the atmosphere, significant quantities of PFMP will be present as the dissolved parent compound and/or its hydrate/hydrate conjugate base.
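    A back-of-the-envelope consequence of a Kaw of this magnitude can be checked with a Henry's law mass balance in a closed air-water system; the volumes below are arbitrary illustrative choices, not from the paper.

```python
# Henry's law mass balance: with Kaw in M/atm, the aqueous concentration at
# partial pressure p is C_aq = Kaw * p, so per unit pressure the dissolved
# moles are Kaw * V_water and the gaseous moles are V_air / (R * T).
R = 0.08206          # L atm / (mol K)
T = 298.15           # K
KAW = 8e-6           # M/atm, upper end of the paper's estimated range

def aqueous_fraction(v_water_l, v_air_l):
    n_aq = KAW * v_water_l          # dissolved moles per atm
    n_gas = v_air_l / (R * T)       # gaseous moles per atm (ideal gas)
    return n_aq / (n_aq + n_gas)

# Even with equal air and water volumes, essentially all PFMP is gaseous
print(aqueous_fraction(1.0, 1.0))
```

The dissolved fraction comes out around 10⁻⁴, consistent with the abstract's conclusion that PFMP at equilibrium resides almost entirely in the gas phase.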

  20. Analysis of the tunable asymmetric fiber F-P cavity for fiber strain sensor edge-filter demodulation

    NASA Astrophysics Data System (ADS)

    Chen, Haotao; Liang, Youcheng

    2014-12-01

    An asymmetric fiber Fabry-Pérot (F-P) interferometric cavity with good linearity and a wide dynamic range was successfully designed based on optical thin-film characteristic matrix theory; by adjusting the materials of two different thin metallic layers, the asymmetric fiber F-P interferometric cavity was fabricated by depositing multilayer thin films on the optical fiber's end face. The asymmetric F-P cavity has extensive potential applications. In this paper, the demodulation method for the wavelength shift of a fiber Bragg grating (FBG) sensor based on the F-P cavity is demonstrated, and a theoretical formula is obtained. The experimental results coincide well with the computational results obtained from the theoretical model.
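    The thin-film characteristic matrix formalism the design rests on can be sketched for a single layer at normal incidence; a quarter-wave layer with n1 = sqrt(n0·ns) is a standard sanity check, since it should be antireflective. All refractive indices and thicknesses below are illustrative assumptions, not the paper's metallic multilayer.

```python
import numpy as np

def reflectance(n0, n1, ns, d, lam):
    """Single-layer characteristic matrix, normal incidence, lossless film."""
    delta = 2 * np.pi * n1 * d / lam                # phase thickness
    M = np.array([[np.cos(delta), 1j * np.sin(delta) / n1],
                  [1j * n1 * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1.0, ns])                  # substrate admittance ns
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

lam = 1550e-9                 # illustrative telecom wavelength
n0, ns = 1.0, 1.45            # air / silica-like fiber end face
n1 = np.sqrt(n0 * ns)
print(reflectance(n0, n1, ns, lam / (4 * n1), lam))   # quarter-wave layer
```

Stacking one such matrix per layer (including complex indices for the metallic films) gives the multilayer response from which the cavity's asymmetric reflectance spectrum is designed.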

  1. Computer-aided molecular modeling techniques for predicting the stability of drug cyclodextrin inclusion complexes in aqueous solutions

    NASA Astrophysics Data System (ADS)

    Faucci, Maria Teresa; Melani, Fabrizio; Mura, Paola

    2002-06-01

    Molecular modeling was used to investigate factors influencing complex formation between cyclodextrins and guest molecules and to predict complex stability through a theoretical model based on the search for a correlation between experimental stability constants (Ks) and theoretical parameters describing complexation (docking energy, host-guest contact surfaces, intermolecular interaction fields) calculated from complex structures at a conformational energy minimum, obtained through stochastic methods based on molecular dynamics simulations. Naproxen, ibuprofen, ketoprofen and ibuproxam were used as model drug molecules. Multiple regression analysis allowed identification of the significant factors for complex stability. A mathematical model (r = 0.897) related log Ks to the complex docking energy and the lipophilic molecular fields of cyclodextrin and drug.
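    The regression step can be sketched as an ordinary least-squares fit of log Ks against complexation descriptors; the descriptor values and coefficients below are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12
# Hypothetical descriptors for n drug-cyclodextrin complexes
E_dock = rng.uniform(-40.0, -10.0, n)   # docking energy (assumed units)
L_cd = rng.uniform(0.0, 5.0, n)         # lipophilic field, cyclodextrin side
L_drug = rng.uniform(0.0, 5.0, n)       # lipophilic field, drug side
# Synthetic "experimental" log Ks generated from an assumed linear law + noise
logKs = -0.05 * E_dock + 0.3 * L_cd + 0.2 * L_drug \
        + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), E_dock, L_cd, L_drug])
beta, *_ = np.linalg.lstsq(X, logKs, rcond=None)
pred = X @ beta
r = np.corrcoef(pred, logKs)[0, 1]      # multiple correlation coefficient
print(beta.round(3), round(r, 3))
```

The paper's r = 0.897 is the analogue of the correlation computed here, obtained with its real descriptors and measured stability constants.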

  2. Theoretical model of polarization diffractive elements for light beam conversion holographically formed in PDLCs

    NASA Astrophysics Data System (ADS)

    Sharangovich, Sergey N.; Semkin, Artem O.

    2017-12-01

    In this work, a theoretical model of the holographic formation of polarization diffractive optical elements for the transformation of Gaussian light beams into Bessel-like ones in polymer-dispersed liquid crystals (PDLC) is developed. The model is based on solving the equations of photo-induced Freedericksz transition processes for polarization diffractive element formation by orthogonally polarized light beams with inhomogeneous amplitude and phase profiles. The results of numerical simulation of the changes in the material's dielectric tensor during the structure's formation are presented for various recording-beam polarization states. Based on these results, the ability to form diffractive optical elements for light beam transformation by polarization holography methods is shown.

  3. New method of contour image processing based on the formalism of spiral light beams

    NASA Astrophysics Data System (ADS)

    Volostnikov, Vladimir G.; Kishkin, S. A.; Kotova, S. P.

    2013-07-01

    The possibility of applying the mathematical formalism of spiral light beams to the problems of contour image recognition is theoretically studied. The advantages and disadvantages of the proposed approach are evaluated; the results of numerical modelling are presented.

  4. Irrigation scheduling and controlling crop water use efficiency with Infrared Thermometry

    USDA-ARS?s Scientific Manuscript database

    Scientific methods for irrigation scheduling include weather-, soil- and plant-based techniques. Infrared thermometers can be used as a non-invasive means to monitor canopy temperature and better manage irrigation scheduling. This presentation will discuss the theoretical basis for monitoring crop can...

  5. Network Learning for Educational Change. Professional Learning

    ERIC Educational Resources Information Center

    Veugelers, Wiel, Ed.; O'Hair, Mary John, Ed.

    2005-01-01

    School-university networks are becoming an important method to enhance educational renewal and student achievement. Networks go beyond tensions of top-down versus bottom-up, school development and professional development of individuals, theory and practice, and formal and informal organizational structures. The theoretical base of networking…

  6. THE USE OF ELECTRONIC DATA PROCESSING IN CORRECTIONS AND LAW ENFORCEMENT,

    DTIC Science & Technology

    Reviews the reasons, methods, accomplishments and goals of the use of electronic data processing in the fields of corrections and law enforcement. Suggests...statistical and case history data in building a sounder theoretical base in the field of law enforcement. (Author)

  7. Development of Monitoring and Diagnostic Methods for Robots Used In Remediation of Waste Sites - Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, M.

    2000-04-01

    This project is the first evaluation of model-based diagnostics for hydraulic robot systems. A greater understanding of fault detection for hydraulic robots has been gained, and a new theoretical fault detection model has been developed and evaluated.

  8. Accurate image-charge method by the use of the residue theorem for core-shell dielectric sphere

    NASA Astrophysics Data System (ADS)

    Fu, Jing; Xu, Zhenli

    2018-02-01

    An accurate image-charge method (ICM) is developed for ionic interactions outside a core-shell structured dielectric sphere. Core-shell particles have wide applications whose theoretical investigation requires efficient methods for the Green's function used to calculate pairwise interactions of ions. The ICM is based on an inverse Mellin transform of the coefficients of the spherical harmonic series of the Green's function, such that the polarization charge due to the dielectric boundaries is represented by a series of image point charges and an image line charge. The residue theorem is used to accurately calculate the density of the line charge. Numerical results show that the ICM is promising for fast evaluation of the Green's function, and thus it is useful for theoretical investigations of core-shell particles. This routine is also applicable to other problems with spherical dielectric interfaces, such as multilayered media and Debye-Hückel equations.
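    The simplest instance of the image-charge idea that the paper generalizes is the classical Kelvin image for a grounded conducting sphere (the dielectric core-shell case additionally requires the image line charge evaluated via the residue theorem). A quick numerical check that the single image cancels the potential on the sphere:

```python
import numpy as np

# Kelvin image: a point charge q at distance d > a from the center of a
# grounded conducting sphere of radius a is mirrored by an image charge
# q' = -q*a/d at distance a^2/d, making the potential vanish on the surface.
# (Gaussian units, prefactors dropped; geometry chosen for illustration.)
a, d, q = 1.0, 2.5, 1.0
q_img, d_img = -q * a / d, a * a / d

theta = np.linspace(0.0, np.pi, 181)
surface = np.stack([a * np.sin(theta), a * np.cos(theta)], axis=1)
src, img = np.array([0.0, d]), np.array([0.0, d_img])

potential = (q / np.linalg.norm(surface - src, axis=1)
             + q_img / np.linalg.norm(surface - img, axis=1))
print(np.abs(potential).max())  # ~0 on the whole sphere surface
```

For dielectric (rather than grounded) boundaries the reflection coefficient depends on the spherical harmonic order, which is why a single point image no longer suffices and a line-charge density appears.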

  9. Ground and excited state dipole moments of some flavones using solvatochromic methods: An experimental and theoretical study

    NASA Astrophysics Data System (ADS)

    Kumar, Sanjay; Kapoor, Vinita; Bansal, Ritu; Tandon, H. C.

    2018-03-01

    The absorption and fluorescence characteristics of the biologically active flavone derivatives 6-hydroxy-7,3‧,4‧,5‧-tetramethoxyflavone (6HTMF) and 7-hydroxy-6,3‧,4‧,5‧-tetramethoxyflavone (7HTMF) are studied at room temperature (298 K) in solvents of different polarities. Excited-state dipole moments of these compounds have been determined using the solvatochromic shift method based on the microscopic solvent polarity parameter ETN. Dipole moments in the excited state were found to be higher than those in the ground state for both molecules. Reasonable agreement has been observed between experimental and theoretically calculated dipole moments (using the AM1 method). The slightly larger ground- and excited-state dipole moments of 7HTMF compared with 6HTMF are consistent with the predicted electrostatic potential maps. Our results would be helpful in understanding the use of these compounds in tunable dye lasers, optical brighteners and biosensors.
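    The solvatochromic shift method amounts to regressing the Stokes shift against ETN and converting the slope via Reichardt's relation, ν̃a − ν̃f = 11307.6 (Δμ/Δμ_B)² (a_B/a)³ ETN + const, with Δμ_B = 9 D and a_B = 6.2 Å for the betaine reference. All numbers below, including the Onsager cavity radius, are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

# Synthetic Stokes shifts (cm^-1) in six solvents of increasing polarity,
# generated from an assumed linear law plus scatter
ETN = np.array([0.099, 0.207, 0.355, 0.546, 0.654, 0.762])
stokes = 2500.0 * ETN + 1800.0 + np.array([12.0, -8.0, 5.0, -10.0, 7.0, -6.0])

m, b = np.polyfit(ETN, stokes, 1)       # slope of Stokes shift vs ETN
a_cav = 4.2                              # assumed Onsager cavity radius, angstrom
dmu = 9.0 * np.sqrt(m / 11307.6 * (a_cav / 6.2) ** 3)  # |mu_e - mu_g| in debye
print(round(m, 1), round(dmu, 2))
```

The extracted slope feeds directly into the dipole moment change; the cavity radius enters as a cube, so its estimate dominates the uncertainty of Δμ.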

  10. Quantum entanglement of identical particles by standard information-theoretic notions

    PubMed Central

    Lo Franco, Rosario; Compagno, Giuseppe

    2016-01-01

    Quantum entanglement of identical particles is essential in quantum information theory. Yet, its correct determination remains an open issue hindering the general understanding and exploitation of many-particle systems. Operator-based methods have been developed that attempt to overcome the issue. Here we introduce a state-based method which, as second quantization, does not label identical particles and presents conceptual and technical advances compared to the previous ones. It establishes the quantitative role played by arbitrary wave function overlaps, local measurements and particle nature (bosons or fermions) in assessing entanglement by notions commonly used in quantum information theory for distinguishable particles, like partial trace. Our approach furthermore shows that bringing identical particles into the same spatial location functions as an entangling gate, providing fundamental theoretical support to recent experimental observations with ultracold atoms. These results pave the way to set and interpret experiments for utilizing quantum correlations in realistic scenarios where overlap of particles can count, as in Bose-Einstein condensates, quantum dots and biological molecular aggregates. PMID:26857475

  11. Dispersion of speckle suppression efficiency for binary DOE structures: spectral domain and coherent matrix approaches.

    PubMed

    Lapchuk, Anatoliy; Prygun, Olexandr; Fu, Minglei; Le, Zichun; Xiong, Qiyuan; Kryuchyn, Andriy

    2017-06-26

    We present the first general theoretical description of speckle suppression efficiency based on an active diffractive optical element (DOE). The approach is based on spectral analysis of the diffracted beams and a coherence matrix. Analytical formulae are obtained for the dispersion of speckle suppression efficiency using different DOE structures and different DOE activation methods. We show that a one-sided 2D DOE structure has a smaller speckle suppression range than a two-sided 1D DOE structure. Both DOE structures have a sufficient speckle suppression range to suppress low-order speckles in the entire visible range, but only the two-sided 1D DOE can suppress higher-order speckles. We also show that a linearly shifted 2D DOE in a laser projector with a large numerical aperture has higher effective speckle suppression efficiency than methods using switched or step-wise shifted DOE structures. The generalized theoretical models elucidate the mechanism and practical realization of speckle suppression.

  12. A Combined Theoretical and Experimental Study for Silver Electroplating

    PubMed Central

    Liu, Anmin; Ren, Xuefeng; An, Maozhong; Zhang, Jinqiu; Yang, Peixia; Wang, Bo; Zhu, Yongming; Wang, Chong

    2014-01-01

    A novel method combining theoretical and experimental studies of environmentally friendly silver electroplating is introduced. Quantum chemical calculations and molecular dynamics (MD) simulations were employed to predict the behaviour and function of the complexing agents. Electronic properties, orbital information, and single-point energies of 5,5-dimethylhydantoin (DMH) and nicotinic acid (NA), as well as their silver(I) complexes, were obtained from quantum chemical calculations based on density functional theory (DFT). Adsorption behaviours of the agents on copper and silver surfaces were investigated using MD simulations. Based on the quantum chemical calculations and MD simulations, we concluded that DMH and NA are promising complexing agents for silver electroplating. The experimental results, including electrochemical measurements and silver electroplating, further confirmed this prediction. This efficient and versatile method thus opens a new window for studying and designing complexing agents for metal electroplating in general and should substantially advance this area of research. PMID:24452389

  13. Design study of beam position monitors for measuring second-order moments of charged particle beams

    NASA Astrophysics Data System (ADS)

    Yanagida, Kenichi; Suzuki, Shinsuke; Hanaki, Hirofumi

    2012-01-01

    This paper presents a theoretical investigation of the multipole moments of charged particle beams in two-dimensional polar coordinates. The theoretical description of multipole moments is based on a single-particle system that is extended to a multiparticle system by superposition, i.e., by summing over all single-particle results. This paper also presents an analysis and design method for a beam position monitor (BPM) that detects higher-order (multipole) moments of a charged particle beam. To calculate the electric fields, a numerical analysis based on the finite difference method was developed and carried out. The validity of the numerical analysis was proven by comparing the numerical results with analytical results for a BPM with a circular cross section. Six-electrode BPMs with circular and elliptical cross sections were designed for the SPring-8 linac. The results of the numerical calculations show that the second-order moment can be detected for beam sizes ≥ 420 μm (circular) and ≥ 550 μm (elliptical).
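    The superposition idea above can be sketched in a few lines. In polar coordinates, each particle contributes r^n·exp(i·n·θ) to the n-th moment, which for z = x + i·y is just z**n, and the beam moment is obtained by summing over all single-particle results. The averaging normalization here is an illustrative assumption, not necessarily the paper's exact convention.

```python
def multipole_moments(particles, nmax=2):
    """Multipole moments of a beam built by superposition of single-particle
    results. For z = x + i*y, the contribution r**n * exp(i*n*theta) equals
    z**n, so the n-th beam moment is the average of z**n over all particles.
    The normalization is an illustrative assumption."""
    zs = [complex(x, y) for x, y in particles]
    return {n: sum(z ** n for z in zs) / len(zs) for n in range(1, nmax + 1)}

# A centered beam that is wider in x than in y has zero dipole (first-order)
# moment but a nonzero, real quadrupole (second-order) moment.
beam = [(1.0, 0.0), (-1.0, 0.0), (0.0, 0.5), (0.0, -0.5)]
m = multipole_moments(beam)
```

The second-order moment is what a six-electrode BPM of the kind described above is designed to resolve; it vanishes for a round, centered beam and grows with the beam's asymmetry.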

  14. A collocation-shooting method for solving fractional boundary value problems

    NASA Astrophysics Data System (ADS)

    Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.

    2010-12-01

    In this paper, we discuss the numerical solution of a special class of fractional boundary value problems of order 2. The solution method is based on coupling collocation and spline analysis with a shooting method. The existence and uniqueness of the exact solution for this class are proven theoretically. Two examples involving the Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.
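    The shooting idea at the core of the method can be illustrated on an integer-order analog (the fractional-derivative and collocation/spline machinery is beyond a short sketch): guess the unknown initial slope, integrate the resulting initial value problem, and adjust the guess until the far boundary condition is satisfied. The slope bracket and solver choices below are illustrative assumptions.

```python
def shoot(f, a, b, ya, yb, n=100, tol=1e-10):
    """Solve y'' = f(x, y) on [a, b] with y(a) = ya, y(b) = yb by shooting:
    guess the initial slope s, integrate the IVP with classical RK4, and
    refine s by bisection until y(b) matches yb.  A sketch of the shooting
    idea only; the paper couples it with collocation/spline analysis for
    the fractional case."""
    def integrate(s):
        h = (b - a) / n
        x, y, yp = a, ya, s
        for _ in range(n):
            k1y, k1p = yp, f(x, y)
            k2y, k2p = yp + h/2*k1p, f(x + h/2, y + h/2*k1y)
            k3y, k3p = yp + h/2*k2p, f(x + h/2, y + h/2*k2y)
            k4y, k4p = yp + h*k3p, f(x + h, y + h*k3y)
            y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
            yp += h/6 * (k1p + 2*k2p + 2*k3p + k4p)
            x += h
        return y
    lo, hi = -100.0, 100.0  # assumed bracket for the unknown slope
    if (integrate(lo) - yb) * (integrate(hi) - yb) > 0:
        raise ValueError("slope bracket does not contain a solution")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (integrate(lo) - yb) * (integrate(mid) - yb) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# y'' = 6x with y(0) = 0, y(1) = 1 has the exact solution y = x**3,
# whose initial slope y'(0) is 0; shooting should recover it.
s = shoot(lambda x, y: 6 * x, 0.0, 1.0, 0.0, 1.0)
```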

  15. Experimental and theoretical studies on tautomeric structures of a newly synthesized 2,2‧(hydrazine-1,2-diylidenebis(propan-1-yl-1-ylidene))diphenol

    NASA Astrophysics Data System (ADS)

    Karakurt, Tuncay; Cukurovali, Alaaddin; Subasi, Nuriye Tuna; Onaran, Abdurrahman; Ece, Abdulilah; Eker, Sıtkı; Kani, Ibrahim

    2018-02-01

    In the present study, a single crystal of a Schiff base, 2,2‧(hydrazine-1,2-diylidenebis(propan-1-yl-1-ylidene))diphenol, was synthesized. The structure of the synthesized crystal was confirmed by 1H and 13C NMR spectroscopy and X-ray diffraction analysis. Experimental and theoretical studies were carried out on two tautomeric structures, as the title compound can exist in two tautomeric forms, phenol-imine and keto-amine. Theoretical calculations were performed to support the experimental results: the geometric parameters of the compound were optimized by the density functional theory (DFT) method using Gaussian 09, and the Quantum Espresso (QE) package was used for periodic boundary condition (PBC) studies. Furthermore, the compound was also tested for in vitro antifungal activity against the plant pathogens Sclerotinia sclerotiorum, Alternaria solani, Fusarium oxysporum f. sp. lycopersici and Monilinia fructigena. Promising inhibition profiles were observed, especially towards A. solani. Finally, molecular docking studies and a post-docking procedure based on Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) were carried out to gain insight into the compound's binding interactions with the potential target. Although theoretical calculations showed that the phenol-imine form is more stable, the keto-amine form was predicted to have better binding affinity, which was attributed to the loss of rotational entropy in the phenol-imine form upon binding. The results obtained from both experimental and computational methods might serve as a potential lead in the development of novel antifungal agents.

  16. Theoretical effects of substituting butter with margarine on risk of cardiovascular disease

    PubMed Central

    Liu, Qing; Rossouw, Jacques E.; Roberts, Mary B.; Liu, Simin; Johnson, Karen C.; Shikany, James M.; Manson, JoAnn E.; Tinker, Lesley F.; Eaton, Charles B.

    2017-01-01

    Background Several recent papers have called into question the deleterious effects of high-animal-fat diets, owing to mixed results from epidemiologic studies and the lack of clinical trial evidence in meta-analyses of dietary intervention trials. We were interested in examining the theoretical effects of substituting plant-based fats from different types of margarine for animal-based fat from butter on the risk of atherosclerosis-related cardiovascular disease (CVD). Methods We prospectively studied 71,410 women, aged 50–79 years, and evaluated their risk for clinical myocardial infarction (MI), total coronary heart disease (CHD), ischemic stroke and atherosclerosis-related CVD over an average of 13.2 years of follow-up. Butter and margarine intakes were obtained at baseline and Year 3 by means of a validated food frequency questionnaire. Cox proportional hazards regression using a cumulative average diet method was used to estimate the theoretical effect of substituting 1 teaspoon/day of three types of margarine for the same amount of butter. Results Substituting butter or stick margarine with tub margarine was associated with a lower risk of MI (HRs=0.95 and 0.91). Subgroup analyses, which evaluated these substitutions among participants with a single source of spreadable fat, showed stronger associations for MI (HRs=0.92 and 0.87). Outcomes of total CHD, ischemic stroke, and atherosclerosis-related CVD showed wide confidence intervals but the same trends as the MI results. Conclusions This theoretical dietary substitution analysis suggests that substituting butter and stick margarine with tub margarine when spreadable fats are eaten may be associated with a reduced risk of myocardial infarction. PMID:27648593

  17. Linear Transforms for Fourier Data on the Sphere: Application to High Angular Resolution Diffusion MRI of the Brain

    PubMed Central

    Haldar, Justin P.; Leahy, Richard M.

    2013-01-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. PMID:23353603
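    The baseline FRT that this family generalizes has a well-known closed form in the spherical harmonic basis mentioned above: by the Funk-Hecke theorem, each degree-l coefficient is scaled by 2π·P_l(0), where P_l is the Legendre polynomial. A minimal sketch, with the dictionary layout of coefficients being an illustrative assumption:

```python
import math

def legendre_at_zero(l):
    """P_l(0): zero for odd l; for even l, (-1)**(l//2) * (l-1)!!/l!!,
    computed here by a stable product."""
    if l % 2 == 1:
        return 0.0
    val = 1.0
    for k in range(2, l + 1, 2):
        val *= (k - 1) / k
    return ((-1) ** (l // 2)) * val

def funk_radon_sh(coeffs):
    """Apply the Funk-Radon Transform to a signal expressed in spherical
    harmonics: each degree-l coefficient is scaled by 2*pi*P_l(0)
    (Funk-Hecke theorem).  `coeffs` maps (l, m) -> coefficient; this
    layout is an assumption for the sketch."""
    return {(l, m): 2.0 * math.pi * legendre_at_zero(l) * c
            for (l, m), c in coeffs.items()}

# Even degrees survive with alternating sign; odd degrees are annihilated,
# which is one reason generalizations such as the FRACT are attractive.
frt = funk_radon_sh({(0, 0): 1.0, (2, 0): 1.0, (3, 1): 1.0})
```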

  18. Polarization ratio property and material classification method in passive millimeter wave polarimetric imaging

    NASA Astrophysics Data System (ADS)

    Cheng, Yayun; Qi, Bo; Liu, Siyuan; Hu, Fei; Gui, Liangqi; Peng, Xiaohui

    2016-10-01

    Polarimetric measurements can provide additional information compared to unpolarized ones. In this paper, the linear polarization ratio (LPR) is introduced as a feature discriminator. The LPR properties of several materials are investigated using Fresnel theory. The theoretical results show that LPR is sensitive to the material type (metal or dielectric). A linear polarization ratio-based (LPR-based) method is then presented to distinguish between metal and dielectric materials. To apply this method in practice, the optimal range of incident angles has been determined. Typical outdoor experiments, covering various objects such as an aluminum plate, grass, concrete, soil and wood, have been conducted to validate the presented classification method.
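    A hedged sketch of how such a Fresnel-theory discriminator could work, assuming LPR is taken as the ratio of horizontal (s) to vertical (p) power reflectivity; the paper's exact definition and permittivity values may differ, and the ε values below are illustrative assumptions.

```python
import cmath
import math

def fresnel_reflectivities(eps, theta_deg):
    """Fresnel power reflectivities (R_h, R_v) for h (s) and v (p)
    polarization at a flat interface with complex relative permittivity
    `eps`, at incidence angle `theta_deg` from the normal."""
    th = math.radians(theta_deg)
    c, s2 = math.cos(th), math.sin(th) ** 2
    root = cmath.sqrt(eps - s2)
    r_h = (c - root) / (c + root)
    r_v = (eps * c - root) / (eps * c + root)
    return abs(r_h) ** 2, abs(r_v) ** 2

def lpr(eps, theta_deg):
    """Illustrative linear polarization ratio, taken here as R_h / R_v."""
    r_h, r_v = fresnel_reflectivities(eps, theta_deg)
    return r_h / r_v

# A metal (|eps| very large) reflects both polarizations almost totally,
# so its LPR stays near 1; a lossy dielectric (soil-like eps assumed here)
# shows a strongly polarization-dependent reflectivity at oblique incidence.
lpr_metal = lpr(complex(1e6, 1e6), 50.0)
lpr_soil = lpr(complex(3.0, 0.1), 50.0)
```

This is the qualitative behavior the abstract exploits: thresholding LPR separates metal-like from dielectric-like surfaces, provided the incidence angle is kept in a range where the two classes are well separated.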

  19. Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7

    NASA Technical Reports Server (NTRS)

    Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip

    2007-01-01

    This chapter describes the role of quantum approximate methods in the understanding of complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.

  20. Learning and dynamics in social systems. Comment on "Collective learning modeling based on the kinetic theory of active particles" by D. Burini et al.

    NASA Astrophysics Data System (ADS)

    Dolfin, Marina

    2016-03-01

    The interesting novelty of the paper by Burini et al. [1] is that the authors present a survey of, and a new approach to, collective learning based on a suitable development of methods of kinetic theory [2] and theoretical tools of evolutionary game theory [3]. Methods of statistical dynamics and kinetic theory lead naturally to stochastic and collective dynamics. Indeed, the authors propose the use of games where the state of the interacting entities is delivered by probability distributions.

  1. Polarization-insensitive techniques for optical signal processing

    NASA Astrophysics Data System (ADS)

    Salem, Reza

    2006-12-01

    This thesis investigates polarization-insensitive methods for optical signal processing. Two signal processing techniques are studied: clock recovery based on two-photon absorption in silicon and demultiplexing based on cross-phase modulation in highly nonlinear fiber. The clock recovery system is tested at an 80 Gb/s data rate for both back-to-back and transmission experiments. The demultiplexer is tested at a 160 Gb/s data rate in a back-to-back experiment. We experimentally demonstrate methods for eliminating polarization dependence in both systems. Our experimental results are confirmed by theoretical and numerical analysis.

  2. Theoretical foundation, methods, and criteria for calibrating human vibration models using frequency response functions

    PubMed Central

    Dong, Ren G.; Welcome, Daniel E.; McDowell, Thomas W.; Wu, John Z.

    2015-01-01

    While simulations of the measured biodynamic responses of the whole human body or body segments to vibration are conventionally interpreted as summaries of biodynamic measurements, and the resulting models are considered quantitative, this study looked at these simulations from a different angle: model calibration. The specific aims of this study are to review and clarify the theoretical basis for model calibration, to help formulate the criteria for calibration validation, and to help appropriately select and apply calibration methods. In addition to established vibration theory, a novel theorem of mechanical vibration is also used to enhance the understanding of the mathematical and physical principles of the calibration. Based on this enhanced understanding, a set of criteria was proposed and used to systematically examine the calibration methods. Besides theoretical analyses, a numerical testing method is also used in the examination. This study identified the basic requirements for each calibration method to obtain a unique calibration solution. This study also confirmed that the solution becomes more robust if more than sufficient calibration references are provided. Practically, however, as more references are used, more inconsistencies can arise among the measured data for representing the biodynamic properties. To help account for the relative reliabilities of the references, a baseline weighting scheme is proposed. The analyses suggest that the best choice of calibration method depends on the modeling purpose, the model structure, and the availability and reliability of representative reference data. PMID:26740726

  3. A hierarchy of effective teaching and learning to acquire competence in evidence-based medicine

    PubMed Central

    Khan, Khalid S; Coomarasamy, Arri

    2006-01-01

    Background A variety of methods exists for teaching and learning evidence-based medicine (EBM). However, there is much debate about the effectiveness of various EBM teaching and learning activities, resulting in a lack of consensus as to what methods constitute the best educational practice. There is a need for a clear hierarchy of educational activities to effectively impart and acquire competence in EBM skills. This paper develops such a hierarchy based on current empirical and theoretical evidence. Discussion EBM requires that health care decisions be based on the best available valid and relevant evidence. To achieve this, teachers delivering EBM curricula need to inculcate amongst learners the skills to gain, assess, apply, integrate and communicate new knowledge in clinical decision-making. Empirical and theoretical evidence suggests that there is a hierarchy of teaching and learning activities in terms of their educational effectiveness: Level 1, interactive and clinically integrated activities; Level 2(a), interactive but classroom based activities; Level 2(b), didactic but clinically integrated activities; and Level 3, didactic, classroom or standalone teaching. Summary All health care professionals need to understand and implement the principles of EBM to improve care of their patients. Interactive and clinically integrated teaching and learning activities provide the basis for the best educational practice in this field. PMID:17173690

  4. Estimating evapotranspiration and drought stress with ground-based thermal remote sensing in agriculture: a review.

    PubMed

    Maes, W H; Steppe, K

    2012-08-01

    As evaporation of water is an energy-demanding process, increasing evapotranspiration rates decrease the surface temperature (Ts) of leaves and plants. Based on this principle, ground-based thermal remote sensing has become one of the most important methods for estimating evapotranspiration and drought stress and for irrigation. This paper reviews its application in agriculture. The review consists of four parts. First, the basics of thermal remote sensing are briefly reviewed. Second, the theoretical relation between Ts and the sensible and latent heat flux is elaborated. A modelling approach was used to evaluate the effect of weather conditions and leaf or vegetation properties on leaf and canopy temperature. Ts increases with increasing air temperature and incoming radiation and with decreasing wind speed and relative humidity. At the leaf level, the leaf angle and leaf dimension have a large influence on Ts; at the vegetation level, Ts is strongly impacted by the roughness length; hence, by canopy height and structure. In the third part, an overview of the different ground-based thermal remote sensing techniques and approaches used to estimate drought stress or evapotranspiration in agriculture is provided. Among other methods, stress time, stress degree day, crop water stress index (CWSI), and stomatal conductance index are discussed. The theoretical models are used to evaluate the performance and sensitivity of the most important methods, corroborating the literature data. In the fourth and final part, a critical view on the future and remaining challenges of ground-based thermal remote sensing is presented.
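    The crop water stress index (CWSI) mentioned above is conventionally computed by normalizing the canopy temperature between a wet (fully transpiring) and a dry (non-transpiring) baseline; a minimal sketch of that normalization, with the example temperatures being illustrative assumptions:

```python
def cwsi(t_canopy, t_wet, t_dry):
    """Crop Water Stress Index: 0 for a canopy at the wet (fully
    transpiring, cool) baseline, 1 at the dry (non-transpiring) baseline.
    Conventionally CWSI = (Tc - Twet) / (Tdry - Twet)."""
    if t_dry <= t_wet:
        raise ValueError("dry baseline must exceed wet baseline")
    return (t_canopy - t_wet) / (t_dry - t_wet)

# A canopy at 28 C between a 24 C wet baseline and a 34 C dry baseline
# (assumed values) indicates mild water stress.
index = cwsi(28.0, 24.0, 34.0)
```

In practice the baselines themselves depend on the weather variables discussed above (air temperature, radiation, wind speed, humidity), which is exactly why the review evaluates the sensitivity of CWSI with theoretical models.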

  5. Acidity and alkalinity in mine drainage: Theoretical considerations

    USGS Publications Warehouse

    Kirby, Carl S.; Cravotta, Charles A.

    2004-01-01

    Acidity, net acidity, and net alkalinity are widely used parameters for the characterization of mine drainage, but these terms are not well defined and are often misunderstood. Incorrect interpretation of acidity, alkalinity, and derivative terms can lead to inadequate treatment design or poor regulatory decisions. We briefly explain derivations of theoretical expressions of three types of alkalinities (caustic, phenolphthalein, and total) and acidities (mineral, CO2, and total). Theoretically defined total alkalinity is closely analogous to measured alkalinity and presents few practical interpretation problems. Theoretically defined “CO2-acidity” is closely related to most standard titration methods used for mine drainage with an endpoint pH of 8.3, but it presents numerous interpretation problems, and it is unfortunately named because CO2 is intentionally driven off during titration of mine-drainage samples. Using the proton condition/mass-action approach and employing graphs for visualization, we explore the concept of principal components and how to assign acidity contributions to solution species, including aqueous complexes, commonly found in mine drainage. We define a comprehensive theoretical definition of acidity in mine drainage on the basis of aqueous speciation at the sample pH and the capacity of these species to undergo hydrolysis to pH 8.3. This definition gives the computed acidity in milligrams per liter (mg L-1) as CaCO3, based on pH and the analytical concentrations of dissolved FeIII, FeII, Mn, and Al (in mg L-1): Acidity_computed = 50·(10^(3-pH) + 3·C_FeIII/55.8 + 2·C_FeII/55.8 + 2·C_Mn/54.9 + 3·C_Al/27.0). This expression underestimates contributions from HSO4- and H+ but overestimates the acidity due to Fe3+; these errors tend to approximately cancel each other. We demonstrate that “net alkalinity” is a valid mathematical construction based on theoretical definitions of alkalinity and acidity.
We demonstrate that, for most mine-drainage solutions, a useful net alkalinity value can be derived from: 1) alkalinity and acidity values based on aqueous speciation, 2) measured alkalinity - computed acidity, or 3) taking the negative of the value obtained in a standard method “hot peroxide” acidity titration, provided that labs report negative values. We recommend the third approach; i.e., Net alkalinity = - Hot Acidity.
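    The computed-acidity expression quoted above translates directly into code; inputs are pH and dissolved metal concentrations in mg/L, and the result is in mg/L as CaCO3:

```python
def computed_acidity(ph, c_fe3, c_fe2, c_mn, c_al):
    """Computed acidity in mg/L as CaCO3 from pH and the analytical
    concentrations (mg/L) of dissolved Fe(III), Fe(II), Mn, and Al,
    following the expression quoted in the abstract:
    50 * (10**(3 - pH) + 3*C_FeIII/55.8 + 2*C_FeII/55.8
          + 2*C_Mn/54.9 + 3*C_Al/27.0)."""
    return 50.0 * (10 ** (3.0 - ph)
                   + 3.0 * c_fe3 / 55.8
                   + 2.0 * c_fe2 / 55.8
                   + 2.0 * c_mn / 54.9
                   + 3.0 * c_al / 27.0)

# Metal-free water at pH 3 carries 50 mg/L as CaCO3 of proton acidity;
# adding one formula weight (55.8 mg/L) of Fe(III) adds 150 mg/L more.
acid_protons_only = computed_acidity(3.0, 0.0, 0.0, 0.0, 0.0)
acid_with_iron = computed_acidity(3.0, 55.8, 0.0, 0.0, 0.0)
```

Net alkalinity then follows the paper's recommendation as measured alkalinity minus this computed acidity (or as the negative of the hot-peroxide acidity titration).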

  6. A theoretical approach to artificial intelligence systems in medicine.

    PubMed

    Spyropoulos, B; Papagounos, G

    1995-10-01

    The various theoretical models of disease, the nosology accepted by the medical community and the prevalent logic of diagnosis determine both the medical approach and the development of the relevant technology, including the structure and function of the A.I. systems involved. A.I. systems in medicine, in addition to the specific parameters which enable them to reach a diagnostic and/or therapeutic proposal, implicitly entail theoretical assumptions and socio-cultural attitudes which prejudice the orientation and the final outcome of the procedure. The various models (causal, probabilistic, case-based, etc.) are critically examined and their ethical and methodological limitations are brought to light. The lack of a self-consistent theoretical framework in medicine, the multi-faceted character of the human organism and the non-explicit nature of the theoretical assumptions involved in A.I. systems restrict them to the role of decision-supporting "instruments" rather than decision-making "devices". This supporting role and, especially, the important function which A.I. systems should have in the structure, the methods and the content of medical education underscore the need for further research into both the theoretical aspects and the actual development of such systems.

  7. Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui

    2018-06-01

    In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted, structured grids. The parameters of the hull modification function are the design variables, and a three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under constraints including the displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and a theoretical basis for designing green ships.

  8. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE PAGES

    Steyer, Andrew J.; Van Vleck, Erik S.

    2018-04-13

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.
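    The switching idea can be illustrated on the scalar nonautonomous linear test equation y' = λ(t)·y used in the analysis, with explicit and implicit Euler standing in for the Runge–Kutta pair. This is a toy sketch: the simple |h·λ(t)| indicator and threshold below are illustrative assumptions, not the paper's spectral-theory-based indicator.

```python
def solve_switching(lam, y0, t0, t1, h, stiff_threshold=1.0):
    """Integrate the scalar nonautonomous linear test equation
    y' = lam(t) * y with a one-step method that switches between
    explicit Euler (nonstiff steps) and implicit Euler (stiff steps)
    based on the time-dependent stiffness indicator |h * lam(t)|."""
    t, y = t0, y0
    steps = []
    while t < t1 - 1e-12:
        lam_t = lam(t)
        if abs(h * lam_t) <= stiff_threshold:
            # Nonstiff: cheap explicit Euler step.
            y = y + h * lam_t * y
            steps.append("explicit")
        else:
            # Stiff: implicit Euler, solvable in closed form for a
            # linear test equation.
            y = y / (1.0 - h * lam(t + h))
            steps.append("implicit")
        t += h
    return y, steps

# lam = -50 with h = 0.1 gives |h*lam| = 5: every step is flagged stiff,
# and implicit Euler decays monotonically where explicit Euler would
# oscillate and blow up.
y_end, used = solve_switching(lambda t: -50.0, 1.0, 0.0, 1.0, 0.1)
```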

  9. A Lyapunov and Sacker–Sell spectral stability theory for one-step methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steyer, Andrew J.; Van Vleck, Erik S.

    Approximation theory for Lyapunov and Sacker–Sell spectra based upon QR techniques is used to analyze the stability of a one-step method solving a time-dependent (nonautonomous) linear ordinary differential equation (ODE) initial value problem in terms of the local error. Integral separation is used to characterize the conditioning of stability spectra calculations. The stability of the numerical solution by a one-step method of a nonautonomous linear ODE using real-valued, scalar, nonautonomous linear test equations is justified. This analysis is used to approximate exponential growth/decay rates on finite and infinite time intervals and establish global error bounds for one-step methods approximating uniformly, exponentially stable trajectories of nonautonomous and nonlinear ODEs. A time-dependent stiffness indicator and a one-step method that switches between explicit and implicit Runge–Kutta methods based upon time-dependent stiffness are developed based upon the theoretical results.

  10. Machine vision application in animal trajectory tracking.

    PubMed

    Koniar, Dušan; Hargaš, Libor; Loncová, Zuzana; Duchoň, František; Beňo, Peter

    2016-04-01

    This article was motivated by the doctors' demand for technical support in research on pathologies of the gastrointestinal tract [10], based on machine vision tools. The proposed solution is intended as a less expensive alternative to existing RF (radio frequency) methods. The objective of the whole experiment was to evaluate the amount of animal motion as a function of the degree of pathology (gastric ulcer). In the theoretical part of the article, several methods of animal trajectory tracking are presented: two differential methods based on background subtraction, thresholding methods based on global and local thresholds, and color matching against a chosen template containing a searched spectrum of colors. The methods were tested offline on five video samples. Each sample showed a moving guinea pig confined in a cage under various lighting conditions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
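    A minimal sketch of the background-subtraction approach described above: threshold the absolute difference between the current frame and a static background, then take the centroid of the changed pixels as the animal's position. The frame representation (lists of rows of grayscale ints) and the threshold value are illustrative assumptions.

```python
def track_by_subtraction(background, frame, threshold=30):
    """Locate a moving animal as the centroid of pixels whose absolute
    difference from a static background frame exceeds `threshold`.
    Frames are grayscale images given as lists of rows of ints (0-255).
    Returns an (x, y) centroid, or None if no motion is detected."""
    xs, ys, count = 0, 0, 0
    for row_idx, (bg_row, fr_row) in enumerate(zip(background, frame)):
        for col_idx, (bg, fr) in enumerate(zip(bg_row, fr_row)):
            if abs(fr - bg) > threshold:
                xs += col_idx
                ys += row_idx
                count += 1
    if count == 0:
        return None
    return xs / count, ys / count

# A bright 2x2 blob added to a dark background is located at its center;
# tracking the centroid frame by frame yields the animal's trajectory.
bg = [[0] * 6 for _ in range(6)]
fr = [row[:] for row in bg]
for r in (2, 3):
    for c in (1, 2):
        fr[r][c] = 200
centroid = track_by_subtraction(bg, fr)
```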

  11. A new Schiff base compound N,N'-(2,2-dimethylpropane)-bis(dihydroxylacetophenone): synthesis, experimental and theoretical studies on its crystal structure, FTIR, UV-visible, 1H NMR and 13C NMR spectra.

    PubMed

    Saheb, Vahid; Sheikhshoaie, Iran

    2011-10-15

    The Schiff base compound N,N'-(2,2-dimethylpropane)-bis(dihydroxylacetophenone) (NDHA) is synthesized through the condensation of 2-hydroxylacetophenone and 2,2-dimethyl-1,3-diaminopropane in methanol at ambient temperature. The yellow crystalline precipitate is used for X-ray single-crystal determination and for measuring Fourier transform infrared (FTIR), UV-visible, (1)H NMR and (13)C NMR spectra. Electronic structure calculations at the B3LYP, PBEPBE and PW91PW91 levels of theory are performed to optimize the molecular geometry and to calculate the FTIR, (1)H NMR and (13)C NMR spectra of the compound. The time-dependent density functional theory (TD-DFT) method is used to calculate the UV-visible spectrum of NDHA. Vibrational frequencies are determined experimentally and compared with those obtained theoretically. Vibrational assignments and analysis of the fundamental modes of the compound are also performed. All theoretical methods reproduce the structure of the compound well. The (1)H NMR and (13)C NMR chemical shifts calculated by all DFT methods are consistent with the experimental data. However, the NMR shielding tensors computed at the B3LYP/6-31+G(d,p) level of theory are in better agreement with the experimental (1)H NMR and (13)C NMR spectra. The electronic absorption spectrum calculated at the B3LYP/6-31+G(d,p) level using the TD-DFT method is in accordance with the observed UV-visible spectrum of NDHA. In addition, some quantum descriptors of the molecule are calculated, conformational analysis is performed, and the results are compared with the crystallographic data. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. The theoretical study of passive and active optical devices via planewave based transfer (scattering) matrix method and other approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhuo, Ye

    2011-01-01

    In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, and organic light emitting diodes (OLEDs). Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and other numerical methods used to study them (Chapters 1 and 2). Next, the numerical method itself is investigated in detail and developed further to deal with more complex optical systems. In Chapter 3, TMM is extended in curvilinear coordinates to study curved nanoribbon waveguides. The problem of a curved structure is transformed into an equivalent one of a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In Chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals as an alternative to the planewave basis. The second part of the thesis focuses on the design of optical devices. First, two examples of TMM applications are given: the design of metal grating structures as replacements for ITO to enhance optical absorption in OPV cells (Chapter 6), and the design of the same structure to enhance the light extraction of OLEDs (Chapter 7). Next, two design examples using the ray tracing method are given: applying a microlens array to enhance the light extraction of OLEDs (Chapter 5) and an all-angle, wide-wavelength design of a solar concentrator (Chapter 8). In summary, this dissertation has extended TMM, making it capable of treating complex optical systems. Several optical designs by TMM and the ray tracing method are also given as a full complement of this work.

  13. Development of Innovative Business Model of Modern Manager's Qualities

    ERIC Educational Resources Information Center

    Yashkova, Elena V.; Sineva, Nadezda L.; Shkunova, Angelika A.; Bystrova, Natalia V.; Smirnova, Zhanna V.; Kolosova, Tatyana V.

    2016-01-01

    The paper defines a complex of manager's qualities based on theoretical and methodological analysis and synthesis methods, available national and world literature, research papers and publications. The complex approach methodology was used, which provides an innovative view of the development of modern manager's qualities. The methodological…

  14. Promoting Technology-Assisted Active Learning in Computer Science Education

    ERIC Educational Resources Information Center

    Gao, Jinzhu; Hargis, Jace

    2010-01-01

    This paper describes specific active learning strategies for teaching computer science, integrating both instructional technologies and non-technology-based strategies shown to be effective in the literature. The theoretical learning components addressed include an intentional method to help students build metacognitive abilities, as well as…

  15. An Experiment in Teaching Human Ethology

    ERIC Educational Resources Information Center

    Barnett, S. A.

    1977-01-01

    Students of ethology are often confused about the validity of arguments based on comparisons of animal and human behavior. The problem can be dealt with purely theoretically or through observational or experimental studies of human behavior. Some results of using these two methods are described and discussed. (Author/MA)

  16. The cosmological lithium problem revisited

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertulani, C. A., E-mail: carlos.bertulani@tamuc.edu; Department of Physics and Astronomy, Texas A&M University, College Station, TX 75429; Mukhamedzhanov, A. M., E-mail: akram@comp.tamu.edu

    After a brief review of the cosmological lithium problem, we report a few recent attempts to find theoretical solutions by our group at Texas A&M University (Commerce & College Station). We will discuss our studies on the theoretical description of electron screening, the possible existence of parallel universes of dark matter, and the use of non-extensive statistics during the Big Bang nucleosynthesis epoch. Last but not least, we discuss possible solutions within the nuclear physics realm. The impact of recent measurements of relevant nuclear reaction cross sections for Big Bang nucleosynthesis based on indirect methods is also assessed. Although our attempts may not be able to explain the observed discrepancies between theory and observations, they suggest theoretical developments that can also be useful for stellar nucleosynthesis.

  17. Viscous/potential flow about multi-element two-dimensional and infinite-span swept wings: Theory and experiment

    NASA Technical Reports Server (NTRS)

    Olson, L. E.; Dvorak, F. A.

    1975-01-01

    The viscous subsonic flow past two-dimensional and infinite-span swept multi-component airfoils is studied theoretically and experimentally. The computerized analysis is based on iteratively coupled boundary-layer and potential-flow analyses. The method, which is restricted to flows with only slight separation, gives the surface pressure distribution, chordwise and spanwise boundary-layer characteristics, lift, drag, and pitching moment for airfoil configurations with up to four elements. Merging confluent boundary layers are treated. Theoretical predictions are compared with an exact theoretical potential-flow solution and with experimental measurements made in the Ames 40- by 80-Foot Wind Tunnel for both two-dimensional and infinite-span swept-wing configurations. Section lift characteristics are accurately predicted for zero and moderate sweep angles, where flow-separation effects are negligible.

  18. Elastic Cherenkov effects in transversely isotropic soft materials-I: Theoretical analysis, simulations and inverse method

    NASA Astrophysics Data System (ADS)

    Li, Guo-Yang; Zheng, Yang; Liu, Yanlin; Destrade, Michel; Cao, Yanping

    2016-11-01

    A body force concentrated at a point and moving at a high speed can induce shear-wave Mach cones in dusty-plasma crystals or soft materials, as observed experimentally and named the elastic Cherenkov effect (ECE). The ECE in soft materials forms the basis of the supersonic shear imaging (SSI) technique, an ultrasound-based dynamic elastography method applied in clinics in recent years. Previous studies on the ECE in soft materials have focused on isotropic material models. In this paper, we investigate the existence and key features of the ECE in anisotropic soft media, by using both theoretical analysis and finite element (FE) simulations, and we apply the results to the non-invasive and non-destructive characterization of biological soft tissues. We also theoretically study the characteristics of the shear waves induced in a deformed hyperelastic anisotropic soft material by a source moving with high speed, considering that contact between the ultrasound probe and the soft tissue may lead to finite deformation. On the basis of our theoretical analysis and numerical simulations, we propose an inverse approach to infer both the anisotropic and hyperelastic parameters of incompressible transversely isotropic (TI) soft materials. Finally, we investigate the properties of the solutions to the inverse problem by deriving the condition numbers in analytical form and performing numerical experiments. In Part II of the paper, both ex vivo and in vivo experiments are conducted to demonstrate the applicability of the inverse method in practical use.

  19. Model of twelve properties of a set of organic solvents with graph-theoretical and/or experimental parameters.

    PubMed

    Pogliani, Lionello

    2010-01-30

    Twelve properties of a highly heterogeneous class of organic solvents have been modeled with a graph-theoretical molecular connectivity modified (MC) method, which makes it possible to encode the core electrons and the hydrogen atoms. The graph-theoretical method uses the concepts of simple, general, and complete graphs, where the last type of graph is used to encode the core electrons. The hydrogen atoms have been encoded with the aid of a graph-theoretical perturbation parameter, which contributes to the definition of the valence delta, delta(v), a key parameter in molecular connectivity studies. The models of the twelve properties obtained with a stepwise search algorithm are always satisfactory, and they make it possible to check the influence of the hydrogen content of the solvent molecules on the choice of the type of descriptor. A similar argument holds for the influence of the halogen atoms on the type of core-electron representation. In some cases the molar mass and, to a lesser extent, special "ad hoc" parameters have been used to improve the model. A very good model of the surface tension could be obtained with the aid of five experimental parameters. A mixed model method based on experimental parameters plus molecular connectivity indices, instead, consistently improved the model quality of five properties. Worth underlining is the importance of the boiling point temperatures as descriptors in these last two model methodologies. Copyright 2009 Wiley Periodicals, Inc.
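Pogliani's modified MC method is not reproduced here, but the family of molecular connectivity indices it extends starts from the classic first-order Randić branching index, which is easy to state concretely: sum 1/sqrt(deg(u)*deg(v)) over the bonds of the hydrogen-suppressed molecular graph. A small illustrative sketch:

```python
from collections import Counter

def randic_index(edges):
    """First-order Randic connectivity index of a hydrogen-suppressed
    molecular graph given as a list of bonds (u, v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1.0 / (deg[u] * deg[v]) ** 0.5 for u, v in edges)
```

For n-butane (a path of four carbons) this gives 1/sqrt(2) + 1/2 + 1/sqrt(2) ≈ 1.914, the well-known textbook value.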

  20. Biosensing via light scattering from plasmonic core-shell nanospheres coated with DNA molecules

    NASA Astrophysics Data System (ADS)

    Xie, Huai-Yi; Chen, Minfeng; Chang, Yia-Chung; Moirangthem, Rakesh Singh

    2017-05-01

    We present both experimental and theoretical studies for investigating DNA molecules attached on metallic nanospheres. We have developed an efficient and accurate numerical method to investigate light scattering from plasmonic nanospheres on a substrate covered by a shell, based on the Green's function approach with suitable spherical harmonic basis. Next, we use this method to study optical scattering from DNA molecules attached to metallic nanoparticles placed on a substrate and compare with experimental results. We obtain fairly good agreement between theoretical predictions and the measured ellipsometric spectra. The metallic nanoparticles were used to detect the binding with DNA molecules in a microfluidic setup via spectroscopic ellipsometry (SE), and a detectable change in ellipsometric spectra was found when DNA molecules are captured on Au nanoparticles. Our theoretical simulation indicates that the coverage of Au nanosphere by a submonolayer of DNA molecules, which is modeled by a thin layer of dielectric material (which may absorb light), can lead to a small but detectable spectroscopic shift in both the Ψ and Δ spectra with more significant change in Δ spectra in agreement with experimental results. Our studies demonstrated the ultrasensitive capability of SE for sensing submonolayer coverage of DNA molecules on Au nanospheres. Hence the spectroscopic ellipsometric measurements coupled with theoretical analysis via an efficient computation method can be an effective tool for detecting DNA molecules attached on Au nanoparticles, thus achieving label-free, non-destructive, and high-sensitivity biosensing with nanoscale resolution.

  1. Reconstructing in-vivo reflectance spectrum of pigmented skin lesion by Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Wang, Shuang; He, Qingli; Zhao, Jianhua; Lui, Harvey; Zeng, Haishan

    2012-03-01

    In dermatology applications, diffuse reflectance spectroscopy has been extensively investigated as a promising noninvasive tool to distinguish melanoma from benign pigmented skin lesions (nevi), which are concentrated with skin chromophores such as melanin and hemoglobin. We carried out a theoretical study to examine the melanin distribution in human skin tissue and establish a practical optical model for further investigation of pigmented skin, using a junctional nevus as an example. A multiple-layer skin optical model was developed from established anatomical structures of skin and the published optical parameters of the different skin layers, blood, and melanin. Monte Carlo simulation was used to model the interaction between the excitation light and skin tissue and to rebuild the diffuse reflectance from skin tissue. A previously tested methodology was adopted to determine the melanin content in human skin based on in vivo diffuse reflectance spectra. The rebuilt diffuse reflectance spectra were investigated by adding melanin into different layers of the theoretical model. An in vivo reflectance spectrum from a junctional nevus and its surrounding normal skin was studied by comparing the nevus-to-normal-skin ratio in both the experimental and simulated diffuse reflectance spectra. The simulation result showed good agreement with our clinical measurements, which indicates that our research method, including the spectral ratio method, the skin optical model, and the modification of the melanin content in the model, could be applied in further theoretical simulations of pigmented skin lesions.
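A full multilayer skin Monte Carlo code is well beyond an abstract, but the core mechanics it relies on can be sketched: sample a free path from the attenuation coefficient, drop the photon weight by the scattering albedo, scatter, and tally photons that escape back through the surface. A deliberately simplified 1-D, single-slab version (all parameter values illustrative, not the paper's skin model):

```python
import math
import random

def diffuse_reflectance(mu_a, mu_s, thickness, n_photons=20000, seed=1):
    """Minimal 1-D Monte Carlo: weighted photons step through a single
    absorbing, isotropically scattering slab (coefficients in 1/cm,
    thickness in cm); returns the diffusely reflected fraction."""
    rng = random.Random(seed)
    mu_t = mu_a + mu_s
    albedo = mu_s / mu_t
    reflected = 0.0
    for _ in range(n_photons):
        z, direction, weight = 0.0, 1.0, 1.0   # start at surface, heading down
        while weight > 1e-4:
            step = -math.log(rng.random()) / mu_t   # sampled free path
            z += direction * step
            if z <= 0.0:            # escaped back through the surface
                reflected += weight
                break
            if z >= thickness:      # transmitted out the bottom
                break
            weight *= albedo        # deposit part of the weight (absorption)
            direction = rng.choice((-1.0, 1.0))     # isotropic (1-D) scatter
    return reflected / n_photons
```

Raising the absorption coefficient (as adding melanin to a layer effectively does) lowers the simulated diffuse reflectance, which is the qualitative behavior the spectral ratio method exploits.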

  2. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce this ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical-estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular-geometry- and digital-mouse-based simulations and a liver-cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
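The MLEM core of the method (without the paper's filter step, which is specific to fMLEM) is the standard multiplicative iteration for Poisson-distributed data y ~ Poisson(A x): x <- x / (A^T 1) * A^T (y / (A x)). A minimal dense-matrix sketch:

```python
import numpy as np

def mlem(A, y, n_iter=5000):
    """Classic MLEM iteration for y ~ Poisson(A x) with x >= 0.
    A: (m, n) nonnegative system matrix, y: (m,) measurements."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)  # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The update is multiplicative, so a nonnegative start stays nonnegative, and for consistent noiseless data the forward projection A x converges to y.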

  3. Packaging consideration of two-dimensional polymer-based photonic crystals for laser beam steering

    NASA Astrophysics Data System (ADS)

    Dou, Xinyuan; Chen, Xiaonan; Chen, Maggie Yihong; Wang, Alan Xiaolong; Jiang, Wei; Chen, Ray T.

    2009-02-01

    In this paper, we report a theoretical study of polymer-based photonic crystals for laser beam steering, which is based on the superprism effect, as well as the experimental fabrication of two-dimensional photonic crystals for laser beam steering. The superprism effect, the principle behind the beam steering, was studied separately in detail through equifrequency contour (EFC) analysis. The polymer-based photonic crystals were fabricated through a double-exposure holographic interference method using SU8-2007. The experimental results are also reported.

  4. [Expectations and patient satisfaction in hospitals: construction and application of an expectation-based experience typology and its use in the management of quality and expectations].

    PubMed

    Gehrlach, Christoph; Güntert, Bernhard

    2015-01-01

    Patient satisfaction (PS) surveys are frequently used evaluation methods to show performance from the customer's view. This approach has some fundamental deficits, especially with respect to theory, methodology and usage. Because of the significant theoretical value of the expectation confirmation/disconfirmation concept in the development of PS, an expectation-based experience typology has been developed and tested to check whether this approach could be a theoretical and practical alternative to the surveying of PS. Because the comparison between expectations and expectation fulfilment is a mainly cognitive-rational process, it is easier to make changes in this stage of the process than in the subsequent stage of the development of PS, which is mainly based on emotional-affective processes. The paper contains a literature review of the common concept of PS and its causal and influencing factors. Based on the theoretical part of this study, an expectation-based experience typology was developed. In the next step, the typology was subjected to exploratory testing based on two patient surveys. In some parts of the tested typology, exploratory differences could be found between hospitals. Despite this rather more complex and unusual approach of an expectation-based experience typology, the concept offers the chance to change conditions not only retrospectively (based on data) but also prospectively, in terms of a "management of expectations". Copyright © 2014. Published by Elsevier GmbH.

  5. Recent progress in high-mobility thin-film transistors based on multilayer 2D materials

    NASA Astrophysics Data System (ADS)

    Hong, Young Ki; Liu, Na; Yin, Demin; Hong, Seongin; Kim, Dong Hak; Kim, Sunkook; Choi, Woong; Yoon, Youngki

    2017-04-01

    Two-dimensional (2D) layered semiconductors are emerging as promising candidates for next-generation thin-film electronics because of their high mobility, relatively large bandgap, low-power switching, and the availability of large-area growth methods. Thin-film transistors (TFTs) based on multilayer transition metal dichalcogenides or black phosphorus offer unique opportunities for next-generation electronic and optoelectronic devices. Here, we review recent progress in high-mobility transistors based on multilayer 2D semiconductors. We describe the theoretical background on characterizing methods of TFT performance and material properties, followed by their applications in flexible, transparent, and optoelectronic devices. Finally, we highlight some of the methods used in metal-semiconductor contacts, hybrid structures, heterostructures, and chemical doping to improve device performance.

  6. Theoretical repeatability assessment without repetitive measurements in gradient high-performance liquid chromatography.

    PubMed

    Kotani, Akira; Tsutsumi, Risa; Shoji, Asaki; Hayashi, Yuzuru; Kusu, Fumiyo; Yamamoto, Kazuhiro; Hakamata, Hideki

    2016-07-08

    This paper puts forward a time- and material-saving method for evaluating the repeatability of area measurements in gradient HPLC with UV detection (HPLC-UV), based on the function of mutual information (FUMI) theory, which can theoretically provide the measurement standard deviation (SD) and detection limits through the stochastic properties of the baseline noise, with no recourse to repetitive measurements of real samples. The chromatographic determination of terbinafine hydrochloride and enalapril maleate is taken as an example. The best choice of the number of noise data points, inevitable for the theoretical evaluation, is shown to be 512 data points (10.24 s at the 50 point/s sampling rate of an A/D converter). Coupled with the relative SD (RSD) of the sample injection variability in the instrument used, the theoretical evaluation is proved to give area measurement RSDs identical to those estimated by the usual repetitive method (n=6) over a wide concentration range of the analytes, within the 95% confidence intervals of the latter RSDs. The FUMI theory is not a statistical one, but the "statistical" reliability of its SD estimates (n=1) is observed to be as high as that attained by thirty-one measurements of the same samples (n=31). Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Theory and interpretation in qualitative studies from general practice: Why and how?

    PubMed

    Malterud, Kirsti

    2016-03-01

    In this article, I want to promote theoretical awareness and commitment among qualitative researchers in general practice and suggest adequate and feasible theoretical approaches. I discuss different theoretical aspects of qualitative research and present the basic foundations of the interpretative paradigm. Associations between paradigms, philosophies, methodologies and methods are examined and different strategies for theoretical commitment presented. Finally, I discuss the impact of theory for interpretation and the development of general practice knowledge. A scientific theory is a consistent and soundly based set of assumptions about a specific aspect of the world, predicting or explaining a phenomenon. Qualitative research is situated in an interpretative paradigm where notions about particular human experiences in context are recognized from different subject positions. Basic theoretical features from the philosophy of science explain why and how this is different from positivism. Reflexivity, including theoretical awareness and consistency, demonstrates interpretative assumptions, accounting for situated knowledge. Different types of theoretical commitment in qualitative analysis are presented, emphasizing substantive theories to sharpen the interpretative focus. Such approaches are clearly within reach for a general practice researcher contributing to clinical practice by doing more than summarizing what the participants talked about, without trying to become a philosopher. Qualitative studies from general practice deserve stronger theoretical awareness and commitment than what is currently established. Persistent attention to and respect for the distinctive domain of knowledge and practice where the research deliveries are targeted is necessary to choose adequate theoretical endeavours. © 2015 the Nordic Societies of Public Health.

  8. [A therapy concept based on aphasia diagnostic criteria].

    PubMed

    Frühauf, K

    1989-08-01

    Four concepts of therapy, their theoretical basis, their aims, and their methods are presented and the effectiveness of each measured psychometrically. All call for early betterment under optimum organisation. Therapeutic methods for use in special forms of aphasia are being tested but display little in the way of uniform results. Special importance is laid on a complex of treatment which would cover movement therapy, communication therapy, and occupational therapy.

  9. Vibrational Spectral Studies of Gemfibrozil

    NASA Astrophysics Data System (ADS)

    Benitta, T. Asenath; Balendiran, G. K.; James, C.

    2008-11-01

    The Fourier Transform Raman and infrared spectra of the crystallized drug molecule 5-(2,5-dimethylphenoxy)-2,2-dimethylpentanoic acid (Gemfibrozil) have been recorded and analyzed. Quantum chemical computational methods based on the Hartree-Fock method, using the Gaussian 03 software package, have been employed to theoretically model the molecule. The optimized geometry and vibrational frequencies have been predicted. Observed vibrational modes have been assigned with the aid of normal coordinate analysis.

  10. Analysis of entropy extraction efficiencies in random number generation systems

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Wang, Shuang; Chen, Wei; Yin, Zhen-Qiang; Han, Zheng-Fu

    2016-05-01

    Random numbers (RNs) have applications in many areas: lottery games, gambling, computer simulation, and, most importantly, cryptography [N. Gisin et al., Rev. Mod. Phys. 74 (2002) 145]. In cryptography theory, the theoretical security of the system calls for high quality RNs. Therefore, developing methods for producing unpredictable RNs with adequate speed is an attractive topic. Early on, despite the lack of theoretical support, pseudo RNs generated by algorithmic methods performed well and satisfied reasonable statistical requirements. However, as implemented, those pseudorandom sequences were completely determined by mathematical formulas and initial seeds, which cannot introduce extra entropy or information. In these cases, “random” bits are generated that are not at all random. Physical random number generators (RNGs), which, in contrast to algorithmic methods, are based on unpredictable physical random phenomena, have attracted considerable research interest. However, the way that we extract random bits from those physical entropy sources has a large influence on the efficiency and performance of the system. In this manuscript, we will review and discuss several randomness extraction schemes that are based on radiation or photon arrival times. We analyze the robustness, post-processing requirements and, in particular, the extraction efficiency of those methods to aid in the construction of efficient, compact and robust physical RNG systems.
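Among the simplest of the randomness-extraction schemes such reviews consider is von Neumann debiasing, which trades throughput for provable unbiasedness: given i.i.d. input bits of any fixed bias, it maps the pair 01 to 0 and 10 to 1 and discards 00 and 11. A minimal sketch:

```python
def von_neumann_extract(bits):
    """Von Neumann debiasing: map pairs 01 -> 0, 10 -> 1, drop 00/11.
    Output bits are unbiased if the input bits are i.i.d."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:       # keep only unequal pairs
            out.append(a)
    return out
```

The extraction efficiency is 2p(1-p) output bits per input pair for input bias p, which is exactly the kind of efficiency figure the analysis above compares across schemes.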

  11. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids phase measurement altogether. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measuring optical path when optical elements are imperfectly processed and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on the theory of Jones matrices, and the additional phase retardation of the λ/4 wave plate and the PBS, and its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design index, element tolerances and an error-correcting method for the system are proposed; a ranging system is built and a ranging experiment is performed. Experimental results show that, with the proposed tolerances, the system can satisfy the accuracy requirement. The present work serves as a guide for further research on system design and error distribution.
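In the idealized model (no parasitic phase retardation), the distance follows directly from two successive in-phase modulation frequencies: the signals are in phase whenever the round trip 2d holds an integer number of modulation wavelengths, 2d = m·c/f, so adjacent in-phase frequencies differ by Δf = c/(2d). A one-line sketch of this relation (our own illustration, not the paper's error model):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def distance_from_in_phase_freqs(f1, f2):
    """Distance from two successive in-phase modulation frequencies:
    2d = m*c/f implies delta_f = c/(2d), hence d = c/(2*(f2 - f1))."""
    return C / (2.0 * abs(f2 - f1))
```

For a 15 m target the in-phase frequencies are spaced about 10 MHz apart, which shows why frequency counting (rather than phase measurement) sets the accuracy.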

  12. Wavelength routing beyond the standard graph coloring approach

    NASA Astrophysics Data System (ADS)

    Blankenhorn, Thomas

    2004-04-01

    When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.
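The standard graph-coloring view the paper argues against can be sketched as first-fit wavelength assignment: two lightpaths that share a fiber link must receive different wavelengths, and each new lightpath takes the lowest wavelength not used by a conflicting one. A minimal illustration (data and names are our own, not from the paper):

```python
def first_fit_wavelengths(paths):
    """First-fit wavelength assignment for the graph-coloring view of RWA.
    `paths` maps a lightpath name to the set of fiber links it traverses;
    lightpaths sharing a link must get different wavelengths."""
    assignment = {}
    for name, links in paths.items():
        used = {assignment[other] for other, olinks in paths.items()
                if other in assignment and links & olinks}
        w = 0
        while w in used:     # lowest wavelength not in conflict
            w += 1
        assignment[name] = w
    return assignment
```

Minimizing the highest wavelength index used is exactly the objective whose cost-optimality the paper questions when per-wavelength installation costs decline.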

  13. Theoretical analysis of two nonpolarizing beam splitters in asymmetrical glass cubes.

    PubMed

    Shi, Jin Hui; Wang, Zheng Ping

    2008-05-01

    The design principle for a nonpolarizing beam splitter based on the Brewster condition in a cube is introduced. Nonpolarizing beam splitters in an asymmetrical glass cube are proposed and theoretically investigated, and applied examples are given. To realize 50% reflectance and 50% transmittance at specified wavelengths for both polarization components with an error of less than 2%, two measures are taken in the design procedure: adjusting the refractive index of the substrate material and optimizing the thickness of each film. The simulated results show that the targets are achieved using the method reported here.

  14. The mechanism and process of spontaneous boron doping in graphene in the theoretical perspective

    NASA Astrophysics Data System (ADS)

    Deng, Xiaohui; Zeng, Jing; Si, Mingsu; Lu, Wei

    2016-10-01

    A theoretical model is presented that reveals the mechanism of spontaneous boron doping of graphene and is consistent with the microwave plasma experiment using trimethylboron as the doping source (Tang et al. (2012) [19]). The spontaneous boron doping originates from the synergistic effect of B and the other groups (C, H, CH, CH2 or CH3) decomposed from trimethylboron. This work successfully explains the above experimental phenomenon and proposes a novel and feasible method for B doping of graphene. The mechanism presented here may also be suitable for other two-dimensional carbon-based materials.

  15. Calculative techniques for transonic flows about certain classes of wing-body combinations, phase 2

    NASA Technical Reports Server (NTRS)

    Stahara, S. S.; Spreiter, J. R.

    1972-01-01

    A theoretical analysis and associated computer programs were developed for predicting the properties of transonic flows about certain classes of wing-body combinations. The procedures used are based on the transonic equivalence rule and employ either an arbitrarily specified solution or the local linearization method for determining the nonlifting transonic flow about the equivalent body. The class of wing planform shapes includes wings having sweptback trailing edges and finite tip chord. Theoretical results are presented for surface and flow-field pressure distributions for both nonlifting and lifting situations at Mach number one.

  16. Remarks on a financial inverse problem by means of Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Cuomo, Salvatore; Di Somma, Vittorio; Sica, Federica

    2017-10-01

    Estimating the price of a barrier option is a typical inverse problem. In this paper we present a numerical and statistical framework for a market with a risk-free interest rate and a risky asset described by a geometric Brownian motion (GBM). After approximating the risky asset with a numerical method, we find the final option price by following an approach based on sequential Monte Carlo methods. All theoretical results are applied to the case of an option whose underlying is a real stock.
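The sequential Monte Carlo layer of the paper is not reproduced here, but the basic machinery it builds on, simulating GBM paths and averaging the discounted payoff of the surviving (non-knocked-out) paths, can be sketched for a down-and-out call (all parameter values illustrative):

```python
import math
import random

def down_and_out_call_mc(s0, k, barrier, r, sigma, T,
                         n_steps=100, n_paths=20000, seed=7):
    """Plain Monte Carlo price of a down-and-out barrier call under GBM:
    simulate discretely monitored paths, drop any that touch the barrier,
    and discount the average terminal payoff."""
    rng = random.Random(seed)
    dt = T / n_steps
    drift = (r - 0.5 * sigma ** 2) * dt     # exact GBM log-step drift
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        s, alive = s0, True
        for _ in range(n_steps):
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
            if s <= barrier:                # knocked out
                alive = False
                break
        if alive:
            total += max(s - k, 0.0)
    return math.exp(-r * T) * total / n_paths
```

With the barrier pushed far below the spot, the estimate approaches the plain vanilla Black-Scholes call price, a useful sanity check on the simulation.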

  17. Calculation of laminar heating rates on three-dimensional configurations using the axisymmetric analogue

    NASA Technical Reports Server (NTRS)

    Hamilton, H. H., II

    1980-01-01

    A theoretical method was developed for computing approximate laminar heating rates on three dimensional configurations at angle of attack. The method is based on the axisymmetric analogue which is used to reduce the three dimensional boundary layer equations along surface streamlines to an equivalent axisymmetric form by using the metric coefficient which describes streamline divergence (or convergence). The method was coupled with a three dimensional inviscid flow field program for computing surface streamline paths, metric coefficients, and boundary layer edge conditions.

  18. Fourier analysis and signal processing by use of the Moebius inversion formula

    NASA Technical Reports Server (NTRS)

    Reed, Irving S.; Yu, Xiaoli; Shih, Ming-Tang; Tufts, Donald W.; Truong, T. K.

    1990-01-01

    A novel Fourier technique for digital signal processing is developed. This approach to Fourier analysis is based on the number-theoretic method of the Moebius inversion of series. The Fourier transform method developed is shown also to yield the convolution of two signals. A computer simulation shows that this method for finding Fourier coefficients is quite suitable for digital signal processing. It competes with the classical FFT (fast Fourier transform) approach in terms of accuracy, complexity, and speed.
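The Möbius inversion of series underlying the technique is concrete: if g(n) = Σ_{d|n} f(d), then f(n) = Σ_{d|n} μ(n/d) g(d), where μ is the Möbius function. A small sketch of the formula itself (the full arithmetic-Fourier machinery of the paper is not reproduced):

```python
def mobius(n):
    """Moebius function mu(n) via trial factorization:
    0 if n has a squared prime factor, else (-1)^(number of prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # squared prime factor
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def mobius_invert(g, n):
    """If g(n) = sum over d | n of f(d), recover
    f(n) = sum over d | n of mu(n/d) * g(d)."""
    return sum(mobius(n // d) * g(d) for d in range(1, n + 1) if n % d == 0)
```

As a check, inverting the divisor-sum function sigma(n) = Σ_{d|n} d recovers f(n) = n exactly.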

  19. An error analysis of least-squares finite element method of velocity-pressure-vorticity formulation for Stokes problem

    NASA Technical Reports Server (NTRS)

    Chang, Ching L.; Jiang, Bo-Nan

    1990-01-01

    A theoretical proof of the optimal rate of convergence for the least-squares method is developed for the Stokes problem based on the velocity-pressure-vorticity formula. The 2D Stokes problem is analyzed to define the product space and its inner product, and the a priori estimates are derived to give the finite-element approximation. The least-squares method is found to converge at the optimal rate for equal-order interpolation.

  20. Finite Volume Method for Pricing European Call Option with Regime-switching Volatility

    NASA Astrophysics Data System (ADS)

    Lista Tauryawati, Mey; Imron, Chairul; Putri, Endah RM

    2018-03-01

    In this paper, we present a finite volume method for pricing a European call option using the Black-Scholes equation with regime-switching volatility. In the first step, we formulate the Black-Scholes equations with regime-switching volatility. We then use a fitted finite volume method for the spatial discretization, together with an implicit time-stepping technique. We show that the regime-switching scheme reverts to the non-switching Black-Scholes equation, supported by both theoretical evidence and numerical simulations.
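The fitted finite-volume scheme with regime switching is not reproduced here, but a standard fully implicit finite-difference discretization of the single-regime Black-Scholes PDE illustrates the implicit time-stepping structure such schemes share (grid sizes are illustrative; this is our own sketch, not the paper's method):

```python
import numpy as np

def bs_call_implicit_fd(s0, k, r, sigma, T, s_max=300.0, n_s=300, n_t=200):
    """European call via a fully implicit finite-difference scheme for the
    Black-Scholes PDE, stepping backward from the payoff at expiry."""
    ds, dt = s_max / n_s, T / n_t
    s = np.linspace(0.0, s_max, n_s + 1)
    v = np.maximum(s - k, 0.0)                   # payoff at expiry
    i = np.arange(1, n_s)                        # interior node indices
    a = 0.5 * dt * (sigma**2 * i**2 - r * i)     # sub-diagonal coefficient
    b = 1.0 + dt * (sigma**2 * i**2 + r)         # diagonal coefficient
    c = 0.5 * dt * (sigma**2 * i**2 + r * i)     # super-diagonal coefficient
    M = np.diag(b) - np.diag(a[1:], -1) - np.diag(c[:-1], 1)
    for n in range(n_t):
        tau = (n + 1) * dt                       # time to expiry at new level
        rhs = v[1:-1].copy()
        rhs[-1] += c[-1] * (s_max - k * np.exp(-r * tau))  # upper boundary
        v[1:-1] = np.linalg.solve(M, rhs)
        v[0], v[-1] = 0.0, s_max - k * np.exp(-r * tau)
    return float(np.interp(s0, s, v))
```

Being implicit, the scheme is unconditionally stable, and on this grid the computed at-the-money price agrees with the analytic Black-Scholes value to within a few cents.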

  1. Refinements in the hierarchical structure of externalizing psychiatric disorders: Patterns of lifetime liability from mid-adolescence through early adulthood.

    PubMed

    Farmer, Richard F; Seeley, John R; Kosty, Derek B; Lewinsohn, Peter M

    2009-11-01

    Research on hierarchical modeling of psychopathology has frequently identified 2 higher order latent factors, internalizing and externalizing. When based on the comorbidity of psychiatric diagnoses, the externalizing domain has usually been modeled as a single latent factor. Multivariate studies of externalizing symptom features, however, suggest multidimensionality. To address this apparent contradiction, confirmatory factor analytic methods and information-theoretic criteria were used to evaluate 4 theoretically plausible measurement models based on lifetime comorbidity patterns of 7 putative externalizing disorders. Diagnostic information was collected at 4 assessment waves from an age-based cohort of 816 persons between the ages of 14 and 33. A 2-factor model that distinguished oppositional behavior disorders (attention-deficit/hyperactivity disorder, oppositional defiant disorder) from social norm violation disorders (conduct disorder, adult antisocial behavior, alcohol use disorder, cannabis use disorder, hard drug use disorder) demonstrated consistently good fit and superior approximating abilities. Analyses of psychosocial outcomes measured at the last assessment wave supported the validity of this 2-factor model. Implications of this research for the theoretical understanding of domain-related disorders and the organization of classification systems are discussed. PsycINFO Database Record 2009 APA, all rights reserved.

  2. Theoretical investigation on multilayer nanocomposite-based fiber optic SPR sensor

    NASA Astrophysics Data System (ADS)

    Shojaie, Ehsan; Madanipour, Khosro; Gharibzadeh, Azadeh; Abbasi, Shabnam

    2017-06-01

    In this work, a multilayer nanocomposite-based fiber optic SPR sensor is considered and specifically designed for CO2 gas detection. The proposed fiber sensor consists of the fiber core, a gold-silver alloy layer, and absorber layers. The investigation is based on the evaluation of the transmitted power, derived using the transfer matrix method and multiple reflection in the sensing area. In terms of sensitivity, the sensor performance is studied theoretically under various conditions related to the metal layer and its gold and silver nanoparticles, which form a single alloy film. The effects of additional parameters, such as the composition ratio of the alloy and the thickness of the alloy film, on the performance of the SPR sensor are studied as well. Finally, a four-layer structure is introduced to detect carbon dioxide gas. It contains the fiber core, a gold-silver alloy layer, an absorbent layer for carbon dioxide gas (KOH), and the measurement environment. Lower price and smaller size are the main advantages of such a sensor compared with commercial NDIR gas sensors. Theoretical results show that increasing the metal layer thickness increases the sensitivity of the sensor, while increasing the proportion of gold in the alloy decreases it.

  3. Patient centredness in integrated care: results of a qualitative study based on a systems theoretical framework.

    PubMed

    Lüdecke, Daniel

    2014-10-01

    Health care providers seek to improve patient-centred care. Due to the fragmentation of services, this can only be achieved by establishing integrated care partnerships. The challenge is both to control costs while enhancing the quality of care and to coordinate this process in a setting with many organisations involved. The problem is to establish control mechanisms that ensure sufficient consideration of patient centredness. Seventeen qualitative interviews were conducted in hospitals of metropolitan areas in northern Germany. The documentary method, embedded in a systems theoretical framework, was used to describe and analyse the data and to provide insight into the specific perception of organisational behaviour in integrated care. The findings suggest that integrated care partnerships rely on networks based on professional autonomy in the context of reliability. The relationships of network partners are heavily based on informality. This correlates with a systems theoretical conception of organisations, which are assumed to be autonomous in their decision-making. Networks based on formal contracts may restrict professional autonomy and competition. Contractual bindings that suppress the competitive environment have negative consequences for patient-centred care. Drawbacks remain due to the missing self-regulation of the network. To conclude, less regimentation of integrated care partnerships is recommended.

  4. Wideband optical sensing using pulse interferometry.

    PubMed

    Rosenthal, Amir; Razansky, Daniel; Ntziachristos, Vasilis

    2012-08-13

    Advances in the fabrication of high-finesse optical resonators hold promise for the development of miniaturized, ultra-sensitive, wideband optical sensors based on resonance-shift detection. Many potential applications are foreseen for such sensors, among them highly sensitive detection in ultrasound and optoacoustic imaging. Traditionally, sensor interrogation is performed by tuning a narrow-linewidth laser to the resonance wavelength. Despite the ubiquity of this method, its use has been mostly limited to lab conditions due to its vulnerability to environmental factors and the difficulty of multiplexing - a key factor in imaging applications. In this paper, we develop a new optical-resonator interrogation scheme based on wideband pulse interferometry, potentially capable of achieving high stability against environmental conditions without compromising sensitivity. Additionally, the method enables multiplexing of several sensors. The unique properties of the pulse-interferometry interrogation approach are studied theoretically and experimentally. Methods for noise reduction in the proposed scheme are presented and experimentally demonstrated, while the overall performance is validated for broadband optical detection of ultrasonic fields. The achieved sensitivity is equivalent to the theoretical limit of a 6 MHz narrow-linewidth laser, which is 40 times higher than what can usually be achieved by incoherent interferometry for the same optical resonator.

  5. A deeper look at two concepts of measuring gene-gene interactions: logistic regression and interaction information revisited.

    PubMed

    Mielniczuk, Jan; Teisseyre, Paweł

    2018-03-01

    Detection of gene-gene interactions is one of the most important challenges in genome-wide case-control studies. Besides traditional logistic regression analysis, entropy-based methods have recently attracted significant attention. Among entropy-based methods, interaction information is one of the most promising measures, having many desirable properties. Although both logistic regression and interaction information have been used in several genome-wide association studies, the relationship between them has not been thoroughly investigated theoretically. The present paper attempts to fill this gap. We show that although certain connections between the two methods exist, in general they refer to two different concepts of dependence, and looking for interactions in those two senses leads to different approaches to interaction detection. We introduce an ordering between interaction measures and specify conditions for independent and dependent genes under which interaction information is a more discriminative measure than logistic regression. Moreover, we show that for so-called perfect distributions these measures are equivalent. The numerical experiments illustrate the theoretical findings, indicating that interaction information and its modified version are more universal tools for detecting various types of interaction than logistic regression and linkage disequilibrium measures. © 2017 Wiley Periodicals, Inc.
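
    As a small, self-contained sketch of the interaction-information measure discussed above (McGill's three-way formula computed from entropies; the XOR example is a standard textbook illustration, not the paper's data):

```python
import math
from collections import defaultdict
from itertools import product

def entropy(dist):
    # Shannon entropy in bits of a dict mapping outcomes to probabilities.
    total = sum(dist.values())
    return -sum((p / total) * math.log2(p / total) for p in dist.values() if p)

def marginal(joint, axes):
    m = defaultdict(float)
    for cell, p in joint.items():
        m[tuple(cell[a] for a in axes)] += p
    return m

def interaction_information(joint):
    # II(X;Y;Z) = H(XY)+H(XZ)+H(YZ) - H(X)-H(Y)-H(Z) - H(XYZ)
    #           = I(X;Y|Z) - I(X;Y)
    h = lambda axes: entropy(marginal(joint, axes))
    return (h((0, 1)) + h((0, 2)) + h((1, 2))
            - h((0,)) - h((1,)) - h((2,)) - entropy(joint))

# Purely synergistic example: Z = X XOR Y with X, Y fair and independent.
# Pairwise measures see nothing, yet the triple carries one full bit.
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
print(interaction_information(joint))  # 1.0
```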

  6. A method of increasing the depth of the plastically deformed layer in the roller burnishing process

    NASA Astrophysics Data System (ADS)

    Kowalik, Marek; Trzepiecinski, Tomasz

    2018-05-01

    The subject of this paper is the determination of the depth of the plastically deformed layer in the process of roller burnishing a shaft using a newly developed method in which a braking moment is applied to the roller. Applying the braking moment to the roller during the burnishing process makes it possible to increase the depth of the plastically deformed layer. The theoretical considerations presented are based on the Hertz-Belyaev and Huber-Mises theories and permit the calculation of the depth of plastic deformation of the top layer of the burnished shaft. The theoretical analysis has been verified experimentally and by numerical calculations based on the finite element method implemented in the MSC.Marc program. Experimental tests were carried out on ring-shaped samples made of C45 carbon steel. The samples were burnished at different values of roller force and braking moment. A significant increase was found in the depth of the plastically deformed surface layer of roller-burnished shafts. By exploiting strain hardening of the steel, the presented technology can increase the fatigue life of the shafts.
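
    The depth at which the shear stress peaks below a line contact, which sets where yielding starts, follows from the classical Hertz formulas invoked above. A minimal sketch, assuming steel-on-steel contact, a Poisson ratio of about 0.3 (for which the standard subsurface factors 0.30 and 0.78 apply), and placeholder load values; the paper's braking-moment modification is not included:

```python
import math

def hertz_line_contact(w, R, E1, nu1, E2, nu2):
    """Classical Hertz line contact (cylinder on flat); w is load per unit length [N/m]."""
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)   # contact modulus
    a = math.sqrt(4 * w * R / (math.pi * E_star))            # contact half-width
    p0 = 2 * w / (math.pi * a)                               # peak Hertz pressure
    # For nu ~ 0.3 the maximum shear stress sits below the surface:
    return {"a": a, "p0": p0, "tau_max": 0.30 * p0, "z_tau_max": 0.78 * a}

# Placeholder numbers: 10 mm roller radius, steel on steel, 100 kN/m line load.
res = hertz_line_contact(w=1e5, R=0.01, E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
print(res["p0"] / 1e9, "GPa peak pressure")
print(res["z_tau_max"] * 1e6, "um depth of maximum shear stress")
```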

  7. Number of perceptually distinct surface colors in natural scenes.

    PubMed

    Marín-Franch, Iván; Foster, David H

    2010-09-30

    The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10^3, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.
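
    A toy one-dimensional analogue of the information-theoretic idea described above: with a prior over discrete "colors" and noisy observer responses, the number of reliably distinguishable colors is roughly 2 raised to the mutual information between stimulus and response, and it falls as the observer noise grows. All parameters here are illustrative, not the paper's hyperspectral estimates:

```python
import math

def mutual_information(joint):
    # joint[x][y] = P(x, y); returns I(X;Y) in bits.
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return sum(p * math.log2(p / (px[i] * py[j]))
               for i, row in enumerate(joint)
               for j, p in enumerate(row) if p > 0)

def gaussian_channel(n_colors, sigma):
    # Discretized observer: response index = true index + Gaussian jitter.
    joint = [[0.0] * n_colors for _ in range(n_colors)]
    for x in range(n_colors):
        weights = [math.exp(-0.5 * ((y - x) / sigma) ** 2) for y in range(n_colors)]
        z = sum(weights)
        for y in range(n_colors):
            joint[x][y] = weights[y] / z / n_colors   # uniform prior over colors
    return joint

for sigma in (0.1, 1.0, 3.0):
    bits = mutual_information(gaussian_channel(32, sigma))
    print(f"sigma={sigma}: ~{2 ** bits:.1f} distinguishable colors out of 32")
```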

  8. Nonlinear vibration absorption for a flexible arm via a virtual vibration absorber

    NASA Astrophysics Data System (ADS)

    Bian, Yushu; Gao, Zhihui

    2017-07-01

    A semi-active vibration absorption method is put forward to attenuate nonlinear vibration of a flexible arm based on the internal resonance. To maintain the 2:1 internal resonance condition and the desirable damping characteristic, a virtual vibration absorber is suggested. It is mathematically equivalent to a vibration absorber but its frequency and damping coefficients can be readily adjusted by simple control algorithms, thereby replacing those hard-to-implement mechanical designs. Through theoretical analyses and numerical simulations, it is proven that the internal resonance can be successfully established for the flexible arm, and the vibrational energy of flexible arm can be transferred to and dissipated by the virtual vibration absorber. Finally, experimental results are presented to validate the theoretical predictions. Since the proposed method absorbs rather than suppresses vibrational energy of the primary system, it is more convenient to reduce strong vibration than conventional active vibration suppression methods based on smart material actuators with limited energy output. Furthermore, since it aims to establish an internal vibrational energy transfer channel from the primary system to the vibration absorber rather than directly respond to external excitations, it is especially applicable for attenuating nonlinear vibration excited by unpredictable excitations.

  9. A method for eliminating Faraday rotation in cryostat windows in longitudinal magneto-optical Kerr effect measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Polewko-Klim, A., E-mail: anetapol@uwb.edu.pl; Uba, S.; Uba, L.

    2014-07-15

    A solution to the problem of the disturbing effect of background Faraday rotation in the cryostat windows on longitudinal magneto-optical Kerr effect (LMOKE) measurements under vacuum conditions and/or at low temperatures is proposed. The method for eliminating the influence of Faraday rotation in the cryostat windows is based on a special arrangement of additional mirrors placed on the sample holder. In this arrangement, the cryostat window is oriented perpendicular to the light beam direction and parallel to the external magnetic field generated by the H-frame electromagnet. The operation of the LMOKE magnetometer with the special sample holder, based on the polarization modulation technique with a photo-elastic modulator, is theoretically analyzed with the use of Jones matrices, and formulas for evaluating the actual Kerr rotation and ellipticity of the sample are derived. The feasibility of the method and the good performance of the magnetometer are experimentally demonstrated for the LMOKE effect measured in Fe/Au multilayer structures. The influence of imperfect alignment of the magnetometer setup on the Kerr angles, as derived theoretically through the analytic model and verified experimentally, is examined and discussed.
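
    A toy Jones-calculus sketch of why window Faraday rotation contaminates the measurement: optical rotations compose, so an uncompensated window rotation adds directly to the Kerr rotation read off by the analyser. This is not the paper's full photo-elastic-modulator model, and the angles are placeholders:

```python
import math

def rotator(phi):
    # Jones matrix of an optical rotation by angle phi (Kerr or Faraday).
    return [[math.cos(phi), -math.sin(phi)],
            [math.sin(phi),  math.cos(phi)]]

def apply(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]

def measured_rotation(theta_kerr, beta_window):
    # x-polarized input; the analyser reads the polarization azimuth after
    # both the sample (Kerr) and window (Faraday) rotations.
    e = apply(rotator(beta_window), apply(rotator(theta_kerr), [1.0, 0.0]))
    return math.atan2(e[1], e[0])

theta, beta = math.radians(0.05), math.radians(0.30)
print(math.degrees(measured_rotation(theta, beta)))  # ~0.35 deg: window biases reading
print(math.degrees(measured_rotation(theta, 0.0)))   # ~0.05 deg: window removed
```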

  10. Efficient calibration for imperfect computer models

    DOE PAGES

    Tuo, Rui; Wu, C. F. Jeff

    2015-12-01

    Many computer models contain unknown parameters which need to be estimated using physical observations. Furthermore, the calibration method based on Gaussian process models may lead to unreasonable estimates for imperfect computer models. In this work, we extend this study to calibration problems with stochastic physical data. We propose a novel method, called the L2 calibration, and show its semiparametric efficiency. The conventional method of ordinary least squares is also studied. Theoretical analysis shows that it is consistent but not efficient. Numerical examples show that the proposed method outperforms the existing ones.
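
    A toy one-dimensional illustration of calibrating an imperfect computer model against noisy physical data by minimizing a discrete L2-type loss. This is only a caricature of the paper's semiparametric L2 calibration; the model, data and grid are invented for the example:

```python
import math, random

random.seed(0)

def model(x, theta):
    # Deliberately imperfect computer model of the "true" process sin(x).
    return theta * x

true_process = math.sin
xs = [i * math.pi / 2 / 50 for i in range(51)]              # design points on [0, pi/2]
ys = [true_process(x) + random.gauss(0, 0.02) for x in xs]  # noisy physical data

def l2_loss(theta):
    # Discrete surrogate for the L2 distance between data and model.
    return sum((y - model(x, theta)) ** 2 for x, y in zip(xs, ys))

# A coarse grid search keeps the sketch generic for nonlinear models
# (for this linear-in-theta model a closed form also exists).
theta_hat = min((t / 1000 for t in range(0, 2001)), key=l2_loss)
print(theta_hat)  # ~0.77, the best L2 slope for sin(x) on [0, pi/2]
```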

  11. Purification of photon subtraction from continuous squeezed light by filtering

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Jun-ichi; Asavanant, Warit; Furusawa, Akira

    2017-11-01

    Photon subtraction from squeezed states is a powerful scheme for creating good approximations of so-called Schrödinger cat states. However, conventional continuous-wave-based methods actually involve some impurity in the squeezing of localized wave packets, even in the ideal case of no optical losses. Here, we theoretically discuss this impurity by introducing the mode match of squeezing. Furthermore, we propose a method to remove this impurity by filtering the photon-subtraction field. Our method in principle enables the creation of pure photon-subtracted squeezed states, which was not possible with conventional methods.

  12. On the effect of boundary layer growth on the stability of compressible flows

    NASA Technical Reports Server (NTRS)

    El-Hady, N. M.

    1981-01-01

    The method of multiple scales is used to develop a formally correct method, based on nonparallel linear stability theory, that examines the two- and three-dimensional stability of compressible boundary layer flows. The method is applied to the supersonic flat-plate boundary layer at Mach number 4.5. The theoretical growth rates are in good agreement with experimental results. The method is also applied to the infinite-span swept-wing transonic boundary layer with suction to evaluate the effect of the nonparallel flow on the development of crossflow disturbances.

  13. Reconstructing a plasmonic metasurface for a broadband high-efficiency optical vortex in the visible frequency.

    PubMed

    Lu, Bing-Rui; Deng, Jianan; Li, Qi; Zhang, Sichao; Zhou, Jing; Zhou, Lei; Chen, Yifang

    2018-06-14

    Metasurfaces consisting of a two-dimensional metallic nano-antenna array are capable of transforming a Gaussian beam into an optical vortex with a helical phase front and a phase singularity by manipulating the polarization/phase of light. This miniaturizes a laboratory-scale optical system into a wafer-scale component, opening up a new area for broad applications in optics. However, the low efficiency of converting circularly polarized light into a vortex beam hinders further development. This paper reports our recent success in improving the efficiency over a broad waveband at visible frequencies compared with existing work. The choice of material, the geometry and spatial organization of the meta-atoms, and the fabrication fidelity are theoretically investigated by the Jones matrix method. A theoretical conversion efficiency of over 40% in the visible wavelength range is obtained by systematic calculation using the finite difference time domain (FDTD) method. The fabricated metasurface, based on the theoretically optimized parameters, demonstrates a high-quality vortex at optical frequencies with a significantly enhanced efficiency of over 20% over a broad waveband.
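
    A minimal Jones-calculus sketch of the geometric (Pancharatnam-Berry) phase that underlies such vortex-generating metasurfaces, assuming ideal half-wave-plate-like meta-atoms rather than the paper's actual antenna geometry: a meta-atom rotated by theta imprints a phase of 2*theta on the converted circular polarization, so sweeping theta with azimuth builds the helical phase front.

```python
import cmath, math

def half_wave_plate(theta):
    # Jones matrix of an ideal half-wave plate with fast axis at angle theta.
    c2, s2 = math.cos(2 * theta), math.sin(2 * theta)
    return [[c2, s2], [s2, -c2]]

def apply(m, v):
    return [m[0][0]*v[0] + m[0][1]*v[1], m[1][0]*v[0] + m[1][1]*v[1]]

lcp = [1 / math.sqrt(2), 1j / math.sqrt(2)]   # circular input (one handedness)

for deg in (0, 30, 60, 90):
    out = apply(half_wave_plate(math.radians(deg)), lcp)
    # Output is the opposite circular state carrying the phase exp(i * 2 * theta).
    print(deg, round(math.degrees(cmath.phase(out[0])), 1))  # geometric phase = 2*theta
```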

  14. Structure and Electronic Spectra of Purine-Methyl Viologen Charge Transfer Complexes

    PubMed Central

    Jalilov, Almaz S.; Patwardhan, Sameer; Singh, Arunoday; Simeon, Tomekia; Sarjeant, Amy A.; Schatz, George C.; Lewis, Frederick D.

    2014-01-01

    The structure and properties of the electron donor-acceptor complexes formed between methyl viologen (MV) and purine nucleosides and nucleotides in water and the solid state have been investigated using a combination of experimental and theoretical methods. Solution studies were performed using UV-vis and 1H NMR spectroscopy. Theoretical calculations were performed within the framework of density functional theory (DFT). Energy decomposition analysis indicates that dispersion and induction (charge-transfer) interactions dominate the total binding energy, whereas electrostatic interactions are largely repulsive. The appearance of charge transfer bands in the absorption spectra of the complexes is well described by time-dependent (TD) DFT and is further explained in terms of the redox properties of the purine monomers and solvation effects. Crystal structures are reported for complexes of methyl viologen with the purines 2′-deoxyguanosine 3′-monophosphate GMP (DAD′DAD′ type) and 7-deazaguanosine zG (DAD′ADAD′ type). Comparison of the structures determined in the solid state and by theoretical methods in solution provides valuable insights into the nature of charge-transfer interactions involving purine bases as electron donors. PMID:24294996

  15. On the discrepancy between eddy covariance and lysimetry-based surface flux measurements under strongly advective conditions

    USDA-ARS?s Scientific Manuscript database

    Discrepancies can arise among surface flux measurements collected using disparate techniques due to differences in both the instrumentation and theoretical underpinnings of the different measurement methods. Using data collected primarily over a pair of irrigated cotton fields as a part of the Bushl...

  16. Development and Validation of Cognitive Screening Instruments.

    ERIC Educational Resources Information Center

    Jarman, Ronald F.

    The author suggests that most research on the early detection of learning disabilities is characterized by an ineffective and atheoretical method of selecting and validating tasks. An alternative technique is proposed, based on a neurological theory of cognitive processes, whereby task analysis is a first step, with empirical analyses as…

  17. School Violence Assessment: A Conceptual Framework, Instruments, and Methods

    ERIC Educational Resources Information Center

    Benbenishty, Rami; Astor, Ron Avi; Estrada, Joey Nunez

    2008-01-01

    This article outlines a philosophical and theoretical framework for conducting school violence assessments at the local level. The authors advocate that assessments employ a strong conceptual foundation based on social work values. These values include the active measurement of ecological factors inside and outside the school that reflect the…

  18. Mediators and Moderators of a Psychosocial Intervention for Children Affected by Political Violence

    ERIC Educational Resources Information Center

    Tol, Wietse A.; Komproe, Ivan H.; Jordans, Mark J. D.; Gross, Alden L.; Susanty, Dessy; Macy, Robert D.; de Jong, Joop T. V. M.

    2010-01-01

    Objective: The authors examined moderators and mediators of a school-based psychosocial intervention for children affected by political violence, according to an ecological resilience theoretical framework. Method: The authors examined data from a cluster randomized trial, involving children aged 8-13 in Central Sulawesi, Indonesia (treatment…

  19. Online Fan Fiction, Global Identities, and Imagination

    ERIC Educational Resources Information Center

    Black, Rebecca W.

    2009-01-01

    Based on longitudinal data from a three year ethnographic study, this article uses discourse analytic methods to explore the literacy and social practices of three adolescent English language learners writing in an online fan fiction community. Theoretical constructs within globalization and literacy studies are used to describe the influences of…

  20. Cost Optimization in E-Learning-Based Education Systems: Implementation and Learning Sequence

    ERIC Educational Resources Information Center

    Fazlollahtabar, Hamed; Yousefpoor, Narges

    2009-01-01

    Increasing the effectiveness of e-learning has become one of the most practically and theoretically important issues within both educational engineering and information system fields. The development of information technologies has contributed to growth in online training as an important education method. The online training environment enables…

  1. The New Intelligence, Surveillance, and Reconnaissance Cockpit: Examining the Contributions of Emerging Unmanned Aircraft Systems

    DTIC Science & Technology

    2010-04-25

    other method involves the development of software-based pheromones, borrowing from the genetic behaviors employed by ants and termites. UAVs and...UCAVs employing this theoretical technique can essentially mark coverage areas and targets with “digital pheromones.” Both concepts are

  2. Teachers' Views of Their Assessment Practice

    ERIC Educational Resources Information Center

    Atjonen, Päivi

    2014-01-01

    The main aim of this research was to analyse teachers' views of pupil assessment. The theoretical framework was based on existing literature on advances and challenges of pupil assessment in regard to support for learning, fairness, educational partnership, feedback, and favourable methods. The data were gathered by means of a questionnaire…

  3. The Educational Governance of German School Social Science: The Example of Globalization

    ERIC Educational Resources Information Center

    Szukala, Andrea

    2016-01-01

    Purpose: This article challenges the outsiders' views on European school social science adopting genuine cosmopolitan views, when globalisation is treated in social science classrooms. Method: The article is based on the theoretical framework of educational governance analysis and on qualitative corpus analysis of representative German Laenders'…

  4. Training Programs for Observers of Behavior; A Review.

    ERIC Educational Resources Information Center

    Spool, Mark D.

    1978-01-01

    This review covers the past 25 years of research literature on training observers of behavior, specifically in the areas of interviewing, reducing rater bias, interpersonal perception and observation as a research tool. The focus is on determining the most successful training methods and their theoretical bases. (Author/SJL)

  5. Checking of individuality by DNA profiling.

    PubMed

    Brdicka, R; Nürnberg, P

    1993-08-25

    A review of methods of DNA analysis used in forensic medicine for identification, paternity testing, etc. is provided. Among other techniques, DNA fingerprinting using different probes and polymerase chain reaction-based techniques such as amplified sequence polymorphisms and minisatellite variant repeat mapping are thoroughly described and both theoretical and practical aspects are discussed.

  6. On the equivalence of spherical splines with least-squares collocation and Stokes's formula for regional geoid computation

    NASA Astrophysics Data System (ADS)

    Ophaug, Vegard; Gerlach, Christian

    2017-11-01

    This work is an investigation of three methods for regional geoid computation: Stokes's formula, least-squares collocation (LSC), and spherical radial basis functions (RBFs) using the spline kernel (SK). It is a first attempt to compare the three methods theoretically and numerically in a unified framework. While Stokes integration and LSC may be regarded as classic methods for regional geoid computation, RBFs may still be regarded as a modern approach. All methods are theoretically equal when applied globally, and we therefore expect them to give comparable results in regional applications. However, it has been shown by de Min (Bull Géod 69:223-232, 1995. doi: 10.1007/BF00806734) that the equivalence of Stokes's formula and LSC does not hold in regional applications without modifying the cross-covariance function. In order to make all methods comparable in regional applications, the corresponding modification has also been introduced in the SK. Ultimately, we present numerical examples comparing Stokes's formula, LSC, and SKs in a closed-loop environment using synthetic noise-free data, to verify their equivalence. All methods agree at the millimetre level.

  7. Problem-based learning in optical engineering studies

    NASA Astrophysics Data System (ADS)

    Voznesenskaya, Anna

    2016-09-01

    Nowadays, Problem-Based Learning (PBL) is one of the most promising educational technologies. PBL is based on evaluation of a student's learning outcomes, both professional and personal, instead of the traditional evaluation of theoretical knowledge and selected practical skills. Such an approach requires changes in curriculum development. Projects (cases) imitating real tasks from professional life should be introduced. These cases should include a problem summary with the necessary theoretical description, charts, graphs, information sources, etc., the task to implement, and evaluation indicators and criteria. Often these cases are evaluated with the assessment-center method. To motivate students for the given task, they can be divided into groups and have a contest. Whilst this looks easy to implement in social, economic or teaching fields, PBL is rather complicated in engineering studies. Examples of cases in first-cycle optical engineering studies are shown in this paper. Procedures for PBL implementation and evaluation are described.

  8. An Accurate and Stable FFT-based Method for Pricing Options under Exp-Lévy Processes

    NASA Astrophysics Data System (ADS)

    Ding, Deng; Chong U, Sio

    2010-05-01

    An accurate and stable method for pricing European options in exp-Lévy models is presented. The main idea of this new method is to combine the quadrature technique with the Carr-Madan Fast Fourier Transform method. The theoretical analysis shows that the overall complexity of this new method is still O(N log N) with N grid points, as for the fast Fourier transform methods. Numerical experiments for different exp-Lévy processes also show that the numerical algorithm proposed by this new method is accurate and stable for small strike prices K. This develops and improves on the Carr-Madan method.
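
    A plain-quadrature sketch of characteristic-function option pricing in the Carr-Madan damped-call form, specialized to the Black-Scholes case so the result can be checked against the closed-form price. The paper's combined quadrature/FFT scheme and its exp-Lévy generality are not reproduced here; any exp-Lévy model would plug in through its own characteristic function:

```python
import cmath, math

def bs_char(u, S0, r, sigma, T):
    # Characteristic function of ln(S_T) under Black-Scholes dynamics.
    m = math.log(S0) + (r - 0.5 * sigma**2) * T
    return cmath.exp(1j * u * m - 0.5 * sigma**2 * T * u * u)

def call_by_quadrature(S0, K, r, sigma, T, alpha=1.5, v_max=100.0, n=20000):
    # Carr-Madan damped-call integrand evaluated by the trapezoidal rule.
    k, h = math.log(K), v_max / n
    total = 0.0
    for i in range(n + 1):
        v = i * h
        psi = (math.exp(-r * T) * bs_char(v - (alpha + 1) * 1j, S0, r, sigma, T)
               / (alpha**2 + alpha - v * v + 1j * (2 * alpha + 1) * v))
        term = (cmath.exp(-1j * v * k) * psi).real
        total += term * (0.5 if i in (0, n) else 1.0)
    return math.exp(-alpha * k) / math.pi * total * h

def bs_closed_form(S0, K, r, sigma, T):
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return S0 * N(d1) - K * math.exp(-r * T) * N(d1 - sigma * math.sqrt(T))

print(call_by_quadrature(100, 100, 0.05, 0.2, 1.0))  # ~10.4506
print(bs_closed_form(100, 100, 0.05, 0.2, 1.0))      # ~10.4506
```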

  9. Comparative study between the results of effective index based matrix method and characterization of fabricated SU-8 waveguide

    NASA Astrophysics Data System (ADS)

    Samanta, Swagata; Dey, Pradip Kumar; Banerji, Pallab; Ganguly, Pranabendu

    2017-01-01

    A study regarding the validity of the effective-index based matrix method (EIMM) for fabricated SU-8 channel waveguides is reported. The design method is extremely fast compared with other existing numerical techniques such as BPM and FDTD. In EIMM, the effective index method is applied in the depth direction of the waveguide and the resulting lateral index profile is analyzed by a transfer matrix method. With EIMM one can compute the guided-mode propagation constants and mode profiles for each mode for any waveguide dimensions. The technique may also be used to design single-mode waveguides. SU-8 waveguide fabrication was carried out by a continuous-wave direct laser writing process at 375 nm wavelength. The measured propagation losses of these wire waveguides with air and PDMS superstrates were 0.51 dB/mm and 0.3 dB/mm, respectively. The number of guided modes, obtained theoretically as well as experimentally, for the air-cladded waveguide was much larger than for the PDMS-cladded waveguide. We were able to excite the isolated fundamental mode of the latter by precise fiber positioning, and its mode image was recorded. The mode profiles, mode indices, and refractive index profiles extracted from this mode image of the fundamental mode matched remarkably well with the theoretical predictions.
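
    The effective-index step of EIMM reduces each direction to a slab problem. A minimal sketch of that slab dispersion relation, solved by bisection for the fundamental TE mode of a symmetric slab; the SU-8, air and PDMS indices and the 2 um thickness are assumed illustrative values, not the paper's measured ones:

```python
import math

def fundamental_te_index(n_core, n_clad, d, wavelength):
    # Even TE0 mode of a symmetric slab: solve u*tan(u) = sqrt(V^2 - u^2),
    # where u = kappa*d/2 and V is the normalized frequency.
    k0 = 2 * math.pi / wavelength
    V = k0 * (d / 2) * math.sqrt(n_core**2 - n_clad**2)
    g = lambda u: u * math.tan(u) - math.sqrt(max(V * V - u * u, 0.0))
    lo, hi = 1e-9, min(V, math.pi / 2) - 1e-9   # TE0 root lies in this bracket
    for _ in range(100):                        # bisection (g is increasing here)
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    kappa = 2 * lo / d
    return math.sqrt(n_core**2 - (kappa / k0) ** 2)

# Assumed values: SU-8 core (~1.59) with air or PDMS (~1.41) cladding, 2 um slab.
for n_clad in (1.00, 1.41):
    print(n_clad, round(fundamental_te_index(1.59, n_clad, 2e-6, 633e-9), 4))
```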

  10. Prediction of stress- and strain-based forming limits of automotive thin sheets by numerical, theoretical and experimental methods

    NASA Astrophysics Data System (ADS)

    Béres, Gábor; Weltsch, Zoltán; Lukács, Zsolt; Tisza, Miklós

    2018-05-01

    Forming limit is a complex concept of limit values related to the onset of local necking in the sheet metal. In cold sheet metal forming, the major and minor limit strains are influenced by the sheet thickness and strain path (deformation history) as well as the material parameters and microstructure. Forming Limit Curves are plotted in the ɛ1 - ɛ2 coordinate system, providing the classic strain-based Forming Limit Diagram (FLD). Using an appropriate constitutive model, the limit strains can be converted into the stress-based Forming Limit Diagram (SFLD), irrespective of the strain path. This study concerns the effect of the hardening model parameters on the limit stress values determined from Nakazima tests for automotive dual phase (DP) steels. Five limit strain pairs were specified experimentally by loading five different sheet geometries, which produced different strain paths from pure shear (-2ɛ2=ɛ1) up to biaxial stretching (ɛ2=ɛ1). The earlier works of Hill, Levy-Tyne and Keeler-Brazier also permit a theoretical determination of the limit strains. This was followed by the stress calculation based on the experimental and theoretical strain data. Since the n exponent in the Nádai expression varies with strain for some DP steels, we applied the least-squares method to fit other hardening model parameters (Ludwik, Voce, Hockett-Sherby) and to calculate the stress fields belonging to each limit strain. The results showed that each model's parameters can produce some discrepancies between the limit stress states in the range of equivalent strains higher than uniaxial stretching. The calculated hardening models were imported into an FE code to extend and validate the results by numerical simulations.
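
    As a sketch of the least-squares fitting of a hardening law mentioned above, the following recovers the parameters of a Hollomon/Nádai power law sigma = K * eps^n from a flow curve via log-linear regression. The flow curve is synthetic and K, n are invented placeholder values, not the paper's DP-steel data; the same least-squares idea extends to the Ludwik, Voce and Hockett-Sherby forms with a nonlinear solver:

```python
import math

def fit_hollomon(strains, stresses):
    # sigma = K * eps^n  =>  ln(sigma) = ln(K) + n * ln(eps): ordinary least squares.
    xs = [math.log(e) for e in strains]
    ys = [math.log(s) for s in stresses]
    m = len(xs)
    x_bar, y_bar = sum(xs) / m, sum(ys) / m
    n = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
    K = math.exp(y_bar - n * x_bar)
    return K, n

# Synthetic flow curve with assumed parameters K = 1000 MPa, n = 0.15.
eps = [0.02 + 0.01 * i for i in range(20)]
sigma = [1000.0 * e ** 0.15 for e in eps]
K, n = fit_hollomon(eps, sigma)
print(round(K, 1), round(n, 4))  # 1000.0 0.15
```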

  11. Theoretical aspects and the experience of studying spectra of low-frequency microseisms

    NASA Astrophysics Data System (ADS)

    Birialtsev, E.; Vildanov, A.; Eronina, E.; Rizhov, D.; Rizhov, V.; Sharapov, I.

    2009-04-01

    The appearance of low-frequency spectral anomalies in natural microseismic noise over oil and gas deposits has been observed since 1989 in different oil and gas regions (S. Arutunov, S. Dangel, G. Goloshubin). Several methods of prospecting and exploration of oil and gas deposits are based on this effect (NTK ANCHAR, Spectraseis AG). There are several points of view (S. Arutunov, E. Birialtsev, Y. Podladchikov) on the physical model of the effect, resting on fundamentally different geophysical mechanisms. One is based on the hypothesis that the microseismic noise is generated within the oil and gas reservoir. Another is based on the mechanism of filtering of microseismic noise in the geological medium, in which the oil and gas reservoir acts as a contrast layer. For the first hypothesis, an adequate quantitative physical-mathematical model is absent. The second hypothesis faces a discrepancy in the distribution of energy over the theoretically calculated eigenfrequencies of the «ground surface - oil deposit» waveguide: the fundamental frequency (less than 1 Hz in most cases) should have the highest amplitude, whereas the regularly observed range is 1-10 Hz. During 2005-2008, specialists of «Gradient» JSC processed microseismic signals from more than 50 geological objects. The parameters of the low-frequency anomalies were compared with the medium properties (porosity, saturation and viscosity) determined from drilling, which allowed a statistical analysis to be carried out and some correlations to be established. This paper presents results of theoretical calculation of the spectra of microseisms in the zone of oil and gas deposits by mathematical modeling of seismic wave propagation, and compares the spectra of modeled microseisms with those actually observed. Mathematical modeling of the microseismic vibration spectra showed good agreement between the theoretical spectra and those observed in practice. This supports the applicability of microseismic methods of exploration for oil and gas.
Correlations between the spectral parameters of microseisms and reservoir parameters were investigated based on the results of subsequent drilling. The identified dependences of the low-frequency seismic signal on the collecting properties of the reservoir indicate that the spectrum of microseisms changes with the filtration and capacitive properties of the reservoir-collector. Changes in the physical properties of the oil also affect the spectral anomalies of the microseismic field. The obtained dependencies of the influence of deposit and fluid parameters on the spectral characteristics of microseisms are consistent with theoretical ideas about the nature of this influence. In general, the research performed confirms the previously expressed hypothesis concerning the physical model of the effect of low-frequency spectral anomalies in natural microseismic noise over oil and gas deposits, and significantly refines the approach to the method of interpretation. Since 2005, the method of interpreting microseismic spectrum anomalies based on the hypothesis of filtering of microseisms by the geological medium has been widely used by «Gradient» JSC on the territory of the Volga-Ural oil province. About 70 wells have been drilled based on the results of these studies. According to the results of independent experts, the effectiveness of the forecasting exceeds 80%.

  12. Aligning professional skills and active learning methods: an application for information and communications technology engineering

    NASA Astrophysics Data System (ADS)

    Llorens, Ariadna; Berbegal-Mirabent, Jasmina; Llinàs-Audet, Xavier

    2017-07-01

    Engineering education is facing new challenges to effectively provide the appropriate skills to future engineering professionals according to market demands. This study proposes a model based on active learning methods, which is expected to facilitate the acquisition of the professional skills most highly valued in the information and communications technology (ICT) market. The theoretical foundations of the study are based on the specific literature on active learning methodologies. The Delphi method is used to establish the fit between learning methods and generic skills required by the ICT sector. An innovative proposition is therefore presented that groups the required skills in relation to the teaching method that best develops them. The qualitative research suggests that a combination of project-based learning and the learning contract is sufficient to ensure a satisfactory skills level for this profile of engineers.

  13. Focal length hysteresis of a double-liquid lens based on electrowetting

    NASA Astrophysics Data System (ADS)

    Peng, Runling; Wang, Dazhen; Hu, Zhiwei; Chen, Jiabi; Zhuang, Songlin

    2013-02-01

    In this paper, an extended Young equation especially suited for an ideal cylindrical double-liquid variable-focus lens is derived by means of an energy minimization method. Based on the extended Young equation, a kind of focal length hysteresis effect is introduced into the double-liquid variable-focus lens. Such an effect can be explained theoretically by adding a force of friction to the tri-phase contact line. Theoretical analysis shows that the focal length at a particular voltage can be different depending on whether the applied voltage is increasing or decreasing, that is, there is a focal length hysteresis effect. Moreover, the focal length at a particular voltage must be larger when the voltage is rising than when it is dropping. These conclusions are also verified by experiments.
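    The voltage dependence described above can be sketched numerically. The following toy model (not the authors' derivation) combines the Young-Lippmann relation with an illustrative contact-line friction offset `f_c`; all parameter values (`cos_theta0`, `eps_r`, `d`, `gamma`, the cell radius `a`, the index contrast `dn`) are assumptions chosen only to make the hysteresis visible.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def cos_theta(V, rising, cos_theta0=0.2, eps_r=3.0, d=1e-6, gamma=0.04, f_c=0.05):
    """Young-Lippmann contact angle plus an illustrative contact-line
    friction offset f_c that opposes the direction of motion."""
    c_eq = cos_theta0 + eps_r * EPS0 * V ** 2 / (2.0 * gamma * d)
    return c_eq - f_c if rising else c_eq + f_c

def focal_length(V, rising, a=1.5e-3, dn=0.15):
    """Paraxial focal length of the spherical meniscus in a cylindrical
    cell of radius a: interface radius R = a/cos(theta), f = R/dn."""
    return a / (cos_theta(V, rising) * dn)
```

    With these illustrative numbers, the focal length at a given voltage comes out larger on the rising branch than on the falling branch, mirroring the effect reported in the abstract.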

  14. Combinatorial compatibility as habit-controlling factor in lysozyme crystallization I. Monomeric and tetrameric F faces derived graph-theoretically

    NASA Astrophysics Data System (ADS)

    Strom, C. S.; Bennema, P.

    1997-03-01

    A series of two articles discusses possible morphological evidence for oligomerization of growth units in the crystallization of tetragonal lysozyme, based on a rigorous graph-theoretic derivation of the F faces. In the first study (Part I), the growth layers are derived as valid networks satisfying the conditions of F slices in the context of the PBC theory, using the graph-theoretic method implemented in the program FFACE [C.S. Strom, Z. Krist. 172 (1985) 11]. The analysis is performed in monomeric and alternative tetrameric and octameric formulations of the unit cell, assuming tetramer formation according to the strongest bonds. F (flat) slices with thickness Rd_hkl (1/2 < R ≤ 1) are predicted theoretically in the forms 1 1 0, 0 1 1, 1 1 1. The relevant energies are established in the broken bond model. The relation between possible oligomeric specifications of the unit cell and combinatorially feasible F slice compositions in these orientations is explored.

  15. An analysis of random projection for changeable and privacy-preserving biometric verification.

    PubMed

    Wang, Yongjin; Plataniotis, Konstantinos N

    2010-10-01

    Changeability and privacy protection are important factors for widespread deployment of biometrics-based verification systems. This paper presents a systematic analysis of a random-projection (RP)-based method for addressing these problems. The employed method transforms biometric data using a random matrix with each entry an independent and identically distributed Gaussian random variable. The similarity- and privacy-preserving properties, as well as the changeability of the biometric information in the transformed domain, are analyzed in detail. Specifically, RP on both high-dimensional image vectors and dimensionality-reduced feature vectors is discussed and compared. A vector translation method is proposed to improve the changeability of the generated templates. The feasibility of the introduced solution is well supported by detailed theoretical analyses. Extensive experimentation on a face-based biometric verification problem shows the effectiveness of the proposed method.
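    A minimal sketch of the transform analyzed above: projecting with an i.i.d. Gaussian matrix scaled by 1/sqrt(k) approximately preserves pairwise geometry (the Johnson-Lindenstrauss property), while reseeding the matrix yields a fresh, weakly correlated template. The dimensions and seeds below are arbitrary choices, not values from the paper.

```python
import numpy as np

def random_projection(x, k, seed):
    """Transform a d-dim biometric vector with an i.i.d. Gaussian matrix,
    scaled by 1/sqrt(k) so norms/distances are preserved in expectation;
    reseeding issues a new, cancellable template."""
    R = np.random.default_rng(seed).normal(size=(k, x.shape[0])) / np.sqrt(k)
    return R @ x

x = np.random.default_rng(0).normal(size=1024)   # stand-in feature vector
t1 = random_projection(x, 256, seed=1)
t2 = random_projection(x, 256, seed=2)           # revoked/reissued template
```

    The norm of `t1` stays close to that of `x` (similarity preservation), while `t1` and `t2` are nearly uncorrelated (changeability).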

  16. Exploring super-Gaussianity toward robust information-theoretical time delay estimation.

    PubMed

    Petsatodis, Theodoros; Talantzis, Fotios; Boukis, Christos; Tan, Zheng-Hua; Prasad, Ramjee

    2013-03-01

    Time delay estimation (TDE) is a fundamental component of speaker localization and tracking algorithms. Most existing systems are based on the generalized cross-correlation method, assuming Gaussianity of the source. It has been shown that the distribution of speech captured with far-field microphones is highly varying, depending on the noise and reverberation conditions. Thus the performance of TDE is expected to fluctuate with the underlying assumption for the speech distribution, being also subject to multi-path reflections and competing background noise. This paper investigates the effect upon TDE of modeling the source signal with different speech-based distributions. An information-theoretical TDE method indirectly encapsulating higher order statistics (HOS) formed the basis of this work. The underlying assumption of a Gaussian distributed source has been replaced by that of a generalized Gaussian distribution, which allows evaluating the problem under a larger set of speech-shaped distributions, ranging from Gaussian to Laplacian and Gamma. Closed forms of the univariate and multivariate entropy expressions of the generalized Gaussian distribution are derived to evaluate the TDE. The results indicate that TDE based on the specific criterion is independent of the underlying assumption for the distribution of the source, for the same covariance matrix.
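    The univariate ingredient of such an analysis has a well-known closed form. The sketch below implements the textbook differential entropy of the generalized Gaussian family (shape beta = 1 Laplacian, beta = 2 Gaussian); it is the standard formula, not the paper's multivariate derivation.

```python
import math

def ggd_entropy(alpha, beta):
    """Differential entropy (nats) of a zero-mean generalized Gaussian
    f(x) = beta/(2*alpha*Gamma(1/beta)) * exp(-(|x|/alpha)**beta):
        h = 1/beta + ln(2*alpha*Gamma(1/beta)/beta).
    beta = 1 is Laplacian; beta = 2 is Gaussian with alpha = sqrt(2)*sigma."""
    return 1.0 / beta + math.log(2.0 * alpha * math.gamma(1.0 / beta) / beta)
```

    For beta = 2 and alpha = sqrt(2)·sigma this reduces to the familiar Gaussian entropy 0.5·ln(2·pi·e·sigma²), a useful sanity check.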

  17. The reliability of photoneutron cross sections for 90,91,92,94Zr

    NASA Astrophysics Data System (ADS)

    Varlamov, V. V.; Davydov, A. I.; Ishkhanov, B. S.; Orlin, V. N.

    2018-05-01

    Data on the partial photoneutron reaction cross sections (γ,1n) and (γ,2n) for 90,91,92,94Zr obtained at Livermore (USA) and for 90Zr obtained at Saclay (France) were analyzed. The experimental data were obtained using quasimonoenergetic photon beams from the annihilation in flight of relativistic positrons, with the partial reactions separated by the method of photoneutron multiplicity sorting based on neutron energy measurement. The analysis applies objective physical criteria of data reliability. Large systematic uncertainties were found in the partial cross sections, since they do not satisfy those criteria. To obtain reliable cross sections for the partial (γ,1n) and (γ,2n) and total (γ,1n) + (γ,2n) reactions on 90,91,92,94Zr and the (γ,3n) reaction on 94Zr, the experimental-theoretical method was used. It is based on the experimental data for the neutron yield cross section, which is largely independent of the neutron multiplicity, and on theoretical equations of the combined photonucleon reaction model (CPNRM). The newly evaluated data are compared with the experimental ones, and the reasons for the noticeable disagreements between them are discussed.
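    Physical reliability criteria of this kind are commonly stated through ratios F_i of the partial cross sections to the neutron-yield cross section. The helper below is an illustrative sketch of that bookkeeping (not the evaluation code; the stated bounds follow from F_i being fractions of weighted multiplicities).

```python
def multiplicity_ratios(s1, s2, s3=0.0):
    """Ratios F_i = sigma(g,in)/sigma(g,xn) for multiplicities i = 1..3,
    with the neutron-yield cross section sigma(g,xn) = s1 + 2*s2 + 3*s3.
    Physically 0 <= F1 <= 1.00, 0 <= F2 <= 0.50, 0 <= F3 <= 0.33;
    values outside these bounds flag unreliable multiplicity sorting."""
    xn = s1 + 2.0 * s2 + 3.0 * s3
    return s1 / xn, s2 / xn, s3 / xn
```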

  18. Design method of redundancy of brace-anchor sharing supporting based on cooperative deformation

    NASA Astrophysics Data System (ADS)

    Liu, Jun-yan; Li, Bing; Liu, Yan; Cai, Shan-bing

    2017-11-01

    Because of complicated environmental requirements, foundation pit support takes diverse forms, and brace-anchor sharing support is widely used. However, research on the force-deformation characteristics and the cooperative response of brace-anchor sharing support is insufficient. The application of redundancy theory in structural engineering is relatively mature, but there is little theoretical research on redundancy in underground engineering. Based on the idea of cooperative deformation, this paper calculates the cooperative-deformation redundancy ratio using the local reinforcement design method and Frangopol's formula for the redundancy parameter of structural components. Combined with an engineering case, the cooperative-deformation redundancy ratio at the joint of the brace-anchor sharing support is calculated, and the optimal anchor distribution form under the condition of cooperative deformation is explored. Through analysis of the displacement and stress fields, the results of the cooperative deformation are validated against field monitoring data. This provides a theoretical basis for the design of this kind of foundation pit in the future.

  19. Beyond the SCS-CN method: A theoretical framework for spatially lumped rainfall-runoff response

    NASA Astrophysics Data System (ADS)

    Bartlett, M. S.; Parolari, A. J.; McDonnell, J. J.; Porporato, A.

    2016-06-01

    Since its introduction in 1954, the Soil Conservation Service curve number (SCS-CN) method has become the standard tool, in practice, for estimating an event-based rainfall-runoff response. However, because of its empirical origins, the SCS-CN method is restricted to certain geographic regions and land use types. Moreover, it does not describe the spatial variability of runoff. To move beyond these limitations, we present a new theoretical framework for spatially lumped, event-based rainfall-runoff modeling. In this framework, we describe the spatially lumped runoff model as a point description of runoff that is upscaled to a watershed area based on probability distributions that are representative of watershed heterogeneities. The framework accommodates different runoff concepts and distributions of heterogeneities, and in doing so, it provides an implicit spatial description of runoff variability. Heterogeneity in storage capacity and soil moisture are the basis for upscaling a point runoff response and linking ecohydrological processes to runoff modeling. For the framework, we consider two different runoff responses for fractions of the watershed area: "prethreshold" and "threshold-excess" runoff. These occur before and after infiltration exceeds a storage capacity threshold. Our application of the framework results in a new model (called SCS-CNx) that extends the SCS-CN method with the prethreshold and threshold-excess runoff mechanisms and an implicit spatial description of runoff. We show proof of concept in four forested watersheds and further that the resulting model may better represent geographic regions and site types that previously have been beyond the scope of the traditional SCS-CN method.
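    For reference, the traditional SCS-CN point response that this framework generalizes is a two-line computation. A minimal sketch in inch units, with the standard initial-abstraction ratio λ = 0.2:

```python
def scs_cn_runoff(P, CN, lam=0.2):
    """Event runoff depth Q (inches) from rainfall P (inches) and curve
    number CN: potential retention S = 1000/CN - 10, initial abstraction
    Ia = lam*S, and Q = (P - Ia)^2 / (P - Ia + S) once P exceeds Ia."""
    S = 1000.0 / CN - 10.0
    Ia = lam * S
    if P <= Ia:
        return 0.0
    return (P - Ia) ** 2 / (P - Ia + S)
```

    At CN = 100 all rainfall runs off (S = 0), while below the initial abstraction no runoff is produced, which is exactly the threshold behavior the SCS-CNx extension decomposes into prethreshold and threshold-excess mechanisms.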

  20. Assessing implementation difficulties in tobacco use prevention and cessation counselling among dental providers

    PubMed Central

    2011-01-01

    Background Tobacco use adversely affects oral health. Clinical guidelines recommend that dental providers promote tobacco abstinence and provide patients who use tobacco with brief tobacco use cessation counselling. Research shows that these guidelines are seldom implemented, however. To improve guideline adherence and to develop effective interventions, it is essential to understand provider behaviour and challenges to implementation. This study aimed to develop a theoretically informed measure for assessing implementation difficulties among dental providers related to tobacco use prevention and cessation (TUPAC) counselling guidelines, to evaluate those difficulties among a sample of dental providers, and to investigate a possible underlying structure of the applied theoretical domains. Methods A 35-item questionnaire was developed based on key theoretical domains relevant to the implementation behaviours of healthcare providers. Specific items were drawn mostly from the literature on TUPAC counselling studies of healthcare providers. The data were collected from dentists (n = 73) and dental hygienists (n = 22) in 36 dental clinics in Finland using a web-based survey. Of 95 providers, 73 participated (76.8%). We used Cronbach's alpha to ascertain the internal consistency of the questionnaire. Mean domain scores were calculated to assess different aspects of implementation difficulties, and exploratory factor analysis was used to assess the theoretical domain structure. The authors agreed on the labels assigned to the factors on the basis of their component domains and the broader behavioural and theoretical literature. Results Internal consistency values for the theoretical domains varied from 0.50 ('emotion') to 0.71 ('environmental context and resources'). The domain environmental context and resources had the lowest mean score (21.3%; 95% confidence interval [CI], 17.2 to 25.4) and was identified as a potential implementation difficulty.
The domain emotion provided the highest mean score (60%; 95% CI, 55.0 to 65.0). Three factors were extracted that explain 70.8% of the variance: motivation (47.6% of variance, α = 0.86), capability (13.3% of variance, α = 0.83), and opportunity (10.0% of variance, α = 0.71). Conclusions This study demonstrated a theoretically informed approach to identifying possible implementation difficulties in TUPAC counselling among dental providers. This approach provides a method for moving from diagnosing implementation difficulties to designing and evaluating interventions. PMID:21615948
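    The internal-consistency statistic reported above has a simple closed form. A minimal sketch of the generic formula (not the study's analysis pipeline; the toy score matrix is illustrative):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)
```

    Perfectly parallel items give alpha = 1; uncorrelated items drive it toward 0, which is why values like 0.50 for a domain signal weak internal consistency.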

  1. Tunneling Splittings in Vibronic Structure of CH_3F^+ ( X^2E): Studied by High Resolution Photoelectron Spectra and AB Initio Theoretical Method

    NASA Astrophysics Data System (ADS)

    Mo, Yuxiang; Gao, Shuming; Dai, Zuyang; Li, Hua

    2013-06-01

    We report a combined experimental and theoretical study of the vibronic structure of CH_3F^+. The results show that tunneling splittings of the vibrational energy levels occur in CH_3F^+ due to the Jahn-Teller effect. Experimentally, we have measured a high resolution ZEKE spectrum of CH_3F up to 3500 cm^-1 above the ground state. Theoretically, we performed an ab initio calculation based on the diabatic model. The adiabatic potential energy surfaces (APES) of CH_3F^+ have been calculated at the MRCI/CAS/avq(t)z level and expressed as Taylor expansions with normal coordinates as variables. The energy gradients for the lower and upper APES, the derivative couplings between them, and the energies of the APES have been used to determine the coefficients in the Taylor expansion. The spin-vibronic energy levels have been calculated by accounting for all six vibrational modes and their couplings. The experimental ZEKE spectra were assigned based on the theoretical calculations. W. Domcke, D. R. Yarkony, and H. Köppel (Eds.), Conical Intersections: Electronic Structure, Dynamics and Spectroscopy (World Scientific, Singapore, 2004). M. S. Schuurman, D. E. Weinberg, and D. R. Yarkony, J. Chem. Phys. 127, 104309 (2007).

  2. Promoting mental wellbeing: developing a theoretically and empirically sound complex intervention.

    PubMed

    Millar, S L; Donnelly, M

    2014-06-01

    This paper describes the development of a complex intervention to promote mental wellbeing using the revised framework for developing and evaluating complex interventions produced by the UK Medical Research Council (UKMRC). Application of the first two phases of the framework is described--development and feasibility and piloting. The theoretical case and evidence base were examined analytically to explicate the theoretical and empirical foundations of the intervention. These findings informed the design of a 12-week mental wellbeing promotion programme providing early intervention for people showing signs of mental health difficulties. The programme is based on the theoretical constructs of self-efficacy, self-esteem, purpose in life, resilience and social support and comprises 10 steps. A mixed methods approach was used to conduct a feasibility study with community and voluntary sector service users and in primary care. A significant increase in mental wellbeing was observed following participation in the intervention. Qualitative data corroborated this finding and suggested that the intervention was feasible to deliver and acceptable to participants, facilitators and health professionals. The revised UKMRC framework can be successfully applied to the development of public health interventions. © The Author 2013. Published by Oxford University Press on behalf of Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  3. Supervision of Facilitators in a Multisite Study: Goals, Process, and Outcomes

    PubMed Central

    2010-01-01

    Objective To describe the aims, implementation, and desired outcomes of facilitator supervision for both interventions (treatment and control) in Project Eban and to present the Eban Theoretical Framework for Supervision that guided the facilitators’ supervision. The qualifications and training of supervisors and facilitators are also described. Design This article provides a detailed description of supervision in a multisite behavioral intervention trial. The Eban Theoretical Framework for Supervision is guided by 3 theories: cognitive behavior therapy, the Life-long Model of Supervision, and “Empowering supervisees to empower others: a culturally responsive supervision model.” Methods Supervision is based on the Eban Theoretical Framework for Supervision, which provides guidelines for implementing both interventions using goals, process, and outcomes. Results Because of effective supervision, the interventions were implemented with fidelity to the protocol and were standard across the multiple sites. Conclusions Supervision of facilitators is a crucial aspect of multisite intervention research quality assurance. It provides them with expert advice, optimizes the effectiveness of facilitators, and increases adherence to the protocol across multiple sites. Based on the experience in this trial, some of the challenges that arise when conducting a multisite randomized control trial and how they can be handled by implementing the Eban Theoretical Framework for Supervision are described. PMID:18724192

  4. The Effects of Amine Based Missile Fuels on the Activated Sludge Process.

    DTIC Science & Technology

    1979-10-01

    centrations found to cause no significant effect on sewage treatment efficiency are 74 mg/L for UDMH, 44 mg/L for HZ, and 蕔 mg/L for MMH. Ammonia ... EXPERIMENTAL METHODS AND MATERIALS: 1. Substrate Base; 2. Supplemental Requirements; a. Nitrogen ... Recycle ... Tyndall Sewage Treatment Plant ... Theoretical Effluent COD and Ammonia Nitrogen as a Function of Mean Cell

  5. 3-D surface profilometry based on modulation measurement by applying wavelet transform method

    NASA Astrophysics Data System (ADS)

    Zhong, Min; Chen, Feng; Xiao, Chao; Wei, Yongchao

    2017-01-01

    A new analysis of 3-D surface profilometry based on the modulation measurement technique with the Wavelet Transform method is proposed. As a tool excelling in multi-resolution and localization in the time and frequency domains, the Wavelet Transform method, with good localized time-frequency analysis ability and effective de-noising capacity, can extract the modulation distribution more accurately than the Fourier Transform method. Especially for the analysis of complex objects, more details of the measured object are retained. In this paper, the theoretical derivation of the Wavelet Transform method for obtaining the modulation values from a captured fringe pattern is given. Both computer simulation and an elementary experiment are used to show the validity of the proposed method by comparison with the results of the Fourier Transform method. The results show that the Wavelet Transform method performs better than the Fourier Transform method in modulation value retrieval.
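    As a rough illustration of modulation retrieval with a localized kernel, the sketch below evaluates a single complex-Morlet slice at the fringe carrier frequency. This is a toy version, not the authors' full wavelet-transform profilometry pipeline; the carrier frequency `f0`, sampling rate `fs`, and the amplitude normalization are assumptions.

```python
import numpy as np

def morlet_modulation(signal, f0, fs, w=6.0):
    """Amplitude (modulation) of a fringe carrier at frequency f0, taken
    from one complex-Morlet CWT slice at the carrier scale."""
    sigma = w / (2.0 * np.pi * f0)            # envelope width in seconds
    half = int(4.0 * sigma * fs)              # truncate the kernel at 4 sigma
    k = np.arange(-half, half + 1) / fs
    kernel = np.exp(2j * np.pi * f0 * k) * np.exp(-k**2 / (2.0 * sigma**2))
    kernel /= np.abs(kernel).sum() / 2.0      # unit-amplitude cosine -> ~1
    return np.abs(np.convolve(signal, np.conj(kernel), mode="same"))

# illustrative fringe: a 10 Hz carrier with modulation depth 0.7
fs, f0 = 200.0, 10.0
t = np.arange(400) / fs
fringe = 0.7 * np.cos(2.0 * np.pi * f0 * t)
mod = morlet_modulation(fringe, f0, fs)
```

    Away from the signal edges, the recovered modulation matches the 0.7 depth; the Gaussian envelope is what gives the method its localization, in contrast to the global Fourier approach.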

  6. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.
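    The idea of treating extrema as fixed points of a flow can be caricatured in one dimension. The sketch below tracks a minimizer along a homotopy from a convex functional to a non-convex one, relaxing a gradient flow to its fixed point at each continuation step; it is a schematic of homotopy continuation, not the paper's maximum-entropy flow equation.

```python
import numpy as np

def homotopy_minimize(grad0, grad1, x0, steps=200, relax=50, lr=0.05):
    """Track a minimizer of F_t = (1-t)*F0 + t*F1 from t=0 to t=1.
    At each continuation step the gradient flow is relaxed toward its
    fixed point, i.e. a stationary point of the interpolated functional."""
    x = float(x0)
    for t in np.linspace(0.0, 1.0, steps):
        for _ in range(relax):
            x -= lr * ((1.0 - t) * grad0(x) + t * grad1(x))
    return x

# convex start F0 = (x-2)^2/2; non-convex target F1 = x^4/4 - x^2/2
x_star = homotopy_minimize(lambda x: x - 2.0, lambda x: x**3 - x, x0=2.0)
```

    Starting from the convex problem selects a consistent branch of fixed points, so the flow lands in the target's nearby well at x = 1 rather than jumping basins.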

  7. Numerical optimization using flow equations

    NASA Astrophysics Data System (ADS)

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  8. Dynamics and Control of Newtonian and Viscoelastic Fluids

    NASA Astrophysics Data System (ADS)

    Lieu, Binh K.

    Transition to turbulence represents one of the most intriguing natural phenomena. Flows that are smooth and ordered may become complex and disordered as the flow strength increases. This process is known as transition to turbulence. In this dissertation, we develop theoretical and computational tools for analysis and control of transition and turbulence in shear flows of Newtonian, such as air and water, and complex viscoelastic fluids, such as polymers and molten plastics. Part I of the dissertation is devoted to the design and verification of sensor-free and feedback-based strategies for controlling the onset of turbulence in channel flows of Newtonian fluids. We use high fidelity simulations of the nonlinear flow dynamics to demonstrate the effectiveness of our model-based approach to flow control design. In Part II, we utilize systems theoretic tools to study transition and turbulence in channel flows of viscoelastic fluids. For flows with strong elastic forces, we demonstrate that flow fluctuations can experience significant amplification even in the absence of inertia. We use our theoretical developments to uncover the underlying physical mechanism that leads to this high amplification. For turbulent flows with polymer additives, we develop a model-based method for analyzing the influence of polymers on drag reduction. We demonstrate that our approach predicts drag reducing trends observed in full-scale numerical simulations. In Part III, we develop mathematical framework and computational tools for calculating frequency responses of spatially distributed systems. Using state-of-the-art automatic spectral collocation techniques and new integral formulation, we show that our approach yields more reliable and accurate solutions than currently available methods.

  9. Configurations of base-pair complexes in solutions. [nucleotide chemistry

    NASA Technical Reports Server (NTRS)

    Egan, J. T.; Nir, S.; Rein, R.; Macelroy, R.

    1978-01-01

    A theoretical search for the most stable conformations (i.e., stacked or hydrogen bonded) of the base pairs A-U and G-C in water, CCl4, and CHCl3 solutions is presented. The calculations of free energies indicate a significant role of the solvent in determining the conformations of the base-pair complexes. The application of the continuum method yields preferred conformations in good agreement with experiment. Results of the calculations with this method emphasize the importance of both the electrostatic interactions between the two bases in a complex, and the dipolar interaction of the complex with the entire medium. In calculations with the solvation shell method, the last term, i.e., dipolar interaction of the complex with the entire medium, was added. With this modification the prediction of the solvation shell model agrees both with the continuum model and with experiment, i.e., in water the stacked conformation of the bases is preferred.

  10. Bidirectional composition on lie groups for gradient-based image alignment.

    PubMed

    Mégret, Rémi; Authesserre, Jean-Baptiste; Berthoumieu, Yannick

    2010-09-01

    In this paper, a new formulation based on bidirectional composition on Lie groups (BCL) for parametric gradient-based image alignment is presented. Contrary to the conventional approaches, the BCL method takes advantage of the gradients of both template and current image without combining them a priori. Based on this bidirectional formulation, two methods are proposed and their relationship with state-of-the-art gradient based approaches is fully discussed. The first one, i.e., the BCL method, relies on the compositional framework to provide the minimization of the compensated error with respect to an augmented parameter vector. The second one, the projected BCL (PBCL), corresponds to a close approximation of the BCL approach. A comparative study is carried out on computational complexity, convergence rate, and frequency of convergence. Numerical experiments using a conventional benchmark show the performance improvement, especially for asymmetric levels of noise, which is also discussed from a theoretical point of view.

  11. The design and testing of a caring teaching model based on the theoretical framework of caring in the Chinese Context: a mixed-method study.

    PubMed

    Guo, Yujie; Shen, Jie; Ye, Xuchun; Chen, Huali; Jiang, Anli

    2013-08-01

    This paper aims to report the design and test the effectiveness of an innovative caring teaching model based on the theoretical framework of caring in the Chinese context. Since the 1970s, caring has been a core value in nursing education. In a previous study, a theoretical framework of caring in the Chinese context is explored employing a grounded theory study, considered beneficial for caring education. A caring teaching model was designed theoretically and a one group pre- and post-test quasi-experimental study was administered to test its effectiveness. From Oct, 2009 to Jul, 2010, a cohort of grade-2 undergraduate nursing students (n=64) in a Chinese medical school was recruited to participate in the study. Data were gathered through quantitative and qualitative methods to evaluate the effectiveness of the caring teaching model. The caring teaching model created an esthetic situation and experiential learning style for teaching caring that was integrated within the curricula. Quantitative data from the quasi-experimental study showed that the post-test scores of each item were higher than those on the pre-test (p<0.01). Thematic analysis of 1220 narratives from students' caring journals and reports of participant class observation revealed two main thematic categories, which reflected, from the students' points of view, the development of student caring character and the impact that the caring teaching model had on this regard. The model could be used as an integrated approach to teach caring in nursing curricula. It would also be beneficial for nursing administrators in cultivating caring nurse practitioners. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Conjugal violence in the perspective of "Family Health Strategy" professionals: a public health problem and the need to provide care for the women

    PubMed Central

    Gomes, Nadirlene Pereira; Erdmann, Alacoque Lorenzini

    2014-01-01

    Objective to construct a theoretical matrix based on the meanings of the interactions and actions experienced by the professionals regarding the nursing care practices and the health of women in situations of conjugal violence in the ambit of the Family Health Strategy. Methods research based on Grounded Theory. Following approval by the Research Ethics Committee, 52 professionals were interviewed in Santa Catarina, Brazil. The analysis was based on open, axial and selective codifications. Results the theoretical model was delimited based on the phenomenon "Recognizing conjugal violence as a public health problem, and the need for management of the care for the woman", which reflects the experience of the professionals in relation to care for the woman, as well as the meanings attributed to this care. Conclusions the phenomenon allows one to understand the movement of action and interaction regarding the care for the woman in a situation of conjugal violence. PMID:24553706

  13. Lunar-edge based on-orbit modulation transfer function (MTF) measurement

    NASA Astrophysics Data System (ADS)

    Cheng, Ying; Yi, Hongwei; Liu, Xinlong

    2017-10-01

    Modulation transfer function (MTF) is an important parameter for image quality evaluation of on-orbit optical imaging systems. Various methods have been proposed to determine the MTF of an imaging system based on images containing point, pulse and edge features. In this paper, the edge of the moon is used as a high-contrast target to measure the on-orbit MTF of imaging systems with a knife-edge method. The proposed method is an extension of the ISO 12233 slanted-edge Spatial Frequency Response test, except that the shape of the edge is a circular arc instead of a straight line. In order to obtain more accurate edge locations, and thus a more authentic edge spread function (ESF), we use a least-squares circle-fitting method to locate the lunar edge in the sub-pixel edge detection process. Finally, simulation results show that the MTF value at the Nyquist frequency calculated using our lunar-edge method is reliable and accurate, with an error of less than 2% compared with the theoretical MTF value.
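    The back end common to slanted-edge and lunar-edge methods, turning an oversampled edge-spread function into an MTF curve, can be sketched as follows. The Gaussian test edge and the Hanning window are assumptions for illustration, and the sub-pixel circle-fitting step is omitted.

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from an oversampled edge-spread function: differentiate to the
    line-spread function, window to limit truncation leakage, take the
    FFT magnitude, and normalize to the DC value."""
    lsf = np.gradient(esf) * np.hanning(esf.size)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# synthetic edge blurred by a Gaussian PSF (sigma = 2 samples)
x = np.arange(256, dtype=float)
psf = np.exp(-0.5 * ((x - 128.0) / 2.0) ** 2)
esf = np.cumsum(psf) / psf.sum()
mtf = mtf_from_esf(esf)
```

    The curve starts at 1 at DC and rolls off toward the Nyquist frequency, where the comparison against the theoretical MTF of the simulated blur would be made.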

  14. Contrast Gradient-Based Blood Velocimetry With Computed Tomography: Theory, Simulations, and Proof of Principle in a Dynamic Flow Phantom.

    PubMed

    Korporaal, Johannes G; Benz, Matthias R; Schindera, Sebastian T; Flohr, Thomas G; Schmidt, Bernhard

    2016-01-01

    The aim of this study was to introduce a new theoretical framework describing the relationship between the blood velocity, computed tomography (CT) acquisition velocity, and iodine contrast enhancement in CT images, and give a proof of principle of contrast gradient-based blood velocimetry with CT. The time-averaged blood velocity (v(blood)) inside an artery along the axis of rotation (z axis) is described as the mathematical division of a temporal (Hounsfield unit/second) and spatial (Hounsfield unit/centimeter) iodine contrast gradient. From this new theoretical framework, multiple strategies for calculating the time-averaged blood velocity from existing clinical CT scan protocols are derived, and contrast gradient-based blood velocimetry was introduced as a new method that can calculate v(blood) directly from contrast agent gradients and the changes therein. Exemplarily, the behavior of this new method was simulated for image acquisition with an adaptive 4-dimensional spiral mode consisting of repeated spiral acquisitions with alternating scan direction. In a dynamic flow phantom with flow velocities between 5.1 and 21.2 cm/s, the same acquisition mode was used to validate the simulations and give a proof of principle of contrast gradient-based blood velocimetry in a straight cylinder of 2.5 cm diameter, representing the aorta. In general, scanning with the direction of blood flow results in decreased and scanning against the flow in increased temporal contrast agent gradients. Velocity quantification becomes better for low blood and high acquisition speeds because the deviation of the measured contrast agent gradient from the temporal gradient will increase. In the dynamic flow phantom, a modulation of the enhancement curve, and thus alternation of the contrast agent gradients, can be observed for the adaptive 4-dimensional spiral mode and is in agreement with the simulations. 
The measured flow velocities in the downslopes of the enhancement curves were in good agreement with the expected values, although the accuracy and precision worsened with increasing flow velocities. The new theoretical framework increases the understanding of the relationship between the blood velocity, CT acquisition velocity, and iodine contrast enhancement in CT images, and it interconnects existing blood velocimetry methods with research on transluminary attenuation gradients. With these new insights, novel strategies for CT blood velocimetry, such as the contrast gradient-based method presented in this article, may be developed.
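
The framework's core relation (time-averaged velocity as the quotient of the temporal and spatial iodine contrast gradients) can be sketched numerically; the function name and enhancement samples below are illustrative, not values from the study:

```python
import numpy as np

# Sketch of the framework's core relation: v_blood = (temporal HU gradient,
# HU/s) / (spatial HU gradient, HU/cm). Sample values are synthetic.
def blood_velocity(hu_vs_time, dt, hu_vs_z, dz):
    """Time-averaged blood velocity estimate in cm/s."""
    temporal = np.gradient(hu_vs_time, dt)   # HU per second
    spatial = np.gradient(hu_vs_z, dz)       # HU per centimeter
    return np.mean(temporal) / np.mean(spatial)

# synthetic linear enhancement: 4 HU/s in time and 0.5 HU/cm along z -> 8 cm/s
hu_t = 4.0 * np.arange(0.0, 10.0, 0.5)   # sampled every dt = 0.5 s
hu_z = 0.5 * np.arange(0.0, 20.0, 1.0)   # sampled every dz = 1.0 cm
print(blood_velocity(hu_t, 0.5, hu_z, 1.0))  # → 8.0
```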

  15. Global Methods for Image Motion Analysis

    DTIC Science & Technology

    1992-10-01

    a variant of the same error function as in Adiv [2]. Another related approach was presented by Maybank [46,45]. Nearly all researchers in motion...with an application to stereo vision. In Proc. 7th Intern. Joint Conference on AI, pages 674-679, Vancouver, 1981. [45] S. J. Maybank. Algorithm for...analysing optical flow based on the least-squares method. Image and Vision Computing, 4:38-42, 1986. [46] S. J. Maybank. A Theoretical Study of Optical

  16. A Method for the Calculation of Lattice Energies of Complex Crystals with Application to the Oxides of Molybdenum

    NASA Technical Reports Server (NTRS)

    Chaney, William S.

    1961-01-01

    A theoretical study has been made of molybdenum dioxide and molybdenum trioxide in order to extend the knowledge of factors involved in the oxidation of molybdenum. New methods were developed for calculating the lattice energies based on electrostatic valence theory, and the coulombic, polarization, van der Waals, and repulsion energies were calculated. The crystal structure was examined and structure details were correlated with lattice energy.

  17. Boundary elements; Proceedings of the Fifth International Conference, Hiroshima, Japan, November 8-11, 1983

    NASA Astrophysics Data System (ADS)

    Brebbia, C. A.; Futagami, T.; Tanaka, M.

    The boundary-element method (BEM) in computational fluid and solid mechanics is examined in reviews and reports of theoretical studies and practical applications. Topics presented include the fundamental mathematical principles of BEMs, potential problems, EM-field problems, heat transfer, potential-wave problems, fluid flow, elasticity problems, fracture mechanics, plates and shells, inelastic problems, geomechanics, dynamics, industrial applications of BEMs, optimization methods based on the BEM, numerical techniques, and coupling.

  18. High spatial resolution shortwave infrared imaging technology based on time delay and digital accumulation method

    NASA Astrophysics Data System (ADS)

    Jia, Jianxin; Wang, Yueming; Zhuang, Xiaoqiong; Yao, Yi; Wang, Shengwei; Zhao, Ding; Shu, Rong; Wang, Jianyu

    2017-03-01

    Shortwave infrared (SWIR) imaging technology has attracted increasing attention owing to its ability to penetrate haze and smoke. In spaceborne remote sensing applications, the spatial resolution of SWIR imagers is lower than that of visible light (VIS) imagers, and it is difficult to balance spatial resolution against signal-to-noise ratio (SNR). Conventional methods of gaining SNR, such as enlarging the telescope aperture, image motion compensation, and analog time delay and integration (TDI), increase satellite cost, system complexity, or bring other drawbacks. In this paper, a time delay and digital accumulation (TDDA) method is proposed to achieve higher spatial resolution; theoretically, the method improves both the SNR and the non-uniformity of the system. A prototype SWIR imager consisting of opto-mechanics, a 1024 × 128 InGaAs detector, and electronics was designed and integrated to validate the TDDA method. Both measurements and experimental results indicate that the TDDA method improves the system SNR by approximately the square root of the number of accumulation stages, and that the non-uniformity of the system is also improved to some extent. The experimental results agree with the theoretical analysis. Based on these results, it is shown for the first time that a ground sample distance (GSD) of 1 m from a 500 km orbit is feasible for the SWIR waveband (0.9-1.7 μm) with 30 TDDA stages.
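
The claimed square-root SNR gain from digital accumulation can be checked with a small Monte Carlo sketch; the signal level, noise sigma and stage count are synthetic assumptions, not detector data:

```python
import numpy as np

# Monte Carlo check of the sqrt(N) SNR gain from digital accumulation.
rng = np.random.default_rng(0)
signal, sigma, stages, trials = 100.0, 10.0, 30, 20000

single = signal + rng.normal(0.0, sigma, trials)
snr_single = signal / single.std()

# accumulate `stages` reads: signal grows N-fold, noise only as sqrt(N)
accum = stages * signal + rng.normal(0.0, sigma, (trials, stages)).sum(axis=1)
snr_accum = stages * signal / accum.std()

print(snr_accum / snr_single)   # ≈ sqrt(30) ≈ 5.5
```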

  19. Implementation of Non-Destructive Evaluation and Process Monitoring in DLP-based Additive Manufacturing

    NASA Astrophysics Data System (ADS)

    Kovalenko, Iaroslav; Verron, Sylvain; Garan, Maryna; Šafka, Jiří; Moučka, Michal

    2017-04-01

    This article describes a method of in-situ process monitoring in a digital light processing (DLP) 3D printer. It is based on continuous measurement of the adhesion force between the printing surface and the bottom of the liquid resin bath, and is suitable only for bottom-up DLP printers. The control system compares the force at the moment the printed layer detaches from the bottom of the tank, when it reaches its largest value in the printing cycle, with the theoretical value. Implementing the suggested algorithm makes it possible to detect faults during the printing process.

  20. Small-displacement sensing system based on multiple total internal reflections in heterodyne interferometry.

    PubMed

    Wang, Shinn-Fwu; Chiu, Ming-Hung; Chen, Wei-Wu; Kao, Fu-Hsi; Chang, Rong-Seng

    2009-05-01

    A small-displacement sensing system based on multiple total internal reflections in heterodyne interferometry is proposed. In this paper, a small displacement can be obtained only by measuring the variation in phase difference between s- and p-polarization states for the total internal reflection effect. In order to improve the sensitivity, we increase the number of total internal reflections by using a parallelogram prism. The theoretical resolution of the method is better than 0.417 nm. The method has some merits, e.g., high resolution, high sensitivity, and real-time measurement. Also, its feasibility is demonstrated.
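
The sensitivity gain from multiple reflections can be illustrated with the standard Fresnel expression for the s-p phase difference on total internal reflection; the prism index, angle and reflection count below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def tir_phase_diff(theta, n):
    """s-p phase difference (rad) for one total internal reflection (Fresnel);
    theta is the incidence angle in the denser medium, n = n_outside/n_prism < 1."""
    s, c = np.sin(theta), np.cos(theta)
    return 2.0 * np.arctan(c * np.sqrt(s**2 - n**2) / s**2)

n = 1.0 / 1.51                    # assumed glass (n = 1.51) to air interface
theta = np.deg2rad(45.0)          # above the critical angle (about 41.5 deg)
dtheta = 1e-6                     # tiny angular change caused by the displacement
one = tir_phase_diff(theta + dtheta, n) - tir_phase_diff(theta, n)
N = 10                            # reflections inside the parallelogram prism
print(round(N * one / one, 6))    # → 10.0: phase sensitivity scales with N
```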

  1. Command Filtering-Based Fuzzy Control for Nonlinear Systems With Saturation Input.

    PubMed

    Yu, Jinpeng; Shi, Peng; Dong, Wenjie; Lin, Chong

    2017-09-01

    In this paper, command filtering-based fuzzy control is designed for uncertain multi-input multi-output (MIMO) nonlinear systems with saturation nonlinearity at the input. First, the command filtering method is employed to deal with the explosion of complexity caused by the differentiation of virtual controllers. Then, fuzzy logic systems are utilized to approximate the nonlinear functions of the MIMO systems. Furthermore, an error compensation mechanism is introduced to overcome the drawback of the dynamic surface approach. The developed method guarantees that all signals of the system are bounded. The effectiveness and advantages of the theoretical result are demonstrated by a simulation example.

  2. Geometric, Kinematic and Radiometric Aspects of Image-Based Measurements

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu

    2002-01-01

    This paper discusses theoretical foundations of quantitative image-based measurements for extracting and reconstructing geometric, kinematic and dynamic properties of observed objects. New results are obtained by using a combination of methods in perspective geometry, differential geometry, radiometry, kinematics and dynamics. Specific topics include the perspective projection transformation, the perspective developable conical surface, perspective projection under surface constraint, perspective invariants, the point correspondence problem, motion fields of curves and surfaces, and motion equations of image intensity. The methods given in this paper are useful for determining morphology and motion fields of deformable bodies such as elastic bodies, viscoelastic media and fluids.

  3. Research the Gait Characteristics of Human Walking Based on a Robot Model and Experiment

    NASA Astrophysics Data System (ADS)

    He, H. J.; Zhang, D. N.; Yin, Z. W.; Shi, J. H.

    2017-02-01

    In order to research the gait characteristics of human walking in different walking modes, a robot model with a single degree of freedom is put forward in this paper. The control models of the robot system are established with the Matlab/Simulink toolbox, and the gait characteristics of walking straight, uphill, turning, and going up and down stairs are analyzed with these models. To verify the correctness of the theoretical analysis, an experiment was carried out; the comparison shows that the theoretical results are in good agreement with the experimental ones. The causes of the amplitude and phase errors are analyzed and improved methods are given. The robot model and the experimental approach provide a foundation for further research on the various gait characteristics of exoskeleton robots.

  4. Linear transforms for Fourier data on the sphere: application to high angular resolution diffusion MRI of the brain.

    PubMed

    Haldar, Justin P; Leahy, Richard M

    2013-05-01

    This paper presents a novel family of linear transforms that can be applied to data collected from the surface of a 2-sphere in three-dimensional Fourier space. This family of transforms generalizes the previously-proposed Funk-Radon Transform (FRT), which was originally developed for estimating the orientations of white matter fibers in the central nervous system from diffusion magnetic resonance imaging data. The new family of transforms is characterized theoretically, and efficient numerical implementations of the transforms are presented for the case when the measured data is represented in a basis of spherical harmonics. After these general discussions, attention is focused on a particular new transform from this family that we name the Funk-Radon and Cosine Transform (FRACT). Based on theoretical arguments, it is expected that FRACT-based analysis should yield significantly better orientation information (e.g., improved accuracy and higher angular resolution) than FRT-based analysis, while maintaining the strong characterizability and computational efficiency of the FRT. Simulations are used to confirm these theoretical characteristics, and the practical significance of the proposed approach is illustrated with real diffusion weighted MRI brain data. These experiments demonstrate that, in addition to having strong theoretical characteristics, the proposed approach can outperform existing state-of-the-art orientation estimation methods with respect to measures such as angular resolution and robustness to noise and modeling errors. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Precise determination of protein extinction coefficients under native and denaturing conditions using SV-AUC.

    PubMed

    Hoffmann, Andreas; Grassl, Kerstin; Gommert, Janine; Schlesak, Christian; Bepperling, Alexander

    2018-04-17

    The accurate determination of protein concentration is an important though non-trivial task during the development of a biopharmaceutical. The fundamental prerequisite for this is the availability of an accurate extinction coefficient. Common approaches for the determination of an extinction coefficient for a given protein are either based on the theoretical prediction utilizing the amino acid sequence or the photometric determination combined with a measurement of absolute protein concentration. Here, we report on an improved SV-AUC based method utilizing an analytical ultracentrifuge equipped with absorbance and Rayleigh interference optics. Global fitting of datasets helped to overcome some of the obstacles encountered with the traditional method employing synthetic boundary cells. Careful calculation of dn/dc values taking glycosylation and solvent composition into account allowed the determination of the extinction coefficients of monoclonal antibodies and an Fc-fusion protein under native as well as under denaturing conditions. An intra-assay precision of 0.9% and an accuracy of 1.8% compared to the theoretical value was achieved for monoclonal antibodies. Due to the large number of data points of a single dataset, no meaningful difference between the ProteomeLab XL-I and the new Optima AUC platform could be observed. Thus, the AUC-based approach offers a precise, convenient and versatile alternative to conventional methods like total amino acid analysis (AAA).

  6. An improved method for predicting the effects of flight on jet mixing noise

    NASA Technical Reports Server (NTRS)

    Stone, J. R.

    1979-01-01

    The NASA method (1976) for predicting the effects of flight on jet mixing noise was improved. The earlier method agreed reasonably well with experimental flight data for jet velocities up to about 520 m/sec (approximately 1700 ft/sec). The poorer agreement at high jet velocities appeared to be due primarily to the manner in which supersonic convection effects were formulated. The purely empirical supersonic convection formulation of the earlier method was replaced by one based on theoretical considerations. Other improvements of an empirical nature included were based on model-jet/free-jet simulated flight tests. The revised prediction method is presented and compared with experimental data obtained from the Bertin Aerotrain with a J85 engine, the DC-10 airplane with JT9D engines, and the DC-9 airplane with refanned JT8D engines. It is shown that the new method agrees better with the data base than a recently proposed SAE method.

  7. Theoretical modeling of UV-Vis absorption and emission spectra in liquid state systems including vibrational and conformational effects: explicit treatment of the vibronic transitions.

    PubMed

    D'Abramo, Marco; Aschi, Massimiliano; Amadei, Andrea

    2014-04-28

    Here, we extend a recently introduced theoretical-computational procedure [M. D'Alessandro, M. Aschi, C. Mazzuca, A. Palleschi, and A. Amadei, J. Chem. Phys. 139, 114102 (2013)] to include quantum vibrational transitions in modelling electronic spectra of atomic molecular systems in condensed phase. The method is based on the combination of Molecular Dynamics simulations and quantum chemical calculations within the Perturbed Matrix Method approach. The main aim of the presented approach is to reproduce as much as possible the spectral line shape which results from a subtle combination of environmental and intrinsic (chromophore) mechanical-dynamical features. As a case study, we were able to model the low energy UV-vis transitions of pyrene in liquid acetonitrile in good agreement with the experimental data.

  8. Design of c-band telecontrol transmitter local oscillator for UAV data link

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Qu, Yu; Song, Zuxun

    2018-01-01

    A C-band local oscillator of an Unmanned Aerial Vehicle (UAV) data link radio frequency (RF) transmitter unit with high-stability, high-precision and lightweight was designed in this paper. Based on the highly integrated broadband phase-locked loop (PLL) chip HMC834LP6GE, the system performed fractional-N control by internal modules programming to achieve low phase noise and small frequency resolution. The simulation and testing methods were combined to optimize and select the loop filter parameters to ensure the high precision and stability of the frequency synthesis output. The theoretical analysis and engineering prototype measurement results showed that the local oscillator had stable output frequency, accurate frequency step, high spurious suppression and low phase noise, and met the design requirements. The proposed design idea and research method have theoretical guiding significance for engineering practice.
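
The fractional-N principle such a design relies on reduces to simple divider arithmetic; the sketch below uses generic, illustrative settings rather than the HMC834LP6GE register map:

```python
# Generic fractional-N synthesis arithmetic (not the HMC834LP6GE register
# map): the VCO runs at f_pfd * (N_int + FRAC/MOD), so the smallest
# frequency step is f_pfd / MOD. All numbers below are assumptions.
f_pfd = 50e6                       # assumed phase-detector frequency, Hz
n_int, frac, mod_ = 100, 1, 100    # assumed divider settings
f_vco = f_pfd * (n_int + frac / mod_)
step = f_pfd / mod_
print(f_vco / 1e9, step / 1e3)     # ≈ 5.0005 GHz output, 500.0 kHz step
```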

  9. Use of borated polyethylene to improve low energy response of a prompt gamma based neutron dosimeter

    NASA Astrophysics Data System (ADS)

    Priyada, P.; Ashwini, U.; Sarkar, P. K.

    2016-05-01

    The feasibility of using a combined sample of borated polyethylene and normal polyethylene to estimate neutron ambient dose equivalent from measured prompt gamma emissions is investigated theoretically to demonstrate improvements in the low energy neutron dose response compared to polyethylene alone. Monte Carlo simulations have been carried out using the FLUKA code to calculate the response of boron, hydrogen and carbon prompt gamma emissions to monoenergetic neutrons. The weighted least squares method is employed to arrive at the best linear combination of these responses that approximates the ICRP fluence-to-dose conversion coefficients well in the energy range of 10⁻⁸ MeV to 14 MeV. The configuration of the combined system is optimized through FLUKA simulations. The proposed method is validated theoretically with five different workplace neutron spectra with satisfactory outcome.
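
The weighted least squares step, finding element weights whose combined response tracks the conversion coefficients, can be sketched as follows; the response matrix, target vector and weighting are random placeholders, not FLUKA results:

```python
import numpy as np

# Weighted least squares sketch: choose weights w so that the combined
# prompt-gamma responses R @ w approximate the dose-conversion coefficients d.
rng = np.random.default_rng(1)
n_energies = 25
R = rng.random((n_energies, 3))      # columns: B, H, C response vs. energy
d = rng.random(n_energies)           # target conversion coefficients
W = np.diag(1.0 / (0.1 + d))         # example weighting matrix

# minimize || W (R w - d) ||^2
w, *_ = np.linalg.lstsq(W @ R, W @ d, rcond=None)
print(w.shape)   # → (3,): one weight per element's response
```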

  10. Theoretical and experimental analyses to determine the effects of crystal orientation and grain size on the thermoelectric properties of oblique deposited bismuth telluride thin films

    NASA Astrophysics Data System (ADS)

    Morikawa, Satoshi; Satake, Yuji; Takashiri, Masayuki

    2018-06-01

    The effects of crystal orientation and grain size on the thermoelectric properties of Bi2Te3 thin films were investigated by conducting experimental and theoretical analyses. To vary the crystal orientation and grain size, we performed oblique deposition, followed by thermal annealing treatment. The crystal orientation decreased as the oblique angle was increased, while the grain size was not changed significantly. The thermoelectric properties were measured at room temperature. A theoretical analysis was performed using a first principles method based on density functional theory. Then the semi-classical Boltzmann transport equation was used in the relaxation time approximation, with the effect of grain size included. Furthermore, the effect of crystal orientation was included in the calculation based on a simple semi-experimental model. A maximum power factor of 11.6 µW/(cm·K²) was obtained at an oblique angle of 40°. The calculated thermoelectric properties were in very good agreement with the experimentally measured values.

  11. Prediction of Forming Limit Diagram for Seamed Tube Hydroforming Based on Thickness Gradient Criterion

    NASA Astrophysics Data System (ADS)

    Chen, Xianfeng; Lin, Zhongqin; Yu, Zhongqi; Chen, Xinping; Li, Shuhui

    2011-08-01

    This study establishes the forming limit diagram (FLD) for QSTE340 seamed tube hydroforming by finite element method (FEM) simulation. FLDs are commonly obtained from experiment, theoretical calculation and FEM simulation, but for tube hydroforming both the experimental and the theoretical means are restricted in application owing to equipment costs and the lack of authoritative theoretical knowledge. In this paper, a novel approach to predicting the forming limit using a thickness gradient criterion (TGC) is presented for seamed tube hydroforming. First, tube bulge tests and uniaxial tensile tests are performed to obtain the stress-strain curves for the three parts of the tube. Then one FE model for classical tube free hydroforming and another FE model for a novel experimental apparatus applying lateral compression force and internal pressure are constructed. After that, the forming limit strain is calculated based on the TGC in the FEM simulation. Good agreement between the simulation and experimental results is indicated. By combining the TGC and FEM, an alternative way of predicting the forming limit with sufficient accuracy and convenience is provided.
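
A thickness gradient criterion can be sketched as a simple neighbour-thickness ratio test; the critical ratio of 0.92 is a commonly cited literature value assumed here, not a number taken from this paper:

```python
import numpy as np

# Sketch of a thickness gradient criterion: flag necking when the thickness
# ratio between adjacent elements drops below a critical value (0.92 is an
# assumed literature value, not from this paper).
def necking_detected(thickness, critical_ratio=0.92):
    t = np.asarray(thickness, dtype=float)
    ratios = np.minimum(t[1:] / t[:-1], t[:-1] / t[1:])  # thinner / thicker
    return bool(np.any(ratios < critical_ratio))

print(necking_detected([1.00, 0.99, 0.98, 0.97]))  # gradual thinning → False
print(necking_detected([1.00, 0.99, 0.85, 0.97]))  # sharp local neck → True
```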

  12. The development of retrosynthetic glycan libraries to profile and classify the human serum N-linked glycome.

    PubMed

    Kronewitter, Scott R; An, Hyun Joo; de Leoz, Maria Lorna; Lebrilla, Carlito B; Miyamoto, Suzanne; Leiserowitz, Gary S

    2009-06-01

    Annotation of the human serum N-linked glycome is a formidable challenge but is necessary for disease marker discovery. A new theoretical glycan library was constructed and proposed to provide all possible glycan compositions in serum. It was developed based on established glycobiology and retrosynthetic state-transition networks. We find that at least 331 compositions are possible in the serum N-linked glycome. By pairing the theoretical glycan mass library with a high mass accuracy and high-resolution MS, human serum glycans were effectively profiled. Correct isotopic envelope deconvolution to monoisotopic masses and the high mass accuracy instruments drastically reduced the amount of false composition assignments. The high throughput capacity enabled by this library permitted the rapid glycan profiling of large control populations. With the use of the library, a human serum glycan mass profile was developed from 46 healthy individuals. This paper presents a theoretical N-linked glycan mass library that was used for accurate high-throughput human serum glycan profiling. Rapid methods for evaluating a patient's glycome are instrumental for studying glycan-based markers.

  13. Experimental and theoretical studies of the thermal behavior of titanium dioxide-SnO2 based composites.

    PubMed

    Voga, G P; Coelho, M G; de Lima, G M; Belchior, J C

    2011-04-07

    In this paper we report experimental and theoretical studies concerning the thermal behavior of some organotin-Ti(IV) oxides employed as precursors for TiO(2)/SnO(2) semiconducting based composites with photocatalytic properties. The organotin-TiO(2) supported materials were obtained by chemical reactions of SnBu(3)Cl (Bu = butyl) and TiCl(4) with NH(4)OH in ethanol, in order to impregnate organotin oxide into a TiO(2) matrix. A theoretical model was developed to support the experimental procedures: the kinetic parameters, namely the frequency factor (A), the activation energy, and the reaction order (n), can be estimated through artificial intelligence methods. A genetic algorithm, fuzzy logic, and Petri neural nets were used to determine the kinetic parameters as a function of temperature. With this in mind, three precursors were prepared in order to obtain composites with Sn/TiO(2) ratios of 0% (1), 15% (2), and 30% (3) by weight, respectively. The thermal behavior of products (1-3) was studied by thermogravimetric experiments in oxygen.

  14. Analysis of laser energy characteristics of laser guided weapons based on the hardware-in-the-loop simulation system

    NASA Astrophysics Data System (ADS)

    Zhu, Yawen; Cui, Xiaohong; Wang, Qianqian; Tong, Qiujie; Cui, Xutai; Li, Chenyu; Zhang, Le; Peng, Zhong

    2016-11-01

    The hardware-in-the-loop simulation system, which provides precise, controllable and repeatable test conditions, is an important part of the development of semi-active laser (SAL) guided weapons. In this paper, the laser energy chain characteristics were studied, providing a theoretical foundation for SAL guidance technology and the hardware-in-the-loop simulation system. Firstly, a simplified equation was proposed to adjust the radar equation according to the principles of the hardware-in-the-loop simulation system. Secondly, a theoretical model and calculation method were given for the energy chain characteristics based on the hardware-in-the-loop simulation system. We then studied the reflection characteristics of the target and the influence of the missile-target distance together with major factors such as weather. Finally, the accuracy of the model was verified by experiment, as the measured values generally follow the theoretical results from the model. The experimental results also revealed that the attenuation ratio of the laser energy exhibited a non-linear change with pulse number, which is in accord with the actual condition.

  15. Comparison of Motivational Interviewing with Acceptance and Commitment Therapy: A conceptual and clinical review

    PubMed Central

    Bricker, J.B.; Tollison, S.J.

    2011-01-01

    Background Motivational Interviewing (MI) and Acceptance and Commitment Therapy (ACT) are two emerging therapies that focus on commitment to behavior change. Aim Provide the first systematic comparison of MI with ACT. Methods A systematic comparison of MI and ACT at the conceptual level, with a focus on their philosophical and theoretical bases, and at the clinical level, with a focus on the therapeutic relationship, use of language in therapy, and use of values in therapy. Results Conceptually, MI & ACT have distinct philosophical bases. MI’s theoretical basis focuses on language content, whereas ACT’s theoretical basis focuses on language process. Clinically, ACT and MI have distinct approaches to the therapeutic relationship, fundamentally different foci on client language, and different uses of client values to motivate behavior change. ACT, but not MI, directly targets the willingness to experience thoughts, feelings, and sensations. Conclusions Despite their conceptual and clinical differences, MI and ACT are complementary interventions. Collaborations between MI and ACT researchers may yield fruitful cross-fertilization research on core processes and clinical outcomes. PMID:21338532

  16. Lung Cancer Screening Participation: Developing a Conceptual Model to Guide Research

    PubMed Central

    Carter-Harris, Lisa; Davis, Lorie L.; Rawl, Susan M.

    2017-01-01

    Purpose To describe the development of a conceptual model to guide research focused on lung cancer screening participation from the perspective of the individual in the decision-making process. Methods Based on a comprehensive review of empirical and theoretical literature, a conceptual model was developed linking key psychological variables (stigma, medical mistrust, fatalism, worry, and fear) to the health belief model and precaution adoption process model. Results Proposed model concepts have been examined in prior research of either lung or other cancer screening behavior. To date, a few studies have explored a limited number of variables that influence screening behavior in lung cancer specifically. Therefore, relationships among concepts in the model have been proposed and future research directions presented. Conclusion This proposed model is an initial step to support theoretically based research. As lung cancer screening becomes more widely implemented, it is critical to theoretically guide research to understand variables that may be associated with lung cancer screening participation. Findings from future research guided by the proposed conceptual model can be used to refine the model and inform tailored intervention development. PMID:28304262

  17. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.

    PubMed

    Huang, Jiyan; Zhang, Ying; Luo, Shan

    2017-12-15

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed to increase positioning accuracy for a multi-station dual-frequency radars system in this paper. The effects of signal noise ratio and the number of samples on the performance of range estimation are also analyzed in the paper. Furthermore, both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The simulation results verified the proposed method.
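
The first weighted-least-squares pass of such an estimator can be sketched with the standard range-equation linearisation (a Chan-Ho-style step); the station layout and target below are invented, the ranges are noise-free, and the second refinement pass is omitted:

```python
import numpy as np

# First-step WLS sketch for range-based localization: r_i^2 = ||x||^2
# - 2 s_i . x + ||s_i||^2, treating ||x||^2 as an extra unknown.
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.0, 4.0])
r = np.linalg.norm(stations - target, axis=1)   # noise-free ranges

A = np.hstack([-2.0 * stations, np.ones((4, 1))])
b = r**2 - np.sum(stations**2, axis=1)
W = np.eye(4)                                   # unit weights in this sketch
theta = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
print(np.round(theta[:2], 6))                   # → [3. 4.]
```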

  19. Delivering spacecraft control centers with embedded knowledge-based systems: The methodology issue

    NASA Technical Reports Server (NTRS)

    Ayache, S.; Haziza, M.; Cayrac, D.

    1994-01-01

    Matra Marconi Space (MMS) occupies a leading place in Europe in the domain of satellite and space data processing systems. The maturity of knowledge-based systems (KBS) technology, and the theoretical and practical experience acquired in the development of prototype, pre-operational and operational applications, make it possible today to consider the wide operational deployment of KBSs in space applications. In this perspective, MMS has to prepare the introduction of the new methods and support tools that will form the basis of the development of such systems. This paper introduces elements of the MMS methodology initiatives in the domain and the main rationale that motivated the approach. These initiatives develop along two main axes: knowledge engineering methods and tools, and a hybrid method approach for coexisting knowledge-based and conventional developments.

  20. Influence analysis of fluctuation parameters on flow stability based on uncertainty method

    NASA Astrophysics Data System (ADS)

    Meng, Tao; Fan, Shangchun; Wang, Chi; Shi, Huichao

    2018-05-01

    The relationship between flow fluctuation and pressure in a flow facility is studied theoretically and experimentally in this paper, and a method for measuring the flow fluctuation is proposed. According to the synchronicity of pressure and flow fluctuation, the amplitude of the flow fluctuation is calculated from the pressure measured in the flow facility, and measurement of the flow fluctuation over a wide frequency range is realized. Based on the proposed method, uncertainty analysis is used to evaluate the influences of different parameters on the flow fluctuation with the help of an established sample-based stochastic model, and the parameters with the greatest influence are identified, which can serve as a reference for the optimization design and stability improvement of the flow facility.

  1. Phase unwrapping using region-based markov random field model.

    PubMed

    Dong, Ying; Ji, Jim

    2010-01-01

    Phase unwrapping is a classical problem in Magnetic Resonance Imaging (MRI), Interferometric Synthetic Aperture Radar and Sonar (InSAR/InSAS), fringe pattern analysis, and spectroscopy. Although many methods have been proposed to address this problem, robust and effective phase unwrapping remains a challenge. This paper presents a novel phase unwrapping method using a region-based Markov Random Field (MRF) model. Specifically, the phase image is segmented into regions within which the phase is not wrapped. Then, the phase image is unwrapped between different regions using an improved Highest Confidence First (HCF) algorithm to optimize the MRF model. The proposed method has desirable theoretical properties as well as an efficient implementation. Simulations and experimental results on MRI images show that the proposed method provides phase unwrapping similar to or better than the Phase Unwrapping MAx-flow/min-cut (PUMA) and ZπM methods.
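
What "unwrapping" recovers is easiest to see in one dimension; this sketch uses NumPy's simple path-following unwrap, not the paper's region-based MRF/HCF method, which operates on 2-D images:

```python
import numpy as np

# 1-D illustration of the wrapping problem: a smooth phase ramp is folded
# into (-pi, pi] by the measurement, and unwrapping restores the 2*pi jumps.
true_phase = np.linspace(0.0, 4.0 * np.pi, 50)     # smooth ramp, 0 to 4*pi
wrapped = np.angle(np.exp(1j * true_phase))        # folded into (-pi, pi]
recovered = np.unwrap(wrapped)                     # restore the 2*pi jumps
print(np.allclose(recovered, true_phase))          # → True
```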

  2. The rank correlated FSK model for prediction of gas radiation in non-uniform media, and its relationship to the rank correlated SLW model

    NASA Astrophysics Data System (ADS)

    Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic

    2018-07-01

    Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.

  3. Tailoring a response to youth binge drinking in an Aboriginal Australian community: a grounded theory study.

    PubMed

    McCalman, Janya; Tsey, Komla; Bainbridge, Roxanne; Shakeshaft, Anthony; Singleton, Michele; Doran, Christopher

    2013-08-07

    While Aboriginal Australian health providers prioritise identification of local community health needs and strategies, they do not always have the opportunity to access or interpret evidence-based literature to inform health improvement innovations. Research partnerships are therefore important when designing or modifying Aboriginal Australian health improvement initiatives and their evaluation. However, there are few models that outline the pragmatic steps by which research partners negotiate to develop, implement and evaluate community-based initiatives. The objective of this paper is to provide a theoretical model of the tailoring of health improvement initiatives by Aboriginal community-based service providers and partner university researchers. It draws from the case of the Beat da Binge community-initiated youth binge drinking harm reduction project in Yarrabah. A theoretical model was developed using the constructivist grounded theory methods of concurrent sampling, data collection and analysis. Data was obtained from the recordings of reflective Community-Based Participatory Research (CBPR) processes with Aboriginal community partners and young people, and university researchers. CBPR data was supplemented with interviews with theoretically sampled project participants. The transcripts of CBPR recordings and interviews were imported into NVIVO and coded to identify categories and theoretical constructs. The identified categories were then developed into higher order concepts and the relationships between concepts identified until the central purpose of those involved in the project and the core process that facilitated that purpose were identified. 
The tailored alcohol harm reduction project resulted in clarification of the underlying local determinants of binge drinking, and a shift in the project design from a social marketing awareness campaign (based on short-term events) to a more robust advocacy for youth mentoring into education, employment and training. The community-based process undertaken by the research partnership to tailor the design, implementation and evaluation of the project was theorised as a model incorporating four overlapping stages of negotiating knowledges and meanings to tailor a community response. The theoretical model can be applied in spaces where local Aboriginal and scientific knowledges meet to support the tailored design, implementation and evaluation of other health improvement projects, particularly those that originate from Aboriginal communities themselves.

  4. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
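The GIHS baseline that the abstract builds on injects the same panchromatic detail into every multispectral band. A minimal sketch of that baseline (not the proposed spectral-modulation variant) with hypothetical toy arrays:

```python
import numpy as np

# GIHS pansharpening: each fused band adds the detail (Pan - I) to the
# original MS band, where I is the intensity (mean) of the MS bands.
rng = np.random.default_rng(0)
ms = rng.random((3, 32, 32))          # toy multispectral bands (B, H, W)
pan = rng.random((32, 32))            # toy panchromatic band at the same size

intensity = ms.mean(axis=0)           # I component of the GIHS transform
fused = ms + (pan - intensity)        # same spatial detail injected per band
```

Because the injected detail is identical across bands, band differences (the hue direction) are preserved exactly, so any spectral distortion appears as a change in saturation — which is precisely the term the proposed method tries to control.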

  5. On Short-Time Estimation of Vocal Tract Length from Formant Frequencies

    PubMed Central

    Lammert, Adam C.; Narayanan, Shrikanth S.

    2015-01-01

Vocal tract length is highly variable across speakers and determines many aspects of the acoustic speech signal, making it an essential parameter to consider for explaining behavioral variability. A method for accurate estimation of vocal tract length from formant frequencies would afford normalization of interspeaker variability and facilitate acoustic comparisons across speakers. A framework for considering estimation methods is developed from the basic principles of vocal tract acoustics, and an estimation method is proposed that follows naturally from this framework. The proposed method is evaluated using acoustic characteristics of simulated vocal tracts ranging from 14 to 19 cm in length, as well as real-time magnetic resonance imaging data with synchronous audio from five speakers whose vocal tracts range from 14.5 to 18.0 cm in length. Evaluations show improvements in accuracy over previously proposed methods, with root mean square errors of 0.631 and 1.277 cm on simulated and human speech data, respectively. Empirical results show that the effectiveness of the proposed method is based on emphasizing higher formant frequencies, which seem less affected by speech articulation. Theoretical predictions of formant sensitivity reinforce this empirical finding, and theoretical insights into the reasons for these differences in formant sensitivity are discussed. PMID:26177102
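The acoustic framework underlying such estimators is the uniform closed-open tube, whose resonances are F_n = (2n - 1)·c / (4L). Inverting any formant gives a length estimate; a simple hypothetical variant (the paper's method additionally weights higher formants) can be sketched as:

```python
# Quarter-wavelength resonator model: a uniform tube of length L, closed at
# the glottis and open at the lips, resonates at F_n = (2n - 1) * c / (4 * L).
C = 35000.0  # speed of sound in warm, moist air, cm/s (common approximation)

def vtl_from_formant(f_hz, n):
    """Length estimate (cm) from the n-th formant of an ideal uniform tube."""
    return (2 * n - 1) * C / (4.0 * f_hz)

# Formants of an ideal 17.5 cm tube are 500, 1500, 2500, 3500 Hz,
# so every formant inverts back to the same length.
formants = [500.0, 1500.0, 2500.0, 3500.0]
estimates = [vtl_from_formant(f, n) for n, f in enumerate(formants, start=1)]
```

For real speech the per-formant estimates disagree because articulation perturbs the lower formants most, which is why emphasizing higher formants, as the abstract reports, improves accuracy.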

  6. A simple method used to evaluate phase-change materials based on focused-ion beam technique

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Wu, Liangcai; Rao, Feng; Song, Zhitang; Lv, Shilong; Zhou, Xilin; Du, Xiaofeng; Cheng, Yan; Yang, Pingxiong; Chu, Junhao

    2013-05-01

A nanoscale phase-change line cell based on the focused-ion beam (FIB) technique has been proposed to evaluate the electrical properties of phase-change materials. Thanks to the FIB-deposited SiO2 hardmask, only one etching step is needed during fabrication of the cell. Reversible phase-change behavior is observed in line cells based on Al-Sb-Te and Ge-Sb-Te films. The low power consumption of the Al-Sb-Te based cell is explained by theoretical calculation accompanied by thermal simulation. This line cell offers a simple and reliable method for evaluating the application prospects of a given phase-change material.

  7. Networking for Leadership, Inquiry, and Systemic Thinking: A New Approach to Inquiry-Based Learning.

    ERIC Educational Resources Information Center

    Byers, Al; Fitzgerald, Mary Ann

    2002-01-01

    Points out difficulties with a change from traditional teaching methods to a more inquiry-centered approach. Presents theoretical and empirical foundations for the Networking for Leadership, Inquiry, and Systemic Thinking (NLIST) initiative sponsored by the Council of State Science Supervisors (CSSS) and NASA, describes its progress, and outlines…

  8. Determination of the Conservation Time of Periodicals for Optimal Shelf Maintenance of a Library.

    ERIC Educational Resources Information Center

    Miyamoto, Sadaaki; Nakayama, Kazuhiko

    1981-01-01

    Presents a method based on a constrained optimization technique that determines the time of removal of scientific periodicals from the shelf of a library. A geometrical interpretation of the theoretical result is given, and a numerical example illustrates how the technique is applicable to real bibliographic data. (FM)

  9. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    ERIC Educational Resources Information Center

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
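The asymptotic recipe behind such power formulas is generic: a Wald statistic for an effect β with per-observation variance v is approximately N(β·√(n/v), 1), and power follows from the normal CDF. A hedged sketch of that general calculation (the paper's DIF-specific variance expressions are not reproduced here):

```python
from math import sqrt
from statistics import NormalDist

def wald_power(beta, v, n, alpha=0.05):
    """Approximate power of a two-sided Wald test.

    beta: true effect size; v: asymptotic per-observation variance of the
    estimator; n: sample size. Under the alternative, the Wald statistic
    is approximately N(beta * sqrt(n / v), 1).
    """
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = abs(beta) * sqrt(n / v)     # noncentrality of the test statistic
    phi = NormalDist().cdf
    return phi(ncp - z) + phi(-ncp - z)
```

Solving this expression for n at a target power (e.g. 0.8) yields the sample-size formulas the abstract refers to; likelihood ratio tests admit an analogous noncentral chi-square calculation.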

  10. A Phenomenological Study on International Doctoral Students' Acculturation Experiences at a U.S. University

    ERIC Educational Resources Information Center

    Campbell, Throy A.

    2015-01-01

    A phenomenological method was used to analyze ten international doctoral students' descriptions of their lived experiences at a United States (U.S.) university. The analysis was based on the theoretical premise of how students acculturate to their new educational settings. Three broad overlapping themes emerged: (1) participants' past experiences…

  11. Reasons and Methods to Learn the Management

    ERIC Educational Resources Information Center

    Li, Hongxin; Ding, Mengchun

    2010-01-01

    Reasons for learning the management include (1) perfecting the knowledge structure, (2) the management is the base of all organizations, (3) one person may be the manager or the managed person, (4) the management is absolutely not simple knowledge, and (5) the learning of the theoretical knowledge of the management can not be replaced by the…

  12. Using Twitter in Higher Education in Spain and the USA

    ERIC Educational Resources Information Center

    Tur, Gemma; Marín, Victoria I.; Carpenter, Jeffrey

    2017-01-01

    This article examines student teachers' use and perceptions of Twitter, based on a mixed-method comparative approach. Participants (n = 153) were education majors who used Twitter as a part of required coursework in their programs at two universities in Spain and the United States. The theoretical background covers research on international work…

  13. Patterns of Informal Reasoning in the Context of Socioscientific Decision-Making.

    ERIC Educational Resources Information Center

    Sadler, Troy D.; Zeidler, Dana L.

    The purpose of this article is to contribute to a theoretical knowledge base through research by examining factors salient to science education reform and practice in the context of socioscientific issues. The study explores how individuals negotiate and resolve genetic engineering dilemmas. A mixed-methods approach was used to examine patterns of…

  14. Higher Education Research 2000-2010: Changing Journal Publication Patterns

    ERIC Educational Resources Information Center

    Tight, Malcolm

    2012-01-01

    The articles published in 15 specialist academic journals--based in Australasia, Europe and North America--focusing on higher education in the years 2010 (n = 567) and 2000 (n = 388) are analysed. The analysis focuses on: the themes and issues addressed in the articles published, the methods and methodologies used, theoretical engagement, the…

  15. Longitudinal Academic Achievement Outcomes: Modeling the Growth Trajectories of Montessori Elementary Public School Students

    ERIC Educational Resources Information Center

    Mallett, Jan Davis

    2014-01-01

    Elementary education has theoretical underpinnings based on cognitive psychology. Ideas from cognitive psychologists such as James, Dewey, Piaget, and Vygotsky coalesce to form constructivism (Cooper, 1993; Yager, 2000; Yilmaz, 2011). Among others, the Montessori Method (1912/1964) is an exemplar of constructivism. Currently, public education in…

  16. Demonstrable Competence: An Assessment Method for Competency Domains in Learning and Leadership Doctoral Program

    ERIC Educational Resources Information Center

    Rausch, David W.; Crawford, Elizabeth K.

    2013-01-01

    Through this paper, we describe how a doctoral program in Learning and Leadership combines the best of both worlds from theory based programs and applied programs. Participants work from their embedded professional practice underpinned with the theoretical constructs of the program's seven foundational competency domains. Competencies are…

  17. Bringing Nature into Social Work Settings: Mother Earth's Presence

    ERIC Educational Resources Information Center

    Gana, Carolina

    2011-01-01

    In an urban location in the downtown core of Toronto, Ontario, the author provides both individual and group counselling to women impacted by trauma in a community-based setting. Various modalities and theoretical frameworks that include feminism and anti-oppressive methods inform her counselling practice. The approach that the author takes in the…

  18. Pre-Service Mathematics Teacher Efficacy: Its Nature and Relationship to Teacher Concerns and Orientation

    ERIC Educational Resources Information Center

    Pyper, Jamie Scott

    2014-01-01

    In a mixed method study, teacher efficacy and contributing theoretical constructs of teacher concerns and teacher orientation with Intermediate/Senior mathematics preservice teachers from two Ontario Faculties of Education are examined. Data sources include a web-based questionnaire containing two teacher efficacy scales and short answer…

  19. Interactive Exercises for an Introductory Weather and Climate Course

    ERIC Educational Resources Information Center

    Carbone, Gregory J.; Power, Helen C.

    2005-01-01

    Students learn more from introductory weather and climate courses when they can relate theoretical material to personal experience. The ubiquity of weather should make the link obvious but instructors can foster this connection with a variety of simple methods. Here we describe traditional and web-based techniques that encourage students to…

  20. Preparation and Analysis of Solid Solutions in the Potassium Perchlorate-Permanganate System.

    ERIC Educational Resources Information Center

    Johnson, Garrett K.

    1979-01-01

    Describes an experiment, designed for and tested in an advanced inorganic laboratory methods course for college seniors and graduate students, that prepares and analyzes several samples in the nearly ideal potassium perchlorate-permanganate solid solution series. The results are accounted for by a theoretical treatment based upon aqueous…
